PowerScale OneFS 9.5.0.0 Web Administration Guide
October 2023
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2016 - 2023 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Adding and removing licenses.................................................................................................................................. 39
Activating trial licenses............................................................................................................................................... 41
Certificates.......................................................................................................................................................................... 41
Viewing and Editing TLS Authority Certificates .................................................................................................. 41
Importing TLS Authority Certificates ..................................................................................................................... 41
Replacing TLS Authority Certificates - Overview............................................................................................... 42
Deleting TLS Authority Certificates .......................................................................................................................42
Viewing and Editing TLS Server Certificates ...................................................................................................... 42
Importing TLS Server Certificates ......................................................................................................................... 42
Configuring TLS Certificate Settings .................................................................................................................... 43
TLS certificate data example....................................................................................................................................43
Cluster identity...................................................................................................................................................................43
Set the cluster name and contact information.................................................................................................... 44
Cluster date and time....................................................................................................................................................... 44
Set the cluster date and time...................................................................................................................................44
Specify an NTP time server...................................................................................................................................... 44
SMTP email settings.........................................................................................................................................................45
Configure SMTP email settings............................................................................................................................... 45
Configuring the cluster join mode................................................................................................................................. 45
Specify the cluster join mode................................................................................................................................... 46
File system settings.......................................................................................................................................................... 46
Enable or disable access time tracking.................................................................................................................. 46
Specify the cluster character encoding.................................................................................................................46
Security hardening............................................................................................................................................................ 47
Cluster monitoring.............................................................................................................................................................47
Monitor the cluster..................................................................................................................................................... 48
View node status......................................................................................................................................................... 48
Monitoring cluster hardware.......................................................................................................................................... 48
View node hardware status...................................................................................................................................... 49
Chassis and drive states............................................................................................................................................ 49
Check battery status..................................................................................................................................................50
SNMP monitoring.........................................................................................................................................................51
Events and alerts...............................................................................................................................................................53
Events overview.......................................................................................................................................................... 53
Alerts overview............................................................................................................................................................ 53
Alert channel overview...............................................................................................................................................53
Event groups overview.............................................................................................................................................. 54
Viewing and modifying event groups......................................................................................................................54
Managing alerts........................................................................................................................................................... 55
Managing channels......................................................................................................................................................57
Managing event thresholds...................................................................................................................................... 59
Maintenance and testing........................................................................................................................................... 59
Cluster maintenance.........................................................................................................................................................60
Replacing node components..................................................................................................................................... 61
Upgrading node components.................................................................................................................................... 61
Automatic Replacement Recognition (ARR) for drives...................................................................................... 61
Managing drive firmware........................................................................................................................................... 62
Managing cluster nodes.............................................................................................................................................65
Upgrading OneFS........................................................................................................................................................ 66
Patching OneFS...........................................................................................................................................................66
SupportAssist .................................................................................................................................................................... 67
SupportAssist Prerequisites......................................................................................................................................67
Obtaining an Access Key and PIN........................................................................................................................... 67
Enabling SupportAssist overview............................................................................................................................ 67
Viewing SupportAssist settings overview............................................................................................................. 68
Configuring SupportAssist overview...................................................................................................................... 69
SRS Summary.....................................................................................................................................................................70
SRS Telemetry............................................................................................................................................................. 70
Obtain signed OneFS license file for evaluation clusters...................................................................................70
Configuring and Enabling SRS Overview................................................................................................................71
Diagnostic commands and scripts........................................................................................................................... 72
Enabling SRS Telemetry.............................................................................................................................................73
Disabling SRS Telemetry............................................................................................................................................73
Chapter 5: Authentication........................................................................................................... 82
Authentication overview..................................................................................................................................................82
Authentication provider features.................................................................................................................................. 82
Security Identifier (SID) history overview...................................................................................................................83
Supported authentication providers............................................................................................................................. 83
Active Directory.................................................................................................................................................................83
LDAP.....................................................................................................................................................................................84
NIS........................................................................................................................................................................................ 85
Kerberos authentication.................................................................................................................................................. 85
Keytabs and SPNs overview.....................................................................................................................................86
MIT Kerberos protocol support................................................................................................................................86
File provider........................................................................................................................................................................ 86
Local provider.....................................................................................................................................................................87
Multifactor authentication (MFA)................................................................................................................................. 87
Single sign-on overview................................................................................................................................................... 87
Multi-instance active directory......................................................................................................................................88
LDAP public keys...............................................................................................................................................................88
Managing Active Directory providers...........................................................................................................................88
Configure an Active Directory provider................................................................................................................. 89
Modify an Active Directory provider.......................................................................................................................89
Delete an Active Directory provider....................................................................................................................... 90
Active Directory provider settings.......................................................................................................................... 90
Managing LDAP providers............................................................................................................................................... 91
Configure an LDAP provider......................................................................................................................................91
Modify an LDAP provider.......................................................................................................................................... 92
Delete an LDAP provider........................................................................................................................................... 92
LDAP query settings................................................................................................................................................... 92
LDAP advanced settings............................................................................................................................................93
Managing NIS providers.................................................................................................................................................. 94
Configure an NIS provider.........................................................................................................................................94
Modify an NIS provider.............................................................................................................................................. 95
Delete an NIS provider............................................................................................................................................... 95
Managing MIT Kerberos authentication...................................................................................................................... 95
Managing MIT Kerberos realms............................................................................................................................... 95
Managing MIT Kerberos providers.......................................................................................................................... 97
Managing MIT Kerberos domains............................................................................................................................ 99
Managing file providers..................................................................................................................................................100
Configure a file provider........................................................................................................................................... 101
Generate a password file.......................................................................................................................................... 101
Password file format.................................................................................................................................................102
Group file format........................................................................................................................................................103
Netgroup file format................................................................................................................................................. 103
Modify a file provider................................................................................................................................................ 103
Delete a file provider................................................................................................................................................. 104
Managing local users and groups.................................................................................................................................104
View a list of users or groups by provider........................................................................................................... 104
Create a local user.....................................................................................................................................................104
Create a local group.................................................................................................................................................. 105
Naming rules for local users and groups.............................................................................................................. 106
Modify a local user.................................................................................................................................................... 106
Modify a local group..................................................................................................................................................106
Delete a local user......................................................................................................................................................107
Delete a local group................................................................................................................................................... 107
Configure a login delay............................................................................................................................................. 107
Set a concurrent session limit.................................................................................................................................107
Set a new user account to disable when inactive............................................................................................. 108
Set an existing user account to disable when inactive.....................................................................................108
Configure minimum password requirements.......................................................................................................108
Configure the password hash type....................................................................................................................... 108
Set password expiration...........................................................................................................................................108
Set minimum new password changes...................................................................................................................108
Set lock users criteria............................................................................................................................................... 109
Reset a user password............................................................................................................................................. 109
Managing SSO ................................................................................................................................................................ 109
Configure the Identity Provider to communicate with OneFS ...................................................................... 109
Configure SSO in OneFS ......................................................................................................................................... 110
Enable and test SSO...................................................................................................................................................111
Chapter 8: Home directories...................................................................................................... 148
Home directories overview............................................................................................................................................148
Home directory permissions..........................................................................................................................................148
Authenticating SMB users.............................................................................................................................................148
Home directory creation through SMB...................................................................................................................... 149
Create home directories with expansion variables............................................................................................ 149
Create home directories with the --inheritable-path-acl option....................................................................150
Create special home directories with the SMB share %U variable................................................................151
Home directory creation through SSH and FTP.......................................................................................................151
Set the SSH or FTP login shell ...............................................................................................................................151
Set SSH/FTP home directory permissions..........................................................................................................152
Set SSH/FTP home directory creation options................................................................................................. 152
Provision home directories with dot files.............................................................................................................153
Home directory creation in a mixed environment....................................................................................................154
Interactions between ACLs and mode bits................................................................................................................154
Default home directory settings in authentication providers................................................................................154
Supported expansion variables.....................................................................................................................................155
Domain variables in home directory provisioning.....................................................................................................156
Managing SMB shares.............................................................................................................................................. 179
NFS security..................................................................................................................................................................... 183
NFS exports................................................................................................................................................................ 184
NFS aliases.................................................................................................................................................................. 184
NFS log files................................................................................................................................................................ 184
Managing the NFS service...................................................................................................................................... 185
Managing NFS exports............................................................................................................................................. 186
Managing NFS aliases...............................................................................................................................................192
FTP......................................................................................................................................................................................193
Enable and configure FTP file sharing.................................................................................................................. 193
HTTP and HTTPS security............................................................................................................................................ 194
Enable and configure HTTP.................................................................................................................................... 194
View the rate of delivery of protocol audit events to the CEE server......................................................... 212
SnapshotIQ settings................................................................................................................................................. 230
Set the snapshot reserve.............................................................................................................................................. 231
Managing changelists.....................................................................................................................................................232
Create a changelist................................................................................................................................................... 232
Delete a changelist....................................................................................................................................................232
View a changelist...................................................................................................................................................... 232
Changelist information............................................................................................................................................. 232
Restrict SyncIQ source nodes..................................................................................................................................... 249
Creating replication policies......................................................................................................................................... 249
Excluding directories in replication....................................................................................................................... 249
Excluding files in replication................................................................................................................................... 250
File criteria options.................................................................................................................................................... 251
Configure default replication policy settings...................................................................................................... 252
Create a replication policy.......................................................................................................................................253
Assess a replication policy.......................................................................................................................................257
Set RPO alerts for SyncIQ policy.......................................................................................................................... 258
Managing replication to remote clusters...................................................................................................................258
Start a replication job...............................................................................................................................................258
Pause a replication job............................................................................................................................................. 258
Resume a replication job......................................................................................................................................... 259
Cancel a replication job............................................................................................................................................259
View active replication jobs.................................................................................................................................... 259
Replication job information..................................................................................................................................... 259
Initiating data failover and failback with SyncIQ..................................................................................................... 260
Fail over data to a secondary cluster................................................................................................................... 260
Revert a failover operation..................................................................................................................................... 260
Fail back data to a primary cluster........................................................................................................................ 261
Run the ComplianceStoreDelete job in a SmartLock compliance mode domain..........................261
Performing disaster recovery for older SmartLock directories........................................................................... 262
Recover SmartLock compliance directories on a target cluster................................................................... 262
Migrate SmartLock compliance directories........................................................................................................ 263
Managing replication policies....................................................................................................................................... 263
Modify a replication policy...................................................................................................................................... 263
Delete a replication policy....................................................................................................................................... 264
Enable or disable a replication policy....................................................................................................................264
View replication policies.......................................................................................................................................... 264
Replication policy information................................................................................................................................ 265
Replication policy settings...................................................................................................................................... 265
Managing replication to the local cluster.................................................................................................................. 267
Cancel replication to the local cluster.................................................................................................................. 267
Break local target association................................................................................................................................ 267
View replication policies targeting the local cluster..........................................................................................267
Remote replication policy information..................................................................................................................268
Managing replication performance rules................................................................................................................... 268
Create a network traffic rule..................................................................................................................................268
Create a file operations rule................................................................................................................................... 268
Modify a performance rule..................................................................................................................................... 269
Delete a performance rule...................................................................................................................................... 269
Enable or disable a performance rule................................................................................................................... 269
View performance rules...........................................................................................................................................269
Managing replication reports........................................................................................................................................270
Configure default replication report settings..................................................................................................... 270
Delete replication reports........................................................................................................................................270
View replication reports...........................................................................................................................................270
Replication report information................................................................................................................................ 271
Managing failed replication jobs................................................................................................................................... 271
Resolve a replication policy.....................................................................................................................................272
Reset a replication policy.........................................................................................................................................272
Perform a full or differential replication...............................................................................................................272
Support for NDMP sessions on Generation 6 hardware....................................................................................... 292
Setting preferred IPs for NDMP three-way operations........................................................................................ 292
NDMP multi-stream backup and recovery............................................................................................................... 293
Snapshot-based incremental backups....................................................................................................................... 293
NDMP backup and restore of SmartLink files......................................................................................................... 294
NDMP protocol support................................................................................................................................................ 295
Supported DMAs.............................................................................................................................................................295
NDMP hardware support.............................................................................................................................................. 295
NDMP backup limitations..............................................................................................................................................296
NDMP performance recommendations..................................................................................................................... 296
Excluding files and directories from NDMP backups............................................................................................. 297
Configuring basic NDMP backup settings................................................................................................................ 298
Configure and enable NDMP backup................................................................................................................... 298
View NDMP backup settings..................................................................................................................................298
Disable NDMP backup............................................................................................................................................. 298
Managing NDMP user accounts..................................................................................................................................298
Create an NDMP administrator account............................................................................................................. 298
View NDMP user accounts.....................................................................................................................................299
Modify the password of an NDMP administrator account............................................................................. 299
Delete an NDMP administrator account..............................................................................................................299
NDMP environment variables overview.................................................................................................................... 299
Managing NDMP environment variables............................................................................................................. 299
NDMP environment variable settings.................................................................................................................. 300
Add an NDMP environment variable.................................................................................................................... 300
View NDMP environment variables....................................................................................................................... 301
Edit an NDMP environment variable..................................................................................................................... 301
Delete an NDMP environment variable................................................................................................................ 301
NDMP environment variables................................................................................................................................. 301
Setting environment variables for backup and restore operations...............................................................305
Managing NDMP contexts........................................................................................................................................... 306
NDMP context settings...........................................................................................................................................306
View NDMP contexts...............................................................................................................................................306
Delete an NDMP context........................................................................................................................................ 306
Managing NDMP sessions............................................................................................................................................ 307
NDMP session information..................................................................................................................................... 307
View NDMP sessions............................................................................................................................................... 309
Abort an NDMP session.......................................................................................................................................... 309
Managing NDMP Fibre Channel ports....................................................................................................................... 309
NDMP backup port settings...................................................................................................................................309
Enable or disable an NDMP backup port............................................................................................................. 310
View NDMP backup ports....................................................................................................................................... 310
Modify NDMP backup port settings..................................................................................................................... 310
Managing NDMP preferred IP settings.......................................................................................................................311
Create an NDMP preferred IP setting...................................................................................................................311
Modify an NDMP preferred IP setting...................................................................................................................311
List NDMP preferred IP settings............................................................................................................................ 311
View NDMP preferred IP settings.......................................................................................................................... 311
Delete NDMP preferred IP settings...................................................................................................................... 312
Managing NDMP backup devices................................................................................................................................ 312
NDMP backup device settings............................................................................................................................... 312
Detect NDMP backup devices................................................................................................................................313
View NDMP backup devices................................................................................................................................... 313
Modify the name of an NDMP backup device.................................................................................................... 313
Delete an entry for an NDMP backup device..................................................................................................... 313
NDMP dumpdates file overview...................................................................................................................................314
Managing the NDMP dumpdates file.................................................................................................................... 314
NDMP dumpdates file settings...............................................................................................................................314
View entries in the NDMP dumpdates file...........................................................................................................314
Delete entries from the NDMP dumpdates file.................................................................................................. 314
NDMP restore operations..............................................................................................................................................315
NDMP parallel restore operation............................................................................................................................315
NDMP serial restore operation............................................................................................................................... 315
Specify an NDMP serial restore operation.............................................................315
Sharing tape drives between clusters........................................................................................................................ 315
Managing snapshot-based incremental backups......................................................316
Enable snapshot-based incremental backups for a directory......................................................................... 316
View snapshots for snapshot-based incremental backups..............................................................................316
Delete snapshots for snapshot-based incremental backups...........................................................................316
Managing cluster performance for NDMP sessions................................................................................................316
Enable NDMP Redirector to manage cluster performance............................................................................. 317
Managing CPU usage for NDMP sessions.................................................................................................................317
Enable NDMP Throttler............................................................................................................................................ 317
View WORM status of a file................................................................................................................................... 326
Quota notification rules..................................................................................................................................................351
Quota reports...................................................................................................................................................................352
Creating quotas............................................................................................................................................................... 352
Create an accounting quota................................................................................................................................... 352
Create an enforcement quota................................................................................................................................353
Managing quotas.............................................................................................................................................................353
Search for quotas..................................................................................................................................................... 354
Manage quotas.......................................................................................................................................................... 354
Export a quota configuration file...........................................................................................................................354
Import a quota configuration file...........................................................................................................................355
Managing quota notifications...................................................................................................................................... 355
Configure default quota notification settings.................................................................................................... 355
Configure custom quota notification rules......................................................................................................... 356
Map an email notification rule for a quota.......................................................................................................... 356
Email quota notification messages............................................................................................................................. 356
Custom email notification template variable descriptions...............................................................................358
Customize email quota notification templates...................................................................................................358
Managing quota reports................................................................................................................................................ 359
Create a quota report schedule.............................................................................................................................359
Generate a quota report..........................................................................................................................................359
Locate a quota report.............................................................................................................................................. 359
Basic quota settings.......................................................................................................................................................360
Advisory limit quota notification rules settings........................................................................................................360
Soft limit quota notification rules settings................................................................................................................ 361
Hard limit quota notification rules settings...............................................................................................................363
Limit notification settings............................................................................................................................................. 363
Quota report settings.................................................................................................................................................... 364
Add node pools to a tier.......................................................................................................................................... 375
Change the name or requested protection of a node pool............................................................................. 375
Create a node class compatibility..........................................................................................................................375
Merge compatible node pools................................................................................................................................ 376
Delete a node class compatibility.......................................................................................................................... 376
Create an SSD compatibility................................................................................................................................... 377
Delete an SSD compatibility....................................................................................................................................377
Managing L3 cache from the web administration interface.................................................................................378
Set L3 cache as the default for node pools........................................................................................................378
Set L3 cache on a specific node pool...................................................................................................................378
Restore SSDs to storage drives for a node pool............................................................................................... 378
Managing tiers................................................................................................................................................................. 379
Create a tier................................................................................................................................................................379
Edit a tier.....................................................................................................................................................................379
Delete a tier................................................................................................................................................................ 379
Creating file pool policies.............................................................................................................................................. 380
Create a file pool policy........................................................................................................................................... 380
File-matching options for file pool policies.......................................................................................................... 381
Valid wildcard characters........................................................................................................................................ 382
SmartPools settings................................................................................................................................................. 382
Managing file pool policies............................................................................................................................................ 385
Configure default file pool protection settings.................................................................................................. 385
Default file pool requested protection settings................................................................................................. 385
Configure default I/O optimization settings.......................................................................................................387
Default file pool I/O optimization settings.......................................................................................................... 387
Modify a file pool policy........................................................................................................................................... 387
Prioritize a file pool policy....................................................................................................................................... 388
Create a file pool policy from a template............................................................................................................ 388
Delete a file pool policy............................................................................................................................................ 388
Monitoring storage pools.............................................................................................................................................. 388
Monitor storage pools.............................................................................................................................................. 389
View subpools health................................................................................................................................................389
View the results of a SmartPools job................................................................................................................... 389
Resume a job.............................................................................................................................................................. 398
Cancel a job................................................................................................................................................................ 398
Update a job............................................................................................................................................................... 399
Modify job type settings..........................................................................................................................................399
Managing impact policies..............................................................................................................................................399
Create an impact policy...........................................................................................................................................400
Copy an impact policy..............................................................................................................................................400
Modify an impact policy.......................................................................................................................................... 400
Delete an impact policy............................................................................................................................................ 401
View impact policy settings.....................................................................................................................................401
Viewing job reports and statistics............................................................................................................................... 401
View statistics for a job in progress..................................................................................................................... 402
View a report for a completed job........................................................................................................................ 402
Managing IP address pools........................................................................................................................................... 424
Create an IP address pool....................................................................................................................................... 424
Modify an IP address pool.......................................................................................................................................424
Delete an IP address pool........................................................................................................................................425
View IP address pool settings................................................................................................................................ 425
Add or remove an IP address range..................................................................................................................... 425
Managing SmartConnect Settings..............................................................................................................................426
Modify a SmartConnect DNS zone...................................................................................................................... 426
Specify a SmartConnect service subnet............................................................................................................. 426
Suspend or resume a node......................................................................................................................................427
Configure IP address allocation............................................................................................................................. 427
Configure a connection balancing policy............................................................................................................. 428
Configure an IP failover policy............................................................................................................................... 429
Configure an IP rebalance policy........................................................................................................................... 429
Managing network interface members......................................................................................................................430
Add or remove a network interface......................................................................................................................430
Configure link aggregation.......................................................................................................................................431
Managing node provisioning rules...............................................................................................................................432
Create a node provisioning rule............................................................................................................................. 432
Modify a node provisioning rule.............................................................................................................................433
Delete a node provisioning rule..............................................................................................................................433
View node provisioning rule settings.................................................................................................................... 433
Managing routing options............................................................................................................................................. 434
Enable or disable source-based routing...............................................................................................................434
Add or remove a static route................................................................................................................................. 434
Managing DNS cache settings.....................................................................................................................................434
Flush the DNS cache................................................................................................................................................434
Modify DNS cache settings....................................................................................................................................435
DNS cache settings.................................................................................................................................................. 435
Managing host-based firewalls....................................................................................................................................435
Modify the OneFS firewall service .......................................................................................................................436
Create a firewall policy............................................................................................................................................ 436
View a firewall policy................................................................................................................................................ 436
Create a firewall rule................................................................................................................................................ 436
View a firewall rule....................................................................................................................................................436
Modify a firewall rule................................................................................................................................................ 437
Delete a firewall rule................................................................................................................................................. 437
Clone a firewall policy...............................................................................................................................................437
Delete a firewall policy............................................................................................................................................. 437
Associate a network subnet or pool to a firewall policy...................................................................................437
Reset the global default firewall policies............................................................................................................. 437
Managing TCP ports...................................................................................................................................................... 438
Add or remove TCP ports....................................................................................................................................... 438
Create an IP address pool with NFSoRDMA capabilities...................................................................................... 440
Modify an existing IP address pool..............................................................................................................................441
View an IP pool in a CAVA server......................................................................................................................... 454
Create an Active Directory authentication provider for the AvVendor access zone............................... 454
Update the role in the access zone...................................................................................................................... 454
Scan CloudPool files in a CAVA server................................................................................................................ 454
Create an antivirus policy..............................................................................................................................................454
Managing ICAP antivirus policies................................................................................................................................ 455
Modify an antivirus policy....................................................................................................................................... 455
Delete an antivirus policy........................................................................................................................................ 455
Enable or disable an antivirus policy.....................................................................................................................456
View antivirus policies..............................................................................................................................................456
Managing antivirus scans..............................................................................................................................................456
Scan a file................................................................................................................................................................... 456
Manually run an ICAP antivirus policy.................................................................................................................. 456
Stop a running antivirus scan.................................................................................................................................456
Managing antivirus threats........................................................................................................................................... 457
Manually quarantine a file........................................................................................................................................457
Rescan a file............................................................................................................................................................... 457
Remove a file from quarantine...............................................................................................................................457
Manually truncate a file........................................................................................................................................... 457
View threats............................................................................................................................................................... 458
Antivirus threat information................................................................................................................................... 458
Managing antivirus reports...........................................................................................................................................458
View antivirus reports.............................................................................................................................................. 458
View antivirus events............................................................................................................................................... 458
1 Introduction to this guide
Topics:
• About this guide
• Scale-out NAS overview
• Where to get help
Node components
As a rack-mountable appliance, a pre-Generation 6 storage node includes the following components in a 2U or 4U rack-
mountable chassis with an LCD front panel: CPUs, RAM, NVRAM, network interfaces, InfiniBand adapters, disk controllers,
and storage media. A PowerScale cluster is made up of three or more nodes, up to 252. The 4U chassis is always used for
Generation 6. There are four nodes in one 4U chassis in Generation 6; therefore, a quarter chassis makes up one node.
When you add a node to a pre-Generation 6 cluster, you increase the aggregate disk, cache, CPU, RAM, and network capacity.
OneFS groups RAM into a single coherent cache so that a data request on a node benefits from data that is cached anywhere.
NVRAM is grouped to write data with high throughput and to protect write operations from power failures. As the cluster
expands, spindles and CPU combine to increase throughput, capacity, and input-output operations per second (IOPS). The
minimum cluster for Generation 6 is four nodes and Generation 6 does not use NVRAM. Journals are stored in RAM and M.2
flash is used for a backup in case of node failure.
The PowerScale F200 and F600 nodes are 1U models that require a minimum cluster size of three nodes. PowerScale F900
nodes are 2U models that require a minimum cluster size of three nodes. Clusters can be expanded to a maximum of 252 nodes
in single node increments.
There are several types of nodes, all of which can be added to a cluster to balance capacity and performance with throughput or
IOPS:
Node Function
A-Series Performance Accelerator Independent scaling for high performance
A-Series Backup Accelerator High-speed and scalable backup-and-restore solution for tape
drives over Fibre Channel connections
PowerScale cluster
OneFS and APEX File Storage Services administrators perform cluster management tasks.
A PowerScale cluster consists of three or more hardware nodes, up to 252. Each node runs the PowerScale OneFS operating
system, the distributed file-system software that unites the nodes into a cluster. The storage capacity of a cluster ranges from a
minimum of 11 TB raw with three PowerScale F200 nodes to more than 50 PB.
Cluster administration
OneFS centralizes cluster management through a web administration interface and a command-line interface. Both interfaces
provide methods to activate licenses, check the status of nodes, configure the cluster, upgrade the system, generate alerts,
view client connections, track performance, and change various settings.
In addition, OneFS simplifies administration by automating maintenance with a Job Engine. OneFS and APEX File Storage
Services administrators can schedule jobs that scan for viruses, inspect disks for errors, reclaim disk space, and check the
integrity of the file system. The engine manages the jobs to minimize impact on the performance of the cluster.
OneFS and APEX File Storage Services administrators can monitor hardware components, CPU usage, switches, and network
interfaces remotely using SNMP versions 2c and 3. Dell Technologies PowerScale supplies management information bases
(MIBs) and traps for the OneFS operating system.
OneFS also includes an application programming interface (API) that is divided into two functional areas: One area enables
cluster configuration, management, and monitoring functionality, and the other area enables operations on files and directories
on the cluster. You can send requests to the OneFS API through a Representational State Transfer (REST) interface, which
is accessed through resource URIs and standard HTTP methods. The API integrates with OneFS role-based access control
(RBAC) to increase security. See the PowerScale API Reference.
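For example, a read-only query of the cluster configuration through the REST interface might look like the following sketch, in which the node address, credentials, and the /platform/1/cluster/config resource URI are assumptions based on typical OneFS API usage:
curl -k -u <username>:<password> https://<yourNodeIPaddress>:8080/platform/1/cluster/config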
Quorum
A PowerScale cluster must have a quorum to work correctly. A quorum prevents data conflicts—for example, conflicting
versions of the same file—in case two groups of nodes become unsynchronized. If a cluster loses its quorum for read and write
requests, you cannot access the OneFS file system.
For a quorum, more than half the nodes must be available over the internal network. A seven-node cluster, for example, requires
a four-node quorum. A 10-node cluster requires a six-node quorum. If a node is unreachable over the internal network, OneFS
separates the node from the cluster, an action referred to as splitting. After a cluster is split, cluster operations continue as long
as enough nodes remain connected to have a quorum.
In a split cluster, the nodes that remain in the cluster are referred to as the majority group. Nodes that are split from the cluster
are referred to as the minority group.
When split nodes can reconnect with the cluster and re-synchronize with the other nodes, the nodes rejoin the cluster's majority
group, an action referred to as merging.
A OneFS cluster contains two quorum properties:
● read quorum (efs.gmp.has_quorum)
● write quorum (efs.gmp.has_super_block_quorum)
You can check both properties from the command line with sysctl; a value of 1 indicates that the cluster holds that quorum:
sysctl efs.gmp.has_quorum
efs.gmp.has_quorum: 1
sysctl efs.gmp.has_super_block_quorum
efs.gmp.has_super_block_quorum: 1
The degraded states of nodes, such as smartfail, read-only, or offline, affect quorum in different ways. A node in a smartfail or
read-only state affects only write quorum. A node in an offline state, however, affects both read and write quorum. In a cluster,
the combination of nodes in different degraded states determines whether read requests, write requests, or both work.
A cluster can lose write quorum but keep read quorum. Consider a four-node cluster in which nodes 1 and 2 are working
normally. Node 3 is in a read-only state, and node 4 is in a smartfail state. In such a case, read requests to the cluster succeed.
Write requests, however, receive an input-output error because the states of nodes 3 and 4 break the write quorum.
A cluster can also lose both its read and write quorum. If nodes 3 and 4 in a four-node cluster are in an offline state, both write
requests and read requests receive an input-output error, and you cannot access the file system. When OneFS can reconnect
with the nodes, OneFS merges them back into the cluster. Unlike a RAID system, a PowerScale node can rejoin the cluster
without being rebuilt and reconfigured.
Storage pools
Storage pools segment nodes and files into logical divisions to simplify the management and storage of data.
A storage pool comprises node pools and tiers. Node pools group equivalent nodes to protect data and ensure reliability. Tiers
combine node pools to optimize storage by need, such as a frequently used high-speed tier or a rarely accessed archive.
The SmartPools module groups nodes and files into pools. If you do not activate a SmartPools license, the module provisions
node pools and creates one file pool. If you activate the SmartPools license, you receive more features. You can, for example,
create multiple file pools and govern them with policies. The policies move files, directories, and file pools among node pools or
tiers. You can also define how OneFS handles write operations when a node pool or tier is full. SmartPools reserves a virtual hot
spare to reprotect data if a drive fails regardless of whether the SmartPools license is activated.
Where <protocol> is one of smb, nfs, hdfs, ftp, http, https, or s3.
SMB The Server Message Block (SMB) protocol enables Windows users to access the cluster. OneFS works with SMB 1, SMB 2, SMB 2.1, and SMB 3.0 (Multichannel only). With SMB 2.1, OneFS supports client opportunity locks (Oplocks) and large (1 MB) MTU sizes.
NFS The Network File System (NFS) protocol enables UNIX, Linux, and Mac OS X systems to remotely mount
any subdirectory, including subdirectories created by Windows users. OneFS works with NFS versions 3
and 4.
HDFS The Hadoop Distributed File System (HDFS) protocol enables a cluster to work with Apache Hadoop, a
framework for data-intensive distributed applications. OneFS supports Ranger ACL. OneFS 9.3.0.0 and
later adds support for HDFS ACL. HDFS integration requires that you activate a separate license.
FTP FTP allows systems with an FTP client to connect to the cluster and exchange files.
HTTP and HTTPS HTTP and its secure variant, HTTPS, give systems browser-based access to resources. OneFS includes
limited support for WebDAV.
S3 The S3-on-OneFS technology enables using the Amazon Web Services Simple Storage Service (AWS S3)
protocol with OneFS. S3 support on OneFS enables storing data in the form of objects on top of the
OneFS file system storage.
You can manage users with different identity management systems; OneFS maps the accounts so that Windows and UNIX
identities can co-exist. A Windows user account managed in Active Directory, for example, is mapped to a corresponding UNIX
account in NIS or LDAP.
To control access, a PowerScale cluster works with both the access control lists (ACLs) of Windows systems and the POSIX
mode bits of UNIX systems. When OneFS must transform file permissions from ACLs to mode bits or from mode bits to ACLs,
OneFS merges the permissions to maintain consistent security settings.
OneFS presents protocol-specific views of permissions so that NFS exports display mode bits and SMB shares show ACLs. You
can, however, manage not only mode bits but also ACLs with standard UNIX tools, such as the chmod and chown commands.
ACL policies also enable you to configure how OneFS manages permissions for networks that mix Windows and UNIX systems.
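For example, from an NFS client, standard UNIX tools can adjust ownership and mode bits on cluster files; the mount point and file path below are hypothetical:
chown analyst:engineering /mnt/cluster/projects/report.txt
chmod 640 /mnt/cluster/projects/report.txt
OneFS then presents the corresponding ACL view of those permissions to SMB clients.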
Data layout
OneFS evenly distributes data among a cluster's nodes with layout algorithms that maximize storage efficiency and
performance. The system continuously reallocates data to conserve space.
OneFS breaks data down into smaller sections called blocks, and then the system places the blocks in a stripe unit. By
referencing either file data or erasure codes, a stripe unit helps safeguard a file from a hardware failure. The size of a stripe unit
depends on the file size, the number of nodes, and the protection setting. After OneFS divides the data into stripe units, OneFS
allocates, or stripes, the stripe units across nodes in the cluster.
When a client connects to a node, the client's read and write operations take place on multiple nodes. For example, when a
client connects to a node and requests a file, the node retrieves the data from multiple nodes and rebuilds the file. You can
optimize how OneFS lays out data to match your dominant access pattern—concurrent, streaming, or random.
Writing files
On a node, the input-output operations of the OneFS software stack split into two functional layers: A top layer, or initiator, and
a bottom layer, or participant. In read and write operations, the initiator and the participant play different roles.
When a client writes a file to a node, the initiator on the node manages the layout of the file on the cluster. First, the initiator
divides the file into blocks of 8 KB each. Second, the initiator places the blocks in one or more stripe units. At 128 KB, a stripe
unit consists of 16 blocks. Third, the initiator spreads the stripe units across the cluster until they span a width of the cluster,
creating a stripe. The width of the stripe depends on the number of nodes and the protection setting.
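As a worked example that ignores protection overhead, a 1 MB file divides into 128 blocks of 8 KB each; at 16 blocks per stripe unit, the file fills eight 128 KB stripe units, which the initiator then distributes across the cluster.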
After dividing a file into stripe units, the initiator writes the data first to non-volatile random-access memory (NVRAM) and then
to disk. NVRAM retains the information when the power is off.
During the write transaction, NVRAM guards against failed nodes with journaling. If a node fails mid-transaction, the transaction
restarts without the failed node. When the node returns, it replays the journal from NVRAM to finish the transaction. The node
also runs the AutoBalance job to check the file's on-disk striping. Meanwhile, uncommitted writes waiting in the cache are
protected with mirroring. As a result, OneFS eliminates multiple points of failure.
Reading files
In a read operation, a node acts as a manager to gather data from the other nodes and present it to the requesting client.
Because a PowerScale cluster's coherent cache spans all the nodes, OneFS can store different data in each node's RAM. A
node using the internal network can retrieve file data from another node's cache faster than from its own local disk. If a read
operation requests data that is cached on any node, OneFS pulls the cached data to serve it quickly.
Metadata layout
OneFS protects metadata by spreading it across nodes and drives.
Metadata—which includes information about where a file is stored, how it is protected, and who can access it—is stored in
inodes and protected with locks in a B+ tree, a standard structure for organizing data blocks in a file system to provide instant
lookups. OneFS replicates file metadata across the cluster so that there is no single point of failure.
Working together as peers, all the nodes help manage metadata access and locking. If a node detects an error in metadata, the
node looks up the metadata in an alternate location and then corrects the error.
Striping
In a process known as striping, OneFS segments files into units of data and then distributes the units across nodes in a cluster.
Striping protects your data and improves cluster performance.
To distribute a file, OneFS reduces it to blocks of data, arranges the blocks into stripe units, and then allocates the stripe units
to nodes over the internal network.
At the same time, OneFS distributes erasure codes that protect the file. The erasure codes encode the file's data in a
distributed set of symbols, adding space-efficient redundancy. With only a part of the symbol set, OneFS can recover the
original file data.
Taken together, the data and its redundancy form a protection group for a region of file data. OneFS places the protection
groups on different drives on different nodes—creating data stripes.
Because OneFS stripes data across nodes that work together as peers, a user connecting to any node can take advantage of
the entire cluster's performance.
By default, OneFS optimizes striping for concurrent access. If your dominant access pattern is streaming—that is, lower
concurrency, higher single-stream workloads, such as with video—you can change how OneFS lays out data to increase
sequential-read performance. To better handle streaming access, OneFS stripes data across more drives. Streaming is most
effective on clusters or subpools serving large files.
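As a hedged illustration only, in which the isi set syntax and its -a access-pattern option are assumptions that may vary by release, the layout for a directory of large media files might be switched to streaming as follows:
isi set -a streaming /ifs/data/media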
The following software modules help protect data, but you must activate a separate license to use them:
Data compression
OneFS supports inline data compression on Isilon F810 and H5600 nodes, and on PowerScale F200 and F600 nodes.
The F810 node contains a Network Interface Card (NIC) that compresses and decompresses data.
Hardware compression and decompression are performed in parallel across the 40Gb Ethernet interfaces of supported nodes as
clients read and write data to the cluster. This distributed interface model allows compression to scale linearly across the node
pool as supported nodes are added to a cluster.
You can enable inline data compression on the Dell PowerScale F900, F600 NVMe, and F200 SSD nodes; the PowerScale H700/7000 and A300/3000 nodes; and the Isilon F810 and H5600 platforms.
The following table lists the nodes and OneFS release combinations that support inline data compression.
Mixed Clusters
In a mixed cluster environment, data is stored in a compressed form on F810, H5600, F200, and F600 node pools. Data that is
written or tiered to storage pools of other node types is uncompressed when it moves between pools.
Software modules
You can access advanced features by activating licenses for Dell Technologies PowerScale software modules.
SmartLock SmartLock protects critical data from malicious, accidental, or premature alteration or deletion to help you
comply with SEC 17a-4 regulations. You can automatically commit data to a tamper-proof state and then
retain it with a compliance clock.
HDFS OneFS works with the Hadoop Distributed File System protocol to help clients running Apache Hadoop, a
framework for data-intensive distributed applications, analyze big data.
SyncIQ automated failover and failback SyncIQ replicates data on another PowerScale cluster and automates failover and failback between clusters. If a cluster becomes unusable, you can fail over to another PowerScale cluster. Failback restores the original source data after the primary cluster becomes available again.
Security hardening Security hardening is the process of configuring your system to reduce or eliminate as many security risks as possible. You can apply a hardening policy that secures the configuration of OneFS, according to policy guidelines.
SnapshotIQ SnapshotIQ protects data with a snapshot—a logical copy of data that is stored on a cluster. A snapshot
can be restored to its top-level directory.
SmartDedupe You can reduce redundancy on a cluster by running SmartDedupe. Deduplication creates links that can
impact the speed at which you can read from and write to files.
SmartPools SmartPools enables you to create multiple file pools governed by file-pool policies. The policies move files
and directories among node pools or tiers. You can also define how OneFS handles write operations when
a node pool or tier is full.
CloudPools Built on the SmartPools policy framework, CloudPools enables you to archive data to cloud storage,
effectively defining the cloud as another tier of storage. CloudPools supports Dell Technologies
PowerScale, Dell Technologies ECS Appliance, Amazon S3, Amazon C2S, Alibaba Cloud, and Microsoft
Azure as cloud storage providers.
SmartConnect Advanced If you activate a SmartConnect Advanced license, you can balance policies to evenly distribute CPU usage, client connections, or throughput. You can also define IP address pools to support multiple DNS zones in a subnet. SmartConnect also supports IP failover, also known as Dynamic IPs, to support NFS failover. It is recommended that you define a static pool that encompasses all nodes for management purposes. Dynamic IP addresses are configured only on nodes with quorum to ensure client connectivity. Defining a static pool for all nodes avoids administration difficulties for out of quorum nodes that will not have dynamic IP addresses configured for SSH connections.
InsightIQ The InsightIQ virtual appliance monitors and analyzes the performance of your PowerScale cluster to help
you optimize storage resources and forecast capacity.
SmartQuotas The SmartQuotas module tracks disk usage with reports and enforces storage limits with alerts.
S3 OneFS support for the Amazon Web Services Simple Storage Service (AWS S3) protocol enables storing data in the form of objects on top of the OneFS file system storage. Using S3 on OneFS, clients can read data from, and write data to, the PowerScale platform. The data resides under a single namespace. The AWS S3 protocol becomes a primary resident of the OneFS protocol stack, along with NFS, SMB, and HDFS, allowing multiprotocol access to objects and files. The S3 protocol supports bucket and object creation, retrieval, update, and deletion. Object retrievals and updates are atomic. Bucket properties can be updated.
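As a hedged illustration, an S3 client such as the AWS CLI can be pointed at the cluster's S3 endpoint; the bucket name is a placeholder, and HTTPS port 9021 is assumed to be the OneFS default:
aws s3 ls s3://example-bucket --endpoint-url https://<yourNodeIPaddress>:9021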
User interfaces
OneFS provides several interfaces for managing PowerScale clusters.
1. In a browser, connect to a node in the cluster at one of the following URLs, where <yourNodeIPaddress> is the IP address of the node:
IPv4 https://<yourNodeIPaddress>:8080
IPv6 https://[<yourNodeIPaddress>]:8080
If your security certificates have not been configured, the system displays a message. Resolve any certificate configurations,
then continue to the website.
2. Log in to OneFS by typing your OneFS credentials in the Username and Password fields.
After you log in to the web administration interface, there is a 4-hour login timeout.
Software licenses
Your OneFS license and optional software module licenses are contained in the license file on your cluster. Your license file must
match your license record in the Dell Technologies Software Licensing Central (SLC) repository.
Ensure that the license file on your cluster, and your license file in the SLC repository, match your upgraded version of OneFS.
Advanced cluster features are available when you activate licenses for the following OneFS software modules:
● CloudPools
● Security hardening
● HDFS
● PowerScale Swift
● SmartConnect Advanced
● SmartDedupe
● SmartLock
● SmartPools
● SmartQuotas
● SnapshotIQ
● SyncIQ
For more information about optional software modules, contact your Dell Technologies sales representative.
Hardware tiers
Your license file contains information about the PowerScale hardware that is installed in your cluster.
Your license file lists nodes by tiers. Nodes are placed into a tier according to their compute performance level, capacity, and
drive type.
NOTE: Your license file contains line items for every node in your cluster. However, pre-Generation 6 hardware is not
included in the OneFS licensing model.
License status
The status of a OneFS license indicates whether the license file on your cluster reflects your current version of OneFS. The
status of a OneFS module license indicates whether the functionality provided by a module is available on the cluster.
Licenses exist in one of the following states:
Status Description
Unsigned The license has not been updated in Dell Technologies Software Licensing Central (SLC). You must generate and submit an activation file to update your license file with your new version of OneFS.
Inactive The license has not been activated on the cluster. You cannot access the features provided by the corresponding module.
● You can view active alerts that are related to your licenses by clicking Alerts about licenses in the upper corner of the
Cluster Management > Licensing page.
SupportAssist
To allow OneFS to automatically update your license file using SupportAssist, follow these instructions:
1. Enable SupportAssist.
2. Enable remote support.
NOTE: When you first enable SupportAssist, remote support is enabled by default.
SRS
To allow OneFS to automatically update your license file using SRS, follow these instructions:
1. Enable SRS.
2. Enable in-product activation.
NOTE: When you first enable SRS, in-product activation is enabled by default.
ISLN_nnn_date.xml
Certificates
All OneFS API communication, which includes communication through the web administration interface, is over Transport Layer
Security (TLS). You can renew the TLS certificate for the OneFS web administration interface or replace it with a third-party
TLS certificate.
To configure, import, replace, or renew a TLS certificate, you must be logged in as root.
NOTE: OneFS defaults to the best supported version of TLS for each request.
In addition, if you are requesting a third-party CA-issued certificate, you should include additional attributes that are shown in
the following example:
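The kind of attributes typically added to a third-party certificate signing request can be sketched with OpenSSL; the subject values and subject alternative names below are placeholders rather than OneFS requirements, and the -addext option assumes OpenSSL 1.1.1 or later:
openssl req -new -nodes -newkey rsa:2048 -keyout server.key -out server.csr \
  -subj "/C=US/ST=State/L=City/O=Example Org/OU=IT/CN=cluster.example.com" \
  -addext "subjectAltName=DNS:cluster.example.com,DNS:onefs.example.com"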
Cluster identity
You can specify identity attributes for a PowerScale cluster.
Cluster name The cluster name appears on the login page, and it makes the cluster and its nodes more easily
recognizable on your network. Each node in the cluster is identified by the cluster name plus the node
number. For example, the first node in a cluster that is named Images may be named Images-1.
Cluster description The cluster description appears below the cluster name on the login page. The cluster description is useful if your environment has multiple clusters.
Login message The login message appears as a separate box on the login page of the OneFS web administration
interface, or as a line of text under the cluster name in the OneFS command-line interface. The
login message can convey cluster information, login instructions, or warnings that a user should know
before logging into the cluster. Set this information in the Cluster Identity page of the OneFS web
administration interface.
4. Click Submit.
Mode Description
Manual Allows you to manually add a node to the cluster without
requiring authorization.
Secure Requires authorization of every node added to the cluster.
The node must be added through the web administration
interface or through the isi devices -a add -d
<unconfigured_node_serial_no> command in the
command-line interface.
Option Description
Manual Joins can be manually initiated
Secure Joins can be initiated only by the cluster and require authentication
3. Click Submit.
1. Click File system > File system settings > Access time tracking.
2. In the Access time tracking area, click the Enable access time tracking check box to track file access time stamps. This
feature is disabled by default.
3. In the Precision fields, specify how often to update the last-accessed time by typing a numeric value and by selecting a unit
of measure, such as Seconds, Minutes, Hours, Days, Weeks, Months, or Years.
For example, if you configure a Precision setting of one day, the cluster updates the last-accessed time once each day, even
if some files were accessed more often than once during the day.
4. Click Save changes.
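A hedged command-line equivalent, assuming that the efs.bam.atime_enabled and efs.bam.atime_grace_period sysctls and the isi_sysctl_cluster utility apply to your release (the grace period is in milliseconds, so one day is 86400000):
isi_sysctl_cluster efs.bam.atime_enabled=1
isi_sysctl_cluster efs.bam.atime_grace_period=86400000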
Security hardening
Security hardening is the process of configuring a system to reduce or eliminate security risks. The OneFS Security Hardening
Module is primarily for use by United States federal government accounts.
OneFS is secure in its default configuration. The United States federal government requires configurations and limitations that
are more strict than the default.
The Security Hardening Module provides a hardening profile that you can apply to a OneFS cluster. A hardening profile is a
collection of rules that changes the cluster configuration so that the cluster complies with strict security rules.
The predefined STIG hardening profile is designed to enforce security principles that are defined in the United States
Department of Defense (DoD) Security Requirements Guides (SRGs) and Security Technical Implementation Guides (STIGs).
The STIG hardening profile applies controls to the OneFS cluster that reduce security vulnerabilities and attack surfaces.
For more about the STIG profile and how to apply it, see the "United States Federal and DoD Standards and Compliance"
chapter in the PowerScale OneFS Security Configuration Guide. That chapter also includes instructions for running periodic
compliance reports after applying the profile.
The Security Hardening Module is a separately licensed OneFS module. For licensing information, see the Licensing section in the "General Cluster Administration" chapter of this guide.
Cluster monitoring
You can monitor the health, performance, and status of the PowerScale cluster.
Using the OneFS dashboard from the web administration interface, you can monitor the status and health of the OneFS system.
Information is available for individual nodes, including node-specific network traffic, internal and external network interfaces, and
details about node pools, tiers, and overall cluster health. You can monitor the following areas of the PowerScale cluster health
and performance:
Node status Health and performance statistics for each node in the cluster, including hard disk drive (HDD) and
solid-state drive (SSD) usage.
Client connections Number of clients connected per node.
New events List of event notifications generated by system events, including the severity, unique instance ID, start
time, alert message, and scope of the event.
Cluster size Current view: Used and available HDD and SSD space and space reserved for the virtual hot spare
(VHS).
Historical view: Total used space and cluster size for a one-year period.
Cluster throughput (file system) Current view: Average inbound and outbound traffic volume passing through the nodes in the cluster for the past hour.
Historical view: Average inbound and outbound traffic volume passing through the nodes in the cluster for the past two weeks.
CPU usage Current view: Average system, user, and total percentages of CPU usage for the past hour.
Historical view: CPU usage for the past two weeks.
You can hide or show a plot by clicking System, User, or Total in the chart legend. To view maximum usage, next to
Show, select Maximum.
ERASE The drive is ready for removal but needs your attention because the data has not been erased. You can erase the drive manually to guarantee that data is removed. This state is reported in the command-line interface only.
NOTE: In the web administration interface, this state is included in Not available.
The location where you send traps is specified in the isi event channels command. Event notification rules specify which event types are sent to those locations. By default, both SNMP version 2c and SNMP version 3 are turned off in OneFS. You must turn on the version that you use. SNMP version 3 is recommended over SNMP version 2c, as version 2c is considered less secure.
OneFS does not support SNMP version 1. Although the command isi snmp settings modify includes the option
--snmp-v1-v2-access, OneFS monitors only through SNMP version 2c.
You can configure settings for SNMP version 3 alone or for both SNMP version 2c and version 3.
Elements in an SNMP hierarchy are arranged in a tree structure, similar to a directory tree. As with directories, identifiers move
from general to specific as the string progresses from left to right. Unlike a file hierarchy, however, each element is not only
named, but also numbered.
For example, the SNMP entity
iso.org.dod.internet.private.enterprises.powerscale.cluster.clusterStatus.clusterName.0 maps
to .1.3.6.1.4.1.12124.1.1.1.0. The element 12124 refers to the OneFS SNMP namespace. Anything further to the
right of that number is related to OneFS-specific monitoring.
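For example, querying that OID with a standard Net-SNMP tool returns the cluster name; the node hostname is a placeholder, and the community string assumes the default value shown later in this chapter:
snmpget -v2c -c 'I$ilonpublic' isilon .1.3.6.1.4.1.12124.1.1.1.0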
Management Information Base (MIB) documents define human-readable names for managed objects and specify their datatypes
and other properties. You can download MIBs that are created for SNMP-monitoring of a PowerScale cluster from the OneFS
web administration interface or manage them using the command-line interface (CLI). MIBs are stored in /usr/share/snmp/
mibs/ on a OneFS node. The OneFS ISILON-MIBs serve two purposes:
● Augment the information available in standard MIBs.
● Provide OneFS-specific information that is unavailable in standard MIBs.
ISILON-MIB is a registered enterprise MIB. PowerScale clusters have two separate MIBs:
ISILON-MIB Defines a group of SNMP agents, called OneFS Statistics Snapshot agents, that respond to queries from a network monitoring system (NMS). These agents snapshot the state of the OneFS file system at the time they receive a request and report this information back to the NMS.
ISILON-TRAP-MIB Generates SNMP traps to send to an SNMP monitoring station when relevant circumstances occur that are defined in the trap protocol data units (PDUs).
The OneFS MIB files map the OneFS-specific object IDs with descriptions. Download or copy MIB files to a directory where your
SNMP tool can find them, such as /usr/share/snmp/mibs/.
To enable Net-SNMP tools to read the MIBs to provide automatic name-to-OID mapping, add -m All to the command, as in the following example:
snmpwalk -m /usr/local/share/snmp/mibs/ISILON-MIB.txt:/usr/share/snmp/mibs/ISILON-TRAP-MIB.txt:/usr/share/snmp/mibs/ONEFS-TRAP-MIB.txt -v2c -C c -c public isilon
NOTE: The previous example is run from the snmpwalk command on a cluster. Your SNMP version may require different arguments.
During SNMPv2c configuration, you must set the community string using a command similar to the following. You cannot enable SNMPv2c unless the community string has been changed from the default.
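A hedged sketch of such a command, assuming that isi snmp settings modify accepts a --read-only-community option:
isi snmp settings modify --read-only-community='<your_community_string>'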
5. If your protocol is SNMPv2, ensure that the Allow SNMPv2 Access check box is selected. SNMPv2 is selected by default.
6. In the SNMPv2 Read-Only Community Name field, enter the appropriate community name. The default is
I$ilonpublic.
7. To enable SNMPv3, click the Allow SNMPv3 Access check box.
8. Configure SNMP v3 Settings:
a. In the SNMPv3 Read-Only User Name field, type the SNMPv3 security name to change the name of the user with
read-only privileges.
The default read-only user is general.
b. In the SNMPv3 Read-Only Password field, type the new password for the read-only user to set a new SNMPv3
authentication password.
The default password is password. We recommend that you change the password to improve security. The password
must contain at least eight characters and no spaces.
c. Type the new password in the Confirm password field to confirm the new password.
9. In the SNMP Reporting area, enter a cluster description in the Cluster Description field.
10. In the System Contact Email field, enter the contact email address.
11. Click Save Changes.
Events overview
Events are individual occurrences or conditions related to the data workflow, maintenance operations, and hardware
components of your cluster.
Throughout OneFS there are processes that are constantly monitoring and collecting information on cluster operations.
When the status of a component or operation changes, the change is captured as an event and placed into a priority queue at
the kernel level.
Every event has two ID numbers that help to establish the context of the event:
● The event type ID identifies the type of event that has occurred.
● The event instance ID is a unique number that is specific to a particular occurrence of an event type. When an event is
submitted to the kernel queue, an event instance ID is assigned. You can reference the instance ID to determine the exact
time that an event occurred.
You can view individual events. However, you manage events and alerts at the event group level.
Alerts overview
An alert is a message that describes a change that has occurred in an event group.
At any point in time, you can view event groups to track situations occurring on your cluster. You can also create alerts to
proactively notify you when there is a change in an event group. For example, you can generate an alert when a new event is
added to an event group, when an event group is resolved, or when the severity of an event group changes.
You can adjust the thresholds at which certain events raise alerts. For example, by default, OneFS generates an alert when a
disk pool is 95% full. You can adjust that threshold to a lower percentage.
You can configure your cluster to generate alerts only for specific event groups, conditions, severity, or during limited time
periods.
Alerts are delivered through channels. You can configure a channel to determine who will receive the alert and when.
View an event
You can view the details of a specific event.
1. Click Cluster Management > Events and Alerts.
The Event group history tab summarizes the list of all the event groups, and you can customize the list as needed.
● You can filter the data by date range, event group status, and event group severity.
● You can search for relevant event groups by entering the search string in the search box.
2. In the Actions column of the event group that contains the event you want to view, click View event details.
Managing alerts
You can view, create, modify, or delete alerts to determine the information you deliver about event groups.
This action can not be undone. Are you sure you want to delete this alert rule?
5. Click Confirm.
Managing channels
You can view, create, modify, or delete channels to determine how you deliver information about event groups.
Create a channel
You can create and configure new channels to send out alert information.
1. Click Cluster Management > Events and Alerts > Alert Management.
2. In the CELOG alerting area, click the Alert channel tab.
3. In the Alert Channel area, click Create channel.
4. Select the Enable channel check box to enable or disable the channel.
5. In the Channel name field, type the channel name.
6. Select the delivery mechanism for the channel from the Channel type list.
NOTE: Depending on the delivery mechanism you select, different settings appear.
7. If you are creating an SMTP channel, you can configure the following settings:
a. In the Send to field, enter an email address that you want to receive alerts on this channel.
To add another email address to the channel, click Add another email address.
b. To manually configure the SMTP server settings, select the Manually configured SMTP server settings radio button
and configure the following fields.
c. In the Send from field, enter the email address that you want to appear in the from field of the alert email messages.
d. In the Subject field, enter the text that you want to appear on the subject line of the alert email messages.
e. In the SMTP host or relay address field, enter your SMTP host or relay address.
f. In the SMTP relay port field, enter the number of your SMTP relay port.
g. Select the Use SMTP authentication check box to specify a username and password for your SMTP server.
h. For connection security, select either NONE or STARTTLS.
i. From the Notification batch mode list, select whether alerts will be batched together, by severity, or by category.
j. From the Notification email template list, select whether email messages will be created from a default or custom
email template.
If you specify a custom template, enter the location of the template on your cluster in the Custom Template Location
field.
k. In the Allowed nodes field, type the node number of a node in the cluster that is allowed to send alerts through this
channel.
Modify a channel
You can modify a channel that you have created.
1. Click Cluster Management > Events and Alerts > Alert Management.
2. In the CELOG alerting area, click the Alert channel tab.
3. In the Actions column of the channel you want to modify, click Edit channel.
The Edit alert channel window appears.
4. Select the Enable channel check box to enable or disable the channel.
5. Select the delivery mechanism for the channel from the Channel type list.
NOTE: Depending on the delivery mechanism you select, different settings appear.
6. If you are modifying an SMTP channel, you can change the following settings:
a. In the Send to field, enter an email address that you want to receive alerts on this channel.
To add another email address to the channel, click Add another email address.
b. To manually configure the SMTP server settings, select the Manually configured SMTP server settings radio button
and configure the following fields.
c. In the Subject field, enter the text that you want to appear on the subject line of the alert email messages.
d. In the SMTP host or relay address field, enter your SMTP host or relay address.
e. In the SMTP relay port field, enter the number of your SMTP relay port.
f. Select the Use SMTP authentication check box to specify a username and password for your SMTP server.
g. For connection security, select either NONE or STARTTLS.
h. From the Notification batch mode list, select whether alerts will be batched together, by severity, or by category.
i. From the Notification email template list, select whether email messages will be created from a standard or custom
email template.
If you specify a custom template, enter the location of the template on your cluster in the Custom template location
field.
j. In the Allowed nodes field, type the node number of a node in the cluster that is allowed to send alerts through this
channel.
Delete a channel
You can delete channels that you have created.
1. Click Cluster Management > Events and Alerts > Alert Management.
2. In the CELOG alerting area, click the Alert channel tab.
3. In the Alert Channel area, locate the channel you want to delete.
4. In the Actions column of the channel you want to delete, click Edit channel.
The Edit alert channel window appears.
5. Click Delete to confirm the action.
View events with configurable thresholds and adjust the threshold values
You can view the events with configurable alert thresholds and adjust the thresholds.
1. Click Cluster Management > Events and Alerts > Thresholds.
2. In the Actions column of the event that you want to adjust, click Edit thresholds.
3. In the Threshold value column, enter an integer in the range 0-100 for each threshold that you want to adjust.
4. Click Apply changes.
Cluster maintenance
Trained service personnel can replace or upgrade components in PowerScale nodes.
Dell PowerScale Technical Support can assist you with replacing node components or upgrading components to increase
performance.
If you do not specify a node LNN, the command is applied to the entire cluster.
The following example command disables ARR for the node with the LNN of 2:
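A hedged sketch of that command, assuming the isi devices config modify syntax and its --automatic-replacement-recognition and --node-lnn options:
isi devices config modify --automatic-replacement-recognition no --node-lnn 2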
4. To enable ARR for a specific node, you must perform the following steps through the command-line interface (CLI).
a. Establish an SSH connection to any node in the cluster.
b. Run the following command:
If you do not specify a node LNN, the command is applied to the entire cluster.
The following example command enables ARR for the node with the LNN of 2:
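A hedged sketch, under the same syntax assumptions as the disable example above:
isi devices config modify --automatic-replacement-recognition yes --node-lnn 2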
NOTE: It is recommended that you contact PowerScale Technical Support before updating the drive firmware.
1. Go to the Dell EMC Support page that lists all the available versions of the drive support package.
2. Click the latest version of the drive support package and download the file.
3. Open a secure shell (SSH) connection to any node in the cluster and log in.
4. Copy the downloaded file to the /ifs/data/Isilon_Support directory through SCP, FTP, SMB, NFS, or any other
supported data-access protocols.
5. Install the package by running the following command:
isi_dsp_install /ifs/data/Isilon_Support/Drive_Support_<version>.isi
NOTE:
● You must run the isi_dsp_install command to install the drive support package. Do not use the isi upgrade
patches command.
● Running isi_dsp_install installs the drive support package on the entire cluster.
● The installation process takes care of installing all the necessary files from the drive support package followed by the
uninstallation of the package. You do not need to delete the package after its installation or before installing a later
version.
NOTE: Do not restart or power off nodes when the drive firmware is being updated in a cluster or issues might occur.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. To update the drive firmware for all drives in a specific node, run the following command:
isi devices drive firmware update start all --node-lnn <node-number>
Updating the drive firmware of a single drive takes approximately 15 seconds, depending on the drive model.
CAUTION: Wait for all the drives in a node to finish updating before you initiate a firmware update on the
next node.
Where:
LNN                Displays the LNN for the node that contains the drive.
Location           Displays the bay number where the drive is installed.
Firmware           Displays the version number of the firmware currently running on the drive.
Desired firmware   If the drive firmware should be upgraded, displays the version number of the drive firmware that the firmware should be updated to.
Model              Displays the model number of the drive.
NOTE: The isi devices drive firmware list command displays firmware information for the drives in the local
node only. You can display drive firmware information for the entire cluster, not just the local node, by running the
following command:
isi devices drive firmware list --node-lnn all
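The command returns one row per drive with the columns described above. The following output is illustrative only; the LNN, bay, firmware, and model values are hypothetical:
Lnn  Location  Firmware  Desired  Model
1    Bay 1     A204      A250     <drive model>
2    Bay 2     A250      -        <drive model>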
isi config
lnnset 12 73
4. Enter commit.
You might need to reconnect to your SSH session before the new node name is automatically changed.
Alternatively, select a node, and from the Actions column, perform one of the following options:
● Click More > Shut down node to shut down the node.
● Click More > Reboot node to stop and restart the node.
Upgrading OneFS
Upgrading OneFS can be done using either the web interface or the command-line interface and includes a series of tasks that
administrators must perform before, during, and after the upgrade.
There are three options available for upgrading your OneFS cluster: parallel upgrades, rolling upgrades, or simultaneous
upgrades.
For more information about how to plan, prepare, and perform an upgrade on your OneFS cluster, see the PowerScale OneFS
Upgrade Planning and Process Guide.
Patching OneFS
Patches are made available for supported versions of OneFS. Patching OneFS is performed by downloading the latest roll-up
patch (RUP) and installing it on your cluster.
There are three options available for patching a OneFS cluster: parallel patch, rolling patch, or simultaneous patch.
For more information about patching your OneFS cluster, see the PowerScale OneFS Current Patches article.
CELOG                   CELOG sends alerts through the SupportAssist channel to Dell Support.
isi diagnostics gather  The isi diagnostics gather and isi_gather_info commands have a --supportassist option.
License activation      The isi license activation start command uses SupportAssist to connect.
CloudIQ                 Telemetry data is sent using SupportAssist.
HealthCheck             HealthCheck definitions are updated using SupportAssist.
Remote Support          Remote Support uses SupportAssist and the Connectivity Hub to assist customers with their clusters.
SupportAssist is recommended for all clusters that can send telemetry data off-cluster and is a replacement for the legacy
connectivity system, Secure Remote Services (SRS).
OneFS clusters can continue to use SRS and set up new connections using SRS. Administrators are encouraged to install and use
SCG v5.x or later, which supports both SRS and SupportAssist.
NOTE: Clusters using IPv6 must continue using SRS. SupportAssist does not support IPv6.
SupportAssist Prerequisites
To enable SupportAssist, you must meet the following prerequisites:
● OneFS cluster must be running OneFS 9.5.0.0 (or later) and in a committed status.
● The reporting OneFS cluster has a dedicated IPv4 network.
● If upgrading, have your cluster's access key and PIN ready.
● On a new cluster, SupportAssist automatically pulls the hardware key.
● User must belong to a role with ISI_PRIV_REMOTE_SUPPORT read and write access.
● If using Secure Connect Gateway, you must use SCG version 5.x or later.
● If using direct connect, network ports 443 and 8443 must be routed to Dell support.
● SRS is disabled. Enabling SupportAssist automatically disables SRS.
NOTE: On newly installed clusters, a hardware key is automatically used instead of an Access Key and PIN.
8. Select if you want to enable Remote support, CloudIQ telemetry data, and Automatic case creation.
9. Click the Finish Setup button.
9. Select if you want to enable Remote support, CloudIQ telemetry data, and Automatic case creation.
10. Click the Finish Setup button.
4. (Optional) To add information for a secondary contact, click the Add secondary contact link.
Configuring Telemetry
1. To view the telemetry options, in the Cluster Management > General Settings window, click the SupportAssist tab.
2. To enable telemetry data to be sent to CloudIQ, in the Cluster Management > General Settings window, click the Enable
CloudIQ telemetry data button.
SRS Telemetry
SRS Telemetry is enabled when Secure Remote Services is enabled.
SRS Telemetry replaces phone home functionality:
● isi_phone_home was deprecated in OneFS 8.2.1.
● isi_phone_home was disabled in OneFS 8.2.2.
SRS Telemetry gathers configuration data (gconfig), system controls (sysctls), directory paths, and statistics at the cluster
level. SRS Telemetry also gathers API endpoints and statistics at the node level. This data is sent through Secure Remote
Services for use by CloudIQ.
For more information about SRS Telemetry, contact your OneFS support representative.
4. Upload the signed license file to your cluster, as described earlier in this guide.
Configuring SRS
You can configure and enable support for Secure Remote Services (SRS) on a PowerScale cluster in the OneFS web UI.
Prerequisites:
● Clusters running OneFS 8.1.x or later must have SRS Gateway Server 3.x installed and configured.
● Clusters running OneFS 9.0.0.0 with PowerScale F200 or PowerScale F600 nodes must have SRS v3 installed.
● Clusters running OneFS 9.1.0.0 and later must have SRS v3 installed.
● The IP address pools that handle gateway connections must exist in the system and must belong to a subnet under
groupnet0, which is the default system groupnet.
1. Click Cluster Management > General Settings > Remote Support.
2. If the OneFS license is unsigned, click Update license now and follow the instructions in Licensing.
3. SRS must be configured before it can be enabled. To configure SRS, click Configure SRS.
4. In the Primary SRS gateway address field, type an IPv4 address or the name of the primary gateway server.
5. In the Secondary SRS gateway address field, type an IPv4 address or the name of the secondary gateway server.
6. In the Manage subnets and pools section, select the network pools that you want to manage.
7. To send an alert if the cluster loses connection, check the Create an alert if the cluster loses its connection with SRS
checkbox.
8. To save the settings, click Save Configuration.
Enabling SRS
SRS must be configured before it can be enabled.
1. Click Cluster Management > General Settings > Remote Support.
2. Click Enable SRS to connect to the gateway.
The login dialog box opens.
3. Type the User name and Password, and click Enable SRS.
If the User name or Password is incorrect, or if the user is not registered with Dell EMC, an error message is generated. Look
for the u'message section in the error text.
Get cluster data          Collects and uploads information about overall cluster configuration and operations.
Get cluster events        Gets the output of existing critical events and uploads the information.
Get cluster status        Collects and uploads cluster status details.
Get contact info          Extracts contact information and uploads a text file that contains it.
Get contents (var/crash)  Uploads the contents of /var/crash.
Get job status            Collects and uploads details on a job that is being monitored.
data/hr as the base directory for both the zone2 and zone3 access zones. Or if /ifs/data/hr is assigned to zone2, do not
assign /ifs/data/hr/personnel to zone3.
OneFS supports overlapping data between access zones for cases where your workflows require shared data. However, the
added complexity to the access zone configuration might lead to future issues with client access. For the best results from
overlapping data between access zones, it is recommended that the access zones also share the same authentication providers.
Shared providers ensure that users have consistent identity information when accessing the same data through different
access zones.
If you cannot configure the same authentication providers for access zones with shared data, ensure the following:
● Select Active Directory as the authentication provider in each access zone. This causes files to store globally unique SIDs as
the on-disk identity, eliminating the chance of users from different zones gaining access to each other's data.
● Avoid selecting local, LDAP, and NIS as the authentication providers in the access zones. These authentication providers use
UIDs and GIDs, which are not guaranteed to be globally unique. This results in a high probability that users from different
zones will be able to access each other's data.
● Set the on-disk identity to native, or preferably, to SID. When user mappings exist between Active Directory and UNIX users
or if the Services for Unix option is enabled for the Active Directory provider, OneFS stores SIDs as the on-disk
identity instead of UIDs.
Access zone limits
You can follow access zone limits guidelines to help size the workloads on the OneFS system.
If you configure multiple access zones on a PowerScale cluster, limits guidelines are recommended for best system performance.
The limits that are described in the PowerScale OneFS Technical Specifications Guide are recommended for heavy enterprise
workflows on a cluster, treating each access zone as a separate physical server. The Technical Specifications Guide and related
PowerScale documentation are available on Dell Online Support.
Quality of service
You can set upper bounds on quality of service by assigning specific physical resources to each access zone.
Quality of service addresses physical hardware performance characteristics that can be measured, improved, and sometimes
guaranteed. Characteristics that are measured for quality of service include but are not limited to throughput rates, CPU usage,
and disk capacity. When you share physical hardware in a PowerScale cluster across multiple virtual instances, competition
exists for the following services:
● CPU
● Memory
● Network bandwidth
● Disk I/O
● Disk capacity
Access zones do not provide logical quality of service guarantees to these resources, but you can partition these resources
between access zones on a single cluster. The following table describes a few ways to partition resources to improve quality of
service:
Use         Notes
NICs        You can assign specific NICs on specific nodes to an IP address pool that is associated with an access zone. By assigning these NICs, you can determine the nodes and interfaces that are associated with an access zone. This enables the separation of CPU, memory, and network bandwidth.
SmartPools  SmartPools are separated into multiple tiers of high, medium, and low performance. The data written to a SmartPool is written only to the disks in the nodes of that pool. Associating an IP address pool with only the nodes of a single SmartPool enables partitioning of disk I/O resources.
zRBAC enables you to assign roles and a subset of privileges on a per-access-zone basis. Administrative tasks that the
zone-aware privileges cover can be delegated to an administrator of a specific access zone. As a result, you can create a local
administrator who is responsible for a single access zone. A user in the System access zone can affect all other access zones,
and remains a global administrator.
Use the isi auth privileges command to list the available privileges for an access zone:
Where <zone name> is the zone whose privileges you want to list. For example, the following command lists the available
privileges for a zone named zone3:
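The command example is not preserved in this copy. A minimal sketch, assuming the --zone option of isi auth privileges selects the access zone whose privileges are listed, would be:
isi auth privileges --zone=zone3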
Role               Description                                                                                             Privileges
ZoneSecurityAdmin  Allows administration of security configuration aspects that are related to the current access zone.   ISI_PRIV_LOGIN_PAPI, ISI_PRIV_AUTH, ISI_PRIV_ROLE
NOTE: These roles do not have any default users who are automatically assigned to them.
Related concepts
Data Security overview
Managing access zones
Related tasks
Associate an IP address pool with an access zone
2. Click View/Edit next to the access zone that you want to view.
The system displays the View Access Zone Details window.
3. Click Close.
Related concepts
Data Security overview
Managing access zones
OneFS highlights each step as you go. To return to a previous step, click that step in the navigation bar.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click View / Edit.
The Edit role details window appears. The workflow navigation bar Basic settings step is highlighted.
3. Confirm the role name and description and then click Next.
The Members window appears and its workflow step is highlighted.
4. To verify or add role members, click Add Member.
The Search member dialog box appears.
5. Select one of the following options:
● Users
● Groups
● Well-known SIDs
6. If you selected Users or Groups, locate the user or group through one of the following methods:
● Type the Username or Group Name you want to search for in the text field.
● Select the authentication provider you want to search for from the Providers list. Only providers that are currently
configured and enabled on the cluster are listed.
7. Click Search.
A list of users or groups appears in the Search member window.
8. To add a user or group, select a user name, group name, or a well-known SID from the search results to add as members to
the role.
9. Click Select user.
10. From the Current access zone list, select the appropriate zone-level role that you want to modify.
The roles that are associated with that access zone are displayed.
11. Modify the members as needed and then click Next.
The Privileges window appears and its workflow step is highlighted.
12. Modify the privileges as needed by clicking the appropriate permissions in the Permission column.
Permissions are - (no permission), R (read), X (run), and W (write).
a. To assign subprivileges, click the down arrow of the parent privilege to view and assign subprivileges.
b. When you finish, click Next.
The Summary window appears.
13. Review the modified role, and then click Submit.
14. Click Close.
5
Authentication
Topics:
• Authentication overview
• Authentication provider features
• Security Identifier (SID) history overview
• Supported authentication providers
• Active Directory
• LDAP
• NIS
• Kerberos authentication
• File provider
• Local provider
• Multifactor authentication (MFA)
• Single sign-on overview
• Multi-instance active directory
• LDAP public keys
• Managing Active Directory providers
• Managing LDAP providers
• Managing NIS providers
• Managing MIT Kerberos authentication
• Managing file providers
• Managing local users and groups
• Managing SSO
Authentication overview
You can manage authentication settings for your cluster, including authentication providers, Active Directory domains, LDAP,
NIS, and Kerberos authentication, file and local providers, multi-factor authentication, and more.
Feature                                  Description
Authentication                           All authentication providers support cleartext authentication. You can configure some providers to support NTLM or Kerberos authentication also.
Users and groups                         OneFS provides the ability to manage users and groups directly on the cluster.
Netgroups                                Specific to NFS, netgroups restrict access to NFS exports.
UNIX-centric user and group properties   Login shell, home directory, UID, and GID. Missing information is supplemented by configuration templates or additional authentication providers.
Windows-centric user and group properties   NetBIOS domain and SID. Missing information is supplemented by configuration templates.
Related concepts
Authentication overview
Active Directory
Active Directory is a Microsoft implementation of Lightweight Directory Access Protocol (LDAP), Kerberos, and DNS
technologies that can store information about network resources. Active Directory can serve many functions, but the primary
reason for joining the cluster to an Active Directory domain is to perform user and group authentication.
You can join the cluster to an Active Directory (AD) domain by specifying the fully qualified domain name, which can be
resolved to an IPv4 or an IPv6 address, and a user name with join permission. When the cluster joins an AD domain, a single
AD machine account is created. The machine account establishes a trust relationship with the domain and enables the cluster
to authenticate and authorize users in the Active Directory forest. By default, the machine account is named the same as the
cluster. If the cluster name is more than 15 characters long, the name is hashed and displayed after joining the domain.
OneFS supports NTLM and Microsoft Kerberos for authentication of Active Directory domain users. NTLM client credentials
are obtained from the login process and then presented in an encrypted challenge/response format to authenticate. Microsoft
Kerberos client credentials are obtained from a key distribution center (KDC) and then presented when establishing server
connections. For greater security and performance, we recommend that you implement Kerberos, according to Microsoft
guidelines, as the primary authentication protocol for Active Directory.
Each Active Directory provider must be associated with a groupnet. The groupnet is a top-level networking container that
manages hostname resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies
which networking properties the Active Directory provider will use when communicating with external servers. The groupnet
associated with the Active Directory provider cannot be changed. Instead you must delete the Active Directory provider and
create it again with the new groupnet association.
You can add an Active Directory provider to an access zone as an authentication method for clients connecting through
the access zone. OneFS supports multiple instances of Active Directory on a PowerScale cluster; however, you can assign
only one Active Directory provider per access zone. The access zone and the Active Directory provider must reference the
same groupnet. Configure multiple Active Directory instances only to grant access to multiple sets of mutually-untrusted
domains. Otherwise, configure a single Active Directory instance if all domains have a trust relationship. You can discontinue
authentication through an Active Directory provider by removing the provider from associated access zones.
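In addition to the WebUI workflow described later in this chapter, an Active Directory provider can be created from the CLI. The following is a sketch only; it assumes the isi auth ads create syntax, and the domain, account, and groupnet names are placeholder examples rather than values from this guide:
isi auth ads create ad.example.com --user administrator --groupnet groupnet0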
Related concepts
Authentication overview
Managing Active Directory providers
LDAP
The Lightweight Directory Access Protocol (LDAP) is a networking protocol that enables you to define, query, and modify
directory services and resources.
OneFS can authenticate users and groups against an LDAP repository in order to grant them access to the cluster. OneFS
supports Kerberos authentication for an LDAP provider.
The LDAP service supports the following features:
● Users, groups, and netgroups.
● Configurable LDAP schemas. For example, the ldapsam schema allows NTLM authentication over the SMB protocol for users
with Windows-like attributes.
● Simple bind authentication, with and without TLS.
● Redundancy and load balancing across servers with identical directory data.
● Multiple LDAP provider instances for accessing servers with different user data.
● Encrypted passwords.
● IPv4 and IPv6 server URIs.
Each LDAP provider must be associated with a groupnet. The groupnet is a top-level networking container that manages
hostname resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies which
networking properties the LDAP provider will use when communicating with external servers. The groupnet associated with
the LDAP provider cannot be changed. Instead you must delete the LDAP provider and create it again with the new groupnet
association.
You can add an LDAP provider to an access zone as an authentication method for clients connecting through the access
zone. An access zone may include at most one LDAP provider. The access zone and the LDAP provider must reference the
same groupnet. You can discontinue authentication through an LDAP provider by removing the provider from associated access
zones.
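As with Active Directory, an LDAP provider can also be created from the CLI. The following sketch assumes the isi auth ldap create syntax; the provider name, server URI, base DN, and groupnet are placeholder examples:
isi auth ldap create myldap --server-uris=ldap://ldap.example.com --base-dn="dc=example,dc=com" --groupnet=groupnet0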
Related concepts
Authentication overview
Managing LDAP providers
NIS
The Network Information Service (NIS) provides authentication and identity uniformity across local area networks. OneFS
includes an NIS authentication provider that enables you to integrate the cluster with your NIS infrastructure.
NIS, designed by Sun Microsystems, can authenticate users and groups when they access the cluster. The NIS provider exposes
the passwd, group, and netgroup maps from an NIS server. Hostname lookups are also supported. You can specify multiple
servers for redundancy and load balancing.
Each NIS provider must be associated with a groupnet. The groupnet is a top-level networking container that manages
hostname resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies which
networking properties the NIS provider will use when communicating with external servers. The groupnet associated with
the NIS provider cannot be changed. Instead you must delete the NIS provider and create it again with the new groupnet
association.
You can add an NIS provider to an access zone as an authentication method for clients connecting through the access zone. An
access zone may include at most one NIS provider. The access zone and the NIS provider must reference the same groupnet.
You can discontinue authentication through an NIS provider by removing the provider from associated access zones.
NOTE: NIS is different from NIS+, which OneFS does not support.
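An NIS provider can also be created from the CLI. The following sketch assumes the isi auth nis create syntax; the provider name, NIS domain, server, and groupnet are placeholder examples:
isi auth nis create mynis --nis-domain=example.com --servers=nis1.example.com --groupnet=groupnet0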
Related concepts
Authentication overview
Managing NIS providers
Kerberos authentication
Kerberos is a network authentication provider that negotiates encryption tickets for securing a connection. OneFS supports
Microsoft Kerberos and MIT Kerberos authentication providers on a cluster. If you configure an Active Directory provider,
support for Microsoft Kerberos authentication is provided automatically. MIT Kerberos works independently of Active Directory.
For MIT Kerberos authentication, you define an administrative domain, also called a realm. Within this realm, an authentication
server has the authority to authenticate a user, host, or service; the server can resolve to either IPv4 or IPv6 addresses. You
can optionally define a Kerberos domain to allow additional domain extensions to be associated with a realm.
The authentication server in a Kerberos environment is called the Key Distribution Center (KDC) and distributes encrypted
tickets. When a user authenticates with an MIT Kerberos provider within a realm, a cryptographic ticket-granting ticket (TGT) is
created. The TGT enables user access to a service principal name (SPN).
Each MIT Kerberos provider is associated with a groupnet. The groupnet is a top-level networking container that manages
hostname resolution against DNS nameservers. It contains subnets and IP address pools. The groupnet specifies which
networking properties the Kerberos provider uses when it communicates with external servers. The groupnet associated with
the Kerberos provider cannot be changed. Instead, delete the Kerberos provider and create it again with the new groupnet
association.
You can add an MIT Kerberos provider to an access zone as an authentication method for clients connecting through the
access zone. An access zone may include at most one MIT Kerberos provider. The access zone and the Kerberos provider must
reference the same groupnet. You can discontinue authentication through an MIT Kerberos provider by removing the provider
from associated access zones.
NOTE: Do not use the NULL account with Kerberos authentication. Using the NULL account for Kerberos authentication
can cause issues.
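An MIT Kerberos realm and provider can also be created from the CLI. The following is a sketch only; it assumes the isi auth krb5 realm create and isi auth krb5 create syntax, and the realm, KDC, user, and groupnet values are placeholder examples (verify the exact options in the OneFS CLI Command Reference):
isi auth krb5 realm create TEST.EXAMPLE.COM --kdc=kdc.example.com --admin-server=kdc.example.com
isi auth krb5 create TEST.EXAMPLE.COM --user=kadmin_user --groupnet=groupnet0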
Provider type                                               Documentation for configuring maximum lifetimes
Microsoft Kerberos with Active Directory Domain Services    See the following Microsoft documentation:
● Maximum lifetime for service ticket
● Maximum lifetime for user ticket
MIT Kerberos                                                See the MIT Kerberos documentation for configuring the kdc.conf file. The max_life setting in kdc.conf controls the lifetime duration.
Related concepts
Authentication overview
File provider
A file provider enables you to supply an authoritative third-party source of user and group information to a PowerScale
cluster. A third-party source is useful in UNIX and Linux environments that synchronize /etc/passwd, /etc/group, and
/etc/netgroup files across multiple servers.
Standard BSD /etc/spwd.db and /etc/group database files serve as the file provider backing store on a cluster. You
generate the spwd.db file by running the pwd_mkdb command in the OneFS command-line interface (CLI). You can script
updates to the database files.
On a PowerScale cluster, a file provider hashes passwords with libcrypt. For the best security, it is recommended that
you use the Modular Crypt Format in the source /etc/passwd file to determine the hashing algorithm. OneFS supports the
following algorithms for the Modular Crypt Format:
● MD5
● NT-Hash
● SHA-256
● SHA-512
For information about other available password formats, run the man 3 crypt command in the CLI to view the crypt man
pages.
NOTE: The built-in System file provider includes services to list, manage, and authenticate against system accounts such as
root, admin, and nobody. It is recommended that you do not modify the System file provider.
Related concepts
Authentication overview
Local provider
The local provider provides authentication and lookup facilities for user accounts added by an administrator.
Local authentication is useful when Active Directory, LDAP, or NIS directory services are not configured or when a specific user
or application needs access to the cluster. Local groups can include built-in groups and Active Directory groups as members.
In addition to configuring network-based authentication sources, you can manage local users and groups by configuring a local
password policy for each node in the cluster. OneFS settings specify password complexity, password age and re-use, and
password-attempt lockout policies.
Related concepts
Authentication overview
SSO user experience by access zone
OneFS SSO is configured and enabled separately for each access zone.
For each access zone that has SSO enabled, you must configure an IdP. You can use the same or a different IdP for each zone,
but each zone can have only one IdP.
When SSO is enabled on a zone, the Log in with SSO link appears on the OneFS WebUI login screen. When a user clicks that
link, OneFS sends a SAML request to the SSO IdP. One of the following occurs:
● If the user has already logged into the SSO IdP, the IdP returns an authentication token to OneFS. The user gains access to
the OneFS home screen.
● If the user has not logged into the SSO IdP, the user is redirected to the IdP login screen and logs in. If the login is
successful, the IdP returns an authentication token to OneFS. The user gains access to the OneFS home screen.
If the signing certificate required for communicating with the IdP expires, OneFS disables SSO. An authorized administrator can
regenerate an expired certificate on the WebUI, using Access > Authentication providers > SSO > <access-zone> .
Configure an Active Directory provider
You can configure one or more Active Directory providers, each of which must be joined to a separate Active Directory domain.
By default, when you configure an Active Directory provider, it is automatically added to the System access zone.
NOTE: Consider the following information when you configure an Active Directory provider:
● When you join Active Directory from OneFS, cluster time is updated from the Active Directory server, as long as an NTP
server has not been configured for the cluster.
● If you migrate users to a new or different Active Directory domain, you must re-set the ACL domain information after
you configure the new provider. You can use third-party tools such as Microsoft SubInACL.
1. Click Access > Authentication Providers > Active Directory.
2. Click Join a domain.
3. In the Domain Name field, specify the fully qualified Active Directory domain name, which can be resolved to an IPv4 or an
IPv6 address.
The domain name will also be used as the provider name.
4. In the User field, type the username of an account that is authorized to join the Active Directory domain.
5. In the Password field, type the password of the user account.
6. Optional: In the Organizational Unit field, type the name of the organizational unit (OU) to connect to on the Active
Directory server. Specify the OU in the format OuName or OuName1/SubName2.
7. Optional: In the Machine Account field, type the name of the machine account.
NOTE: If you specified an OU to connect to, the domain join will fail if the machine account does not reside in the OU.
8. From the Groupnet list, select the groupnet the authentication provider will reference.
9. Optional: To enable Active Directory authentication for NFS, select Enable Secure NFS.
NOTE: If you enable this setting, OneFS registers NFS service principal names (SPNs) during the domain join.
10. Optional: In the Advanced Active Directory Settings area, configure the advanced settings that you want to use. It is
recommended that you not change any advanced settings without understanding their consequences.
11. Click Join.
Related concepts
Managing Active Directory providers
Related references
Active Directory provider settings
Delete an Active Directory provider
When you delete an Active Directory provider, you disconnect the cluster from the Active Directory domain that is associated
with the provider, disrupting service for users who are accessing it. After you leave an Active Directory domain, users can no
longer access the domain from the cluster.
1. Click Access > Authentication Providers > Active Directory.
2. In the Active Directory Providers table, click Leave for the domain you want to leave.
3. In the confirmation dialog box, click Leave.
Related concepts
Managing Active Directory providers
Setting Description
Services For UNIX Specifies whether to support RFC 2307 attributes for
domain controllers. RFC 2307 is required for Windows UNIX
Integration and Services For UNIX technologies.
Map to primary domain Enables the lookup of unqualified user names in the primary
domain. If this setting is not enabled, the primary domain must
be specified for each authentication operation.
Ignore trusted domains Ignores all trusted domains.
Trusted Domains Specifies trusted domains to include if the Ignore Trusted
Domains setting is enabled.
Domains to Ignore Specifies trusted domains to ignore even if the Ignore
Trusted Domains setting is disabled.
Send notification when domain is unreachable Sends an alert as specified in the global notification rules.
Use enhanced privacy and encryption Encrypts communication to and from the domain controller.
Home Directory Naming Specifies the path to use as a template for naming home
directories. The path must begin with /ifs and can contain
variables, such as %U, that are expanded to generate the
home directory path for the user.
Create home directories on first login Creates a home directory the first time that a user logs in if a
home directory does not already exist for the user.
UNIX Shell Specifies the path to the login shell to use if the Active
Directory server does not provide login-shell information. This
setting applies only to users who access the file system
through SSH.
Query all other providers for UID If no UID is available in the Active Directory, looks up Active
Directory users in all other providers for allocating a UID.
Match users with lowercase If no UID is available in the Active Directory, normalizes Active
Directory user names to lowercase before lookup.
Auto-assign UIDs If no UID is available in the Active Directory, enables UID
allocation for unmapped Active Directory users.
Query all other providers for GID If no GID is available in the Active Directory, looks up Active
Directory groups in all other providers before allocating a GID.
Match groups with lowercase If no GID is available in the Active Directory, normalizes Active
Directory group names to lowercase before lookup.
Auto-assign GIDs If no GID is available in the Active Directory, enables GID
allocation for unmapped Active Directory groups.
Make UID/GID assignments for users and groups in these specific domains    Restricts user and group lookups to the specified domains.
5. Select the Connect to a random server on each request checkbox to connect to an LDAP server at random. If
unselected, OneFS connects to an LDAP server in the order listed in the Server URIs field.
6. In the Base distinguished name (DN) field, type the distinguished name (DN) of the entry at which to start LDAP
searches.
Base DNs can include cn (Common Name), l (Locality), dc (Domain Component), ou (Organizational Unit), or other
components. For example, dc=emc,dc=com is a base DN for emc.com.
7. From the Groupnet list, select the groupnet that the authentication provider will reference.
8. In the Bind DN field, type the distinguished name of the entry at which to bind to the LDAP server.
9. In the Bind DN password field, specify the password to use when binding to the LDAP server.
Use of this password does not require a secure connection; if the connection is not using Transport Layer Security (TLS),
the password is sent in cleartext.
10. Optional: Update the settings in the following sections of the Add an LDAP provider form to meet the needs of your
environment:
Option Description
Default Query Settings Modify the default settings for user, group, and netgroup queries.
User Query Settings Modify the settings for user queries and home directory provisioning.
Group Query Settings Modify the settings for group queries.
Netgroup Query Settings Modify the settings for netgroup queries.
Advanced LDAP Settings Modify the default LDAP attributes that contain user information or to modify LDAP security
settings.
11. Click Add LDAP Provider.
Related concepts
Managing LDAP providers
Related references
LDAP query settings
LDAP advanced settings
Base distinguished name    Specifies the base distinguished name (base DN) of the entry at which to start LDAP searches for user, group, or netgroup objects. Base DNs can include cn (Common Name), l (Locality), dc (Domain Component), ou (Organizational Unit), or other components. For example, dc=emc,dc=com is a base DN for emc.com.
Search scope Specifies the depth from the base DN at which to perform LDAP searches. The following values are valid:
Default Applies the search scope that is defined in the default query settings. This option is
not available for the default query search scope.
Base Searches only the entry at the base DN.
One-level Searches all entries exactly one level below the base DN.
Subtree Searches the base DN and all entries below it.
Children Searches all entries below the base DN, excluding the base DN itself.
Search timeout Specifies the number of seconds after which to stop retrying and fail a search. The default value is 100.
This setting is available only in the default query settings.
Query filter    Specifies the LDAP filter for user, group, or netgroup objects. This setting is not available in the default query settings.
Authenticate users from this LDAP provider    Specifies whether to allow the provider to respond to authentication requests. This setting is available only in the user query settings.
Home directory naming template    Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain variables, such as %U, that are expanded to generate the home directory path for the user. This setting is available only in the user query settings.
Automatically create user home directories on first login    Specifies whether to create a home directory the first time a user logs in, if a home directory does not exist for the user. This setting is available only in the user query settings.
UNIX shell    Specifies the path to the user's login shell, for users who access the file system through SSH. This setting is available only in the user query settings.
Name attribute    Specifies the LDAP attribute that contains UIDs, which are used as login names. The default value is uid.
Common name attribute    Specifies the LDAP attribute that contains common names (CNs). The default value is cn.
Email attribute    Specifies the LDAP attribute that contains email addresses. The default value is mail.
GECOS field attribute    Specifies the LDAP attribute that contains GECOS fields. The default value is gecos.
UID attribute    Specifies the LDAP attribute that contains UID numbers. The default value is uidNumber.
GID attribute    Specifies the LDAP attribute that contains GIDs. The default value is gidNumber.
Home directory attribute    Specifies the LDAP attribute that contains home directories. The default value is homeDirectory.
UNIX shell attribute    Specifies the LDAP attribute that contains UNIX login shells. The default value is loginShell.
Member of attribute    Sets the attribute to be used when searching LDAP for reverse memberships. This LDAP value should be an attribute of the user type posixAccount that describes the groups in which the POSIX user is a member. This setting has no default value.
Netgroup members attribute    Specifies the LDAP attribute that contains netgroup members. The default value is memberNisNetgroup.
Netgroup triple attribute    Specifies the LDAP attribute that contains netgroup triples. The default value is nisNetgroupTriple.
Group members attribute    Specifies the LDAP attribute that contains group members. The default value is memberUid.
Unique group members attribute    Specifies the LDAP attribute that contains unique group members. This attribute is used to determine which groups a user belongs to if the LDAP server is queried by the user's DN instead of the user's name. This setting has no default value.
Alternate security identities attribute    Specifies the name to be used when searching for alternate security identities. This name is used when OneFS tries to resolve a Kerberos principal to a user. This setting has no default value.
UNIX password attribute    Specifies the LDAP attribute that contains UNIX passwords. This setting has no default value.
Windows password attribute    Specifies the LDAP attribute that contains Windows passwords. A commonly used value is ntpasswdhash.
Certificate authority file    Specifies the full path to the root certificates file.
Require secure connection for passwords    Specifies whether to require a Transport Layer Security (TLS) connection.
Ignore TLS errors Continues over a secure connection even if identity checks fail.
Related concepts
Managing NIS providers
Modify an NIS provider
You can modify any setting for an NIS provider except its name. You must specify at least one server for the provider to be
enabled.
1. Click Access > Authentication Providers > NIS.
2. In the NIS Providers table, click View details for the provider whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Click Close.
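The same change can typically be made from the CLI. The following sketch assumes the isi auth nis modify syntax; the provider name and server value are placeholder examples:
isi auth nis modify mynis --servers=nis2.example.com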
Related concepts
Managing NIS providers
Related concepts
Managing MIT Kerberos realms
Managing MIT Kerberos providers
You can create, view, delete, or modify an MIT Kerberos provider. You can also configure the Kerberos provider settings.
Related concepts
Managing MIT Kerberos providers
Related concepts
Creating an MIT Kerberos provider
Create an MIT Kerberos provider and join a realm
You join a realm automatically as you create an MIT Kerberos provider. A realm defines a domain within which the authentication
for a specific user or service takes place.
You must be a member of the SecurityAdmin role to view and access the Create a Kerberos Provider button and perform the
tasks described in this procedure.
1. Click Access > Authentication Providers > Kerberos Provider.
2. Click Create a Kerberos Provider.
3. In the User field, type the name of a user who has permission to create service principal names (SPNs) in the Kerberos realm.
4. In the Password field, type the password for the user.
5. From the Realm list, select the realm that you want to join. The realm must already be configured on the system.
6. From the Groupnet list, select the groupnet the authentication provider will reference.
7. From the Service Principal Name (SPN) Management area, select one of the following options to be used for managing
SPNs:
● Use recommended SPNs
● Manually associate SPNs
If you select this option, type at least one SPN in the format service/principal@realm to manually associate it
with the realm.
Related concepts
Creating an MIT Kerberos provider
Related concepts
Managing MIT Kerberos providers
3. In the Domain field, specify a domain name which is typically a DNS suffix in lowercase characters.
4. From the Realm list, select a realm that you have configured previously.
5. Click Create Domain.
Related concepts
Managing MIT Kerberos domains
the /etc/master.passwd format. You must copy the replacement files to the cluster and reference them by their directory
path.
NOTE: If the replacement files are located outside the /ifs directory tree, you must distribute them manually to every
node in the cluster. Changes that are made to the system provider's files are automatically distributed across the cluster.
Option                                    Description
Authenticate users from this provider     Specifies whether to allow the provider to respond to authentication requests.
Create home directories on first login    Specifies whether to create a home directory the first time a user logs in, if a home directory does not exist for the user.
Path to home directory                    Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain expansion variables such as %U, which expand to generate the home directory path for the user. For more information, see the Home directories section of the OneFS Web Administration Guide or the OneFS CLI Administration Guide.
UNIX Shell                                Specifies the path to the user's login shell, for users who access the file system through SSH.
8. Click Add File Provider.
Related concepts
Managing file providers
Related references
Password file format
Group file format
Netgroup file format
The following command generates an spwd.db file in the /etc directory from a password file that is located at /ifs/test.passwd:
pwd_mkdb /ifs/test.passwd
The following command generates an spwd.db file in the /ifs directory from a password file that is located at /ifs/test.passwd:
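The command for this second example is not preserved here. A sketch, assuming the standard BSD pwd_mkdb -d option that writes the generated database files to another directory, would be:
pwd_mkdb -d /ifs /ifs/test.passwd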
Related concepts
Managing file providers
Related references
Password file format
admin:*:10:10::0:0:Web UI Administrator:/ifs/home/admin:/bin/zsh
The fields are defined below in the order in which they appear in the file.
NOTE: UNIX systems often define the passwd format as a subset of these fields, omitting the Class, Change, and Expiry
fields. To convert a file from passwd to master.passwd format, add :0:0: between the GID field and the Gecos field.
Username The user name. This field is case-sensitive. OneFS does not limit the length; many applications truncate
the name to 16 characters, however.
Password The user’s encrypted password. If authentication is not required for the user, you can substitute an
asterisk (*) for a password. The asterisk character is guaranteed to not match any password.
UID The UNIX user identifier. This value must be a number in the range 0-4294967294 that is not reserved
or already assigned to a user. Compatibility issues occur if this value conflicts with an existing account's
UID.
GID The group identifier of the user’s primary group. All users are a member of at least one group, which is
used for access checks and can also be used when creating files.
Class This field is not supported by OneFS and should be left empty.
Change OneFS does not support changing the passwords of users in the file provider. This field is ignored.
Expiry OneFS does not support the expiration of user accounts in the file provider. This field is ignored.
Gecos This field can store a variety of information but is usually used to store the user’s full name.
Home The absolute path to the user’s home directory.
Shell The absolute path to the user's shell. If this field is set to /sbin/nologin, the user is denied command-line access.
Related concepts
Managing file providers
Group file format
The file provider uses a group file in the format of the /etc/group file that exists on most UNIX systems.
The group file consists of one or more lines containing four colon-separated fields, as shown in the following example:
admin:*:10:root,admin
The fields are defined below in the order in which they appear in the file.
Group name The name of the group. This field is case-sensitive. Although OneFS does not limit the length of the group
name, many applications truncate the name to 16 characters.
Password This field is not supported by OneFS and should contain an asterisk (*).
GID The UNIX group identifier. Valid values are any number in the range 0-4294967294 that is not reserved
or already assigned to a group. Compatibility issues occur if this value conflicts with an existing group's
GID.
Group members A comma-delimited list of user names.
Related concepts
Managing file providers
Where <host> is a placeholder for a machine name, <user> is a placeholder for a user name, and <domain> is a placeholder for a
domain name. Any combination is valid except an empty triple: (,,).
The following sample file contains two netgroups. The rootgrp netgroup contains four hosts: two hosts are defined in member
triples and two hosts are contained in the nested othergrp netgroup, which is defined on the second line.
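The sample file itself did not carry over in this copy. An illustrative reconstruction that matches the description, using hypothetical host, user, and domain names, might look like the following:
rootgrp (host1,root,example.com) (host2,root,example.com) othergrp
othergrp (host3,user1,example.com) (host4,user2,example.com)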
NOTE: A new line signifies a new netgroup. You can continue a long netgroup entry to the next line by typing a backslash
character (\) in the right-most position of the first line.
Related concepts
Managing file providers
Option Description
Users Select this tab to view all users by provider.
Groups Select this tab to view all groups by provider.
3. From the Current Access Zone list, select an access zone.
4. Select the local provider in the Providers list.
Related concepts
Managing local users and groups
Option Description
UID If this setting is left blank, the system automatically allocates a UID for the account. This is the
recommended setting. You cannot assign a UID that is in use by another local user account.
Full Name Type a full name for the user.
Email Address Type an email address for the account.
Primary Group To specify the owner group using the Select a Primary Group dialog box, click Select group.
a. To locate a group under the selected local provider, type a group name or click Search.
b. Select a group to return to the Manage Users window.
Additional Groups          To specify any additional groups to make this user a member of, click Add group.
Home Directory             Type the path to the user's home directory. If you do not specify a path, a directory is automatically created at /ifs/home/<username>.
UNIX Shell                 This setting applies only to users who access the file system through SSH. From the list, select a shell. By default, the /bin/zsh shell is selected.
Account Expiration Date    Click the calendar icon to select the expiration date, or type the expiration date in the field in the format <mm>/<dd>/<yyyy>.
Enable the account         Select this check box to allow the user to authenticate against the local database for SSH, FTP, HTTP, and Windows file sharing through SMB. This setting is not used for UNIX file sharing through NFS.
8. Click Create.
Related concepts
Managing local users and groups
Related references
Naming rules for local users and groups
7. Optional: For each member that you want to add to the group, click Add Members and perform the following tasks in the
Select a User dialog box:
a. Search for either Users, Groups, or Well-known SIDs.
b. If you selected Users or Groups, specify values for the following fields:
User Name
Type all or part of a user name, or leave the field blank to return all users. Wildcard characters are
accepted.
Group Name
Type all or part of a group name, or leave the field blank to return all groups. Wildcard characters are
accepted.
Provider
Select an authentication provider.
c. Click Search.
d. In the Search Results table, select a user and then click Select.
The dialog box closes.
8. Click Create Group.
Related concepts
Managing local users and groups
Related references
Naming rules for local users and groups
6. Click Save Changes.
7. Click Close.
Related concepts
Managing local users and groups
Set a new user account to disable when inactive
You can set a new user account to disable automatically when inactive.
1. Click Access > Membership and roles.
2. On the Users tab, select a provider from the Providers drop-down.
NOTE: This feature is limited to the LOCAL:System provider.
3. Click Create user.
4. Enter the user details and check the box for Disable when inactive.
5. Click Create user to complete the action.
Set lock users criteria
You can configure a user account to lock after a specific number of unsuccessful login attempts.
1. Click Access > Membership and roles > Password Policy.
2. In the Lock users section, set the criteria for your password policy.
3. Click Save.
Managing SSO
SSO requires configuration of an Identity Provider (IdP) and the Service Provider (SP). After both of those components are
configured, you can enable SSO.
System Requirements
OneFS The OneFS user accounts must have appropriate privileges:
● ISI_PRIV_LOGIN_PAPI -- This privilege is required to access the WebUI.
● component-specific privileges-- Administrators typically require privileges to manage components.
For example, an SMB administrator needs ISI_PRIV_SMB privilege.
ADFS The corresponding ADFS user account must have an associated email address configured.
For configuration, ADFS offers a Windows Web UI and a command-line interface. You can use either, with the Web UI being
simpler to use. The following instructions use the ADFS command-line interface.
1. Configure an SSO administrator and maintainer.
In OneFS, the user account must have at least one of the following privileges:
● ISI_PRIV_LOGIN_PAPI - required for the admin to use the OneFS WebUI to administer SSO.
● ISI_PRIV_LOGIN_SSH - required for the admin to use the OneFS CLI in SSH sessions to administer SSO.
● ISI_PRIV_LOGIN_CONSOLE - required for the admin to use the OneFS CLI on the console to administer SSO.
2. Add OneFS metadata to ADFS.
a. RDP to the ADFS server.
b. Set a variable to a rule that defines who can log in. The following example shows a simple rule that permits all users to log
in. You can define more complex rules that fit the needs of your organization.
$AuthRules = @"
@RuleTemplate="AllowAllAuthzRule" => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value="true");
"@
c. Set a variable to the rules for getting the Active Directory user email address as the SAML NameID.
$TransformRules = @"
@RuleTemplate = "LdapClaims"
@RuleName = "LDAP mail"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> issue(store = "Active Directory",
    types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"),
    query = ";mail;{0}", param = c.Value);
@RuleTemplate = "MapClaims"
@RuleName = "NameID"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
    Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType,
    Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] =
    "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress");
"@
Where:
● <OneFS-name> is the name that you want to represent the cluster in ADFS.
● <onefs-node-ip> is the IP address or DNS name of your OneFS node.
Field Description
Entity ID Unique identifier of the IdP as configured on the IdP. For example:
http://rw-webui-win01.example.com/adfs/services/trust
http://rw-webui-win01.example.com/adfs/ls/
Logout URL Log out endpoint for the IdP. For example:
http://rw-webui-win01.example.com/adfs/ls/
d. Repeat this step for each access zone for which you want to configure SSO.
e. Click Next.
5. On the Service Provider screen:
a. Notice that the Current access zone is carried over from the first screen.
b. Select Metadata download or Manual copy, depending on how you want to provide OneFS details about this SP to the
IdP.
c. Provide the hostname or IP address for the SP for the current access zone. For example: 192.1.2.1.
d. Click Generate.
The system generates information about OneFS and this access zone for you to use in configuring the IdP.
e. Obtain the generated information that you can use on the IdP system to prepare it to accept requests from this SP and
access zone.
● If you selected Metadata download above, download the file now. The signing certificate is in the XML file.
● If you selected Manual copy above, use the Copy links in the lower half of the form to copy the information.
Download the Signing Certificate.
f. Click Next.
6. On the Summary screen, review the information.
6
Administrative roles and privileges
This section contains the following topics:
Topics:
• Role-based access
• Roles
• Privileges
• Managing roles
Role-based access
You can assign role-based access to delegate administrative tasks to selected users.
Role-based access control (RBAC) allows the right to perform particular administrative actions to be granted to any user who
can authenticate to a cluster. Security Administrators create roles, assign privileges to the roles, and then assign members. All
administrators, including those given privileges by a role, connect to the System zone to configure the cluster. When these
members log in to the cluster through a configuration interface, they have these privileges. All administrators can configure
settings for access zones, and they always have control over all access zones on the cluster.
Roles also give you the ability to assign privileges (including granular or subprivileges) to member users and groups. By default,
only the root user and the admin user can log in to the web administration interface through HTTP or the command-line
interface through SSH. Using roles, the root and admin users can assign others to integrated or custom roles that have login and
administrative privileges to perform specific administrative tasks.
NOTE: As a best practice, assign users to roles that contain the minimum set of necessary privileges. For most purposes,
the default permission policy settings, system access zone, and integrated roles are sufficient. You can create role-based
access management policies as necessary for your particular environment.
Roles
You can permit and limit access to administrative areas of your cluster on a per-user basis through roles. OneFS includes several
integrated administrator roles with predefined sets of privileges that cannot be modified. You can also create custom roles and
assign privileges to those roles.
The following list describes what you can and cannot do through roles:
● You can assign privileges and subprivileges to a role.
● You can assign privileges and subprivileges to a role as execute/read/no permission, even if the privilege or subprivilege is
write by default.
● You can create custom roles and assign privileges and subprivileges to those roles.
● Using the WebUI, you can copy an existing role.
● If the users can authenticate to the cluster, you can add any user or group of users, including well-known groups, to a role.
● You can add a user or group to more than one role.
● You cannot assign privileges and subprivileges directly to users or groups.
When a user belongs to multiple roles, that user's overall privilege consists of the total of all the sets of privileges set for
all the roles to which the user belongs. If a particular privilege is configured in multiple roles, the user is granted the highest
permission. A top-level or parent privilege that was explicitly assigned to a role has precedence over a privilege or subprivilege
that is inherited by the role.
OneFS determines privilege as follows:
1. OneFS obtains the union of all sets of privileges for all the roles that the user belongs to.
2. OneFS recalculates the inherited privileges and subprivileges for every explicitly granted parent privilege.
If you explicitly grant a new privilege to a role, OneFS recalculates the inherited privileges based on the new privilege.
What you can do with privileges through roles applies equally to subprivileges.
Custom roles
Custom roles supplement integrated roles.
You can create custom roles and assign privileges that are mapped to administrative areas in your cluster environment. For
example, you can create separate administrator roles for security, auditing, storage provisioning, and backup.
You can designate certain privileges as no permission, read, execute, or write when adding the privilege to a role. You can modify
this option at any time to add or remove privileges as user responsibilities grow and change.
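As an illustration, a custom role can also be created and populated from the CLI. In the following sketch, the role name, member, privilege, and the --add-priv-read option form are assumptions used for illustration only; confirm the exact options for your release with isi auth roles create --help and isi auth roles modify --help.

# Create a custom role, add a member, and grant a privilege as read-only (names are examples)
isi auth roles create AuditReview --description "Read-only audit review"
isi auth roles modify AuditReview --add-user jsmith --add-priv-read ISI_PRIV_AUDIT
isi auth roles view AuditReview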
OneFS roles
OneFS includes integrated roles that are configured with the most likely privileges and subprivileges that are required to perform
common administrative functions. You can assign users and groups to OneFS integrated roles, but you cannot modify their
privileges.
OneFS provides the following integrated administrative roles:
● SecurityAdmin
● SystemAdmin
● AuditAdmin
● BackupAdmin
● VMwareAdmin
OneFS also provides an integrated role that is configured with appropriate privileges for APEX File Storage Services users:
BasicUserRole.
Privileges Permission
ISI_PRIV_LOGIN_CONSOLE Read
ISI_PRIV_LOGIN_PAPI Read
ISI_PRIV_LOGIN_SSH Read
ISI_PRIV_AUTH Write
ISI_PRIV_ROLE Write
Privileges Permission
ISI_PRIV_LOGIN_CONSOLE Read
ISI_PRIV_LOGIN_PAPI Read
ISI_PRIV_LOGIN_SSH Read
ISI_PRIV_SYS_SHUTDOWN Read
ISI_PRIV_SYS_SUPPORT Read
Privileges Permission
ISI_PRIV_LOGIN_CONSOLE Read
ISI_PRIV_LOGIN_PAPI Read
ISI_PRIV_LOGIN_SSH Read
ISI_PRIV_SYS_TIME Read
ISI_PRIV_SYS_UPGRADE Read
ISI_PRIV_ANTIVIRUS Read
ISI_PRIV_AUDIT Read
ISI_PRIV_CERTIFICATE Read
ISI_PRIV_CLOUDPOOLS Read
ISI_PRIV_CLUSTER Read
ISI_PRIV_CONFIGURATION Read
ISI_PRIV_DEVICES Read
ISI_PRIV_EVENT Read
ISI_PRIV_FILE_FILTER Read
ISI_PRIV_FTP Read
ISI_PRIV_GET_SET Read
ISI_PRIV_HARDENING Read
ISI_PRIV_HDFS Read
ISI_PRIV_HTTP Read
ISI_PRIV_IPMI Read
ISI_PRIV_JOB_ENGINE Read
ISI_PRIV_KEY_MANAGER Read
ISI_PRIV_LICENSE Read
ISI_PRIV_MONITORING Read
ISI_PRIV_NDMP Read
ISI_PRIV_NETWORK Read
ISI_PRIV_NFS Read
ISI_PRIV_NTP Read
Privileges Permission
ISI_PRIV_IFS_BACKUP Read
ISI_PRIV_IFS_RESTORE Read
Privileges Permission
ISI_PRIV_LOGIN_PAPI Read
ISI_PRIV_NETWORK Write
ISI_PRIV_SMARTPOOLS Write
ISI_PRIV_SNAPSHOT Write
ISI_PRIV_SYNCIQ Write
ISI_PRIV_VCENTER Write
ISI_PRIV_NS_TRAVERSE Read
ISI_PRIV_NS_IFS_ACCESS Read
Privileges Permission
ISI_PRIV_LOGIN_PAPI Read
ISI_PRIV_AUTH Read
ISI_PRIV_AUTH_PROVIDERS No permission
ISI_PRIV_AUTH_SETTINGS_ACLS No permission
ISI_PRIV_AUTH_SETTINGS_GLOBAL No permission
ISI_PRIV_AUTH_ZONES No permission
ISI_PRIV_CLOUDPOOLS No permission
ISI_PRIV_FILE_FILTER Write
ISI_PRIV_HDFS Write
ISI_PRIV_HDFS_RACKS No permission
ISI_PRIV_HDFS_SETTINGS Write
ISI_PRIV_NFS Write
ISI_PRIV_NFS_SETTINGS Read
ISI_PRIV_NFS_SETTINGS_GLOBAL No permission
ISI_PRIV_NFS_SETTINGS_ZONE No permission
ISI_PRIV_QUOTA Write
ISI_PRIV_QUOTA_QUOTAMANAGEMENT Write
ISI_PRIV_QUOTA_QUOTAMANAGEMENT_EFFICIENCYRATIO No permission
ISI_PRIV_QUOTA_QUOTAMANAGEMENT_REDUCTIONRATIO No permission
ISI_PRIV_QUOTA_QUOTAMANAGEMENT_THRESHOLDSON Read
ISI_PRIV_QUOTA_QUOTAMANAGEMENT_USAGE_FSPHYSICAL No permission
ISI_PRIV_QUOTA_REPORTS Read
ISI_PRIV_QUOTA_SETTINGS Write
ISI_PRIV_QUOTA_SUMMARY Read
ISI_PRIV_S3 Write
ISI_PRIV_S3_MYKEYS No permission
ISI_PRIV_S3_SETTINGS Write
ISI_PRIV_S3_SETTINGS_GLOBAL No permission
ISI_PRIV_SMARTPOOLS Write
ISI_PRIV_SMARTPOOLS_STATUS Read
ISI_PRIV_SMARTPOOLS_STORAGEPOOL No permission
ISI_PRIV_SMARTPOOLS_STORAGEPOOL_POOLDETAILS No permission
ISI_PRIV_SMB Write
ISI_PRIV_SMB_SESSIONS Read
ISI_PRIV_SMB_SETTINGS No permission
ISI_PRIV_SYNCIQ_SETTINGS_GLOBAL_SETTINGS_PREFERRED_RPO_ALERT No permission
ISI_PRIV_SYNCIQ_SETTINGS_GLOBAL_SETTINGS_RPO_ALERTS No permission
ISI_PRIV_SYNCIQ_SETTINGS_REPORT_SETTINGS No permission
ISI_PRIV_SYNCIQ_SETTINGS_SERVICE No permission
ISI_PRIV_NS_IFS_ACCESS Read
Privileges
Privileges permit users to complete tasks on a cluster.
Privileges are associated with an area of cluster administration such as Job Engine, SMB, Quotas, or statistics. Privileges enable
you to control the actions that a user or role can perform within a particular area of cluster administration.
In OneFS 9.3.0.0 and later, privileges are granular: each area of cluster administration is associated with a top-level privilege,
the feature or parent privilege. Each parent privilege can have one or more subprivileges, which can also have subprivileges.
Granular privileges enable you to control the specific actions that a user can perform within a cluster administration area in a
detailed way.
Privilege levels are as follows:
● Feature: the top-level privilege associated with an area of cluster administration, such as quotas (ISI_PRIV_QUOTA).
● Entity (sub-feature): a subprivilege associated with a specific function of an area of cluster administration. For example,
quota reports (ISI_PRIV_QUOTA_REPORTS), quota settings (ISI_PRIV_QUOTA_SETTINGS), or quota management
(ISI_PRIV_QUOTA_QUOTAMANAGEMENT). Entity-level privileges can have subprivileges.
Write (w) Grants write, execute, and read access privileges to a role or user. Allows a role or user to view, create,
modify, and delete a configuration subsystem such as statistics, snapshots, or quotas. For example,
the ISI_PRIV_QUOTA privilege with write permission allows an administrator to create, schedule, and
run quota reports and to configure quota notification rules. Write permission allows performing the API
operations GET, PUT, POST, and DELETE.
Execute (x) Grants execute and read access privileges to a role or user. Allows a role or user to initiate API operations
such as PUT, POST, or DELETE for specific URIs on a configuration subsystem without granting write
privileges to that role or user. The specific URIs on which execute privileges can be granted do not
perform write operations. The specific URIs are /sync/policies/<POLICY>, /sync/jobs,
/sync/jobs/<JOB>, /sync/policies/<POLICY>/reset, and /sync/rules/<RULE>.
Read (r) Grants the read access privilege to a role or user. Allows a role or user to view a configuration subsystem.
The role or user cannot modify configuration settings. Read permission allows performing the API
operation GET.
No permission (-) The privilege is not granted to the role or user. The role or user has no access to the privilege.
Privileges are granted to the user on login to a cluster through the OneFS API, the web administration interface, SSH, or a
console session. A token is generated for the user that includes a list of all privileges that are granted to that user. Each
URI, web-administration interface page, and command requires a specific privilege to view or modify the information available
through any of these interfaces.
Sometimes, privileges cannot be granted or there are privilege limitations.
● Privileges are not granted to users that do not connect to the System Zone during login or to users that connect through
the deprecated Telnet service, even if they are members of a role.
● Privileges do not provide administrative access to configuration paths outside of the OneFS API. For example, the
ISI_PRIV_SMB privilege does not grant a user the right to configure SMB shares using the Microsoft Management Console
(MMC).
● Privileges do not provide administrative access to all log files. Most log files require root access.
● Privileges can be denied to users and roles using No permission.
The privilege ISI_PRIV_RESTRICTED_AUTH and its subprivileges ISI_PRIV_RESTRICTED_AUTH_GROUPS and
ISI_PRIV_RESTRICTED_AUTH_USERS provide limited administrative privileges for groups and users. Administrators with
the ISI_PRIV_RESTRICTED_AUTH privilege can modify only those groups and users with the same or less privilege as
the administrator. Administrators with the ISI_PRIV_RESTRICTED_AUTH_GROUPS or ISI_PRIV_RESTRICTED_AUTH_USERS
privileges can modify only those groups or users with the same privilege as the administrator. For example, you can grant the
ISI_PRIV_RESTRICTED_AUTH privilege to a help desk administrator to perform basic user management operations without
having the full abilities of the ISI_PRIV_AUTH privilege.
Login privileges
The login privileges listed in the following table either allow the user to perform specific actions or grant access to an area of
administration on the cluster. The permission listed for each privilege is the highest permission allowed.
System privileges
The system privileges listed in the following table either allow the user to perform specific actions or grant access to an area
of administration on the cluster. Permission types are No permission (-), Read (r), Execute (x), and Write (w). The permission
listed for each privilege is the highest permission allowed.
Security privileges
The following table describes the privileges and subprivileges that allow users to assign privileges to others. Subprivileges inherit
their permission type from their parent privilege. Permission types are No permission (-), Read (r), Execute (x), and Write (w).
The permission listed for each privilege is the highest permission allowed.
Configuration privileges
The configuration privileges that are listed in the following tables either allow the user to perform specific actions or grant no
permission, read, execute, or write access to an area of administration on the cluster.
When working with privileges:
● Grant the parent or top-level privilege before granting subprivileges. Subprivileges initially inherit their properties and
permission type from their parent or top-level privileges.
● You can explicitly add subprivileges with less permission than the parent privilege.
● You can change the permission type as appropriate for your requirements.
Permission types are:
● No permission (-)
● Read (r)
● Execute (x)
● Write (w)
The following table lists and describes the feature-level (parent) privileges. Feature-level privileges have a parent ID of
ISI_PRIV_ZERO and are marked with *. Tables listing the subprivileges for each top-level privilege follow. The permission
listed for each privilege is the highest permission allowed.
Subprivilege tables
The following tables list and describe the subprivileges for feature-level (ISI_PRIV_ZERO) privileges. Subprivileges inherit their
privileges from their parent privilege. Some of these subprivileges also have subprivileges and are marked with *. The permission
listed for each subprivilege is the highest permission allowed. Subprivilege permissions cannot be higher than their parent
privilege permissions.
ISI_PRIV_IFS_RESTORE Restore files from /ifs. Bypasses file permission checks and grants all read permissions. Read
NOTE: This privilege circumvents traditional file access checks, such as mode bits or NTFS ACLs.
Namespace privileges
The namespace privileges listed in the following table allow the user to perform specific actions or grant access permissions, as
appropriate, to an area of administration on the cluster. Permission types are No permission (-), Read (r), Execute (x), and Write
(w). The permission listed for each privilege is the highest permission allowed.
Most cluster privileges allow changes to cluster configuration in some manner. The backup and restore privileges allow access to
cluster data from the System zone, the traversing of all directories, and reading of all file data and metadata regardless of file
permissions.
Users who are assigned these privileges can use a protocol as a backup protocol to another machine without generating
access-denied errors and without connecting as the root user. These two privileges are supported over the following client-side
protocols:
● SMB
● NFS
● OneFS API
● FTP
● SSH
Over SMB, the ISI_PRIV_IFS_BACKUP and ISI_PRIV_IFS_RESTORE privileges emulate the Windows privileges
SE_BACKUP_NAME and SE_RESTORE_NAME. The emulation means that normal file-open procedures are protected by
file system permissions. To enable the backup and restore privileges over the SMB protocol, you must open files with the
FILE_OPEN_FOR_BACKUP_INTENT option, which occurs automatically through Windows backup software such as Robocopy.
Application of the option is not automatic when files are opened through general file browsing software such as Windows File
Explorer.
Both ISI_PRIV_IFS_BACKUP and ISI_PRIV_IFS_RESTORE privileges primarily support Windows backup tools such as
Robocopy. A user must be a member of the BackupAdmin built-in role to access all Robocopy features, which includes copying
file DACL and SACL metadata.
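As a hedged example of how these privileges are typically exercised, a Windows client logged in with a BackupAdmin member account might run Robocopy in backup mode so that files are opened with backup intent. The cluster and share names below are placeholders; verify the Robocopy options against Microsoft's documentation for your client version.

robocopy C:\data \\<cluster-name>\<share>\data /MIR /B /COPYALL

Here /B requests backup-mode file opens, /MIR mirrors the directory tree, and /COPYALL copies file data together with security metadata such as DACLs and SACLs.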
Privilege-to-command mapping
Each privilege is associated with one or more commands. Some commands require root access.
Managing roles
You can view, add, or remove members of any role. Except for integrated roles, whose privileges you cannot modify, you can
add or remove OneFS privileges on a role-by-role basis. You can copy and delete roles.
The role workflow navigation bar appears across the top of each role task window. The navigation bar indicates each step in the
creation or update process. OneFS highlights each step as you go. To return to a previous step, click that step in the navigation bar.
NOTE: Roles take both users and groups as members. If a group is added to a role, all users who are members of that group
are assigned the privileges that are associated with the role. Similarly, members of multiple roles are assigned the combined
privileges of each role.
Modify a role
You can modify the description and the user or group membership of any role, including integrated roles. You can modify the
name and privileges only of custom roles. Return to a previous step by clicking that step on the workflow navigation bar.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click View / Edit.
Copy a role
You can copy an existing role and add or remove privileges and members for that role as needed.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click More > Copy.
3. Modify the role name, description, members, and privileges as needed.
4. Click Submit.
View a role
You can view information about integrated and custom roles.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click View / Edit.
The Edit role details window appears.
3. Use the roles workflow navigation bar to view the basic settings, members, and privileges for the role.
4. Click Cancel to return to the Membership & Roles page.
View privileges
You can view user privileges.
1. Click Access > Membership & Roles > Roles.
2. Click View/Edit for the role for which to view privileges.
3. Click the Privileges button on the workflow navigation bar.
The Privileges window appears.
4. Scroll to view the privileges in the Permission column.
Permissions are - (no permission), R (read), X (execute), and W (write).
5. Click Cancel to return to the Membership and Roles page.
Identity types
OneFS supports three primary identity types, each of which you can store directly on the file system: the user identifier (UID)
and group identifier (GID) for UNIX, and the security identifier (SID) for Windows.
When you log on to a cluster, the user mapper expands your identity to include your other identities from all the directory
services, including Active Directory, LDAP, and NIS. After OneFS maps your identities across the directory services, it generates
an access token that includes the identity information associated with your accounts. A token includes the following identifiers:
● A UNIX user identifier (UID) and a group identifier (GID). A UID or GID is a 32-bit number with a maximum value of
4,294,967,295.
● A security identifier (SID) for a Windows user account. A SID is a series of authorities and sub-authorities ending with
a 32-bit relative identifier (RID). Most SIDs have the form S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are
specific to a domain or computer and <RID> denotes the object in the domain.
● A primary group SID for a Windows group account.
● A list of supplemental identities, including all groups in which the user is a member.
The token also contains privileges that stem from administrative role-based access control.
On a PowerScale cluster, a file contains permissions, which appear as an access control list (ACL). The ACL controls access to
directories, files, and other securable system objects.
Access tokens
An access token is created when the user first makes a request for access.
Access tokens represent who a user is when performing actions on the cluster and supply the primary owner and group
identities during file creation. Access tokens are also compared against the ACL or mode bits during authorization checks.
During user authorization, OneFS compares the access token, which is generated during the initial connection, with the
authorization data on the file. All user and identity mapping occurs during token generation; no mapping takes place during
permissions evaluation.
An access token includes all UIDs, GIDs, and SIDs for an identity, in addition to all OneFS privileges. OneFS reads the information
in the token to determine whether a user has access to a resource. It is important that the token contains the correct list of
UIDs, GIDs, and SIDs. An access token is created from one of the following sources:
Source Authentication
Username ● SMB impersonate user
● Kerberized NFSv3
● Kerberized NFSv4
● NFS export user mapping
● HTTP
● FTP
● HDFS
Privilege Attribute Certificate (PAC) ● SMB NTLM
● Active Directory Kerberos
User identifier (UID) ● NFS AUTH_SYS mapping
Step 1: User identity lookup. Using the initial identity, the user is looked up in all configured authentication providers in the
access zone, in the order in which they are listed. The user identity and group list are retrieved from the authenticating
provider. Next, additional group memberships that are associated with the user and group list are looked up for all other
authentication providers. All of these SIDs, UIDs, or GIDs are added to the initial token.
Step 2: ID mapping. The user's identifiers are associated across directory services. All SIDs are converted to their equivalent
UID/GID and vice versa. These ID mappings are also added to the access token.
Step 3: User mapping. Access tokens from other directory services are combined. If the username matches any user mapping
rules, the rules are processed in order and the token is updated accordingly.
Step 4: On-disk identity calculation. The default on-disk identity is calculated from the final token and the global setting. These
identities are used for newly created files.
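To inspect the token that OneFS generates for a particular user, you can view it from the CLI. The following is a hedged sketch: <username> and the zone name are placeholders, and the exact argument form of the command may vary by release (check isi auth mapping token --help).

isi auth mapping token <username> --zone zone3

The output lists the UID, GID, SID, on-disk identity, and supplemental identities that make up the token, similar to the token listings shown later in this chapter.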
ID mapping
The Identity (ID) mapping service maintains relationship information between mapped Windows and UNIX identifiers to provide
consistent access control across file sharing protocols within an access zone.
NOTE: ID mapping and user mapping are different services, despite the similarity in names.
During authentication, the authentication daemon requests identity mappings from the ID mapping service in order to create
access tokens. Upon request, the ID mapping service returns Windows identifiers mapped to UNIX identifiers or UNIX identifiers
mapped to Windows identifiers. When a user authenticates to a cluster over NFS with a UID or GID, the ID mapping service
returns the mapped Windows SID, allowing access to files that another user stored over SMB. When a user authenticates to the
cluster over SMB with a SID, the ID mapping service returns the mapped UNIX UID and GID, allowing access to files that a UNIX
client stored over NFS.
Mappings between UIDs or GIDs and SIDs are stored according to access zone in a cluster-distributed database called the ID
map. Each mapping in the ID map is stored as a one-way relationship from the source to the target identity type. Two-way
mappings are stored as complementary one-way mappings.
If the ID mapping service does not locate and return a mapped UID or GID in the ID map, the authentication daemon searches
other external authentication providers configured in the same access zone for a user that matches the same name as the
Active Directory user.
If a matching user name is found in another external provider, the authentication daemon adds the matching user's UID or GID
to the access token for the Active Directory user, and the ID mapping service creates a mapping between the UID or GID and
the Active Directory user's SID in the ID map. This is referred to as an external mapping.
NOTE: When an external mapping is stored in the ID map, the UID is specified as the on-disk identity for that user. When
the ID mapping service stores a generated mapping, the SID is specified as the on-disk identity.
If a matching user name is not found in another external provider, the authentication daemon assigns a UID or GID from the ID
mapping range to the Active Directory user's SID, and the ID mapping service stores the mapping in the ID map. This is referred
to as a generated mapping. The ID mapping range is a pool of UIDs and GIDs allocated in the mapping settings.
After a mapping has been created for a user, the authentication daemon retrieves the UID or GID stored in the ID map upon
subsequent lookups for the user.
ID mapping ranges
In access zones with multiple external authentication providers, such as Active Directory and LDAP, it is important that the
UIDs and GIDs from different providers that are configured in the same access zone do not overlap. Overlapping UIDs and GIDs
between providers within an access zone might result in some users gaining access to other users' directories and files.
The range of UIDs and GIDs that can be allocated for generated mappings is configurable in each access zone through the isi
auth settings mappings modify command. The default range for both UIDs and GIDs is 1000000–2000000 in each
access zone.
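A hedged illustration of adjusting the range for one zone follows. The option names --uid-range-min, --uid-range-max, --gid-range-min, and --gid-range-max are hypothetical names used only for this sketch; confirm the real option names with isi auth settings mappings modify --help before use.

# Hypothetical option names for illustration; verify with --help on your release
isi auth settings mappings modify --zone zone3 --uid-range-min 3000000 --uid-range-max 4000000 --gid-range-min 3000000 --gid-range-max 4000000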
Do not include commonly used UIDs and GIDs in your ID ranges. For example, UIDs and GIDs below 1000 are reserved for system
accounts and should not be assigned to users or groups.
User mapping
User mapping provides a way to control permissions by specifying a user's security identifiers, user identifiers, and group
identifiers. OneFS uses the identifiers to check file or group ownership.
With the user-mapping feature, you can apply rules to modify which user identity OneFS uses, add supplemental user identities,
and modify a user's group membership. The user-mapping service combines a user’s identities from different directory services
into a single access token and then modifies it according to the rules that you create.
NOTE: You can configure mapping rules on a per-zone basis. Mapping rules must be configured separately in each access
zone that uses them. OneFS maps users only during login or protocol access.
Use Active Directory with RFC 2307 and Windows Services for UNIX
   Use Microsoft Active Directory with Windows Services for UNIX and RFC 2307 attributes to manage Linux, UNIX, and
   Windows systems. Integrating UNIX and Linux systems with Active Directory centralizes identity management and eases
   interoperability, reducing the need for user-mapping rules. Make sure your domain controllers are running Windows Server
   2003 or later.
Employ a consistent username strategy
   The simplest configurations name users consistently, so that each UNIX user corresponds to a similarly named Windows user.
   Such a convention allows rules with wildcard characters to match names and map them without explicitly specifying each pair
   of accounts.
Do not use overlapping ID ranges
   In networks with multiple identity sources, such as LDAP and Active Directory with RFC 2307 attributes, ensure that UID
   and GID ranges do not overlap. It is also important that the range from which OneFS automatically allocates UIDs and GIDs
   does not overlap with any other ID range. OneFS automatically allocates UIDs and GIDs from the range 1,000,000-2,000,000.
   If UIDs and GIDs overlap across multiple directory services, some users might gain access to other users' directories and files.
Avoid common UIDs and GIDs
   Do not include commonly used UIDs and GIDs in your ID ranges. For example, UIDs and GIDs below 1000 are reserved for
   system accounts; do not assign them to users or groups.
Do not use UPNs in mapping rules
   You cannot use a user principal name (UPN) in a user mapping rule. A UPN is an Active Directory domain and username that
   are combined into an Internet-style name with an @ symbol, such as an email address: jane@example. If you include a UPN
   in a rule, the mapping service ignores it and may return an error. Instead, specify names in the format DOMAIN\user.com.
Group rules by type and order them
   The system processes every mapping rule by default, which can present problems when you apply a deny-all rule (for
   example, to deny access to all unknown users). In addition, replacement rules might interact with rules that contain wildcard
   characters. To minimize complexity, it is recommended that you group rules by type and organize them in the following order:
   1. Replacement rules: Specify all rules that replace an identity first to ensure that OneFS replaces all instances of the identity.
   2. Join, add, and insert rules: After the names are set by any replacement operations, specify join, add, and insert rules to add
      extra identifiers.
   3. Allow and deny rules: Specify rules that allow or deny access last.
   NOTE: Stop all processing before applying a default deny rule. To do so, create a rule that matches allowed users but does
   nothing, such as an add operator with no field options, and has the break option. After enumerating the allowed users, you
   can place a catchall deny at the end to replace anybody unmatched with an empty user.
   To prevent explicit rules from being skipped, in each group of rules, order explicit rules before rules that contain wildcard
   characters.
Add the LDAP or NIS primary group to the supplemental groups
   When a PowerScale cluster is connected to Active Directory and LDAP, add the LDAP primary group to the list of
   supplemental groups. This enables OneFS to honor group permissions on files created over NFS or migrated from other UNIX
   storage systems. The same practice is advised when a PowerScale cluster is connected to both Active Directory and NIS.
On-disk identity
After the user mapper resolves a user's identities, OneFS determines an authoritative identifier for it, which is the preferred
on-disk identity.
OneFS stores either UNIX or Windows identities in file metadata on disk. On-disk identity types are UNIX, SID, and native.
Identities are set when a file is created or a file's access control data is modified. Almost all protocols require some level of
mapping to operate correctly, so choosing the preferred identity to store on disk is important. You can configure OneFS to store
either the UNIX or the Windows identity, or you can allow OneFS to determine the optimal identity to store.
Although you can change the type of on-disk identity, the native identity is best for a network with UNIX and Windows systems.
In native on-disk identity mode, setting the UID as the on-disk identity improves NFS performance.
NOTE: If you change the on-disk identity type, you should run the PermissionRepair job with the Convert repair type
selected to make sure that the disk representation of all files is consistent with the changed setting. For more information,
see the Run the PermissionRepair job section.
Managing ID mappings
You can create, modify, and delete identity mappings and configure ID mapping settings.
The following command deletes all identity mappings in the zone3 access zone that were both created automatically and
include a UID or GID from an external authentication source:
The following command deletes the identity mapping of the user with UID 4236 in the zone3 access zone:
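The exact commands are environment specific. As a hedged sketch only (the --all, --only-auto, --only-external, --source-uid, and --2way options are assumptions; confirm the available options with isi auth mapping delete --help), they might resemble:

isi auth mapping delete --all --only-auto --only-external --zone zone3
isi auth mapping delete --source-uid 4236 --2way --zone zone3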
Name: user_36
On-disk: UID: 4236
Unix uid: 4236
Unix gid: -100000
SMB: S-1-22-1-4236
User
Name: user_36
UID: 4236
SID: S-1-22-1-4236
On Disk: 4236
ZID: 3
Zone: zone3
Privileges: -
Primary Group
Name: user_36
GID: 4236
SID: S-1-22-2-4236
On Disk: 4236
Name: YORK\stand
DN: CN=stand,CN=Users,DC=york,DC=hull,DC=example,DC=com
DNS Domain: york.hull.example.com
Domain: YORK
Provider: lsa-activedirectory-provider:YORK.HULL.EXAMPLE.COM
Sam Account Name: stand
UID: 4326
SID: S-1-5-21-1195855716-1269722693-1240286574-591111
Primary Group
ID : GID:1000000
Name : YORK\york_sh_udg
Additional Groups: YORK\sd-york space group
YORK\york_sh_udg
YORK\sd-york-group
YORK\sd-group
YORK\domain users
3. View a user identity from LDAP only by running the isi auth users view command.
The following command displays the identity of an LDAP user named stand:
Name: stand
DN: uid=stand,ou=People,dc=colorado4,dc=hull,dc=example,dc=com
Related references
Mapping rule options
Mapping rule operators
Related tasks
Merge Windows and UNIX tokens
Retrieve the primary group from LDAP
Test a user-mapping rule
User
Name:krb_user_002
UID:1002
SID:S-1-22-1-1001
On disk:1001
ZID:1
Zone:System
Privileges:-
Primary Group
Name:krb_user_001
GID:1000
SID:S-1-22-2-1001
On disk:1000
Supplemental Identities
Name:Authenticated Users
GID: -
SID:S-1-5-11
Related tasks
Create a user-mapping rule
Option Description
Join two users together Inserts the new identity into the token.
Append field from a user Modifies the access token by adding fields to it.
Depending on your selection, the Create a User Mapping Rule dialog box refreshes to display additional fields.
5. Populate the fields as needed.
6. Click Add Rule.
NOTE: Rules are called in the order they are listed. To ensure that each rule gets processed, list replacements first and
allow/deny rules last. You can change the order in which a rule is listed by clicking its title bar and dragging it to a new
position.
Related tasks
Create a user-mapping rule
Test a user-mapping rule
When user411 connects to the share with the net use command, the user's home directory is created at /ifs/home/
user411. On user411's Windows client, the net use m: command connects /ifs/home/user411 through the HOMEDIR
share:
1. Run the following commands on the cluster with the --allow-variable-expansion option enabled. The %U expansion
variable expands to the user name, and the --auto-create-directory option is enabled to create the directory if it
does not exist:
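A hedged sketch of what such a command could look like follows. The share name HOMEDIR and the argument form are assumptions for illustration; the --allow-variable-expansion and --auto-create-directory options are described in this guide, but confirm the exact syntax with isi smb shares create --help.

isi smb shares create HOMEDIR /ifs/home/%U --allow-variable-expansion yes --auto-create-directory yes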
If user411 connects to the share with the net use command, user411's home directory is created at /ifs/home/
user411. On user411's Windows client, the net use m: command connects /ifs/home/user411 through the
HOMEDIR share, mapping the connection similar to the following example:
2. Run a net use command, similar to the following example, on a Windows client to map the home directory for user411:
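A hedged example of such a mapping, where <cluster-name> and <domain> are placeholders for the cluster's SMB host name and the user's domain:

net use m: \\<cluster-name>\HOMEDIR /user:<domain>\user411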
3. Run a command similar to the following example on the cluster to view the inherited ACL permissions for the user411 share:
cd /ifs/home/user411
ls -lde .
After running this command, user Zachary will see a share named 'zachary' rather than '%U', and when Zachary tries to connect
to the share named 'zachary', he will be directed to /ifs/home/zachary. On a Windows client, if Zachary runs the following
commands, he sees the contents of his /ifs/home/zachary directory:
Similarly, if user Claudia runs the following commands on a Windows client, she sees the directory contents of /ifs/home/
claudia:
Zachary and Claudia cannot access one another's home directory because only the share 'zachary' exists for Zachary and only
the share 'claudia' exists for Claudia.
NOTE: The following examples refer to setting the login shell to /bin/bash. You can also set the shell to /bin/rbash.
1. Run the following command to set the login shell for all local users to /bin/bash:
2. Run the following command to set the default login shell for all Active Directory users in your domain to /bin/bash:
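Hedged examples of these two commands follow. The Active Directory domain name is a placeholder, and the --login-shell option form is an assumption; verify with isi auth local modify --help and isi auth ads modify --help.

isi auth local modify System --login-shell /bin/bash
isi auth ads modify YOUR.DOMAIN.NAME.COM --login-shell /bin/bash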
Name: System
Path: /ifs
Groupnet: groupnet0
Map Untrusted: -
Auth Providers: lsa-local-provider:System, lsa-file-provider:System
NetBIOS Name: -
User Mapping Rules: -
Home Directory Umask: 0077
Skeleton Directory: /usr/share/skel
Cache Entry Expiry: 4H
Negative Cache Entry Expiry: 1m
Zone ID: 1
In the command result, you can see that the default Home Directory Umask setting is 0077, so created home directories
receive permissions of 0700, which is equivalent to (0755 & ~(077)). You can modify the Home Directory Umask setting for a
zone with the --home-directory-umask option, specifying an octal number as the umask value. This value indicates the
permissions that are to be disabled, so larger mask values indicate fewer permissions. For example, a umask value of 000 or
022 yields created home directory permissions of 0755, whereas a umask value of 077 yields created home directory permissions
of 0700.
2. Run a command similar to the following example to allow a group/others write/execute permission in a home directory:
In this example, user home directories are created with base mode bits 0755 masked by a umask value of 022, which yields
created home directory permissions of 0755 (equivalent to 0755 & ~(022)).
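A hedged example of such a command, assuming the System zone (the --home-directory-umask option name is taken from this guide; confirm the full syntax with isi zone zones modify --help):

isi zone zones modify System --home-directory-umask 022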
3. Run the isi auth ads view command with the --verbose option.
The system displays output similar to the following example:
Name: YOUR.DOMAIN.NAME.COM
NetBIOS Domain: YOUR
...
Create Home Directory: Yes
Home Directory Template: /ifs/home/ADS/%D/%U
Login Shell: /bin/sh
5. Optional: To verify this information, run the ssh command from an external UNIX node.
For example, the following command would create /ifs/home/ADS/<your-domain>/user_100 if it did not previously
exist:
ssh <your-domain>\\[email protected]
Name: System
...
Skeleton Directory: /usr/share/skel
2. Run the isi zone zones modify command to modify the default skeleton directory.
The following command modifies the default skeleton directory, /usr/share/skel, in an access zone, where System is
the value for the <zone> option and /usr/share/skel2 is the value for the <path> option:
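A hedged sketch of that command, using the values named above (confirm the option name with isi zone zones modify --help):

isi zone zones modify System --skeleton-directory /usr/share/skel2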
Authentication provider   Home directory settings                    Home directory creation   UNIX login shell
Local                     --home-directory-template=/ifs/home/%U,    Enabled                   /bin/sh
                          --create-home-directory=yes,
                          --login-shell=/bin/sh
Related references
Supported expansion variables
%D   NetBIOS domain name (for example, YORK for YORK.EAST.EXAMPLE.COM)   Expands to the user's domain name, based on the
     authentication provider:
     ● For Active Directory users, %D expands to the Active Directory NetBIOS name.
     ● For local users, %D expands to the cluster name in uppercase characters. For example, for a cluster named cluster1, %D
       expands to CLUSTER1.
     ● For users in the System file provider, %D expands to UNIX_USERS.
     ● For users in other file providers, %D expands to FILE_USERS.
     ● For LDAP users, %D expands to LDAP_USERS.
     ● For NIS users, %D expands to NIS_USERS.
%Z   Zone name (for example, ZoneABC)   Expands to the access zone name. If multiple zones are activated, this variable is
     useful for differentiating users in separate zones. For example, for a user named user1 in the System zone, the
     path /ifs/home/%Z/%U is mapped to /ifs/home/System/user1.
%L   Host name (cluster host name in lowercase)   Expands to the host name of the cluster, normalized to lowercase. Limited use.
%0   First character of the user name   Expands to the first character of the user name.
%1   Second character of the user name   Expands to the second character of the user name.
%2   Third character of the user name   Expands to the third character of the user name.
NOTE: If the user name includes fewer than three characters, the %0, %1, and %2 variables wrap around. For example, for
a user named ab, the variables map to a, b, and a, respectively. For a user named a, all three variables map to a.
Related references
Supported expansion variables
ACLs
In Windows environments, file and directory permissions, referred to as access rights, are defined in access control lists (ACLs).
Although ACLs are more complex than mode bits, ACLs can express much more granular sets of access rules. OneFS checks the
ACL processing rules commonly associated with Windows ACLs.
A Windows ACL contains zero or more access control entries (ACEs), each of which represents the security identifier (SID) of
a user or a group as a trustee. In OneFS, an ACL can contain ACEs with a UID, GID, or SID as the trustee. Each ACE contains
a set of rights that allow or deny access to a file or folder. An ACE can optionally contain an inheritance flag to specify whether
the ACE should be inherited by child folders and files.
NOTE: Instead of the standard three permissions available for mode bits, ACLs have 32 bits of fine-grained access rights.
Of these, the upper 16 bits are general and apply to all object types. The lower 16 bits vary between files and directories but
are defined in a way that allows most applications to apply the same bits for files and directories.
Rights grant or deny access for a given trustee. You can block user access explicitly through a deny ACE or implicitly by
ensuring that a user does not directly, or indirectly through a group, appear in an ACE that grants the right.
Mixed-permission environments
When a file operation requests an object’s authorization data, for example, with the ls -l command over NFS or with the
Security tab of the Properties dialog box in Windows Explorer over SMB, OneFS attempts to provide that data in the
requested format. In an environment that mixes UNIX and Windows systems, some translation may be required when performing
create file, set security, get security, or access operations.
SID-to-UID and SID-to-GID mappings are cached in both the OneFS ID mapper and the stat cache. If a mapping has
recently changed, the file might report inaccurate information until the file is updated or the cache is flushed.
User
Name : <username>
UID : 2018
SID : SID:S-1-5-21-2141457107-1514332578-1691322784-1018
File
Owner : user:root
Group : group:wheel
Mode : drwxrwxrwx
Relevant Mode : d---rwx---
Permissions
Expected : user:<username> \
allow dir_gen_read,dir_gen_write,dir_gen_execute,delete_child
3. View mode-bits permissions for a user by running the isi auth access command.
The following command displays verbose-mode file permissions information in /ifs/ for the user that you specify in place
of <username>:
4. View expected ACL user permissions on a file for a user by running the isi auth access command.
The following command displays verbose-mode ACL file permissions for the file file_with_acl.tx in /ifs/data/ for
the user that you specify in place of <username>:
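Hedged examples of these two commands follow; <username> is a placeholder, and the positional argument order and the -v option are assumptions (check isi auth access --help):

isi auth access <username> /ifs/ -v
isi auth access <username> /ifs/data/file_with_acl.tx -v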
Option Description
Send NTLMv2 Specifies whether to send only NTLMv2 responses to SMB clients with NTLM-compatible credentials.
On-Disk Identity Controls the preferred identity to store on disk. If OneFS is unable to convert an identity to the
preferred format, it is stored as is. This setting does not affect identities that are currently stored on
disk. Select one of the following settings:
native Allow OneFS to determine the identity to store on disk. This is the recommended
setting.
unix Always store incoming UNIX identifiers (UIDs and GIDs) on disk.
sid Store incoming Windows security identifiers (SIDs) on disk, unless the SID was
generated from a UNIX identifier; in that case, convert it back to the UNIX
identifier and store it on disk.
Space Replacement For clients that have difficulty parsing spaces in user and group names, specifies a substitute character.
3. Click Save.
If you changed the on-disk identity selection, it is recommended that you run the PermissionRepair job with the Convert repair
type to prevent potential permissions errors. For more information, see the Run the PermissionRepair job section.
Related references
ACL policy settings
Environment
Depending on the environment you select, the system will automatically select the General ACL Settings and Advanced ACL
Settings options that are optimal for that environment. You also have the option to manually configure general and advanced
settings.
Balanced Enables PowerScale cluster permissions to operate in a mixed UNIX and Windows environment. This
setting is recommended for most PowerScale cluster deployments.
UNIX only Enables PowerScale cluster permissions to operate with UNIX semantics, as opposed to Windows
semantics. Enabling this option prevents ACL creation on the system.
Windows only Enables PowerScale cluster permissions to operate with Windows semantics, as opposed to UNIX
semantics. Enabling this option causes the system to return an error on UNIX chmod requests.
Custom environment Allows you to configure General ACL Settings and Advanced ACL Settings options.
NOTE: Inheritable ACLs on the system take precedence over this setting. If inheritable ACLs are set
on a folder, any new files and folders that are created in that folder inherit the folder's ACL. Disabling
this setting does not remove ACLs currently set on files. If you want to clear an existing ACL, run the
chmod -b <mode> <file> command to remove the ACL and set the correct permissions.
Use the chmod Command On Files With Existing ACLs
   Specifies how permissions are handled when a chmod operation is initiated on a file with an ACL, either locally or over NFS.
   This setting controls any elements that affect UNIX permissions, including File System Explorer. Enabling this policy setting
   does not change how chmod operations affect files that do not have ACLs. Select one of the following options:
   Remove the existing ACL and set UNIX permissions instead
      For chmod operations, removes any existing ACL and instead sets the chmod permissions. Select this option only if you do
      not need permissions to be set from Windows.
   Remove the existing ACL and create an ACL equivalent to the UNIX permissions
      Stores the UNIX permissions in a new Windows ACL. Select this option only if you want to remove Windows permissions
      but do not want files to have synthetic ACLs.
   Remove the existing ACL and create an ACL equivalent to the UNIX permissions, for all users and groups referenced in the old ACL
      Stores the UNIX permissions in a new Windows ACL only for users and groups that are referenced by the old ACL. Select
      this option only if you want to remove Windows permissions but do not want files to have synthetic ACLs.
CAUTION: If you try to run the chmod command on the same permissions that are currently
set on a file with an ACL, you may cause the operation to silently fail. The operation appears
to be successful, but if you were to examine the permissions on the cluster, you would
notice that the chmod command had no effect. As an alternative, you can run the chmod
command away from the current permissions and then perform a second chmod command to
revert to the original permissions. For example, if the file shows 755 UNIX permissions and
you want to confirm this number, you could run chmod 700 file; chmod 755 file.
ACLs Created On Directories By the chmod Command
   On Windows systems, the ACEs for directories can define detailed inheritance rules. On a UNIX system, the mode bits are
   not inherited. Making ACLs that are created on directories by the chmod command inheritable is more secure for tightly
   controlled environments but may deny access to some Windows users who would otherwise expect access. Select one of the
   following options:
   ● Make ACLs inheritable
   ● Do not make ACLs inheritable
Use the chown/chgrp On Files With Existing ACLs
   Changes the user or group that has ownership of a file or folder. Select one of the following options:
   Modify only the owner and/or group
      Enables the chown or chgrp operation to perform as it does in UNIX. Enabling this setting modifies any ACEs in the ACL
      associated with the old and new owner or group.
   Modify the owner and/or group and ACL permissions
      Enables the NFS chown or chgrp operation to function as it does in Windows. When a file owner is changed over Windows,
      no permissions in the ACL are changed.
   Ignore operation if file has an existing ACL
      Prevents an NFS client from changing the owner or group.
NOTE: Over NFS, the chown or chgrp operation changes the permissions and user or group that
has ownership. For example, a file that is owned by user Joe with rwx------ (700) permissions
indicates rwx permissions for the owner, but no permissions for anyone else. If you run the chown
command to change ownership of the file to user Bob, the owner permissions are still rwx but they
now represent the permissions for Bob, rather than for Joe, who lost all of his permissions. This
setting does not affect UNIX chown or chgrp operations that are performed on files with UNIX
permissions, and it does not affect Windows chown or chgrp operations, which do not change any
permissions.
Access checks (chmod, chown)
   In UNIX environments, only the file owner or superuser has the right to run a chmod or chown operation on a file. In Windows
   environments, you can implement this policy setting to give users the right to perform chmod operations that change
   permissions, or the right to perform chown operations that take ownership, but do not give away ownership. Select one of the
   following options:
   Allow only the file owner to change the mode or owner of the file (UNIX model)
      Enables chmod and chown access checks to operate with UNIX-like behavior.
Retain 'rwx' permissions
   Generates an ACE that provides only read, write, and execute permissions.
Treat 'rwx' permissions as Full Control
   Generates an ACE that provides the maximum Windows permissions for a user or a group by adding the change permissions
   right, the take ownership right, and the delete right.
Group Owner Inheritance
   Operating systems tend to work with group ownership and permissions in two different ways: BSD systems inherit the group
   owner from the file's parent folder, whereas Linux and Windows systems inherit the group owner from the file creator's
   primary group. If you enable a setting that causes the group owner to be inherited from the creator's primary group, you can
   override it on a per-folder basis by running the chmod command to set the set-gid bit. This inheritance applies only when the
   file is created. For more information, see the manual page for the chmod command.
   Select one of the following options:
   When an ACL exists, use Linux and Windows semantics, otherwise use BSD semantics
      Specifies that if an ACL exists on a file, the group owner is inherited from the file creator's primary group. If there is no
      ACL, the group owner is inherited from the parent folder.
   BSD semantics - Inherit group owner from the parent folder
      Specifies that the group owner be inherited from the file's parent folder.
   Linux and Windows semantics - Inherit group owner from the file creator's primary group
      Specifies that the group owner be inherited from the file creator's primary group.
chmod (007) On Files With Existing ACLs
   Specifies whether to remove ACLs when running the chmod (007) command. Select one of the following options:
   chmod(007) does not remove existing ACL
      Sets 007 UNIX permissions without removing an existing ACL.
   chmod(007) removes existing ACL and sets 007 UNIX permissions
      Removes ACLs from files over UNIX file sharing (NFS) and locally on the cluster through the chmod (007) command. If you
      enable this setting, be sure to run the chmod command on the file immediately after using chmod (007) to clear an ACL. In
      most cases, you do not want to leave 007 permissions on the file.
Approximate Owner Mode Bits When ACL Exists
   Windows ACLs are more complex than UNIX permissions. When a UNIX client requests UNIX permissions for a file with an
   ACL over NFS, the client receives an approximation of the file's actual permissions. Running the ls -l command from a UNIX
   client returns a more open set of permissions than the user expects. This permissiveness compensates for applications that
   incorrectly inspect the UNIX permissions themselves when determining whether to try a file-system operation. The purpose
   of this policy setting is to ensure that these applications proceed with the operation to allow the file system to correctly
   determine user access through the ACL. Select one of the following options:
   Approximate owner mode bits using all possible group ACEs in ACL
      Causes the owner permissions to appear more permissive than the actual permissions on the file.
   Approximate owner mode bits using only the ACE with the owner ID
      Causes the owner permissions to appear more accurate, in that you see only the permissions for a particular owner and not
      the more permissive set. This may cause access-denied problems for UNIX clients, however.
Synthetic "deny" The Windows ACL user interface cannot display an ACL if any deny ACEs are out of canonical ACL order.
ACEs To correctly represent UNIX permissions, deny ACEs may be required to be out of canonical ACL order.
Select one of the following options:
Do not modify Prevents modifications to synthetic ACL generation and allows “deny” ACEs to be
synthetic ACLs generated when necessary.
and mode bit CAUTION: This option can lead to permissions being reordered,
approximations permanently denying access if a Windows user or an application
performs an ACL get, an ACL modification, and an ACL set to and from
Windows.
Remove “deny” Does not include deny ACEs when generating synthetic ACLs.
ACEs from ACLs.
This setting
can cause ACLs
to be more
Access check (utimes)
   You can control who can change utimes, which are the access and modification times of a file. Select one of the following
   options:
   Allow only owners to change utimes to client-specific times (POSIX compliant)
      Allows only owners to change utimes, which complies with the POSIX standard.
   Allow owners and users with 'write' access to change utimes to client-specific times
      Allows owners as well as users with write access to modify utimes, which is less restrictive.
Read-only DOS attribute
   Select one of the following options:
   Deny permission to modify files with DOS read-only attribute over Windows File Sharing (SMB)
      Duplicates DOS-attribute permissions behavior over only the SMB protocol, so that files use the read-only attribute
      over SMB.
   Deny permission to modify files with DOS read-only attribute through NFS and SMB
      Duplicates DOS-attribute permissions behavior over both NFS and SMB protocols. For example, if permissions are
      read-only on a file over SMB, permissions are read-only over NFS.
Displayed mode bits
   Select one of the following options:
   Use ACL to approximate mode bits
      Displays the approximation of the NFS mode bits that are based on ACL permissions.
   Always display 777 if ACL exists
      Displays 777 file permissions. If the approximated NFS permissions are less permissive than those in the ACL, you may
      want to use this setting so the NFS client does not stop at the access check before performing its operation. Use this
      setting when a third-party application may be blocked if the ACL does not provide the proper access.
Related tasks
Modify ACL policy settings
Option Description
Manual The job must be started manually.
Scheduled The job is regularly scheduled. Select the schedule option from the drop-down list and specify the schedule
details.
8. Click Save Changes, and then click Close.
9. Optional: From the Job Types table, click Start Job.
The Start a Job window opens.
10. Select or clear the Allow Duplicate Jobs checkbox.
11. Optional: From the Impact policy list, select an impact policy for the job to follow.
12. In the Paths field, type or browse to the directory in /ifs whose permissions you want to repair.
13. Optional: Click Add another directory path and in the added Paths field, type or browse for an additional directory
in /ifs whose permissions you want to repair.
You can repeat this step to add directory paths as needed.
14. From the Repair Type list, select one of the following methods for updating permissions:
Option Description
Clone Applies the permissions settings for the directory that is specified by the Template File or Directory setting to
the directory you set in the Paths fields.
Inherit Recursively applies the ACL of the directory that is specified by the Template File or Directory setting to each
file and subdirectory in the specified Paths fields, according to standard inheritance rules.
Convert For each file and directory in the specified Paths fields, converts the owner, group, and access control list (ACL)
to the target on-disk identity based on the Mapping Type setting.
The remaining settings options differ depending on the selected repair type.
15. In the Template File or Directory field, type or browse to the directory in /ifs that you want to copy permissions from.
This setting applies only to the Clone and Inherit repair types.
16. Optional: From the Mapping Type list, select the preferred on-disk identity type to apply. This setting applies only to the
Convert repair type.
Option Description
Global Applies the system's default identity.
SID (Windows) Applies the Windows identity.
UNIX Applies the UNIX identity.
Native If a user or group does not have an authoritative UNIX identifier (UID or GID), applies the Windows
identity (SID)
17. Optional: Click Start Job.
Access rights are consistently enforced across access protocols on all security models. For example, a user is granted or denied
the same rights to a file whether using SMB, NFS, or HDFS. Clusters running OneFS support global policy settings that enable
you to customize the default access control list (ACL) and UNIX permissions settings. OneFS 9.3.0.0 and later supports HDFS
ACLs.
OneFS is configured with standard UNIX permissions on the file tree. Through Windows Explorer or OneFS administrative tools,
you can give any file or directory an ACL. In addition to Windows domain users and groups, ACLs in OneFS can include local,
NIS, and LDAP users and groups. After a file is given an ACL, the mode bits are no longer enforced and exist only as an estimate
of the effective permissions.
NOTE: We recommend that you keep write caching enabled. You should also enable write caching for all file pool policies.
OneFS interprets writes to the cluster as either synchronous or asynchronous, depending on a client's specifications. The
impacts and risks of write caching depend on what protocols clients use to write to the cluster, and whether the writes are
interpreted as synchronous or asynchronous. If you disable write caching, client specifications are ignored and all writes are
performed synchronously.
The following table explains how clients' specifications are interpreted, according to the protocol.
Protocol   Writes interpreted as synchronous             Writes interpreted as asynchronous
SMB        The write-through flag has been applied.      The write-through flag has not been applied.
Protocol Risk
NFS If a node fails, no data will be lost except in the unlikely event
that a client of that node also crashes before it can reconnect
to the cluster. In that situation, asynchronous writes that have
not been committed to disk will be lost.
SMB If a node fails, asynchronous writes that have not been
committed to disk will be lost.
We recommend that you do not disable write caching, regardless of the protocol that you are writing with. If you are writing to
the cluster with asynchronous writes, and you decide that the risks of data loss are too great, we recommend that you configure
your clients to use synchronous writes, rather than disable write caching.
You then configure the default SMB share. See the section Managing SMB shares for more information.
OneFS supports both user and anonymous security modes. If the user security mode is enabled, users who connect to a share
from an SMB client must provide a valid user name with proper credentials.
SMB shares act as checkpoints, and users must have access to a share in order to access objects in a file system on a share.
If a user has access granted to a file system, but not to the share on which it resides, that user will not be able to access the
file system regardless of privileges. For example, assume a share named ABCDocs contains a file named file1.txt in a path
such as: /ifs/data/ABCDocs/file1.txt. If a user attempting to access file1.txt does not have share privileges on
ABCDocs, that user cannot access the file even if originally granted write privileges to the file.
The SMB protocol uses security identifiers (SIDs) for authorization data. All identities are converted to SIDs during retrieval and
are converted back to their on-disk representation before they are stored on the cluster.
When a file or directory is created, OneFS checks the access control list (ACL) of its parent directory. If the ACL contains any
inheritable access control entries (ACEs), a new ACL is generated from those ACEs. Otherwise, OneFS creates an ACL from the
combined file and directory create mask and create mode settings.
OneFS supports the following SMB clients:
Related concepts
Managing SMB settings
Managing SMB shares
SMB Multichannel
SMB Multichannel supports establishing a single SMB session over multiple network connections.
SMB Multichannel is a feature of the SMB 3.0 protocol that provides the following capabilities:
Increased throughput: OneFS can transmit more data to a client through multiple connections over high speed network adapters or over multiple network adapters.
Connection failure tolerance: When an SMB Multichannel session is established over multiple network connections, the session is not lost if one of the connections has a network fault, which enables the client to continue to work.
Automatic discovery: SMB Multichannel automatically discovers supported hardware configurations on the client that have multiple available network paths and then negotiates and establishes a session over multiple network connections. You are not required to install components, roles, role services, or features.
Aggregated NICs: SMB Multichannel establishes multiple network connections to the PowerScale cluster over aggregated NICs, which results in balanced connections across CPU cores, effective consumption of combined bandwidth, and connection fault tolerance.
NOTE: The aggregated NIC configuration inherently provides NIC fault tolerance that is not dependent upon SMB.
NOTE: Per-zone and per-share encryption settings can only be configured through the OneFS command line interface.
NOTE: You can only disable or enable SMB server-side copy for OneFS using the command line interface (CLI).
none: Continuously available writes are not handled differently than other writes to the cluster. If you specify none and a node fails, you may experience data loss without notification. This setting is not recommended.
write-read-coherent: Writes to the share are moved to persistent storage before a success message is returned to the SMB client that sent the data. This is the default setting.
full: Writes to the share are moved to persistent storage before a success message is returned to the SMB client that sent the data, and prevents OneFS from granting SMB clients write-caching and handle-caching leases.
follow symlinks=yes
wide links=yes
In this case, "wide links" in the smb.conf file refers to absolute links. The default setting in this file is no.
When you create a symbolic link, it is designated as a file link or directory link. Once the link is set, the designation cannot be
changed. You can format symbolic link paths as either relative or absolute.
To delete symbolic links, use the del command in Windows, or the rm command in a POSIX environment.
Keep in mind that when you delete a symbolic link, the target file or directory still exists. However, when you delete a target file
or directory, a symbolic link continues to exist and still points to the old target, thus becoming a broken link.
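For illustration only, the following client-side commands create and delete symbolic links; the path and file names are hypothetical. On a POSIX client, create a relative or absolute symbolic link with ln -s, and delete it with rm (the target file is not affected):

ln -s ../shared/report.txt report-link.txt
ln -s /ifs/data/shared/report.txt report-abs.txt
rm report-link.txt

On a Windows client connected to the share, mklink creates a file link and del removes it:

mklink report-link.txt report.txt
del report-link.txt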
Related concepts
SMB security
Related references
File and directory permission settings
Snapshots directory settings
SMB performance settings
SMB security settings
3. In the File filter area, select Enable file filters to enable file filtering.
4. In the File extensions drop-down list, select whether to Deny or Allow writes for a list of file extensions.
a. Click Add file extensions to add extensions to the list.
b. Enter the file extension in the empty field. Continue adding file extensions as needed.
c. Click Add extensions.
5. In the Advanced settings area, choose the system default or a custom configuration for the following settings:
● Continuous availability timeout
● Strict continuous availability lockout
● Create permission
● Directory create mask
● Directory create mode
● File create mask
● File create mode
● Change notify
● Oplocks
● Impersonate guest
● Impersonate user
● NTFS ACL
● Access based enumeration
● Host ACL
6. Click Save changes.
Related concepts
SMB security
Create Permission Sets the default source permissions to apply when a file or
directory is created. The default value is Default ACL.
Directory Create Mask Specifies UNIX mode bits that are removed when a directory
is created, restricting permissions. Mask bits are applied
before mode bits are applied.
Directory Create Mode Specifies UNIX mode bits that are added when a directory
is created, enabling permissions. Mode bits are applied after
mask bits are applied.
File Create Mask Specifies UNIX mode bits that are removed when a file is
created, restricting permissions. Mask bits are applied before
mode bits are applied.
Related concepts
SMB security
Directory Create Mask Specifies UNIX mode bits that are removed when a directory
is created, restricting permissions. Mask bits are applied
before mode bits are applied. The default value is that the
user has Read, Write, and Execute permissions.
Directory Create Mode Specifies UNIX mode bits that are added when a directory
is created, enabling permissions. Mode bits are applied after
mask bits are applied. The default value is None.
File Create Mask Specifies UNIX mode bits that are removed when a file is
created, restricting permissions. Mask bits are applied before
mode bits are applied. The default value is that the user has
Read, Write, and Execute permissions.
Impersonate User Allows all file access to be performed as a specific user. This
must be a fully qualified user name. The default value is No
value.
NTFS ACL Allows ACLs to be stored and edited from SMB clients. The
default value is Yes.
Access Based Enumeration Allows access based enumeration only on the files and folders
that the requesting user can access. The default value is No.
HOST ACL The ACL that defines host access. The default value is No
value.
Related concepts
SMB security
Variable Expansion
For example, if a user is in a domain that is named DOMAIN and has a username of user_1, the path /ifs/home/%D/%U
expands to /ifs/home/DOMAIN/user_1.
7. Select Create SMB share directory if it does not exist to have OneFS create the share directory for the
path you specified if it did not previously exist.
8. Apply the initial ACL settings for the directory. You can modify these settings later.
● To apply a default ACL to the shared directory, select Apply Windows default ACLs.
NOTE: If the Create SMB share directory if it does not exist setting is selected, OneFS creates an ACL with the
equivalent of UNIX 700 mode bit permissions for any directory that is created automatically.
● To maintain the existing permissions on the shared directory, select Do not change existing permissions.
9. Optional: Configure home directory provisioning settings.
● To expand path variables such as %U in the share directory path, select Allow Variable Expansion.
● To automatically create home directories when users access the share for the first time, select Auto-Create
Directories. This option is available only if the Allow Variable Expansion option is enabled.
10. Select Enable continuous availability on the share to allow clients to create persistent handles that can be reclaimed after an outage such as a network-related disconnection or a server failure. Servers must be using Windows 8 or Windows Server 2012 R2 (or higher).
11. Click Add User or Group to edit the user and group settings.
The default permissions configuration is read-only access for the well-known Everyone account. Modify settings to allow
users to write to the share.
12. Select Enable file filters in the File Filter Extensions section to enable support for file filtering. Add the file types to be
applied to the file filtering method.
13. Select Enable or Disable in the Encryption section to allow or disallow SMBv3 encrypted clients to connect to the share.
14. Optional: Click Show Advanced Settings to apply advanced SMB share settings if needed.
15. Click Create Share.
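As a CLI sketch of the variable expansion and auto-create options described in steps 8 and 9 above, a home-directory share could also be created from the command line. The share name, path, and option names shown here (--allow-variable-expansion, --auto-create-directory) are assumptions and may differ in your OneFS release:

isi smb shares create HOMEDIR --path=/ifs/home/%D/%U --allow-variable-expansion=yes --auto-create-directory=yes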
Related concepts
SMB security
Mixed protocol environments
NFS security
OneFS provides an NFS server so you can share files on your cluster with NFS clients that adhere to the RFC1813 (NFSv3) and
RFC3530 (NFSv4) specifications.
NFS is disabled by default. To enable NFS, use the following command:
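For example, assuming the standard OneFS service-management syntax:

isi services nfs enable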
In OneFS, the NFS server is fully optimized as a multithreaded service running in user space instead of the kernel. This
architecture load balances the NFS service across all nodes of the cluster, providing the stability and scalability necessary to
manage up to thousands of connections across multiple NFS clients.
NFS mounts run and refresh quickly, and the server constantly monitors fluctuating demands on NFS services and makes
adjustments across all nodes to ensure continuous, reliable performance. Using an integrated process scheduler, OneFS helps
ensure fair allocation of node resources so that no client can seize more than its fair share of NFS services.
The NFS server also supports access zones that are defined in OneFS, so that clients can access only the exports appropriate
to their zone. For example, if NFS exports are specified for Zone 2, only clients that are assigned to Zone 2 can access these
exports.
To simplify client connections, especially for exports with large path names, the NFS server also supports aliases, which are
shortcuts to mount points that clients can specify directly.
For secure NFS file sharing, OneFS supports NIS and LDAP authentication providers.
Related concepts
Managing the NFS service
Managing NFS exports
NFS aliases
You can create and manage aliases as shortcuts for directory path names in OneFS. If those path names are defined as NFS
exports, NFS clients can specify the aliases as NFS mount points.
NFS aliases are designed to give functional parity with SMB share names within the context of NFS. Each alias maps a unique
name to a path on the file system. NFS clients can then use the alias name in place of the path when mounting.
Aliases must be formed as top-level UNIX path names, having a single forward slash followed by a name. For example, you could
create an alias named /q4 that maps to /ifs/data/finance/accounting/winter2015 (a path in OneFS). An NFS
client could mount that directory using either of the following commands:
mount cluster_ip:/q4
mount cluster_ip:/ifs/data/finance/accounting/winter2015
Aliases and exports are completely independent. You can create an alias without associating it with an NFS export. Similarly, an
NFS export does not require an alias.
Each alias must point to a valid path on the file system. This path must be absolute and must point to a location beneath the zone
root (/ifs in the System zone). If the alias points to a path that does not exist on the file system, any client trying to mount
the alias is denied in the same way as attempting to mount an invalid full path name.
NFS aliases are zone-aware. By default, an alias applies to the client's current access zone. To change this, you can specify an
alternative access zone as part of creating or modifying an alias.
Each alias can only be used by clients on that zone, and can only apply to paths below the zone root. Alias names are unique per
zone, but the same name can be used in different zones—for example, /home.
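For example, an alias for a zone-specific home directory could be created as shown below; the alias name, path, and zone name are hypothetical, and the exact option form may vary by release:

isi nfs aliases create /home /ifs/data/offices/hq/home --zone=hq-zone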
When you create an alias in the web administration interface, the alias list displays the status of the alias. Similarly, using the
--check option of the isi nfs aliases command, you can check the status of an NFS alias (status can be: good, illegal
path, name conflict, not exported, or path not found).
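For example, the following command lists aliases in the current access zone along with their status; the --check option name is taken from the description above:

isi nfs aliases list --check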
Related concepts
NFS security
Related references
NFS global settings
NFS export performance settings
NFS export client compatibility settings
NFS export behavior settings
Setting Description
NFS Export Service Enables or disables the NFS service. This setting is enabled by
default.
NFSv3 Enables or disables support for NFSv3. This setting is enabled
by default.
NFSv4 Enables or disables support for NFSv4. This setting is disabled
by default.
Cached Export Configuration Enables you to reload cached NFS exports to help ensure that
any domain or network changes take effect immediately.
Related concepts
NFS security
Related tasks
Configure NFS file sharing
If you add the same client to more than one list and the client is entered in the same format for each entry, the client is
normalized to a single list in the following order of priority:
● Root clients
● Always read-write clients
● Always read-only clients
● Clients
Setting Description
Clients: Specifies one or more clients to be allowed access to the export. Access level is controlled through export permissions.
Always read-write clients: Specifies one or more clients to be allowed read/write access to the export regardless of the export's access-restriction setting. This is equivalent to adding a client to the Clients list with the Restrict access to read-only setting cleared.
Always read-only clients: Specifies one or more clients to be allowed read-only access to the export regardless of the export's access-restriction setting. This is equivalent to adding a client to the Clients list with the Restrict access to read-only setting selected.
Root clients: Specifies one or more clients to be mapped as root for the export. This setting enables the specified clients to mount the export, present the root identity, and be mapped to root. Adding a client to this list does not prevent other clients from mounting if the Clients, Always read-only clients, and Always read-write clients lists are unset.
6. Select the export permissions setting to use:
● Enable read-write access
● Restrict access to read-only.
7. Specify user and group mappings.
Select Use custom to limit access by mapping root users or all users to a specific user and group ID. For root squash, map
root users to the username nobody.
8. Locate the Security types setting. Set the security type to use. UNIX is the default setting.
Click Use custom to select one or more of the following security types:
● UNIX (system)
● Kerberos5
● Kerberos5 Integrity
● Kerberos5 Privacy
NOTE: The default security flavor (UNIX) relies upon having a trusted network. If you do not completely trust
everything on your network, then the best practice is to choose a Kerberos option. If the system does not support
Kerberos, it will not be fully protected because NFS without Kerberos trusts everything on the network and sends all
packets in cleartext. If you cannot use Kerberos, you should find another way to protect the Internet connection. At a
minimum, do the following:
● Limit root access to the cluster to trusted host IP addresses.
● Make sure that all new devices that you add to the network are trusted. Methods for ensuring trust include, but are
not limited to, the following:
○ Use an IPsec tunnel. This option is very secure because it authenticates the devices using secure keys.
○ Configure all of the switch ports to go inactive if they are physically disconnected. In addition, make sure that the
switch ports are MAC limited.
Related concepts
NFS security
ID Message
----------
----------
Total: 0
In the following example output, export 1 contains a directory path that does not currently exist:
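An illustrative output in the same format is shown below; the export ID, path, and message wording are hypothetical:

ID   Message
-----------------------------------------------
1    '/ifs/data/missing-dir' does not exist
-----------------------------------------------
Total: 1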
Related concepts
NFS security
File name limits: The default file name limit is 255 bytes. You can customize the file name limit.
Setting Description
Block Size The block size used to calculate block counts for NFSv3
FSSTAT and NFSv4 GETATTR requests. The default value is
8192 bytes.
Commit Asynchronous If set to yes, allows NFSv3 and NFSv4 COMMIT operations to
be asynchronous. The default value is No.
Directory Transfer Size The preferred directory read transfer size reported to NFSv3
and NFSv4 clients. The default value is 131072 bytes.
Read Transfer Max Size The maximum read transfer size reported to NFSv3 and
NFSv4 clients. The default value is 1048576 bytes.
Read Transfer Multiple The recommended read transfer size multiple reported to
NFSv3 and NFSv4 clients. The default value is 512 bytes.
Read Transfer Preferred Size The preferred read transfer size reported to NFSv3 and
NFSv4 clients. The default value is 131072 bytes.
Write Datasync Action The action to perform for DATASYNC writes. The default
value is DATASYNC.
Write Datasync Reply The reply to send for DATASYNC writes. The default value is
DATASYNC.
Write Filesync Action The action to perform for FILESYNC writes. The default value
is FILESYNC.
Write Filesync Reply The reply to send for FILESYNC writes. The default value is
FILESYNC.
Write Transfer Max Size The maximum write transfer size reported to NFSv3 and
NFSv4 clients. The default value is 1048576 bytes.
Write Transfer Multiple The recommended write transfer size reported to NFSv3 and
NFSv4 clients. The default value is 512 bytes.
Write Transfer Preferred The preferred write transfer size reported to NFSv3 and
NFSv4 clients. The default value is 524288 bytes.
Write Unstable Action The action to perform for UNSTABLE writes. The default
value is UNSTABLE.
Write Unstable Reply The reply to send for UNSTABLE writes. The default value is
UNSTABLE.
Related concepts
NFS security
Related tasks
Configure NFS file sharing
Readdirplus Enable Enables the use of the NFSv3 readdirplus service whereby a client
can send a request and receive extended information about
the directory and files in the export. The default is Yes.
Return 32 bit File IDs Specifies return 32-bit file IDs to the client. The default is No.
Related concepts
NFS security
Related tasks
Configure NFS file sharing
Setting Description
Can Set Time When this setting is enabled, OneFS allows the NFS client to
set various time attributes on the NFS server. The default
value is Yes.
Encoding Overrides the general encoding settings the cluster has for
the export. The default value is DEFAULT.
Map Lookup UID Looks up incoming user identifiers (UIDs) in the local
authentication database. The default value is No.
Symlinks Informs the NFS client that the file system supports symbolic
link file types. The default value is Yes.
Time Delta Sets the server clock granularity. The default value is 1e-9
seconds (0.000000001 second).
Related concepts
NFS security
Related tasks
Configure NFS file sharing
FTP
OneFS includes a secure FTP service called Very Secure FTP Daemon (VSFTPD) that you can configure for standard
FTP and FTPS file transfers.
FTP is disabled by default because users should use secure FTP (FTPS) or HTTPS for file transfers.
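As a sketch, assuming the standard service-management syntax and the vsftpd service name, the FTP service can be enabled from the CLI:

isi services vsftpd enable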
Related tasks
Enable and configure FTP file sharing
Option Description
Enable anonymous access: Allow users with "anonymous" or "ftp" as the user name to access files and directories without requiring authentication. This setting is disabled by default.
Enable local access: Allow local users to access files and directories with their local user name and password, allowing them to upload files directly through the file system. This setting is enabled by default.
Enable server-to-server transfers: Allow files to be transferred between two remote FTP servers. This setting is disabled by default.
4. Click Save Changes.
Related concepts
FTP
NOTE: Set the file and directory permissions to allow HTTP or HTTPS to access them.
OneFS supports both HTTP and its secure variant, HTTPS. Each node in the cluster runs an instance of the Apache HTTP
Server to provide HTTP access. You can configure the HTTP service to run in different modes.
Both HTTP and HTTPS are supported for file transfer, but only HTTPS is supported for API calls. The HTTPS-only requirement
includes the web administration interface. OneFS supports a form of the web-based DAV (WebDAV) protocol that enables users
to modify and manage files on remote web servers. OneFS performs distributed authoring, but does not support versioning and
does not perform security checks. You can enable DAV in the web administration interface.
Related tasks
Enable and configure HTTP
Option Description
Enable HTTP: Allows HTTP access for cluster administration and browsing content on the cluster.
Disable HTTP and redirect to the OneFS Web Administration interface: Allows only administrative access to the web administration interface. This is the default setting.
Disable HTTP: Closes the HTTP port that is used for file access. Users can continue to access the web administration interface by specifying the port number in the URL. The default port is 8080.
3. In the Protocol Settings area, in the Document root directory field, type a path name or click Browse to browse to an
existing directory in /ifs.
NOTE: The HTTP server runs as the daemon user and group. To correctly enforce access controls, you must grant
the daemon user or group read access to all files under the document root, and allow the HTTP server to traverse the
document root.
4. In the Authentication Settings area, from the HTTP Authentication list, select an authentication setting:
Option Description
Off: Disables HTTP authentication. This is the default setting.
Basic Authentication Only: Enables HTTP basic authentication. User credentials are sent in cleartext.
Integrated Authentication Only: Enables HTTP authentication using Kerberos.
Integrated and Basic Authentication: Enables both basic and integrated authentication.
5. To allow multiple users to manage and modify files collaboratively across remote web servers, select Enable WebDAV.
6. Select Enable access logging.
7. Click Save Changes.
Related concepts
HTTP and HTTPS security
Related concepts
File filtering in an access zone
Related concepts
File filtering in an access zone
Auditing overview
You can enable auditing for configuration changes, protocol activity, and high-level system platform events on the cluster.
Auditing can detect many potential sources of data loss, including fraudulent activities, inappropriate entitlements, and
unauthorized access attempts. Customers in financial services, health care, life sciences, media and entertainment, and
governmental agencies must meet stringent regulatory requirements that protect against these sources of data loss.
All audit data is stored and protected in the cluster file system. You can optionally configure forwarding of auditing logs to
remote syslog servers. You can optionally configure encrypted forwarding with TLS. Each audit topic type can be configured
separately regarding remote servers, whether to use TLS forwarding, and whether to use one- or two-way TLS verification.
To configure auditing, you must either be a root user or you must be assigned to an administrative role that includes auditing
privileges (ISI_PRIV_AUDIT).
OneFS internally manages the audit log files. Some configurable options related to log file management are retention period and
whether to implement automatic purging.
The audit topic types are:
● Configuration change auditing
● Protocol activity auditing
● System auditing
Protocol auditing
Protocol auditing tracks and stores activity through SMB, NFS, S3, and HDFS protocol connections. You can enable and
configure protocol auditing for one or more access zones in a cluster. If you enable protocol auditing for an access zone,
file-access events through the SMB, NFS, S3, and HDFS protocols are recorded in the protocol audit topic directories. You can
specify which events to log in each access zone. For example, you can audit the default set of protocol events in the System
access zone but audit only successful attempts to delete files in a different access zone.
The audit events are logged on the individual nodes where the SMB, NFS, S3, or HDFS client initiated the activity. The events
are stored in a binary file under /ifs/.ifsvar/audit/logs/node<nnn>/<protocol>. The logs automatically roll over to
a new file after the size reaches 1 GB. The logs are compressed to reduce space.
The protocol audit logs are consumable by auditing applications that support the Common Event Enabler (CEE).
You can enable protocol auditing using the Web UI or CLI. To configure syslog forwarding, use the CLI.
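A minimal CLI sketch of enabling protocol auditing is shown below; the option names --protocol-auditing-enabled and --audited-zones are assumptions and may differ in your release:

isi audit settings global modify --protocol-auditing-enabled=yes --audited-zones=zone1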
System auditing
System auditing tracks system platform events and events that are related to account management. Two services manage
system auditing. Both services log events per node. Both services manage their own log rotations and rollovers. The two system
auditing services are syslogd and OpenBSM.
● The syslogd service collects logs that are generated by other applications and stores them in /var/log/audit/
<audit files>. The syslogd service is always enabled and cannot be disabled. It collects audit logs from the following
application logs.
pw.log Logs account changes that were made with the pw command.
Syslog
The isi_audit_syslog service is the OneFS syslog service that handles forwarding of audit logs to remote servers.
In OneFS 9.5 and later, the isi_audit_syslog service forwards audit logs directly to remote syslog servers when
syslog forwarding is enabled. The transmission from isi_audit_syslog to remote servers is reliable and secure. The
isi_audit_syslog service handles forwarding for all audit logs, including configuration change auditing, protocol activity
auditing, and all system auditing.
These settings are configured separately for each audit topic. For example, you can enable forwarding of configuration change
auditing while not forwarding the other audit topics. You can configure separate remote servers for each of the audit topics,
and you can configure TLS separately for each audit topic. To view the current configuration for all the audit settings, use isi
audit settings global view.
The OneFS audit system persists all audit data to disk. The audit syslog forwarder ensures that all audit events are processed
for forwarding when remote forwarding is enabled. Only TLS ensures delivery to the remote servers.
Both the TLS and non-TLS methods distribute audit events in the same way. The audit syslog forwarder sends all audit events to
all configured remote syslog servers. Use the following table to determine whether to enable TLS.
Table 12. Comparison of remote forwarding with TLS enabled and disabled
Delivery method
● TLS enabled: TLS
● TLS disabled: UDP
Reliability
● TLS enabled: Every event is guaranteed for successful delivery to at least one remote syslog server. If configuration errors or degraded network conditions exist, audit events may be dropped for a given remote server. If all syslog servers are down, the entire forwarding process is blocked until one server recovers.
● TLS disabled: This method is unreliable. The audit syslog forwarder does not implement UDP retransmission.
OpenBSM service
The OpenBSM service is disabled by default. Administrators can enable and disable this service using the CLI.
OneFS uses the OpenBSM framework and service. The log files use the OpenBSM event log format. Log rotation is self-
managed. The daemon writes run information in /var/log/messages.
OpenBSM log files are in /var/audit/. You can view the logs with the praudit utility:
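For example, to print one of the audit trail files in human-readable form (the file name shown is a placeholder):

praudit /var/audit/<audit_trail_file>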
NOTE: For the NFS, S3, and HDFS protocols, the rename and delete events might not be enclosed with the create and
close events.
These internally stored events are translated to events that are forwarded through the CEE to the auditing application. The CEE
export facilities on OneFS perform this mapping. The CEE can be used to connect to any third party application that supports
the CEE.
NOTE: The CEE does not support forwarding HDFS or S3 protocol events to a third-party application.
Different SMB, NFS, S3, and HDFS clients issue different requests, and one particular version of a platform such as Windows
or Mac OS X using SMB might differ from another. Similarly, different versions of an application such as Microsoft Word or
Windows Explorer might make different protocol requests. For example, a client with a Windows Explorer window open might
generate many events if an automatic or manual refresh of that window occurs. Applications issue requests with the logged-in
user's credentials, but you should not assume that all requests are purposeful user actions.
Related tasks
Enable protocol access auditing
Configure protocol event filters
Event name: create
Example protocol activity:
● Create a file or directory
● Open a file, directory, or share
● Mount a share
● Delete a file
NOTE: While the SMB protocol allows you to set a file for deletion with the create operation, you must enable the delete event in order for the auditing tool to log the event.
Audited by default: Yes
Can be exported through CEE: Yes
NOTE: The audit log purging features do not work on the system audit logs.
Managing audit settings
You can enable and disable audit services and manage audit files. You can integrate auditing with the Common Event Enabler.
NOTE: Configuration events are not forwarded to the Common Event Enabler (CEE).
Related tasks
Forward configuration changes to syslog
Related concepts
Syslog
Syslog forwarding and TLS
NOTE: We recommend that you install and configure third-party auditing applications before you enable the OneFS auditing feature. Otherwise, the large backlog of events collected by this feature may cause results to not be updated for a considerable amount of time.
1. Click Cluster Management > Auditing.
2. In the Settings area, select the Enable Protocol Access Auditing checkbox.
3. In the Audited Zones area, click Add Zones.
4. In the Select Access Zones dialog box, select the check box for one or more access zones, and then click Add Zones.
5. Optional: In the Event Forwarding area, specify one or more CEE servers to forward logged events to.
a. In the CEE Server URIs field, type the URI of each CEE server in the CEE server pool.
The OneFS CEE export service uses round-robin load balancing when exporting events to multiple CEE servers. Valid
URIs start with http:// and include the port number and path to the CEE server if necessary—for example, http://
example.com:12228/cee.
b. In the Storage Cluster Name field, specify the name of the storage cluster to use when forwarding protocol events.
This name value is typically the SmartConnect zone name, but in cases where SmartConnect is not implemented, the
value must match the hostname of the cluster as the third-party application recognizes it. If the field is left blank,
events from each node are filled with the node name (clustername + lnn). This setting is required only if needed by your
third-party audit application.
NOTE: Although this step is optional, be aware that a backlog of events will accumulate regardless of whether CEE
servers have been configured. When configured, CEE forwarding begins with the oldest events in the backlog and
moves toward newest events in a first-in-first-out sequence.
Related concepts
Protocol audit events
Related tasks
Forward protocol access events to syslog
The following command disables forwarding of audited protocol access events from the zone3 access zone:
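A possible form of this command is shown below; the option names --zone and --syslog-forwarding-enabled are assumptions and may differ in your release:

isi audit settings modify --zone=zone3 --syslog-forwarding-enabled=false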
Related concepts
Syslog
Syslog forwarding and TLS
The following command creates a filter that audits the success of create, close, and delete events in the zone5 access zone:
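A possible form of this command is shown below; the option names are assumptions and may differ in your release:

isi audit settings modify --zone=zone5 --audit-success=create,close,delete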
Related references
Supported event types for protocol auditing
3. Optionally enable syslog forwarding for system events (both the OpenBSM and syslogd collected events).
To stop forwarding of events logged by OpenBSM, use the following command:
The system assigns an ID to the certificate. It stores the certificate and the key file in the OneFS certificate store.
7. View the certificate information.
The system assigned ID, status, and expiration date are displayed.
8. For security reasons, delete the key file from the OneFS file system. You may also delete the certificate file.
2. View all audit settings.
The screen shows the current settings for configuration auditing, protocol auditing, and system auditing. It includes the
settings for forwarding audit logs to remote syslog servers.
Automatic deletion
Audit logs can be deleted automatically, based on a configurable retention period, from the command-line interface.
The automatic deletion runs periodically (once every hour). It iterates over the audit directories and compares the date of each
file to the current date to determine whether the file should be deleted. If a file is older than the retention period, it is deleted.
The default retention period is 180 days. The automatic deletion function is disabled by default. If you enable automatic purging,
deletion is triggered immediately. When automatic purging is enabled and you modify the retention period, deletion also occurs
immediately. You can check the current audit settings using the isi audit settings global view command.
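A minimal sketch of enabling automatic purging from the CLI follows; the command may prompt for confirmation (as in the step below), and the option names --auto-purging-enabled and --retention-period are assumptions:

isi audit settings global modify --auto-purging-enabled=true --retention-period=180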
2. Enter "yes."
The automatic purging feature is enabled.
3. You can check if the automatic purging feature is enabled using the isi audit settings global view command.
Config Syslog TLS Enabled: No
Config Syslog Certificate ID:
Protocol Syslog Servers: -
Protocol Syslog TLS Enabled: No
Protocol Syslog Certificate ID:
System Syslog Enabled: No
System Syslog Servers: -
System Syslog TLS Enabled: No
System Syslog Certificate ID:
Auto Purging Enabled: No
Retention Period: 180
System Auditing Enabled: No
Config Syslog Enabled: No
Config Syslog Servers: -
Config Syslog TLS Enabled: No
Config Syslog Certificate ID:
Protocol Syslog Servers: -
Protocol Syslog TLS Enabled: No
Protocol Syslog Certificate ID:
System Syslog Enabled: No
System Syslog Servers: -
System Syslog TLS Enabled: No
System Syslog Certificate ID:
Auto Purging Enabled: No
Retention Period: 180
System Auditing Enabled: No
Manual deletion
You can delete audit logs manually from the command-line interface.
Using this method, you can force deletion of all audit logs older than a specified date. You cannot delete audit logs for an
arbitrary time span. The deletion removes the audit logs from all nodes in the cluster. The deletion runs in the background, and
only one manual deletion can run at a time. If a manual deletion task is running, any other deletion request is rejected.
2. Enter "yes."
The deletion request is triggered, and the following message appears:
Purging Status:
Using Before Value: 2019-11-01
Currently Manual Purging Status: COMPLETED
NOTE: If there are some audit logs that cannot be deleted, the output displays the reason.
Integrating with the Common Event Enabler
OneFS integration with the Common Event Enabler (CEE) enables third-party auditing applications to collect and analyze
protocol auditing logs.
OneFS supports the Common Event Publishing Agent (CEPA) component of CEE for Windows. For integration with OneFS, you
must install and configure CEE for Windows on a supported Windows client.
NOTE: We recommend that you install and configure third-party auditing applications before you enable the OneFS auditing
feature. Otherwise, the large backlog performed by this feature may cause results to not be up-to-date for a considerable
time.
Related references
Supported audit tools
NOTE: You should install a minimum of two servers. We recommend that you install CEE 6.6.0 or later.
Related concepts
Integrating with the Common Event Enabler
Setting: CEE HTTP listen port
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\Configuration]
Key: HttpPort
Value: 12228
Setting: Enable audit remote endpoints
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\CEPP\Audit\Configuration]
Key: Enabled
Value: 1
Setting: Audit remote endpoints
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\CEPP\Audit\Configuration]
Key: EndPoint
Value: <EndPoint>
NOTE:
● The HttpPort value must match the port in the CEE URIs that you specify during OneFS protocol audit configuration.
● The EndPoint value must be in the format <EndPoint_Name>@<IP_Address>. You can specify multiple endpoints by
separating each value with a semicolon (;).
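For illustration, the same three values could be captured in a .reg file and imported on the CEE server; the endpoint name and IP address shown here are hypothetical (12228 decimal is 0x2FC4):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\Configuration]
"HttpPort"=dword:00002fc4

[HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\CEPP\Audit\Configuration]
"Enabled"=dword:00000001
"EndPoint"="AuditApp@10.1.2.3"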
Related concepts
Integrating with the Common Event Enabler
View the time stamps of delivery of events to the CEE server and
syslog
You can view the time stamps of delivery of events to the CEE server and syslog on the node on which you are running the isi
audit progress view command.
This setting is available only through the command-line interface.
● Run the isi audit progress view command to view the time stamps of delivery of events to the CEE server and
syslog on the node on which you are running the command.
A sample output of the isi audit progress view is shown:
You can run the isi audit progress view command with the --lnn option to view the time stamps of delivery of
the audit events on a node specified through its logical node number.
The following command displays the progress of delivery of the audit events on a node with logical node number 2:
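Based on the option described above, the command takes the following form:

isi audit progress view --lnn=2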
View the rate of delivery of protocol audit events to the CEE server
You can view the rate of delivery of protocol audit events to the CEE server.
● Run the isi statistics query command to view the current rate of delivery of the protocol audit events to the CEE
server on a node.
The following command displays the current rate of delivery of the protocol audit events to the CEE server:
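Assuming the current-statistics key syntax, a possible form of the command is:

isi statistics query current --keys=node.audit.cee.export.rate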
Node node.audit.cee.export.rate
---------------------------------
1 3904.600000
---------------------------------
Total: 1
13
Snapshots
This section contains the following topics:
Topics:
• Snapshots overview
• Data protection with SnapshotIQ
• Snapshot disk-space usage
• Snapshot schedules
• Snapshot aliases
• File and directory restoration
• Best practices for creating snapshots
• Best practices for creating snapshot schedules
• File clones
• Snapshot locks
• Snapshot reserve
• Writable snapshots
• SnapshotIQ license functionality
• Creating snapshots with SnapshotIQ
• Managing snapshots
• Restoring snapshot data
• Managing snapshot schedules
• Managing snapshot aliases
• Managing with snapshot locks
• Configure SnapshotIQ settings
• Set the snapshot reserve
• Managing changelists
Snapshots overview
A OneFS snapshot is a logical pointer to data that is stored on a cluster at a specific point in time.
A snapshot references a directory on a cluster, including all data stored in the directory and its subdirectories. If the data
referenced by a snapshot is modified, the snapshot stores a physical copy of the data that was modified. Snapshots are created
according to user specifications or by OneFS, which generates them automatically to facilitate system operations. You can also
create writable copies of snapshots, useful for testing data recovery scenarios.
To create and manage snapshots, you must activate a SnapshotIQ license on the cluster. Some applications must generate
snapshots to function but do not require you to activate a SnapshotIQ license. By default, these snapshots are automatically
deleted when OneFS no longer needs them. However, if you activate a SnapshotIQ license, you can retain these snapshots. You
can view snapshots that other modules generate without activating a SnapshotIQ license.
You can identify and locate snapshots by name or ID. A snapshot name is specified by the user and is assigned to the virtual
directory that contains the snapshot. A snapshot ID is a numerical identifier that OneFS automatically assigns to a snapshot.
Snapshots are less costly than backing up your data on a separate physical storage device in terms of both time and storage
consumption. The time required to move data to another physical device depends on the amount of data being moved.
Snapshots are created almost instantaneously regardless of the amount of data that the snapshot references. Because
snapshots are available locally, users can often restore their data without requiring assistance from a system administrator.
Snapshots require less space than a remote backup because unaltered data is referenced rather than re-created.
Snapshots do not protect against hardware or file system issues. Snapshots reference data that is stored on a cluster, so if the
data on the cluster becomes unavailable, the snapshots are also unavailable. It is recommended that you back up your data to
separate physical devices in addition to creating snapshots.
Snapshot schedules
You can automatically generate snapshots according to a snapshot schedule.
With snapshot schedules, you can periodically generate snapshots of a directory without having to manually create a snapshot
every time. You can also assign an expiration period that determines when SnapshotIQ deletes each automatically generated
snapshot.
Related concepts
Managing snapshot schedules
Best practices for creating snapshot schedules
Related tasks
Create a snapshot schedule
Snapshot aliases
A snapshot alias is a logical pointer to a snapshot. If you specify an alias for a snapshot schedule, the alias will always point to
the most recent snapshot generated by that schedule. Assigning a snapshot alias allows you to quickly identify and access the
most recent snapshot generated according to a snapshot schedule.
If you allow clients to access snapshots through an alias, you can reassign the alias to redirect clients to other snapshots. In
addition to assigning snapshot aliases to snapshots, you can also assign snapshot aliases to the live version of the file system.
This can be useful if clients are accessing snapshots through a snapshot alias, and you want to redirect the clients to the live
version of the file system.
File and directory restoration
You can restore the files and directories that are referenced by a snapshot alias. You can copy the data from the snapshot,
clone a file from the snapshot, or revert the entire snapshot.
Copying a file from a snapshot duplicates the file, which roughly doubles the amount of storage space consumed. Even if you
delete the original file from the nonsnapshot directory, the copy of the file remains in the snapshot.
Cloning a file from a snapshot also duplicates the file. However, a clone does not consume additional space on the cluster unless
the clone or cloned file is modified.
Reverting a snapshot replaces the contents of a directory with the data that is stored in the snapshot. Before a snapshot is
reverted, SnapshotIQ creates a snapshot of the directory that is being replaced, which enables you to undo the snapshot revert
later. Reverting a snapshot can be useful if you want to undo many changes that you made to files and directories. If new files
or directories have been created in a directory since a snapshot of the directory was created, those files and directories are
deleted when the snapshot is reverted.
NOTE: If you move a directory, you cannot revert snapshots of the directory that were taken before the directory was
moved. Deleting and then re-creating a directory has the same effect as a move. You cannot revert snapshots of a directory
that were taken before the directory was deleted and then re-created.
To implement ordered deletions, assign the same duration period for all snapshots of a directory. The snapshots can be created
by one or multiple snapshot schedules. Always ensure that no more than 1000 snapshots of a directory are created.
To implement unordered snapshot deletions, create several snapshot schedules for a single directory, and then assign different
snapshot duration periods for each schedule. Ensure that all snapshots are created at the same time when possible.
NOTE: Snapshot schedules with a frequency of "Every Minute" are not recommended and should be avoided.
NOTE: It is recommended that you do not schedule multiple Snapshot jobs at the same time, as it might cause performance
issues on the cluster. For more information, see KB article: 000158788
The following table describes snapshot schedules that follow snapshot best practices:
Related concepts
Snapshot schedules
File clones
SnapshotIQ enables you to create file clones that share blocks with existing files in order to save space on the cluster. A file
clone usually consumes less space and takes less time to create than a file copy. Although you can clone files from snapshots,
clones are primarily used internally by OneFS.
The blocks that are shared between a clone and cloned file are contained in a hidden file called a shadow store. Immediately
after a clone is created, all data originally contained in the cloned file is transferred to a shadow store. Because both files
reference all blocks from the shadow store, the two files consume no more space than the original file; the clone does not take
up any additional space on the cluster. However, if the cloned file or clone is modified, the file and clone will share only blocks
that are common to both of them, and the modified, unshared blocks will occupy additional space on the cluster.
Over time, the shared blocks contained in the shadow store might become useless if neither the file nor clone references the
blocks. The cluster routinely deletes blocks that are no longer needed. You can force the cluster to delete unused blocks at any
time by running the ShadowStoreDelete job.
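For example, assuming the standard job-engine syntax:

isi job jobs start ShadowStoreDelete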
Clones cannot contain alternate data streams (ADS). If you clone a file that contains alternate data streams, the clone will not
contain the alternate data streams.
Related tasks
Clone a file from a snapshot
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files that reference shadow stores behave
differently than other files.
● Reading shadow-store references might be slower than reading data directly. Reading noncached shadow-store references
is slower than reading noncached data. Reading cached shadow-store references takes no more time than reading cached
data.
● When files that reference shadow stores are replicated to another PowerScale cluster or backed up to a Network Data
Management Protocol (NDMP) backup device, the shadow stores are not transferred to the target PowerScale cluster or
backup device. The files are transferred as if they contained the data that they reference from shadow stores. On the target
PowerScale cluster or backup device, the files consume the same amount of space as if they had not referenced shadow
stores.
● When OneFS creates a shadow store, OneFS assigns the shadow store to a storage pool of a file that references the
shadow store. If you delete the storage pool that a shadow store resides on, the shadow store is moved to a pool that
contains another file that references the shadow store.
● OneFS does not delete a shadow-store block immediately after the last reference to the block is deleted. Instead, OneFS
waits until the ShadowStoreDelete job is run to delete the unreferenced block. If many unreferenced blocks exist on the
cluster, OneFS might report a negative deduplication savings until the ShadowStoreDelete job is run.
● Shadow stores are protected at least as much as the most protected file that references them. For example, if one file that
references a shadow store resides in a storage pool with +2 protection and another file that references the shadow store
resides in a storage pool with +3 protection, the shadow store is protected at +3.
● Quotas account for files that reference shadow stores as if the files contained the data that is referenced from shadow
stores. From the perspective of a quota, shadow-store references do not exist. However, if a quota includes data protection
overhead, the quota does not account for the data protection overhead of shadow stores.
Snapshot locks
A snapshot lock prevents a snapshot from being deleted. If a snapshot has one or more locks that are applied to it, the snapshot
cannot be deleted: it is a locked snapshot. If the duration period of a locked snapshot expires, OneFS does not delete the
snapshot until all locks on the snapshot have been deleted.
OneFS applies snapshot locks to ensure that snapshots that OneFS applications generate are not deleted prematurely. You
can apply snapshot locks to snapshots that you create either manually or with a snapshot or SyncIQ schedule. However, avoid
creating or removing locks on system-created snapshots.
The maximum number of locks that can be applied to a single snapshot is 16. In general, you should not apply multiple locks to
the same snapshot. If there is a need for multiple locks on the same snapshot, care should be taken to ensure that the maximum
is not exceeded to avoid snapshot creation failure.
Related concepts
Managing with snapshot locks
Related tasks
Create a snapshot lock
Snapshot reserve
The snapshot reserve enables you to set aside a minimum percentage of the cluster storage capacity specifically for snapshots.
If specified, all other OneFS operations are unable to access the percentage of cluster capacity that is reserved for snapshots.
NOTE: The snapshot reserve does not limit the amount of space that snapshots can consume on the cluster. Snapshots
can consume a greater percentage of storage capacity than the percentage specified by the snapshot reserve. It is
recommended that you do not specify a snapshot reserve.
Related tasks
Set the snapshot reserve
Writable snapshots
Writable snapshots enable you to create space-efficient, modifiable copies of a source snapshot. The source snapshot remains
read-only. You can use writable snapshots for tasks such as testing data recovery scenarios and quality assurance. You create
and manage writable snapshots using the OneFS CLI or API.
Using writable snapshots, you can create and manage a modifiable copy of an entire dataset from a source snapshot. The source
snapshot and its writable copy must reside in a directory in the /ifs file system.
You can access writable snapshots with regular file system commands such as ls and find. The writable snapshots feature
creates a directory quota on the root of the writable snapshot that you can use to monitor its space usage.
NOTE: Writable snapshots preserve only the hard links within the domain of the source snapshot.
Writable snapshots populate snapshot metadata on first access. Accessing large directories for the first time with operations
such as discovery (find), unlinking, and renaming can have slow response times. OneFS reads unmodified snapshot data from
the source snapshot, which can also affect response times.
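A minimal CLI sketch for creating a writable copy follows; the source snapshot name and destination path are hypothetical, and the exact subcommand form may vary by release:

isi snapshot writable create HourlyBackup_07-13-2022_14:00 /ifs/wsnap1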
If a SnapshotIQ license becomes inactive, you will no longer be able to create new snapshots, all snapshot schedules will be
disabled, and you will not be able to modify snapshots or snapshot settings. However, you will still be able to delete snapshots
and access data contained in snapshots.
Creating snapshots with SnapshotIQ
To create snapshots, you must configure the SnapshotIQ license on the cluster. You can create snapshots either by creating a
snapshot schedule or by manually generating an individual snapshot.
Manual snapshots are useful if you want to create a snapshot immediately, or at a time that is not specified in a snapshot
schedule. For example, suppose that you are planning changes to your file system, but are unsure of the consequences. You can
capture the current state of the file system in a snapshot before you make the changes.
Before creating snapshots, consider that reverting a snapshot requires that a SnapRevert domain exists for the directory that
is being reverted. If you intend to revert snapshots for a directory, it is recommended that you create SnapRevert domains for
those directories while the directories are empty. Creating a domain for a directory that contains less data takes less time.
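As a sketch, a SnapRevert domain can be created by running the DomainMark job against the directory; the path shown here is hypothetical and the option names are assumptions:

isi job jobs start DomainMark --root=/ifs/data/media --dm-type=SnapRevert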
WeeklyBackup_%m-%d-%Y_%H:%M
WeeklyBackup_07-13-2014_14:21
5. In the Path field, specify the directory that you want to include in snapshots that are generated according to this schedule.
6. From the Schedule list, select how often you want to generate snapshots according to the schedule.
Generate snapshots every day, or skip generating snapshots for a specified number of days: Select Daily, and specify how often you want to generate snapshots.
Generate snapshots on specific days of the week, and optionally skip generating snapshots for a specified number of weeks: Select Weekly, and specify how often you want to generate snapshots.
Generate snapshots on specific days of the month, and optionally skip generating snapshots for a specified number of months: Select Monthly, and specify how often you want to generate snapshots.
Generate snapshots on specific days of the year: Select Yearly, and specify how often you want to generate snapshots.
NOTE: A snapshot schedule cannot span multiple days. For example, you cannot specify to begin generating snapshots
at 5:00 PM Monday and end at 5:00 AM Tuesday. To continuously generate snapshots for a period greater than a
day, you must create two snapshot schedules. For example, to generate snapshots from 5:00 PM Monday to 5:00 AM
Tuesday, create one schedule that generates snapshots from 5:00 PM to 11:59 PM on Monday, and another schedule
that generates snapshots from 12:00 AM to 5:00 AM on Tuesday.
7. Optional: To assign an alternative name to the most recent snapshot that is generated by the schedule, specify a snapshot
alias.
a. Next to Create an Alias, click Yes.
b. To modify the default snapshot alias name, in the Alias Name field, type an alternative name for the snapshot.
8. Optional: To specify a length of time that snapshots that are generated according to the schedule are kept before they are
deleted by OneFS, specify an expiration period.
a. Next to Snapshot Expiration, select Snapshots expire.
b. Next to Snapshots expire, specify how long you want to retain the snapshots that are generated according to the
schedule.
9. Click Create Schedule.
Related concepts
Best practices for creating snapshot schedules
Related references
Snapshot naming patterns
Create a snapshot
You can create a snapshot of a directory.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. Click Create a Snapshot.
The Create a Snapshot dialog box appears.
3. Optional: In the Snapshot Name field, type a name for the snapshot.
4. In the Path field, specify the directory that you want the snapshot to contain.
5. Optional: To create an alternative name for the snapshot, select Create a snapshot alias, and then type the alias name.
6. Optional: To assign a time when OneFS will automatically delete the snapshot, specify an expiration period.
a. Select Snapshot Expires on.
b. In the calendar, specify the day that you want the snapshot to be automatically deleted.
7. Click Create Snapshot.
Related references
Snapshot information
Variable Description
%A The day of the week.
%a The abbreviated day of the week. For example, if the snapshot is generated on a Sunday, %a is replaced with Sun.
%C The first two digits of the year. For example, if the snapshot is created in 2014, %C is replaced with 20.
%{PolicyName} The name of the replication policy that the snapshot was created for. This variable is valid only if you are specifying a snapshot naming pattern for a replication policy.
%R The time. This variable is equivalent to specifying %H:%M.
Managing snapshots
You can delete and view snapshots. You can also modify the name, duration period, and snapshot alias of an existing snapshot.
Unless you specify that you are creating a writable snapshot, the data that is contained in a snapshot is read-only and cannot be
modified.
Delete snapshots
You can delete a snapshot if you no longer want to access the data that is contained in the snapshot.
OneFS frees disk space that is occupied by deleted snapshots when the SnapshotDelete job is run. Also, if you delete a
snapshot that contains clones or cloned files, data in a shadow store might no longer be referenced by files on the cluster;
OneFS deletes unreferenced data in a shadow store when the ShadowStoreDelete job is run. OneFS routinely runs both the
ShadowStoreDelete and SnapshotDelete jobs. However, you can also manually run the jobs at any time.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the list of snapshots, select the snapshot or snapshots that you want to delete.
a. From the Select an action list, select Delete.
b. In the confirmation dialog box, click Delete.
3. Optional: To increase the speed at which deleted snapshot data is freed on the cluster, run the SnapshotDelete job.
a. Click Cluster Management > Job Operations > Job Types.
b. In the Job Types area, locate SnapshotDelete, and then click Start Job.
The Start a Job dialog box appears.
c. Click Start Job.
4. Optional: To increase the speed at which deleted data that is shared between deduplicated and cloned files is freed on the
cluster, run the ShadowStoreDelete job.
Run the ShadowStoreDelete job only after you run the SnapshotDelete job.
a. Click Cluster Management > Job Operations > Job Types.
b. In the Job Types area, locate ShadowStoreDelete, and then click Start Job.
The Start a Job dialog box appears.
c. Click Start Job.
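The same cleanup can be performed from the command line. In the following sketch the snapshot name is a placeholder; the two isi job jobs start commands queue the maintenance jobs described above.
isi snapshot snapshots delete MediaSnapshot
isi job jobs start SnapshotDelete
isi job jobs start ShadowStoreDelete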
Related concepts
Snapshot disk-space usage
Reducing snapshot disk-space usage
Best practices for creating snapshots
Modify snapshot attributes
You can modify the name and expiration date of a snapshot.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the list of snapshots, locate the snapshot that you want to modify, and then click View/Edit.
The View Snapshot Details dialog box appears.
3. Click Edit.
The Edit Snapshot Details dialog box appears.
4. Modify the attributes that you want to change.
5. Click Save Changes.
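From the command line, the expiration of a snapshot can be changed with the isi snapshot snapshots modify command; the snapshot name and expiration value below are examples only.
isi snapshot snapshots modify MediaSnapshot --expires 2D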
Related references
Snapshot information
View snapshots
You can view a list of snapshots.
Click Data Protection > SnapshotIQ > Snapshots.
The snapshots are listed in the Snapshots table.
Related references
Snapshot information
Snapshot information
You can view information about snapshots, including the total amount of space consumed by all snapshots.
The following information is displayed in the Saved Snapshots area:
Saved Snapshots Indicates the total number of snapshots that exist on the cluster.
Snapshots Pending Deletion Indicates the total number of snapshots that were deleted on the cluster since the last snapshot delete job was run. The space that is consumed by the deleted snapshots is not freed until the snapshot delete job is run again.
Snapshot Aliases Indicates the total number of snapshot aliases that exist on the cluster.
Capacity Used by Snapshots Indicates the total amount of space that is consumed by all snapshots.
Related tasks
Create a snapshot
Modify snapshot attributes
Assign a snapshot alias to a snapshot
View snapshots
Revert a snapshot
You can revert a directory back to the state it was in when a snapshot was taken. Before OneFS reverts a snapshot, OneFS
generates a snapshot of the directory being reverted, so that data that is stored in the directory is not lost. OneFS does not
delete a snapshot after reverting it.
● Create a SnapRevert domain for the directory.
● Create a snapshot of a directory.
1. Click Cluster Management > Job Operations > Job Types.
2. In the Job Types table, locate the SnapRevert job, and then click Start Job.
The Start a Job dialog box appears.
3. Optional: To specify a priority for the job, from the Priority list, select a priority.
Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default snapshot revert
priority.
4. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact Policy list, select an
impact policy.
If you do not specify a policy, the job is assigned the default snapshot revert policy.
5. In the Snapshot ID to revert field, type the name or ID of the snapshot that you want to revert, and then click Start Job.
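The SnapRevert job can also be started from the command line. The snapshot ID in the following sketch is a placeholder; use the ID reported by isi snapshot snapshots list for the snapshot that you want to revert.
isi job jobs start SnapRevert --snapid 46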
Related tasks
Create a SnapRevert domain
● To copy the selected directory to another location, click Copy, and then specify a location to copy the directory to.
● To restore a specific file, click Open, and then copy the file into the original directory, replacing the existing copy with
the snapshot version.
ls /ifs/.snapshot/Snapshot2014Jun04/archive
cp -a /ifs/.snapshot/Snapshot2014Jun04/archive/file1 \
/ifs/archive/file1_copy
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. To view the contents of the snapshot you want to restore a file or directory from, run the ls command for a subdirectory of
the snapshots root directory.
For example, the following command displays the contents of the /archive directory contained in Snapshot2014Jun04:
ls /ifs/.snapshot/Snapshot2014Jun04/archive
3. Clone a file from the snapshot by running the cp command with the -c option.
For example, the following command clones test.txt from Snapshot2014Jun04:
cp -c /ifs/.snapshot/Snapshot2014Jun04/archive/test.txt \
/ifs/archive/test_clone.text
Related concepts
File clones
3. Click Edit.
The Edit Snapshot Schedule Details dialog box appears.
4. Modify the snapshot schedule attributes that you want to change.
5. Click Save Changes.
Related concepts
Snapshot schedules
Best practices for creating snapshot schedules
2. In the Snapshot Aliases table, in the row of an alias, click View/Edit.
3. In the Alias Name area, click Edit.
4. In the Alias Name field, type a new alias name.
5. Click Save.
Related references
Snapshot information
If a snapshot alias references the live version of the file system, the Target ID is -1.
3. Optional: View information about a specific snapshot by running the isi snapshot aliases view command.
The following command displays information about latestWeekly:
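isi snapshot aliases view latestWeekly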
Do not delete or modify a snapshot lock that OneFS creates unless Dell Technologies Support instructs you to
do so.
Deleting a OneFS-created snapshot lock can result in data loss. If you delete a OneFS-created snapshot lock, the corresponding
snapshot might be deleted while it is still in use by OneFS. If OneFS cannot access a snapshot that is necessary for an
operation, the operation can malfunction and data loss can result. Modifying the expiration date of a OneFS-created snapshot
lock can also result in data loss because the corresponding snapshot can be deleted prematurely.
Related concepts
Snapshot locks
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. To create a snapshot lock, run the isi snapshot locks create command.
For example, the following command applies a snapshot lock to SnapshotAugust2021, sets the lock to expire in one month,
and adds a description of "Maintenance Lock":
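isi snapshot locks create SnapshotAugust2021 --expires 1M --comment "Maintenance Lock"
The --expires and --comment flags shown here follow the usual isi snapshot locks syntax; confirm them with isi snapshot locks create --help for your release.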
Related concepts
Snapshot locks
Related references
Snapshot lock information
CAUTION: It is recommended that you do not modify the expiration dates of snapshot locks.
Related concepts
Snapshot locks
Related references
Snapshot lock information
Delete a snapshot lock
You can delete a snapshot lock.
The system prompts you to confirm that you want to delete the snapshot lock.
3. Type yes and then press ENTER.
Related concepts
Snapshot locks
Related references
SnapshotIQ settings
SnapshotIQ settings
SnapshotIQ settings determine how snapshots behave and can be accessed.
The following SnapshotIQ settings can be configured:
Auto-create Snapshots Determines whether snapshots are automatically generated according to snapshot schedules.
Auto-delete Snapshots Determines whether snapshots are automatically deleted according to their expiration dates.
Related tasks
Configure SnapshotIQ settings
Related concepts
Snapshot reserve
Managing changelists
You can create and view changelists that describe the differences between two snapshots. You can create a changelist for any
two snapshots that have a common root directory.
Changelists are most commonly accessed by applications through the OneFS Platform API. For example, a custom application
could regularly compare the two most recent snapshots of a critical directory path to determine whether to back up the
directory, or to trigger other actions.
Create a changelist
You can create a changelist to view the differences between two snapshots.
1. Optional: Record the IDs of the snapshots.
a. Click Data Protection > SnapshotIQ > Snapshots.
b. In the row of each snapshot that you want to create a changelist for, click View Details, and record the ID of the
snapshot.
2. Click Cluster Management > Job Operations > Job Types.
3. In the Job Types area, in the ChangelistCreate row, from the Actions column, select Start Job.
4. In the Older Snapshot ID field, type the ID of the older snapshot.
5. In the Newer Snapshot ID field, type the ID of the newer snapshot.
6. Click Start Job.
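The same comparison can be started from the command line by launching the ChangelistCreate job. In the following sketch the snapshot IDs are placeholders and the flag names are assumptions; verify them against the OneFS CLI reference.
isi job jobs start ChangelistCreate --older-snapid 2 --newer-snapid 6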
Delete a changelist
You can delete a changelist.
Run the isi_changelist_mod command with the -k option.
The following command deletes changelist 22_24:
isi_changelist_mod -k 22_24
View a changelist
You can view a changelist that describes the differences between two snapshots. This procedure is available only through the
command-line interface (CLI).
1. View the IDs of changelists by running the following command:
isi_changelist_mod -l
Changelist IDs include the IDs of both snapshots used to create the changelist. If OneFS is still in the process of creating a
changelist, inprog is appended to the changelist ID.
2. Optional: View all contents of a changelist by running the isi_changelist_mod command with the -a option.
The following command displays the contents of a changelist named 2_6:
isi_changelist_mod -a 2_6
Changelist information
You can view the information contained in changelists.
NOTE: The information contained in changelists is meant to be consumed by applications through the OneFS Platform API.
The following information is displayed for each item in the changelist when you run the isi_changelist_mod command:
st_ino Displays the inode number of the specified item.
st_mode Displays the file type and permissions for the specified item.
st_size Displays the total size of the item in bytes.
st_atime Displays the POSIX timestamp of when the item was last accessed.
st_mtime Displays the POSIX timestamp of when the item was last modified.
st_ctime Displays the POSIX timestamp of when the item was last changed.
cl_flags Displays information about the item and what kinds of changes were made to the item.
01 The item was added or moved under the root directory of the snapshots.
02 The item was removed or moved out of the root directory of the snapshots.
04 The path of the item was changed without being removed from the root directory
of the snapshot.
10 The item either currently contains or at one time contained Alternate Data Streams
(ADS).
20 The item is an ADS.
40 The item has hardlinks.
NOTE: These values are added together in the output. For example, if an ADS was added, the code
would be cl_flags=021.
14
Deduplication with SmartDedupe
This section contains the following topics:
Topics:
• Deduplication overview
• Deduplication jobs
• Data replication and backup with deduplication
• Snapshots with deduplication
• Deduplication considerations
• Shadow-store considerations
• SmartDedupe license functionality
• Managing deduplication
Deduplication overview
SmartDedupe enables you to save storage space on your cluster by reducing redundant data. Deduplication maximizes the
efficiency of your cluster by decreasing the amount of storage that is required to store multiple files with identical blocks.
The SmartDedupe software module deduplicates data by scanning a PowerScale cluster for identical data blocks. Each block is
8 KB. If SmartDedupe finds duplicate blocks, SmartDedupe moves a single copy of the blocks to a hidden file called a shadow
store. SmartDedupe then deletes the duplicate blocks from the original files and replaces the blocks with pointers to the shadow
store.
Deduplication is applied at the directory level, targeting all files and directories underneath one or more root directories.
SmartDedupe not only deduplicates identical blocks in different files, it also deduplicates identical blocks within a single file.
Before you deduplicate a directory, you can get an estimate of the amount of space you can expect to save. After you begin
deduplicating a directory, you can monitor the amount of space that deduplication is saving in real time.
For two or more files to be deduplicated, the files must have the same disk pool policy ID and protection policy. If either or both
of these attributes differ between two or more identical files, or files with identical 8 KB blocks, the files are not deduplicated.
Because it is possible to specify protection policies on a per-file or per-directory basis, deduplication can be further
affected. Consider the example of two files, /ifs/data/projects/alpha/logo.jpg and /ifs/data/projects/
beta/logo.jpg. Even if the logo.jpg files in both directories are identical, they would not be deduplicated if they have
different protection policies.
If you have activated a SmartPools license on your cluster, you can also specify custom file pool policies. These file pool policies
might result in identical files or files with identical 8 K blocks being stored in different node pools. Those files would have
different disk pool policy IDs and would not be deduplicated.
SmartDedupe also does not deduplicate files that are 32 KB or smaller, because doing so would consume more cluster resources
than the storage savings are worth. The default size of a shadow store is 2 GB. Each shadow store can contain up to 256,000
blocks. Each block in a shadow store can be referenced up to 32,000 times.
Deduplication jobs
A deduplication system maintenance job deduplicates data on a cluster. You can monitor and control deduplication jobs as
you would any other maintenance job on the cluster. Although the overall performance impact of deduplication is minimal, the
deduplication job consumes 400 MB of memory per node.
When a deduplication job runs for the first time on a cluster, SmartDedupe samples blocks from each file and creates index
entries for those blocks. If the index entries of two blocks match, SmartDedupe scans the blocks that are next to the matching
pair and then deduplicates all duplicate blocks. After a deduplication job samples a file once, new deduplication jobs will not
sample the file again until the file is modified.
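A typical command-line workflow for these jobs is sketched below. The --paths flag name for isi dedupe settings modify is an assumption; verify it for your release. Run DedupeAssessment first to estimate savings, run Dedupe to deduplicate, and use isi dedupe stats to review the reported savings.
isi dedupe settings modify --paths /ifs/data/archive
isi job jobs start DedupeAssessment
isi job jobs start Dedupe
isi dedupe stats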
Related tasks
Assess deduplication space savings
Specify deduplication settings
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files that reference shadow stores behave
differently than other files.
● Reading shadow-store references might be slower than reading data directly. Reading noncached shadow-store references
is slower than reading noncached data. Reading cached shadow-store references takes no more time than reading cached
data.
● When files that reference shadow stores are replicated to another PowerScale cluster or backed up to a Network Data
Management Protocol (NDMP) backup device, the shadow stores are not transferred to the target PowerScale cluster or
backup device. The files are transferred as if they contained the data that they reference from shadow stores. On the target
PowerScale cluster or backup device, the files consume the same amount of space as if they had not referenced shadow
stores.
● When OneFS creates a shadow store, OneFS assigns the shadow store to a storage pool of a file that references the
shadow store. If you delete the storage pool that a shadow store resides on, the shadow store is moved to a pool that
contains another file that references the shadow store.
● OneFS does not delete a shadow-store block immediately after the last reference to the block is deleted. Instead, OneFS
waits until the ShadowStoreDelete job is run to delete the unreferenced block. If many unreferenced blocks exist on the
cluster, OneFS might report a negative deduplication savings until the ShadowStoreDelete job is run.
● A shadow store is protected at least as much as the most protected file that references it. For example, if one file that
references a shadow store resides in a storage pool with +2 protection and another file that references the shadow store
resides in a storage pool with +3 protection, the shadow store is protected at +3.
● Quotas account for files that reference shadow stores as if the files contained the data that is referenced from shadow
stores. From the perspective of a quota, shadow-store references do not exist. However, if a quota includes data protection
overhead, the quota does not account for the data protection overhead of shadow stores.
Related concepts
Deduplication jobs
Option Description
Enable this job type Select to enable the job type.
Default Priority Set the job priority as compared to other system maintenance jobs that run at the same time. Job priority is denoted as 1-10, with 1 being the highest and 10 being the lowest. The default value is 4.
Default Impact Policy Select the amount of system resources that the job uses compared to other system maintenance jobs that run at the same time. Select a policy value of HIGH, MEDIUM, LOW, or OFF-HOURS. The default is LOW.
Schedule Specify whether the job must be manually started or runs on a regularly scheduled basis. When you click Scheduled, you can specify a daily, weekly, monthly, or yearly schedule. For most clusters, it is recommended that you run the Dedupe job once every 10 days.
5. Click Save Changes, and then click Close.
The new job controls are saved and the dialog box closes.
6. Click Start Job.
The Dedupe job runs with the new job controls.
Related references
Deduplication information
Related references
Deduplication job report information
Related tasks
View a deduplication report
Deduplication information
You can view the amount of disk space saved by deduplication in the Deduplication Savings area:
Space Savings The total amount of physical disk space saved by deduplication, including protection overhead and
metadata. For example, if you have three identical files that are all 5 GB, the estimated physical saving
would be greater than 10 GB, because deduplication saved space that would have been occupied by file
metadata and protection overhead.
Deduplicated data The amount of space on the cluster occupied by directories that were deduplicated.
Other data The amount of space on the cluster occupied by directories that were not deduplicated.
Related tasks
View deduplication space savings
NOTE: To prevent permissions errors, make sure that ACL policy settings are the same across source and target clusters.
You can create two types of replication policies: synchronization policies and copy policies. A synchronization policy maintains an
exact replica of the source directory on the target cluster. If a file or sub-directory is deleted from the source directory, the file
or directory is deleted from the target cluster when the policy is run again.
You can use synchronization policies to fail over and fail back data between source and target clusters. When a source cluster
becomes unavailable, you can fail over data on a target cluster and make the data available to clients. When the source cluster
becomes available again, you can fail back the data to the source cluster.
A copy policy maintains recent versions of the files that are stored on the source cluster. However, files that are deleted on
the source cluster are not deleted from the target cluster. Failback is not supported for copy policies. Copy policies are most
commonly used for archival purposes.
Copy policies enable you to remove files from the source cluster without losing those files on the target cluster. Deleting files on
the source cluster improves performance on the source cluster while maintaining the deleted files on the target cluster. This can
be useful if, for example, your source cluster is being used for production purposes and your target cluster is being used only for
archiving.
After creating a job for a replication policy, SyncIQ must wait until the job completes before it can create another job for the
policy. Any number of replication jobs can exist on a cluster at a given time; however, no more than 50 replication jobs can run
on a source cluster at the same time. If more than 50 replication jobs exist on a cluster, the first 50 jobs run while the others are
queued to run.
There is no limit to the number of replication jobs that a target cluster can support concurrently. However, because more
replication jobs require more cluster resources, replication will slow down as more concurrent jobs are added.
When a replication job runs, OneFS generates workers on the source and target cluster. Workers on the source cluster read and
send data while workers on the target cluster receive and write data.
You can replicate any number of files and directories with a single replication job. You can prevent a large replication job from
overwhelming the system by limiting the amount of cluster resources and network bandwidth that data synchronization is
allowed to consume. Because each node in a cluster is able to send and receive data, the speed at which data is replicated
increases for larger clusters.
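As an illustration of the two policy types, the following command-line sketch creates one synchronization policy and one copy policy. The policy names, paths, target host, and schedule strings are placeholders; confirm the argument order and schedule grammar in the OneFS CLI reference for your release.
isi sync policies create dailySync sync /ifs/data/projects \
  10.1.2.11 /ifs/data/projects --schedule "Every day at 11:00 PM"
isi sync policies create weeklyArchive copy /ifs/data/media \
  10.1.2.11 /ifs/archive/media --schedule "Every Saturday at 11:59 PM"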
Related concepts
Creating replication policies
If a policy is configured to replicate snapshots, you can configure SyncIQ to replicate only snapshots that match a specified
naming pattern.
Configuring a policy to start when changes are made to the source directory can be useful under the following conditions:
● You want to retain an up-to-date copy of your data always.
● You are expecting many changes at unpredictable intervals.
For policies that are configured to start whenever changes are made to the source directory, SyncIQ checks the source
directories every ten seconds. SyncIQ checks all files and directories underneath the source directory, regardless of whether
those files or directories are excluded from replication. Consequently, SyncIQ might occasionally run a replication job
unnecessarily. For example, assume that newPolicy replicates /ifs/data/media but excludes /ifs/data/media/temp. If
a modification is made to /ifs/data/media/temp/file.txt, SyncIQ runs newPolicy even though /ifs/data/media/
temp/file.txt is not replicated.
If a policy is configured to start whenever changes are made to the source directory and a replication job fails, SyncIQ waits
one minute before attempting to run the policy again. SyncIQ increases this delay exponentially for each failure up to a maximum
of eight hours. You can override the delay by running the policy manually at any time. After a job for the policy completes
successfully, SyncIQ will resume checking the source directory every ten seconds.
If a policy is configured to start whenever changes are made to the source directory, you can configure SyncIQ to wait a
specified period after the source directory is modified before starting a job.
NOTE: To avoid frequent synchronization of minimal sets of changes and overtaxing system resources, you should not
configure continuous replication when the source directory is highly active. It is better to configure continuous replication
with a change-triggered delay of several hours to consolidate groups of changes.
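For example, a change-triggered policy with a consolidation delay might be configured from the command line as in the following sketch; the when-source-modified schedule keyword and the --job-delay flag name should be verified against the OneFS CLI reference.
isi sync policies modify dailySync --schedule when-source-modified --job-delay 6H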
To configure static NAT, you must edit the /etc/local/hosts file on all six nodes, and associate them with their
counterparts by adding the appropriate NAT address and node name. For example, in the /etc/local/hosts file on the three
nodes of the source cluster, the entries would look like:
10.1.2.11 target-1
10.1.2.12 target-2
10.1.2.13 target-3
Similarly, on the three nodes of the target cluster, you would edit the /etc/local/hosts file, and insert the NAT address
and name of the associated node on the source cluster. For example, on the three nodes of the target cluster, the entries would
look like:
10.8.8.201 source-1
10.8.8.202 source-2
10.8.8.203 source-3
When the NAT server receives packets of SyncIQ data from a node on the source cluster, the NAT server replaces the packet
headers and the node's port number and internal IP address with the NAT server's own port number and external IP address.
The NAT server on the source network then sends the packets through the Internet to the target network, where another NAT
server performs a similar process to transmit the data to the target node. The process is reversed when the data fails back.
With this type of configuration, SyncIQ can determine the correct addresses to connect with, so that SyncIQ can send and
receive data. In this scenario, no SmartConnect zone configuration is required.
For information about the ports used by SyncIQ, see the OneFS Security Configuration Guide for your OneFS version.
Related tasks
Perform a full or differential replication
Related concepts
Managing replication performance rules
Replication reports
After a replication job completes, SyncIQ generates a replication report that contains detailed information about the job,
including how long the job ran, how much data was transferred, and what errors occurred.
If a replication job is interrupted, SyncIQ might create a subreport about the progress of the job so far. If the job is then
restarted, SyncIQ creates another subreport about the progress of the job until the job either completes or is interrupted again.
SyncIQ creates a subreport each time the job is interrupted until the job completes successfully. If multiple subreports are
created for a job, SyncIQ combines the information from the subreports into a single report.
SyncIQ routinely deletes replication reports. You can specify the maximum number of replication reports that SyncIQ retains and
the length of time that SyncIQ retains replication reports. If the maximum number of replication reports is exceeded on a cluster,
SyncIQ deletes the oldest report each time a new report is created.
You cannot customize the content of a replication report.
NOTE: If you delete a replication policy, SyncIQ automatically deletes any reports that were generated for that policy.
Related concepts
Managing replication reports
Related concepts
Initiating data failover and failback with SyncIQ
Data failover
Failover is the process of preparing data on a secondary cluster and switching over to the secondary cluster for normal client
operations. After you fail over to a secondary cluster, you can direct clients to access, view, and modify their data on the
secondary cluster.
Before failover is performed, you must create and run a SyncIQ replication policy on the primary cluster. You initiate the failover
process on the secondary cluster. To migrate data from the primary cluster that is spread across multiple replication policies,
you must initiate failover for each replication policy.
If the action of a replication policy is set to copy, any file that was deleted on the primary cluster will still be present on the
secondary cluster. When the client connects to the secondary cluster, all files that were deleted on the primary cluster will be
available.
If you initiate failover for a replication policy while an associated replication job is running, the failover operation completes
but the replication job fails. Because data might be in an inconsistent state, SyncIQ uses the snapshot generated by the last
successful replication job to revert data on the secondary cluster to the last recovery point.
If a disaster occurs on the primary cluster, any modifications to data that were made after the last successful replication job
started are not reflected on the secondary cluster. When a client connects to the secondary cluster, their data appears as it was
when the last successful replication job was started.
Related tasks
Fail over data to a secondary cluster
Data failback
Failback is the process of restoring primary and secondary clusters to the roles that they occupied before a failover operation.
After failback is complete, the primary cluster holds the latest data set and resumes normal operations, including hosting clients
and replicating data to the secondary cluster through SyncIQ replication policies in place.
The first step in the failback process is updating the primary cluster with all of the modifications that were made to the data on
the secondary cluster. The next step is preparing the primary cluster to be accessed by clients. The final step is resuming data
replication from the primary to the secondary cluster. At the end of the failback process, you can redirect users to resume data
access on the primary cluster.
To update the primary cluster with the modifications that were made on the secondary cluster, SyncIQ must create a SyncIQ
domain for the source directory.
You can fail back data with any replication policy that meets all of the following criteria:
● The policy has been failed over.
● The policy is a synchronization policy (not a copy policy).
● The policy does not exclude any files or directories from replication.
Related tasks
Fail back data to a primary cluster
Source directory type | Target directory type | Replication allowed | Failback allowed
Non-SmartLock | Non-SmartLock | Yes | Yes
Non-SmartLock | SmartLock enterprise | Yes | Yes, unless files are committed to a WORM state on the target cluster
Non-SmartLock | SmartLock compliance | No | No
SmartLock enterprise | Non-SmartLock | Yes; however, retention dates and commit status of files are lost | Yes; however, the files do not have WORM status
SmartLock enterprise | SmartLock enterprise | Yes | Yes; any newly committed WORM files are included
SmartLock enterprise | SmartLock compliance | No | No
SmartLock compliance | Non-SmartLock | No | No
SmartLock compliance | SmartLock enterprise | No | No
SmartLock compliance | SmartLock compliance | Yes | Yes; any newly committed WORM files are included
If you are replicating a SmartLock directory to another SmartLock directory, you must create the target SmartLock directory
prior to running the replication policy. Although OneFS creates a target directory automatically if a target directory does not
already exist, OneFS does not create a target SmartLock directory automatically. If you attempt to replicate an enterprise
directory before the target directory has been created, OneFS creates a non-SmartLock target directory and the replication job
succeeds. If you replicate a compliance directory before the target directory has been created, the replication job fails.
If you replicate SmartLock directories to another PowerScale cluster with SyncIQ, the WORM state of files is replicated.
However, SmartLock directory configuration settings are not transferred to the target directory.
For example, if you replicate a directory that contains a committed file that is set to expire on March 4th, the file is still set
to expire on March 4th on the target cluster. However, if the directory on the source cluster is set to prevent files from being
committed for more than a year, the target directory is not automatically set to the same restriction.
RPO Alerts
You can configure SyncIQ to create OneFS events that alert you to the fact that a specified Recovery Point Objective (RPO)
has been exceeded. You can view these events through the same interface as other OneFS events.
The events have an event ID of 400040020. The event message for these alerts follows the following format:
For example, assume you set an RPO of 5 hours; a job starts at 1:00 PM and completes at 3:00 PM; a second job starts at 3:30
PM; if the second job does not complete by 6:00 PM, SyncIQ creates a OneFS event.
You can enable RPO alerts for SyncIQ policies, including the preferred frequency, so that you receive alerts when a SyncIQ job fails
to meet the RPO criteria.
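For example, the following sketch sets a five-hour RPO alert on a policy; the --rpo-alert flag name is an assumption and should be verified for your OneFS version.
isi sync policies modify dailySync --rpo-alert 5H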
NOTE: SyncIQ only allows source node restrictions on subnets and pools from the default groupnet.
Related concepts
Replication policies and jobs
By default, all files and directories under the source directory of a replication policy are replicated to the target cluster.
However, you can prevent directories under the source directory from being replicated.
Related tasks
Specify source directories and files
A file-criteria statement can include one or more elements. Each file-criteria element contains a file attribute, a comparison
operator, and a comparison value. You can combine multiple criteria elements in a criteria statement with Boolean "AND" and
"OR" operators. You can configure any number of file-criteria definitions.
Configuring file-criteria statements can cause the associated jobs to run slowly. It is recommended that you specify file-criteria
statements in a replication policy only if necessary.
Modifying a file-criteria statement will cause a full replication to occur the next time that a replication policy is started.
Depending on the amount of data being replicated, a full replication can take a very long time to complete.
For synchronization policies, if you modify the comparison operators or comparison values of a file attribute, and a file no longer
matches the specified file-matching criteria, the file is deleted from the target the next time the job is run. This rule does not
apply to copy policies.
Related references
File criteria options
Related tasks
Specify source directories and files
Date created Includes or excludes files based on when the file was created. This option is available for copy policies
only.
You can specify a relative date and time, such as "two weeks ago", or specific date and time, such as
"January 1, 2012." Time settings are based on a 24-hour clock.
Date accessed Includes or excludes files based on when the file was last accessed. This option is available for copy
policies only, and only if the global access-time-tracking option of the cluster is enabled.
You can specify a relative date and time, such as "two weeks ago", or specific date and time, such as
"January 1, 2012." Time settings are based on a 24-hour clock.
Date modified Includes or excludes files based on when the file was last modified. This option is available for copy
policies only.
You can specify a relative date and time, such as "two weeks ago", or specific date and time, such as
"January 1, 2012." Time settings are based on a 24-hour clock.
File name Includes or excludes files based on the file name. You can specify to include or exclude full or partial
names that contain specific text.
The following wildcard characters are accepted:
NOTE: Alternatively, you can filter file names by using POSIX regular-expression (regex) text.
PowerScale clusters support IEEE Std 1003.2 (POSIX.2) regular expressions. For more information
about POSIX regular expressions, see the BSD man pages.
Path Includes or excludes files based on the file path. This option is available for copy policies only.
You can specify to include or exclude full or partial paths that contain specified text. You can also include
the wildcard characters *, ?, and [ ].
Type Includes or excludes files based on one of the following file-system object types:
● Soft link
● Regular file
● Directory
Related concepts
Excluding files in replication
Related tasks
Specify source directories and files
3. Specify which nodes you want replication policies to connect to when a policy is run.
● To connect policies to all nodes on a source cluster, click Run the policy on all nodes in this cluster.
● To connect policies only to nodes contained in a specified subnet and pool:
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
NOTE: SyncIQ does not support dynamically allocated IP address pools. If a replication job connects to a dynamically
allocated IP address, SmartConnect might reassign the address while a replication job is running, which would
disconnect the job and cause it to fail.
NOTE: If you create a replication policy for a SmartLock directory, the SyncIQ and SmartLock compliance domains must be
configured at the same root level. A SmartLock compliance domain cannot be nested inside a SyncIQ domain.
Related concepts
Replication policies and jobs
The next step in creating a replication policy is specifying source directories and files.
Related references
Replication policy settings
2. Optional: Prevent specific subdirectories of the source directory from being replicated.
● To include a directory, in the Included Directories area, click Add a directory path.
● To exclude a directory, in the Excluded Directories area, click Add a directory path.
3. Optional: Prevent specific files from being replicated by specifying file matching criteria.
a. In the File Matching Criteria area, select a filter type.
b. Select an operator.
c. Type a value.
Files that do not meet the specified criteria will not be replicated to the target cluster. For example, if you specify File
Type doesn't match .txt, SyncIQ will not replicate any files with the .txt file extension. If you specify Created
after 08/14/2013, SyncIQ will not replicate any files created before August 14th, 2013.
If you want to specify more than one file matching criterion, you can control how the criteria relate to each other by clicking
either Add an "Or" condition or Add an "And" condition.
4. Specify which nodes you want the replication policy to connect to when the policy is run.
● To connect the policy to all nodes in the source cluster, click Run the policy on all nodes in this cluster.
● To connect the policy only to nodes contained in a specified subnet and pool:
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
The next step in the process of creating a replication policy is specifying the target directory.
Related concepts
Excluding directories in replication
Excluding files in replication
Related references
File criteria options
Replication policy settings
NOTE: SyncIQ does not support dynamically allocated IP address pools. If a replication job connects to a dynamically
allocated IP address, SmartConnect might reassign the address while a replication job is running, which would
disconnect the job and cause it to fail.
2. In the Target Directory field, type the absolute path of the directory on the target cluster that you want to replicate data
to.
CAUTION:
If you specify an existing directory on the target cluster, make sure that the directory is not the target of
another replication policy. If this is a synchronization policy, make sure that the directory is empty. All files
are deleted from the target of a synchronization policy the first time that the policy is run.
If the specified target directory does not already exist on the target cluster, the directory is created the first time that the
job is run. We recommend that you do not specify the /ifs directory. If you specify the /ifs directory, the entire target
cluster is set to a read-only state, which prevents you from storing any other data on the cluster.
If this is a copy policy, and files in the target directory share the same name as files in the source directory, the target
directory files are overwritten when the job is run.
3. If you want replication jobs to connect only to the nodes included in the SmartConnect zone specified by the target cluster,
click Connect only to the nodes within the target cluster SmartConnect Zone.
The next step in the process of creating a replication policy is to specify policy target snapshot settings.
Related references
Replication policy settings
%{PolicyName}-on-%{SrcCluster}-latest
newPolicy-on-Cluster1-latest
3. Optional: To modify the snapshot naming pattern for snapshots that are created according to the replication policy, in
the Snapshot Naming Pattern field, type a naming pattern. Each snapshot that is generated for this replication policy is
assigned a name that is based on this pattern.
For example, the following naming pattern is valid:
%{SnapName}-%{SnapCreateTime}
newPolicy-10:30
4. Select one of the following options for how snapshots should expire:
● Click Snapshots do not expire.
● Click Snapshots expire after... and specify an expiration period.
● Click Existing Snapshots Expires to expire the snapshots after a time interval.
NOTE: The target snapshot expire duration is the duration obtained by subtracting the current time from the source
snapshot expiration.
The next step in the process of creating a replication policy is configuring advanced policy settings.
Related references
Replication policy settings
3. Optional: If you want SyncIQ to perform a checksum on each file data packet that is affected by the replication policy, select
the Validate File Integrity check box.
If you enable this option, and the checksum values for a file data packet do not match, SyncIQ retransmits the affected
packet.
4. Optional: To increase the speed of failback for the policy, click Prepare policy for accelerated failback performance.
Selecting this option causes SyncIQ to perform failback configuration tasks the next time that a job is run, rather than
waiting to perform those tasks during the failback process. This will reduce the amount of time needed to perform failback
operations when failback is initiated.
5. Optional: To modify the length of time SyncIQ retains replication reports for the policy, in the Keep Reports For area,
specify a length of time.
After the specified expiration period has passed for a report, SyncIQ automatically deletes the report.
Some units of time are displayed differently when you view a report than how they were originally entered. Entering a
number of days that is equal to a corresponding value in weeks, months, or years results in the larger unit of time being
displayed. For example, if you enter a value of 7 days, 1 week appears for that report after it is created. This change occurs
because SyncIQ internally records report retention times in seconds and then converts them into days, weeks, months, or
years.
6. Optional: Specify whether to record information about files that are deleted by replication jobs by selecting one of the
following options:
● Click Record when a synchronization deletes files or directories.
● Click Do not record when a synchronization deletes files or directories.
Related references
Replication policy settings
NOTE: You can assess only replication policies that have never been run before.
Related concepts
Replication policies and jobs
Related references
Replication job information
Status The status of the job. The following job statuses are possible:
Related concepts
Data failover and failback with SyncIQ
Related concepts
Data failover
<replication-policy-name>_mirror
3. Before beginning the failback process, prevent clients from accessing the secondary cluster.
This action ensures that SyncIQ fails back the latest data set, including all changes that users made to data on the
secondary cluster while the primary cluster was out of service. We recommend that you wait until client activity is low before
preventing access to the secondary cluster.
4. On the secondary cluster, click Data Protection > SyncIQ > Policies.
5. In the SyncIQ Policies list, for the mirror policy, click More > Start Job.
Alternatively, you can edit the mirror policy on the secondary cluster, and specify a schedule for the policy to run.
6. On the primary cluster, click Data Protection > SyncIQ > Local Targets.
7. On the primary cluster, in the SyncIQ Local Targets list, for the mirror policy, select More > Allow Writes.
8. On the secondary cluster, click Data Protection > SyncIQ > Policies.
9. On the secondary cluster, in the SyncIQ Policies list, click More > Resync-prep for the mirror policy.
This puts the secondary cluster back into read-only mode and ensures that the data sets are consistent on both the primary
and secondary clusters.
Redirect clients to begin accessing their data on the primary cluster. Although not required, it is safe to remove a mirror policy
after failback has completed successfully.
Related concepts
Data failback
SIQ-Failover-<policy-name>-<year>-<month>-<day>_<hour>-<minute>-<second>
4. If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the source
directory of the replication policy, apply those settings to the target directory.
Because autocommit information is not transferred to the target cluster, files that were scheduled to be committed to
a WORM state on the original source cluster would not be scheduled to be committed at the same time on the target
cluster. To make sure that all files are retained for the appropriate time period, you can commit all files in target SmartLock
directories to a WORM state.
For example, the following command automatically commits all files in /ifs/data/smartlock to a WORM state after one
minute:
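isi worm domains modify /ifs/data/smartlock --autocommit-offset 1m
The --autocommit-offset flag shown here is the assumed mechanism for setting an autocommit time period; confirm it with isi worm domains modify --help.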
3. Optional: To ensure that SmartLock protection is enforced for all files, commit all migrated files in the SmartLock target
directory to a WORM state.
Because autocommit information is not transferred from the recovery cluster, commit all migrated files in target SmartLock
directories to a WORM state.
For example, the following command automatically commits all files in /ifs/data/smartlock to a WORM state after one
minute:
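isi worm domains modify /ifs/data/smartlock --autocommit-offset 1m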
This step is unnecessary if you have configured an autocommit time period for the SmartLock directories being migrated.
4. On the cluster with the migrated data, click Data Protection > SyncIQ > Local Targets.
5. In the SyncIQ Local Targets table, for each replication policy, select More > Allow Writes.
6. Optional: If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the
source directories of the replication policies, apply those settings to the target directories on the cluster now containing the
migrated data.
7. Optional: Delete the copy of the SmartLock data on the recovery cluster.
You cannot recover the space consumed by the source SmartLock directories until all files are released from a WORM state.
If you want to free the space before files are released from a WORM state, contact PowerScale Technical Support for
information about reformatting your recovery cluster.
Related concepts
Replication policies and jobs
Related references
Replication policy settings
If you disable a replication policy while an associated replication job is running, the running job is not interrupted. However,
the policy will not create another job until the policy is enabled.
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a replication policy, select either Enable Policy or Disable Policy.
If neither Enable Policy nor Disable Policy appears, verify that a replication job is not running for the policy. If an
associated replication job is not running, ensure that the SyncIQ license is active on the cluster.
Related concepts
Replication policies and jobs
Related references
Replication policy settings
Related tasks
View replication policies targeting the local cluster
Copy If a file is deleted in the source directory, the file is not deleted in the target
directory.
Synchronize Deletes files in the target directory if they are no longer present on the source. This
ensures that an exact replica of the source directory is maintained on the target
cluster.
Run job Determines whether jobs are run automatically according to a schedule or only when manually specified by
a user.
Last Successful Run Displays the last time that a replication job for the policy completed successfully.
Last Started Displays the last time that the policy was run.
Source Root Directory The full path of the source directory. Data is replicated from the source directory to the target directory.
Included Directories Determines which directories are included in replication. If one or more directories are specified by this setting, any directories that are not specified are not replicated.
Validate File Integrity Determines whether OneFS performs a checksum on each file data packet that is affected by a replication job. If a checksum value does not match, OneFS retransmits the affected file data packet.
Keep Reports For Specifies how long replication reports are kept before they are automatically deleted by OneFS.
Log Deletions on Synchronization Determines whether OneFS records when a synchronization job deletes files or directories on the target cluster.
The following replication policy fields are available only through the OneFS command-line interface.
Source Subnet Specifies whether replication jobs connect to any nodes in the cluster or if jobs can connect only to nodes
in a specified subnet.
Source Pool Specifies whether replication jobs connect to any nodes in the cluster or if jobs can connect only to nodes
in a specified pool.
Password Set Specifies a password to access the target cluster.
Report Max Count Specifies the maximum number of replication reports that are retained for this policy.
Target Compare Initial Sync Determines whether full or differential replications are performed for this policy. Full or differential replications are performed the first time a policy is run and after a policy is reset.
Resolve Determines whether you can manually resolve the policy if a replication job encounters an error.
After a replication policy is reset, SyncIQ performs a full or differential replication the next time the policy is
run. Depending on the amount of data being replicated, a full or differential replication can take a very long time
to complete.
1. Click Data Protection > SyncIQ > Local Targets.
2. In the SyncIQ Local Targets table, in the row for a replication policy, select Break Association.
3. In the Confirm dialog box, click Yes.
Related concepts
Controlling replication job resource consumption
Related concepts
Replication reports
3. In the Number of Reports to Keep Per Policy field, type the maximum number of reports you want to retain at a time for
a replication policy.
4. Click Submit.
Related concepts
Replication reports
Policy Name The name of the associated policy for the job. You can view or edit settings for the policy by clicking the
policy name.
Status Displays the status of the job. The following job statuses are possible:
Running
The job is currently running without error.
Paused
The job has been temporarily paused.
Finished
The job completed successfully.
Failed
The job failed to complete.
Source Directory The path of the source directory on the source cluster.
Target Host The IP address or fully qualified domain name of the target cluster.
Related tasks
View replication reports
Related concepts
Replication policies and jobs
3. Run the policy by running the isi sync jobs start command.
For example, the following command runs newPolicy:
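isi sync jobs start newPolicy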
There are one or more non-encrypted SyncIQ policies on this cluster. After enabling
encryption, each of these policies will need to be manually updated to select a
target cluster certificate.
4. Click Enable.
Encryption is set to true. Existing policies must be manually updated to select a target cluster certificate.
After encryption is enabled, it cannot be disabled.
NOTE: Click Cancel to close the dialog box.
Click Configure certificate to go to the Certificates tab and create a target cluster certificate.
Limitation Description
ADS files Skipped when encountered
Hardlinks Not supported. An object is created for each link (hard links are not preserved)
Symlinks Skipped when encountered
Special files Skipped when encountered
Metadata Only the following POSIX attributes are copied: mode, UID, GID, atime, mtime, ctime
File name encoding Encodings are converted to UTF-8
Large files Errors are returned for files greater than the cloud provider's maximum object size
Sparse files Sparse sections are not preserved; they are written out fully as zeros
CloudPools Not supported
Compression in transit Not supported
Copy back from the cloud Not supported if the data was not created by Data Mover
Incremental transfers Not supported for file-to-object transfers. Only one-time copy to cloud/copy back from cloud
is supported
Certificate requirements
The following Certificate Authorities (CA) and trust hierarchies are required.
Requirement Description
TLS certificates ● A mutually authenticated TLS handshake is required.
Authorization, authentication, and encryption are provided
by TLS certificates.
● TLS certificates are always required for daemon startup
and all communication between Datamover engines.
● Encryption can be disabled, but authorization and
authentication cannot be disabled.
Reference documentation
The OneFS CLI Administration Guide provides details for configuring and administering Datamover. The Datamover feature
includes a full set of isi dm command line interface (CLI) commands and APIs in the PowerScale OneFS 9.4.0.0 CLI Command
Reference and PowerScale OneFS 9.4.0.0 API Reference Guides.
You can find these guides under the Documentation tab on the PowerScale OneFS support site: https://www.dell.com/
support/home/en-us/product-support/product/isilon-onefs/docs.
FlexProtect overview
A PowerScale cluster is designed to continuously serve data, even when one or more components simultaneously fail. OneFS
ensures data availability by striping or mirroring data across the cluster. If a cluster component fails, data that is stored on the
failed component is available on another component. After a component failure, lost data is restored on healthy components by
the FlexProtect proprietary system.
Data protection is specified at the file level, not the block level, enabling the system to recover data quickly. All data, metadata,
and parity information is distributed across all nodes: the cluster does not require a dedicated parity node or drive. No single
node limits the speed of the rebuild process.
File striping
OneFS uses a PowerScale cluster's internal network to distribute data automatically across individual nodes and disks in the
cluster. OneFS protects files as the data is being written. No separate action is necessary to protect data.
Before writing files to storage, OneFS breaks files into smaller logical chunks called stripes. The size of each file chunk is
referred to as the stripe unit size. Each OneFS block is 8 KB, and a stripe unit consists of 16 blocks, for a total of 128 KB per
stripe unit. During a write, OneFS breaks data into stripes and then logically places the data into a stripe unit. As OneFS writes
data across the cluster, OneFS fills the stripe unit and protects the data according to the number of writable nodes and the
specified protection policy.
OneFS can continuously reallocate data and make storage space more usable and efficient. As the cluster size increases, OneFS
stores large files more efficiently.
To protect files that are 128KB or smaller, OneFS does not break these files into smaller logical chunks. Instead, OneFS uses
mirroring with forward error correction (FEC). With mirroring, OneFS makes copies of each small file's data (N), adds an FEC
parity chunk (M), and distributes multiple instances of the entire protection unit (N+M) across the cluster.
Related concepts
Requesting data protection
Related references
Requested protection settings
Requested protection disk space usage
Smartfail
OneFS protects data stored on failing nodes or drives through a process called smartfailing.
During the smartfail process, OneFS places a device into quarantine. Data stored on quarantined devices is read only. While
a device is quarantined, OneFS reprotects the data on the device by distributing the data to other devices. After all data
migration is complete, OneFS logically removes the device from the cluster, the cluster logically changes its width to the new
configuration, and the node or drive can be physically replaced.
OneFS smartfails devices only as a last resort. Although you can manually smartfail nodes or drives, it is recommended that you
first consult Dell Technologies Support.
Occasionally a device might fail before OneFS detects a problem. If a drive fails without being smartfailed, OneFS automatically
starts rebuilding the data to available free space on the cluster. However, because a node might recover from a failure, if a node
fails, OneFS does not start rebuilding data unless the node is logically removed from the cluster.
Node failures
Because node loss is often a temporary issue, OneFS does not automatically start reprotecting data when a node fails or goes
offline. If a node reboots, the file system does not need to be rebuilt because it remains intact during the temporary failure.
If you configure N+1 data protection on a cluster, and one node fails, all of the data is still accessible from every other node in
the cluster. If the node comes back online, the node rejoins the cluster automatically without requiring a full rebuild.
For 4U Isilon IQ X-Series and NL-Series nodes, and IQ 12000X/EX 12000 combination platforms, the minimum cluster size
of three nodes requires a minimum protection of N+2:1.
Related concepts
Requested data protection
| Number of nodes | [+1n] | [+2d:1n] | [+2n] | [+3d:1n] | [+3d:1n1d] | [+3n] | [+4d:1n] | [+4d:2n] | [+4n] |
|---|---|---|---|---|---|---|---|---|---|
| 3 | 2+1 (33%) | 4+2 (33%) | — | 6+3 (33%) | 3+3 (50%) | — | 8+4 (33%) | — | — |
| 4 | 3+1 (25%) | 6+2 (25%) | 2+2 (50%) | 9+3 (25%) | 5+3 (38%) | — | 12+4 (25%) | 4+4 (50%) | — |
| 5 | 4+1 (20%) | 8+2 (20%) | 3+2 (40%) | 12+3 (20%) | 7+3 (30%) | — | 16+4 (20%) | 6+4 (40%) | — |
| 6 | 5+1 (17%) | 10+2 (17%) | 4+2 (33%) | 15+3 (17%) | 9+3 (25%) | 3+3 (50%) | 16+4 (20%) | 8+4 (33%) | — |
| 7 | 6+1 (14%) | 12+2 (14%) | 5+2 (29%) | 15+3 (17%) | 11+3 (21%) | 4+3 (43%) | 16+4 (20%) | 10+4 (29%) | — |
| 8 | 7+1 (13%) | 14+2 (12.5%) | 6+2 (25%) | 15+3 (17%) | 13+3 (19%) | 5+3 (38%) | 16+4 (20%) | 12+4 (25%) | 4+4 (50%) |
| 9 | 8+1 (11%) | 16+2 (11%) | 7+2 (22%) | 15+3 (17%) | 15+3 (17%) | 6+3 (33%) | 16+4 (20%) | 14+4 (22%) | 5+4 (44%) |
| 10 | 9+1 (10%) | 16+2 (11%) | 8+2 (20%) | 15+3 (17%) | 15+3 (17%) | 7+3 (30%) | 16+4 (20%) | 16+4 (20%) | 6+4 (40%) |
| 12 | 11+1 (8%) | 16+2 (11%) | 10+2 (17%) | 15+3 (17%) | 15+3 (17%) | 9+3 (25%) | 16+4 (20%) | 16+4 (20%) | 8+4 (33%) |
The parity overhead for mirrored data protection is not affected by the number of nodes in the cluster. The following table
describes the parity overhead for requested mirrored protection (for Nx mirroring, the overhead is (N-1)/N of the raw capacity).

| Requested mirrored protection | 2x | 3x | 4x | 5x | 6x | 7x | 8x |
|---|---|---|---|---|---|---|---|
| Parity overhead | 50% | 67% | 75% | 80% | 83% | 86% | 88% |
Related concepts
Requested data protection
Example: Restoring content from a backup created with PowerScale OneFS 9.0 to a cluster running PowerScale OneFS 9.5
is supported. However, an attempt to restore content created on a cluster running OneFS 9.5 to a pre-OneFS 9.5
cluster is not supported.
NOTE: To use Multistreaming, disable the Backup Restartable extension (BRE) on both Isilon and the DMA.
With multi-stream backup, you can use your DMA to specify multiple streams of data to back up concurrently. OneFS considers
all streams in a specific multi-stream backup operation to be part of the same backup context. A multi-stream backup context is
deleted if a multi-stream backup session is successful. If a specific stream fails, the backup context is retained for five minutes
after the backup operation completes and you can retry the failed stream within that time period.
If you used the NDMP multi-stream backup feature to back data up to tape drives, you can also recover that data in multiple
streams, depending on the DMA. In OneFS 8.0.0.0 and later releases, multi-stream backups are supported with CommVault
Simpana version 11.0 Service Pack 3 and NetWorker version 9.0.1. If you back up data using CommVault Simpana, a multi-stream
context is created, but data is recovered one stream at a time.
Related concepts
Snapshot-based incremental backups
Supported DMAs
NDMP backups are coordinated by a data management application (DMA) that runs on a backup server.
NOTE: All supported DMAs can connect to a PowerScale cluster through the IPv4 protocol. However, only some of the
DMAs support the IPv6 protocol for connecting to a PowerScale cluster.
Supported tape devices: For NDMP three-way backups, the data management application (DMA) determines the tape devices that are supported.
Supported tape libraries: For both two-way and three-way NDMP backups, OneFS supports all of the tape libraries that are supported by the DMA.
Supported virtual tape libraries: For three-way NDMP backups, the DMA determines the virtual tape libraries that will be supported.
SmartConnect recommendations
● A two-way NDMP backup session with SmartConnect requires a Fibre Attached Storage node for backup and recovery
operations. However, a three-way NDMP session with SmartConnect does not require Fibre Attached Storage nodes for
these operations.
● For an NDMP two-way backup session with SmartConnect, connect to the NDMP session through a dedicated SmartConnect
zone consisting of a pool of Network Interface Cards (NICs) on the Fibre Attached Storage nodes.
● For a two-way NDMP backup session without SmartConnect, initiate the backup session through a static IP address or fully
qualified domain name of the Fibre Attached Storage node.
● For a three-way NDMP backup operation, the front-end Ethernet network or the interfaces of the nodes are used to serve
the backup traffic. Therefore, it is recommended that you configure a DMA to initiate an NDMP session only using the nodes
that are not already overburdened serving other workloads or connections.
DMA-specific recommendations
● Enable parallelism for the DMA if the DMA supports this option. This allows OneFS to back up data to multiple tape devices
at the same time.
NOTE: Quotation marks (" ") are required for Symantec NetBackup when multiple patterns are specified. The patterns are not
limited to directories.
Unanchored patterns such as home or user1 target a string of text that might belong to many files or directories. If a pattern
contains '/', it is an anchored pattern. An anchored pattern is always matched from the beginning of a path. A pattern in the
middle of a path is not matched. Anchored patterns target specific file path names, such as ifs/data/home. You can include
or exclude either type of pattern.
If you specify both include and exclude patterns, the include pattern is processed first, followed by the exclude pattern.
4. In the Password and Confirm password fields, type the password for the account.
NOTE: There are no special password policy requirements for an NDMP administrator.
If you cannot set an environment variable directly on a DMA for your NDMP backup or recovery operation, log in to the
PowerScale cluster through an SSH client and set the environment variable on the cluster through the isi ndmp settings
variables create command.
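For example, the following is a minimal sketch of excluding temporary files from all backup operations from the cluster CLI. The option names shown (--path, --name, --value) and the list subcommand are assumptions based on the command described above; verify the exact syntax in the OneFS CLI Command Reference before use.

# Hedged example: set the EXCLUDE variable for all NDMP backup operations.
# Option names are assumptions; confirm with the CLI reference.
isi ndmp settings variables create --path=/BACKUP --name=EXCLUDE --value="tmp*"

# List the configured variables to confirm the setting (subcommand assumed).
isi ndmp settings variables list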
Setting Description
Add Variables Add new path environment variables along with their values.
Path The path under the /ifs directory to store new environment
variables. If Path is set to "/BACKUP", the environment
variable is applied to all the backup operations. If Path is set
to "/RESTORE", the environment variable is applied to all the
restore operations.
Add Name/Value Add a name and value for the new environment variable.
Name Name of the environment variable.
Value Value set for the environment variable
Action Edit, view, or delete an environment variable at a specified
path.
b. Click Add Name/Value, specify an environment variable name and value, and then click Create Variable.
BACKUP_MODE=snapshot settings to trigger a faster-incremental backup instead of a token-based backup. The environment
variable settings prompt the NDMP server to compare the BASE_DATE value against the timestamp in the dumpdates file to
find the prior backup. Even though the DMA fails the latest faster-incremental backup, OneFS retains the prior snapshot. The
DMA can then retry the faster-incremental backup in the next backup cycle using the BASE_DATE value of the prior backup.
| Environment variable | Valid values | Default | Description |
|---|---|---|---|
| EXCLUDE | <file-matching-pattern> | None | If you specify this option, OneFS does not back up files and directories that meet the specified pattern. Separate multiple patterns with a space. |
| FILES | <file-matching-pattern> | None | If you specify this option, OneFS backs up only files and directories that meet the specified pattern. Separate multiple patterns with a space. NOTE: As a rule, files are matched first and then the EXCLUDE pattern is applied. |
| MSB_RETENTION_PERIOD | Integer | 300 sec | For a multi-stream backup session, specifies the backup context retention period. |
| MSR_RETENTION_PERIOD | 0 through 60*60*24 | 600 sec | For a multi-stream restore session, specifies the recovery context retention period within which a recovery session can be retried. |
| RECURSIVE | Y, N | Y | For restore sessions only. Specifies that the restore session should recover files or subdirectories under a directory automatically. |
| RESTORE_BIRTHTIME | Y, N | N | Specifies whether to recover the birth time for a recovery session. |
Related concepts
Excluding files and directories from NDMP backups
Setting Description
Type The context type. It can be one of backup, restartable backup,
or restore.
ID An identifier for a backup or restore job. A backup or restore
job consists of one or more streams all of which are identified
by this identifier. This identifier is generated by the NDMP
backup daemon.
Start Time The time when the context started in month date time year
format.
Actions View or delete a selected context.
Status Status of the context. The status shows up as active if a
backup or restore job is initiated and continues to remain
active until the backup stream has completed or errored out.
Path The path where all the working files for the selected context
are stored.
MultiStream Specifies whether the multistream backup process is enabled.
Lead Session ID The identifier of the first backup or restore session
corresponding to a backup or restore operation.
Sessions A table with a list of all the sessions that are associated with
the selected context.
Item Description
Session Specifies the unique identification number that OneFS assigns to the session.
Elapsed Specifies the time that has elapsed since the session started.
Transferred Specifies the amount of data that was transferred during the session.
Throughput Specifies the average throughput of the session over the past five minutes.
Client/Remote Specifies the IP address of the backup server that the data management application
(DMA) is running on. If a NDMP three-way backup or restore operation is currently
running, the IP address of the remote tape media server also appears.
Mover/Data Specifies the current state of the data mover and the data server. The first word
describes the activity of the data mover. The second word describes the activity of the
data server.
The data mover and data server send data to and receive data from each other
during backup and restore operations. The data mover is a component of the backup
server that receives data during backups and sends data during restore operations. The
data server is a component of OneFS that sends data during backups and receives
information during restore operations.
NOTE: When a session ID instead of a state appears, the session is automatically
redirected.
The following states might appear:
Operation Specifies the type of operation (backup or restore) that is currently in progress. If no
operation is in progress, this field is blank.
L—Level-based
T—Token-based
S—Snapshot mode
s—Snapshot mode and a full backup (when root dir is new)
r—Restartable backup
R—Restarted backup
+—Backup is running with multiple state threads for better performance
0-10—Dump Level
R ({M|s}[F | D | S]{h}) Where:
A(B)—The session is an agent session for a redirected restore
operation
M—Multi-stream restore
s—Single-threaded restore (when RESTORE_OPTIONS=1)
F—Full restore
D—DAR
S—Selective restore
h—Restore hardlinks by table
Source/Destination If an operation is currently in progress, specifies the /ifs directories that are affected
by the operation. If a backup is in progress, displays the path of the source directory
that is being backed up. If a restore operation is in progress, displays the path of the
directory that is being restored along with the destination directory to which the tape
media server is restoring data. If you are restoring data to the same location that you
backed up your data from, the same path appears twice.
Device Specifies the name of the tape or media changer device that is communicating with the
PowerScale cluster.
Mode Specifies how OneFS is interacting with data on the backup media server through the
following options:
The following are examples of active NDMP restore sessions, as indicated through the Operation field that is described in the
previous table:
Related tasks
View NDMP sessions
Related references
NDMP session information
Setting Description
LNN Specifies the logical node number of the Fibre Attached Storage node.
Port Specifies the name and port number of the Fibre Attached Storage node.
Topology Specifies the type of Fibre Channel topology that is supported by the port. Options are:
Point to Point A single backup device or Fibre Channel switch directly connected to the
port.
Loop Multiple backup devices connected to a single port in a circular formation.
Auto Automatically detects the topology of the connected device. This is the
recommended setting and is required for a switched-fabric topology.
WWNN Specifies the world wide node name (WWNN) of the port. This name is the same for each port on
a given node.
WWPN Specifies the world wide port name (WWPN) of the port. This name is unique to the port.
Rate Specifies the rate at which data is sent through the port. The rate can be set to 1 Gb/s, 2
Gb/s, 4 Gb/s, 8 Gb/s, and Auto. 8 Gb/s is available for A100 nodes only. If set to Auto, the
Fibre Channel chip negotiates with connected Fibre Channel switch or Fibre Channel devices to
determine the rate. Auto is the recommended setting.
Related tasks
Modify NDMP backup port settings
View NDMP backup ports
Related references
NDMP backup port settings
Run the command as shown in the following example to apply a preferred IP setting for a subnet group:
Run the command as shown in the following example to modify the NDMP preferred IP setting for a subnet:
Setting Description
Name Specifies a device name assigned by OneFS.
State Indicates whether the device is in use. If data is currently being backed up to or restored from the
device, Read/Write appears. If the device is not in use, Closed appears.
Related tasks
Detect NDMP backup devices
View NDMP backup devices
5. Optional: To remove entries for devices or paths that have become inaccessible, select the Delete inaccessible paths or
devices check box.
6. Click Submit.
For each device that is detected, an entry is added to either the Tape Devices or Media Changers tables.
Related references
NDMP backup device settings
Setting Description
Date Specifies the date when an entry was added to the
dumpdates file.
Snapshot ID Identifies changed files for the next level of backup. This ID
is applicable only for snapshot-based backups. In all the other
cases, the value is 0.
Actions Deletes an entry from the dumpdates file.
The value of the path option must match the FILESYSTEM environment variable that is set during the backup operation.
The value that you specify for the name option is case sensitive.
3. Start the restore operation.
NetWorker refers to the tape drive sharing capability as DDS (dynamic drive sharing). Symantec NetBackup uses the term SSO
(shared storage option). Consult your DMA vendor documentation for configuration instructions.
Related references
NDMP environment variables
Related concepts
Excluding files and directories from NDMP backups
Related references
NDMP environment variables
Service: False
Port: 10000
DMA: generic
Bre Max Num Contexts: 64
Context Retention Duration: 300
Smartlink File Open Timeout: 10
Enable Redirector: True
Service: False
Port: 10000
DMA: generic
Bre Max Num Contexts: 64
Context Retention Duration: 600
Smartlink File Open Timeout: 10
Enable Throttler: True
Throttler CPU Threshold: 50
3. If required, change the throttler CPU threshold as shown in the following example:
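A minimal sketch, assuming that the global NDMP settings are changed with isi ndmp settings global modify and that the option is named --throttler-cpu-threshold (both assumptions; verify against the CLI reference):

# Hedged example: raise the throttler CPU threshold from 50 to 80 percent.
# The option name --throttler-cpu-threshold is an assumption.
isi ndmp settings global modify --throttler-cpu-threshold=80

# Confirm the change (subcommand assumed).
isi ndmp settings global view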
SmartLock overview
With the SmartLock software module, you can protect files on a PowerScale cluster from being modified, overwritten, or
deleted. To protect files in this manner, you must activate a SmartLock license.
With SmartLock, you can identify a directory in OneFS as a WORM domain. WORM stands for write once, read many. All files
within the WORM domain can be committed to a WORM state, meaning that those files cannot be overwritten, modified, or
deleted.
After a file is removed from a WORM state, you can delete the file. However, you can never modify a file that has been
committed to a WORM state, even after it is removed from a WORM state.
In OneFS, SmartLock can be deployed in one of two modes: compliance mode or enterprise mode.
Compliance mode
SmartLock compliance mode enables you to protect your data in compliance with U.S. Securities and Exchange Commission rule
17a-4. Rule 17a-4 is aimed at securities brokers and dealers, and specifies that records of all securities transactions must be
archived in a nonrewritable, nonerasable manner.
NOTE: You can configure a PowerScale cluster for SmartLock compliance mode only during the initial cluster configuration
process, before you activate a SmartLock license. A cluster cannot be converted to SmartLock compliance mode after the
cluster is initially configured and put into production.
Configuring a cluster for SmartLock compliance mode disables the root user. You cannot log in to that cluster through the
root user account. Instead, you can log in to the cluster through the compliance administrator account that is configured during
initial SmartLock compliance mode configuration.
When you are logged in to a SmartLock compliance mode cluster through the compliance administrator account, you can
perform administrative tasks through the sudo command.
SmartLock directories
In a SmartLock directory, you can commit a file to a WORM state manually or you can configure SmartLock to commit the file
automatically. Before you can create SmartLock directories, you must activate a SmartLock license on the cluster.
You can create two types of SmartLock directories: enterprise and compliance. However, you can create compliance directories
only if the PowerScale cluster has been set up in SmartLock compliance mode during initial configuration.
Enterprise directories enable you to protect your data without restricting your cluster to comply with regulations defined by U.S.
Securities and Exchange Commission rule 17a-4. If you commit a file to a WORM state in an enterprise directory, the file can
never be modified and cannot be deleted until the retention period passes.
However, if you own a file and have been assigned the ISI_PRIV_IFS_WORM_DELETE privilege, or you are logged in through
the root user account, you can delete the file through the privileged delete feature before the retention period passes.
The privileged delete feature is not available for compliance directories. Enterprise directories reference the system clock to
facilitate time-dependent operations, including file retention.
Compliance directories enable you to protect your data in compliance with the regulations defined by U.S. Securities and
Exchange Commission rule 17a-4. If you commit a file to a WORM state in a compliance directory, the file cannot be modified
or deleted before the specified retention period has expired. You cannot delete committed files, even if you are logged in
to the compliance administrator account. Compliance directories reference the compliance clock to facilitate time-dependent
operations, including file retention.
You must set the compliance clock before you can create compliance directories. You can set the compliance clock only once,
after which you cannot modify the compliance clock time. You can increase the retention time of WORM committed files on an
individual basis, if desired, but you cannot decrease the retention time.
The compliance clock is controlled by the compliance clock daemon. Root and compliance administrator users could disable
the compliance clock daemon, which would have the effect of increasing the retention period for all WORM committed files.
However, this is not recommended.
NOTE: Using WORM exclusions, files inside a WORM compliance or enterprise domain can be excluded from having a
WORM state. All files inside the excluded directory behave as normal, non-SmartLock-protected files. For more
information, see the OneFS CLI Administration Guide.
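As a rough sketch, an enterprise SmartLock directory can be created and a file committed from the CLI roughly as follows. The isi worm domains create option shown (--default-retention), the directory paths, and the commit-by-removing-write-permissions step are assumptions for illustration; confirm the syntax in the OneFS CLI Administration Guide.

# Hedged example: create an enterprise WORM domain with a 30-day default retention.
# The option name --default-retention is an assumption; verify before use.
isi worm domains create /ifs/data/smartlock_ent --default-retention 30D

# Copy a finished file into the domain, then commit it to a WORM state by
# removing its write permissions (manual commit).
cp /ifs/data/staging/report.pdf /ifs/data/smartlock_ent/report.pdf
chmod a-w /ifs/data/smartlock_ent/report.pdf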
SmartLock considerations
● If a file is owned exclusively by the root user, and the file exists on a PowerScale cluster that is in SmartLock compliance
mode, the file will be inaccessible: the root user account is disabled in compliance mode. For example, if a file is assigned
root ownership on a cluster that has not been configured in compliance mode, and then the file is replicated to a cluster
in compliance mode, the file becomes inaccessible. This can also occur if a root-owned file is restored onto a compliance
cluster from a backup.
● It is recommended that you create files outside of SmartLock directories and then transfer them into a SmartLock directory
after you are finished working with the files. If you are uploading files to a cluster, it is recommended that you upload the
files to a non-SmartLock directory, and then later transfer the files to a SmartLock directory. If a file is committed to a
WORM state while the file is being uploaded, the file will become trapped in an inconsistent state.
● Files can be committed to a WORM state while they are still open. If you specify an autocommit time period for a directory,
the autocommit time period is calculated according to the length of time since the file was last modified, not when the file
was closed. If you delay writing to an open file for more than the autocommit time period, the file is automatically committed
to a WORM state, and you will not be able to write to the file.
● In a Microsoft Windows environment, if you commit a file to a WORM state, you can no longer modify the hidden or archive
attributes of the file. Any attempt to modify the hidden or archive attributes of a WORM committed file generates an error.
This can prevent third-party applications from modifying the hidden or archive attributes.
● You cannot rename a SmartLock compliance directory. You can rename a SmartLock enterprise directory only if it is empty.
● You can rename files in SmartLock compliance or enterprise directories only if the files are uncommitted.
● You cannot move:
○ SmartLock directories within a WORM domain
○ SmartLock directories in a WORM domain into a directory in a non-WORM domain
○ directories in a non-WORM domain into a SmartLock directory in a WORM domain
Retention periods
A retention period is the length of time that a file remains in a WORM state before being released from a WORM state. You can
configure SmartLock directory settings that enforce default, maximum, and minimum retention periods for the directory.
If you manually commit a file, you can optionally specify the date that the file is released from a WORM state. You can configure
a minimum and a maximum retention period for a SmartLock directory to prevent files from being retained for too long or too
short a time period. It is recommended that you specify a minimum retention period for all SmartLock directories.
For example, assume that you have a SmartLock directory with a minimum retention period of two days. At 1:00 PM on Monday,
you commit a file to a WORM state, and specify the file to be released from a WORM state on Tuesday at 3:00 PM. The file will
be released from a WORM state two days later on Wednesday at 1:00 PM, because releasing the file earlier would violate the
minimum retention period.
You can also configure a default retention period that is assigned when you commit a file without specifying a date to release
the file from a WORM state.
4. From the Privileged Delete list, specify whether to enable the root user to delete files that are currently committed to a
WORM state.
NOTE: This functionality is available only for SmartLock enterprise directories.
5. In the Path field, type the full path of the directory you want to make into a SmartLock directory.
The specified path must belong to an empty directory on the cluster.
6. Optional: To specify a default retention period for the directory, click Apply a default retention span and then specify a
time period.
The default retention period will be assigned if you commit a file to a WORM state without specifying a day to release the file
from the WORM state.
7. Optional: To specify a minimum retention period for the directory, click Apply a minimum retention span and then specify
a time period.
The minimum retention period ensures that files are retained in a WORM state for at least the specified period of time.
8. Optional: To specify a maximum retention period for the directory, click Apply a maximum retention span and then specify
a time period.
The maximum retention period ensures that files are not retained in a WORM state for more than the specified period of
time.
9. Click Create Domain.
10. Click Create.
Privileged Delete Indicates whether files that are committed to a WORM state in the directory can be deleted through the
privileged delete functionality. To access the privileged delete functionality, you must be assigned the ISI_PRIV_IFS_WORM_DELETE
privilege and own the file that you are deleting, or you must be logged in through the root or compadmin user account.
on Files committed to a WORM state can be deleted through the isi worm files delete command.
off Files committed to a WORM state cannot be deleted, even through the isi worm files delete command.
disabled Files committed to a WORM state cannot be deleted, even through the isi worm files delete command. After this setting is applied, it cannot be modified.
Apply a default retention span The default retention period for the directory. If a user does not specify a date to release a file from a WORM state, the default retention period is assigned.
Enforce a minimum retention span The minimum retention period for the directory. Files are retained in a WORM state for at least the specified amount of time, even if a user specifies an expiration date that results in a shorter retention time period.
Enforce a maximum retention span The maximum retention period for the directory. Files cannot be retained in a WORM state for more than the specified amount of time, even if a user specifies an expiration date that results in a longer retention time period.
Automatically commit files after a specific period of time The autocommit time period for the directory. After a file exists in this SmartLock directory without being modified for the specified time period, the file is automatically committed to a WORM state.
Override retention periods and protect all The override retention date for the directory. Files committed to a WORM state are not released from a WORM state until after the specified date, regardless of the maximum retention period for the directory or whether a user specifies an earlier date to release a file from a WORM state.
Other touch command input formats are also allowed to modify the access time of files. For example, the command:
cp -p <source> <destination>
copies the contents of source to destination, and then updates the attributes of destination to match the attributes of
source, including setting the same access time.
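As an illustration, and assuming (as the touch and cp -p discussion above implies) that the file's access time is read as its release date when the file is committed, you could set a release date and then commit the file by removing write permissions. The path below is hypothetical.

# Hedged example: set the release date to 1 January 2026 at 12:00 by setting the
# access time, then commit the file to a WORM state by removing write permissions.
touch -at 202601011200 /ifs/data/smartlock_ent/report.pdf
chmod a-w /ifs/data/smartlock_ent/report.pdf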
3. Specify the name of the file you want to set a retention period for by creating an object.
The file must exist in a SmartLock directory.
3. Delete the WORM committed file by running the isi worm files delete command.
The following command deletes /ifs/data/SmartLock/directory1/file:
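isi worm files delete /ifs/data/SmartLock/directory1/file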
WORM Domains
ID Root Path
------------------------------------
65539 /ifs/data/SmartLock/directory1
NOTE: If you use SyncIQ to create a replication policy for a SmartLock compliance directory, the SyncIQ and SmartLock
compliance domains must be configured at the same root directory level. A SmartLock compliance domain cannot be nested
inside a SyncIQ domain.
Related concepts
Protection domains overview
Self-encrypting drives
Self-encrypting drives store data on a cluster that is specially designed for data-at-rest encryption.
Data-at-rest encryption on self-encrypting drives occurs when data that is stored on a device is encrypted to prevent
unauthorized data access. All data that is written to the storage device is encrypted when it is stored, and all data that is
read from the storage device is decrypted when it is read. The stored data is encrypted with a 256-bit AES data-encryption key
and decrypted in the same manner. OneFS controls data access by combining the drive authentication key with data-encryption
keys.
NOTE: All nodes in a cluster must be of the self-encrypting drive type. Mixed nodes are not supported.
Data migrations and upgrades to a cluster with self-
encrypting drives
You can have data from your existing cluster migrated or upgraded to a cluster of nodes made up of self-encrypting drives
(SEDs). As a result, all migrated and future data on the new cluster is encrypted.
Upgrading from a cluster with SEDs using on-disk keys to a cluster with SEDs using an external key management server retains
the on-disk keys. After the upgrade, you must migrate your drives to the external key management server.
NOTE: Data migration and upgrades to a cluster with SEDs must be performed by PowerScale Professional Services. For
more information, contact your Dell Technologies representative.
Chassis and drive states
You can view chassis and drive state details.
In a cluster, the combination of nodes in different degraded states determines whether read requests, write requests, or both
work. A cluster can lose write quorum but keep read quorum. OneFS provides details about the status of chassis and drives in
your cluster. The following table describes all the possible states that you may encounter in your cluster.
| State | Description | Interface | Error state |
|---|---|---|---|
| WRONG_TYPE | The drive type is wrong for this node. For example, a non-SED drive in a SED node, or SAS instead of the expected SATA drive type. | Command-line interface only | |
| BOOT_DRIVE | Unique to the A100 drive, which has boot drives in its bays. | Command-line interface only | |
| SED_ERROR | The drive cannot be acknowledged by the OneFS system. NOTE: In the web administration interface, this state is included in Not available. | Command-line interface, web administration interface | X |
| ERASE | The drive is ready for removal but needs your attention because the data has not been erased. You can erase the drive manually to guarantee that data is removed. NOTE: In the web administration interface, this state is included in Not available. | Command-line interface only | |
Node 1, [ATTN]
Bay 1 Lnum 11 [SMARTFAIL] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [HEALTHY] SN:Z296LBP4 00009330EYE03 /dev/da3
Bay 4 Lnum 8 [HEALTHY] SN:Z296LCJW 00009327BYE03 /dev/da4
Bay 5 Lnum 7 [HEALTHY] SN:Z296M8XB 00009330KYE03 /dev/da5
Bay 6 Lnum 6 [HEALTHY] SN:Z295LXT7 000093172YE03 /dev/da6
Bay 7 Lnum 5 [HEALTHY] SN:Z296M8ZF 00009330KYE03 /dev/da7
Bay 8 Lnum 4 [HEALTHY] SN:Z296M8SD 00009330EYE03 /dev/da8
Bay 9 Lnum 3 [HEALTHY] SN:Z296M8QA 00009330EYE03 /dev/da9
Bay 10 Lnum 2 [HEALTHY] SN:Z296M8Q7 00009330EYE03 /dev/da10
Bay 11 Lnum 1 [HEALTHY] SN:Z296M8SP 00009330EYE04 /dev/da11
Bay 12 Lnum 0 [HEALTHY] SN:Z296M8QZ 00009330JYE03 /dev/da12
If you run the isi dev list command after the smartfail completes successfully, the system displays output similar to the
following example, showing the drive state as REPLACE:
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [HEALTHY] SN:Z296LBP4 00009330EYE03 /dev/da3
Bay 4 Lnum 8 [HEALTHY] SN:Z296LCJW 00009327BYE03 /dev/da4
Bay 5 Lnum 7 [HEALTHY] SN:Z296M8XB 00009330KYE03 /dev/da5
Bay 6 Lnum 6 [HEALTHY] SN:Z295LXT7 000093172YE03 /dev/da6
Bay 7 Lnum 5 [HEALTHY] SN:Z296M8ZF 00009330KYE03 /dev/da7
Bay 8 Lnum 4 [HEALTHY] SN:Z296M8SD 00009330EYE03 /dev/da8
Bay 9 Lnum 3 [HEALTHY] SN:Z296M8QA 00009330EYE03 /dev/da9
Bay 10 Lnum 2 [HEALTHY] SN:Z296M8Q7 00009330EYE03 /dev/da10
Bay 11 Lnum 1 [HEALTHY] SN:Z296M8SP 00009330EYE04 /dev/da11
Bay 12 Lnum 0 [HEALTHY] SN:Z296M8QZ 00009330JYE03 /dev/da12
If you run the isi dev list command while the drive in bay 3 is being smartfailed, the system displays output similar to the
following example:
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [SMARTFAIL] SN:Z296LBP4 00009330EYE03 N/A
Bay 4 Lnum 8 [HEALTHY] SN:Z296LCJW 00009327BYE03 /dev/da4
Bay 5 Lnum 7 [HEALTHY] SN:Z296M8XB 00009330KYE03 /dev/da5
Bay 6 Lnum 6 [HEALTHY] SN:Z295LXT7 000093172YE03 /dev/da6
Bay 7 Lnum 5 [HEALTHY] SN:Z296M8ZF 00009330KYE03 /dev/da7
Bay 8 Lnum 4 [HEALTHY] SN:Z296M8SD 00009330EYE03 /dev/da8
Bay 9 Lnum 3 [HEALTHY] SN:Z296M8QA 00009330EYE03 /dev/da9
Bay 10 Lnum 2 [HEALTHY] SN:Z296M8Q7 00009330EYE03 /dev/da10
Bay 11 Lnum 1 [HEALTHY] SN:Z296M8SP 00009330EYE04 /dev/da11
Bay 12 Lnum 0 [HEALTHY] SN:Z296M8QZ 00009330JYE03 /dev/da12
If the data on the smartfailed drive has not been erased, the system displays output similar to the following example, showing
the drive state as ERASE:
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [ERASE] SN:Z296LBP4 00009330EYE03 /dev/da3
Drives showing the ERASE state can be safely retired, reused, or returned.
Any further access to a drive showing the ERASE state requires the authentication key of the drive to be set to its default
manufactured security ID (MSID). This action erases the data encryption key (DEK) on the drive and renders any existing data
on the drive permanently unreadable.
24
S3 Support
This section contains the following topics:
Topics:
• S3
• Server Configuration
• Bucket handling
• Object handling
• Authentication
• Access key management
S3
OneFS supports the Amazon Web Services Simple Storage Service (AWS S3) protocol for reading data from and writing data to
the OneFS platform.
The S3-on-OneFS technology enables the usage of Amazon Web Services Simple Storage Service (AWS S3) protocol to store
data in the form of objects on top of the OneFS file system storage. The data resides under a single namespace. The AWS
S3 protocol becomes a primary resident of the OneFS protocol stack, along with NFS, SMB, and HDFS. The technology allows
multiprotocol access to objects and files.
The S3 protocol supports bucket and object creation, retrieving, updating, and deletion. Object retrievals and updates are
atomic. Bucket properties can be updated. Objects are accessible using NFS and SMB as normal files, providing cross-protocol
support.
To use S3, administrators generate access IDs and secret keys for authenticated users to use for access.
ETag consistency is implemented for the S3 protocol on OneFS.
S3 concepts
This section describes some of the key concepts related to the S3 protocol.
Buckets: A bucket is a container for objects stored in S3. Every object is contained in a bucket. Buckets organize the S3
namespace at the highest level, identify the account responsible for storage and data transfer charges, and play a role in access
control.
Objects: Objects are the fundamental entities stored in S3. An object is uniquely identified within a bucket by a key name and
version ID. An object consists of object data and metadata. The key is the object name; the value is the data portion, which is
not visible to users; and the metadata is a set of name-value pairs that describe the object, for example content-type, size, and
last modified. Custom metadata can also be specified at the time the object is stored.
Keys: A key is the unique identifier for an object within a bucket. Every object in a bucket has a key and a value.
An access ID and a secret key are used to authenticate a user. The access ID and secret key are created by the administrator
and mapped to users from authentication providers (such as UNIX, AD, or LDAP).
Server Configuration
The S3 settings are defined in the registry.
The server configuration settings for the S3 protocol are separated into global service configuration and per-zone configuration.
Global S3 settings
You can enable and disable the S3 service on the OneFS cluster, and set the HTTP and HTTPS ports for the S3 protocol across
the cluster.
You can view or modify the global S3 settings for service related parameters from the Global settings page.
Enable S3 service
You can enable the S3 service.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, select the Enable S3 service check box.
The S3 service is enabled.
Disable S3 service
You can disable the S3 service.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, clear the Enable S3 service check box.
The S3 service is disabled.
View ports
You can view already configured HTTPS and HTTP ports.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, view the details. The default values for the ports are:
● HTTPS: 9021
● HTTP: 9020
If the Enable S3 HTTP check box is clear, only S3 HTTPS is supported.
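To confirm that the service is reachable on the configured ports from a client, a quick hedged check might look like the following. The cluster host name is a placeholder, and -k is used only to skip certificate verification for this connectivity test.

# Hedged example: probe the default S3 ports from a client machine.
# Replace cluster.example.com with your cluster's SmartConnect name or IP address.
curl -k https://cluster.example.com:9021/
curl http://cluster.example.com:9020/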
Modify ports
You can modify already configured HTTPS and HTTP ports.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, select the Enable S3 HTTP check box to enable S3 HTTP support.
If the Enable S3 HTTP check box is clear, only S3 HTTPS is supported.
4. Modify the port details. The default values for the ports are:
● HTTPS: 9021
● HTTP: 9020
You can modify the value for both the ports. Click the arrows inside the box or enter a value in the box.
Click Revert changes to go back to previous settings. Revert changes is enabled only if any changes are made.
5. Click Save changes.
Save changes is enabled only if you have modified the settings.
The changes are saved, and the following message appears.
S3 zone settings
Access zones provide default locations for creating buckets.
You can view or modify specific S3 settings of an access zone from the Zone settings page.
If you are creating a bucket, and a zone ID or name is not provided, the creation of the bucket defaults to the System zone.
Certificates
Server certificates are a requirement for the server to set up a TLS handshake.
On a OneFS cluster, the certificate manager manages all the certificates. The certificate manager is designed to provide a
generic programmatic way for accessing and configuring certificates on the cluster.
The HTTPS certificates used by S3 are handled by the isi certificate manager. The Apache instance uses the same store.
Bucket handling
Buckets are the containers for objects. You can have one or more buckets. For each bucket, you can control access to it (who
can create, delete, and list objects in the bucket).
Buckets are a similar concept to exports in NFS and shares in SMB. A major difference between buckets and NFS exports is that
any user with valid credentials can create a bucket on the server, and the bucket is owned by that user.
OneFS now supports these bucket and account operations:
● PUT bucket
● GET bucket (list objects in a bucket)
● GET bucket location
● DELETE bucket
● GET Bucket acl
● PUT Bucket acl
● HEAD Bucket
● List Multipart Uploads
● GET Service
Managing buckets
You can access the S3 bucket management feature from OneFS web administration interface.
List buckets
View a list of buckets. You can also sort and filter the list.
1. Click Protocols > Objects Storage (S3) .
The Buckets page appears with the list of buckets.
2. View the list of buckets in a tabular format. The four columns are: name, path, owner, and actions.
● You can filter the buckets by zone: In the Current access zone list, select the access zone. The buckets for the given
zone are displayed. The default selected value is System zone.
● You can filter the buckets by owner: In the Owner box, enter the name of the owner and click Apply. The buckets for
the given owner are displayed.
3. Click the arrows on the table header of the columns to sort the buckets based on name, path, and owner fields.
4. Click <<, <, and > to go to the first page, previous page, and next page respectively. Click Last to refresh and reload all the
buckets and go to page 1.
Create a bucket
You can create a bucket for a user.
1. Click Protocols > Objects Storage (S3) .
● In the Current access zone list, select the access zone where you want to create the bucket. The default selected
value is System zone.
2. On the Buckets page, click Create Bucket.
The Create a Bucket dialog box appears.
3. Enter the following information in the Create a Bucket dialog box:
● Name: Enter the name of the bucket. The name must be between 3 and 63 characters in length. Bucket name cannot
contain characters other than a-z, 0-9, and '-'. This is a mandatory field.
● Owner: Enter the name of the bucket owner. Click Select user to search for a user. Then, in the Search user dialog
box, enter the details in the User and Providers boxes and click Search. This is a mandatory field.
● Path: Enter the file path where you plan to store the bucket. Click Browse to select a path to a directory and then click
Select. Select the Create bucket path if it does not exist check box if a path does not exist. This is a mandatory field.
● Description: Enter a description for the bucket. This is an optional field.
● ACL: Click Add ACL. Then, click Select user and search for the user. In the Permissions list, select the permission for
your bucket. Whenever you want to specify ACLs, you must select the user or grantee and the permissions for that user or
grantee. This is an optional field.
4. Click Create Bucket.
The bucket is successfully created, and the following message appears:
Delete a bucket
You can delete a bucket.
1. Click Protocols > Objects Storage (S3) .
2. On the Buckets page, you can see the list of buckets. Under the Actions column, click Delete next to the bucket that you
want to delete.
The Confirm delete dialog box appears.
3. Click Delete. The bucket is deleted.
Click Cancel to exit the Confirm delete dialog box and return to the list of buckets.
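Because any user with valid credentials can create buckets over the S3 protocol itself, the same operations can also be driven from standard S3 tooling pointed at the OneFS endpoint. The following is a hedged sketch using the AWS CLI; the endpoint address, port, bucket name, and profile name are placeholders for illustration.

# Hedged example: create and then delete a bucket against the OneFS S3 endpoint.
aws --endpoint-url https://cluster.example.com:9021 --profile onefs \
    s3api create-bucket --bucket demo-bucket
aws --endpoint-url https://cluster.example.com:9021 --profile onefs \
    s3api delete-bucket --bucket demo-bucket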
Object handling
An object consists of a file and optionally any metadata that describes that file. To store an object in S3, you upload the file that
you want to store to a bucket. You can set permissions on the object and any metadata.
S3 stores data in the form of objects, which are key-value pairs. An object is identified using the key. The data is stored inside
the object as a value. In OneFS, files are used to represent objects: an object key maps to a path name, and an object value
maps to the contents of the file.
An object can have associated metadata with size limits. There can be system metadata, which are generated for every object.
Also, there can be user metadata that applications create for selected objects.
Objects reside within buckets. The life cycle and access of an object depends on the policies and ACLs enforced on the bucket.
Also, each object can have its own ACLs.
An object key can have prefixes or delimiters, which are used to organize them efficiently.
Object key
The object key is the path of the file from the root of the bucket directory.
For OneFS, the object key is treated as a file path from the root of the bucket. "/" is treated as the path delimiter for
directories. The limitations on object keys are listed below:
● Cannot use " / " (it is treated as a delimiter).
● Cannot use "." and ".." as a key or as part of a prefix.
● If a snapshot is already present, .snapshot cannot be created.
● The maximum key length, including prefixes and delimiters, is 1023 bytes.
● The length of each prefix (each key component split by /) is limited to 255 bytes.
● Keys can use ASCII or UTF-8.
● Other OneFS data services may have problems if the path length of the file exceeds 1024 bytes.
● Cannot place an object under the .isi_s3 directory.
● Cannot place a file object if a directory with the same name already exists.
Object Metadata
An object can have two types of metadata, system metadata and user-defined metadata.
Both system and user-defined metadata are defined as a set of name-value pairs. In OneFS, system metadata gets stored as an
inode attribute and the user-defined metadata gets stored as an extended attribute of the file.
Multipart upload
The S3 protocol allows you to upload a large file as multiple parts rather than as a single request.
The client initiates a multipart upload with a POST request with the uploads query parameter and the object key. On the
cluster, a unique userUploadId string is generated by concatenating the bucket ID and upload ID, and is returned to the client.
The pair of bucket ID and upload ID is also stored in a per-zone SBT. A directory, .s3_parts_userUploadID, is created in the
target directory to store the parts. After it is created, the directory and the kvstore entry persist until the multipart operation
is either completed or stopped. Parts are uploaded with a part number and stored in the temporary directory. A part has a
maximum size of 5 GB and, except for the last part, a minimum size of 5 MB. Completing the multipart upload is handled by
concatenating the parts to a temporary file under the .isi_s3 directory. Once the concatenation succeeds, the temporary file is
copied to the target, the .s3_parts_userUploadID directory is deleted, and the SBT entry is removed.
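For illustration, the same flow driven from a client with the AWS CLI against the OneFS endpoint might look like the following hedged sketch. The endpoint, profile, bucket, key, and file names are placeholders, and <UploadId> is the value returned by the first call.

# Hedged example: multipart upload against a OneFS S3 bucket.
# 1. Start the upload and note the UploadId in the response.
aws --endpoint-url https://cluster.example.com:9021 --profile onefs \
    s3api create-multipart-upload --bucket demo-bucket --key big/object.bin

# 2. Upload each part (at least 5 MB except the last), recording each returned ETag.
aws --endpoint-url https://cluster.example.com:9021 --profile onefs \
    s3api upload-part --bucket demo-bucket --key big/object.bin \
    --part-number 1 --upload-id <UploadId> --body part1.bin

# 3. Complete the upload with the list of part numbers and ETags (parts.json is a placeholder).
aws --endpoint-url https://cluster.example.com:9021 --profile onefs \
    s3api complete-multipart-upload --bucket demo-bucket --key big/object.bin \
    --upload-id <UploadId> --multipart-upload file://parts.json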
Etag
S3 may use an MD5 checksum as an ETag. This value is specified in the HTTP header "Content-MD5".
ETag consistency is implemented for the S3 protocol on OneFS to calculate an MD5 hash for a single PUT object operation when
no ETag is provided. This feature provides compatibility with AWS S3 behavior and helps the S3 service calculate and return the
MD5 hash for single-object PUT operations when an ETag is not supplied by the client.
Two new configuration options, use-md5-for-etag and validate-content-md5, are added to control when the MD5
hash is calculated. The options are in the S3 zone settings and can be configured on a per-zone basis. By default, both options
are disabled and the MD5 hash is not calculated. If validate-content-md5 is set to true, the MD5 hash is
calculated provided the PUT object request has a Content-MD5 header to check the content integrity. If use-md5-for-etag is set
to true, the MD5 hash is calculated on request if no Content-MD5 is provided, and is stored as the ETag. If both options are set
to true, the MD5 hash is always calculated.
PUT object
The PUT object operation allows you to add an object to a bucket. You must have the WRITE permission on a bucket to add an
object to it.
To emulate the atomicity guarantee of an S3 PUT object, objects are written to a temporary directory, .isi_s3, before being
moved to the target path. On PUT, directories are implicitly created from writing the object. Implicitly created directories are
owned by the object owner and have permissions that are inherited from the parent directory.
Authentication
S3 uses its own method of authentication which relies on access keys that are generated for the user.
The access ID is sent in the HTTP request and is used to identify the user. The secret key is used in the signing algorithm.
There are two signing algorithms, Version 2 (v2) and Version 4 (v4).
S3 requests can either be signed or unsigned. A signed request contains an access ID and a signature. The access ID indicates
who the user is. The included signature value is the result of hashing several header values in the request with a secret key.
The server must use the access ID to retrieve a copy of the secret key, recompute the expected hash value of the request, and
compare against the signature sent. If they match, then the requester is authenticated, and any header value that was used in
the signature is now verified to be unchanged as well.
An S3 operation is performed only after the following criteria are met:
● The signature, using AWS Signature Version 4 or AWS Signature Version 2, is verified and validated against the S3 request.
● After verification is complete, the user credential is retrieved using the access ID.
● The user credential is authorized against the bucket ACL.
● A traversal check of the user credential against the object path succeeds.
● An access check of the user credential against the object ACL succeeds.
Access keys
On OneFS, user keys are created using PAPI and stored in the kvstore.
The entry format in the kvstore is access_id:secret_key. The secret key is the randomly generated base64 string. The
access key is formatted as ZoneId_username_accid. In the S3 protocol, on receiving an authenticated request, the access
key is used to retrieve the secret key from the keystore. The signature is then generated on the server side, using the header
fields from the request and the user's secret key. If the signature matches, the request is successfully authenticated. The
username and zone information encoded in the access ID is used to generate the user security context and the request is
performed. By default, when a new key is created, the previous user key remains valid for 10 minutes. You can change this
period to up to 1440 minutes (24 hours).
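As a usage sketch, a client can place the generated access ID and secret key in a named profile and address the cluster endpoint directly. The profile name, endpoint address, and the example access ID (in the ZoneId_username_accid format described above) are illustrative assumptions.

# Hedged example: store OneFS-issued S3 credentials in an AWS CLI profile and use them.
aws configure set aws_access_key_id 1_myuser_accid --profile onefs
aws configure set aws_secret_access_key '<secret-key>' --profile onefs

# Signed request using the stored credentials (Signature Version 4 is the AWS CLI default;
# OneFS also accepts Signature Version 2).
aws --endpoint-url https://cluster.example.com:9021 --profile onefs s3api list-buckets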
Access control
In S3, permissions on objects and buckets are defined by an ACL.
S3 supports five grant permission types: READ, WRITE, READ_ACP, WRITE_ACP, and FULL_CONTROL. The FULL_CONTROL
grant is a shorthand for all grants. Each ACE consists of one grantee and one grant. The grantee can either be a user or one
of the defined groups that OneFS S3 supports, Everyone and Authenticated Users. S3 ACLs are limited to a maximum of 100
entries.
ACL concepts
In S3, you must understand some concepts that are related to an ACL.
Grantee: S3 ACL grantees can be specified as either an ID or an email address to an AWS account. The ID is a randomly
generated value for each user. For the OneFS S3, only ID is supported and the ID is set to be the username or group of the
grantee.
S3 Groups: S3 has two predefined groups, Everyone and Authenticated Users. On OneFS, Everyone is translated to the
integrated World group SID S-1-1-0 and Authenticated Users is translated to the integrated group Authenticated User SID
S-1-5-11.
Canned ACL: When specifying ACLs in S3, the user can either specify the ACL as a list of grants or use a canned ACL.
The canned ACL is a predefined ACL list that is added to the file. The supported canned ACLs are private, public-read,
public-read-write, authenticated-read, bucket-owner-read, and bucket-owner-full-control.
Default ACL: When objects and buckets are created in S3 by a PUT operation, the user has the option of setting the ACL. If no
ACL is specified, then the private canned ACL is used by default, granting full control to the creator.
Object ACL
S3 ACLs are a legacy access control mechanism that predates Identity and Access Management (IAM).
On OneFS, object ACLs are translated to NTFS ACLs and stored on disk. The table below lists the mapping of S3 grants to
NTFS grants. The difference in the OneFS S3 implementation is that the WRITE grant is allowed on object ACLs. In S3, the
WRITE grant has no meaning because the S3 protocol does not allow modifying objects.
The WRITE grant instead allows an object to be modified through other access protocols. When translating S3 ACLs to NTFS
ACLs for PUT object ACL operations, each entry is translated as shown in the table. When translating NTFS ACLs to S3 ACLs,
as needed for GET object ACL operations, some entries may not be shown. Because NTFS ACLs have a richer set of grants,
permissions that are not in the table are omitted. Deny ACEs are also omitted because S3 ACLs do not support deny entries.
An S3 ACL can also have one of the following pre-defined groups as a grantee:
● Authenticated Users: Any signed request is included in this group.
● All Users: Any request, signed or unsigned, is included in this group.
● Log Delivery Group: This group represents the log server that writes server access logs in the bucket.
Object ACLs translate to the following S3 permissions:
A difference in the OneFS implementation is the implicit owner ACE permission. In S3, the object owner is implicitly granted
FULL_CONTROL, regardless of the ACL on the file. To emulate this behavior on OneFS, an ACE granting FULL_CONTROL
to the object owner is appended to the end of any ACL set by S3 that does not grant the owner the FULL_CONTROL privilege.
Bucket ACL
S3 ACLs are a legacy access control mechanism that predates Identity and Access Management (IAM).
ACLs set on the bucket are written as part of the bucket configuration in Tardis. The ACLs define which S3 bucket operations
are allowed by which user.
Directory permissions
In S3, directories may be implicitly created on a PUT object for keys with delimiters.
For directories created this way, the user issuing the PUT object request becomes the owner of the directory, and the directory
mode is copied from the parent.
S3 Permissions
The following is a list of S3 permissions which OneFS supports.
● AbortMultipartUpload
● DeleteObject
● DeleteObjectVersion
● GetObject
● GetObjectAcl
● GetObjectVersion
● GetObjectVersionAcl
● ListMultipartUploadParts
● PutObject
● PutObjectAcl
● PutObjectVersionAcl
● CreateBucket
● DeleteBucket
● ListBucket
● ListBucketVersions
● ListAllMyBuckets
● ListBucketMultipartUploads
● GetBucketAcl
● PutBucketAcl
Some of these permissions require special handling. The following permissions are handled outside of the bucket, and may be
handled in PAPI:
The following permissions interact with file system ACLs and require extra handling:
You cannot bypass file system permissions. If a user has the ListBucket permission, but does not have read permission on a
directory, then the user cannot list the files in that directory.
Anonymous authentication
Requests sent without an authentication header in S3 are run as the anonymous user.
An anonymous user is mapped to the user 'nobody'.
Managing keys
You can access the S3 key management feature from OneFS web administration interface.
You can generate secret keys and access IDs.
Click Close to exit the Search user dialog box.
7. Click Create a key.
Access ID and secret key are generated, and the following message appears:
Created S3 Key.
S3 key has been created successfully.
8. You can now view the secret key details in a tabular format. The columns are:
● Type: Displays the type of the key (existing or old)
● Secret Keys: Click Show key to view the key and Hide key to hide the key.
● Expiry time: The time for the existing key is not mentioned. If you have created more than one key, the expiry time of
the old key is displayed.
● Creating date: Displays the date and time when the key was created.
Created S3 Key.
S3 key has been created successfully.
The old key expires after the time limit that you have set.
The default expiry time is 10 minutes. The maximum time that you can set is 1440 minutes (24 hours), and an error similar to
the following appears if the time is exceeded:
Now that you have two secret keys, if you try to create a new key, the Force delete old key check box appear. You can
select the check box if you want to forcefully create the key. The first key that you had created is not valid anymore. Only
two keys appear at one time.
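If you prefer to manage keys from the command line, recent OneFS releases also provide an isi s3 keys command family. The exact subcommand and options shown here are assumptions; verify them with isi s3 keys --help on your cluster before use. Generating a key for a user might look similar to the following:
isi s3 keys create <username>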
5. In the Search user dialog box, enter the details in the User and Providers fields and click Search.
The name of the user, along with any description that is associated with the user, appears in a tabular format. You can
click Reset to clear the search and start over.
6. Select the user from the table and click Select user.
The user is selected.
Click Close to exit the Search user dialog box.
7. You can now view the secret key details in a tabular format. The columns are:
● Type: Displays the type of the key (existing or old)
● Secret Keys: Click Show key to view the key and Hide key to hide the key.
● Expiry time: No expiry time is displayed for the existing key. If you have created more than one key, the expiry time of
the old key is displayed.
● Creating date: Displays the date and time when the key was created.
8. Click Delete Keys at the bottom left of the table.
The key is deleted.
25
SmartQuotas
This section contains the following topics:
Topics:
• SmartQuotas overview
• Quota types
• Default quota type
• Usage accounting and limits
• Disk-usage calculations
• Quota notifications
• Quota notification rules
• Quota reports
• Creating quotas
• Managing quotas
• Managing quota notifications
• Email quota notification messages
• Managing quota reports
• Basic quota settings
• Advisory limit quota notification rules settings
• Soft limit quota notification rules settings
• Hard limit quota notification rules settings
• Limit notification settings
• Quota report settings
SmartQuotas overview
The SmartQuotas module is an optional quota-management tool that monitors and enforces administrator-defined storage limits.
Using accounting and enforcement quota limits, reporting capabilities, and automated notifications, SmartQuotas manages
storage use, monitors disk storage, and issues alerts when disk-storage limits are exceeded.
Quotas help you manage storage usage according to criteria that you define. Quotas are used for tracking—and sometimes
limiting—the amount of storage that a user, group, or directory consumes. Quotas help ensure that a user or department does
not infringe on the storage that is allocated to other users or departments. In some quota implementations, writes beyond the
defined space are denied, and in other cases, a simple notification is sent.
NOTE: Do not apply quotas to /ifs/.ifsvar/ or its subdirectories. If you limit the size of the /ifs/.ifsvar/
directory through a quota, and the directory reaches its limit, jobs such as File-System Analytics fail. A quota blocks older
job reports from being deleted from the /ifs/.ifsvar/ subdirectories to make room for newer reports.
The SmartQuotas module requires a separate license. For more information about the SmartQuotas module or to activate the
module, contact your Dell Technologies sales representative.
Quota types
OneFS uses the concept of quota types as the fundamental organizational unit of storage quotas. Storage quotas comprise a
set of resources and an accounting of each resource type for that set. Storage quotas are also called storage domains.
Creating a storage quota requires three identifiers:
● The directory to monitor
● Whether snapshots are tracked against the quota limit
● The quota type (directory, user, or group)
NOTE: Do not create quotas of any type on the OneFS root (/ifs). A root-level quota may significantly degrade
performance.
You can choose a quota type from the following entities:
User Either a specific user or default user (every user). Specific-user quotas that you configure take
precedence over a default user quota.
Group All members of a specific group or all members of a default group (every group). Any specific-group
quotas that you configure take precedence over a default group quota. Associating a group quota with a
default group quota creates a linked quota.
You can create multiple quota types on the same directory, but they must be of a different type or have a different snapshot
option. You can specify quota types for any directory in OneFS and nest them within each other to create a hierarchy of
complex storage-use policies.
Nested storage quotas can overlap. For example, the following quota settings ensure that the finance directory never exceeds 5
TB, while limiting the users in the finance department to 1 TB each:
● Set a 5 TB hard quota on /ifs/data/finance.
● Set 1 TB soft quotas on each user in the finance department.
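A hedged CLI sketch of the example above: the isi quota quotas create command exists in OneFS, but the exact flag names and accepted values shown here are assumptions, so verify them with isi quota quotas create --help before use.
isi quota quotas create /ifs/data/finance directory --hard-threshold 5T
isi quota quotas create /ifs/data/finance default-user --soft-threshold 1T --soft-grace 1W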
Usage accounting and limits
Storage quotas can perform two functions: they monitor storage space through usage accounting and they manage storage
space through enforcement limits.
You can configure OneFS quotas by usage type to track or limit storage use. The accounting option, which monitors disk-
storage use, is useful for auditing, planning, and billing. Enforcement limits set storage limits for users, groups, or directories.
Track storage consumption without specifying a storage limit: The accounting option tracks but does not limit disk-storage use. Using the accounting option for a quota, you can monitor inode count and physical and logical space resources. Physical space refers to all of the space that is used to store files and directories, including data, metadata, and data protection overhead in the domain. There are two types of logical space:
● File system logical size: Logical size of files as reported by the file system. The sum of all file sizes, excluding file metadata and data protection overhead.
● Application logical size: Logical size of a file as it appears to the application. The used file capacity from the application point of view, which is usually equal to or less than the file system logical size. However, in the case of a sparse file, the application logical size can be greater than the file system logical size. Application logical size includes capacity consumption on the cluster as well as data tiered to the cloud.
Storage consumption is tracked using file system logical size by default, which does not include protection overhead. As an example, by using the accounting option, you can do the following:
● Track the amount of disk space that is used by various users or groups to bill each user, group, or directory for only the disk space used.
● Review and analyze reports that help you identify storage usage patterns and define storage policies.
● Plan for capacity and other storage needs.
Specify storage limits: Enforcement limits include all of the functionality of the accounting option, plus the ability to limit disk storage and send notifications. Using enforcement limits, you can logically partition a cluster to control or restrict how much storage a user, group, or directory can use. For example, you can set hard- or soft-capacity limits to ensure that adequate space is always available for key projects and critical applications and to ensure that users of the cluster do not exceed their allotted storage capacity. Optionally, you can deliver real-time email quota notifications to users, group managers, or administrators when they are approaching or have exceeded a quota limit.
NOTE:
If a quota type uses the accounting-only option, enforcement limits cannot be used for that quota.
The actions of an administrator who is logged in as root may push a domain over a quota threshold. For example, changing the
protection level or taking a snapshot has the potential to exceed quota parameters. System actions such as repairs also may
push a quota domain over the limit.
The system provides three types of administrator-defined enforcement thresholds.
Hard Limits disk usage to a size that cannot be exceeded. If an operation, such as a file write,
causes a quota target to exceed a hard quota, the following events occur:
● The operation fails
● An alert is logged to the cluster
● A notification is issued to specified recipients.
Soft Allows a limit with a grace period that can be exceeded until the grace period expires. When a
soft quota is exceeded, an alert is logged to the cluster and a notification is issued to specified
recipients; however, data writes are permitted during the grace period.
If the soft threshold is still exceeded when the grace period expires, data writes fail, and a
notification is issued to the recipients you have specified.
Advisory An informational limit that can be exceeded. When an advisory quota threshold is exceeded,
an alert is logged to the cluster and a notification is issued to specified recipients. Advisory
thresholds do not prevent data writes.
Disk-usage calculations
For each quota that you configure, you can specify whether physical or logical space is included in future disk usage calculations.
You can configure quotas to include the following types of physical or logical space:
Most quota configurations do not need to include data protection overhead calculations, and therefore do not need to include
physical space, but instead can include logical space (either file system logical size, or application logical size). If you do not
include data protection overhead in usage calculations for a quota, future disk usage calculations for the quota include only the
logical space that is required to store files and directories. Space that is required for the data protection setting of the cluster is
not included.
Consider an example user who is restricted by a 40 GB quota that does not include data protection overhead in its disk usage
calculations. (The 40 GB quota includes file system logical size or application logical size.) If your cluster is configured with
a 2x data protection level and the user writes a 10 GB file to the cluster, that file consumes 20 GB of space, but the 10 GB
of data protection overhead is not counted in the quota calculation. In this example, the user has reached 25 percent of
the 40 GB quota by writing a 10 GB file to the cluster. This method of disk usage calculation is recommended for most quota
configurations.
If you include data protection overhead in usage calculations for a quota, future disk usage calculations for the quota include the
total amount of space that is required to store files and directories, in addition to any space that is required to accommodate
your data protection settings, such as parity or mirroring. For example, consider a user who is restricted by a 40 GB quota
that includes data protection overhead in its disk usage calculations. (The 40 GB quota includes physical size.) If your cluster is
configured with a 2x data protection level (mirrored) and the user writes a 10 GB file to the cluster, that file actually consumes
20 GB of space: 10 GB for the file and 10 GB for the data protection overhead. In this example, the user has reached 50 percent
of the 40 GB quota by writing a 10 GB file to the cluster.
NOTE: Cloned and deduplicated files are treated as ordinary files by quotas. If the quota includes data protection overhead,
the data protection overhead for shared data is not included in the usage calculation.
You can configure quotas to include the space that is consumed by snapshots. A single path can have two quotas applied to
it: one without snapshot usage, which is the default, and one with snapshot usage. If you include snapshots in the quota, more
files are included in the calculation than are in the current directory. The actual disk usage is the sum of the current directory
and any snapshots of that directory. You can see which snapshots are included in the calculation by examining the .snapshot
directory for the quota path.
NOTE: Only snapshots created after the QuotaScan job finishes are included in the calculation.
Quota notifications
Quota notifications are generated for enforcement quotas, providing users with information when a quota violation occurs.
Reminders are sent periodically while the condition persists.
Each notification rule defines the condition that is to be enforced and the action that is to be executed when the condition is
true. An enforcement quota can define multiple notification rules. When thresholds are exceeded, automatic email notifications
can be sent to specified users, or you can monitor notifications as system alerts or receive emails for these events.
Notifications can be configured globally, to apply to all quota domains, or be configured for specific quota domains.
Enforcement quotas support the following notification settings. A given quota can use only one of these settings.
Use the system settings for quota notifications: Uses the global default notification for the specified type of quota.
Create custom notification rules: Enables the creation of advanced, custom notifications that apply to the specific quota. Custom notifications can be configured for any or all of the threshold types (hard, soft, or advisory) for the specified quota.
Instant notifications: Includes the write-denied notification, triggered when a hard threshold denies a write, and the threshold-exceeded notification, triggered at the moment a hard, soft, or advisory threshold is exceeded. These are one-time notifications because they represent a discrete event in time.
Ongoing notifications: Generated on a scheduled basis to indicate a persisting condition, such as a hard, soft, or advisory threshold being over a limit or a soft threshold's grace period being expired for a prolonged period.
Quota reports
The OneFS SmartQuotas module provides reporting options that enable administrators to manage cluster resources and analyze
usage statistics.
Storage quota reports provide a summarized view of the past or present state of the quota domains. After raw reporting data
is collected by OneFS, you can produce data summaries by using a set of filtering parameters and sort types. Storage-quota
reports include information about violators, grouped by threshold types. You can generate reports from a historical data sample
or from current data. In either case, the reports are views of usage data at a given time. OneFS does not provide reports on
data aggregated over time, such as trending reports, but you can use raw data to analyze trends. There is no configuration limit
on the number of reports other than the space needed to store them.
OneFS provides the following data-collection and reporting methods:
● Scheduled reports are generated and saved on a regular interval.
● Ad hoc reports are generated and saved at the request of the user.
● Live reports are generated for immediate and temporary viewing.
Scheduled reports are placed by default in the /ifs/.isilon/smartquotas/reports directory, but the location is
configurable to any directory under /ifs. Each generated report includes quota domain definition, state, usage, and global
configuration settings. By default, ten reports are kept at a time, and older reports are purged. You can create ad hoc reports at
any time to view the current state of the storage quotas system. These live reports can be saved manually. Ad hoc reports are
saved to a location that is separate from scheduled reports to avoid skewing the timed-report sets.
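For example, to see which scheduled reports are currently stored in the default location, list the report directory from the command line (adjust the path if you have configured a different report directory):
ls /ifs/.isilon/smartquotas/reports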
Creating quotas
You can create two types of storage quotas to monitor data: accounting quotas and enforcement quotas. Storage quota limits
and restrictions can apply to specific users, groups, or directories.
The type of quota that you create depends on your goal.
● Enforcement quotas monitor and limit disk usage. You can create enforcement quotas that use any combination of hard
limits, soft limits, and advisory limits.
NOTE: Enforcement quotas are not recommended for snapshot-tracking quota domains.
● Accounting quotas monitor, but do not limit, disk usage.
NOTE: Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are running.
● To include capacity consumption on the cluster as well as data tiered to the cloud, select Application logical size. This
accounting quota does not measure file system space, but provides the application/user view of the used file capacity.
8. In the Quota limits area, select Track storage without specifying a storage limit.
9. Select how available space should be shown:
● Size of smallest hard or soft threshold
● Size of cluster
10. Click Create quota.
Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are in progress by checking Cluster
management > Job operations > Job summary.
Managing quotas
You can modify the configured values of a storage quota, and you can enable or disable a quota. You can also create quota limits
and restrictions that apply to specific users, groups, or directories.
Quota management in OneFS is simplified by the quota search feature, which helps you locate a quota or quotas by using filters.
You can unlink quotas that are associated with a parent quota, and configure custom notifications for quotas. You can also
disable a quota temporarily and then enable it when needed.
NOTE: Moving quota directories across quota domains is not supported.
To clear the result set and display all storage quotas, click Reset.
Manage quotas
Quotas help you monitor and analyze the current or historical use of disk storage. You can search for quotas, and you can view,
modify, delete, and unlink a quota.
You must run an initial QuotaScan job for the default or scheduled quotas, or the data that is displayed may be incomplete.
Before you modify a quota, consider how the changes will affect the file system and end users.
NOTE:
● The options to edit or delete a quota display only when the quota is not linked to a default quota.
● The option to unlink a quota is available only when the quota is linked to a default quota.
1. Click File System > SmartQuotas > Quotas and usage.
2. Optional: In the filter bar, select the options that you want to filter by.
● From the Filters list, select the quota type that you want to find (Directory, User, Group, Default user, or Default
group).
● To search for quotas that are over the limit, select Over limit from the Exceeded list.
● In the Path field, type a full or partial path. You can use the wildcard character (*) in the Path field.
● To search subdirectories, select Include children from the Recursive path list.
Quotas that match the search criteria appear in the Quotas and usage table.
3. Optional: Locate the quota that you want to manage. You can perform the following actions:
● To review or edit this quota, click View Details.
● To delete the quota, click Delete.
● To unlink a linked quota, click Unlink.
NOTE: Configuration changes for linked quotas must be made on the parent (default) quota that the linked quota
is inheriting from. Changes to the parent quota are propagated to all children. If you want to override configuration
from the parent quota, you must first unlink the quota.
2. At the command prompt, run the following command:
The system parses the file and imports the quota settings from the configuration file. Quota settings that you configured
before importing the quota configuration file are retained, and the imported quota settings are effective immediately.
If a directory service is used to authenticate users, you can configure notification mappings that control how email addresses
are resolved when the cluster sends a quota notification. If necessary, you can remap the domain that is used for quota email
notifications and you can remap Active Directory domains, local UNIX domains, or both.
a. Click Add a Notification Rule.
The Create a Notification Rule dialog box opens.
b. From the Rule type list, select the rule type to use.
c. In the Rule Settings area, select the notify option to use.
6. Click Create Rule.
7. Click Save Changes.
Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are in progress by checking Cluster
Management > Job Operations > Job Summary.
NOTE: You must be logged in to the web administration interface to perform this task.
Template Description
quota_email_template.txt A notification that disk quota has been exceeded.
quota_email_grace_template.txt A notification that disk quota has been exceeded (also
includes a parameter to define a grace period in number of
days).
quota_email_test_template.txt A notification test message you can use to verify that a user is
receiving email notifications.
If the default email notification templates do not meet your needs, you can configure your own custom email notification
templates by using a combination of text and SmartQuotas variables. Whether you choose to create your own templates or
modify the existing ones, make sure that the first line of the template file is a Subject: line. For example:
Subject: Disk quota exceeded
If you want to include information about the message sender, include a From: line immediately under the subject line. If you use
an email address, include the full domain name for the address. For example:
From: [email protected]
In an example of a customized quota_email_template.txt file, a From: line can be included, and the default text "Contact
your system administrator for details" at the end of the template can be changed to name the administrator.
The following shows the body of an example template. The SmartQuotas variables in it are resolved when the notification is sent:
<html><body>
<img src="https://i.dell.com/sites/imagecontent/app-merchandizing/responsive/Shop/Browse/
PublishingImages/dell-social-logo.jpg">
<h1>Quota Exceeded</h1><p></p>
<hr>
<p>The path <ISI_QUOTA_PATH> has exceeded the threshold <ISI_QUOTA_THRESHOLD> for this
<ISI_QUOTA_TYPE> quota.</p>
</body></html>
In this example, the <ISI_QUOTA_PATH>, <ISI_QUOTA_THRESHOLD>, and <ISI_QUOTA_TYPE> variables are substituted
with the values of the variables from the cluster in the body of the HTML email message. For example:
Quota Exceeded
The path /ifs/data/myfolder has exceeded the threshold 4.00G for this advisory quota.
cp /etc/ifs/quota_email_template.txt /ifs/data/quotanotifiers/
quota_email_template_copy.txt
edit /ifs/data/quotanotifiers/quota_email_template_copy.txt
11. In the Message template field, type the path for the message template, or click Browse to locate the template.
12. Optional: Click Create Rule
● To view a list of all quota reports in the directory, run the following command:
ls -a *.xml
● To view a specific quota report in the directory, run the following command:
ls <filename>.xml
Basic quota settings
When you create a storage quota, the following attributes must be defined, at a minimum. When you specify usage limits,
additional options are available for defining the quota.
Option Description
User Quota Create a quota for every current or future user that stores
data in the specified directory.
Group Quota Create a quota for every current or future group that stores
data in the specified directory.
Include snapshots in the storage quota Count all snapshot data in usage limits. This option cannot
be changed after the quota is created.
Enforce the limits for this quota based on physical size Base quota enforcement on storage usage which includes
metadata and data protection.
Enforce the limits for this quota based on file system logical size Base quota enforcement on storage usage which does not
include metadata and data protection.
Enforce the limits for this quota based on application logical size Base quota enforcement on storage usage which includes
capacity consumption on the cluster as well as data tiered to
the cloud.
Track storage without specifying a storage limit Account for usage only.
Specify storage limits Set and enforce advisory, soft, or absolute limits.
Advisory limit quota notification rules settings
You can configure custom quota notification rules for advisory limits for a quota. These settings are available when you select the option to use custom notification rules.
Message template: Type the path for the custom template, or click Browse to locate the custom template. Leave the field blank to use the default template. This setting applies both when the threshold is exceeded and while it remains exceeded.
Soft limit quota notification rules settings
You can configure custom quota notification rules for soft limits for a quota. These settings are available when you select the option to use custom notification rules. Each setting indicates whether it applies when the threshold is exceeded, while it remains exceeded, when the grace period has expired, and when write access is denied.
For settings that accept notification recipients, enter comma-separated email addresses. Duplicate email addresses are identified and only unique addresses are stored. You can enter a maximum of 1,024 characters of comma-separated email addresses.
Hard limit quota notification rules settings
You can configure custom quota notification rules for hard limits for a quota. These settings are available when you select the
option to use custom notification rules.
Message template: Type the path for the custom template, or click Browse to locate the custom template. Leave the field blank to use the default template. This setting applies both when the threshold is exceeded and while it remains exceeded.
Limit notification settings
Use the system settings for quota notifications: Use the default notification rules that you configured for the specified threshold type.
Create custom notification rules: Provide settings to create basic custom notifications that apply only to this quota.
Quota report settings
Setting Description
Report frequency Specifies the interval for this report to run: daily, weekly,
monthly, or yearly. You can use the following options to
further refine the report schedule.
Generate report every. Specify the numeric value for the
selected report frequency; for example, every 2 months.
Generate reports on. Select the day or multiple days to
generate reports.
Select report day by. Specify date or day of the week to
generate the report.
Generate one report per specified by. Set the time of day
to generate this report.
Generate multiple reports per specified day. Set the
intervals and times of day to generate the report for that day.
Scheduled report archiving Determines the maximum number of scheduled reports that
are available for viewing on the SmartQuotas Reports page.
Limit archive size for scheduled reports to a specified
number of reports. Type the integer to specify the maximum
number of reports to keep.
Archive Directory. Browse to the directory where you want
to store quota reports for archiving.
26
Storage Pools
This section contains the following topics:
Topics:
• Storage pools overview
• Storage pool functions
• Autoprovisioning
• Node pools
• Virtual hot spare
• Spillover
• Suggested protection
• Protection policies
• SSD strategies
• Other SSD mirror settings
• Global namespace acceleration
• L3 cache overview
• Tiers
• File pool policies
• Managing node pools in the web administration interface
• Managing L3 cache from the web administration interface
• Managing tiers
• Creating file pool policies
• Managing file pool policies
• Monitoring storage pools
Autoprovisioning of node pools: Automatically groups equivalent nodes into node pools for optimal storage efficiency and protection. At least three equivalent nodes are required for autoprovisioning to work.
Tiers: Groups node pools into logical tiers of storage. If you activate a SmartPools license for this feature, you can create custom file pool policies and direct different file pools to appropriate storage tiers.
Default file pool policy: Governs all file types and can store files anywhere on the cluster. Custom file pool policies, which require a SmartPools license, take precedence over the default file pool policy.
Requested protection: Specifies a requested protection setting for the default file pool, per node pool, or even on individual files. You can leave the default setting in place, or choose the suggested protection calculated by OneFS for optimal data protection.
Virtual hot spare: Reserves a portion of available storage space for data repair in the event of a disk failure.
SSD strategies: Defines the type of data that is stored on SSDs in the cluster. For example, storing metadata for read/write acceleration.
L3 cache: Specifies that SSDs in nodes are used to increase cache memory and speed up file system performance across larger working file sets.
Global namespace acceleration: Activates global namespace acceleration (GNA), which enables data stored on node pools without SSDs to access SSDs elsewhere in the cluster to store extra metadata mirrors. Extra metadata mirrors accelerate metadata read operations.
When you activate a SmartPools license, OneFS provides the following additional functions:
Custom file pool policies: Creates custom file pool policies to identify different classes of files, and stores these file pools in logical storage tiers. For example, you can define a high-performance tier of node pools and an archival tier of high-capacity node pools. Then, with custom file pool policies, you can identify file pools based on matching criteria, and you can define actions to perform on these pools. For example, one file pool policy can identify all JPEG files older than a year and store them in an archival tier. Another policy can move all files that were created or modified within the last three months to a performance tier.
Storage pool spillover: Enables automated capacity overflow management for storage pools. Spillover defines how to handle write operations when a storage pool is not writable. If spillover is enabled, data is redirected to a specified storage pool. If spillover is disabled, new data writes fail and an error message is sent to the client that is attempting the write operation.
Node pools
A node pool is a group of three or more nodes that forms a single pool of storage. As you add nodes to the cluster, OneFS
attempts to automatically provision the new nodes into node pools.
To autoprovision a node, OneFS requires that the new node be equivalent to the other nodes in the node pool. If the new node
is equivalent, OneFS provisions the new node to the node pool. All nodes in a node pool are peers, and data is distributed across
nodes in the pool. Each provisioned node increases the aggregate disk, cache, CPU, and network capacity of the cluster.
We strongly recommend that you let OneFS handle node provisioning. However, if you have a special requirement or use case,
you can move nodes from an autoprovisioned node pool into a node pool that you define manually. The capability to create
manually-defined node pools is available only through the OneFS command-line interface, and should be deployed only after
consulting with Dell PowerScale Technical Support.
If you try to remove a node from a node pool for the purpose of adding it to a manual node pool, and the result would leave
fewer than three nodes in the original node pool, the removal fails. When you remove a node from a manually-defined node pool,
OneFS attempts to autoprovision the node back into an equivalent node pool.
If you add fewer than three equivalent nodes to your cluster, OneFS cannot autoprovision these nodes. In these cases, you can
add new node types to existing node pools. Adding the new node types can enable OneFS to provision the newly added nodes to
a compatible node pool.
Node pools can use SSDs either as storage or as L3 cache, but not both, with the following exception. PowerScale F200 and
F600 nodes are full SSD nodes and can only be used as storage. Enabling L3 cache on F200 and F600 nodes is not an option.
NOTE: Do not use NL nodes in node pools used for NFS or SMB. It is recommended that you use high performance nodes
to handle NFS and SMB workloads.
Compatibilities
If there are compatibility restrictions between new nodes and the existing nodes in a node pool, OneFS cannot autoprovision the
new nodes. To enable new nodes to join a compatible node pool, add the new node type to the existing node pool. Modify node
pool compatibilities using the command-line interface.
Add new node type to existing node pool
For example, suppose that your cluster has an X410 node pool and you add a newer X410 node. OneFS attempts to
autoprovision the new node to the X410 node pool. However, if the new X410 node has different RAM than the older X410
nodes, then OneFS cannot autoprovision the new node. To provision the new node into the existing X410 node pool, add the
new X410 node type to the existing X410 node pool.
Use the isi storagepool nodetypes list command to view the node types and their IDs, then isi storagepool
nodepools modify <nodepool_name> --add-node-type-ids=<new_nodetype_id> to add the new node type to
the existing node pool.
For example, suppose that your X410 node pool name is x410_nodepool and isi storagepool nodetypes list shows
the new node type ID as 12:
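Based on that syntax, and using the example values from this section (pool name x410_nodepool and node type ID 12), the command would look similar to the following:
isi storagepool nodepools modify x410_nodepool --add-node-type-ids=12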
Performance considerations and incompatible node types determine compatibility restrictions. For example:
● Performance can be affected by adding a particular node type to an existing node pool.
● A particular node type can be incompatible with the nodes in an existing pool.
In that case, OneFS generates a message describing the compatibility issue.
NOTE: SSD compatibilities require that L3 cache is enabled on all nodes. If you attempt to move nodes with SSDs into a
node pool on which L3 cache is not enabled, the process fails with an error message. Ensure that L3 cache is enabled for
the existing node pool and try again. L3 cache can only be enabled on nodes that have fewer than 16 SSDs and at least a
2:1 ratio of HDDs to SSDs. On Generation 6 nodes that support SSD compatibilities, SSD count is ignored. If SSDs are used
for storage, then SSD counts must be identical on all nodes in a node pool. If SSD counts are left unbalanced, node pool
efficiency and performance can be less than optimal.
For example, the PowerScale F200 and F600 node types are incompatible with each other and with previous node types. You
cannot add F200 or F600 nodes to a node pool containing any other node types (for example, S210 or F800 nodes). They are
not hybrid nodes, so enabling L3 cache is not an option. They can be used as storage only.
The following table shows the compatibilities between specific PowerScale archive and hybrid nodes. Nodes in the same row of
the table are compatible. Compatible nodes can be provisioned into the same node pool. Nodes that are not compatible cannot
be provisioned into the same node pool.
A300 nodes are compatible with A200 and H400 nodes. However, A200 and H400 nodes are not compatible with each other.
Compatibility restrictions
OneFS enforces pool and node type restrictions for cluster configuration and node compatibility. Restrictions represent the rules
governing cluster configuration and node compatibility. They prevent performance degradation of the node types within a node
pool.
OneFS supports the following restriction types.
● Hard node type restriction: A rule that is not allowed. If you try to modify a cluster configuration in a way that generates a
hard restriction, the modification fails. OneFS presents a message that describes the restrictions that result in denying the
modification request.
● Soft node type restriction: A rule that is allowed but requires confirmation before being implemented. If you try to modify a
cluster configuration in a way that generates a soft restriction, OneFS presents an advisory notice. To continue, you must
confirm the modification.
NOTE: If the modification request results in both hard and soft restrictions, OneFS reports only the hard restrictions.
● Pool restriction: A rule that exists for a node pool.
○ Hard pool restriction: A rule that represents an invalid change to a node group. For example, you cannot modify a manual
node pool or modify a pool in a way that results in that pool being underprovisioned.
○ Soft pool restriction: A rule that represents a change to a node group that requires confirmation. Requesting a
modification that results in a soft pool restriction generates an advisory notice. To continue, you must confirm the
modification.
Some examples of hard and soft restrictions are as follows.
● There are hard node type restrictions for the PowerScale F200 and F600 node types.
○ F200 and F600 node types are incompatible with each other and with previous node types.
○ F200 node types can form node pools only with other compatible F200 nodes.
○ F600 node types can form node pools only with other compatible F600 nodes.
○ F200 and F600 nodes are storage only nodes and cannot be used as L3 cache.
○ F200 nodes must have the same SSD size to be considered compatible.
If you try to add F200 or F600 nodes to an incompatible node pool, the modification fails.
● A300 nodes are compatible with A200 and H400 nodes. However, A200 and H400 nodes are not compatible with each
other.
● There is a soft node type restriction for different RAM capacities. Any difference in RAM is allowed and there are no RAM
ranges for compatibilities. If you add a node to a node pool that has different RAM than existing nodes in that pool, OneFS
displays an advisory notice. Confirm the operation to add the node to the node pool.
Spillover
When you activate a SmartPools license, you can designate a node pool or tier to receive spillover data when the hardware
specified by a file pool policy is full or otherwise not writable.
If you do not want data to spill over to a different location because the specified node pool or tier is full or not writable, you can
disable this feature.
NOTE: Virtual hot spare reservations affect spillover. If the setting Deny data writes to reserved disk space is enabled,
while Ignore reserved space when calculating available free space is disabled, spillover occurs before the file system
reports 100% utilization.
Suggested protection
Based on the configuration of your PowerScale cluster, OneFS automatically calculates the amount of protection that is
recommended to maintain the stringent data protection requirements of Dell Technologies PowerScale.
OneFS includes a function to calculate the suggested protection for data to maintain a theoretical mean-time to data loss
(MTTDL) of 5000 years. Suggested protection provides the optimal balance between data protection and storage efficiency on
your cluster.
By configuring file pool policies, you can specify one of multiple requested protection settings for a single file, for subsets of files
called file pools, or for all files on the cluster.
It is recommended that you do not specify a setting below suggested protection. OneFS periodically checks the protection level
on the cluster, and alerts you if data falls below the recommended protection.
Protection policies
OneFS provides a number of protection policies to choose from when protecting a file or specifying a file pool policy.
The more nodes you have in your cluster, up to 20 nodes, the more efficiently OneFS can store and protect data, and the
higher levels of requested protection the operating system can achieve. Depending on the configuration of your cluster and
how much data is stored, OneFS might not be able to achieve the level of protection that you request. For example, if you
have a three-node cluster that is approaching capacity, and you request +2n protection, OneFS might not be able to deliver the
requested protection.
The following table describes the available protection policies in OneFS.
5x: Duplicates, or mirrors, data over five nodes.
6x: Duplicates, or mirrors, data over six nodes.
7x: Duplicates, or mirrors, data over seven nodes.
8x: Duplicates, or mirrors, data over eight nodes.
SSD strategies
OneFS clusters can contain nodes that include solid-state drives (SSD). OneFS autoprovisions nodes with SSDs into one or
more node pools. The SSD strategy defined in the default file pool policy determines how SSDs are used within the cluster, and
can be set to increase performance across a wide range of workflows. SSD strategies apply only to SSD storage.
You can configure file pool policies to apply specific SSD strategies as needed. When you select SSD options during the creation
of a file pool policy, you can identify the files in the OneFS cluster that require faster or slower performance. When the
SmartPools job runs, OneFS uses file pool policies to move this data to the appropriate storage pool and drive type.
The following SSD strategy options, which you can set in a file pool policy, are listed from the slowest to the fastest choice:
Avoid SSDs: Writes all associated file data and metadata to HDDs only.
CAUTION: Use this option to free SSD space only after consulting with Dell Technologies Support. Using this strategy can negatively affect performance.
Metadata read acceleration: Writes both file data and metadata to HDDs. This is the default setting. An extra mirror of the file metadata is written to SSDs, if available. The extra SSD mirror is included in the number of mirrors, if any, required to satisfy the requested protection.
Metadata read/write acceleration: Writes file data to HDDs and metadata to SSDs, when available. This strategy accelerates metadata writes in addition to reads but requires about four to five times more SSD storage than the Metadata read acceleration setting. Enabling GNA does not affect read/write acceleration.
Data on SSDs: Uses SSD node pools for both data and metadata, regardless of whether global namespace acceleration is enabled. This SSD strategy does not result in the creation of additional mirrors beyond the normal requested protection but requires significantly increased storage requirements compared with the other SSD strategy options.
Note the following considerations for setting and applying SSD strategies.
● To use an SSD strategy that stores metadata and/or data on SSDs, you must have SSD storage in the node pool or tier,
otherwise the strategy is ignored.
● If you specify an SSD strategy but there is no storage of the type that you specified, the strategy is ignored.
● If you specify an SSD strategy that stores metadata and/or data on SSDs but the SSD storage is full, OneFS attempts to
spill data to HDD. If HDD storage is full, OneFS raises an out of space error.
L3 cache overview
You can configure nodes with solid-state drives (SSDs) to increase cache memory and speed up file system performance across
larger working file sets.
OneFS caches file data and metadata at multiple levels. The following table describes the types of file system cache available on
a PowerScale cluster.
OneFS caches frequently accessed file and metadata in available random access memory (RAM). Caching enables OneFS to
optimize data protection and file system performance. When RAM cache reaches capacity, OneFS normally discards the oldest
cached data and processes new data requests by accessing the storage drives. This cycle is repeated each time RAM cache fills
up.
You can deploy SSDs as L3 cache to reduce the cache cycling issue and further improve file system performance. L3 cache adds
significantly to the available cache memory and provides faster access to data than hard disk drives (HDD).
As L2 cache reaches capacity, OneFS evaluates data to be released and, depending on your workflow, moves the data to L3
cache. In this way, much more of the most frequently accessed data is held in cache, and overall file system performance is
improved.
For example, consider a cluster with 128GB of RAM. Typically the amount of RAM available for cache fluctuates, depending
on other active processes. If 50 percent of RAM is available for cache, the cache size would be approximately 64GB. If this
same cluster had three nodes, each with two 200GB SSDs, the amount of L3 cache would be 1.2TB, approximately 18 times the
amount of available L2 cache.
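The arithmetic behind this example: 3 nodes × 2 SSDs × 200 GB = 1,200 GB, or about 1.2 TB of L3 cache, and 1.2 TB ÷ 64 GB ≈ 18.75, which is roughly 18 times the available L2 (RAM) cache.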
L3 cache is enabled by default for new node pools. A node pool is a collection of nodes that are all of the same equivalence
class, or for which compatibilities have been created. L3 cache applies only to the nodes where the SSDs reside. For the HD400
node, which is primarily for archival purposes, L3 cache is on by default and cannot be turned off. On the HD400, L3 cache is
used only for metadata.
If you enable L3 cache on a node pool, OneFS manages all cache levels to provide optimal data protection, availability, and
performance. In addition, in case of a power failure, the data on L3 cache is retained and still available after power is restored.
NOTE: Although some benefit from L3 cache is found in workflows with streaming and concurrent file access, L3 cache
provides the most benefit in workflows that involve random file access.
Migration to L3 cache
L3 cache is enabled by default on new nodes.
You can enable L3 cache as the default for all new node pools or manually for a specific node pool, either through the command
line or from the web administration interface. L3 cache can be enabled only on node pools with nodes that contain SSDs. When
you enable L3 cache, OneFS migrates data that is stored on the SSDs to HDD storage disks and then begins using the SSDs as
cache.
When you enable L3 cache, OneFS displays the following message:
WARNING: Changes to L3 cache configuration can have a long completion time. If this is a
concern, please contact Dell Technologies Support for more information.
You must confirm whether OneFS should proceed with the migration. After you confirm the migration, OneFS handles the
migration as a background process, and, depending on the amount of data stored on your SSDs, the process of migrating data
from the SSDs to the HDDs might take a long time.
NOTE: You can continue to administer your cluster while the data is being migrated.
HD-series nodes: For all node pools made up of HD-series nodes, L3 cache stores metadata only in SSDs and cannot be disabled.
Generation 6 A-series nodes: For all node pools made up of Generation 6 A-series nodes, L3 cache stores metadata only in SSDs and cannot be disabled.
FilePolicy job
You can use the FilePolicy job to apply file pool policies.
The FilePolicy job supplements the SmartPools job by scanning the file system index that the File System Analytics (FSA) job
uses. You can use this job if you are already using snapshots (or FSA) and file pool policies to manage data on the cluster. The
FilePolicy job is an efficient way to keep inactive data away from the fastest tiers. Because the scan is done on the index, which
does not require many locks, ensure that you run the IndexUpdate job before running the FilePolicy job. In this way, you can
vastly reduce the number of times a file is visited before it is tiered down.
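For example, assuming the standard OneFS job engine CLI is available on your release, the two jobs could be started manually in that order:
isi job jobs start IndexUpdate
isi job jobs start FilePolicy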
You can continue to down-tier data in the ways that you already do, such as with file pool policies that move data based on a
fixed age, and adjust those policies based on how full the tiers are.
To ensure that the cluster is correctly laid out and adequately protected, run the SmartPools job. Use the SmartPools job after
modifying the cluster, such as adding or removing nodes. You can also use the job for modifying the SmartPools settings (such
as default protection settings), and if a node is down.
To use this feature, you must schedule the FilePolicy job daily and continue running the SmartPools job at a lower frequency.
You can run the SmartPools job after events that may affect node pool membership.
You can use the following options when running the FilePolicy job:
Managing tiers
You can move node pools into tiers to optimize file and storage management. Managing tiers requires the SmartPools or higher
administrative privilege.
Create a tier
You can create a tier that contains one or more node pools. You can use the tier to store specific categories of files.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab appears with two sections: Tiers and pools and Compatibilities.
2. In the Tiers and pools section, click Create a tier.
3. In the Create a tier page that appears, enter a name for the tier.
4. For each node pool that you want to add to the tier, select a node pool from the Available Node Pools list, and click Add.
The node pool is moved into the Selected Node Pools for this Tier list.
5. Click Create Tier.
The Create a tier page closes, and the new tier is added to the Tiers and pools area. The node pools that you added
appear below the tier name.
Edit a tier
You can modify the name and change the node pools that are assigned to a tier.
A tier name can contain alphanumeric characters and underscores but cannot begin with a number.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab displays two groups: Tiers and pools and Compatibilities.
2. In the Tiers and pools area, click the tier you want to edit.
3. In the Edit Tier Details page, modify the following settings as needed:
Option Description
Tier Name To change the name of the tier, select and type over the existing name.
Node Pool Selection To change the node pool selection, select a node pool, and click either Add or Remove.
4. When you have finished editing tier settings, click Submit.
Delete a tier
You can delete a tier that has no assigned node pools.
If you want to delete a tier that does have assigned node pools, you must first remove the node pools from the tier.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab displays two lists: Tiers and pools and Compatibilities.
2. In the Tiers and pools list, go to the Actions column of the tier that you want to delete and click the X.
A message box asks you to confirm or cancel the operation.
3. Click Delete to confirm the operation.
If existing file pool policies direct data to a specific storage pool, do not configure other file pool policies with
anywhere for the Data storage target option. Because the specified storage pool is included when you use
anywhere, target specific storage pools to avoid unexpected results.
1. Click File System > Storage Pools > File Pool Policies.
2. Click Create a File Pool Policy.
3. In the Create a File Pool Policy dialog box, enter a policy name and, optionally, a description.
4. Specify the files to be managed by the file pool policy.
To define the file pool, you can specify file matching criteria by combining IF, AND, and OR conditions. You can define these
conditions with a number of file attributes, such as name, path, type, size, and timestamp information.
5. Specify SmartPools actions to be applied to the selected file pool.
You can specify storage and I/O optimization settings to be applied.
6. Click Create Policy.
The file pool policy is created and applied when the next scheduled SmartPools system job runs. By default, this job runs once a
day, but you also have the option to start the job immediately.
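For example, to start the job immediately from the command line (assuming the isi job CLI on your release accepts the job type name directly), you might run:
isi job jobs start SmartPools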
OneFS supports UNIX shell-style (glob) pattern matching for file name attributes and paths.
The following table lists the file attributes that you can use to define a file pool policy.
File type Includes or excludes files based on one of the following file-system object types:
● File
● Directory
● Other
Modified Includes or excludes files based on when the file was last modified.
In the web administration interface, you can specify a relative date and time,
such as "older than 2 weeks," or a specific date and time, such as "before
January 1, 2012." Time settings are based on a 24-hour clock.
Created Includes or excludes files based on when the file was created.
In the web administration interface, you can specify a relative date and time,
such as "older than 2 weeks," or a specific date and time, such as "before
January 1, 2012." Time settings are based on a 24-hour clock.
Metadata changed Includes or excludes files based on when the file metadata was last modified.
This option is available only if the global access-time-tracking option of the
cluster is enabled.
In the web administration interface, you can specify a relative date and time,
such as "older than 2 weeks," or a specific date and time, such as "before
January 1, 2012." Time settings are based on a 24-hour clock.
Accessed Includes or excludes files based on when the file was last accessed based on the
following units of time:
In the web administration interface, you can specify a relative date and time,
such as "older than 2 weeks," or a specific date and time, such as "before
January 1, 2012." Time settings are based on a 24-hour clock.
NOTE: Because it affects performance, access time tracking as a file pool
policy criterion is disabled by default.
Wildcard Description
* Matches any string in place of the asterisk.
For example, m* matches movies and m123.
[a-z] Matches any characters contained in the brackets, or a range of characters separated by a
hyphen. For example, b[aei]t matches bat, bet, and bit, and 1[4-7]2 matches 142,
152, 162, and 172.
You can exclude characters within brackets by following the first bracket with an exclamation
mark. For example, b[!ie] matches bat but not bit or bet.
You can match a bracket within a bracket if it is either the first or last character. For
example, [[c]at matches cat and [at.
You can match a hyphen within a bracket if it is either the first or last character. For
example, car[-s] matches cars and car-.
? Matches any character in place of the question mark. For example, t?p matches tap, tip,
and top.
SmartPools settings
SmartPools settings include directory protection, global namespace acceleration, L3 cache, virtual hot spare, spillover, requested
protection management, and I/O optimization management.
Enable global --global-namespace- Specifies whether to allow per- This setting is available only if
namespace acceleration acceleration-enabled file metadata to use SSDs in the 20 percent or more of the nodes
node pool. in the cluster contain SSDs and
● When disabled, restricts per- at least 1.5 percent of the total
file metadata to the storage cluster storage is SSD-based.
pool policy of the file, except If nodes are added to or removed
in the case of spillover. This is from a cluster, and the SSD
the default setting. thresholds are no longer satisfied,
● When enabled, allows per-file GNA becomes inactive. GNA
Setting: Enable global namespace acceleration (continued)
Description: ... metadata to use the SSDs in any node pool.
Notes: ... remains enabled, so that when the SSD thresholds are met again, GNA is reactivated.
NOTE: Node pools with L3 cache enabled are effectively invisible for GNA purposes. All ratio calculations for GNA are done exclusively for node pools without L3 cache enabled.

Setting: Use SSDs as L3 Cache by default for new node pools
CLI option: --ssd-l3-cache-default-enabled
Description: For node pools that include solid-state drives, deploy the SSDs as L3 cache. L3 cache extends L2 cache and speeds up file system performance across larger working file sets.
Notes: L3 cache is enabled by default on new node pools. When you enable L3 cache on an existing node pool, OneFS performs a migration, moving any existing data on the SSDs to other locations on the cluster. OneFS manages all cache levels to provide optimal data protection, availability, and performance. In case of a power failure, the data on L3 cache is retained and still available after power is restored.

Setting: Virtual Hot Spare
CLI options: --virtual-hot-spare-deny-writes, --virtual-hot-spare-hide-spare, --virtual-hot-spare-limit-drives, --virtual-hot-spare-limit-percent
Description: Reserves a minimum amount of space in the node pool that can be used for data repair in the event of a drive failure. To reserve disk space for use as a virtual hot spare, select from the following options:
● Ignore reserved disk space when calculating available free space. Subtracts the space reserved for the virtual hot spare when calculating available free space.
Notes: If you configure both the minimum number of virtual drives and a minimum percentage of total disk space when you configure reserved VHS space, the enforced minimum value satisfies both requirements. If this setting is enabled and Deny new data writes is disabled, it is possible for the file system utilization to be reported at more than 100%.

Setting: Enable global spillover
CLI option: --spillover-enabled
Description: Specifies how to handle write operations to a node pool that is not writable.
Notes:
● When enabled, redirects write operations from a node pool that is not writable either to another node pool or anywhere on the cluster (the default).
● When disabled, returns a disk space error for write operations to a node pool that is not writable.

Setting: Spillover Data Target
CLI options: --spillover-target, --spillover-anywhere
Description: Specifies another storage pool to target when a storage pool is not writable.
Notes: When spillover is enabled, but it is important that data writes do not fail, select anywhere for the Spillover Data Target setting, even if file pool policies send data to specific pools.

Setting: Manage protection settings
CLI option: --automatically-manage-protection
Description: When this setting is enabled, SmartPools manages requested protection levels automatically.
Notes: When Apply to files with manually-managed protection is enabled, overwrites any protection settings that were configured through File System Explorer or the command-line interface.

Setting: Manage I/O optimization settings
CLI option: --automatically-manage-io-optimization
Description: When enabled, uses SmartPools technology to manage I/O optimization.
Notes: When Apply to files with manually-managed I/O optimization settings is enabled, overwrites any I/O optimization settings that were configured through File System Explorer or the command-line interface.

Setting: None (CLI only)
CLI option: --ssd-qab-mirrors
Description: Either one mirror or all mirrors for the quota account block (QAB) are stored on SSDs.
Notes: Improve quota accounting performance by placing all QAB mirrors on SSDs for faster I/O. By default, only one QAB mirror is stored on SSD.

Setting: None (CLI only)
CLI option: --ssd-system-btree-mirrors
Description: Either one mirror or all mirrors for the system B-tree are stored on SSDs.
Notes: Increase file system performance by placing all system B-tree mirrors on SSDs for faster access. Otherwise only one system B-tree mirror is stored on SSD.
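The CLI options listed above are presumably flags of the isi storagepool settings modify command; the following is a minimal sketch under that assumption (verify the exact option names and accepted values with isi storagepool settings modify --help on your cluster):

isi storagepool settings modify --ssd-l3-cache-default-enabled true --virtual-hot-spare-limit-drives 2 --spillover-enabled true

This example keeps L3 cache enabled by default on new node pools, reserves two drives' worth of virtual hot spare space, and leaves global spillover on.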
1. Click File System > Storage Pools > File Pool Policies.
2. In the File Pool Policies tab, next to Default Policy in the list, click View/Edit.
The View Default Policy Details dialog box is displayed.
3. Click Edit Policy.
The Edit Default Policy Details dialog box is displayed.
4. In the Apply SmartPools Actions to Selected Files section, choose the storage settings that you want to apply as the
default for Storage Target, Snapshot Storage Target, and Requested Protection.
5. Click Save Changes, and then click Close.
The next time the SmartPools job runs, the settings that you selected are applied to any file that is not covered by another file
pool policy.
Storage target (continued)
Description: ... you use anywhere, target specific storage pools to avoid unintentional file storage locations.
Select one of the following options to define your SSD strategy:
● Use SSDs for metadata read acceleration. Default. Write both file data and metadata to HDDs and metadata to SSDs. Accelerates metadata reads only. Uses less SSD space than the Metadata read/write acceleration setting.
● Use SSDs for metadata read/write acceleration. Write metadata to SSD pools. Uses significantly more SSD space than Metadata read acceleration, but accelerates metadata reads and writes.
● Use SSDs for data & metadata. Use SSDs for both data and metadata. Regardless of whether global namespace acceleration is enabled, any SSD blocks reside on the storage target if there is room.
● Avoid SSDs. Write all associated file data and metadata to HDDs only. CAUTION: Use this to free SSD space only after consulting with Dell Technologies Support; the setting can negatively affect performance.
Notes: ... both file data and metadata to HDD storage pools but adds an additional SSD mirror if possible to accelerate read performance. Uses HDDs to provide reliability and an extra metadata mirror to SSDs, if available, to improve read performance. Recommended for most uses.
When you select Use SSDs for metadata read/write acceleration, the strategy uses SSDs, if available in the storage target, for performance and reliability. The extra mirror can be from a different storage pool with GNA enabled or from the same node pool.
The Use SSDs for data & metadata strategy does not create additional mirrors beyond the normal requested protection. Both file data and metadata are stored on SSDs if available within the file pool policy. This option requires a significant amount of SSD storage.
Setting: Snapshot storage target
CLI options: --snapshot-storage-target, --snapshot-ssd-strategy
Description: Specifies the storage pool that you want to target for snapshot storage with this file pool policy. The settings are the same as those for the data storage target, but apply to snapshot data.
Notes: The notes for the data storage target also apply to the snapshot storage target.

Setting: Requested protection
CLI option: --set-requested-protection
Description: Select one of the following:
● Default of storage pool. Assign the default requested protection of the storage pool to the filtered files.
● Specific level. Assign a specified requested protection to the filtered files.
Notes: To change the requested protection, select a new value from the list.
1. Click File System > Storage Pools > File Pool Policies.
2. In the File Pool Policies list, next to the policy you want to modify, click View/Edit.
The View File Pool Policy Details dialog box is displayed.
3. Click Edit Policy.
The Edit File Pool Policy Details dialog box is displayed.
4. Modify the policy settings, and then click Save Changes.
5. Click Close in the View File Pool Policy Details dialog box.
FSAnalyze (FSA)
The FSAnalyze job in OneFS gathers file system analytic information.
The FSAnalyze (FSA) job is used for namespace analysis. You can run FSA on demand or on a schedule through the command-line interface and PAPI. A successful FSA job produces analysis results. Run FSA at least once a day.
Disk usage is a component of FSA that adds up the overall usage for any given directory. It gathers disk usage efficiently and stores the results in a results database table, one row for every directory. The FSA PAPI directory information endpoint exposes the stored results, and you can compare the result for a directory with the result from a different point in time using the same endpoint.
The FSA job runs in two modes. The SCAN mode scans the entire OneFS file system. The INDEX mode is the default mode and is more efficient: it relies on snapshots and change lists, and it updates and walks a global metadata index.
Either the FSA job or the IndexUpdate job builds the metadata index. The FSA job running in INDEX mode generates the FSA index; the IndexUpdate job generates the cluster index. The disk usage component walks the metadata index.
NOTE: To initiate any Job Engine tasks, you must have the role of SystemAdmin in the OneFS system.
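For example, to start an on-demand FSA run from the CLI, a minimal sketch (assuming the standard isi job interface and the FSAnalyze job type name shown in the table below):

isi job jobs start FSAnalyze

You can then monitor the run with isi job jobs list.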
FSAnalyze*
Description: Gathers and reports information about all files and directories beneath the /ifs path. This job requires you to activate an InsightIQ license. Reports from this job are used by InsightIQ.
Exclusion set: None. Impact policy: Low. Priority: 1. Operation: Scheduled.

Undedupe
Description: Undoes the work that the Dedupe job performed, potentially increasing disk space usage.
Exclusion set: Restripe. Impact policy: Medium. Priority: 6. Operation: Manual.

Upgrade
Description: Upgrades the file system after a software version upgrade.
NOTE: The Upgrade job should be run only when you are updating your cluster with a major software version. For complete information, see the PowerScale OneFS Upgrade Planning and Process Guide.
Exclusion set: Restripe. Impact policy: Medium. Priority: 3. Operation: Manual.

WormQueue
Description: Processes the WORM queue, which tracks the commit times for WORM files. After a file is committed to WORM state, it is removed from the queue.
Exclusion set: None. Impact policy: Low. Priority: 6. Operation: Scheduled.

* Available only if you activate an additional license
Job operation
OneFS includes system maintenance jobs that run to ensure that your PowerScale cluster performs at peak health.
Through the Job Engine, OneFS runs a subset of these jobs automatically, as needed, to:
● Ensure file and data integrity.
● Check for and mitigate drive and node failures.
● Optimize free space.
For other jobs, such as Dedupe, you can use Job Engine to start them manually or schedule them to run automatically at regular
intervals. Job Engine will not start a scheduled job if the job is currently running. The scheduled job starts after the running
instance finishes.
The Job Engine runs system maintenance jobs in the background and prevents jobs within the same classification (exclusion set)
from running simultaneously. Two exclusion sets are enforced: restripe and mark.
Restripe job types are:
● AutoBalance
● AutoBalanceLin
● FlexProtect
● FlexProtectLin
● MediaScan
● MultiScan
● SetProtectPlus
● SmartPools
Mark job types are:
● Collect
● IntegrityScan
● MultiScan
MultiScan is a member of both the restripe and mark exclusion sets. You cannot change the exclusion set parameter for a job
type.
Related references
System jobs library
If your workflow requires an impact policy different from the defaults, you can create a custom policy with new settings.
Jobs with a low impact policy have the least impact on available CPU and disk I/O resources. Jobs with a high impact policy
have a significant impact. Job Engine limits the number of tasks that can process in parallel according to the impact policy.
However, if the impact of a job is under the impact limits, Job Engine can increase the number of tasks that are in process.
CAUTION: Job Engine can use more CPU and I/O even if doing so delays other system activities. Requesting a
HIGH impact for a job can be disruptive to the cluster and can affect client connections.
Related concepts
Managing impact policies
Related references
System jobs library
Start a job
By default, only some system maintenance jobs are scheduled to run automatically. However, you can start any of the jobs
manually at any time.
1. Click Cluster Management > Job Operations > Job Types.
2. In the Job Types list, locate the job that you want to start, and then click Start Job.
The Start a Job dialog box appears.
3. Provide the details for the job, then click Start Job.
Related references
System jobs library
Pause a job
You can pause a job temporarily to free up system resources.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to pause.
3. Click Pause Running Job in the menu that appears.
The job remains paused until you resume it.
Related references
System jobs library
Resume a job
You can resume a paused job.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to resume.
3. Click Resume Running Job in the menu that appears.
The job continues from the phase or task at which it was paused.
Related references
System jobs library
Cancel a job
To free up system resources, or for any other reason, you can permanently discontinue a running, paused, or waiting job.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to cancel.
3. Click Cancel Running Job in the menu that appears.
Related references
System jobs library
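The same job control operations are available from the CLI. A minimal sketch, assuming the standard isi job interface and a hypothetical job ID of 273:

isi job jobs pause 273
isi job jobs resume 273
isi job jobs cancel 273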
NOTE: To change job settings permanently, see "Modify job type settings."
Related references
System jobs library
Related concepts
Job performance impact
Policy description a. In the Description field, type a new overview for the impact policy.
b. Click Submit.
Impact schedule a. In the Impact Schedule area, modify the schedule of the impact policy by adding, editing, or
deleting impact intervals.
b. Click Save Changes.
The modified impact policy is saved and listed in alphabetical order in the Impact Policies table.
Related concepts
Job performance impact
Networking overview
After you determine the topology of your network, you can set up and manage your internal and external networks.
There are two types of networks on a cluster:
Internal Generation 5 nodes communicate with each other using a high-speed, low latency InfiniBand network.
Generation 6 nodes support using InfiniBand or Ethernet for the internal network. PowerScale F200
and F600 nodes support only Ethernet as the backend network. You can optionally configure a second
InfiniBand network to enable failover for redundancy.
External Clients connect to the cluster through the external network with Ethernet. The PowerScale cluster
supports standard network communication protocols, including NFS, SMB, HDFS, HTTP, and FTP. The
cluster includes various external Ethernet connections, providing flexibility for a wide variety of network
configurations.
Internal IP address ranges
The number of IP addresses assigned to the internal network determines how many nodes can be joined to the cluster.
When you initially configure the cluster, you specify one or more IP address ranges for the primary InfiniBand switch or Ethernet.
This range of addresses is used by the nodes to communicate with each other. It is recommended that you create a range of
addresses large enough to accommodate adding additional nodes to your cluster.
While all clusters will have, at minimum, one internal InfiniBand or Ethernet network (int-a), you can enable a second internal
network to support network failover (int-b/failover). You must assign at least one IP address range for the secondary network
and one range for failover.
If any IP address ranges defined during the initial configuration are too restrictive for the size of the internal network, you can
add ranges to the int-a network or int-b/failover networks, which might require a cluster restart. Other configuration changes,
such as deleting an IP address assigned to a node, might also require that the cluster be restarted.
NOTE: Generation 5 nodes support InfiniBand for the internal network. Generation 6 nodes support both InfiniBand and
Ethernet for the internal network. PowerScale F200 and F600 nodes support Ethernet for the internal network.
IPv6 support
OneFS supports both IPv4 and IPv6 address formats on a cluster. OneFS supports dual stack.
OneFS supports the USGv6 standard of IPv6 used by the US Government.
The following table describes distinctions between IPv4 and IPv6.
IPv4: 32-bit addresses; Address Resolution Protocol (ARP).
IPv6: 128-bit addresses; Neighbor Discovery Protocol (NDP); Duplicate Address Detection (DAD); Router Advertisement.
A subnet can use either IPv4 or IPv6 addresses, but not both. You set the IP family when creating the subnet, and all IP address
pools that are assigned to the subnet must use the selected format.
Dual Stack
Dual stack means that a domain name can reference both IPv4 and IPv6 network pools.
You can configure one subnet to be IPv4 and another to be IPv6. If a pool in both subnets has the same sc-dns-zone, and the
sc-subnet references the same subnet (for example, they both reference the IPv4 subnet), that IPv4 subnet can now resolve
for both IPv4 and IPv6 addresses.
Related concepts
Subnets
● On new clusters that are installed with OneFS 9.5.0.0 and later:
○ If you use IPv6 configurations in the initial configuration wizard, IPv6 is enabled on the cluster. For example, if you
configure IPv6 external DNS servers, network pool IPs, SmartConnect service addresses, the wizard enables IPv6.
○ If you use only IPv4 configurations in the initial configuration wizard, IPv6 is disabled on the cluster. You can enable basic
IPv6 support at any time in the CLI using isi network external modify --ipv6-enabled true.
● On an existing OneFS cluster that has IPv6 enabled, an upgrade to OneFS 9.5.0.0 or later does not change the IPv6
configurations. In this case, IPv6 remains enabled.
IPv6 configuration options are disabled by default when you first enable IPv6 support. You can enable each option using the isi
network external modify command.
IPv6 configuration
Enable, disable, and configure options for IPv6 using the isi network external modify command.
The following IPv6 options are available for configuration in the command.
Table 26. IPv6 options in the isi network external modify command
Option Description
--ipv6-enabled Enables or disables front-end interfaces to support IPv6.
--ipv6-auto-config-enabled Sets whether OneFS discovers and applies network settings from the
IPv6 router advertisements (RAs).
--ipv6-generate-link-local Specifies whether OneFS generates IPv6 link-local addresses on the
front-end network interfaces.
Table 26. IPv6 options in the isi network external modify command (continued)
Option Description
--ipv6-dad Enables or disables IPv6 Duplicate Address Detection (DAD) globally on
OneFS. This option can set a global DAD timeout value. This global DAD
setting must be true to enable DAD on SSIPs or on network pools.
--ipv6-ssip-perform-dad Enables DAD on IPv6 SmartConnect Service IPs (SSIPs)
--ipv6-accept-redirects Controls whether OneFS processes ICMPv6 redirect messages.
You can also enable DAD on a network pool using the isi network pools modify or isi network pools create
commands.
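For example, to enable IPv6 on the front-end interfaces and have OneFS generate link-local addresses, a minimal sketch using the options from Table 26:

isi network external modify --ipv6-enabled true --ipv6-generate-link-local true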
Groupnets
Groupnets reside at the top tier of the networking hierarchy and are the configuration level for managing multiple tenants on
your external network. DNS client settings, such as nameservers and a DNS search list, are properties of the groupnet. You can
create a separate groupnet for each DNS namespace that you want to use to enable portions of the PowerScale cluster to have
different networking properties for name resolution. Each groupnet maintains its own DNS cache, which is enabled by default.
A groupnet is a container that includes subnets, IP address pools, and provisioning rules. Groupnets can contain one or more
subnets, and every subnet is assigned to a single groupnet. Each cluster contains a default groupnet named groupnet0 that
contains an initial subnet named subnet0, an initial IP address pool named pool0, and an initial provisioning rule named rule0.
Each groupnet is referenced by one or more access zones. When you create an access zone, you can specify a groupnet. If a
groupnet is not specified, the access zone will reference the default groupnet. The default System access zone is automatically
associated with the default groupnet. Authentication providers that communicate with an external server, such as Active
Directory and LDAP, must also reference a groupnet. You can specify the authentication provider with a specific groupnet;
otherwise, the provider will reference the default groupnet. You can only add an authentication provider to an access zone if
they are associated with the same groupnet. Client protocols such as SMB, NFS, HDFS, and Swift, are supported by groupnets
through their associated access zones.
Related concepts
Managing groupnets
DNS name resolution
Related tasks
Specify a SmartConnect service subnet
Subnets
Subnets are networking containers that enable you to sub-divide your network into smaller, logical IP networks.
On a cluster, subnets are created under a groupnet and each subnet contains one or more IP address pools. Both IPv4 and IPv6
addresses are supported on OneFS; however, a subnet cannot contain a combination of both. When you create a subnet, you
specify whether it supports IPv4 or IPv6 addresses.
You can configure the following options when you create a subnet:
● Gateway servers that route outgoing packets and gateway priority.
● Maximum transmission unit (MTU) that network interfaces in the subnet will use for network communications.
● SmartConnect service address, which is the IP address on which the SmartConnect module listens for DNS requests on this
subnet.
● SmartConnect service name, which is displayed when you create or modify a subnet. The SmartConnect service name field
is an optional field to answer nameserver, Start of Authority, and other DNS queries.
● VLAN tagging to allow the cluster to participate in multiple virtual networks.
How you set up your external network subnets depends on your network topology. For example, in a basic network topology
where all client-node communication occurs through direct connections, only a single external subnet is required. In another
example, if you want clients to connect through both IPv4 and IPv6 addresses, you must configure multiple subnets.
Related concepts
VLANs
Managing external network subnets
IPv6 support
VLANs
Virtual LAN (VLAN) tagging is an optional setting that enables a cluster to participate in multiple virtual networks.
You can partition a physical network into multiple broadcast domains, or virtual local area networks (VLANs). You can enable a
cluster to participate in a VLAN which allows multiple cluster subnet support without multiple network switches; one physical
switch enables multiple virtual subnets.
VLAN tagging inserts an ID into packet headers. The switch refers to the ID to identify from which VLAN the packet originated
and to which network interface a packet should be sent.
Related tasks
Enable or disable VLAN tagging
IP address pools
IP address pools are assigned to a subnet and consist of one or more IP address ranges. You can partition nodes and network
interfaces into logical IP address pools. IP address pools are also utilized when configuring SmartConnect DNS zones and client
connection management.
Each IP address pool belongs to a single subnet. Multiple pools for a single subnet are available only if you activate a
SmartConnect Advanced license.
The IP address ranges assigned to a pool must be unique and belong to the IP address family (IPv4 or IPv6) specified by the
subnet that contains the pool.
You can add network interfaces to IP address pools to associate address ranges with a node or a group of nodes. For example,
based on the network traffic that you expect, you might decide to establish one IP address pool for storage nodes and another
for accelerator nodes.
SmartConnect settings that manage DNS query responses and client connections are configured at the IP address pool level.
Related concepts
Managing IP address pools
Link aggregation
Link aggregation, also known as network interface card (NIC) aggregation, combines the network interfaces on a physical node
into a single, logical connection to provide improved network throughput.
You can add network interfaces to an IP address pool singly or as an aggregate. A link aggregation mode is selected on a
per-pool basis and applies to all aggregated network interfaces in the IP address pool. The link aggregation mode determines
how traffic is balanced and routed among aggregated network interfaces.
Related concepts
Managing network interface members
SmartConnect module
The SmartConnect module specifies how the cluster DNS server handles connection requests from clients and the policies that
assign IP addresses to network interfaces, including failover and rebalancing.
You can think of SmartConnect as a limited implementation of a custom DNS server. SmartConnect answers only for the
SmartConnect zone names or aliases that are configured on it. Settings and policies that are configured for SmartConnect are
applied per IP address pool.
NOTE: Enable gratuitous Address Resolution Protocol (gratuitous ARP, or GARP) on the network switch to ensure
consistent connectivity.
You can configure basic and advanced SmartConnect settings.
SmartConnect Basic
SmartConnect Basic is included with OneFS as a standard feature and does not require a license. SmartConnect Basic supports
the following:
● Specifying the DNS zone.
● Round-robin connection balancing method only
● Specifying a service subnet to answer DNS requests.
● Viewing the current status of nodes in a specified network pool.
SmartConnect Basic enables you to add two SmartConnect Service IP addresses to a subnet.
SmartConnect Basic has the following limitations to IP address pool configuration:
● You may only specify a static IP address allocation policy.
● You cannot specify an IP address failover policy.
● You cannot specify an IP address rebalance policy.
● You cannot create more than two IP address pools per network subnet.
SmartConnect Advanced
SmartConnect Advanced extends the settings available from SmartConnect Basic. It requires an active license. SmartConnect
Advanced supports the following settings:
● Round-robin, CPU utilization, connection counting, and throughput balancing methods
● Static and dynamic IP address allocation
SmartConnect Advanced enables you to add a maximum of six SmartConnect Service IP addresses per subnet.
SmartConnect Advanced enables you to specify the following IP address pool configuration options:
● You can define an IP address failover policy for the IP address pool.
● You can define an IP address rebalance policy for the IP address pool.
● SmartConnect Advanced supports multiple IP address pools per external subnet to enable multiple DNS zones within a single
subnet.
Related concepts
Managing SmartConnect Settings
SmartConnect Multi-SSIP
OneFS supports defining more than one SmartConnect Service IP (SSIP) per subnet. Support for multiple SmartConnect
Service IPs (Multi-SSIP) ensures that client connections continue uninterrupted if an SSIP becomes unavailable.
The additional SSIPs provide fault tolerance and a failover mechanism to ensure continued load balancing of clients according to
the selected policy. Though the additional SSIPs are in place for failover, they are active and respond to DNS server requests.
The SmartConnect Basic license allows defining 2 SSIPs per subnet. The SmartConnect Advanced license allows defining up to
6 SSIPs per subnet.
NOTE: SmartConnect Multi-SSIP is not an additional layer of load balancing for client connections: additional SSIPs only
provide redundancy and reduce failure points in the client connection sequence. Do not configure the site DNS server to
perform load balancing for the SSIPs. Allow OneFS to perform load balancing through the selected SmartConnect policy to
ensure effective load balancing.
Configure DNS servers for SSIP failover to ensure that the next SSIP is contacted only if the first SSIP connection times out.
If the SSIPs are not configured in a failover sequence, the SSIP load balancing policy resets each time a new SSIP is contacted.
The SSIPs function independently: they do not track the current distribution status of the other SSIPs.
Configuring IP addresses as failover-only addresses is not supported on all DNS servers. To support Multi-SSIP as a failover
only option, it is recommended that you use a DNS server that supports failover addresses. If a DNS server does not support
failover addresses, Multi-SSIP still provides advantages over a single SSIP. However, increasing the number of SSIPs may affect
SmartConnect's ability to load balance.
NOTE: If the DNS server does not support failover addresses, test Multi-SSIP in a lab environment that mimics the
production environment to confirm the impact on SmartConnect's load balancing for a specific workflow. Only after
confirming workflow impacts in a lab environment should you update a production cluster.
You can configure a SmartConnect DNS zone name for each IP address pool. The zone name must be a fully qualified domain
name. Add a new name server (NS) record that references the SmartConnect service IP address in the existing authoritative
DNS zone that contains the cluster. Provide a zone delegation to the fully qualified domain name (FQDN) of the SmartConnect
zone in your DNS infrastructure.
If you have a SmartConnect Advanced license, you can also specify a list of alternate SmartConnect DNS zone names for the IP
address pool.
When a client connects to the cluster through a SmartConnect DNS zone:
● SmartConnect handles the incoming DNS requests on behalf of the IP address pool.
● The service subnet distributes incoming DNS requests according to the connection balancing policy of the pool.
NOTE: Using SmartConnect zone aliases is recommended for making clusters accessible using multiple domain names. Use
of CNAMES is not recommended.
Related tasks
Modify a SmartConnect DNS zone
NOTE: SmartConnect requires that you add a new name server (NS) record that references the SmartConnect service IP
address in the existing authoritative DNS zone that contains the cluster. You must also provide a zone delegation to the
fully qualified domain name (FQDN) of the SmartConnect zone.
Related tasks
Configure a SmartConnect service IP address
Suspend or resume a node
IP address allocation
The IP address allocation policy specifies how IP addresses in the pool are assigned to an available network interface.
You can specify whether to use static or dynamic allocation.
Static Assigns one IP address to each network interface added to the IP address pool, but does not guarantee
that all IP addresses are assigned.
Once assigned, the network interface keeps the IP address indefinitely, even if the network interface
becomes unavailable. To release the IP address, remove the network interface from the pool or remove it
from the node.
Without a license for SmartConnect Advanced, static is the only method available for IP address
allocation.
Dynamic Assigns IP addresses to each network interface added to the IP address pool until all IP addresses are
assigned. This guarantees a response when clients connect to any IP address in the pool.
If a network interface becomes unavailable, its IP addresses are automatically moved to other available
network interfaces in the pool as determined by the IP address failover policy.
This method is only available with a license for SmartConnect Advanced.
Related references
Supported IP allocation methods
Allocation recommendations based on file sharing protocols
Related tasks
Configure IP address allocation
IP address failover
When a network interface becomes unavailable, the IP address failover policy specifies how to handle the IP addresses that
were assigned to the network interface.
To define an IP address failover policy, you must have a license for SmartConnect Advanced, and the IP address allocation policy
must be set to dynamic. Dynamic IP allocation ensures that all of the IP addresses in the pool are assigned to available network
interfaces.
When a network interface becomes unavailable, the IP addresses that were assigned to it are redistributed to available
network interfaces according to the IP address failover policy. Subsequent client connections are directed to the new network
interfaces.
You can select one of the following connection balancing methods to determine how the IP address failover policy selects which
network interface receives a redistributed IP address:
● Round-robin
● Connection count
● Network throughput
● CPU usage
Related tasks
Configure an IP failover policy
Connection balancing
The connection balancing policy determines how the DNS server handles client connections to the cluster.
You can specify one of the following balancing methods:
Round-robin Selects the next available network interface on a rotating basis. This is the default method. Without a
SmartConnect license for advanced settings, this is the only method available for load balancing.
Connection count Determines the number of open TCP connections on each available network interface and selects the
network interface with the fewest client connections.
Network throughput Determines the average throughput on each available network interface and selects the network
interface with the lowest network interface load.
CPU usage Determines the average CPU utilization on each available network interface and selects the network
interface with the lightest processor usage.
Related references
Supported connection balancing methods
Related tasks
Configure a connection balancing policy
IP address rebalancing
The IP address rebalance policy specifies when to redistribute IP addresses if one or more previously unavailable network
interfaces becomes available again.
To define an IP address rebalance policy, you must have a license for SmartConnect Advanced, and the IP address allocation
policy must be set to dynamic. Dynamic IP addresses allocation ensures that all of the IP addresses in the pool are assigned to
available network interfaces.
You can set rebalancing to occur manually or automatically:
Manual Does not redistribute IP addresses until you manually start the rebalancing process.
Upon rebalancing, IP addresses will be redistributed according to the connection balancing method
specified by the IP address failover policy defined for the IP address pool.
Automatic Automatically redistributes IP addresses according to the connection balancing method specified by the IP
address failover policy defined for the IP address pool.
Automatic rebalancing may also be triggered by changes to cluster nodes, network interfaces, or the
configuration of the external network.
NOTE: Rebalancing can disrupt client connections. Ensure the client workflow on the IP address pool
is appropriate for automatic rebalancing.
Related tasks
Manually rebalance IP addresses
SmartConnect diagnostics
You can view information about the status of the nodes in a network pool.
SmartConnect collects information about the status of nodes in network pools. Use the CLI command isi network pools
status <network pool id> to view whether each node in the network pool is operating optimally, needs attention, or is down.
The format of <network pool id> is [groupnet ID].subnetID.poolID.
The network pool status report displays summary information about the network pool and node status details:
● If all nodes are operating optimally, only summary information about the network pool displays.
● If some nodes are down or need attention, network pool summary and detailed information about the affected
nodes are displayed.
Use the --show-all option to display network pool summary information and detailed information for all the nodes in the
network pool.
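For example, using the default groupnet, subnet, and pool names, a minimal sketch:

isi network pools status groupnet0.subnet0.pool0 --show-all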
Related concepts
Managing node provisioning rules
Routing options
OneFS supports source-based routing and static routes which allow for more granular control of the direction of outgoing client
traffic on the cluster.
If no routing options are defined, by default, outgoing client traffic on the cluster is routed through the default gateway, which
is the gateway with the lowest priority setting on the node. If traffic is being routed to a local subnet and does not need to route
through a gateway, the traffic will go directly out through an interface on that subnet.
Related concepts
Managing routing options
Source-based routing
Source-based routing (SBR) selects which gateway to direct outgoing client traffic through based on the source IP address in
each packet header.
When enabled, source-based routing automatically scans your network configuration to create client traffic rules. If you modify
your network configuration, for example, changing the IP address of a gateway server, source-based routing adjusts the rules.
Source-based routing is applied across the entire cluster and does not support the IPv6 protocol.
In the following example, you enable source-based routing on a PowerScale cluster that is connected to SubnetA and SubnetB.
Each subnet is configured with a SmartConnect zone and a gateway, also labeled A and B. When a client on SubnetA makes
a request to SmartConnect ZoneB, the response originates from ZoneB. The result is a ZoneB address as the source IP in the
packet header, and the response is routed through GatewayB. Without source-based routing, the default route is destination-
based, so the response is routed through GatewayA.
In another example, a client on SubnetC, which is not connected to the PowerScale cluster, makes a request to SmartConnect
ZoneA and ZoneB. The response from ZoneA is routed through GatewayA, and the response from ZoneB is routed through
GatewayB. In other words, the traffic is split between gateways. Without source-based routing, both responses are routed
through the same gateway.
Source-based routing is disabled by default. Enabling or disabling source-based routing goes into effect immediately. Packets
in transit continue on their original courses, and subsequent traffic is routed based on the status change. If the status of
source-based routing changes during transmission, transactions that are composed of multiple packets might be disrupted or
delayed.
If there is more than one matching route, rules that are created from static routes are evaluated first. If both source-based routing and static routes are configured, the static routes always take priority for traffic that matches the static routes.
Consider enabling source-based routing if you have a large network with a complex topology. For example, if your network is a
multitenant environment with several gateways, traffic is more efficiently distributed with source-based routing.
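Source-based routing is enabled on the external network configuration. A minimal sketch, assuming the isi network external modify command exposes an --sbr option (verify the exact option name with isi network external modify --help):

isi network external modify --sbr true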
Related tasks
Enable or disable source-based routing
Static routing
A static route directs outgoing client traffic to a specified gateway based on the IP address of the client connection.
You configure static routes by IP address pool, and each route applies to all nodes that have network interfaces as IP address
pool members.
You might configure static routing in order to connect to networks that are unavailable through the default routes or if you have
a small network that only requires one or two routes.
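Static routes are added per IP address pool from the CLI. A minimal sketch, assuming the isi network pools modify command accepts an --add-static-routes option in subnet/prefixlen-gateway form (both the option name and the route format should be verified with isi network pools modify --help; the addresses are examples):

isi network pools modify groupnet0.subnet0.pool0 --add-static-routes 192.0.2.0/24-198.51.100.1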
Related tasks
Add or remove a static route
Host-based firewall
The OneFS host-based firewall controls inbound traffic on the front-end network. You can enable default global firewall policies
that provide basic protection on the OneFS default ports. You can create custom policies and custom rules that define a firewall
for your specific network management and security requirements.
You can manage the firewall policies using either the command-line interface or the Web UI. In either interface, you can:
● Modify existing policies and create policies.
● Clone existing policies and edit the clones.
● Reset global policies to original installed defaults.
● Create and modify rules.
● Assign policies to subnets and network pools.
Firewall management requires the ISI_PRIV_FIREWALL privilege.
● The integrated SystemAdmin role is granted the ISI_PRIV_FIREWALL privilege with write permission.
● The integrated AuditAdmin role is granted the ISI_PRIV_FIREWALL privilege with read permission.
NOTE: The firewall uses the FreeBSD ipfw kernel module, which is the same module that source-based routing (SBR)
uses. The two features use different partitions in the same ipfw table. You can enable and disable the firewall and SBR
independently.
Firewall policies
The firewall consists of policies that you apply to specified subnets or network pools.
A policy is a collection of rules that filters inbound packets. A rule can filter packets on the protocol, source address, source
port, and destination port. Each rule defines an action to take when a packet matches the rule. Each policy also has a defined
default action. The available actions are:
● allow—Accept the packet.
● deny—Silently drop the packet.
● reject—Drop the packet and send an error code to the sender.
To make a policy take effect, you associate the policy to one or more network pools or subnets. Use either the Web UI or the
isi network firewall policies modify command with the --add-pools or --add-subnets option.
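For example, to apply a custom policy to the default network pool, a minimal sketch using the options named above (the policy name mypolicy is hypothetical):

isi network firewall policies modify mypolicy --add-pools groupnet0.subnet0.pool0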
Global policies
The firewall comes with predefined global policies. You can modify the global policies. You can reset the global policies back to
their original installed state.
The following table describes the global policies that are installed with OneFS.
Policy Summary
default_pools_policy Rules for the inbound default ports for TCP and UDP services in OneFS. For a list of default
ports, see the "Network exposure" section in the "Product and Subsystem Security" chapter of
the OneFS Security Configuration Guide.
default_subnets_policy Rules for:
● DNS port 53
● Rule for ICMP
● Rule for ICMP6
Custom policies
You can create custom policies. As a convenience, you can clone any policy and edit the clone to create a custom policy. You
have complete control over the rules in custom policies.
Firewall rules
Firewall rules filter incoming network packets and define specific actions to take based on source network, source port,
destination port, and protocol on the cluster.
The ordering of rules in a policy can make a difference in the outcome. Each rule in a policy has an integer ID. Rules are applied
to a packet by ascending ID. Filtering stops at the first match. In general, you should order rules from most restrictive to least
restrictive.
You can change the ordering of rules in a policy by editing the policy. The Web UI lists all the rules in indexed order and makes it
convenient to rearrange them.
Maximum settings
The following predefined system settings affect the total permitted size of the firewall.
Table 27. System limits that affect firewall (continued)
Name: MAX_INACTIVE_POLICIES
Description: Maximum number of policies that are not applied to any network subnet or pool. These policies are not written into the ipfw table.
Value: 200
For most FTP clients, you must configure the client in FTP active mode. You should also check the firewall settings on the
client.
Modify the internal IP address range
Each internal network requires an IP address range. The ranges should have a sufficient number of IP addresses for present
operating conditions as well as future expansion and addition of nodes. You can add, remove, or migrate IP addresses for both
the initial internal network (int-a) and secondary internal network (int-b/failover).
1. Click Cluster Management > Network Configuration > Internal Network.
2. In the Internal Networks Settings area, select the network that you want to add IP addresses for.
● To select the int-a network, click int-a.
● To select the int-b/failover network, click int-b/Failover.
3. In the IP Ranges area, you can add, delete, or migrate your IP address ranges.
Ideally, the new range is contiguous with the previous one. For example, if your current IP address range is
192.168.160.60–192.168.160.162, the new range should start with 192.168.160.163.
4. Click Submit.
5. Restart the cluster, if needed.
● If you remove any IP addresses that are currently in use, you must restart the cluster.
● If the IP addresses that you add are within the internal network netmask, you do not need to restart the cluster.
● If you change the internal network netmask, you must restart the cluster.
● If you migrate the IP address ranges, you must restart the cluster.
Related concepts
Internal IP address ranges
4. On the Add IP Range dialog box, enter the IP address at the low end of the range in the first IP range field.
5. In the second IP range field, type the IP address at the high end of the range.
Ensure that there is no overlap of IP addresses between the int-a and int-b/failover network ranges. For example, if the IP
address range for the int-a network is 192.168.1.1–192.168.1.100, specify a range of 192.168.2.1 - 192.168.2.100 for the int-b
network.
6. Click Submit.
7. In the IP Ranges area for the Failover network, click Add range.
Add an IP address range for the failover network, ensuring there is no overlap with the int-a network or the int-b network.
The Edit Internal Network page appears, and the new IP address range appears in the IP Ranges list.
8. In the Settings area, specify a valid netmask. Ensure that there is no overlap between the IP address ranges for the int-b
network and the failover network.
We recommend that the netmask values you specify for int-a and int-b/failover are the same.
9. In the Settings area, for State, click Enable to enable the int-b and failover networks.
10. Click Submit.
The Confirm Cluster Reboot dialog box appears.
11. Restart the cluster by clicking Yes.
Related concepts
Internal network failover
Managing IPv6
You can enable, disable, and configure IPv6 using the CLI.
3. Run isi network external modify with appropriate IPv6 options.
For example, to configure IPv6 to discover and apply network settings from the IPv6 Router Advertisements, run isi network external modify --ipv6-auto-config-enabled true.
The IPv6 section of the external network settings output lists the current values of these options, similar to the following:
IPv6 Settings:
IPv6 Enabled: True
IPv6 Auto Configuration Enabled: False
IPv6 Generate Link Local: False
IPv6 Accept Redirects: False
IPv6 DAD: Disabled
IPv6 SSIP Perform DAD: False
2. To view whether IPv6 duplicate address detection (DAD) is configured on a network pool, run the isi network pools view
command.
Managing groupnets
You can create and manage groupnets on a cluster.
Create a groupnet
You can create a groupnet and configure DNS client settings.
1. Click Cluster Management > Networking Configuration > External Network.
2. Click Add a groupnet.
The Create Groupnet window opens.
3. In the Name field, type a name for the groupnet that is unique in the system.
The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces
or other punctuation.
4. Optional: In the Description field, type a descriptive comment about the groupnet.
The description cannot exceed 128 characters.
5. In the DNS Settings area, configure the following DNS settings you want to apply to the groupnet:
● DNS Servers
● DNS Search Suffixes
● DNS Resolver Rotate
● Server-side DNS Search
● DNS Cache
6. Click Add Groupnet.
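An equivalent groupnet can also be created from the CLI. A minimal sketch, assuming the isi network groupnets create command and a --dns-servers option (verify the option names with isi network groupnets create --help; the groupnet name and addresses are examples):

isi network groupnets create groupnet1 --dns-servers 192.0.2.10,192.0.2.11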
Related concepts
Groupnets
DNS name resolution
Related references
DNS settings
DNS settings
You can assign DNS servers to a groupnet and modify DNS settings that specify DNS server behavior.
Setting Description
DNS Servers Sets a list of DNS IP addresses. Nodes issue DNS requests to
these IP addresses.
You cannot specify more than three DNS servers.
DNS Search Suffixes Sets the list of DNS search suffixes. Suffixes are appended to
domain names that are not fully qualified.
You cannot specify more than six suffixes.
Enable DNS resolver rotate Sets the DNS resolver to rotate or round-robin across DNS
servers.
Enable DNS server-side search Specifies whether server-side DNS searching is enabled,
which appends DNS search lists to client DNS inquiries
handled by a SmartConnect service IP address.
Enable DNS cache Specifies whether DNS caching for the groupnet is enabled.
Related concepts
Groupnets
Related tasks
Create a groupnet
Modify a groupnet
You can modify groupnet attributes including the name, supported DNS servers, and DNS configuration settings.
1. Click Cluster Management > Networking Configuration > External Network.
2. Click the View/Edit button in the row of the groupnet you want to modify.
3. From the View Groupnet Details window, click Edit.
4. From the Edit Groupnet Details window, modify the groupnet settings as needed.
5. Click Save changes.
Related concepts
Groupnets
DNS name resolution
Related references
DNS settings
Delete a groupnet
You can delete a groupnet from the system, unless it is associated with an access zone, an authentication provider, or it is the
default groupnet. Removal of the groupnet from the system might affect several other areas of OneFS and should be performed
with caution.
In several cases, the association between a groupnet and another OneFS component, such as access zones or authentication
providers, is absolute. You cannot modify these components so that they become associated with another groupnet.
If you need to delete a groupnet, we recommend that you complete these tasks in the following order:
1. Delete IP address pools in subnets associated with the groupnet.
2. Delete subnets associated with the groupnet.
3. Delete authentication providers associated with the groupnet.
4. Delete access zones associated with the groupnet.
1. Click Cluster Management > Networking Configuration > External Network.
2. Click the More button in the row of the groupnet you want to delete, and then click Delete Groupnet.
3. At the Confirm Delete dialog box, click Delete.
If you did not first delete access zones associated with the groupnet, the deletion fails, and the system displays an error.
Related concepts
Groupnets
View groupnets
You can view a list of all groupnets on the system and view the details of a specific groupnet.
1. Click Cluster Management > Networking Configuration > External Network.
The External Network table displays all groupnets in the system and displays the following attributes:
● Groupnet name
● DNS servers assigned to the groupnet
● The type of groupnet
● Groupnet description
2. Click the View/Edit button in a row to view the current settings for that groupnet.
The View Groupnet Details dialog box opens and displays the following settings:
● Groupnet name
● Groupnet description
● DNS servers assigned to the groupnet
● DNS search suffixes
● Whether DNS resolver is enabled
● Whether DNS search is enabled
● Whether DNS caching is enabled
3. Click the tree arrow next to a groupnet name to view subnets assigned to the groupnet.
The table displays each subnet in a new row within the groupnet tree.
4. When you have finished viewing groupnet details, click Close.
Related concepts
Groupnets
Create a subnet
You can add a subnet to the external network. Subnets are created under a groupnet.
1. Click Cluster Management > Network Configuration > External Network.
2. Click More > Add Subnet next to the groupnet that will contain the new subnet.
The system displays the Create Subnet window.
3. In the Name field, specify the name of the new subnet.
The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces
or other punctuation.
4. Optional: In the Description field, type a descriptive comment about the subnet.
The comment can be no more than 128 characters.
5. From the IP family area, select one of the following IP address formats for the subnet:
● IPv4
● IPv6
All subnet settings and IP address pools added to the subnet must use the specified address format. You cannot modify the
address family once the subnet has been created.
6. In the Netmask field, specify a subnet mask or prefix length, depending on the IP family you selected.
● For an IPv4 subnet, type a dot-decimal octet (x.x.x.x) that represents the subnet mask.
● For an IPv6 subnet, type an integer (ranging from 1 to 128) that represents the network prefix length.
7. In the Gateway Address field, type the IP address of the gateway through which the cluster routes communications to
systems outside of the subnet.
8. In the Gateway Priority field, type the priority (integer) that determines which subnet gateway will be installed as the
default gateway on nodes that have more than one subnet.
A value of 1 represents the highest priority.
9. In the MTU list, type or select the size of the maximum transmission units the cluster uses in network communication. Any
numerical value is allowed, but must be compatible with your network and the configuration of all devices in the network
path. Common settings are 1500 (standard frames) and 9000 (jumbo frames).
Although OneFS supports both 1500 MTU and 9000 MTU, using a larger frame size for network traffic permits more
efficient communication on the external network between clients and cluster nodes. For example, if a subnet is connected
through a 10 GbE interface and NIC aggregation is configured for IP address pools in the subnet, we recommend that you
set the MTU to 9000. To benefit from using jumbo frames, all devices in the network path must be configured to use jumbo
frames.
10. If you plan to use SmartConnect for connection balancing, in the SmartConnect Service IP field, type the IP address that
will receive all incoming DNS requests for each IP address pool according to the client connection policy. You must have at
least one subnet configured with a SmartConnect service IP in order to use connection balancing.
11. In the SmartConnect Service Name field, specify the SmartConnect service name. The SmartConnect service name is an
optional field to answer nameserver, Start of Authority, and other DNS queries. It specifies the domain name corresponding
to the SmartConnect Service IP (SSIP) address, serving as the glue record in the DNS delegation tying the nameserver and
IP address.
12. In the Advanced Settings section, you can enable VLAN tagging if you want to enable the cluster to participate in virtual
networks.
NOTE: Configuring a VLAN requires advanced knowledge of network switches. Consult your network switch
documentation before configuring your cluster for a VLAN.
13. If you enable VLAN tagging, specify a VLAN ID that corresponds to the ID number for the VLAN set on the switch, with a
value from 1 through 4094.
14. Click Remove IP to remove a hardware load balancing IP.
15. Click Add Subnet.
Related concepts
Subnets
IPv6 support
Modify a subnet
You can modify a subnet on the external network.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the subnet that you want to modify.
The system displays the View Subnet Details window.
3. Click Edit.
The system displays the Edit Subnet Details window.
4. Modify the subnet settings, and then click Save Changes.
Related concepts
Subnets
Delete a subnet
You can delete a subnet from the external network.
Deleting a subnet that is in use can prevent access to the cluster. Client connections to the cluster through any IP address
pool that belongs to the deleted subnet will be terminated.
1. Click Cluster Management > Network Configuration > External Network.
2. Click More > Delete Subnet next to the subnet that you want to delete.
3. At the confirmation prompt, click Delete.
Related concepts
Subnets
DNS request handling
Related concepts
Subnets
VLANs
Related concepts
IP address pools
3. Click Edit.
The system displays the Edit Pool Details window.
4. Modify the IP address pool settings, and then click Save Changes.
Related concepts
IP address pools
Managing SmartConnect Settings
You can configure SmartConnect settings within each IP address pool on the cluster, and view the status of nodes in a network
pool.
Related concepts
SmartConnect zones and aliases
Related concepts
DNS name resolution
Related tasks
Configure a SmartConnect service IP address
Suspend or resume a node
You can suspend and resume SmartConnect DNS responses for a node.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the IP address pool that you want to modify.
The system displays the View Pool Details window.
3. Click Edit.
The system displays the Edit Pool Details window.
4. To suspend a node:
a. In the SmartConnect Suspended Nodes area, click Suspend Nodes.
The system displays the Suspend Nodes window.
b. Select a logical node number (LNN) from the Available table, and then click Add.
c. Click Suspend Nodes.
d. At the confirmation window, click Confirm.
5. To resume a node:
a. From the SmartConnect Suspended Nodes table, click the Resume button next to the node number you want to
resume.
b. At the confirmation window, click Confirm.
6. Click Close.
Related concepts
DNS name resolution
Related concepts
IP address allocation
Static Assigns one IP address to each network interface added to the IP address pool, but does not guarantee
that all IP addresses are assigned.
Once assigned, the network interface keeps the IP address indefinitely, even if the network interface
becomes unavailable. To release the IP address, remove the network interface from the pool or remove it
from the cluster.
Without a license for SmartConnect Advanced, static is the only method available for IP address
allocation.
Dynamic Assigns IP addresses to each network interface added to the IP address pool until all IP addresses are
assigned. This guarantees a response when clients connect to any IP address in the pool.
If a network interface becomes unavailable, its IP addresses are automatically moved to other available
network interfaces in the pool as determined by the IP address failover policy.
This method is only available with a license for SmartConnect Advanced.
Related concepts
IP address allocation
Related concepts
Connection balancing
Supported connection balancing methods
The connection balancing policy determines how the DNS server handles client connections to the cluster.
You can specify one of the following balancing methods:
Round-robin Selects the next available node on a rotating basis. This is the default method. Without a SmartConnect
license for advanced settings, this is the only method available for load balancing.
Connection count Determines the number of open TCP connections on each available node and selects the node with the
fewest client connections.
Network throughput Determines the average throughput on each available node and selects the node with the lowest network
interface load.
CPU usage Determines the average CPU utilization on each available node and selects the node with the lightest
processor usage.
Related concepts
Connection balancing
Related concepts
IP address failover
Related concepts
IP address rebalancing
Manual Does not redistribute IP addresses until you manually issue a rebalance command through the command-
line interface.
Upon rebalancing, IP addresses will be redistributed according to the connection balancing method
specified by the IP address failover policy defined for the IP address pool.
Automatic Automatically redistributes IP addresses according to the connection balancing method specified by the IP
address failover policy defined for the IP address pool.
Automatic rebalance may also be triggered by changes to cluster nodes, network interfaces, or the
configuration of the external network.
NOTE: Rebalancing can disrupt client connections. Ensure the client workflow on the IP address pool
is appropriate for automatic rebalancing.
Related concepts
IP address rebalancing
3. Click Edit.
The system displays the Edit Pool Details window.
4. To add a network interface to the IP address pool:
a. From the Pool Interface Members area, select the interface you want from the Available table.
If you add an aggregated interface to the pool, you cannot individually add any interfaces that are part of the aggregated
interface.
b. Click Add.
5. To remove a network interface from the IP address pool:
a. From the Pool Interface Members area, select the interface you want from the In Pool table.
b. Click Remove.
6. Click Save Changes.
Related concepts
Managing network interface members
Link aggregation
Link aggregation methods
Link Aggregation Control Protocol (LACP): Dynamic aggregation mode that supports the IEEE 802.3ad Link Aggregation Control Protocol (LACP). You can configure LACP at the switch level, which allows the node to negotiate interface aggregation with the switch. LACP balances outgoing traffic across the interfaces based on hashed protocol header information that includes the source and destination address and the VLAN tag, if available. This option is the default aggregation mode.
Loadbalance (FEC): Static aggregation method that accepts all incoming traffic and balances outgoing traffic over aggregated interfaces based on hashed protocol header information that includes source and destination addresses.
Active/Passive Failover: Static aggregation mode that switches to the next active interface when the primary interface becomes unavailable. The primary interface handles traffic until there is an interruption in communication. At that point, one of the secondary interfaces takes over the work of the primary.
Round-robin: Static aggregation mode that rotates connections through the nodes in a first-in, first-out sequence, handling all processes without priority. Balances outbound traffic across all active ports in the aggregated link and accepts inbound traffic on any port.
NOTE: This method is not recommended if your cluster is handling TCP/IP workloads.
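The aggregation mode can also be selected from the CLI. A sketch follows; the --aggregation-mode option name and value spellings are assumptions, so confirm them with isi network pools modify --help:
# Assumed example: use LACP for the aggregated interfaces in the pool
isi network pools modify groupnet0.subnet0.pool0 --aggregation-mode=lacp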
Related tasks
Configure link aggregation
Related tasks
Configure link aggregation
6. From the Node Type list, select one of the following node types:
● Any
● Storage
● Accelerator
● Backup accelerator
The rule is applied when a node matching the selected type is added to the cluster.
7. Click Add rule.
Related concepts
Node provisioning rules
Managing routing options
You can provide additional control of the direction of outgoing client traffic through source-based routing or static route
configuration.
If both source-based routing and static routes are configured, the static routes will take priority for traffic that matches the
static routes.
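Both options can also be managed from the CLI. This is a sketch only; the option names and the static-route format are assumptions to confirm with the isi network --help output:
# Assumed example: enable source-based routing cluster-wide
isi network external modify --sbr=true
# Assumed example: add a static route (subnet/prefix-gateway) to an IP address pool
isi network pools modify groupnet0.subnet0.pool0 --add-static-routes=192.168.100.0/24-192.168.205.2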
Related concepts
Source-based routing
Related concepts
Static routing
Modify DNS cache settings
You can modify global settings that are applied to the DNS cache of each groupnet that has DNS caching enabled.
1. Click Cluster Management > Network Configuration > DNS Cache.
2. Modify the DNS cache settings, and then click Save Changes.
TTL No Error Minimum: Specifies the lower boundary on time-to-live for cache hits. The default value is 30 seconds.
TTL No Error Maximum: Specifies the upper boundary on time-to-live for cache hits. The default value is 3600 seconds.
TTL Non-existent Domain Minimum: Specifies the lower boundary on time-to-live for nxdomain. The default value is 15 seconds.
TTL Non-existent Domain Maximum: Specifies the upper boundary on time-to-live for nxdomain. The default value is 3600 seconds.
TTL Other Failures Minimum: Specifies the lower boundary on time-to-live for non-nxdomain failures. The default value is 0 seconds.
TTL Other Failures Maximum: Specifies the upper boundary on time-to-live for non-nxdomain failures. The default value is 60 seconds.
TTL Lower Limit For Server Failures: Specifies the lower boundary on time-to-live for DNS server failures. The default value is 300 seconds.
TTL Upper Limit For Server Failures: Specifies the upper boundary on time-to-live for DNS server failures. The default value is 3600 seconds.
Eager Refresh: Specifies the lead time to refresh cache entries that are nearing expiration. The default value is 0 seconds.
Cache Entry Limit: Specifies the maximum number of entries that the DNS cache can contain. The default value is 65536 entries.
Test Ping Delta: Specifies the delta for checking the cbind cluster health. The default value is 30 seconds.
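These settings can also be changed from the CLI. A minimal sketch, assuming the isi network dnscache command exposes matching options (verify the option names with isi network dnscache modify --help):
# Assumed example: raise the cache entry limit and the no-error TTL ceiling
isi network dnscache modify --cache-entry-limit=65536 --ttl-max-noerror=3600
# View the current DNS cache settings
isi network dnscache view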
Modify the OneFS firewall service
You can modify the OneFS firewall service settings. When the firewall service is enabled, all external network traffic is filtered according to the firewall policies and rules.
1. To modify the firewall service, click Cluster Management > Firewall Configuration > Settings.
2. Click the toggle switch to enable or disable the firewall service on the cluster.
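The firewall service can also be toggled from the CLI. This is a sketch, assuming the OneFS 9.5 isi network firewall command family; confirm option names with isi network firewall settings modify --help:
# Assumed example: enable the OneFS firewall service
isi network firewall settings modify --enabled=true
# Review the configured firewall policies
isi network firewall policies list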
Modify a firewall rule
You can modify an existing firewall rule.
1. To modify a firewall rule, click Cluster Management > Firewall Configuration.
2. On the Firewall Policies tab, expand the policy that contains the rule that you want to modify. Locate the rule and click the pencil icon to edit the rule.
3. Click Save.
Managing TCP ports
You can modify the list of client TCP ports available to the external network.
OneFS uses TCP ports that are configured in the WebUI under Cluster Management > Network Configuration > Settings to drop TCP sessions on failover. This configuration can assist with protocol failover for the following reason: if a client is connected to IP X on node Y and the IP moves, the TCP connection may take some time to notice the change and fail over. But if you configure the port it is using (for example, port 111 for NFS), OneFS explicitly ends the TCP session, which reduces the time that is required for the connection to fail over.
30
NFS3oRDMA
This section contains the following topics:
Topics:
• RDMA support for NFSoRDMA
• Enable NFSoRDMA
• Disable NFSoRDMA
• View interface details of a node
• View IP address pool details
• Create an IP address pool with NFSoRDMA capabilities
• Modify an existing IP address pool
Enable NFSoRDMA
You can enable NFSoRDMA on clusters that have RDMA-capable NICs.
Log in to the OneFS web administration interface as a user with the administrator role.
1. Click Protocols > UNIX sharing (NFS) > Global settings.
2. In the Edit NFS global settings area, select the NFSoRDMA check box.
3. Click Save.
The NFSoRDMA feature is enabled.
If you enable NFSoRDMA and the cluster does not have RDMA-capable NICs, a warning appears, but the settings are saved.
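The same global setting can be changed from the CLI. This is a sketch only; the option name shown is an assumption and differs between releases, so check isi nfs settings global modify --help before using it:
# Assumed example: enable RDMA for NFSv3 globally
isi nfs settings global modify --nfsv3-rdma-enabled=true
# Confirm the current global NFS settings
isi nfs settings global view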
Disable NFSoRDMA
You can disable NFSoRDMA on the cluster.
Log in to the OneFS web administration interface as a user with the administrator role.
1. Click Protocols > UNIX sharing (NFS) > Global settings.
2. In the Edit NFS global settings area, clear the NFSoRDMA check box.
3. Click Save.
The NFSoRDMA feature is disabled.
Modify an existing IP address pool
You can enable NFSoRDMA on the pool by modifying an existing IP address pool on the external network.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the IP address pool that you want to view or modify.
The system displays the View Pool Details window.
3. Click Edit.
The system displays the Edit Pool Details window.
4. Select the Enable NFSoRDMA check box.
The Confirm Action dialog box appears with the following message:
On enabling NFSoRDMA on the pool, interfaces selected in the pool which are not NFSoRDMA enabled will be removed. Do you want to proceed?
31
Smart QoS
This section contains the following topics:
Topics:
• Smart QoS
• Workload monitoring
• Create a standard dataset
• Pin a workload
• Throttle a workload
• Additional information
Smart QoS
You can identify and monitor performance-related issues on the cluster using OneFS Smart QoS monitoring.
As clusters increase in size and the number of competing workloads places demands on system resources, more visibility is required to share cluster resources equitably. Smart QoS provides fine-grained accounting of dynamic resources, which helps you improve the utilization and performance of the cluster.
OneFS supports Smart QoS monitoring with several protocols, including NFS, SMB, and S3. You can use Smart QoS monitoring to define performance datasets that track any combination of directories, shares, users, clients, and access zones. You can view the associated performance statistics, including protocol operations, disk operations, read/write bandwidth, and CPU usage.
You can configure customized settings and filters to match specific workloads for a dataset that meets the required criteria. Reported statistics are refreshed every 30 seconds. The performance dataset data is available to you through the CLI and PAPI.
Workload monitoring
Workload monitoring is key to show-back and charge-back resource accounting.
A workload is a set of identification metrics and resource consumption metrics. For example, a workload record might show that the user bob in the zone System consumed 1.2 s of CPU, with 10 KB of bytes in and 20 MB of bytes out, and so on.
A dataset is a specification of identification metrics to aggregate workloads by, and the workloads collected that match that specification. For instance, the workload above would be in a dataset that specified the identification metrics {username, zone_name}.
A filter is a method for including only workloads that match specific identification metrics. For example, take the following workloads for a dataset with the filter {zone_name:System}:
● {username:bob , zone_name:System} would be included.
● {username:mary , zone_name:System} would be included.
● {username:bob , zone_name:Quarantine} would not be included.
A performance dataset automatically collects a list of the top workloads, with pinning and filtering to allow further customization
to that list.
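The CLI workflow mirrors these concepts. A sketch, assuming the isi performance and isi statistics workload commands available in recent releases (verify subcommand and option names with --help; the dataset name ds_by_user is a placeholder):
# Assumed example: create a dataset aggregated by username and zone_name
isi performance datasets create --name=ds_by_user username zone_name
# Assumed example: list the top workloads collected for that dataset
isi statistics workload --dataset=ds_by_user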
View datasets
You can view the list of configured performance datasets.
1. Click Cluster management > Smart QoS.
2. Select the tab for an individual dataset to view the details of that dataset.
Delete dataset
You can delete a dataset.
1. Click Cluster management > Smart QoS.
2. On the tab for the dataset that you want to delete, click Delete Dataset.
3. Click Delete to confirm.
Pin a workload
If you want a workload to be always visible, you can pin it.
1. Click Cluster management > Smart QoS.
2. On the tab of the dataset that contains the workload that you want to pin, click Pin Workload.
3. Select the identification metrics and set the optional Protocol Ops limit to throttle the workload.
4. If you are throttling the workload, click Pin and Throttle Workload.
5. If you are not throttling the workload, click Pin workload.
6. If you have other workloads to pin, check the box for Pin another workload before clicking Pin and Throttle Workload or
Pin workload.
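Pinning is also available from the CLI. A sketch only; the isi performance workloads pin syntax shown here is an assumption to confirm with --help, and ds_by_user is a placeholder dataset name:
# Assumed example: always report the workload for user bob in the System zone
isi performance workloads pin ds_by_user username:bob zone_name:System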
Unpin a workload
You can unpin a workload.
1. Click Cluster management > Smart QoS.
2. On the tab for the dataset that has the pinned workload that you want to unpin, in the Actions column, click Unpin.
3. Click Unpin to confirm.
Additional information
These pointers provide you with some tips regarding the feature.
● Name lookup failures, for example UID-to-username mappings, are reported in an additional column in the statistics output.
● Statistics are updated every 30 s. A newly created dataset does not show up in the statistics until the next update occurs. Similarly, a recently deleted dataset might continue to show up until that update occurs.
● When upgrading, some identification metrics may not be available until the upgrade is committed.
● export_id and share_name metrics can be combined in a dataset.
○ A dataset with both metrics lists workloads with either export_id or share_name.
○ A dataset with only the share_name metric excludes NFS workloads.
○ A dataset with only the export_id metric excludes SMB workloads.
● Paths and Non-Primary groups are only reported if they are pinned or have a filter applied.
● Paths and Non-Primary groups might result in work being accounted twice within the same dataset, as they can match multiple workloads. The total amount that is overaccounted within a dataset is aggregated into the Overaccounted workload.
Antivirus overview
You can scan the files that you store on a PowerScale cluster for viruses, malware, and other security threats by integrating
with third-party scanning services through the Internet Content Adaptation Protocol (ICAP) or the Common AntiVirus Agent
(CAVA).
OneFS sends files through ICAP or CAVA to a server running third-party antivirus scanning software. These servers are called
ICAP servers or CAVA servers. These servers scan files for viruses.
After a server scans a file, it notifies OneFS of whether the file is a threat. If a threat is detected, OneFS notifies system
administrators by creating an event, displaying near real-time summary information, and documenting the threat in an antivirus
scan report. You can configure OneFS to request that ICAP or CAVA servers attempt to repair infected files. You can also
configure OneFS to protect users against potentially dangerous files by truncating or quarantining infected files.
Before OneFS sends a file for scanning, it ensures that the scan is not redundant. If a scanned file has not been modified since
the last scan, the file will not be scanned again unless the virus database on the antivirus server has been updated since the last
scan.
NOTE: Antivirus scanning is available only on nodes in the cluster that are connected to the external network.
On-access scanning
You can configure OneFS to send files to be scanned before they are opened, after they are closed, or both. This can be done
through file access protocols such as SMB, NFS, and SSH. Sending files to be scanned after they are closed is faster but less
secure. Sending files to be scanned before they are opened is slower but more secure.
If OneFS is configured to ensure that files are scanned after they are closed, when a user creates or modifies a file on the
cluster, OneFS queues the file to be scanned. OneFS then sends the file to an ICAP or CAVA server to be scanned when
convenient. In this configuration, users can always access files without any delay. However, it is possible that after a user
modifies or creates a file, a second user might access the file before the file is scanned. If a virus was introduced to the file from
the first user, the second user will be able to access the infected file. Also, if an ICAP or CAVA server is unable to scan a file, the
file will still be accessible to users.
If OneFS ensures that files are scanned before they are opened, when a user attempts to download a file from the cluster,
OneFS first sends the file to an ICAP or CAVA server to be scanned. The file is not sent to the user until the scan is complete.
Scanning files before they are opened is more secure than scanning files after they are closed, because users can access only
scanned files. However, scanning files before they are opened requires users to wait for files to be scanned. You can also
configure OneFS to deny access to files that cannot be scanned by an ICAP or CAVA server, which can increase the delay. For
example, if no ICAP or CAVA servers are available, users will not be able to access any files until the servers become available
again.
If you configure OneFS to ensure that files are scanned before they are opened, it is recommended that you also configure
OneFS to ensure that files are scanned after they are closed. Scanning files as they are both opened and closed will not
necessarily improve security, but it will usually improve data availability when compared to scanning files only when they are
opened. If a user wants to access a file, the file may have already been scanned after the file was last modified, and will not
need to be scanned again if the antivirus server database has not been updated since the last scan.
NOTE: When scanning, do not exclude any file types (extensions). This ensures that any renamed files are caught.
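On-access behavior can also be set from the CLI. A sketch, assuming the isi antivirus settings command exposes scan-on-open and scan-on-close options (verify with isi antivirus settings modify --help):
# Assumed example: scan files both before they are opened and after they are closed
isi antivirus settings modify --scan-on-open=true --scan-on-close=true
# Confirm the current antivirus settings
isi antivirus settings view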
Related tasks
Configure on-access scanning settings
Related concepts
Managing ICAP antivirus policies
Related tasks
Create an antivirus policy
When practical, you can initiate an antivirus scan on files before they are committed to a WORM state.
Related concepts
Managing antivirus reports
Related tasks
Configure antivirus report retention settings
View antivirus reports
ICAP servers
The number of ICAP servers that are required to support a PowerScale cluster depends on how virus scanning is configured, the
amount of data a cluster processes, and the processing power of the ICAP servers.
If you intend to scan files exclusively through anti-virus scan policies, it is recommended that you have a minimum of two ICAP
servers per cluster. If you intend to scan files on access, it is recommended that you have at least one ICAP server for each
node in the cluster.
If you configure more than one ICAP server for a cluster, ensure that the processing power of each ICAP server is relatively
equal. OneFS distributes files to the ICAP servers on a rotating basis, regardless of the processing power of the ICAP servers. If
one server is more powerful than another, OneFS does not send more files to the more powerful server.
CAUTION: When files are sent from the cluster to an ICAP server, they are sent across the network in
cleartext. Ensure that the path from the cluster to the ICAP server is on a trusted network. Authentication
is not supported. If authentication is required between an ICAP client and ICAP server, hop-by-hop Proxy
Authentication must be used.
Related concepts
Managing ICAP servers
Related tasks
Add and connect to an ICAP server
CAVA servers
CAVA uses industry-standard Server Message Block (SMB) protocol versions 2 and 3 in a Microsoft Windows Server
environment. CAVA uses third-party antivirus software to identify and eliminate known viruses before they infect files on the
system.
You can use the CAVA calculator and the CAVA sizing tool to determine the number of antivirus servers that the system requires. It is recommended that you start with one Common Event Enabler (CEE) server per two nodes and adjust the number as needed. For information about the sizing tool and using CAVA on Windows platforms, see the Monitoring and Sizing the Antivirus Agent chapter in the latest version of the Dell Technologies CEE document Using the Common Event Enabler on Windows.
ICAP threat responses
OneFS and ICAP servers can react in one or more of the following ways when threats are detected:
Alert: All threats that are detected cause an event to be generated in OneFS at the warning level, regardless of the threat response configuration.
Repair: The ICAP server attempts to repair the infected file before returning the file to OneFS.
Quarantine: OneFS quarantines the infected file. A quarantined file cannot be accessed by any user. However, a quarantined file can be removed from quarantine by the root user if the root user is connected to the cluster through secure shell (SSH). If you back up your cluster through NDMP backup, quarantined files remain quarantined when the files are restored. If you replicate quarantined files to another PowerScale cluster, the quarantined files continue to be quarantined on the target cluster. However, transferring antivirus file attributes using SyncIQ is not supported.
NOTE: A potentially harmful file that was scanned and quarantined on the primary side is replicated on the target side without the quarantine flag. This means that the file is potentially accessible on the target, although read-only. Workaround: Switch the scan policy to scan-on-read during failover.
Quarantines operate independently of access control lists (ACLs).
Truncate: OneFS truncates the infected file. When a file is truncated, OneFS reduces the size of the file to zero bytes to render the file harmless.
You can configure OneFS and ICAP servers to react in one of the following ways when threats are detected:
Repair or quarantine: Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS quarantines the file. If the ICAP server repairs the file successfully, OneFS sends the file to the user. Repair or quarantine can be useful if you want to protect users from accessing infected files while retaining all data on a cluster.
Repair or truncate: Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS truncates the file. If the ICAP server repairs the file successfully, OneFS sends the file to the user. Repair or truncate can be useful if you do not care about retaining all data on your cluster, and you want to free storage space. However, data in infected files will be lost.
Alert only: Only generates an event for each infected file. It is recommended that you do not apply this setting.
Repair only: Attempts to repair infected files. Afterwards, OneFS sends the files to the user, whether or not the ICAP server repaired the files successfully. It is recommended that you do not apply this setting. If you only attempt to repair files, users will still be able to access infected files that cannot be repaired.
Quarantine: Quarantines all infected files. It is recommended that you do not apply this setting. If you quarantine files without attempting to repair them, you might deny access to infected files that could have been repaired.
Truncate: Truncates all infected files. It is recommended that you do not apply this setting. If you truncate files without attempting to repair them, you might delete data unnecessarily.
Related tasks
Configure antivirus threat response settings
CAVA threat responses
You configure CAVA threat responses in the antivirus software you use.
See your CAVA antivirus software documentation for information about how to configure your CAVA software to perform threat
handling.
Related concepts
On-access scanning
Related concepts
ICAP threat responses
Related concepts
Antivirus scan reports
Managing ICAP servers
Before you can send files to be scanned on an ICAP server, you must configure OneFS to connect to the server. You can test,
modify, and remove an ICAP server connection. You can also temporarily disconnect and reconnect to an ICAP server.
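The same tasks are available from the CLI through the isi antivirus servers commands. This is a sketch; the option names are assumptions to confirm with --help, and av.example.com is a placeholder hostname:
# List the ICAP servers that OneFS is configured to use
isi antivirus servers list
# Assumed example: register a new ICAP server
isi antivirus servers create icap://av.example.com --enabled=true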
Related concepts
ICAP servers
Related tasks
Test an ICAP server connection
Temporarily disconnect from an ICAP server
If you want to prevent OneFS from sending files to an ICAP server, but want to retain the ICAP server connection settings, you
can temporarily disconnect from the ICAP server.
1. Click Data Protection > Antivirus > ICAP Servers.
2. In the ICAP Servers table, in the row for an ICAP server, click View / Edit.
3. Click Edit.
4. Clear the Enable ICAP Server box and then click Save Changes.
Related concepts
ICAP servers
Related tasks
Reconnect to an ICAP server
Related concepts
ICAP servers
Related tasks
Temporarily disconnect from an ICAP server
Related concepts
ICAP servers
Related tasks
Add and connect to an ICAP server
Add and connect to a CAVA server
You can add and connect to a CAVA server. After a server is added, OneFS can send files to the server to be scanned for
viruses.
1. Click Data Protection > Antivirus > CAVA.
2. In the Servers area, click Add server.
3. Optional: To enable the CAVA server, select the Enable this server checkbox.
4. In the Create CAVA antivirus server dialog box, in the Server URL field, type the IPv4 or IPv6 address or URL of a CAVA
server.
5. In the Server name field, type the name of the server.
6. Click Add Server.
View an IP pool in a CAVA server
You can add an IP pool to a CAVA server. The purpose of creating an IP pool is to facilitate connections from antivirus applications. This dedicated IP pool should be used only by the antivirus applications. To achieve that, Dell EMC recommends that the IP ranges in this IP pool be exclusive and available only to the CAVA servers.
1. Click Data Protection > Antivirus > CAVA.
2. In the Settings area, view the IP Pool field.
3. Optional: To add an IP pool, click Cluster management > Network configuration.
Optionally, click Add another directory path to specify additional directories.
7. In the Recursion Depth area, specify how much of the specified directories you want to scan.
● To scan all subdirectories of the specified directories, click Full recursion.
● To scan a limited number of subdirectories of the specified directories, click Limit depth and then specify how many subdirectories you want to scan.
8. Optional: To scan all files regardless of whether OneFS has marked files as having been scanned, or if global settings specify
that certain files should not be scanned, select Enable force run of policy regardless of impact policy.
9. Optional: To modify the default impact policy of the antivirus scans, from the Impact Policy list, select a new impact policy.
10. In the Schedule area, specify whether you want to run the policy according to a schedule or manually.
Scheduled policies can also be run manually at any time.
Related concepts
ICAP Antivirus policy scanning
Enable or disable an antivirus policy
You can temporarily disable antivirus policies if you want to retain the policy but do not want to scan files.
1. Click Data Protection > Antivirus > Policies.
2. In the Antivirus Policies table, in the row for the antivirus policy you want to enable or disable, click More > Enable Policy
or More > Disable Policy.
Related concepts
ICAP Antivirus policy scanning
Scan a file
You can manually scan an individual file for viruses.
This procedure is available only through the command-line interface (CLI).
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi antivirus scan command.
The following command scans the /ifs/data/virus_file file for viruses:
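isi antivirus scan /ifs/data/virus_file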
Managing antivirus threats
You can repair, quarantine, or truncate files in which threats are detected. If you think that a quarantined file is no longer a
threat, you can rescan the file or remove the file from quarantine.
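From the CLI, quarantine state can be managed directly on a file. A sketch, assuming the isi antivirus quarantine and release subcommands (verify with isi antivirus --help):
# Assumed example: quarantine a file, then release it after a rescan shows it is clean
isi antivirus quarantine /ifs/data/virus_file
isi antivirus release /ifs/data/virus_file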
Related concepts
ICAP threat responses
Rescan a file
You can rescan a file for viruses if, for example, you believe that a file is no longer a threat.
This procedure is available only through the command-line interface (CLI).
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi antivirus scan command.
For example, the following command scans /ifs/data/virus_file:
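isi antivirus scan /ifs/data/virus_file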
Related concepts
ICAP threat responses
The following command manually truncates the file /ifs/data/virus_file to zero bytes:
truncate -s 0 /ifs/data/virus_file
Related concepts
ICAP threat responses
View threats
You can view files that have been identified as threats by an ICAP server.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Antivirus Threat Reports table, view potentially infected files.
Related concepts
ICAP threat responses
Related references
Antivirus threat information
Name Displays the name of the detected threat as it is recognized by the ICAP server.
Path Displays the file path of the potentially infected file.
Remediation Indicates how OneFS responded to the file when the threat was detected.
Policy Displays the ID of the antivirus policy that caused the threat to be detected.
Detected Displays the time that the threat was detected.
Actions Displays actions that can be performed on the file.
Related concepts
Antivirus scan reports
ICAP Server Misconfigured, Unreachable or Unresponsive
OneFS is unable to communicate with an ICAP server.
33
File System Explorer
This section contains the following topics:
Topics:
• File System Explorer overview
• Browse the file system
• Create a directory
• Modify file and directory properties
• View file and directory properties
• File and directory properties
Create a directory
You can create a directory in the /ifs directory tree through the File system explorer.
1. Click File System > File system explorer.
2. Navigate to the directory that you want to add the directory to.
3. Click Create Directory.
4. In the Create a directory dialog box, in the Directory name field, enter a name for the directory.
5. In the User field, enter the name of the user, or browse for the user.
6. In the Group field, enter the name of the group the user belongs to or browse for the group.
7. In the Permissions area, assign permissions to the directory.
8. In the File name limits area, from the Restrict name length list, select one of the following options:
● Inherited (255 characters, 255 bytes)
● Restricted (255 characters, 255 bytes)
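For reference, the equivalent operation over SSH uses standard UNIX commands on the /ifs file system; the path, owner, group, and mode below are illustrative placeholders:
# Create the directory, assign its owner and group, and set its permissions
mkdir /ifs/data/projects
chown jsmith:finance /ifs/data/projects
chmod 755 /ifs/data/projects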
Properties
Path Displays the absolute path of the file or directory.
File Size Displays the logical size of the file or directory.
Space Used Displays the physical size of the file or directory.
Last Modified Displays the time that the file or directory was last modified.
Last Accessed Displays the time that the file or directory was last accessed.
UNIX Permissions
User Displays the permissions assigned to the owner of the file or directory.
Group Displays the permissions assigned to the group of the file or directory.
Others Displays the permissions assigned to other users for the file or directory.