StorNext NAS
6-68456-01 Rev A
StorNext NAS Command Line Interface Guide, 6-68456-01 Rev A, June 2016, Product of USA.
Quantum Corporation provides this publication “as is” without warranty of any kind, either express or implied, including
but not limited to the implied warranties of merchantability or fitness for a particular purpose. Quantum Corporation
may revise this publication from time to time without notice.
COPYRIGHT STATEMENT
© 2016 Quantum Corporation. All rights reserved.
Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written
authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law.
TRADEMARK STATEMENT
Artico, Be Certain (and the Q brackets design), DLT, DXi, DXi Accent, DXi V1000, DXi V2000, DXi V4000, GoVault,
Lattus, NDX, the Q logo, the Q Quantum logo, Q-Cloud, Quantum (and the Q brackets design), the Quantum logo,
Quantum Be Certain (and the Q brackets design), Quantum Vision, Scalar, StorageCare, StorNext,
SuperLoader, Symform, the Symform logo (and design), vmPRO, and Xcellis are either registered trademarks or
trademarks of Quantum Corporation and its affiliates in the United States and/or other countries. All other trademarks
are the property of their respective owners.
Products mentioned herein are for identification purposes only and may be registered trademarks or trademarks of their
respective companies. All other brand names or trademarks are the property of their respective owners.
Quantum specifications are subject to change.
Preface vi
Modifying Shares 22
Exporting and Importing Share Information 24
Share Options 26
Deleting Shares 28
Viewing Shares 29
View Active SMB Sessions 30
Enabling or Disabling the Support and Upgrade Shares 31
Chapter 7: Troubleshooting 68
Troubleshooting Tips and FAQs 68
Logging Issues and FAQs 68
System Restart, Restore, and Sync Issues and FAQs 69
Alert Issues and FAQs 70
This manual introduces the Quantum StorNext NAS and contains the following chapters:
l About the StorNext NAS CLI on page 1
l Configure NAS User Authentication on page 7
l Share Management on page 18
l NAS Clusters on page 33
l Backing Up and Restoring on page 55
l System Management on page 59
l Troubleshooting on page 68
Audience
This manual is written for StorNext NAS operators, system administrators, and field service engineers.
Note: It is useful for the audience to have a basic understanding of UNIX® and backup/recovery
systems.
Notational Conventions
This manual uses the following conventions:
l For UNIX and Linux commands, the command prompt is implied. For example, ./DARTinstall is the same as # ./DARTinstall
l File and directory names, menu commands, button names, and window names are shown in bold font. For example, /data/upload
Related Documents
The following Quantum documents are also available for StorNext NAS:
6-68362, StorNext NAS Release Notes: Presents updates, resolved issues, and known issues for the
associated release.
Contacts
For information about contacting Quantum, including Quantum office locations, go to:
http://www.quantum.com/aboutus/contactus/index.aspx
Comments
To provide comments or feedback about this document, or about other Quantum technical publications,
send e-mail to:
[email protected]
l eSupport - Submit online service requests, update contact information, add attachments, and receive
status updates via email. Online Service accounts are free from Quantum. That account can also be used
to access Quantum’s Knowledge Base, a comprehensive repository of product support information. Get
started at:
https://onlineservice.quantum.com
For further assistance, or if training is desired, contact the Quantum Customer Support Center:
Region Support Contact
StorageCare Guardian
StorageCare Guardian securely links Quantum hardware and the diagnostic data from the surrounding
storage ecosystem to Quantum's Global Services Team for faster, more precise root cause diagnosis.
StorageCare Guardian is simple to set up through the internet and provides secure, two-way
communications with Quantum’s Secure Service Center. Learn more at:
http://www.quantum.com/ServiceandSupport/Services/GuardianInformation/Index.aspx
Multi-Protocol
StorNext NAS supports both SMB and NFS file sharing.
l SMB support is for SMB 1 (CIFS) through SMB 3.
l NFS support is for NFS v3.
Flexible User Authentication
StorNext NAS supports several methods of user authentication, making it easy to implement the software
within a networked environment.
See Using auth config with Directory Services on page 8.
NAS Failover
With SMB shares, StorNext NAS failover automatically transfers NAS management services from the
active master node to another node in a NAS cluster in the event that the active master node becomes
unavailable. Through this feature, continuous access to NAS shares is possible because NAS management
services can be run on any node within the NAS cluster.
See NAS Failover on page 34.
G300 Load-Balancing
Load-balancing allows you to group several G300 StorNext NAS Gateways together as a NAS cluster.
Users connect to the master node within the NAS cluster, which then equally distributes connections to the
other nodes within the cluster.
In addition, load-balancing ensures that if one of the StorNext NAS Gateways in the NAS cluster goes offline,
its current connections will be rerouted / reconnected to another StorNext NAS Gateway in the NAS cluster.
See Supported NAS Clusters on page 36.
Licensing
Beginning with StorNext 5.3.0, StorNext NAS is a licensed feature for Artico, Xcellis, G300 Gateways, and
M-Series Metadata Controllers.
l For Artico, the appliances are shipped with StorNext NAS licenses pre-installed.
l For Xcellis, G300 Gateways, and M-Series Metadata Controllers, you must purchase add-on StorNext
NAS licenses, and then install these licenses on each node running the StorNext NAS software.
l You can install StorNext NAS licenses using the StorNext GUI's licensing feature. See the License NAS
in the StorNext GUI topic of the StorNext Connect Documentation Center.
Guide Terminology
This guide uses the following terms to generally refer to Quantum hardware, unless a specific product is
being discussed.
Migration Assistance
If it becomes necessary to migrate a StorNext NAS environment from a metadata controller (MDC)
server to a G300 Gateway environment, you will need to contact Quantum Professional Services for
assistance.
Note: If a specific software configuration is not listed, it is not a supported configuration for StorNext
NAS.
l StorNext 5.3.1 with NAS Apps version 3 or later and StorNext NAS 1.2.3 (not bundled with StorNext 5.3.1) supports:
o Upgrades via the YUM software repository
o Connect NAS Apps version 3
o NAS failover for SMB shares
l StorNext 5.3.1 with NAS Apps version 3 or later and StorNext NAS 1.2.1 or later (bundled with StorNext 5.3.1) supports:
o Connect NAS Apps version 3
o NAS failover for SMB shares
Xcellis
StorNext NAS currently supports the following software configurations on Xcellis:
Note: If a specific software configuration is not listed, it is not a supported configuration for StorNext
NAS.
l StorNext 5.3.1.1 with NAS Apps version 3 or later and StorNext NAS 1.2.3 (not bundled with StorNext 5.3.1.1) supports:
o Upgrades via the YUM software repository
o Connect NAS Apps version 3
o NAS failover for SMB shares
Note: If you are configuring StorNext NAS for Xcellis, we recommend using the StorNext Connect
Manage NAS application. For more information, see www.quantum.com/sncdocs.
CLI Example
ssh [email protected]
sysadmin@quantum's password:
Last login: Tue Jan 27 15:36:00 2015 from eng.acme.com
Welcome to Quantum G300 SN-NAS Console
----------------------------------------
*** Type 'help' for a list of commands.
G300:gateway>
sysadmin Password
StorNext NAS is installed with a special administrator user account, sysadmin. The password for the
sysadmin user account is randomly generated during installation, and you must change it before
performing any administrative steps.
Change the sysadmin Password
1. Log in to the console command line.
2. At the prompt, enter the following:
auth change local password sysadmin
3. At the prompt, enter the new password.
4. At the prompt, re-enter the new password for verification.
CLI Example
auth change local password sysadmin
Please enter the new password
Re-enter the password
Applying local configuration settings ...
Modified password for user sysadmin
Authentication Methods
Use one of the following methods to authenticate users accessing NAS shares:
l A directory service, such as Microsoft Active Directory. See Using auth config with Directory Services
below.
l Local access. See Using auth config with Local Access on page 12.
l A Kerberos keytab that has been prepared for your StorNext NAS Gateway. See Using a Kerberos
Keytab on page 14.
Directory Services
StorNext NAS supports the following directory services.
Microsoft Active Directory (AD)
To perform user-authentication with AD, see Authenticate Users with AD below.
OpenLDAP with Samba Schema (LDAPS)
To perform user-authentication with LDAPS, see Authenticate Users with LDAPS on page 11.
OpenLDAP with Kerberos (LDAP)
To perform user-authentication with LDAP, see Authenticate Users with LDAP on page 11.
[idmap] (Optional) The method used to map UNIX IDs from the AD server for user
accounts.
Valid Entries
l rfc2307 (default)
l rid
l tdb
CLI Example
auth config ads administrator ADS.ACME.COM
Please enter the password for user ADS.ACME.COM\administrator:
Applying ads configuration settings ...
Configured ads directory services authentication
4. Validate your AD configuration by displaying authenticated users. See Viewing User Information on
page 15.
About ID Mapping
An idmap is used to map UNIX IDs from AD user accounts. When configuring your StorNext NAS
Gateway to use AD, you can use the following idmap options.
RFC2307
The RFC2307 idmap uses the AD UNIX Attributes mechanism. This mechanism guarantees
consistency in user IDs (UIDs) and group IDs (GIDs) when connecting to multiple StorNext NAS
Gateways across your environment.
When the auth config ads command is issued, the StorNext NAS Gateway verifies whether
RFC2307 has been configured on the AD server.
l If it has been configured, then the StorNext NAS gateway uses RFC2307 as the default idmap.
l If it has not been configured, then the auth config ads request returns an error stating such.
To set up the RFC2307 extension for AD, see the following Microsoft instructions:
https://technet.microsoft.com/en-us/library/dd764491%28v=ws.10%29.aspx.
RID
The Relative Identifier (RID) idmap converts a Security Identifier (SID) to a RID, using an algorithm
that allows all Quantum appliances to see the same UID.
TDB
The Trivial Database (TDB) idmap tells Samba to generate UIDs and GIDs locally on demand.
TDB is suitable for an environment where only one StorNext NAS Gateway is used. However, if you
have multiple StorNext NAS Gateways and use the TDB idmap, it is possible that the same user could
create files with different UIDs and GIDs if they connected to the share from two different StorNext NAS
Gateways.
Choosing the idmap
Use the following guidelines to determine the appropriate idmap value, depending on whether you have
multiple StorNext NAS Gateways in your environment and whether the RFC2307 extension has been
configured:
l If you do not have multiple StorNext NAS Gateways and the RFC2307 extension has been configured, use rfc2307.
l If you have multiple StorNext NAS Gateways and the RFC2307 extension has not been configured, use rid.
l If you do not have multiple StorNext NAS Gateways and the RFC2307 extension has not been configured, use tdb or rid.
Mapping UIDs and GIDs
To authenticate user connections through AD by mapping a specific UID or GID to an AD user or group,
use the following commands.
Note: The auth map ads user and auth map ads group commands are supported with AD
authentication where the TDB idmap is used.
UID
To map the SID of an AD user to a specific UID, enter the following command:
auth map ads user <username> <UID>
GID
To map the SID of an AD group to a specific GID, enter the following command:
auth map ads group <groupname> <GID>
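CLI Example
The following is an illustrative sketch only; the user name, group name, UID, and GID are hypothetical values, so substitute the AD accounts and IDs from your own environment.
auth map ads user jsmith 10001
auth map ads group engineering 20001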
The parameters are:
<ip-addr|host> IP Address or hostname for the LDAPS server. The port is not required
and will be set to 636.
CLI Example
auth config ldapsam Manager sam.acme.com MYDOMAIN.COM
Please enter the password for user cn=Manager:
Configured ldapsam directory services authentication
<ip-addr|host> IP Address or hostname for the LDAP server. The port is not required and
will be set to 636.
CLI Example
auth config ldap kadmin nod.acme.com ACME.COM OD.ACME.COM
kadmin = Administrator-principal in Kerberos
nod.acme.com = LDAP/Kerberos-server
ACME.COM = LDAP domain
OD.ACME.COM = Kerberos realm
Please enter the password for user kadmin/[email protected]:
CLI Example
auth config local
Applying local configuration settings ...
Successfully configured local authentication
<username> User for whom to allow access to NAS shares on your StorNext NAS Gateway.
[<UID> [<GID>]] (Optional) Specify a UID and GID for the newly created user.
CLI Example
auth add local user joe
Please enter a password for the new user
Re-enter the password
Added user joe
CLI Example
auth delete local user joe
Are you sure you want to delete the user joe (Yes/no)? yes
Deleted user joe
CLI Example
auth change local password joe
Please enter the new password
Re-enter the password
Modified password for user joe
CLI Example
auth import keytab
Imported keytab /var/upgrade/krb5.keytab
4. After the keytab has been imported, enter the following command to configure the keytab as the
authentication credential for the LDAP server:
auth config ldap keytab <ip-addr|host> <ldap_domain> <kerberos_realm>
The parameters are:
<ip-addr|host> IP address or hostname for the LDAP server. The port is not required and
will be set to 636.
CLI Example
auth config ldap keytab nod.acme.com ACME.COM OD.ACME.COM
Configured ldap directory services authentication
CLI Example
auth show user mary
uid=1001(mary) gid=1000(ldapusers)
CLI Example
Type: ldap
Domain: ACME.COM
Url: ldaps://nod.acme.com:636
DC: dc=ACME,dc=COM
CN: kadmin,dc=ACME,dc=COM
Realm: OD.ACME.COM
Resetting a Configuration
To reset the configuration of your StorNext NAS Gateway to local authentication, do the following.
For more information about local authentication, see Using auth config with Local Access on page 12.
CLI Example
auth reset config
Applying local configuration settings ...
Successfully reset configuration for local authentication
Share Management 18
Adding Shares 19
Modifying Shares 22
Exporting and Importing Share Information 24
Share Options 26
Deleting Shares 28
Viewing Shares 29
View Active SMB Sessions 30
Enabling or Disabling the Support and Upgrade Shares 31
Share Management
This section presents information about adding and managing NAS shares on your StorNext NAS
Gateway.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
Topics
Adding Shares below
Add either SMB or NFS shares to your StorNext NAS Gateway.
Modifying Shares on page 22
After adding a share, you can modify its settings, as needed.
Exporting and Importing Share Information on page 24
Export shares to save share configurations for reference. In addition, you can edit an exported file's
share options to add or modify multiple shares.
Share Options on page 26
Include share options when you create, add, or modify a share.
Deleting Shares on page 28
You can remove shares to prevent them from being exported from the StorNext NAS Gateway.
Viewing Shares on page 29
View a list of shares being managed by the StorNext NAS Gateway.
View Active SMB Sessions on page 30
View active SMB sessions to determine current share access and usage.
Enabling or Disabling the Support and Upgrade Shares on page 31
Use the support and upgrade shares to manage your environment and to perform troubleshooting.
Adding Shares
When you add shares to your StorNext NAS Gateway, you must define the following parameters:
l Type of share
l Alias for the share
l Directory in which the share exists
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
Add Shares
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share add <share_type> <share_name> <share_path> [option [, option, …]]
[nfshosts = host1 [, hostN]]
The parameters are:
<share_path> The fully qualified path name of the directory being shared. The input is limited to 128
characters.
[option [, option, …]] (Optional) One or more share options. Each option is limited to 1024 characters.
Example
guest ok = yes
users = john sue mary
In the above example, the first option contains 14 characters and the second option contains 21
characters. Both options are well within the 1024-character limit.
See Share Options on page 26.
[nfshosts = host1 [, hostN]] (NFS shares only) Host(s) allowed access to an NFS share. If no host(s)
is provided, any host may access the NFS share.
When entering options, use the following conventions:
l You must enter nfshosts =.
l List hosts in one of the following ways:
o Host name or IP address
o Wildcards
o IP networks or netgroups
3. Repeat steps 1 and 2 for each share to add to the StorNext NAS Gateway.
CLI Example
share add smb mysmbshare /stornext/snfs1/mysmbshare
Create a Share Using the share create Command
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share create <share_type> <share_name> <share_path> [option [, option, …]]
[nfshosts = host1 [, hostN]]
See Add Shares above for parameter descriptions.
CLI Example
G302:localhost> share create smb myshare /stornext/snfs1/myshare
Share myshare successfully created
Modifying Shares
After adding SMB or NFS shares to your StorNext NAS Gateway, you can modify the share settings. The
same settings accepted by the share add command can be modified with the share change command.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
Modify Shares
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share change <share_type> <share_name> [option [, option, …]] [nfshosts = host1
[, hostN]]
The parameters are:
<share_path> The fully qualified path name of the directory being shared. The input is limited to 128
characters.
[option [, option, …]] (Optional) One or more share options. Each option is limited to 1024 characters.
Example
guest ok = yes
users = john sue mary
In the above example, the first option contains 14 characters and the second option contains 21
characters. Both options are well within the 1024-character limit.
See Share Options on page 26.
[nfshosts = host1 [, hostN]] (NFS shares only) Host(s) allowed access to an NFS share. If no host(s)
is provided, any host may access the NFS share.
When entering options, use the following conventions:
l You must enter nfshosts =.
l List hosts in one of the following ways:
o Host name or IP address
o Wildcards
o IP networks or netgroups
3. Repeat steps 1 and 2 for each share to modify on the StorNext NAS Gateway.
CLI Example Command: Change the create mask and write list options of an SMB
share
share change smb myshare create mask = 600, write list = @smb-rw
CLI Example Command: Change an NFS share to read only and restrict root
privileges to remote root users
share change nfs mynfsshare ro,root_squash
Consideration
To append additional options to the share, you need to enter the share's current options along with
the new options.
CLI Example Command: Place the valid users option in the global section
share change global valid users = @smb-ro
For a list of valid SMB share options, see Share Options on page 26.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
<filename> (Optional) The file to which the configuration is written. If you do not
provide a file name, the configuration is written to /var/smb_shares.config.
CLI Example
share export config
Share configuration written to '/var/smb_shares.config'
Note: Currently, you can use the share export config command for exporting only SMB share
configurations.
<filename> (Optional) The file from which the configuration is imported. If you do not
provide a file name, the configuration is read from /var/smb_shares.config.
CLI Example
share import config
Share configuration '/var/smb_shares.config' successfully imported
Share Options
You can include SMB and NFS share options with the share add, share change, and share create
commands.
CLI Example Command: Specify a write list and admin users for myshare
share add smb myshare /stornext/snfs1/myshare write list = james doris, admin users = sysadmin
Option Conventions
When entering options, use the following conventions:
l Separate multiple options by a comma.
l For options that can have multiple values, separate the values by a space.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
See the smb.conf(5) MAN page for descriptions of the accepted options, along with how best to use them
for your environment.
Default Options
If you do not provide options for SMB shares, default values are set to the following:
l writable = yes
l public = no
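For illustration, the following sketch shows how an SMB default can be overridden by passing the option explicitly when the share is added. The share name and path are hypothetical.
CLI Example
share add smb roshare /stornext/snfs1/roshare writable = no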
Default Options
If you do not provide options for NFS shares, default values are set to the following:
l read write (rw)
l sync
Limiting NFS Share Access with Options
When you add an NFS share or change its options, StorNext NAS allows you to limit access to the NFS
share by hostname, IP Network, or netgroup. The following scenarios present options to limit NFS share
access.
Note: For the following scenarios, the same options are specified for each host.
Scenario 1: Export myshare with the default options of rw and sync without
restricting access to any hosts
Command
share add nfs myshare /stornext/snfs/myshare
Scenario 2: Add an NFS share as read only (ro) and secure for the eng.acme.com
host
Command
share add nfs myshare /stornext/snfs/myshare ro,secure nfshosts = eng.acme.com
Command
share add myshare /stornext/snfs/myshare nfshosts = myhost.acme.com,
myhost2.acme.com,10.20.30.123
Deleting Shares
You can remove SMB and NFS shares from the StorNext NAS configuration, and in turn, prohibit the
shares from being exported from the StorNext NAS Gateway. Removing SMB and NFS shares from the
StorNext NAS configuration does not remove the share directory or any of its files.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
Delete a Share
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share delete <share_type> <share_name>
The parameters are:
<share_type> The type of share, either nfs or smb.
<share_name> The name of the share to delete.
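CLI Example
A representative invocation, assuming an existing SMB share named myshare:
share delete smb myshare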
Viewing Shares
You can view a list of shares being managed by the StorNext NAS Gateway. The list includes the number of
shares, share name, share type, path, options, and host(s) associated with the share. You can also show
information for a defined group of shares.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
Gateway. Any edits made directly to either of these files will be lost when the StorNext NAS Gateway is
restarted, or when changes are made using any of the share commands.
Show Shares
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share show [<share_name>] [<share_type>] [paged]
The parameters are:
<share_name> (Optional) The name of the share(s) for which to list information.
<share_type> (Optional) The type of share, either nfs or smb, for which to list
information.
CLI Example
share show
4 shares:
1: nfs100 | nfs | /stornext/snfs/share100 | ro,sync |
myhost1.acme.com,10.20.30.37
2: myshare2 | nfs | /stornext/snfs/share2 | rw,sync |
myhost2.acme.com,10.20.30.35
3: myshare1 | nfs | /stornext/snfs/share1 | ro,wdelay,root_squash |
*.acme.com
4: myx | nfs | /stornext/snfs/myx | ro,async | *
CLI Example
x86_64:td-centos6sp7-gw1> system show smb
1 connections:
PID | User | Group | Machine
---------------------------------------------
1: 1:12013 | mtester | marcom-print-group | 10.65.167.5
1 services:
Service | PID | Machine | Connected at
---------------------------------------------------------------------------
1: smb1 | 1:12013 | 10.65.167.5 | Wed Apr 13 13:15:18 2016
Note: Only the support and upgrade shares can be enabled or disabled using the following
commands. For all user-defined shares, use the share add or share delete command.
NAS Clusters 33
NAS Failover 34
Supported NAS Clusters 36
Enabling NAS Clusters 43
Joining NAS Clusters 44
Removing Nodes from NAS Clusters 46
Disabling Nodes in NAS Clusters 47
Viewing NAS Cluster Information 47
Setting a Virtual IP Address for a NAS Cluster 48
NAS Cluster Command Scenarios 49
NAS Clusters
A StorNext NAS cluster provides users the ability to access NAS shares located on any of the NAS cluster's
nodes regardless of the physical location of the node.
You can take advantage of additional features built into StorNext NAS, such as NAS failover, which ensures
that users can always access NAS shares, or G300 load-balancing, which maintains a desirable level of
network response time.
Resource Topics
Review the following topics to access additional information about StorNext NAS features, supported
configurations, and configuration scenarios:
NAS Failover below
Supported NAS Clusters on page 36
NAS Cluster Command Scenarios on page 49
NAS Failover
StorNext NAS failover automatically transfers NAS management services from the active master node to
another node in a NAS cluster in the event that the active master node becomes unavailable. Through this
feature, continuous access to NAS shares is possible because NAS management services can be run on
any node within the NAS cluster.
Note: StorNext NAS failover supports SMB shares only. For environments exporting NFS shares,
users connect to the shares through the master StorNext NAS Gateway IP address.
Failover Pathways
When you configure NAS clusters, the first node added to the cluster becomes the preferred master node.
Beginning with StorNext NAS 1.2.3, duties of a master node fail over to the next available node in the NAS
cluster. If this new active master node fails, StorNext NAS does one of the following depending on the
NAS cluster:
NAS cluster with 2 Xcellis, Artico, or MDC nodes
In an Xcellis, Artico, or MDC NAS cluster, StorNext NAS uses an active/standby failover arrangement, in
which the master node actively runs NAS services and the other node is on standby ready to take over NAS
services, as needed.
NAS cluster with up to 8 G300 nodes
In a G300 NAS cluster, StorNext NAS transfers services to the next available node. Keep in mind that
StorNext NAS does not automatically fail services back to the original master node. If you want NAS
management services to be returned to the original master node, you must manually do so by reconfiguring
the NAS cluster. See Enabling NAS Clusters on page 43.
NAS VIP
To configure NAS failover, you must create a NAS virtual IP (VIP) address for the NAS cluster, and then
assign this NAS VIP to each node within the NAS cluster. If failover occurs, users can still access the NAS
shares by connecting through the NAS VIP, or the virtual host name associated with the NAS VIP. See
Setting a Virtual IP Address for a NAS Cluster on page 48.
Important
Keep the following in mind when creating a NAS VIP:
l The VIP set for the NAS cluster is NOT the same as the VIP used for the StorNext MDC network.
l You must configure the NAS cluster's VIP on the LAN client network, using the same network and
subnet in which the nodes exist. The NAS cluster and NAS clients should also be configured for and
use the same LAN client network and subnet.
Failover Behavior
When a failover occurs, it affects the StorNext NAS console command line, clients, and clusters as
follows.
Command Line Console Behavior
If a failover occurs, it will terminate your command line session. You must log into your command line
console again.
Client Behavior
Failover notification is OS or client dependent. If one of the nodes in a NAS cluster fails, users connected
to the node might experience a momentary interruption to services. This interruption can range from a
pause communicating with the remote share to a user needing to reenter authentication credentials to
access data residing on the NAS share.
Cluster Behavior
l If a node that is not the master fails, any user connections being serviced by the failed node are
redistributed to other nodes within the NAS cluster. In most cases, this transfer is completely
transparent to users.
Important
To enable NAS failover within these NAS clusters, a single NAS VIP must be assigned to the cluster.
See NAS Failover on page 34.
NAS Failover Components
For StorNext NAS clusters configured from Xcellis, Artico, or M-Series MDC systems, NAS failover
functions as follows:
l NAS Failover is automatic
l StorNext NAS software runs on both nodes, supporting an active/passive NAS failover configuration
if a NAS VIP has been configured for the cluster
l StorNext NAS services are active on one node at a time, with the master node (Node 2) being the
preferred active node
l Failback to the preferred master node is not automatic.
Xcellis Two-Node NAS Cluster with Failover
NAS Failover Workflow
1. If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services over to the passive node within the NAS cluster.
2. When the preferred master node becomes available, failback is not automatic. To guarantee that NAS
management services are transferred back to the preferred master node, you must manually reset the
master node.
Artico Two-Node NAS Cluster with Failover
NAS Failover Workflow
1. If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services over to the passive node within the NAS cluster.
2. When the preferred master node becomes available, failback is not automatic. To guarantee that NAS
management services are transferred back to the preferred master node, you must manually reset the
master node.
M-Series MDC Two-Node NAS Cluster with Failover
NAS Failover Workflow
1. If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services over to the passive node within the NAS cluster.
2. When the preferred master node becomes available, failback is not automatic. To guarantee that NAS
management services are transferred back to the preferred master node, you must manually reset the
master node.
Additional Information
l You must purchase the StorNext NAS license separately for each Xcellis or M-Series MDC system.
l A StorNext NAS license is included and pre-installed on every Artico at the factory.
l For help with configuring NAS clusters, see NAS Clusters on page 33.
l For an example scenario in configuring an MDC NAS cluster with NAS failover enabled, see NAS
Cluster Command Scenarios on page 49.
Important
The Load-Balancing feature is supported only on a NAS cluster of G300 systems, and only for SMB
shares.
StorNext NAS load-balancing allows you to group up to 8 G300s together as a NAS cluster. Users connect
to the preferred master node within the cluster, which then distributes connections to the other nodes within
the cluster.
Automatic Load-Balancing Workflow
Load-balancing ensures that connections are distributed to nodes within the cluster using a "least
number of connections" algorithm. In addition, if one of the nodes in the NAS cluster goes offline, its
current connections will be rerouted / reconnected to another node in the NAS cluster.
You can also configure a load-balanced NAS cluster for NAS failover. This configuration ensures that a
master node is always available to act as the load-balancer to distribute connections appropriately.
Important
To enable NAS failover within these NAS clusters, a single NAS VIP must be assigned to the cluster.
See NAS Failover on page 34.
Before creating a NAS cluster of G300s, you need to determine which G300s will be part of the NAS cluster.
Any G300 running StorNext NAS software can be included as a node in the NAS cluster. Note the IP
address for each G300 to be included in the NAS cluster, as you will need them to assign the G300 node to
the NAS cluster.
Caution: IP addresses must be ‘static’ for the G300s. Assigning IP addresses via DHCP may cause
unpredictable behavior.
G300 Multi-Node Load-Balancing NAS Cluster with Failover
Scenario A: Load-Balancing without NAS Failover Workflow
l If a node that is not the active master node becomes unavailable, the master node redistributes all
connections from this failed node to other nodes within the cluster.
Scenario B: Load Balancing with NAS Failover Workflow
l When the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services, including load-balancing operations, over to the next available node within the NAS cluster. The
next available node can be either the original preferred master node or another node within the NAS
cluster.
l To guarantee that NAS management services are transferred back to the preferred master node, you
must manually reconfigure the NAS cluster. Failback to the preferred master node is not automatic.
Additional Information
l You must purchase the StorNext NAS license separately for each G300.
l For help with configuring NAS clusters, see NAS Clusters on page 33.
l For an example scenario in configuring a G300 load-balancing NAS cluster, see NAS Cluster
Command Scenarios on page 49.
Important
A G300 Load-Balancing NAS cluster cannot export shares from an Artico system.
Configuration Components
For this type of configuration, the following applies:
l The NAS cluster consists of G300 nodes only.
l The Xcellis or M-Series MDC systems host the StorNext file system.
l StorNext NAS software is enabled on both nodes of the Xcellis or M-Series MDC system.
l The master G300 node communicates with the Xcellis or M-Series MDC system to export shares for
users.
l NAS failover is not available between the G300 and Xcellis or M-Series MDC systems.
l Load-balancing is not available between the G300 and Xcellis or M-Series MDC systems.
G300 Load-Balancing NAS Cluster with Xcellis System
NAS Export Workflow
l The active master node of the G300 NAS Cluster exports shares from the StorNext file system located on
the Xcellis nodes.
l Xcellis nodes are not available for NAS failover or load-balancing operations.
Additional Information
l You must purchase the StorNext NAS license separately for each G300, and each Xcellis or M-Series
MDC system.
l For help with configuring NAS clusters, see NAS Clusters on page 33.
l For an example scenario in configuring a G300 load-balancing NAS cluster, see NAS Cluster
Command Scenarios on page 49.
CLI Example
nascluster enable master 10.65.188.89 /stornext/snfs1
Verifying NAS cluster configuration for 10.65.188.89 ...
NAS cluster enable master node 10.65.188.89 starting...
Updating system NAS cluster configuration ...
Check for master takeover ...
Publish master configuration ...
Setting master local auth config ...
Applying local configuration settings ...
Master node successfully enabled for NAS cluster using 10.65.188.89
CLI Example
nascluster enable node 10.65.188.91
Verifying NAS cluster configuration for 10.65.188.91 ...
Node 10.65.188.91 successfully enabled for NAS cluster
Note: Beginning with NAS 1.2.1, all joining can be done from the master node.
Considerations
Review the following considerations before joining nodes to NAS clusters.
Sequence for Issuing Commands
l You must first enable nodes for NAS clustering before you can join them in NAS clusters. See
Enabling NAS Clusters on the previous page.
l When configuring a NAS cluster for failover, you must also set a NAS VIP for the cluster before
joining nodes to the cluster. See Setting a Virtual IP Address for a NAS Cluster on page 48.
Cluster Synchronization
When joining a node to a NAS cluster, the node is synchronized with the master node. This
synchronization ensures that all nodes in the cluster are using the same authentication scheme, and that
they all have the same shares configured.
Keep in mind that the StorNext NAS software synchronizes authentication and share configuration
information between all nodes in a NAS cluster. Changes made to a single StorNext NAS Gateway will
not be synchronized between nodes unless you use the auth config or share commands. After you
have created a NAS cluster, you can only execute the auth config or share commands from the master
node.
With NAS failover configured, if you connect to the NAS cluster through the NAS VIP, you will ensure
that you are always connected to the master node.
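CLI Example
As a sketch only, using the file system path and node IP address that appear in the nascluster show example later in this chapter, joining a node from the master node might look like the following:
nascluster join /stornext/snfs1 10.65.188.91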
CLI Example
nascluster leave 10.1.1.1
NAS cluster leave applying settings ...
Updating system NAS cluster configuration ...
Disable a Node
1. Log in to the console command line from the master node. See Access the Console Command Line on
page 5.
2. At the prompt, enter the following:
nascluster disable node <ip-addr|host>
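CLI Example
A representative invocation; the node address shown is illustrative and matches the node that appears as Disabled in the nascluster show output below:
nascluster disable node 10.65.188.91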
CLI Example Output from Master Node: NAS Cluster of G300 Appliances
Configured for HA
nascluster show
NAS Cluster IP: 10.65.188.89/eth0, Master: Yes, SNFS Root: /stornext/snfs1,
Joined: Yes
Load balancing: leastconn
Master IP: 10.65.188.89
VIP: 10.65.166.179 active
Nodes: 3
1: 10.65.188.89 (Joined)
2: 10.65.188.91 (Disabled)
3: 10.65.188.96 (Not-Ready)
Node States
The following table presents the different states of nodes.
State Description
Not-Ready A node has been enabled for the NAS cluster, but has not been joined to the cluster.
Disabled For master nodes, the node has not been joined to the cluster.
For non-master nodes, the node has been removed from the cluster.
Requirements
Before setting a NAS VIP, review the following information.
Important
l The VIP set for the NAS cluster is NOT the same as the VIP used for the StorNext MDC network. Make
sure the NAS VIP host name used for the NAS cluster is NOT the same host name used for the HA pair.
l You must configure the NAS cluster's VIP over the LAN client network, using the same network and
subnet in which the nodes exist. The NAS cluster and NAS clients should also be configured for and
using this same network and subnet.
l You must configure the StorNext file system to enable global file locking before creating and assigning
a NAS VIP for the NAS cluster. See the snfs_config MAN page for more information on the
fileLocks parameter.
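CLI Example
A minimal sketch, issued from the master node once global file locking is enabled; the VIP address shown is illustrative and matches the value used in the nascluster show example earlier in this chapter:
nascluster set virtual ipaddr 10.65.166.179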
Prerequisites and Scenario Assumptions
Review the following information to better understand this scenario.
Prerequisites
l Configure the StorNext file system to enable global file locking before setting a NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l Set a NAS VIP to enable load-balancing and failover for the NAS cluster. See Setting a Virtual
IP Address for a NAS Cluster on page 48.
l Obtain and install valid StorNext NAS licenses for each G300 to include in the NAS cluster.
l Configure each G300 within the NAS cluster to access the StorNext file system.
l Configure all G300s within the NAS cluster with the same NTP configuration. See Configuring NTP
on page 66.
Assumptions
l StorNext file system: stornext/snfs.
l StorNext NAS Gateways within the NAS cluster:
o gw01: 10.20.4.35
o gw02: 10.20.4.36
o gw03: 10.20.4.37
l NAS VIP: 10.30.5.200
l DNS name for the NAS VIP: eng-nas-cluster.acme.com
l Master node: gw01
Steps
1. Access the StorNext NAS console command line from gw01.
2. Issue the following commands:
nascluster enable master 10.20.4.35 /stornext/snfs1
nascluster enable node 10.20.4.36
nascluster enable node 10.20.4.37
nascluster set virtual ipaddr 10.30.5.200
nascluster join /stornext/snfs1
3. Issue the following command to add gw02 to the NAS cluster:
nascluster join /stornext/snfs1 10.20.4.36
4. Issue the following command to add gw03 to the NAS cluster:
nascluster join /stornext/snfs1 10.20.4.37
Result and Next Steps
The three G300 StorNext NAS Gateways are part of a NAS cluster that has been configured for load-
balancing and failover.
Proceed with configuring your NAS cluster and adding shares. After this NAS cluster has been
configured, users can access the shares using eng-nas-cluster.acme.com, which is the DNS
name associated with the NAS VIP.
Prerequisites and Scenario Assumptions
Review the following information to better understand this scenario.
Prerequisites
l Configure the StorNext file system to enable global file locking before setting a NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l Set a NAS VIP to enable NAS failover for the NAS cluster. See Setting a Virtual IP Address for a
NAS Cluster on page 48.
Assumptions
l Node 1 IP address: 10.60.4.35
l Node 2 IP address: 10.60.4.36
l NAS VIP: 10.60.4.200
l DNS name for the NAS VIP: archive-cluster.acme.com
l Master node: Node 2
Steps
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following commands:
nascluster enable master 10.60.4.35 /stornext/snfs1
nascluster enable node 10.60.4.36
nascluster set virtual ipaddr 10.60.4.200
nascluster join /stornext/snfs1
3. Issue the following command to join Node 1 to the NAS cluster and enable it for NAS failover:
nascluster join /stornext/snfs1 10.60.4.36
Result and Next Steps
The NAS cluster is configured for NAS failover.
l Proceed with configuring your NAS cluster and adding shares. After this NAS cluster has been
configured, users can access the shares using archive-cluster.acme.com, which is the DNS
name associated with the NAS VIP.
l Log into the master node to configure the NAS cluster for an authentication scheme.
Keep in mind that you do not need to configure the other node in the NAS cluster. Instead, this
configuration setting is automatically synchronized to it. As you add, modify, or delete shares,
these changes are also synchronized to the other node in the NAS cluster. Remember that all
share changes must be issued from the master node.
In addition, due to the NAS failover capabilities of a NAS cluster, the master can change. We
recommend connecting to the NAS VIP to ensure that you are always connected to the active
master node.
Scenario Assumptions
l System: Artico
l Node 2 IP address: 11.11.11.118
l Node 1 IP address: 11.11.11.116
l NAS VIP: 11.11.11.119
l Master node: Node 2
Steps: Create an Artico NAS Cluster
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to enable Node 2 as the master:
nascluster enable master 11.11.11.118 /stornext/artico
3. Access the StorNext NAS console command line from Node 1.
4. Issue the following command to enable Node 1 for the NAS cluster:
nascluster enable node 11.11.11.116
5. From Node 2, issue the following command to set the NAS VIP:
nascluster set virtual ipaddr 11.11.11.119
Steps: Remove an Artico NAS Cluster
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to remove Node 1 from the NAS cluster:
nascluster leave 11.11.11.116
3. Issue the following command to disable Node 1:
nascluster disable node 11.11.11.116
4. Issue the following command to remove Node 2 from the NAS cluster:
nascluster leave
5. Issue the following command to disable Node 2:
nascluster disable node 11.11.11.118
6. Issue the following command from both Nodes 1 and 2 to verify the NAS cluster has been removed:
nascluster show
Scenario Assumptions
l System: 2-Node MDC
l Node 2 IP address: 10.10.10.2
l Node 1 IP address: 10.10.10.1
l NAS VIP: 10.20.20.1
l Master node: Node 2
Steps
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to remove Node 1 from the NAS cluster:
nascluster leave 10.10.10.1
3. Issue the following command to disable Node 1:
nascluster disable node 10.10.10.1
4. Issue the following command to remove Node 2 from the NAS cluster:
nascluster leave
Result
The NAS cluster is now disabled, and the NAS VIP has been removed from the NAS cluster.
5. From Node 2, issue the following commands to recreate the NAS cluster:
nascluster enable node 10.10.10.2
nascluster set virtual ipaddr 10.20.20.1
nascluster join /stornext/snfs1
6. Issue the following command to join Node 1 to the NAS cluster:
nascluster join /stornext/snfs1 10.10.10.1
7. Issue a new NAS VIP. See Setting a Virtual IP Address for a NAS Cluster on page 48.
System Backup
When the first NAS share is added to your system, a small StorNext NAS configuration file is placed in the
root directory of the StorNext file system. The system backup command protects this configuration file.
If you have a managed file system, your configuration file is stored in the .ADIC_INTERNAL_BACKUP/snnas
directory of your StorNext file system. Placing backup configuration files on a managed file system ensures
that redundant copies of the configuration are protected.
By default, the system backup command automatically runs once a day. You can manually back up the
StorNext NAS configuration file at any time, and we recommend running a manual backup after the file has
been modified. For detailed steps, see Performing a Manual System Backup on the next page.
System Restore
If you need to restore the StorNext NAS configuration file from a previous backup, run the system
restore command. The most recent backup file will be in the /var directory. If you do not want to restore
from the most recent configuration file, make sure that the file from which to restore system configuration is
in the /var directory. For detailed steps, see Performing a System Restore on the next page.
CLI Example
> system backup
Creating configuration backup /var/snnas-db-package.tar.bz2.enc (79 KB)
Finished configuration backup /var/snnas-db-package.tar.bz2.enc (79 KB)
Saved configuration backup package /var/snnas-db-package.tar.bz2.enc to
/stornext/snfs1/.StorNext/.snnas/snnas-db-package.tar.bz2.enc.ceb028ca81a211e587c7ecf4bbdc1708
Backup of configuration successful
Turn Off the NO_STORE Flag
1. Log in to the console command line from the MDC node. See Access the Console Command Line
on page 5.
2. At the prompt, enter the following:
/usr/adic/TSM/util/dm_util -d no_store <managed_file_system>/.ADIC_INTERNAL_BACKUP/snnas
Important
Always execute the dm_util command. Otherwise, the StorNext NAS configuration file for your
StorNext NAS Gateway may not be copied to tiers managed by File Manager.
Important
When performing a system restore for a NAS cluster, you will need to remove all non-master nodes from
the cluster first. After the system restore completes, rejoin the non-master nodes to the NAS cluster. This
workflow ensures that all nodes within the NAS cluster are synchronized with the master node. See
System Restart, Restore, and Sync Issues and FAQs on page 69.
CLI Example
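At its simplest, the restore is run from the master node with no arguments; the output varies by release and by which backup package is present in /var, so only the command is shown here:
> system restore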
System Management 59
Accessing the Software Version 60
Performing System Installations and Upgrades 61
Restarting StorNext NAS 62
Working with Support Logs 63
Viewing Logs 64
Files Managed by StorNext NAS 65
Configuring NTP 66
System Management
This section provides the following help in managing your StorNext NAS system.
Topics
Accessing the Software Version on the next page
To access the version of your StorNext NAS software, view it from the console command line.
CLI Example
> system show version
Quantum G302 SN-NAS 5.2.2-15662 1.1.0-4473
Important
Before you install StorNext NAS 1.2.3, you must first perform one last manual update using the
SNFS NAS Repo Upgrade RPM. This manual update accomplishes the following:
l Properly configures your StorNext NAS software to access the external YUM repository in which the
StorNext NAS 1.2.3 RPM is stored.
l Imports a public key required to install the latest StorNext NAS RPM.
Beginning with StorNext NAS 1.2.3, all upgrade packages are signed, meaning that the public key —
imported to your system from the SNFS NAS Repo Upgrade RPM — is required to install the
software upgrade. You need to import this public key regardless of whether the upgrade packages are
installed directly from the external YUM repository or from a local /var/upgrade directory.
After running the SNFS NAS Repo Upgrade RPM, you will no longer need to manually download the
latest StorNext NAS RPMs. Instead your StorNext NAS software will be able to directly pull the latest
version at your command.
1. Download the applicable SNFS NAS Repo Upgrade RPM to the /var/upgrade directory:
CentOS6
quantum-snfs-nas-repo-upgrade-1.2.3-5181.el6.x86_64.rpm
CentOS7
quantum-snfs-nas-repo-upgrade-1.2.3-5181.el7.centos.x86_64.rpm
2. From the console command line, run the following command to point your StorNext NAS software to
the external YUM repository and to import the public key required to install the StorNext NAS 1.2.3
RPM:
system upgrade local
Note: Access to NAS shares may be briefly interrupted during the upgrade process.
Perform a Local Upgrade to StorNext NAS 1.2.3
1. Complete Step 1 above.
2. Download the applicable StorNext NAS 1.2.3 RPM to the /var/upgrade directory.
CentOS6
quantum-snfs-nas-1.2.3-5165.el6.x86_64.rpm
CentOS7
quantum-snfs-nas-1.2.3-5165.el7.centos.x86_64.rpm
3. From the console command line, run the following command to upgrade your StorNext NAS software
to version 1.2.3:
system upgrade local
CLI Example
system restart services
Restarting snnas_controller
CLI Example
system restart services all
Stopping all services . . .
Shutting down SAMBA winbindd : [ OK ]
smbd stop/waiting
console stop/waiting
snnas_controller stop/waiting
Starting all services . . .
snnas_controller start/running, process 22427
initctl: Job is already running: console
smbd start/running, process 22536
Starting SAMBA winbindd : [ OK ]
Note: Depending on your configuration, different services may be restarted while they are
running.
CLI Example
supportbundle create
Gathering support logs...
Finished support package creation /var/snnas_auto_support.sh.tar.bz2 (155 KB)
Done.
CLI Example
supportbundle send [email protected]
Gathering support logs...
Finished support package creation /var/snnas_auto_support.sh.tar.bz2 (155 KB)
Emailing support package...
Support package sent successfully.
Note: The supportbundle send command gathers the same logs and files as the
supportbundle create command.
Viewing Logs
You can view system logs stored in the /var/log directory on the StorNext NAS Gateway, or you can
monitor a specific system log for new updates.
CLI Example
log view snnas_controller
This example allows you to view the snnas_controller log file.
CLI Example
log watch snnas_controller
This example allows you to watch the snnas_controller log file.
Important
If you change any of the following files manually, modifications or edits to these files may be lost when
you restart the StorNext NAS Gateway.
/etc/default/nfs
/etc/exports
/etc/krb5.conf
/etc/nslcd.conf
/etc/nsswitch.conf
/etc/ntp.conf
/etc/openldap/ldap.conf
/etc/pam.d/login
/etc/pam.d/password-auth
/etc/samba/smb.conf
/etc/ssh/sshd_config
/etc/sssd/sssd.conf
/etc/sysconfig/ctdb
/etc/sysconfig/nfs
Configuring NTP
You can configure your StorNext NAS Gateway to use a Network Time Protocol (NTP) server to control its
internal clock.
2. Use the following commands to configure NTP for your StorNext NAS Gateway:
l Add an NTP server to the host list: ntp add <ip-addr|host>
l Delete an NTP server from the host list: ntp del <ip-addr|host>
l Synchronize the StorNext NAS Gateway with the NTP server: ntp sync
CLI Examples
gw01> ntp add 10.56.261.10
NTP host list: 0.us.pool.ntp.org, 1.us.pool.ntp.org, 2.us.pool.ntp.org,
3.us.pool.ntp.org, 10.56.261.10
gw01> ntp add 10.56.261.11
NTP host list: 0.us.pool.ntp.org, 1.us.pool.ntp.org, 2.us.pool.ntp.org,
3.us.pool.ntp.org, 10.56.261.10, 10.56.261.11
gw01> ntp del 0.us.pool.ntp.org
NTP host list: 1.us.pool.ntp.org, 2.us.pool.ntp.org, 3.us.pool.ntp.org,
10.56.261.10, 10.56.261.11
gw01> ntp del 1.us.pool.ntp.org
NTP host list: 2.us.pool.ntp.org, 3.us.pool.ntp.org, 10.56.261.10,
10.56.261.11
gw01> ntp del 2.us.pool.ntp.org
NTP host list: 3.us.pool.ntp.org, 10.56.261.10, 10.56.261.11
gw01> ntp del 3.us.pool.ntp.org
NTP host list: 10.56.261.10, 10.56.261.11
gw01> ntp sync
Sync system time to 10.56.261.10.
Restore the NTP daemon.
Topics
Logging Issues and FAQs below
System Restart, Restore, and Sync Issues and FAQs on the next page
Alert Issues and FAQs on page 70
Add the log level option
1. Log in to the console command line. See Access the Console Command Line on page 5.
2. At the prompt, enter the following:
share change global log level = 3
3. When you are finished troubleshooting the issue, return the logging level back to the default by issuing
the following command:
share change global log level = 0
Note: In this scenario, you do not need to disable the non-master nodes because you will be
rejoining them to the master node after the system restore is performed.
2. From the master node, perform the system restore. See Performing a System Restore on page 57.
3. For each non-master node within the cluster, run the nascluster join command to rejoin and sync
the nodes to the master node. See Joining NAS Clusters on page 44.
Note: In this scenario, you do not need to re-enable the non-master nodes as they should already
be enabled.
Why Am I Receiving a user 'administrator' not found Alert When I Issue the
share create Command?
When you issue the share create command, StorNext NAS creates a directory and assigns ownership of
the directory to the administrator user. However, if the administrator user has a UID of 0, the SMB server
denies root access because an administrator account with a UID of 0 is an invalid account.
You will receive the user 'administrator' not found (E-5060) alert if all 3 of the following settings
have been configured and you issue the share create command:
l You have configured your StorNext NAS Gateway to authenticate user access to NAS shares with
Microsoft AD.
l You specified an ID map of RFC2307.
l The administrator user for your Microsoft AD server has a UID of 0 (zero).
Resolution
Issue the auth config ads command, specifying TDB or RID as the ID Map. See About ID Mapping on
page 9.
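CLI Example
This sketch assumes the optional idmap value is supplied as the final argument to auth config ads; the [idmap] parameter is described earlier in this guide, but confirm the exact argument position with the CLI help on your system before using it.
auth config ads administrator ADS.ACME.COM rid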
Resolution
1. Verify you have entered the user name correctly.
2. Verify the user is an administrator or has administrative privileges. See Using auth config with Directory
Services on page 8 and Viewing User Information on page 15.
Resolution
1. Verify that you have entered a valid password for the administrator user.
2. Retry the command. See Using auth config with Directory Services on page 8.
Resolution
Issue the auth config ads command, specifying RID as the ID Map. See About ID Mapping on page 9.