NAS Platform v13.4 File Service Administration Guide MK-92HNAS006-16
March 2018
© 2011, 2018 Hitachi, Ltd. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and
recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or
Hitachi (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in
utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other
copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for
information about feature and product availability, or contact Hitachi at https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of
Hitachi products is governed by the terms of your agreements with Hitachi.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other
individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including
the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader
agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000,
S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business
Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS,
Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo,
Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks
of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Contents
File Services Administration Guide for Hitachi NAS Platform 3
Tree clone commands.................................................................................. 39
Deleting a tree directory with tree-delete........................................................... 40
Important considerations when using tree-delete.........................................40
Unmounting a file system and tree-delete.............................................. 40
Undeletable directories........................................................................... 41
Using tree-delete.......................................................................................... 41
Submitting a tree-delete job.................................................................... 41
Troubleshooting tree-delete..........................................................................41
Controlling file system space usage.................................................................. 42
File system utilization recommendations......................................................43
Archive file systems......................................................................................43
High activity file systems.............................................................................. 43
Dynamic Superblocks (DSB)........................................................................ 44
Increasing the size of a file system.............................................................. 45
Thin provisioning file systems.......................................................................45
Managing file system expansion.................................................................. 47
Enabling and disabling file system auto-expansion......................................49
Expanding a file system.......................................................................... 49
Moving a file system.....................................................................................51
File system relocation............................................................................. 52
Using system lock on file systems.......................................................... 54
Enabling and disabling system lock for a file system.............................. 54
Recovering a file system.............................................................................. 55
Restoring a file system from a checkpoint ............................................. 56
File system recovery from a snapshot ................................................... 57
Automatic file system recovery .............................................................. 58
Using deduplication file system.................................................................... 58
Determining sufficient space for dedupe conversion.............................. 58
Preparing for dedupe conversion............................................................59
Viewing the deduplication file system page............................................ 60
Enabling dedupe for a file system...........................................................62
Converting a file system to enable dedupe............................................. 62
Managing file system quotas ............................................................................ 63
Managing usage quotas .............................................................................. 63
Setting user and group file system quota defaults....................................... 66
Adding a quota............................................................................................. 69
Modifying a file system quota....................................................................... 71
Deleting a file system quota......................................................................... 73
Managing quotas on virtual volumes................................................................. 73
Advertising NFS exports for Virtual Volumes................................................74
Viewing and modifying virtual volume quotas...............................................75
Setting user/group defaults.......................................................................... 77
Exporting quotas for a specific virtual volume.............................................. 78
Managing virtual volumes.................................................................................. 79
Viewing virtual volumes................................................................................ 80
Adding a virtual volume................................................................................ 81
Modifying a virtual volume............................................................................83
Deleting a virtual volume ............................................................................. 86
Enabling and disabling file system capacity and free space reporting based on virtual volume quotas.............................................86
Using the per-file system throttle feature........................................................... 87
Creating a read cache file system..................................................................... 88
Symbolic links......................................................................................... 99
Mixed mode operation and LDAP servers............................................ 102
Mandatory and advisory byte-range file locks in mixed mode.............. 103
Opportunistic locks (oplocks)................................................................ 104
Exclusive and batch oplocks................................................................. 105
Level II oplocks..................................................................................... 106
User and group names in NFSv4..........................................................106
Configuring user and group mappings..................................................106
Managing NFS user and group mapping.............................................. 107
About importing user or group mappings from a file or an NIS LDAP server.................................................................... 114
File system auditing......................................................................................... 117
About file system audit logs........................................................................ 119
Controlling file system auditing................................................................... 119
Creating a file system audit policy.........................................................119
Configuring auditing on the Windows client.......................................... 122
Displaying file system audit logs........................................................... 128
Modifying NFS Export Details.....................................................................145
Deleting an NFS export.............................................................................. 147
Backing up or restoring NFS exports......................................................... 147
About the rquotad service................................................................................148
Restrictive mode operation.........................................................................149
Matching mode operation...........................................................................150
Using home directories with cluster EVS name spaces............................. 186
Offline file access modes............................................................................186
Backing up and restoring SMB shares....................................................... 186
Considerations when using Hyper-V............................................................... 188
Configuring the Service Witness Protocol....................................................... 188
Configuring a witness EVS.........................................................................189
Using Windows server management............................................................... 190
Using the computer management tool....................................................... 190
Restoring a previous version of a file...............................................................191
Taking snapshots of logical units................................................................206
Volume full conditions.................................................................................207
Managing iSCSI logical units........................................................................... 207
Viewing the properties of iSCSI logical units.............................................. 207
Adding iSCSI logical units.......................................................................... 209
Modifying an iSCSI logical unit...................................................................212
Deleting an iSCSI logical unit..................................................................... 213
Backing up iSCSI logical units....................................................................214
Restoring iSCSI logical units...................................................................... 214
Setting up iSCSI targets............................................................................. 215
Viewing the properties of iSCSI targets......................................................215
Adding iSCSI targets.................................................................................. 216
Adding a logical unit to an iSCSI target......................................................219
Modifying the properties of an iSCSI target................................................221
Deleting an iSCSI target.............................................................................223
Configuring iSCSI security (mutual authentication)......................................... 223
Configuring the storage server for mutual authentication...........................224
Changing the storage server’s mutual authentication configuration .... 225
Configuring the Microsoft iSCSI initiator for mutual authentication............ 227
Accessing iSCSI storage ................................................................................ 228
Using iSNS to find iSCSI targets ............................................................... 228
Using target portals to find iSCSI targets................................................... 229
Accessing available iSCSI targets .............................................................229
Verifying an active connection....................................................................230
Terminating an active connection............................................................... 230
Using Computer Manager to configure iSCSI storage.....................................231
Preface
This guide explains file system formats and provides information about creating
and managing file systems, and about enabling and configuring file services (file
service protocols). Note that some features apply only to individual platforms and
may not be applicable to your configuration.
Virtual Storage Platform G400, G600, G800 and Virtual Storage Platform F400, F600, F800
storage systems can be configured with NAS modules to deliver native NAS functionality
in a unified storage platform. The unified VSP Gx00 models and VSP Fx00 models
automatically form a two-node cluster in a single chassis upon installation, with no
external cabling required.
Related Documentation
Release Notes provide the most up-to-date information about the system, including
new feature summaries, upgrade instructions, and fixed and known defects.
Administration Guides
■ System Access Guide (MK-92HNAS014)—Explains how to log in to the system, provides
information about accessing the NAS server/cluster CLI and the SMU CLI, and
provides information about the documentation, help, and search capabilities available
in the system.
■ Server and Cluster Administration Guide (MK-92HNAS010)—Provides information about
administering servers, clusters, and server farms. Includes information about
licensing, name spaces, upgrading software, monitoring servers and clusters, and
backing up and restoring configurations.
■ Storage System User Administration Guide (MK-92HNAS013)—Explains user
management, including the different types of system administrator, their roles, and
how to create and manage these users.
■ Network Administration Guide (MK-92HNAS008)—Provides information about the
server's network usage, and explains how to configure network interfaces, IP
addressing, name and directory services.
■ File Services Administration Guide (MK-92HNAS006)—Explains file system
formats, and provides information about creating and managing file systems, and
enabling and configuring file services (file service protocols).
■ Data Migrator Administration Guide (MK-92HNAS005) —Provides information about the
Data Migrator feature, including how to set up migration policies and schedules.
■ Storage Subsystem Administration Guide (MK-92HNAS012)—Provides information about
managing the supported storage subsystems (RAID arrays) attached to the server/
cluster. Includes information about tiered storage, storage pools, system drives (SDs),
SD groups, and other storage device related configuration and management features
and functions.
■ Snapshot Administration Guide (MK-92HNAS011)—Provides information about
configuring the server to take and manage snapshots.
■ Replication and Disaster Recovery Administration Guide (MK-92HNAS009)—Provides
information about replicating data using file-based replication and object-based
replication, provides information on setting up replication policies and schedules, and
using replication features for disaster recovery purposes.
■ Antivirus Administration Guide (MK-92HNAS004)—Describes the supported antivirus
engines, provides information about how to enable them, and how to configure the
system to use them.
■ Backup Administration Guide (MK-92HNAS007)—Provides information about
configuring the server to work with NDMP, and making and managing NDMP backups.
Note: For a complete list of Hitachi NAS open source software copyrights and
licenses, see the System Access Guide.
Hardware References
■ Hitachi NAS Platform 3080 and 3090 G2 Hardware Reference (MK-92HNAS017) —
Provides an overview of the second-generation server hardware, describes how to
resolve any problems, and replace potentially faulty parts.
■ Hitachi NAS Platform and Hitachi Unified Storage Series 4000 Hardware Reference
(MK-92HNAS030)—Provides an overview of the Hitachi NAS Platform Series 4000
server hardware, describes how to resolve any problems, and how to replace
potentially faulty components.
■ Hitachi NAS Platform System Manager Unit (SMU) Hardware Reference (MK-92HNAS065)
—This document describes the usage and replacement instructions for the SMU
300/400.
Best Practices
■ Hitachi USP-V/VSP Best Practice Guide for HNAS Solutions (MK-92HNAS025)—The
practices outlined in this document describe how to configure the system to achieve
the best results.
■ Hitachi Unified Storage VM Best Practices Guide for HNAS Solutions (MK-92HNAS026) —
The system is capable of heavily driving a storage array and disks. The practices
outlined in this document describe how to configure the system to achieve the best
results.
■ Hitachi NAS Platform Best Practices Guide for NFS with VMware vSphere (MK-92HNAS028)
—This document covers best practices specific to using VMware vSphere with the
Hitachi NAS platform.
■ Hitachi NAS Platform Deduplication Best Practice (MK-92HNAS031)—This document
provides best practices and guidelines for using deduplication.
■ Hitachi NAS Platform Best Practices for Tiered File Systems (MK-92HNAS038)—This
document describes the Hitachi NAS Platform feature that automatically and
intelligently separates data and metadata onto different Tiers of storage called Tiered
File Systems (TFS).
■ Hitachi NAS Platform Data Migrator to Cloud Best Practices Guide (MK-92HNAS045)—
Data Migrator to Cloud allows files hosted on the HNAS server to be transparently
migrated to cloud storage, providing the benefits associated with both local and cloud
storage.
■ Brocade VDX 6730 Switch Configuration for use in an HNAS Cluster Configuration Guide
(MK-92HNAS046)—This document describes how to configure a Brocade VDX 6730
switch for use as an ISL (inter-switch link) or an ICC (inter-cluster communication)
switch.
■ Best Practices for Hitachi NAS Universal Migrator (MK-92HNAS047)—The Hitachi NAS
Universal Migrator (UM) feature provides customers with a convenient and minimally
disruptive method to migrate from their existing NAS system to the Hitachi NAS
Platform. The practices and recommendations outlined in this document describe
how to best use this feature.
■ Hitachi Data Systems SU 12.x Network File System (NFS) Version 4 Feature Description
(MK-92HNAS056)—This document describes the features of Network File System
(NFS) Version 4.
■ Hitachi NAS HDP Best Practices (MK-92HNAS057)—This document lists frequently asked
questions regarding the use of Hitachi Dynamic Provisioning.
■ Hitachi Multi-tenancy Implementation and Best Practice Guide (MK-92HNAS059)—This
document details the best practices for configuring and using Multi-Tenancy and
related features, and EVS security.
■ Hitachi NAS Platform HDP Best Practices (MK-92HNAS063)—This document details the
best practices for configuring and using storage pools, related features, and Hitachi
Dynamic Provisioning (HDP).
■ Hitachi NAS Platform System Manager Unit (SMU) Hardware Reference (MK-92HNAS065)
—This document describes the usage and replacement instructions for the SMU
300/400.
■ Brocade VDX 6740 Switch Configuration for use in an HNAS Cluster Configuration Guide
(MK-92HNAS066)—This document describes how to configure a Brocade VDX 6740
switch for use as an ICC (intra-cluster communication) switch.
■ File System Snapshots Operational Best Practice (MK-92HNAS068)—This document
provides operational guidance on file system snapshots.
■ Virtual Infrastructure Integrator for Hitachi Storage Platforms Operational Best Practice
(MK-92HNAS069)—This document provides operational guidance on Hitachi Virtual
Infrastructure Integrator for the HNAS platform.
■ Hitachi NAS Platform Replication Best Practices Guide (MK-92HNAS070)—This document
details the best practices for configuring and using HNAS Replication and related
features.
■ Hitachi Virtual SMU Administration Guide (MK-92HNAS074)—This guide provides
information about how to install and configure a virtual System Management Unit
(SMU).
■ Hitachi NAS Platform to Hitachi Virtual Storage Platform Unified Gx00 Models Migration
Guide (MK-92HNAS075)—This best practice guide describes how to perform a data-in-
place migration of the Hitachi NAS Platform and Virtual Storage Platform (VSP) Gx00
File solution to the VSP Gx00 platform.
Getting help
Hitachi Support Connect is the destination for technical support of products and
solutions sold by Hitachi. To contact technical support, log on to Hitachi Support Connect
for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Comments
Please send us your comments on this document to
[email protected]. Include the document title and number, including
the revision level (for example, -07), and refer to specific sections and paragraphs
whenever possible. All comments become the property of Hitachi.
Thank you!
In a tiered file system, metadata is stored on the highest performance tier of storage,
and user data is stored on a lower-performance tier. A tiered file system may provide the
following:
■ Performance benefits: Storing metadata on the higher-performance tier provides
system performance benefits over storing both the metadata and user data on the
same, lower tier of storage. The performance gain is seen because metadata is
accessed more often than the user data, so storing it on higher-performance storage
increases overall performance.
■ Reduced expenses for storage: Storing metadata on the higher-performance storage
(which is usually more expensive than the lower performance storage) and user data
on lower performance (and less expensive) storage may provide cost benefits. This is
because metadata typically consumes a relatively small amount of storage, while the
user data consumes the bulk of the storage. Because the higher-performance storage
is used only to hold the metadata, less of the expensive storage is used than if both
the metadata and the user data were on the higher-performance storage. Also,
because user data can be kept on lower performance storage while achieving better
performance than keeping both metadata and user data on the lower performance
storage, you may not have to upgrade storage as often (or you may be able to
repurpose aging storage when you do upgrade).
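The cost reasoning above can be sketched numerically. Everything in this snippet is an illustrative assumption: the 5% metadata fraction and the per-TB prices are invented for the example and are not figures from this guide.

```python
# Hypothetical cost comparison for tiered vs. untiered storage.
# The metadata fraction and per-TB prices below are illustrative
# assumptions, not figures from this guide.

def storage_cost(total_tb, metadata_fraction, fast_cost_per_tb,
                 slow_cost_per_tb, tiered):
    """Return the hardware cost of holding total_tb of file system data."""
    meta_tb = total_tb * metadata_fraction
    user_tb = total_tb - meta_tb
    if tiered:
        # Tiered: only metadata lives on the expensive tier.
        return meta_tb * fast_cost_per_tb + user_tb * slow_cost_per_tb
    # Untiered: everything lives on the expensive tier.
    return total_tb * fast_cost_per_tb

tiered = storage_cost(100, 0.05, 500, 100, tiered=True)     # 2500 + 9500 = 12000
untiered = storage_cost(100, 0.05, 500, 100, tiered=False)  # 50000
print(tiered, untiered)
```

Even with metadata placed on storage five times as expensive per TB, the tiered layout in this toy example costs roughly a quarter of placing all 100 TB on the fast tier, because metadata is a small fraction of the total.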
A tiered file system has the following characteristics:
■ Maintains a single file system view while providing data separation. This separation
allows the file system to store file system metadata (which is critical to system
performance) on very high-performance devices, while storing user data on cheaper,
lower-performance storage subsystems.
■ The use of multiple tiers of storage is completely transparent to applications or
clients. No environmental tweaking or effort is required. All file system functionality
(such as snapshots, replication, quotas, cluster name space, and virtual volumes) is
preserved.
■ File system management activities, such as mounting, unmounting, sharing, and
exporting, for tiered file systems are the same as for untiered file systems.
■ The file system block size (4 KiB or 32 KiB) is maintained across all tiers of storage.
■ Cross volume links are treated as metadata.
Tier 1 data (user data) can spill over into Tier 0 (metadata). This only occurs if the Tier 1
file system is completely full, and additional data is written to the file system. Users are
alerted if this type of spillage occurs, enabling them to better allocate data.
Tier 0 data (metadata) can also spill over into Tier 1 if necessary. Once there is space
again on Tier 0, the NAS server returns the metadata to its original tier. If Tier 0 data
spills over to Tier 1, performance can be degraded, including reduced write performance.
Model 4040: maximum of 128 file systems.
To increase the number of supported file systems from the default to the maximum,
contact customer support.
For further information, see the CLI command filesystem-enable-max-count man
page.
Per-span limits
By default, there is a limit of 32 filesystems per span. If you require a greater number of
filesystems per span, you can increase this limit using the filesystem-create CLI
command with the --exceed-safe-count option. Do not use this option when creating
the first 32 filesystems; use it only when creating filesystems beyond the 32nd.
Note: This option is only available on the CLI. The NAS Manager does not
permit you to create more than 32 filesystems.
Creating too many filesystems fills up the filesystem catalogue. The filesystem catalogue
usage is dependent on the filesystem name lengths; the longer the name of the
filesystem, the more space it consumes in the catalogue. Use the span-dump command
to check the remaining space in the FS catalogue.
If the filesystem catalogue is full, the recycle bin becomes unavailable. In this case, to
delete a filesystem, use the filesystem-delete --no-undeletion-information
option to bypass the recycle bin.
Note: There is a limit of 128 filesystems that can be assigned to a single EVS.
Note: If Dynamic Write Balancing is not enabled, or if your system does not
support Dynamic Write Balancing, when expanding a storage pool, use as
many disk drives as possible and keep SDs as large as possible to attain
optimal performance. For more information on Dynamic Write Balancing,
refer to the Storage Subsystem Administration Guide.
Note: The maximum size of a file system is 1 PiB, but a 1 PiB file system is
only supported on an HDP storage pool.
Procedure
1. From the Home page, navigate to Storage Management > File Systems to display
the File System page.
2. Click create to display the Create File System page.
Storage Pool: Displays the name of the storage pool in which the file system or read cache is being created.
Free Capacity Amount: Displays the available space in the storage pool that can be used by the file systems.
Tier 0 Meta-data and Tier 1 User-data: Displays the size of the storage pool’s metadata tier (Tier 0) and of the user data tier (Tier 1).
Guideline Chunk Size: Displays the approximate size of the chunks used in the selected storage pool.
Rounded to nearest chunk: Click to read about how file system creation and expansion are affected by rounding to the nearest chunk size.
Label: The label (name) by which the file system or read cache should be referenced.
Initial Capacity: The initial capacity of the file system, or of the user data tier of a tiered file system. As auto-expansion occurs, the file system grows to the size specified in the Size Limit field.
Assign to EVS: The EVS to which the file system or read cache is assigned.
Object Replication Target: When this check box is filled, the file system or read cache will be formatted to allow shares and exports.
Support and Enable Dedupe: When this check box is filled, the file system can support and enable deduplication.
Block Size: Sets the optimal block size for the file system or read cache.
14. In the Block Size field, enter the desired file system block size.
15. Click OK.
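The "Rounded to nearest chunk" behavior can be illustrated with a small sketch. This is not the server's actual allocation code; the 18 GiB guideline chunk size is just an example value. The idea is that capacity is allocated in whole chunks, so a requested size is rounded up to the next chunk multiple.

```python
import math

# Illustrative sketch (not the server's actual algorithm): capacity is
# allocated in whole chunks, so a requested size rounds up to the next
# multiple of the storage pool's guideline chunk size.

def rounded_capacity(requested_gib, chunk_gib):
    """Round a requested file system capacity up to a whole number of chunks."""
    chunks = math.ceil(requested_gib / chunk_gib)
    return chunks * chunk_gib

print(rounded_capacity(100, 18))  # -> 108 (6 chunks of 18 GiB)
print(rounded_capacity(36, 18))   # -> 36 (already a whole number of chunks)
```

This is why the capacity a file system actually receives can be slightly larger than the value entered in the Initial Capacity field.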
Read caches
A read cache is a special read-only file system that stores copies of individual files
outside of their local file systems, enabling a server or a node to have a cached copy of
the file. When NFS v2 or NFS v3 clients submit a read request for a file in the read cache,
the server or node can serve the read request from the copy in the read cache. Note that
a read cache does not benefit CIFS clients, and that read caches have special
characteristics and limitations.
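The read-cache behavior described above (serve the request from a local copy when one exists, otherwise fetch from the file's home file system) can be sketched as a toy read-through cache. All names in this sketch are illustrative; it models only the serving logic, not the server's implementation.

```python
# Toy sketch of the read-cache idea: serve a read from a local cached
# copy when one exists, otherwise fall back to the remote file system.
# fetch_remote and the cache dict are illustrative stand-ins.

cache = {}

def fetch_remote(path):
    # Stand-in for a read from the file's home file system.
    return f"data-of-{path}"

def cached_read(path):
    """Serve the read locally if cached; otherwise fetch and cache a copy."""
    if path not in cache:
        cache[path] = fetch_remote(path)
    return cache[path]

print(cached_read("/exports/a.txt"))  # first read: fetched, then cached
print(cached_read("/exports/a.txt"))  # second read: served from the cache
```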
Deduplication characteristics
The deduplication feature has the following characteristics:
■ Only user data blocks are deduplicated.
■ Dedupe is a post-process that is performed as a fixed block-based deduplication. It is
not an inline dedupe process.
■ Data is deduped within a given file system and not across multiple file systems.
■ Dedupe has been designed with quality of service (QoS) as a key component. File
system activity takes precedence over dedupe activity when file serving load goes
beyond 50 percent of the available IOPS or throughput capacity. The deduplication
process throttles back to allow the additional file serving traffic load to continue
without impacting performance.
■ You can configure a new file system to support and enable dedupe.
■ An existing WFS-2 file system can be converted to be dedupe-enabled.
■ File systems with support for dedupe can be dedupe-enabled or dedupe-disabled.
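The quality-of-service rule in the list above can be sketched as a simple predicate. The 50 percent threshold comes from the text; the function name and the IOPS figures are illustrative assumptions.

```python
# Minimal sketch of the QoS rule described above: dedupe yields to file
# serving once load passes 50% of available capacity. The threshold
# mirrors the text; everything else here is illustrative.

LOAD_THRESHOLD = 0.5

def dedupe_should_throttle(current_iops, max_iops):
    """Return True when file-serving load exceeds half the available IOPS."""
    return current_iops > LOAD_THRESHOLD * max_iops

print(dedupe_should_throttle(30_000, 100_000))  # False: dedupe can proceed
print(dedupe_should_throttle(60_000, 100_000))  # True: file serving takes precedence
```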
Application Interoperability
■ Object Replication: File data is not retained in the deduplicated state when replicating
a file system using the Object Replication option. Deduplication is not supported on
object replication targets.
■ File Replication: File data is not retained in the deduplicated state during file
replication. The target file system can have dedupe enabled, in which case the files on
the target will eventually be deduped.
■ Data Migration: With both local and external migrations, migrated files are rehydrated.
■ NDMP file-based backup: Deduplicated files that are deep copied during an NDMP
file-based backup are restored as single files from tape; an NDMP recovery cannot
restore files in a deduplicated state.
■ Sync Image backup: Sync Image backups do not retain their deduplicated state.
■ Tiered File Systems: User data can be deduplicated; however, metadata is not.
If both Group A and Group B have gone through the dedupe process:
■ Group A had no duplicates removed and consumed the same 30 TB.
■ Group B had duplicates removed and consumed only 10 TB to hold the unique data
blocks.
■ Group B (70 TB) = {Group C (10 TB raw remaining)} + {Group D (60 TB deduped and
now sharing or pointing to physical blocks of group C)}
■ The original 100 TB of data now requires only 40 TB (30 plus 10) of physical blocks
because all duplicates were removed. However, the logical data size is 100 TB (30 plus
70), which is the amount of space needed if the data were not deduped. The results
are outlined in the following table:
Used space: The amount of physical disk space used by the file system; in this example, group A and group C = 30 + 10 = 40 TB.
Deduped space: The amount of duplicate data that does not occupy its own physical disk space, but has been deduped to share existing physical blocks = group D = 60 TB.
Logical space: The amount of physical disk space that would be required if the data were not deduped = {used space} + {deduped space} = 40 + 60 = 100 TB.
Based on the example presented, the dedupe percentage gives the amount of physical
disk space saved by removing the duplicates. The percentage measures against the
amount of space that would be required if the data were not deduped.
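The space accounting and dedupe percentage described above reduce to simple arithmetic. The following sketch reuses the example's figures; it is illustrative only, not a server API:

```python
def dedupe_stats(used_tb, deduped_tb):
    """Compute logical size and dedupe saving percentage.

    used_tb:    physical disk space occupied after dedupe
    deduped_tb: duplicate data sharing existing physical blocks
    """
    logical_tb = used_tb + deduped_tb
    # Percentage of logical space saved by removing duplicates
    pct_saved = 100 * deduped_tb / logical_tb
    return logical_tb, pct_saved

# Values from the example: groups A + C use 40 TB, group D is 60 TB deduped
logical, saved = dedupe_stats(40, 60)
print(logical, saved)  # 100 60.0
```

The dedupe percentage (60% here) measures savings against the logical size, i.e. the space the data would need if it were not deduped.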
For example:
■ All columns except Snapshots and Deduped have the same meaning as a normal file
system:
● Size column: The formatted capacity of the file system.
● Used column: The amount (and percentage) of formatted capacity used by live and
snapshot data.
● Avail column: The amount of formatted capacity available for further data.
■ Deduped column
● This column reports the amount (and percentage) of deduped data in the file
system.
■ Snapshots column
● This column normally reports the amount (and percentage) of logical space used
by snapshots.
● On file systems that do not support dedupe, the logical size of data is equal to the
physical space used by that data. However, on file systems that support dedupe,
the logical space used by snapshots can exceed the physical used size due to block
sharing through Dedupe.
● In some cases, snapshot space usage can even exceed the total formatted capacity
of file system size. To avoid confusion, the Snapshots column displays NA for file
systems that support dedupe.
Procedure
1. Navigate to Home > Storage Management > File Systems to display the File
Systems page.
Field/Item Description
Filter Click to open the Filter dialog box and enter one or more of the
following filtering criteria:
■ File System
■ Storage Pool
■ Status
■ EVS
Click OK to save the filter settings. File Systems that meet the
specified criteria are displayed on the page. Click reset to remove
all filter settings.
Label Name of the file system, assigned upon creation and used to
identify the file system when performing particular operations; for
example, creating an export or taking a snapshot.
Storage Name of the storage pool on which the file system resides.
Pool
■ Mounted as Object Replication target: The file system is
mounted and has been formatted as an object replication target.
■ Mounting: The file system is in the process of being mounted; once mounted, it is available for service.
■ Not Assigned to EVS: The file system is not currently assigned to
an EVS.
■ Not Available for Mounting: The file system is not available.
Make sure to enable the EVS to which the file system is assigned
and make sure the file system is not marked "hidden".
■ Not Mounted: The file system is not mounted.
■ Not Mounted (System Drive initialization status unknown):
The file system is not mounted, and the SD initialization status is
not known.
■ Not Mounted (System Drive is not initialized): The file system
is not mounted. The SD initialization status is known, but SD
initialization is not complete.
■ Syslocked: The file system is syslocked.
details Displays the File System Details page for the selected file system.
mount Select one or more unmounted file systems and click mount to
mount the file system.
unmount Select one or more mounted file systems and click unmount to
unmount the file systems.
Active Displays more information about active tasks on the Active Tasks
Tasks page.
Procedure
1. Navigate to Home > Storage Management > File Systems.
2. Select a file system and click details to display the File System Details page.
The following table describes the fields in this page:
Field/Item Description
Settings/Status
Label Name of the file system, assigned upon creation and used to
identify the file system when performing particular
operations; for example, creating an export or taking a
snapshot. To change the name of the file system, enter a
new name and click rename.
■ Total Used: Total amount of file system space in use, in
GiB or TiB and as a percentage of the total.
■ Expansion Limit: Defines the size limit up to which a file
system can expand if auto-expansion is allowed, provided
it accommodates the chunk size.
■ Live File System: Total space used by the file system
data, in GiB or TiB and as a percentage of the total.
■ Snapshots: Total space used by snapshots of the file
system, in GiB or TiB and as a percentage of the total.
Where no snapshots exist, this is reported as '0 Bytes'.
Tier 0 Meta-data and Tier 1 User-data
Note: These areas are displayed only for tiered file systems.
These areas display information about the space allocation
and usage for the tiers making up the file system. The Tier 0
Meta-data section describes the metadata tier. The Tier 1
User-data section describes the user data tier.
■ % Total Used Space: Percentage of the file system’s total
allocated space that has been used. This total reflects
data and snapshots, if any.
■ Capacity: Total amount of formatted space (free + used
space).
■ Free: Total amount of file system space unused (free), in
GiB and as a percentage of the total.
■ Total Used: Total amount of file system space in use, in
GiB and as a percentage of the total.
If you change any of the auto‐expansion limits, click apply to
make the changes effective.
Configuration
Status Current status of the file system, showing whether the file
system is mounted or unmounted.
backup or replication, but the file system remains in read‐
only mode to clients using the file service protocols (NFS,
CIFS, FTP, and iSCSI).
To enable/disable the System Lock for a file system, click
enable or disable. When viewing the details of a read cache,
the System Lock’s enable/disable button is not available.
Transfer Access Indicates whether or not the file system is enabled to allow
Points During transfer access points (shares and/or exports) during an
Object Replication object replication. If disabled, click enable to allow the file
system to transfer access points during an object replication.
If enabled, click disable to prohibit the transfer of access
points during an object replication.
Security Mode Displays the file system security policy defined for the file
system. Possible values are Mixed (Windows and Unix)
or Unix. This may be followed by (Inherited), where the
security mode is inherited from the EVS security mode and
has not been manually changed. Clicking on the status
displays the File System Security page.
Block Size File system block size: 32 KiB or 4 KiB, as defined when the
file system was formatted.
Read Cache Indicates whether this file system is a read cache (Yes) or a
regular file system (No).
Usage Thresholds
File System Usage Usage thresholds are expressed as a percentage of the space
that has been allocated to the file system.
When a threshold is reached, an event is logged and,
depending on quota settings, an email may be sent. The
Current line displays information on current usage for each
of the following:
■ Live file system: Percentage threshold for space used by
data.
■ Snapshots: Threshold for file system snapshots.
■ Entire file system: Threshold for the total of the live file
system data and snapshots.
You can use the edit boxes to specify the Warning and Severe
thresholds:
■ The Warning threshold should be set to indicate a high,
but not critical, level of usage.
■ The Severe threshold should be set to indicate a critical
level of usage, a situation in which an out‐of‐space
condition may be imminent.
You can define both Warning and Severe thresholds for any
or all of the following:
■ Live file system (data).
■ File system snapshots.
■ Total of the live file system and snapshots.
Click apply to save the new settings.
To ensure that the live file system does not expand beyond its
Severe threshold setting, which would cause snapshots to be
lost, fill in the Do not allow the live file system to expand
above its Severe limit check box.
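The Warning/Severe evaluation described above can be sketched as a simple classification. This is an illustrative model, not server code; the default threshold values here are hypothetical settings for one file system:

```python
def check_usage(used_pct, warning=70.0, severe=90.0):
    """Classify file system usage against Warning and Severe thresholds."""
    if used_pct >= severe:
        return "severe"   # critical: an out-of-space condition may be imminent
    if used_pct >= warning:
        return "warning"  # high, but not critical
    return "ok"

# One check each for live data, snapshots, and the entire file system
for name, pct in {"live": 65.0, "snapshots": 75.0, "entire": 92.0}.items():
    print(name, check_usage(pct))
```

In the product, crossing a threshold logs an event and, depending on quota settings, may trigger an email; the check itself is just this comparison applied to each of the three usage figures.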
Associations
Storage Pool The name of the storage pool in which the file system or
read cache was created.
This area also displays the following information:
■ Capacity: The total space allocated to the storage pool.
■ Free: The total storage pool free space, in MiB, GiB or TiB,
and as a percentage of the total.
■ Used: The total storage pool used space, in MiB, GiB or
TiB, and as a percentage of the total.
Related File Displays the name of any related file systems. A related file
Systems system is one that is either the:
■ Source of a migration or replication operation where this
file system was the target of the operation.
■ Target of a migration or replication operation where this
file system was the source of the operation.
If there are related file systems, the start date/time of the
most recent associated migration/replication will be
displayed in the Last: field.
Check/Fix
Scope The scope controls allow you to set the scope of a check to
either the entire file system or a directory tree.
To check the whole file system, click the Entire File System
radio button. This option is only available when the file
system is not mounted.
To check a part of the file system, click the Directory Tree
radio button, then use the browse button to navigate to the
part of the file system you want to check.
Once you have set the scope, click check to start the check.
The Cancel button requests that checkfs/fixfs be
aborted. This is not a forceful cancellation, so the check/fix
operation may not be aborted immediately.
delete Use to delete the file system. Not available when the file
system is mounted.
Procedure
1. From the Home page, navigate to Storage Management > File Systems to display
a list of all file systems.
2. For the file system to be formatted, click details.
3. If the file system is mounted, do the following to unmount it:
a. In the Label column, select the file system.
b. In the Actions section, click unmount.
c. In the confirmation dialog, click OK.
There is no special configuration needed to use this feature. File systems on HDP storage
are automatically formatted with standard bitmap resiliency. File systems on non-HDP
storage are formatted with enhanced bitmap resiliency as before.
Two commands are provided to perform bitmap resiliency conversion:
fs-convert-to-standard-bitmap-resiliency
fs-convert-to-enhanced-bitmap-resiliency
When converting a file system from a resilient to standard format (or vice versa), note
that:
■ The file system must be unmounted before converting bitmap resiliency.
■ Bitmap resiliency conversion is reversible. You can undo a conversion by running the
opposite command listed above.
■ If the bitmap resiliency conversion fails (for example, due to a power failure), the file
system reverts to its state before the command was initiated, as if the command had
never been run.
■ To use the fs-convert-to-enhanced-bitmap-resiliency command, the free
space needed is about 8 times the data length of the free space bitmap. For the
fs-convert-to-standard-bitmap-resiliency command, it is about 4 times.
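The free-space requirement above can be estimated from the bitmap size. A rough sketch; the 8x and 4x multipliers come from the text, while the 256 MiB bitmap size is a made-up figure for illustration:

```python
def conversion_free_space(bitmap_bytes, to_enhanced=True):
    """Estimate free space needed to convert bitmap resiliency.

    ~8x the free space bitmap's data length for standard -> enhanced,
    ~4x for enhanced -> standard.
    """
    return bitmap_bytes * (8 if to_enhanced else 4)

# e.g. a 256 MiB free space bitmap (hypothetical)
bitmap = 256 * 1024 * 1024
print(conversion_free_space(bitmap) // (1024 * 1024), "MiB needed")  # 2048 MiB needed
```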
Procedure
1. Navigate to Home > Storage Management > File Systems to display a list of all file
systems.
2. Select the check box next to the label of the file system to be mounted.
3. If the file system is unmounted, click mount.
Note: Some types of files in a directory tree are not cloneable. For example,
files that are not regular (such as sockets, FIFOs, block special devices, and
character devices) are not cloneable, and links such as hard links or cross
volume links are not cloneable.
During a directory tree cloning operation, timing can have an effect on what is cloned,
because the directory tree is not protected against modification during the clone
operation. The cloned directory tree is not a point-in-time replica (like a snapshot),
because the source tree is online and may be in use during the clone operation, so the
directory tree (or the files in the tree) may be modified while the tree clone operation is
in progress. When a directory tree is modified during a tree clone operation, none, some,
or all of the modifications may be included in the clone. To ensure that the directory tree
is cloned precisely as it is at the time the clone operation is initiated (a consistent copy),
you must ensure that the directory tree is not modified until after the clone operation is
complete.
Tree cloning uses the same mechanism as file cloning to clone individual files within the
tree, so the same limitations apply, and a File Clone license is required to enable file or
directory tree clones features.
■ The server-side delete eliminates the need for a client to recursively delete the
directory tree over the network, thereby using fewer system resources.
Although the targeted directory tree is immediately removed from the file system
namespace and the listing of the parent directory, the client continues to have access to
parts of the directory tree it had acquired access to prior to deletion, until these parts are
actually deleted in the background. Users should be mindful of the fact that changes to a
directory tree (creation and deletion of files/directories) after submission for deletion are
detected by tree-delete, and all such new content is deleted before tree-delete
considers its job done.
Note: Quotas only reflect the physical deletion happening in the background.
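The background-deletion semantics above — the tree vanishes from the namespace immediately, and content created after submission is still detected and removed — can be modeled with a rescan loop. This is a conceptual Python sketch, not the server's implementation:

```python
import os
import shutil
import tempfile

def tree_delete(path):
    """Repeatedly scan and delete until the tree is gone.

    Content added to the tree after submission is still detected and
    removed on a later pass, mirroring the documented tree-delete
    behavior of deleting all new content before considering the job done.
    """
    while os.path.exists(path):
        shutil.rmtree(path, ignore_errors=True)

# Demo on a scratch directory
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "f.txt"), "w").close()
tree_delete(root)
print(os.path.exists(root))  # False
```

The real implementation also keeps the deleted tree accessible to clients that already held references into it until those parts are physically removed, which this sketch does not model.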
Undeletable directories
The following directories cannot be deleted using tree-delete:
■ Root directory and system directories.
Note: You can run logtrace dump tree-delete for more details about
deleted files.
Using tree-delete
tree-delete is implemented with the commands:
■ tree-delete-job-abort
■ tree-delete-job-list
■ tree-delete-job-reschedule
■ tree-delete-job-status
■ tree-delete-job-submit
Please see the man pages for details.
■ A maximum of 160 jobs can be handled by the system at any given time.
Troubleshooting tree-delete
Run logtrace dump tree-delete for details about deleted files. Contact customer
support if necessary.
Note: Deleting files from the live file system may increase the space taken up
by snapshots, so that no disk space is actually reclaimed as a result of the
delete operation. The only sure way to reclaim space taken up by snapshots is
to delete the oldest snapshot.
When the storage space occupied by a volume crosses the warning threshold, a warning
event is recorded in the event log. When the Entire File System Warning threshold has
been reached, the space bar used to indicate disk usage turns yellow:
When the space reaches the severe threshold, a severe event is recorded in the event
log, generating corresponding alerts. If the Entire File System Severe threshold has been
reached, the space bar used to indicate disk usage turns amber.
If file system auto-expansion is disabled, you can limit the growth of the live file system
to prevent it from crossing the severe threshold, effectively reserving the remaining
space for use by snapshots. To limit the live file system to the percentage of available
space defined as the severe threshold, fill the Do not allow the live file system to expand
beyond its Severe limit check box on the File System Details page.
The recommendations are structured to take into consideration file systems of various
sizes and uses. The recommendations are broken into the following components:
■ Type of file system
■ Recommended maximum file system utilization
■ Recommended file system thresholds
Archive file systems are defined as file systems that maintain a data set, with little to
no change, for an extended period of time. This access pattern allows the file system
to be utilized at very high levels.
Recommendation: The file system should be maintained at a usage level no higher than
97%. The Entire File System usage thresholds are recommended to be set at the
following levels:
Warning 90%
Severe 97%
High activity or high churn file systems are defined as file systems that have a high rate
of data being accessed, deleted and created. Due to the workload type and to maintain a
high level of write performance, sufficient free space is required. These amounts can
vary based on file system size. The following recommendations take into account file
system size.
■ File system size range < 1 TiB
● Recommendation: The file system should be maintained at a usage level no higher
than 80%. The Entire File System usage thresholds are recommended to be set at
the following levels:
Warning User Definable*
Severe 80%
* User Definable: Choose a value that provides sufficient time to increase file system
capacity.
■ File system size range 1 TiB to < 10 TiB
● Recommendation: The file system should be maintained at a usage level no higher
than 85%. The Entire File System usage thresholds are recommended to be set at
the following levels:
Warning 70%
Severe 85%
■ File system size range ≥ 10 TiB
● Recommendation: The Entire File System usage thresholds are recommended to
be set at the following levels:
Warning 80%
Severe 90%
The file system maintains a history of file system checkpoints known as Dynamic
Superblocks. If the end user requires fast reclamation of free space after data deletions,
the DSB count could be reduced to 2 for file systems <10TiB and 16 for file systems
>10TiB. The default number of DSBs is 128. You can specify the setting at format time or
change it at a later time by issuing the following command:
fs-set-dsb-count <file system> <dsb count>
Example:
To change the DSB count of "fs1" to two DSBs:
fs-set-dsb-count fs1 2
Note that changing the number of DSBs requires that the file system be unmounted.
When thin provisioning is enabled, and the aggregated file system expansion limits of all
file systems exceed the amount of storage connected to the server/cluster, warnings
are issued to indicate that storage is oversubscribed. These warnings are issued because
there is not enough physical storage for every file system to grow to its expansion limit.
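The oversubscription condition is a straightforward comparison: the sum of all expansion limits against the attached physical storage. An illustrative sketch (the figures are hypothetical):

```python
def oversubscribed(expansion_limits_tb, physical_tb):
    """Return True when the aggregated expansion limits of all
    thin-provisioned file systems exceed the attached storage."""
    return sum(expansion_limits_tb) > physical_tb

# Three file systems with 10 TiB limits each, on 24 TiB of storage
print(oversubscribed([10, 10, 10], 24))  # True: warnings would be issued
```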
File System Type: Untiered

Auto-Expansion Enabled: If auto-expansion is not confined, the size limit is ignored.
The file system will be allowed to expand until the storage pool is full. If
auto-expansion is confined, the size limit defines the maximum size to which a file
system will be allowed to expand.
When the file system is created, it is initially allocated a certain amount of space (the
initial capacity), and the file system is allowed to expand automatically, up to its size
limit. When the file system uses approximately 80% of its currently allocated space, it
is expanded automatically, up to its size limit. This expansion occurs in increments
specified by the guideline chunk size (which is calculated by the system). The file
system can be manually expanded, increasing the file system size limit.

Auto-Expansion Disabled: The size limit defines the amount of space that is
immediately allocated to the file system. When the file system is created, it is
allocated the total amount of space specified by the size limit. The file system can be
manually expanded, increasing the file system size limit.

File System Type: Tiered

Auto-Expansion Enabled: If auto-expansion is not confined, the size limit (if defined)
is ignored. The tiers of the file system will be allowed to expand until the storage pool
is full. If auto-expansion is confined, the size limit defines the maximum size to which
the tiers of the file system will be allowed to expand.
When the file system is created, the user data tier is initially allocated a certain
amount of space (the initial capacity), and the user data tier is allowed to expand
automatically, up to its size limit. When the user data tier uses approximately 80% of
its currently allocated space, it is expanded automatically, up to its size limit. This
expansion occurs in increments specified by the guideline chunk size (which is
calculated by the system). Either tier can be manually expanded, increasing the file
system size limit.

Auto-Expansion Disabled: The size limit defines the amount of space that is
immediately allocated to the user-data tier. When the file system is created, the user
data tier is initially allocated the total amount of space specified by the size limit.
Either tier can be manually expanded, increasing the file system size limit.
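The auto-expansion rule above — expand when roughly 80% of the currently allocated space is used, in guideline-chunk increments, never past a confined size limit — can be sketched as follows. This is an illustrative model, not server code; the function name and figures are hypothetical:

```python
def next_allocation(allocated, used, chunk, size_limit, confined=True):
    """Return the new allocation after one auto-expansion check.

    Expands by one guideline chunk when usage reaches ~80% of the
    current allocation; a confined size limit caps the expansion.
    """
    if used < 0.8 * allocated:
        return allocated          # below the trigger: no expansion
    new_alloc = allocated + chunk
    if confined:
        new_alloc = min(new_alloc, size_limit)
    return new_alloc

print(next_allocation(100, 85, 20, 200))   # 120: usage crossed 80%
print(next_allocation(100, 50, 20, 200))   # 100: below the trigger
print(next_allocation(190, 185, 20, 200))  # 200: capped at the size limit
```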
Procedure
1. Navigate to Home > Storage Management > File Systems.
2. Select a file system and click details to display the File System Details page.
3. Click expand to display the Expand File System page.
For an untiered file system the Expand File System page looks like the following:
In some circumstances, such as when the storage pool resides on a UVM span or in
HDP compressed storage, a specific stripeset must be selected for expanding the
file system. If the server cannot select the stripeset, the Expand File System page
shows a list of stripesets from which to select.
Note: You can expand one tier per expansion operation. To expand
both tiers, you must perform a manual expansion twice.
5. Click OK.
Manual expansion of file systems is also supported through the command line
interface. For detailed information on this process, run man filesystem-expand
on the CLI.
Caution: Whether or not the file system resides in a CNS, relocating a file
system will disrupt CIFS communication with the server. If Windows clients
require access to the file system, the file system relocation should be
scheduled for a time when CIFS access can be interrupted.
■ Transfer of primary access
A transfer of primary access is a replication-based method of copying data from a
portion of a file system (or an entire file system) and relocating the access points for
that data (copying the data and metadata). A transfer of primary access causes very
little down time, and the file system is live and servicing file read requests during
most of the relocation process. For a short period during the relocation process,
access is limited to read-only. For more information on relocating a file system using
transfer of primary access, refer to the Replication and Disaster Recovery Administration
Guide.
The method you use to relocate a file system depends, in part, on what you want to
move, and what you want to accomplish by relocating the file system.
■ If you want to move the file system's access points, but not the actual data, using file
system relocation is the most appropriate method.
■ If you want to move the file system's data and access points, using a transfer of
primary access is the most appropriate method.
Before it can be shared or exported, a file system must be associated with a Virtual
Server (EVS), thereby making it available to network clients. The association between a
file system and an EVS is established when the file system is created. Over time, evolving
patterns of use and/or requirements for storage resources may make it desirable to
relocate a file system to a different EVS.
Relocating file systems that contain iSCSI Logical Units (LUs) will interrupt service to
attached initiators, and manual reconfiguration of the IP addresses through which
targets are accessed will be required once the relocation is complete. If relocating a file
system with LUs is required, the following steps must be performed:
■ Disconnect any iSCSI Initiators with connections to LUs on the file system to be
relocated.
■ Unmount the iSCSI LU.
■ Relocate the file system as normal. This procedure is described in detail in the
Replication and Disaster Recovery Administration Guide.
■ Reconnect the new Targets with the iSCSI Initiators. Be aware that the Targets will be
referenced by a new name corresponding to the new EVSs.
Note: All iSCSI LUs on a target must be associated with file systems hosted by
the same EVS.
Procedure
1. Navigate to Home > Storage Management > File Systems.
2. Select a file system and click details to display the File System Details page.
Procedure
1. Navigate to Home > Storage Management > File Systems.
2. Select a file system and click details to display the File System Details page.
3. If a file system displays Not Mounted in the Status column, click mount to try to
mount the file system.
■ If necessary, the automatic recovery processes are invoked and the file
system is mounted successfully.
■ If automatic recovery fails, the file system will not mount, and the File
Systems page will reappear, indicating that the file system was not mounted.
Navigate to the File System Details page.
4. For the file system that failed to mount, click details to display the File System
Details page. In the Settings/Status area of the page, the file system label will be
displayed, along with the reason the file system failed to mount (if known), and
suggested methods to recover the file system, including the link for the Forcefully
mount option.
5. Depending on the configuration of your system, and the reason the file system
failed to mount, you may have several recovery options:
■ If the server is part of a cluster, you may be able to migrate the assigned EVS to
another cluster node, then try to mount the file system. This can become
necessary when another node in the cluster has the current available data in
NVRAM that is necessary to replay write transactions to the file system following
the last checkpoint. An EVS should be migrated to the cluster node that mirrors
the failed node's NVRAM (for more information on NVRAM mirroring, refer to the
System Access Guide. For more details on migrating EVSs, refer to the Server and
Cluster Administration Guide.
■ If the first recovery attempt fails, click the Forcefully mount link. This will execute
a file system recovery without replaying the contents of NVRAM.
Caution: Once you mount a restored file system in normal (read/write) mode,
you cannot restore to a later checkpoint.
Note: You can recover a file system from a snapshot only when at least the
configured number of preserved file system checkpoints have been taken
since that snapshot was taken. For example, if a file system is configured to
preserve 128 checkpoints (the default), then you can recover the file system
from a snapshot only after a minimum of 128 checkpoints have been taken
after the snapshot. If less than the configured number of checkpoints have
been taken since the snapshot, you can either recover from an earlier
snapshot or recover the file system from a checkpoint.
Note: Once you have recovered a file system from a snapshot, and mounted
it in read-write mode, you cannot undo the recovery or recover again to a
different snapshot or checkpoint.
To roll back a file system from a snapshot, use the snapshot-recover-fs command.
Refer to the Command Line Reference for more information about this command.
An additional tool, the kill-snapshots command, is available to discard all current
snapshots (refer to the Command Line Reference for more information about this
command).
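The checkpoint rule in the notes above amounts to a simple eligibility test. An illustrative sketch, with 128 as the default number of preserved checkpoints:

```python
def can_recover_from_snapshot(checkpoints_since_snapshot, preserved=128):
    """A snapshot is usable for recovery only after at least the
    configured number of checkpoints have been taken since it."""
    return checkpoints_since_snapshot >= preserved

print(can_recover_from_snapshot(130))  # True
print(can_recover_from_snapshot(90))   # False: use an earlier snapshot
```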
Note: File systems formatted or expanded in releases that include dedupe support
already have the extra scratch space required for conversion, and the following
procedure is unnecessary.
Before you start the dedupe conversion, use the following procedure to determine if a
file system has sufficient scratch and free space available. It is not an offline procedure.
Procedure
1. Call customer support to obtain the instructions and dev password to execute the
fs-capacity-info dev command.
This command can be run on a mounted file system.
2. Run the command with the name of the file system that is to be checked (for
example, fs-capacity-info f filesystem1).
This command generates the following sample output:
If you receive the following failure message during the dedupe conversion process,
additional scratch space needs to be set up.
You can resolve this issue by expanding the file system. Usually one chunk is all that is
required. Any storage beyond what is required for the scratch space from the chunk will
be made available to the file system. Once this has been done, you can retry the
conversion process.
Procedure
1. Navigate to Home > Storage Management > File Systems > Dedupe File Systems.
The Deduplication page appears.
Field/Item Description
File Systems Displays a list of file systems that match the search criteria.
Used Capacity Displays the total amount of file system space in use, in GiB
or TiB and as a percentage of the total.
Last Run The date and time of the last dedupe run.
Procedure
1. Navigate to Home > Storage Management > File Systems > Dedupe File Systems.
The Deduplication page appears.
2. Enter the search criteria in the Filter section of the page and then click Filter.
3. Under the File System label, fill the check box next to the file system to be enabled.
You can enable one or more file systems at a time.
4. Click enable.
The system immediately starts the dedupe-enable process. The Status column
displays Enabled to reflect this action.
Procedure
1. Navigate to Home > Storage Management > File Systems > Dedupe File Systems.
The Deduplication page appears.
2. Select Not converted file systems from the Show list to display file systems that do
not have dedupe support enabled.
3. Click Filter.
4. The system displays the file systems that need conversion in order to be
dedupe-enabled.
5. Fill in the check box next to the file system to convert. It is recommended that you
convert one file system at a time.
6. Click Convert and read the messages in the dialog that appears.
7. After you have read the messages and are sure that you want to proceed with the
conversion, click OK.
8. Click Active Tasks to view the current conversion status.
After the conversion is done, the file system is dedupe capable and the file system is
now queued for a full dedupe run. The dedupe process will start when the file
system is queued for dedupe. The Status column displays Enabled.
If the status remains Needs Conversion, check the Events page.
Navigate to Home > Status & Monitoring > Event Log. This log reports any
conversion errors. For example, an error may occur if there is not sufficient space
for the file system to be converted or if the user-data-write-policy of the file system
is set to anything other than never-overwrite. See the following CLI man page for
more information:
wfs-user-data-write-policy
Procedure
1. Navigate to Home > Storage Management > Quotas by File System to display the
Quotas by File System page.
Field/Item Description
EVS/File System The name of the selected EVS and file system.
Label
A name may be ‘0’ (if the quota was created for the owner
of the directory at the root of the virtual volume).
Quota Type Type of file system activity. Possible values are User or
Group.
Usage Limit Overall limit set for the total size of all files in the file
system owned by the target of the quota.
File Count Limit Overall limit set for the total number of files in the file
system owned by the target of the quota.
details Displays the File System Quotas Details page for the
selected file system.
Delete All Quotas Deletes all of the current quotas. This option is only visible
if more than one quota is configured.
Refresh cache Clears the NAS Manager cache and repopulates it with
relevant objects. This is different from clicking the browser
refresh button, which picks up any recent updates without
clearing the cache.
Modify Email Contacts Use to add or delete email contacts who are notified when the
file system exceeds its size threshold.
Download Quotas Use to download the quotas (not file system quotas) for
this virtual volume to a .csv file.
Procedure
1. Navigate to Home > Storage Management > Quotas by File System to display the
Quotas by File System page.
2. In the Quotas by File System page, click User Defaults or Group Defaults. User
Defaults creates a user quota for the user; Group Defaults creates a group quota for
the user's domain.
Field/Item Description
EVS/File System The EVS and file system on which the user file system
quota applies.
Virtual Volume name The name given to the virtual volume on creation.
Automatically create quotas for Domain Users Note: This option only displays on the
group file system quota page.
Usage
File Count
Log Quota Events in the managed server's Event log Selecting this check box sets the
default for all users or groups to have quota events logged in the server's event log.
3. For group file system quota defaults, select the Automatically create quotas for
Domain Users check box to allow the creation of default quotas for the group
domain users.
4. Under the Usage and File Count sections, enter the values as appropriate:
a. In the Limit field, enter the limit. Additionally, under Usage, select KiB, MiB, GiB,
or TiB from the list.
b. Select the Hard Limit check box if the space specified in the Limit field cannot
be exceeded.
c. In the Warning field, enter the warning threshold.
d. In the Severe field, enter the severe threshold.
e. Select the Log Quota Events in the managed server's Event Log check box
to set the default for all users or groups to have quota events logged in the
server's event log.
5. Click OK.
Adding a quota
Describes how to allocate storage usage and file count by client.
Procedure
1. Navigate to Home > Storage Management > Quotas by File System to display the
Quotas by File System page.
2. Click add to display the Add File System Quota page.
Field/Item Description
EVS/File System The EVS and file system on which the user file system
quota applies.
Usage
File Count
Log Quota Events in the managed server's Event log Selecting this check box sets the
default for all users or groups to have quota events logged in the server's event log.
4. Under the Usage and File Count sections, enter the values as appropriate:
a. In the Limit field, enter the limit. Additionally, under Usage, select KiB, MiB, GiB,
or TiB from the list.
b. Select the Hard Limit check box if the space specified in the Limit field cannot
be exceeded.
c. In the Warning field, enter the warning threshold.
d. In the Severe field, enter the severe threshold.
e. Select the Log Quota Events in the managed server's Event Log check box
to set the default for all users or groups to have quota events logged in the
server's event log.
5. Click OK.
Procedure
1. Navigate to Home > Storage Management > Quotas by File System to display the
Quotas by File System page.
2. Fill in the check box next to the quota, and click details.
The following table describes the fields on this page:
Field/Item Description
EVS/File System The EVS and file system on which the user file system
quota applies.
Usage
File Count
Log Quota Events in the managed server's Event log Selecting this check box sets the
default for all users or groups to have quota events logged in the server's event log.
Procedure
1. Navigate to Home > Storage Management > Quotas by File System to display the
Quotas by File System page.
2. Fill in the check box next to one or more quotas and then click delete.
Note: Quotas track the number and total size of all files. At specified
thresholds, emails alert the list of contacts associated with the virtual volume
and, optionally, Quota Threshold Exceeded events are logged. Operations that
would take the user or group beyond the configured limit can be disallowed
by setting hard limits.
When Usage and File Count limits are combined, the server will enforce the first quota to
be reached.
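The first-limit-reached behavior can be sketched as follows. This is a hypothetical model for illustration only, not the server's actual implementation; the Quota class and byte values are invented:

```python
# Hypothetical sketch of combined quota enforcement: whichever hard
# limit (usage or file count) would be exceeded first blocks the operation.
from dataclasses import dataclass

@dataclass
class Quota:
    usage_limit: int        # bytes
    file_count_limit: int   # number of files

def operation_allowed(quota, current_usage, current_files, new_file_size):
    """Return True if creating one file of new_file_size bytes stays within both limits."""
    if current_usage + new_file_size > quota.usage_limit:
        return False  # usage limit reached first
    if current_files + 1 > quota.file_count_limit:
        return False  # file-count limit reached first
    return True

q = Quota(usage_limit=1024, file_count_limit=2)
print(operation_allowed(q, current_usage=512, current_files=1, new_file_size=256))  # True
print(operation_allowed(q, current_usage=512, current_files=2, new_file_size=1))    # False
```

Either limit alone is sufficient to refuse the operation, which matches the rule that the first quota reached is the one enforced.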
Quotas can be set for the entire virtual volume, on individual users, and on groups of
users. Default user and group quotas can be defined; in the absence of explicit user or
group quotas, the default quotas apply.
The following caveats apply in measuring the virtual volume status against quota
thresholds:
■ Metadata and snapshot files. Neither file system metadata nor snapshot files count
towards the quota limits.
■ Symbolic link calculation. Files with multiple hard links pointing to them are
included only once in the quota calculation. A symbolic link adds the size of the
symbolic link file to a virtual volume and not the size of the file to which it links.
A file system can contain NFS exports at the root of the file system, for example, /home,
and also virtual volumes for users' home directories, for example, /fred and /joe. When
the automounter on an NFS client mounts one of these home directories, it sends a
request to the server to obtain the list of exports. This is the same request that's used by
the showmount -e command. On receiving this list, it selects the most suitable export to
mount.
If, for example, the automounter has been told to mount /home/fred but the only
suitable export it finds is /home, it first mounts /home, then changes directory to /fred.
Although this is fine for general file system operation, and indeed many mounts are
created this way, it does mean that quota information may not be returned correctly. The
reason is that, because the client has mounted the export /home, the free space reported
for the mount is that of the entire file system, not of the virtual volume /fred.
In order to enable the server to report the correct quota information for the mount, the
client must mount /home/fred directly, rather than mounting /home and then changing
directory to /fred. An obvious way to achieve this is to add an export /home/fred.
However, having to manually add an export at the root of every virtual volume is time
consuming.
Therefore, the server has the facility to report a 'fake' NFS export at the root of each
virtual volume. In the scenario above, with a single export /home at the root of the file
system, the server reports the exports /home, /home/fred and /home/joe. On receiving
this list, the client now sees that the most suitable export is /home/fred, so it mounts
that path directly, therefore ensuring that the correct quota information is returned to
the user fred.
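The automounter's choice of the "most suitable export" amounts to something like picking the longest advertised export path that is a prefix of the requested mount point. A minimal sketch of that selection, assuming the paths from the scenario above:

```python
def best_export(exports, target):
    """Pick the advertised export whose path is the longest prefix of target."""
    def is_prefix(export, path):
        return path == export or path.startswith(export.rstrip("/") + "/")
    candidates = [e for e in exports if is_prefix(e, target)]
    return max(candidates, key=len) if candidates else None

# Without per-virtual-volume exports, only /home is advertised:
print(best_export(["/home"], "/home/fred"))                             # /home
# With the 'fake' exports advertised, the client can mount directly:
print(best_export(["/home", "/home/fred", "/home/joe"], "/home/fred"))  # /home/fred
```

With /home/fred advertised, the client mounts that path directly and the server can report quota-derived free space for the virtual volume rather than for the whole file system.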
Take the following information into account when using this feature:
■ If UDP is used to transport MOUNT requests, only the first 64KiB of NFS export
information is returned. So, TCP is recommended to ensure the complete list of NFS
exports is always returned.
■ The server returns a maximum of 1000 mounts in response to the showmount -d
command.
■ The Linux autofs4 client mounts all available exports, using a separate socket for
each. So, if this feature causes a large number of exports to be advertised, Linux
clients can end up with a large number of mounts. This can cause port exhaustion
and can cause Linux clients to run very slowly.
For further information, see the nfs-export-advertise-for-virtual-volumes CLI
command man page. This command can also be used to refresh the list of exports manually
and to configure the interval at which automatic refreshes of the list occur. The list is always
refreshed immediately when an export or virtual volume is added or removed, but it is
also refreshed in the background at regular intervals to ensure other changes, such as
directory renames, are reflected in the export list.
Procedure
1. Navigate to Home > Storage Management > Virtual Volume & Quotas.
2. Fill in the check box next to the virtual volume to view or modify and then click View
Quotas to display the Quotas page.
Field/Item Description
Virtual Volume Identifies the virtual volume to which these quotas apply:
■ EVS/File System: EVS and file system on which the
virtual volume resides.
■ Virtual Volume Name: Name of the virtual volume.
■ Path: Directory on which the virtual volume has been
created.
Quota Type Type of source of virtual volume activity. Possible values are
User, Group, or Virtual Volume. The Virtual Volume type applies to anyone
initiating activity in the entire virtual volume, and only one
quota with this target type may exist on each virtual
volume.
Usage Limit Overall limit set for the total size of all files in the virtual
volume owned by the target of the quota.
File Count Limit Overall limit set for the total number of files in the virtual
volume owned by the target of the quota.
details Opens the Details page, in which you can view and edit the
configuration of the selected quota.
refresh cache Clears the NAS Manager cache and repopulates the cache
with the relevant objects. (This is different than clicking the
browser refresh button, which picks up any recent updates
without clearing the cache.)
User Defaults Opens the User Quota Defaults page, in which you can set
or change the defaults for users.
Group Defaults Opens the Group Quota Defaults page, in which you can set
or change the defaults for groups.
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas to display
the Quotas page.
2. Click User Defaults to display the User Quota Defaults page.
Field/Item Description
EVS/File System The EVS and file system on which the user quota
applies.
Virtual Volume Name Name of the virtual volume to which a user quota
created using these defaults is assigned. This option
only appears when setting the quotas for a virtual
volume.
Automatically create quotas for domain users This option only appears for group
quotas. It creates quotas for individual domain users.
Usage
File Count
Hard Limit When enabled, the number of files specified in the Limit
field cannot be exceeded.
Log Quota Events in the managed server's Event log Selecting this check box sets the
default for all users or groups to have quota events logged in the server's event log.
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas to display
the Virtual Volumes & Quotas page.
2. Select a virtual volume and click View Quotas to display the Quotas page.
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas to display
the Virtual Volumes & Quotas page.
Field/Item Description
EVS/File System The name of the selected EVS and file system.
details Displays the Virtual Volume page for the selected virtual
volume.
Download All Quotas Downloads a CSV (comma-separated values) file listing all
virtual volumes' configured quotas.
The saved quota information includes: Quota Type,
Created By, Usage, Usage Limit, Usage Hard Limit, Usage
Reset (%), Usage Warning (%), Usage Severe (%), File Count,
File Count Limit, File Count Hard Limit, File Count Reset
(%), File Count Warning (%), and File Count Severe (%).
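A downloaded quota file can be post-processed with any CSV tool. The sketch below parses a hypothetical two-row sample using a subset of the column names listed above; the values are invented, and the real file comes from the Download All Quotas button:

```python
import csv
import io

# Hypothetical sample with a subset of the documented columns.
sample = """Quota Type,Usage,Usage Limit,File Count,File Count Limit
User,10485760,104857600,120,10000
Group,5242880,52428800,45,5000
"""

for row in csv.DictReader(io.StringIO(sample)):
    pct = 100 * int(row["Usage"]) / int(row["Usage Limit"])
    print(f'{row["Quota Type"]}: {pct:.0f}% of usage limit used')
```

Reading with csv.DictReader keys each row by the header line, so a script keeps working if the column order in the export ever differs.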
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas
2. Click add to display the Add Virtual Volume page.
Field/Item Description
EVS/File System The EVS and the file system to which to add this virtual
volume. If the volume will be added to a different EVS/file
system, click change and select an EVS/file system.
Create a CIFS Share or NFS Export with the same name as the virtual volume If a share
or export with the same name as the virtual volume does not exist, selecting this check
box ensures its creation. This is only intended for convenience in accessing the virtual
volume through CIFS or NFS.
Path Directory in the file system that will be the root of the
virtual volume; for example, /company/sales. All
subdirectories of this path will be a part of this volume.
The path to the directory at the root of the virtual volume
must be specified, or selected by browsing the file system.
If the directory does not yet exist, then leaving the box
checked will ensure it is created. It should be noted that if
the system is left to create the directory in this way, the
owner will be designated 'root', and the default quotas for
this virtual volume will be named anonymously.
3. In the Virtual Volume Name field, enter the name. Note the following:
■ The name can be up to 128 characters.
■ Do not use the characters ?*=+[];:/,<>\| in the name.
■ The name A$ is reserved for the Windows Event Viewer, and cannot be used.
4. If a CIFS share of the same name as this virtual volume is required, fill in the Create a
CIFS Share with the same name as the Virtual Volume check box. Selecting this
check box ensures its creation.
5. If an NFS export with the same name as the virtual volume is required, fill in the
Create a NFS Export with the same name as the Virtual Volume check box.
6. If there is a possibility that this new NFS export will overlap an existing export, fill in
the Allow exports to overlap check box.
7. Enter the path to the directory at the root of the virtual volume or click Browse and
navigate to the file system path.
8. If the directory does not yet exist, fill in the Create path if it does not exist check
box to ensure that it is created.
9. Enter each email address in the Email Contacts box, and click add to append it to
the list. Email lists are limited to a maximum of 512 characters.
■ To configure email notification of threshold alerts, designate explicit email
recipients (for example, [email protected]) to receive email notification any
time a defined threshold has been reached.
■ To send email to all affected user accounts when their user quota has been reached,
add an email address beginning with * to the Email Contacts list (for example,
*@example.com).
Note: If no email contacts are specified for the virtual volume, the server
generates events for quota warnings. To generate events in addition to
email alerts, go to the server's command line interface and issue the
command quota-event --on.
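The wildcard behavior described above can be modeled as a simple expansion rule: an entry beginning with * stands for the affected user's own address in that domain, while explicit addresses are notified as-is. A hypothetical sketch (addresses and user names are invented):

```python
def alert_recipients(contacts, affected_user):
    """Expand a virtual volume's contact list for one quota alert.

    Hypothetical model: entries such as '*@example.com' expand to the
    affected user's address in that domain; other entries pass through.
    """
    recipients = []
    for contact in contacts:
        if contact.startswith("*@"):
            # '*@example.com' + user 'fred' -> 'fred@example.com'
            recipients.append(affected_user + contact[1:])
        else:
            recipients.append(contact)
    return recipients

print(alert_recipients(["admin@example.com", "*@example.com"], "fred"))
# ['admin@example.com', 'fred@example.com']
```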
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas to display
the Virtual Volumes & Quotas page.
Field/Item Description
EVS/File System The EVS and the file system to which to add this virtual
volume. If the virtual volume will be added to a different
EVS/file system, click change and select an EVS/file system.
Path Directory in the file system that is the root of the virtual
volume; for example, /company/sales. All subdirectories of
this path will be a part of this volume.
Total Usage Displays the total usage excluding metadata. Use the
fs-analyze-data-usage command to determine how much
metadata exists on the file system. It is not possible to
determine how much metadata exists for an individual
virtual volume.
File Count Displays the file count. For virtual volume quotas, the root
of the virtual volume counts as belonging to the virtual
volume - therefore an empty virtual volume displays a file
count of one.
Procedure
1. Navigate to Home > Storage Management > Virtual Volumes & Quotas.
2. Select one or more virtual volumes.
3. Click delete.
A warning message appears, asking for confirmation of the deletion.
4. Click OK.
Enabling and disabling file system capacity and free space reporting
based on virtual volume quotas
The file system capacity and free space reporting for virtual volume quotas option
supports thin provisioning within a virtual volume. When this option is enabled and a
virtual volume quota is created, capacity/free space counts returned to clients are
derived solely from the virtual volume quota. This affects only those clients that have
mounted an export or share within a virtual volume.
You may want to enable this option when data migration is configured. In this scenario,
the primary file system could ingest more data than it itself has capacity for. You can
define a quota for a virtual volume based on available capacity of migration target(s) and
enable this feature so that the capacity defined by the quota is reported to protocol
clients rather than the primary file system capacity/free space.
Enabling file system capacity and free space reporting based on virtual volume
quotas
■ To enable this option for the virtual volume vivol1 that resides on the file system fs1,
issue the following CLI command:
● fs-space-reporting-based-solely-on-vivol-quota --on fs1 vivol1
Displaying file system capacity and free space reporting based on virtual volume
quotas
■ To get the current setting for the virtual volume vivol1 that resides on the file system
fs1, issue the following CLI command:
● fs-space-reporting-based-solely-on-vivol-quota fs1 vivol1
Disabling file system capacity and free space reporting based on virtual volume
quotas
■ To disable this option for the virtual volume vivol1 that resides on the file system fs1,
issue the following CLI command:
● fs-space-reporting-based-solely-on-vivol-quota --off fs1 vivol1
See the Command Line Reference for more details.
Note: Only NFSv2 and NFSv3 traffic is throttled. All other protocols, including
CIFS and NFSv4, are unaffected by throttling. For further information, refer to
the per-fs-throttle concept CLI man page.
The following table lists the CLI commands to enable, disable, create, delete, and modify
the per-file system throttle feature. For more information on a specific command, refer
to the CLI man page.
Description Command
Procedure
1. Navigate to Home > Storage Management > File Systems to display the File
System page.
2. Click Read Cache.
3. Select a storage pool to contain the read cache, and then click next to display the
Create Read Cache page.
Field/Item Description
Tier 0 Meta-data and Tier 1 User-data Displays the size of the storage pool's
metadata tier (Tier 0) and the user data tier (Tier 1). The size value can be changed
on the File System Details page once the read cache has been created.
Block Size Sets the optimal block size for the file system.
Block Size Sets the optimal block size for the file
system.
Procedure
1. Navigate to Home > File Services > File System Security to display the File
System Security page.
Field/Item Description
EVS Security Context Displays the currently selected EVS security context.
EVS Security Mode Displays current EVS security mode settings, and
allows you to change those settings.
Default File System Security Mode Indicates the default security mode that is in effect
for the entire EVS. Click the Switch Mode link to switch the security mode for the
entire EVS. You can switch between Mixed mode and UNIX mode.
Virtual Volume Lists the virtual volumes found on the file systems
defined by the filter.
The NAS server supports Kerberos to provide authentication, integrity, and privacy when
using NFS v2, v3, and v4. Kerberos provides a mechanism for entities (principals) to
authenticate to each other and securely exchange session keys. The NAS server supports
RPCSEC_GSS using Kerberos v5.
The Kerberos implementation has been updated with the Advanced Encryption Standard
(AES). The Data Encryption Standard (DES) has been deprecated and is insufficiently
secure.
Secure NFS requires configuration of the NFS server's Kerberos principal name and
secret keys. Kerberos-related configuration settings are set up both globally and on a
per-EVS basis. The NFS host name is configured on a per-EVS basis.
The SMB implementation supports the new AES crypto profiles. The supported AES
crypto profiles are:
■ AES256: HMAC-SHA1-96 (the default if AES is supported)
■ AES128: HMAC-SHA1-96. To force AES-128 encryption:
● Configure the DC only: Set msDS-SupportedEncryptionType = 0x8
(AES128_CTS_HMAC_SHA1_96).
● Run klist purge on the client.
Configuration to Support AES with Existing CIFS Names (created on 12.2 or earlier)
■ No NAS server configuration is required for existing CIFS names. AES is automatically
enabled on upgrade to 12.3 or later.
■ However, configuration is required on the DC for existing CIFS names. AES must be
added to the supported encryption types list of existing CIFS names computer
accounts.
Configuration to Support AES with New CIFS Names (created on 12.3 or later)
■ No configuration is required for newly created CIFS names.
■ The system automatically converts file security attributes from Windows to UNIX
format and stores the result in file metadata, making the files native to both SMB and
NFS clients. Although UNIX files are also converted to Windows format, the results are
not stored in file metadata.
■ Any changes that a user makes to a file’s security attributes are applied equally to
Windows and UNIX.
When an SMB user tries to access a file that has UNIX-only security information, the
server maps the user to an NFS name and converts the user’s access token to UNIX
credentials. It then checks these credentials against the file’s security attributes to
determine whether or not the operation is permissible.
Similarly, when an NFS user tries to access a file that has Windows-only security
information, the server maps the user to a Windows name and converts the user’s
UNIX credentials to a Windows access token. It then checks the token against the file’s
security attributes.
Note: With UNIX security mode, NFS users do not need to rely on the
presence of a Windows domain controller (DC) in order to access files. As a
result, they are fully isolated from potential DC failures.
Procedure
1. Navigate to Home > File Services > File System Security to display the File
System Security page.
2. Click details to display the Security Configuration page.
The following table describes the fields on this page:
Field/Item Description
File System Displays the selected virtual volume's parent file system (when
selected from the File System Security page filter options).
Virtual Displays the selected virtual volume whose security mode can be
Volume changed (when selected from the File System Security page filter
options).
Mode The security mode defined on the virtual volume or file system,
depending on the selection from the File System Security page
filter options.
Symbolic links
Symbolic links (symlinks) are commonly used:
■ To aggregate disparate parts of the file system.
■ As a convenience, similar to a shortcut in the Windows environment.
■ To access data outside of a cluster. For example, a symlink can point to data in
another server in a server farm or a non-HNAS server.
There are two types of symlinks:
■ Relative symlinks contain a path relative to the symlink itself. For example, ../dst
is a relative symlink.
■ Absolute symlinks contain a path relative to the root of the file system on the NFS
client that created the symlink (not relative to the root of the server's file system). For
example, /mnt/datadir/dst is an absolute symlink.
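The distinction between the two symlink types can be seen directly with Python's os module: a symlink stores only a text path, which is either relative to the link or starts at the client's root. The link names are invented; the targets are the examples from above:

```python
import os
import tempfile

# Work in a throwaway directory so the links do not clutter anything.
workdir = tempfile.mkdtemp()
os.chdir(workdir)

os.symlink("../dst", "rel_link")            # relative: path stored relative to the link
os.symlink("/mnt/datadir/dst", "abs_link")  # absolute: path from the client's root

# readlink returns the stored text path without following it.
print(os.readlink("rel_link"), os.path.isabs(os.readlink("rel_link")))    # ../dst False
print(os.readlink("abs_link"), os.path.isabs(os.readlink("abs_link")))    # /mnt/datadir/dst True
```

Note that a symlink can be created even if its target does not exist; only accessing the target requires that it resolve and that the caller have permission.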
When accessing the file system through NFS, the server fully supports symlinks.
NFS/UNIX clients assume that files marked as symbolic links contain a text pathname that the
client can read and interpret as an indirect reference to another file or directory. Any
client can follow a symlink, but accessing the target file (or directory) still requires
permission.
Clients using SMB1 cannot follow files marked as symlinks. For these files the server
provides a server-side symlink following capability. When an SMB or FTP client accesses a
server-side symlink, the server reads the path from the link and attempts to follow it
automatically:
■ For relative symlinks, the link can be followed because the server can follow the
path from the link itself.
■ For absolute symlinks, the server does not have access to the root of the file system
on the NFS client that created the link, so it cannot follow the link automatically.
The server provides global symlinks, which allow clients using SMB1 to follow absolute
symlinks:
● If an absolute symlink refers to a file or directory in the same SMB share as the
symlink, the server follows the symlink (on behalf of the SMB client) internally.
● If an absolute symlink refers to an object in a different SMB share to the symlink,
the SMB client is redirected to the link's destination via the Microsoft DFS
mechanism.
The link's destination may be on the same file system as the link, on a different file
system within a server farm, or on a remote SMB server. To associate a global
symlink with an absolute symlink, the server maintains a translation table between
absolute symlink paths and global symlink paths.
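The translation table can be pictured as a mapping from an absolute symlink path to a destination that an SMB client can be referred to. The following is a hypothetical sketch only; the server names, share names, and paths are invented, and the real table lives inside the EVS:

```python
# Hypothetical per-EVS symlink translation table: absolute symlink
# targets map to a (server, share, path) triple that could be handed
# to an SMB client as a DFS-style referral.
translation = {
    "/mnt/datadir/dst": ("fileserver1", "data", r"\datadir\dst"),
}

def resolve_global_symlink(link_target):
    """Return the referral destination for an absolute symlink, or None."""
    return translation.get(link_target)

print(resolve_global_symlink("/mnt/datadir/dst"))  # ('fileserver1', 'data', '\\datadir\\dst')
print(resolve_global_symlink("/not/registered"))   # None
```

An unregistered absolute target yields no referral, which corresponds to the case where the server cannot follow the link on the client's behalf.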
Global symlinks (also called absolute symlinks) start with a slash character (/), and they
allow you to set up links to data outside a cluster. NFS clients follow the global symlink
directly and, for SMB clients, the server maintains a server-side translation table that
allows those clients to access the symlink destination. Both NFS and SMB clients can
follow the same global symlink to the destination directory, provided that the global
symlink, the exports, shares, and mount points are set up correctly. When a client
encounters a global symlink:
■ For NFS clients, the server returns the content of the global symlink, allowing the
client to follow the link to the destination. This means that the NFS client's mount
points and the NFS exports must be set up correctly.
■ For SMB clients, the server causes the client to request a symlink lookup from the
local EVS translation table. Once the client requests the lookup, the server returns the
destination server name, share name, and path to the SMB client, allowing it to access
the destination.
Caution: Symlink Destination Directory Alert! After the SMB client follows the
path for the global symlink, it may not ask the server for another lookup for
that symlink for an extended period of time. Because the symlink is not
looked up every time the client follows the symlink, if the destination
directory is changed or deleted, the SMB client may attempt to connect to the
wrong destination, possibly causing the client to report an error.
Using global symlinks with SMB has a performance penalty. Therefore, global symlinks
are disabled by default, but can be enabled by selecting the Follow Global Symbolic Links
check box on the Add Share page (when creating the share) or CIFS Share Details page
(after the share has been created).
Symlink translation tables are maintained on a per-EVS basis, meaning that:
■ Table entries do migrate with the EVS. If an EVS is migrated, all of its table entries
migrate along with the EVS.
■ Table entries do not replicate from the EVS. When replicating data from one EVS to
another, the mapping information for global symlinks is not automatically relocated,
and it must be recreated in the translation table of the EVS into which the data was
copied.
■ Table entries do not move with a file system. If a file system is moved from one EVS
to another, the mapping information for global symlinks is not automatically
relocated and must be manually adjusted, except for those symlinks that are relative
to a CNS tree (those symlinks do not require adjustment).
■ Table entries are irrelevant for symlinks that are relative to a CNS. When an EVS is
migrated, no adjustment is necessary for symlinks that are relative to a CNS because,
when the client follows the symbolic link, it is first referred to the CNS tree, then from
the CNS tree to a real file system when the path crosses a CNS link.
The storage server supports mixed mode access for file systems, meaning that a
mapping is required between the file system permissions and owners in order to ensure
consistent security and access. NIS/LDAP services allow the server to locate and map
users and permissions based on an existing NIS/LDAP service on the network, instead of
creating a local account on the storage server.
On an existing LDAP service, one of the following methods will typically be used for
allowing the server to locate and map users and permissions:
■ RFC 2307 Schema
RFC 2307 defines a standard convention for the storage and retrieval of user and
group mapping information from an LDAP server. If your site uses RFC 2307 schema,
and you configure your storage server/cluster to support both mixed mode
operations and LDAP services, it is assumed that you have already loaded the RFC
2307 schema into your directory, and that you have already provisioned the user
objects appropriately.
■ Services for UNIX (SFU) schema
If you have configured SFU (Services for UNIX), you must explicitly enable NIS
participation for each account in the active directory (AD) domain. You can enable NIS
participation for an individual account on the UNIX Attributes tab of the user account
properties in the Active Directory Users and Computers utility.
The RFC 2307 or Services for UNIX attributes may or may not be previously indexed on
the LDAP server, depending on the distributor of the directory services.
To track indexing performance, you can use the ldap-stats command, which permits
you to monitor response times for LDAP queries. It is necessary to first let the storage
server complete some successful user lookups so that some statistical data can be
gathered. In a short period of time, however, you should be able to determine whether
any of the attributes are not indexed.
■ Server default configuration (the server treats byte range locks by NFSv4 clients as
mandatory):
Lock type when the client accessing the locked file is:
Locked by client of type    NFSv2 or NFSv3    NFSv4    SMB
■ When the server is configured to treat byte range locks from NFSv4 clients as
advisory:
Lock type when the client accessing the locked file is:
Locked by client of type    NFSv2 or NFSv3    NFSv4    SMB
To change the server configuration so that it treats NFSv4 byte-range locks as advisory,
use the command set nfsv4-locking-is-advisory-only 1.
To change the server configuration so that it treats NFSv4 byte-range locks as mandatory,
use the command set nfsv4-locking-is-advisory-only 0.
Level II oplocks
A Level II oplock is a non-exclusive (read-only/deny-write) file lock that an SMB client may
obtain at the time it opens a file. The server grants the oplock only if all other
applications currently accessing the file also possess Level II oplocks:
■ If another client owns an Exclusive or Batch oplock, the server breaks it and converts it
to a Level II oplock before the new client is granted the oplock.
■ If a client owns a Level II oplock on a file, it can cache part or all of the file locally.
Clients owning the oplock can read file data and attributes from the local cache
without involving the server; the oplock guarantees that no other client may write to
the file.
■ If a client wants to write to a file that has a Level II oplock, the server asks the client that
has the oplock to release it, then allows the second client to perform the write. This
happens regardless of the network protocol that the second client uses.
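The grant-and-break rules above can be summarized in a toy model. This is an illustrative sketch of the semantics described in this section, not the SMB protocol itself; the class and client names are invented:

```python
# Toy model of Level II oplock semantics: a Level II oplock is granted
# only after any Exclusive/Batch oplock is broken down to Level II, and
# a write request breaks every Level II oplock regardless of protocol.
class OplockedFile:
    def __init__(self):
        self.oplocks = {}  # client -> "exclusive" | "level2"

    def open_level2(self, client):
        # Break an Exclusive/Batch oplock down to Level II before granting.
        for holder, kind in self.oplocks.items():
            if kind == "exclusive":
                self.oplocks[holder] = "level2"
        self.oplocks[client] = "level2"

    def write(self, client):
        # Any write breaks all Level II oplocks, whatever protocol the writer uses.
        self.oplocks.clear()
        return "write ok"

f = OplockedFile()
f.oplocks["client1"] = "exclusive"
f.open_level2("client2")
print(f.oplocks)          # {'client1': 'level2', 'client2': 'level2'}
print(f.write("client3")) # write ok
print(f.oplocks)          # {}
```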
Note: When a Windows user creates a file and the UNIX user or group
mapping fails, the server sets the UID or the GID to 0 (root). In previous
releases, the server set the UID or GID to 0 (root) or to 65534 (nobody).
Procedure
1. Navigate to Home > File Services > User Mapping to display the User Mapping
page.
The fields and items on this page are described in the table below:
Field/Item Description
EVS Security Context Displays the currently selected EVS security context.
Click change to select a different EVS security context or
to select the global configuration. Selecting a different
EVS security context changes the EVS to which the
mapping applies.
refresh cache Clears the NAS Manager cache, and then repopulates
the cache with the relevant objects.
2. If necessary, click change to select a different EVS security context or to select the
global configuration.
The EVS Security Context displays the currently selected EVS security context.
Changes made to mappings using this page apply only to the currently selected EVS
security context.
■ If an EVS uses the Global configuration, any changes made to the global
configuration settings will affect the EVS.
■ If an EVS uses an Individual security context, changes made to the global
configuration settings will not affect the EVS. To change the mappings of an EVS
using an individual security context, you must select the EVS' individual security
context to make changes, even if those settings are the same as the settings
used by the global security context.
Procedure
1. Navigate to Home > File Services > User Mapping to display the User Mapping
page.
2. Click add to display the Add User Mapping page.
The following table describes the fields and items on this page:
Field/Item Description
Kerberos Name The Kerberos principal (of the form user@realm) for the user.
Procedure
1. Navigate to Home > File Services > User Mapping.
2. Select the check box next to the user mapping to modify, and then click details to
display the User Mapping Details page.
Field/Item Description
Kerberos Name Displays the Kerberos principal (of the form user@realm) for the user.
Procedure
1. Navigate to Home > File Services > User Mapping to display the User Mapping
page.
2. Select the check box next to the NFSv2/3 name of the user mapping to delete, and
click delete.
3. Click OK to confirm the deletion.
Procedure
1. Navigate to Home > File Services > Group Mappings to display the Group
Mappings page.
2. Click add to display the Add Group Mapping page.
The following table describes the items and fields on this page:
Field/Item Description
Procedure
1. Navigate to Home > File Services > Group Mappings.
2. Select the check box next to the group mapping to modify, and click details to
display the Group Mapping Details page.
The following table describes the items and fields on this page:
Field/Item Description
Save to NAS server Requires that you manually enter the mapping name or ID.
Discover The server uses information discovered from NIS servers, LDAP
servers, or domain controllers for the selected mapping.
Ignore Grays out the name or ID field, and the server does not use this
information.
Procedure
1. Navigate to Home > File Services > Group Mappings.
2. Select the check box next to the group mapping to delete, and click delete.
3. To confirm the deletion, click OK.
About importing user or group mappings from a file or an NIS/LDAP server
You can specify user or group details by importing them from a file.
NFSv4 user and group names are distinct from the UNIX name associated with UNIX
UIDs and GIDs. However, in many environments a user/group's NFSv4 name can be
derived from their UNIX name by appending the NFSv4 domain. The storage server can
perform this conversion automatically, based on the settings specified on the Domain
Mappings page of NAS Manager or through the CLI command domain-mappings-add.
To display the Domain Mappings page, navigate to Home > File Services, select User
Mapping or Group Mapping, and select the View Domain Mapping link. For more
information on the domain-mappings-add command, refer to the Command Line
Reference.
A UNIX /etc/passwd file can be imported, providing the server with a mapping of user
name to UID. The /etc/group file should also be imported to provide the server
with a mapping of group name to GID.
The server ignores other fields from the passwd file, such as the encrypted password
and the user's home directory. Users or groups configured by importing from the
/etc/passwd file will then appear in the appropriate list on the User Mappings page or
the Group Mappings page.
Choose one of the three following formats and use it consistently throughout the file:
■ NFSv2/3 user/group data only. The source of the user data can be a UNIX password
file, such as /etc/passwd.
When using Network Information Service (NIS), use the following command to create
the file:
ypcat passwd > /tmp/x.pwd
The resulting file has the following format:
john:x:544:511:John Brown:/home/john:/bin/bash
keith:x:545:517:Keith Black:/home/keith:/bin/bash
miles:x:546:504:Miles Pink:/home/miles:/bin/bash
carla:x:548:504:Carla Blue:/home/carla:/bin/bash
■ NFSv2/3-to-Windows user/group mappings only. Create a file with entries in the
following format:
UNIXuser="NT User", "NT Domain"
with the following syntax rules:
● NT domain is optional.
● NFS user names cannot contain spaces.
● NT names must be enclosed in quotation marks.
● If the domain name is omitted, the server domain is assumed. If the empty
domain name is required, it must be specified like this:
users="Everyone", ""
where the Everyone user is the only common account with an empty domain
name.
■ Both NFSv2/3 user/group data and NFSv2/3-to-Windows user mappings. Create a file
with entries in the following format:
UNIXuser:UNIXid="NT User", "NT Domain"
with the same rules for NFS and NT names as for the NFSv2/3-to-Windows user
mapping.
The resulting file has entries in the following format:
john:544="john", "Domain1"
keith:545="keith", "Domain1"
miles:546="miles", "Domain1"
carla:548="carla", "Domain1"
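As an illustrative sketch (the NT domain "Domain1" and the path /tmp/x.pwd are placeholders taken from the examples above), a mapping file in the combined format can be generated from a passwd-format file with a one-line awk script:

```shell
# Convert passwd-format lines (name:passwd:UID:GID:gecos:home:shell) into the
# combined NFSv2/3-to-Windows mapping format (user:uid="user", "Domain").
# "Domain1" is a placeholder; substitute the real NT domain name.
awk -F: '{ printf "%s:%s=\"%s\", \"Domain1\"\n", $1, $3, $1 }' /tmp/x.pwd
```

Run against the file produced by ypcat passwd, this emits one mapping line per user, matching the format of the example entries above.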
Procedure
1. Navigate to Home > File Services > User Mapping.
2. Click Import Users to display the Import User Mapping page.
Field/Item Description
Import From NIS/LDAP Import a mapping from the currently used NIS/LDAP server by
clicking Import.
Procedure
1. Navigate to Home > File Services > Group Mappings.
2. Click Import Groups to display the Import Group Mappings page.
Field/Item Description
Import From NIS/LDAP Import a mapping from the currently used NIS/LDAP server by
clicking Import.
Note: Auditing of SMB is based on the open and close operations; because
NFSv3 is a stateless protocol and lacks equivalent operations, auditing checks
must be performed on each I/O operation, which can be costly in terms of
system performance. Therefore, if auditing was enabled for a file system in a
previous release, on upgrade to this release NFSv3 auditing is disabled. The
protocols which cause audit events to be generated can be controlled with
the --audit-protocol (-p) option of the filesystem-audit command.
After a file has been externally migrated (migrated to an external server), for example to
a Hitachi Content Platform (HCP) system, subsequent access to the file through the NAS
server is audited as if the file were still local.
For known users (users with a Windows user mapping), the NAS server logs Object
Access events 560, 562, 563 and 564. As with the Windows operating system, auditable
events for objects are specified by SACLs (system access control lists). Auditing events are
logged under the following conditions:
■ 560 – open handle
This event is logged when a network client asks for access to an object. An access
check is performed against the DACL (discretionary access control list) and an audit
check is performed against the SACL. If the result of the access check matches the
result of the audit check, an audit record is generated.
■ 562 – close handle
This event is logged when an application closes (disposes of) an existing handle, and
is logged in conjunction with event 560.
■ 563 – open handle for delete
This event is logged when a network client asks for access to a file using the CreateFile
call, and the delete-on-close flag is specified. An access check is performed against
the DACL and an audit check is performed against the SACL. If the result of the access
check matches the result of the audit check, an audit record is generated.
For successful deletions, the audit records the accesses that were granted, and for
failures the audit records the accesses that were requested.
■ 564 – delete
This event is logged when an application closes (disposes of) an existing handle, and
is logged in conjunction with event 563.
Note: Events for any user who is a member of the Audit Service Accounts
local group are excluded from the audit log. Adding the third party auditing
software user to this group results in a small but measurable performance
gain.
Note: File System Audit logs are saved in Windows XP format. An effect of this
is that, depending upon how the saved .evt file is opened, a Windows Vista
or Windows 2008 Server event viewer can report the file as corrupted, or
might not be able to fully interpret the events. Note that the same situation
occurs when a Windows Vista event viewer is used to display saved logs from
an XP system. To display the logs correctly, use a Windows XP event viewer.
Audit log files are limited in size, and the retention behavior when a log fills is
configurable. When an audit log reaches its maximum size, log entries (file
system events) can be overwritten, or the full audit log can be saved and a
new log started.
Note: All file system audit log parameters are specified on a per file system
basis.
You can specify a backup policy, which backs up the active log at regular intervals, and
starts a new active log file. Backup log files are created in the same directory as the active
audit log file.
In the event of a server crash, active file system audit logs are recovered only if a rollback
is performed on restart. Note that a rollback may reset the audit log file to a time when it
can be recovered, thus saving some records that would otherwise be lost.
Procedure
1. Navigate to Home > File Services > File System Audit Policies, and click add to
display the Add File System Audit Policy page.
Field/Item Description
EVS/File System Lists the currently selected EVS and file system, to which
the audit policy will apply. Click change to go to the
Select a File System page, where you can select a
different EVS and file system.
Access via Unsupported Protocols When clients attempt to access the file system
through a protocol that does not support auditing (such as NFSv2), this setting
determines if those clients are permitted to access the file system. You can select
either:
■ Deny Access. Client access to the file system using unauditable protocols (such as
NFSv2) is denied.
■ Allow Access. Allows client access to the file system using unauditable protocols
(such as NFSv2), but does not create any auditing events.
Audited Protocols Determines which protocols generate audit events for the file
system. You can select either:
■ smb. Only the SMB protocol is audited. Access to SMB is always allowed, and
access via other protocols is determined via the Other Protocol Support option.
■ smb, nfsv3. Both the SMB and NFSv3 protocols are audited. Access to SMB and
NFSv3 is always allowed, and access via other protocols is determined via the
Other Protocol Support option.
Active Log File Name Specify the file name for the file system audit log. The
file name must have an .evt extension. The default file
name is audit.evt.
Logging Directory Specify the directory within the file system in which the
file system audit log files are saved. You can use the
browse button to search for an existing directory, or
enter the name of a directory to be created.
Maximum Log File Size Specify the maximum size of the active audit log file in
KiB or MiB. The default size is 512 KiB. The maximum
log file size is 50 MiB.
Log roll over policy Determines what the system does once the active audit
log file is full (when it reaches the Maximum Log File
Size). You can select either:
■ Wrap, which causes the system to delete the oldest
existing audit entry to allow room for a new entry.
■ New, which causes the system to create a new active
audit log file. The default is New.
Number of files to retain Specify the number of backup audit log files to retain.
The default is 10. The maximum number of files to retain is 50.
3. Specify the name for the active audit log file. The file type suffix must be .evt.
4. Click browse to specify an existing logging directory, or enter the name of a
directory to create.
For ease of access to the audit log files, the logging directory should be within a
CIFS share that can be accessed by those who need to review the access log.
5. Specify the maximum log file size.
6. Specify the roll over (retention) policy.
7. Specify the backup interval.
8. Specify the number of files to retain.
9. Click OK to save the policy as specified.
Note: Only members of the Administrators local group have the right to edit
the file system audit log policy from within Windows Explorer. A user who is
not a member of the Administrators local group cannot amend the audit
settings of a file or directory.
Procedure
1. Right-click a folder that resides on a server file system that is configured for auditing
and select Properties, and then the Security tab.
2. Click Advanced and select the Auditing tab.
3. Select Add and select which users get audited.
For example, select Everyone so that all users get audited.
4. In the dialog box that opens, specify which events are to be audited for the
specified user.
5. You can choose to audit Successful, Failed, or both for each access type.
Note: By default, when file system auditing is enabled, access to the file
system is limited to the SMB and NFSv3 protocols. Access by clients using
other protocols, like iSCSI, can, however, be allowed. When such access is
allowed, access to file system objects through these protocols is not audited.
To enable file system auditing for a particular file system, the file system must be added
to the file system audit list.
Procedure
1. Navigate to Home > File Services > File System Audit Policies.
Field/Item Description
EVS Lists the EVS to which the file system is assigned. Click
change to go to the Select an EVS page, where you can select a
different EVS.
Audit Log Consolidated Cache The server uses this cache for reporting file system
audit events to Windows clients. Only one consolidated cache file can be
configured per EVS.
modify Enables the user to configure the file system, the directory in which the
file is stored, and the file name for the audit log consolidated cache
file.
File System Lists all file systems in the specified EVS that have an audit
policy.
details Displays the File System Audit Policy Details page, in which
you can change the auditing options for a file system.
add Displays the Add File System Audit Policy page, in which you
can set the auditing options for a file system. Only one audit
policy is allowed per file system.
enable Enables file system auditing for the selected file system.
disable Disables file system auditing for the selected file system.
2. If the file system on which you want to enable auditing is listed, an audit policy has
already been defined for that file system.
■ If the Audit Policy Status is enabled, logging is already enabled for the file
system, and no further actions are required.
■ If the Audit Policy Status is disabled, select the check box next to the file system
name, and click enable.
If the file system on which you want to enable auditing is not displayed, a file
system audit policy may not have been defined for that file system, or the file
system may have an audit policy defined, but the file system is not in the currently
selected EVS.
3. Click change to display the Select an EVS page, in which you can select a different
EVS.
■ If, after selecting the EVS that hosts the file system, the file system on which you
want to enable auditing is now listed on the File System Audit Policies page,
select the check box next to the file system name, and click enable.
■ If, after selecting the EVS that hosts the file system, the file system on which you
want to enable auditing is still not displayed, you must define a file system audit
policy for that file system. Click add to display the Add File System Audit Policy
page, in which you can set the auditing options for a file system.
Procedure
1. Navigate to Home > File Services > File System Audit Policies.
If the file system with the audit policy you want to change is not displayed, change
the currently selected EVS to display the EVS hosting the file system with the audit
policy you want to change. To select a different EVS, click change to go to the Select
an EVS page, in which you can select a different EVS.
2. Click the details button on the file system with the audit policy you want to modify
to display the File System Audit Policy Details page.
Field/Item Description
EVS/File System Lists the currently selected EVS and file system, to which
the audit policy will apply. Click change to go to the
Select a File System page, where you can select a
different EVS and file system.
Access via Unsupported Protocols When clients attempt to access the file system
through a protocol that does not support auditing (such as NFSv2), this setting
determines if those clients are permitted to access the file system. You can select
either:
■ Deny Access. Client access to the file system using unauditable protocols (such
as iSCSI) is denied.
■ Allow Access. Allows client access to the file system using unauditable protocols
(such as iSCSI), but does not create any auditing events.
Audited Protocols Determines which protocols generate audit events for the file
system. You can select either:
■ smb. Only the SMB protocol is audited. Access to SMB is always allowed, and
access via other protocols is determined via the Other Protocol Support option.
■ smb,nfsv3. Both the SMB and NFSv3 protocols are audited. Access to SMB and
NFSv3 is always allowed, and access via other protocols is determined via the
Other Protocol Support option.
Active Log File Name Specify the file name for the file system audit log. The
file name must have an .evt extension. The default file
name is audit.evt.
Logging Directory Specify the directory within the file system in which the
file system audit log files are saved. You can use the
browse button to search for an existing directory, or
enter the name of a directory to be created.
Maximum Log File Size Specify the maximum size of the active audit log file in
KiB or MiB. The default is 512 KiB. The maximum value
is 50 MiB.
Log roll over policy Determines what the system does once the active audit
log file is full (when it reaches the Maximum Log File
Size). You can select either:
■ Wrap, which causes the system to delete the oldest
existing audit entry to allow room for a new entry.
■ New, which causes the system to create a new active
audit log file. The default is New.
Number of files to retain Specify the number of backup audit log files to retain.
The default is 10.
Procedure
1. Navigate to Home > File Services > File System Audit Policies.
If the file system with the audit policy you want to change is not displayed, change
the currently selected EVS to display the EVS hosting the file system with the audit
policy you want to change. To select a different EVS, click change to go to the Select
an EVS page, in which you can select a different EVS.
2. Select the check box next to the name of the file system with the audit policy you
want to enable or disable.
3. Click Enable to allow a disabled policy to function again, or click Disable to stop the
policy from functioning.
When disabled, file system access operations are not logged, and protocol
restrictions are not enforced. Note that disabling a policy does not delete it.
Procedure
1. Navigate to Home > File Services > File System Audit Policies.
If the file system with the audit policy you want to change is not displayed, change
the currently selected EVS to display the EVS hosting the file system with the audit
policy you want to change. To select a different EVS, click change to go to the Select
an EVS page, in which you can select a different EVS.
2. Select the check box next to the name of the file system with the audit policy you
want to delete, and click delete.
Note: Existing log files are not deleted automatically when a policy is
deleted. If you want to delete these logs, you must do so manually.
Note: Only one consolidated cache file can be configured per EVS. Audit
events from all file systems assigned to that EVS are collected into this
single consolidated cache file.
When you create the consolidated cache file, you must specify the name of the file
system in which the file will be stored. The cache file is located in the .audit
directory of the root of the named file system. The default name for the
consolidated cache file is audit_cache.evt (audit log files for individual file
systems have a default name of audit.evt).
NFS: versions 2, 3, and 4
Port Mapper: version 2
Mount: versions 1 and 3
Caution: While it is possible to use UDP with NFS on versions 2 and 3, it is not
recommended due to inherent risks. On the NAS server, UDP is not
automatically presented as a transport option for the NFS service by the Port
Mapper service. To register NFS over UDP in the Port Mapper service, see the
rpc-service-nfs-udp command.
NFS statistics
NFS statistics for the server are available for activity since the previous reboot, or since
the point when the statistics were last reset.
Statistics for NFS requests received by the server are broken down by NFS version and
procedure, and are shown (in 10-second time slices) on the NFS Statistics page (refer to
the Server and Cluster Administration Guide for more information). The statistics can
also be sampled and viewed on-demand using the nfs-stats CLI command.
Unicode support
The storage server (or cluster) stores metadata about the files, directories, migration
paths, CIFS shares, NFS exports, user names, group names, log entries, mount points and
so on for the virtual servers, file systems, and namespaces served by the server/cluster.
When interacting with another network device, the metadata transmitted to or received
by the storage server/cluster must use a character encoding supported by the other
network device. Typically, clients/devices using the SMB/SMB2 protocol (Windows)
encode data in the UCS-2 character encoding, and clients/devices that use the NFS
protocol encode data in the UTF-8 character encoding.
When using the FTP protocol to communicate with clients/devices, the storage server/
cluster supports the UTF-8 character encoding for user names, passwords, and file/
directory names.
You can specify the character encoding to be used when communicating with NFS clients
and/or NIS servers using the protocol-character-set command.
The NFS character set controls:
■ File, directory, and export names to/from NFSv2 and NFSv3 clients.
■ Symlinks to/from NFS clients.
The NIS character set controls:
■ NIS user and group names.
■ LDAP user and group names.
Character set encoding may not be set for:
■ Namespace links.
■ Namespace directories.
■ Communication with NFSv4 clients.
Note: Communication with NFSv4 clients uses only the UTF-8 character set.
Note: When the EUC-KR, EUC-JP, or EUC-CN character encodings are enabled
for NFSv2/NFSv3 clients, there is a performance penalty for operations that
are not handle-based when compared to the UTF-8 or Latin-1 character
encodings.
Procedure
1. Navigate to Home > File Services > Enable File Services to display the Enable File
Services page.
Fields/Item Description
■ CIFS/Windows Select the check box for each service you want to enable.
■ NFS/UNIX
■ FTP
■ iSCSI
■ CNS
■ ReadCache
Notes:
■ With the exception of FTP, all of these services require a valid license.
■ If ReadCache is selected or deselected, a reboot may be required. If
so, follow the on-screen instructions to restart the server.
Kerberos configuration
Note: The Kerberos implementation has been updated with the Advanced
Encryption Standard (AES). The Data Encryption Standard (DES) has been
deprecated and is insufficiently secure. AES prerequisites are:
■ Windows Server 2008 or higher is required to deploy a Microsoft Windows KDC that
supports AES encryption.
■ Configuration may be required on the clients. The configuration of the KDC and
clients may vary depending on their operating systems.
■ The Kerberos principal accounts on the KDC may need to be configured to support
AES.
■ Supported AES encryption types are
● AES256: HMAC-SHA1-96
● AES128: HMAC-SHA1-96
Configuring the server requires the following steps:
Procedure
1. Create the principal and key of the service (the EVS) on the KDC (Key Distribution
Center).
The keytab file must contain the service principal for the NFS service for the EVS.
Once the NFS service principal for the EVS has been added, you can then create a
keytab file specifically for the EVS. The type of key is critical.
■ AES: To enable AES by default, the keytab must contain an AES key. If an
AES-only keytab is imported, DES is disabled, and all clients must be
configured to support AES and have an AES key in their keytabs.
■ DES:
● To use DES, the client must perform the Kerberos authentication with any of
the supported encryption types except AES.
● The server must have a key that corresponds to whatever encryption type the
client used.
■ AES and DES: The keytab must contain
● An AES key, and
● Any other supported encryption-type key (it does not have to be DES), provided
that it is supported by the client as well.
For example, with an EVS named "man" in the Kerberos realm
AESIR.EXAMPLE.COM, the keytab file for the NFS service on "man" should contain a
principal nfs/[email protected]. The format of the
principal starts with the service (nfs), followed by a slash, then the fully qualified
domain name of the EVS, then the symbol @, and finally the Kerberos realm. Note
that case is significant. Kerberos realms are always in uppercase. Also, there must
be no trailing period after the Kerberos realm.
2. Export a keytab file from the KDC.
Typically you will use the kadmin utility run from the master KDC to export a
keytab file. For details on creating an appropriate keytab file, refer to the
documentation for the tools supplied with your version of Kerberos.
3. Import the keytab file into the server.
Transfer the keytab file to the flash of the server.
For example: securely move the keytab file to the NAS Manager and transfer it to
the NAS server. Log on with ssc, and do the following:
SERVER:$ ssput man.nfs.keytab man.nfs.keytab
The first name is the local file name on the NAS Manager, the second name is the
name to use on the server. Once the file has been placed on the server, import the
keytab in the context of the EVS with:
SERVER:$ krb5-keytab import man.nfs.keytab
After the keytab has been imported, the uploaded keytab file can be safely
removed with:
SERVER:$ ssrm man.nfs.keytab
Procedure
1. Navigate to Home > File Services > NFS Exports to display the NFS Exports page.
Field/Item Description
Cluster Name Space or EVS / File System Displays the currently selected name space
or EVS/File System.
■ When Cluster Name Space is displayed, the cluster (global) name space has been
selected.
■ When EVS / File System is displayed, a particular EVS (and optionally a particular
file system) has been selected.
The currently selected name space controls which NFS
exports are displayed on this page.
File System The name of the file system (or CNS link to a file system) to
which the NFS export is assigned.
Path The path and directory to which the NFS export is directed.
details Opens the NFS Export Details page in which you can
display detailed information about the NFS export.
refresh cache Clears the NAS Manager cache, and then repopulates it
with the relevant objects. Note that this is different than
clicking the browser refresh button, which picks up any
recent updates without clearing the cache.
Download Exports Downloads a CSV file containing a list of all configured NFS
exports on the selected EVS and file system. Note that the
downloaded file cannot be used to restore NFS exports (you
must restore NFS exports from an NFS exports backup file).
To download a list of exports from another file system, click
change.
Backup & Restore Displays the NFS Export Backup & Restore page.
Procedure
1. Navigate to Home > File Services > NFS Exports to display the NFS Exports page.
2. Click add to display the Add Export page.
Field/Item Description
EVS/File System Currently selected file system, to which the NFS Export will link.
Cluster Namespace Currently selected cluster namespace, to which the NFS Export
will link.
change / browse (depending on Web browser) Enables the user to select a different
file system or (on a cluster) a different cluster namespace.
Path / CNS Path Path to the source directory for the export. To locate a source
directory for the export, click the browse/change button.
Local Read Cache (file systems only) Allows caching of files or cross file system
links from the file system to which this export points:
■ Cache all files. Allows caching of files and cross file system links in the file
system of the export. Cross file system links are local links that point to a data
file in a remote file system. The remote file system may be on a remote server or
storage device.
■ Cache cross-file system links. Allows only cross file system links to be cached.
■ Do not cache files. Do not allow read caching of files and cross file system links.
Local read caching is not supported for NFSv4 clients.
Transfer to Object Replication Target (file systems only) When a file system is
recovered from a snapshot, one of the final steps is to import the NFS exports
found in the snapshot representing the selected version of the file system. Only
those NFS exports marked as transferable will be imported.
■ Enable: NFS exports will be transferred to recovered file systems.
■ Disable: NFS exports will not be transferred to recovered file systems.
■ Use FS default: When the target file system is brought online, NFS exports will
be transferred if the Transfer Access Points During Object Replication option is
enabled for the file system.
Access Configuration IP addresses, host names, or the NIS netgroups of the clients
who are allowed to access the NFS export (up to 5,957 characters). If the
system has been set up to work with a name server, you can enter the NIS
netgroup to which the clients belong, or the client's computer name rather than
its IP address (not case sensitive).
You can also specify the required flavors of NFS security in a colon-separated
list using the option (sec=<list>).
The supported flavors are:
■ none - Connect as a null user
■ sys - The traditional security flavor used by NFS; users are not
authenticated by the server
■ krb5 - Kerberos authentication
■ krb5i - Kerberos authentication with per-message integrity
■ krb5p - Kerberos authentication with per-message privacy
For example: 10.1.*.*(sec=sys:krb5:krb5i)
See the mount-point-access-configuration man page for
further information.
3. To add an export to a new EVS or file system, click change next to that line and
make a selection from the Select a File System page.
4. Enter the Export Name through which clients will access the export.
5. Type the path to the directory being exported or click browse... to locate an existing
directory.
7. If snapshots are present, make them visible to clients by selecting from the list:
■ Show and Allow Access, to display and allow access to snapshots.
■ Hide and Allow Access, to hide snapshots, but still allow access to the hidden
snapshots.
■ Hide and Disable Access, to hide and disallow access to snapshots.
In order for this change to become effective on NFS clients, all NFS clients should
unmount and then remount the export, or the administrator must run 'touch .'
from within the root directory of the export.
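As a sketch (the path below is a hypothetical stand-in for the real export root on the server), the administrator-side refresh amounts to:

```shell
# Simulate running 'touch .' from the root directory of the export.
# Updating the directory's timestamp prompts NFS clients to revalidate
# their cached view of the directory without a full unmount/remount.
EXPORT_ROOT=/tmp/demo_export_root   # stand-in for the real export root
mkdir -p "$EXPORT_ROOT"
cd "$EXPORT_ROOT"
touch .
```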
8. Select the Local Read Cache setting. To allow caching of files or cross file system
links from the file system to which this export points, select one of the following:
■ Cache all files. Allows caching of files and cross file system links in the file
system of the export. Cross file system links are local links that point to a data
file in a remote file system. The remote file system may be on a remote server or
storage device.
■ Cache cross-file system links. Allows only cross file system links to be cached.
Local read caching is not supported for NFSv4 clients.
10. In the Access Configuration field, type the IP addresses, host names, or the NIS
netgroups of the clients who are allowed to access the NFS export (up to 5,957
characters). If the system has been set up to work with a name server, you can
enter the NIS netgroup to which the clients belong, or the client’s computer name
rather than its IP address (not case sensitive). You can also specify the flavor of NFS
security using the option (sec=<mode>). The table outlines what to type in this field.
■ Specific address or name (examples: 192.0.2.0, client.dept.example.com): Only clients with the specified names or addresses can access the export.
■ A range of addresses using Classless Inter-Domain Routing (CIDR) notation (example: 192.0.2.0/24): Clients with addresses within the range can access the export.
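The address-matching behavior described above can be illustrated with a short Python sketch. This is illustrative only: the function name `client_allowed` is hypothetical, and the server's own matcher also handles host names and NIS netgroups, which are omitted here.

```python
import ipaddress

def client_allowed(client_ip: str, access_entries: list[str]) -> bool:
    """Return True if client_ip matches any entry in the access
    configuration (a single address or a CIDR range). Hypothetical
    helper; host names and netgroups are not handled in this sketch."""
    addr = ipaddress.ip_address(client_ip)
    for entry in access_entries:
        # A bare address such as "192.0.2.0" is treated as a network
        # containing only itself; "192.0.2.0/24" covers the whole range.
        network = ipaddress.ip_network(entry, strict=False)
        if addr in network:
            return True
    return False

print(client_allowed("192.0.2.17", ["192.0.2.0/24"]))  # True
print(client_allowed("192.0.3.1", ["192.0.2.0/24"]))   # False
```

A /24 range admits any host in 192.0.2.0 through 192.0.2.255, which is why CIDR entries are convenient for granting access to an entire subnet.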
Qualifier Description
all_squash, allsquash: Maps all user IDs and group IDs to the anonymous user or group.
Procedure
1. Navigate to Home > File Services > NFS Exports to display the NFS Exports page.
2. Select the check box next to the NFS export to display, and click details to display
the NFS Export Details page.
The following table describes the fields and items on this page:
Field/Item Description
EVS/File System: Currently selected file system, to which the NFS export will link.
Cluster Namespace: Currently selected cluster namespace, to which the NFS export will link.
change / browse (depending on Web browser): Enables the user to select a different file system or (on a cluster) a different cluster namespace.
Path / CNS Path: Path to the source directory for the export. To locate a source directory for the export, click the browse/change button.
Local Read Cache (file systems only): Allows caching of files or cross file system links from the file system to which this export points:
■ Cache all files. Allows caching of files and cross file system links in the file system of the export. Cross file system links are local links that point to a data file in a remote file system. The remote file system may be on a remote server or storage device.
■ Cache cross-file system links. Allows only cross file system links to be cached.
■ Do not cache files. Do not allow read caching of files and cross file system links.
Local read caching is not supported for NFSv4 clients.
Transfer to Object Replication Target (file systems only): When a file system is recovered from a snapshot, one of the final steps is to import the NFS exports found in the snapshot representing the selected version of the file system. Only those NFS exports marked as transferable will be imported.
■ Enable: NFS exports will be transferred to recovered file systems.
■ Disable: NFS exports will not be transferred to recovered file systems.
■ Use FS default: When the target file system is brought online, NFS exports will be transferred if the Transfer Access Points During Object Replication option is enabled for the file system.
Access Configuration: IP addresses, host names, or the NIS netgroups of the clients that are allowed to access the NFS export (up to 5,957 characters). If the system has been set up to work with a name server, you can enter the NIS netgroup to which the clients belong, or the client's computer name rather than its IP address (not case sensitive).
You can also specify the required flavors of NFS security in a colon-separated list using the option (sec=<list>).
The supported flavors are:
■ none - Connect as a null user
■ sys - The traditional security flavor used by NFS; users are not authenticated by the server
■ krb5 - Kerberos authentication
■ krb5i - Kerberos authentication with per-message integrity
■ krb5p - Kerberos authentication with per-message privacy
For example: 10.1.*.*(sec=sys:krb5:krb5i)
See the mount-point-access-configuration man page for further information.
Caution: Export Deletion Alert! Before carrying out the instructions that
follow for deleting an export, verify that it is not currently being accessed. If
an export is deleted while users are accessing it, their NFS sessions will be
terminated and any unsaved data may be lost.
When replacing a storage enclosure, delete all the exports associated with it.
Then, when the replacement enclosure is available, add new exports on the
new system drives.
Procedure
1. Navigate to Home > File Services > NFS Exports to display the NFS Exports page.
2. Select the check box(es) next to the NFS export(s) to delete, and click delete.
3. To confirm the deletion, click OK.
Procedure
1. Navigate to Home > File Services > NFS Exports to display the NFS Exports page.
2. Click Backup & Restore to display the NFS Exports Backup & Restore page.
The server reports only Hard Limit quota information through rquotad. Three different
quota limitations can be defined:
■ User and group quotas to limit space and file quantity for individuals and groups
within a virtual volume.
■ User and group quotas to limit space and file quantity for individuals and groups
within an entire file system.
■ Virtual volume quotas to limit space and file quantity by a virtual volume as a whole.
The rquotad service can be configured to report quota information using one of two
modes:
■ Restrictive mode. For the user or group specified in the client-side quota command,
the rquotad service reports the quota information for the quota with the most
constraints.
■ Matching mode. For the user or group specified in the client-side quota command,
the rquotad service reports the quota information for the first quota that meets the
parameters defined by the client-side quota command.
Note: If the rquotad service is disabled, all requests are rejected with an error
code of "EPERM".
Note: The restrictive mode option returns quota information combined from
the quota that most restricts usage and the quota that most restricts file
count. For example:
If the user quota allowed 10 K of data and 100 files to be added, and the virtual volume
quota allowed 100 K of data and 10 files to be added, rquota would return information
stating that 10 K of data and 10 files could be added. Similarly, if the user quota is 10 K of
data of which 5 K is used, and the virtual volume quota is 100 K of data of which 99 K is
used, rquota would return information stating that 1 K of data could be added.
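The restrictive-mode combination described in the note can be sketched in Python. This is a minimal model of the stated behavior, not the server's implementation; the function name `restrictive_quota` and the dictionary layout are assumptions for illustration.

```python
def restrictive_quota(quotas):
    """Combine quotas as rquotad's restrictive mode is described to do:
    report the smallest remaining space and the smallest remaining file
    count, which may come from two different quotas."""
    space_left = min(q["space_limit"] - q["space_used"] for q in quotas)
    files_left = min(q["file_limit"] - q["file_used"] for q in quotas)
    return space_left, files_left

# The guide's first example: the user quota allows 10 K more data and
# 100 more files; the virtual volume quota allows 100 K and 10 files.
user = {"space_limit": 10, "space_used": 0, "file_limit": 100, "file_used": 0}
vvol = {"space_limit": 100, "space_used": 0, "file_limit": 10, "file_used": 0}
print(restrictive_quota([user, vvol]))  # (10, 10)

# The second example: 10 K user quota with 5 K used, 100 K virtual
# volume quota with 99 K used, so only 1 K of data can be added.
user2 = {"space_limit": 10, "space_used": 5, "file_limit": 100, "file_used": 0}
vvol2 = {"space_limit": 100, "space_used": 99, "file_limit": 10, "file_used": 0}
print(restrictive_quota([user2, vvol2])[0])  # 1
```

The key point the sketch captures is that the reported space and file limits are taken independently, so the answer can mix constraints from different quotas.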
The console command rquota is provided to change between the two options, and also
to disable access to quota information. For information on how to configure rquota,
refer to the Command Line Reference.
Note:
The server does not support the following SMB features:
■ Windows Extended Attributes (note that this should not be confused with
NFS or POSIX xattr).
■ BranchCache.
■ Support for remote management from Server Manager (Windows Server
2012 or later).
■ SMB2 large read/write MTU (NAS Server limited to 64KiB).
■ SMB3 Directory Leasing.
■ SMB Direct (SMB3 over RDMA).
■ Offloaded Data Transfer (ODX).
■ Library storage (for Hyper-V management tools).
Prerequisites
Client
Security Model Authentication Configuration Method
When configured to join an Active Directory, the server functions the same way as a
server added to an NT domain, except that after joining an Active Directory, the server
can authenticate clients using the Kerberos protocol as well as NT 4-style authentication.
Most modern Windows clients support both authentication methods, though a number
of older Windows clients only support NT 4-style authentication.
Supported clients
The server supports platforms and clients that are compliant with SMB versions 1, 2, 2.1,
and 3.
Domain controller interaction
The storage server relies on Windows domain controllers to authenticate users and to
obtain user information (for example, group membership). The server automatically
discovers and connects to the fastest and most reliable domain controllers. Because
operating conditions can change over time, the server selects the best domain controller
every 10 minutes.
By default, when authenticating clients in an Active Directory, the server uses the time
maintained by the domain controller, automatically adjusting for any clock
inconsistencies.
Dynamic DNS
The storage server supports DNS and DDNS. For more information, see the Network
Administration Guide.
SMB (CIFS) Statistics
SMB statistics for the storage server (in 10-second time slices) are available for activity
since the previous reboot or since the point when statistics were last reset.
The maximum supported SMB version advertised by the NAS server can be configured
using the smb-max-supported-version CLI command (see below). The maximum
supported SMB dialect is not server or cluster-wide - it is set on a per-EVS basis.
The NAS server supports UCS-2 character encoding when using the SMB protocols (the
character set is not negotiable when using the SMB2 protocol).
Notes:
■ A valid CIFS license is required in order to enable SMB2 or SMB3 support
(CIFS is a dialect of SMB). For more information about license keys, refer to
the Server and Cluster Administration Guide.
■ One of the features of SMB is the ability to assign rights to machine
(computer) accounts. The feature acts the same way as authentication of a
normal user for an SMB session and can be used for authentication using
machine accounts (SessionSetup SMB requests), and for management
(add, delete, list) of rights for machine accounts. A machine account is
generated automatically by the operating system and registered in Active
Directory. It can be used for authentication within a domain. Machine
account authentication can be done only by an application that has built-
in support. For example, Hyper-V server allows storing virtual machines on
remote shares. Such shares should allow full access for the machine
account of a computer running Hyper-V server. Authenticated connections
using machine accounts appear in "connection" command output as if
they were normal user connections. The man pages for cifs-saa and
cacls-add include an example of computer account use.
Note: SMB2 cannot be enabled if there are NT4 names and no ADS names
configured on the server.
Disabling SMB1
To disable SMB1 on the NAS server, use the following command:
smb-min-supported-version 2
This command sets the minimum SMB version on the NAS server to SMB2, thereby preventing new clients from connecting using SMB1.
Notes:
■ When a client initiates an SMB connection it advertises support for several
versions/dialects. The server will choose the maximum version/dialect the
client provides that is within its configured maximum/minimum. For
example, a client that supports SMB1, SMB2 and SMB2.1 can establish an
SMB2 connection if the max-supported version on the server is set to
SMB2.
■ Some SMB clients cache the connection type they last used with a server.
If they last used SMB2/2.1/3, they may not offer SMB1 as an option until
they are restarted.
■ Existing SMB/SMB2/SMB3 client connections will continue to function after
the minimum supported version has been raised.
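The version selection described in the first note above can be sketched as a simple range intersection. This is an illustrative model only; `negotiate_dialect` is a hypothetical name, and real SMB negotiation (per the protocol specification) works over dialect codes rather than decimal version numbers.

```python
def negotiate_dialect(client_dialects, server_min, server_max):
    """Pick the highest dialect the client offers that falls within the
    server's configured [min, max] range; None means no common dialect,
    i.e. the connection cannot be established."""
    acceptable = [d for d in client_dialects if server_min <= d <= server_max]
    return max(acceptable) if acceptable else None

# The guide's example: a client offering SMB1, SMB2 and SMB2.1
# establishes an SMB2 connection when the server maximum is SMB2.
print(negotiate_dialect([1.0, 2.0, 2.1], server_min=1.0, server_max=2.0))  # 2.0

# With the minimum raised to 2 (SMB1 disabled), an SMB1-only client
# has no acceptable dialect and is refused.
print(negotiate_dialect([1.0], server_min=2.0, server_max=3.0))  # None
```
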
SMB3 Multichannel is automatically enabled if the EVS is configured for version 3 of the
SMB protocol. To set the version, use the smb-max-supported-version 3 command.
CLI commands
All settings for Multichannel are per EVS. Use the following CLI commands to configure
or view the maximum channels per session:
■ smb3-multichannel-max-channels-per-session-set
Sets the maximum number of channels for all subsequent sessions.
● Default: 32 channels
● Minimum: 2 channels
● Maximum: 64 channels
■ smb-multichannel-max-channels-per-session-show
Shows the maximum number of channels per session.
For more information about the CLI commands, see the Command Line Reference.
Caution: SMB3 Encryption can severely impact SMB performance and should
be enabled only where it is necessary.
CLI commands
To use SMB3 Encryption, the cifs-auth command must be set to on.
SMB2 clients cannot connect to a server or share that requires encryption. Use the
following commands together with the smb3-encryption and cifs-share commands
to allow or reject unencrypted access for SMB2 clients:
■ smb3-reject-unencrypted-access-enable
Rejects unencrypted client access to the current EVS.
■ smb3-reject-unencrypted-access-disable
Allows unencrypted client access to the current EVS.
Notes:
■ SMB3 Encryption does not affect SMB1 clients. To prevent access by SMB1
clients, you must turn off the SMB1 server by using the smb-min-
supported-version 2 command.
■ Some Remote Procedure Call (RPC) virus scanners are not compatible with
SMB3 Encryption and will not work with smb3-reject-unencrypted-
access enabled. Check with your virus scanner vendor for information
about compatibility.
For more information about the CLI commands, see the Command Line Reference.
Note: Only accounts that have been created in the domain or in a trusted
domain can access the server.
When a user attempts to access a share, the server verifies appropriate permissions;
once access is granted at this level, standard file and directory access permissions apply.
The server operates on a specific domain and can, optionally, join an Active Directory. It
interacts with a domain controller (DC) in its domain to validate user credentials. The
server supports Kerberos-based authentication to an Active Directory, as well as NTLM
authentication (using pre-Windows 2000 protocols). In addition to users belonging to its
domain, the server allows connections from members of trusted domains.
The server automatically grants administrator privileges to domain administrators who
have been authenticated by the DC. In addition, local administration privileges can be
assigned, including backup operator privileges to selected groups (or users).
Procedure
1. Navigate to File Services > CIFS Setup to display the CIFS Setup page.
Field/Item Description
EVS: Indicates the selected EVS. Click change to select another EVS.
Mode
Domain Name: The name of the NT domain in which the server resides. The domain is set when the first CIFS name is added.
NetBIOS
NetBIOS: When NetBIOS is enabled, it allows NetBIOS and WINS use on this server. If this server communicates by name with computers that use earlier Windows versions, this setting is required. By default, the server is configured to use NetBIOS. Click disable to disable NetBIOS.
Mode: Displays the mode for each CIFS serving name. Mode defines the authentication protocol used to communicate with the Windows network clients and domain controllers. The mode can be:
■ ADS: The ADS-style communication protocol (Kerberos) is used to communicate with the Windows clients and domain controllers.
■ NT4: The Windows NT 4-style communication protocol (NTLMSSP) is used to communicate with the Windows clients and domain controllers.
Disjoint domain: Indicates whether the DNS suffix matches the Active Directory primary DNS suffix.
■ no: There is no disjoint namespace between the DNS and ADS.
■ yes: There is a disjoint namespace between the DNS and ADS.
add: Opens the Add CIFS Server Names page, in which you can add server names.
Reboot or Shut Down Server: Opens the Reboot or Shutdown Server page, which enables you to shut down or reboot a server, a cluster node, or an entire cluster.
File System Security: Opens the File System Security page, which displays all EVSs and the configured security mode.
Procedure
1. Navigate to Home > File Services > CIFS Setup to display the CIFS Setup page.
2. Click add to display the Add CIFS Server Names page.
Field/Item Description
EVS: Displays the name of the EVS to which the new server name is added.
CIFS Server Name: The computer name through which CIFS clients will access file services on the server. In an ADS domain, the maximum number of characters for the CIFS server name is 63. In an NT4 domain, the maximum number of characters for the CIFS server name is 15.
NT4
NT4: Select the NT4 option to indicate that the CIFS server is to be a part of an NT4 domain.
ADS
DNS Suffix: Use this option only if you need to set a DNS suffix other than the Active Directory domain's primary DNS suffix. (For example, set this if you have a disjoint domain.)
3. Enter the name corresponding with the newly created computer account into the
field labeled CIFS Server Names.
■ In an ADS domain, the maximum number of characters for the CIFS server name
is 63.
■ In an NT4 domain, the maximum number of characters for the CIFS server name
is 15.
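The two length limits above can be captured in a small validation sketch. The function name `validate_cifs_name` is hypothetical; it checks only the character-count rules stated in this guide, not the full set of NetBIOS or DNS naming restrictions.

```python
def validate_cifs_name(name: str, domain_type: str) -> bool:
    """Check a CIFS server name against the length limits the guide
    states: up to 63 characters in an ADS domain, up to 15 in an NT4
    domain. Other naming rules (allowed characters, etc.) are omitted."""
    limits = {"ADS": 63, "NT4": 15}
    return 0 < len(name) <= limits[domain_type]

print(validate_cifs_name("fileserver01", "NT4"))             # True (12 chars)
print(validate_cifs_name("a-very-long-server-name", "NT4"))  # False (23 chars)
print(validate_cifs_name("a-very-long-server-name", "ADS"))  # True
```
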
4. If you are adding a server to an NT domain, complete the following steps. If you are
joining an ADS domain, see the next step.
a. Select the NT4 option.
b. Enter the domain name.
c. Click OK to return to the CIFS Server page.
d. To create an NT 4 domain account, run Server Manager from a domain
controller in the NT 4 Domain, and create a new Windows NT Workstation or
Server account using the desired host name.
6. Click OK.
Caution: SMB Name Deletion Alert! At least one SMB name must be
configured on the server to support connections from Windows clients. As a
result, if the last configured SMB name is removed, Windows clients are no
longer able to access the server over SMB.
On the server, the administrator can add users to any of the following local groups:
■ Root: If a user is a member of the local Root group, the user bypasses all security
checks, and can take ownership of any file in the file system.
■ Administrators: If a user is a member of the local Administrators group, the user can
take ownership of any file in the file system.
■ Audit Service Accounts: If a user is a member of the Audit Service Accounts group,
the server does not add any of their events to the audit log. However, the server does
add events to the audit log for any user who is not a member of this group. These
events consist of the Windows file access and deletion events which are recorded by
the server. As an alternative to the NAS Manager, it is possible to use the
localgroup CLI commands to add, remove or display the users for this group.
■ Backup Operators: If a user is a member of the local Backup Operators group, the
user bypasses all security checks, but cannot take ownership of a file in the file
system. The privilege to bypass all security checks in the file system is required for
accounts that run Backup Exec or perform virus scans. Virus scanner servers that are
a part of the Backup Operators group can, however, take ownership of any file in the
file system.
■ Forced Groups: If a user is a member of the local Forced Groups group, when the
user creates a file, the user’s defined primary group is overridden and the user
account will be used to indicate the file creator’s name.
Procedure
1. Navigate to Home > File Services > Local Groups to display the Local Groups
page.
2. If necessary, click Change to select a different EVS security context or to select the
global configuration. Changes made to local groups using this page apply only to
the currently selected EVS security context.
■ If an EVS uses the Global configuration, any changes made to the global
configuration settings will affect the EVS.
■ If an EVS uses an Individual security context, changes made to the global
configuration settings will not affect the EVS. To manage local groups for an EVS
that uses an individual security context, you must select the EVS' individual
security context to make changes, even if those settings are the same as the
settings used by the global security context.
Field/Item Description
Group:
■ Select Use existing local group and then select from the list to add from an existing local group.
■ Select Add new local group and then enter the name to add a new local group.
Members: Enter the member's user name and then click add. To remove a member's user name, click the X button.
Procedure
1. Navigate to Home > File Services > Local Groups to display the Local Groups
page.
2. If necessary, click Change to select a different EVS security context or to select the
global configuration. Changes made to local groups using this page apply only to
the currently selected EVS security context.
Deleting a local group is a two-stage process; you must delete all members of the
group before you can delete the group itself.
3. Delete all members of the group:
a. Fill the check box next to all members of the group you want to delete.
b. Click delete to delete the selected group members.
c. Click OK to confirm the deletion and return to the Local Groups page.
Procedure
1. To add user alice in domain EXAMPLE, with password Alligat0r, enter:
local-password-set EXAMPLE\alice Alligat0r
If preferred, the password may be entered interactively by omitting the password
from the above command and then entering the password when prompted. Note
that interactive entry limits password length to 127 characters.
All three fields are case sensitive, with EXAMPLE\Alice, example\alice and EXAMPLE\alice considered three separate users.
2. To see which users are configured, enter:
local-password-list
All configured users are shown, along with their stored password hashes.
3. To delete a single user, in this case, bob in domain EXAMPLE, enter:
local-password-delete EXAMPLE\bob
4. To irreversibly delete all configured users, enter:
local-password-delete-all --confirm
5. FTP: To use local user authentication with FTP, enable CIFS-based authentication
with:
ftp-cfg --ntsecurity on
SID mappings
The owner (typically the creator) of any file is identified by a Security Identifier (SID)
associated with that user. The local users feature does not automatically create an SID
for each user, so you must assign SIDs.
To assign SIDs:
Procedure
1. To assign SID S-1-81-1 to user EXAMPLE\david, enter:
user-mappings-add --nt-name EXAMPLE\david --nt-id S-1-81-1
If an SID is not assigned to a local user in this way, that user will still be able to
authenticate but will be treated as the Anonymous Logon user. You should ensure
that any SIDs are assigned before a user connects, as any changes will not take
effect while they remain connected.
2. Typically, a user would also have a primary group SID (used to give a group to files
created by that user) and may be a member of one or more additional groups.
These may be configured using the primary-group-set and localgroup
commands, respectively.
$ primary-group-set "Unix user\1234" "Unix group\1234"
$ localgroup add "Backup Operators" "Unix user\521"
No programmatic assistance is provided for allocating SIDs. Instead, it is
recommended that the first local user be given the SID S-1-81-0; the second,
S-1-81-1, and so on. It is recommended that the first locally allocated group be given
the SID S-1-82-0; the second, S-1-82-1, and so on.
3. Use user-mappings-list to see users with SIDs assigned.
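Since no programmatic assistance is provided for allocating SIDs, the recommended sequential scheme can be sketched as follows. The helper `next_sid` is hypothetical; it simply returns the lowest unused RID under a prefix, matching the S-1-81-0, S-1-81-1, ... convention recommended above.

```python
def next_sid(existing, prefix="S-1-81"):
    """Return the next sequential SID under the given prefix, following
    the guide's recommendation: S-1-81-* for local users (starting at
    S-1-81-0) and S-1-82-* for locally allocated groups."""
    used = {
        int(sid.rsplit("-", 1)[1])
        for sid in existing
        if sid.startswith(prefix + "-")
    }
    rid = 0
    while rid in used:  # find the lowest unused relative identifier
        rid += 1
    return f"{prefix}-{rid}"

print(next_sid([]))                              # S-1-81-0
print(next_sid(["S-1-81-0", "S-1-81-1"]))        # S-1-81-2
print(next_sid(["S-1-82-0"], prefix="S-1-82"))   # S-1-82-1
```

Tracking allocations this way avoids accidentally reusing a SID, which matters because the SID (not the user name) identifies file owners.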
Configuration
NTLMv2 authentication for local users using the NTOWF_V2 hash is not supported in
versions prior to 12.3, is supported but is disabled by default in version 12.3, and is
supported and enabled by default in versions 12.4 and later. The NAS server does not
store user passwords in plain text. Instead, user passwords are passed through a variety
of one-way functions to produce hashes; these do not permit retrieval of the originally
entered password.
To perform NTLMv2 authentication using the NTOWF_V2 hash:
Procedure
1. Ensure the appropriate password hashes are available.
The server calculates up to four of these hashes: NTOWF_V1, LMOWF_V1,
NTOWF_V2 and LMOWF_V2, with the set shown by the local-password-list command
dependent on your configuration.
2. The server may also store passwords encoded using a two-way function. While
passwords encoded this way are not stored as plain text, a skilled attacker could
reverse the two-way function to obtain the original input. Consequently, it is
recommended to disable the two-way function in version 12.3 and later. For version
12.3 and later, disable the two-way function using:
set shouldKeepObfuscatedLocalPasswords false
Note that you should only use this command once you are sure you do not wish to
downgrade past version 12.3; software versions prior to this do not support one-
way hashes and rely on the two-way function encoding to authenticate local users.
Disabling the two-way function and downgrading will result in local users being
unable to authenticate unless their passwords are reset.
3. In versions that support it, NTLMv2 authentication for local users using the
NTOWF_V2 hash may be enabled with:
set ntlmV2-authentication-allowed true
This is a cluster-wide setting, is persistent across reboots, and will take immediate
effect. NTLMv2 has long been supported for externally managed users.
Procedure
1. Navigate to Home > File Services > CIFS Shares.
2. Click add to display the Add Share page.
Field/Item Description
EVS/File System: Currently selected file system to which the CIFS share will link.
change / browse (depending on Web browser): Enables the user to select a different file system or (on a cluster) a different cluster namespace.
Path or CNS Path: The directory to which the CIFS share points. Users accessing the share are able to access this directory, and any directories under it in the directory tree. To find a directory, click change / browse.
On a file system only, select the Create path if it does not exist option to create the path if it does not already exist. If the file system is mounted read-only, for example it is an object replication target, it is not possible to create a new directory. Select a path to an existing directory.
Max Users: The maximum number of users who can be associated with the CIFS share. The default is unlimited.
Show Snapshots:
■ Show and Allow Access: Displays and allows access to snapshots.
■ Hide and Allow Access: Hides snapshots, but still allows access to the hidden snapshots.
■ Hide and Disable Access: Hides and disallows access to snapshots.
Changes to this setting become effective when a CIFS client refreshes its folder view.
■ Automatic Local Caching for Documents. The
Automatic mode is applied for all non-executable files on
the entire share. When a user accesses any non-
executable file in this share, it is made available to the
user for offline access. This operation does not
guarantee that a user can access all the non-executable
files, because only those files that have been used at
least once are cached. Automatic can also be defined for
programs.
■ Automatic Local Caching for Programs. The Automatic
mode is applied for all executable files on the entire
share. When a user accesses any executable file in this
share, it is made available to the user for offline access.
This operation does not guarantee that a user can
access all the executable files, because only those
executable files that have been used at least once are
cached. Automatic can also be defined for documents.
■ Local Caching Disabled. No caching of files or folders
occurs.
Transfer to Object Replication Target: When a file system is recovered from a snapshot, one of the final steps is to import the CIFS shares found in the snapshot representing the selected version of the file system. Only those CIFS shares marked as transferable will be imported.
Use the list to specify one of the following:
■ Enable: CIFS shares will be transferred to recovered file
systems.
■ Disable: CIFS shares will not be transferred to recovered
file systems.
■ Use FS default (the default): When the target file system
is brought online, CIFS shares will be transferred if
Transfer Access Points During Object Replication is
enabled for the file system.
Access Configuration: IP addresses of the clients who can access the share (up to 5,957 characters allowed in this field). Refer to IP Address Configuration at the end of this table.
Follow Global Symbolic Links: Enables CIFS clients to follow global (absolute) symlinks via the Microsoft DFS mechanism for this share.
Enable ABE: By default, ABE is disabled for shares and on the server/cluster as a whole. Before enabling ABE for a share, you must make sure ABE is enabled for the server/cluster as a whole (the CLI command to enable ABE support is fsm set disable-ABE-support false).
When enabled, ABE filters the contents of a CIFS share so that only the files and directories to which a user has read access rights are visible to the user.
Enable Virus Scanning: If virus scanning is enabled and configured for the global context or for the EVS hosting the file system pointed to by the share then, when the share is created, virus scanning is enabled by default. If virus scanning is not enabled for the global context or for the EVS hosting the file system pointed to by the share then, when the share is created, virus scanning is not enabled by default, but you can enable it on a per-EVS basis.
This SMB3 option is available only in a clustered
environment of more than one cluster node, and is disabled
by default.
Share Permissions
Field/Item Description
leading \. If this field is left blank, user home directories will
be created directly in the share root.
By default, only one share per file system can be configured
with home directories. The cifs-home-directory command
can be used to relax this restriction, in which case great
care must be taken not to configure conflicting home
directories.
For example, a share with a path of \home1 and a share
with a path of \home2 would not cause a conflict, whatever
home directory paths were configured. However, a share
with a path of \ and a default home directory path would
conflict with a share with a path of \dir and a default
home directory path.
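The conflict rule illustrated above (\home1 and \home2 never conflict, while \ and \dir with default home directory paths do) amounts to checking whether one home-directory root is nested under another. The sketch below models only that nesting check; `home_roots_conflict` is a hypothetical name, and the server's actual validation may consider more than path nesting.

```python
from pathlib import PureWindowsPath

def home_roots_conflict(root_a: str, root_b: str) -> bool:
    """Two home-directory roots conflict when one is the same as, or an
    ancestor of, the other: home directories created under the outer
    root could then collide with the inner share's tree."""
    a, b = PureWindowsPath(root_a), PureWindowsPath(root_b)
    return a == b or a in b.parents or b in a.parents

# Sibling paths never conflict, whatever home directory paths are set.
print(home_roots_conflict(r"\home1", r"\home2"))  # False
# A share rooted at \ overlaps a share rooted at \dir.
print(home_roots_conflict("\\", r"\dir"))         # True
```
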
3. Click change to change the EVS/File System or Cluster Name Space (CNS) in which
the CIFS share will reside.
4. Enter the Share Name. Clients will access the share through this name.
5. Type a comment that is meaningful to you or your users. This comment appears in
Windows Explorer on client computers, and it is optional.
6. Type the Path to the directory being shared. Click browse to help find an existing
directory (this button only exists if the path being created is the path in a file
system, not a name space). To create the path automatically when it does not
already exist, select the Create path if it does not exist check box.
7. To limit the number of users who can access the share simultaneously, enter the
number of users in the Max Users field. By default, a share has unlimited access.
Note: This only limits the number of users that can concurrently access
a share. It does not provide security restrictions.
8. If snapshots are present and you want them to be visible to clients, select the Show
snapshots check box. If snapshots are not taken, or if you don't want clients to view
snapshots, clear this check box.
9. To allow clients to traverse symbolic links, select the Follow Symbolic Links check
box.
10. To enable CIFS clients to follow global (absolute) symlinks via the Microsoft DFS
mechanism for this share, select the Follow Global Symbolic Links check box.
11. To force all characters to be lowercase when files and directories are created, select
the Force Filenames to be Lowercase check box.
12. To disable Virus Scanning for the share, clear the Enable Virus Scanning check box.
The default setting will add this share to the server-wide Virus Scan.
13. To enable ABE (access based enumeration), select the check box.
ABE is disabled by default. When enabled, ABE filters the contents of a CIFS share so
that only the files and directories to which a user has read access rights are visible
to the user.
14. To enable persistent file handles and transparent failover on the share, select the
Ensure Share Continuously Available check box.
15. To alter the caching option (Offline Files Access), select the desired new value from
the Cache Options list.
16. To import the CIFS shares found in the snapshot representing the selected version
of the file system, select the desired new value from the Transfer to Object
Replication Target list. Only those CIFS shares marked as transferable will be
imported.
17. In the Access Configuration field, specify the IP addresses of the clients who can
access the share and the client's permissions for this share. The table outlines what
to type in this field.
Specific address or name (examples: 192.0.2.0,
client.dept.example.com): only clients with the specified names or
addresses can access the export.
A range of addresses using Classless Inter-Domain Routing (CIDR)
notation (example: 192.0.2.0/24): clients with addresses within the range
can access the export.
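As an illustration only (the exact separator and qualifier syntax should be confirmed against the qualifier table later in this section), an Access Configuration value combining both forms with a qualifier might look like:

```
192.0.2.7, client.dept.example.com, 192.0.2.0/24 (read_only)
```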
Procedure
1. Navigate to Home > File Services > CIFS Shares.
2. Select the check box for the share to view or modify, and then click details.
The following table describes the fields on this page:
Field/Item Description
EVS/File System Currently selected file system to which the CIFS share will
link.
change / browse Enables the user to select a different file system or (on a
(depending on Web cluster) a different cluster namespace.
browser)
Path or CNS Path The directory to which the CIFS share points. Users
accessing the share are able to access this directory, and
any directories under it in the directory tree. To find a
directory, click change / browse.
On a file system only, select the Create path if it does not
exist option to create the path if it does not already exist. If
the file system is mounted read-only, for example it is an
object replication target, it is not possible to create a new
directory. Select a path to an existing directory.
Max Users The maximum number of users who can be associated with
the CIFS share. The default is unlimited.
Show Snapshots ■ Show and Allow Access: Displays and allows access to
snapshots.
■ Hide and Allow Access: Hides snapshots, but still allows
access to the hidden snapshots.
■ Hide and Disable Access: Hides and disallows access to
snapshots.
Changes to this setting become effective when a CIFS client
refreshes its folder view.
Transfer to Object When a file system is recovered from a snapshot, one of the
Replication Target final steps is to import the CIFS shares found in the
snapshot representing the selected version of the file
system. Only those CIFS shares marked as transferable will
be imported.
Use the list to specify one of the following:
■ Enable: CIFS shares will be transferred to recovered file
systems.
■ Disable: CIFS shares will not be transferred to recovered
file systems.
■ Use FS default (the default): When the target file system
is brought online, CIFS shares will be transferred if
Transfer Access Points During Object Replication is
enabled for the file system.
Access IP addresses of the clients who can access the share (up to
Configuration 5,957 characters allowed in this field). Refer to IP Address
Configuration** at the end of this table.
Follow Global Enables CIFS clients to follow global (absolute) symlinks via
Symbolic Links the Microsoft DFS mechanism for this share.
Enable ABE By default, ABE is disabled for shares and on the server/
cluster as a whole. Before enabling ABE for a share, you
must make sure ABE is enabled for the server/cluster as a
whole (the CLI command to enable ABE support is fsm set
disable-ABE-support false).
When enabled, ABE filters the contents of a CIFS share so
that only the files and directories to which a user has read
access rights are visible to the user.
Enable Virus Scanning If virus scanning is enabled and configured for the
global context or for the EVS hosting the file system pointed to by the
share, then virus scanning is enabled by default when the share is
created. If virus scanning is not enabled for the global context or for
that EVS, virus scanning is disabled by default when the share is
created, but you can enable it on a per-EVS basis.
Share Permissions
Field/Item Description
Windows domain name, if any, is ignored.) For example,
a user DOMAIN\John Smith would result in a home
directory of john_smith.
■ DomainAndUser. Create the user's home directory by
creating a directory named for the user's Windows
domain name, then converting the user's Windows user
name to lower case and creating a sub-directory by that
name. For example, a user DOMAIN\John Smith would
result in a home directory of domain\john_smith.
■ Unix. Create the user's home directory by converting the
user's UNIX user name to lower case.
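The naming modes above can be sketched as follows. This is a minimal illustration: the mode tokens and the helper name are hypothetical, not the server's actual configuration values, and the space-to-underscore conversion is inferred from the john_smith example in the text.

```python
def home_directory(account: str, mode: str) -> str:
    """Derive a home-directory name from a Windows account.

    account is in DOMAIN\\user form; mode is one of
    "user", "domain_and_user", or "unix" (illustrative tokens).
    """
    domain, _, user = account.rpartition("\\")
    # Lowercase the user name; spaces become underscores
    # (inferred from the DOMAIN\John Smith -> john_smith example).
    user = user.lower().replace(" ", "_")
    if mode == "user":
        return user                          # domain is ignored
    if mode == "domain_and_user":
        return f"{domain.lower()}\\{user}"   # domain directory + user subdirectory
    if mode == "unix":
        return user                          # UNIX name, lowercased
    raise ValueError(f"unknown mode: {mode}")
```

For example, `home_directory("DOMAIN\\John Smith", "domain_and_user")` yields `domain\john_smith`, matching the example in the text.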
Qualifier Description
read_only, Grants the specified client read-only access to the SMB share.
readonly, ro
Note: One of the features of SMB is the ability to assign rights to machine
(computer) accounts. A machine account is generated automatically by the
operating system and registered in Active Directory. It can be used for
authentication within a domain. A machine account authentication can be
only done by an application which has built-in support. For example, Hyper-V
server allows storing virtual machines on remote shares. Such shares should
allow full access for the machine account of a computer running Hyper-V
server.
When configuring access to a share, it is only possible to add users or groups that are:
■ Known to domain controllers, and
■ Seen by the server on the network.
Note: When a user is given access to a share, if the user is also a
member of a group with a different access level, the more permissive level
applies. For example, if a user is given Read access to a share, and that
user also belongs to a group that has Change access to that same share,
the user will have Change access to the share, because Change access is
more permissive than Read access.
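The most-permissive rule in the note can be sketched as a small helper. The level names and their ranking are illustrative, and the sketch deliberately ignores explicit Deny entries:

```python
# Ranking of share access levels, least to most permissive (illustrative).
LEVELS = {"Read": 1, "Change": 2, "Full Control": 3}

def effective_access(user_level: str, group_levels: list[str]) -> str:
    """Return the most permissive level among the user's own grant
    and the grants of the groups the user belongs to."""
    return max([user_level, *group_levels], key=LEVELS.__getitem__)
```

With the example from the note, `effective_access("Read", ["Change"])` returns `"Change"`.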
Procedure
1. Navigate to Home > File Services > CIFS Shares to display the CIFS Shares page.
2. Select the check box next to the share to modify, and then click details.
3. In the Share Permissions area of the CIFS Share Details page, click change.
4. To add a new user or group, follow these steps. To change permissions for an
existing user or group, see the next step.
a. Enter the name for the new user or group in the New User/Group field, and
then click add.
b. Select the new user/group from the list.
c. Select the Allow or Deny check boxes to set the appropriate permissions. If the
Allow check box is selected for full control, the user/group can perform all
actions.
The Home Directories feature simplifies the management of per-user home directories
for larger environments:
■ A per-user network directory is automatically generated when the user initiates an
SMB connection to the EVS.
■ If configured in the AD user profile, a Windows client will automatically map the drive
letter from %HOMEDRIVE% to the network share %HOMESHARE% as a user logs in.
■ These variables can be set automatically from Active Directory, or by a user login
script.
Windows OS can be configured to automatically attach a remote CIFS share as a user’s
home directory when the user logs on. To do this, two environment variables are
configured:
■ %HOMEDRIVE% contains the drive letter to be used for the mapped drive.
■ %HOMESHARE% contains the remote CIFS share to map.
The home directories feature is compatible with name spaces. However, note that home
directories are not supported in a virtual file system.
Creating user home directories in a name space is considered a lazy process. When you
first connect to the share in the name space, no home directory is created. If the user
then browses or changes directory to the link from the name space to the regular file
system, the server uses an SMB DFS referral to redirect them to a hidden share on the
regular file system. When the DFS referral completes, and the user connects to the
regular file system, their home directory is created.
Name space and file system layout example:
cns:
\cnsdir
\link --> Span0FS:
\homes
When the user connects to the cns name space, no home directory is created. However,
if the user later moves into cns:\cnsdir\link, their home directory is created, as that
is the transition into a regular file system.
Procedure
1. Navigate to Home > File Services > CIFS Shares to display the CIFS Shares page.
2. Click Backup & Recovery to display the CIFS Shares Backup & Restore page.
3. To back up: Click backup. In the browser, specify the name and location of the
backup file, and then click OK or Save (the buttons displayed and the method you
use to save the backup file depend on the browser you use).
A backup file name is suggested, but you can customize it. The suggested file name
uses the syntax:
CIFS_SHARES_date_time.txt, where the following example illustrates the appropriate
syntax: CIFS_SHARES_Aug_4_2006_11_09_22_AM.txt
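The suggested file-name syntax can be approximated as follows; the exact formatting (zero-padding and the 12-hour clock) is inferred from the single example above, so treat this as an approximation rather than the server's actual rule:

```python
from datetime import datetime

def suggested_backup_name(now: datetime) -> str:
    """Build a name like CIFS_SHARES_Aug_4_2006_11_09_22_AM.txt."""
    stamp = "{}_{}_{}_{:02d}_{:02d}_{:02d}_{}".format(
        now.strftime("%b"),          # abbreviated month, e.g. Aug
        now.day,                     # day without leading zero
        now.year,
        now.hour % 12 or 12,         # 12-hour clock
        now.minute,
        now.second,
        "AM" if now.hour < 12 else "PM",
    )
    return f"CIFS_SHARES_{stamp}.txt"
```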
When a client connects to an SMB share, the client has to identify and then register with
the witness EVS to receive witness event notifications. If the client cannot identify or
register with the witness EVS, the NAS server cannot provide witness notifications to that
client and standard timeouts will apply.
Note: You can use only CLI commands to configure and manage a witness
EVS. For more information about the CLI commands, and for more CLI
options for the Service Witness Protocol, see the Command Line Reference.
Procedure
1. Set up a share on the service EVS that you want the witness EVS to monitor.
2. Create a witness EVS for the service EVS that contains the share by using the
following command:
evs create [-l <label>] -i [<ipaddr/prefix> | <ipaddr> -m <mask>]
-p <port> [-n <dst-nodeid>][-w <witness-for>]
For example, a witness EVS on cluster node 2 for a service EVS on cluster node 1
would be:
evs create -l WITNESSEVS01 -i 192.0.2.1/24 -p ag1 -n 2 -w EVS01
The witness EVS is created and bound to the service EVS.
3. Configure an ADS CIFS name for the witness EVS:
a. Put the witness EVS in context.
b. Add an ADS CIFS name by using the following command:
cifs-name add -m ads -a <dc ipaddr> <my-ads-name>
c. Enter your user name and password.
4. Map the continuously available share in Windows 8, 8.1, Server 2012, or Server 2012
R2.
Note: When mapping the share, use a fully qualified domain name
(FQDN) instead of the IP address of the service node.
5. To verify that the client is registered to the witness EVS, use the following command:
witness-registration-list-show
Note: For older versions of Windows, the equivalent of this tool is provided
by Server Manager.
Procedure
1. In the Windows interface, from Administrative Services, select Computer
Management; then right-click on Computer Management (Local) to display a
context menu, and select Connect to another computer:
2. Optionally, select the domain from the drop-down Look in field, then highlight a
name or an IP address to use for file services on the server, and click OK.
Do not specify a server administration name or IP address for this purpose.
3. Click Event Viewer to display the server’s event log:
4. On the event log window:
a. Click Shares to list all of the shares. Some or all of the users can be
disconnected from specific shares.
b. Click Sessions to list all users currently connected to the system. Some or all of
the users can be disconnected.
c. Click Open Files to list all the open shared resources. Some or all of the shared
resources can be closed.
Note: The NAS server also supports FXP (inter-server data transfer).
Procedure
1. Navigate to Home > File Services > FTP Configuration to display the FTP
Configuration page.
Field/Item Description
Session Timeout The number of minutes of inactivity after which an FTP
session ends automatically (the timeout). The valid range is from 15
to 35,000 minutes.
2. In the Password Authentication Services area, fill the check box for NT or NIS.
If operating in UNIX or Mixed security mode, both NT and NIS password
authentication are supported. If both services are enabled, the FTP user will be
authenticated against the configured NT domain first. If authentication fails, the
server will attempt to authenticate the user against the configured NIS domain.
3. Enter the Session Timeout value.
The valid range is between 15 and 35,000 minutes (35,000 minutes is about 24 days).
4. Specify whether read-write is allowed for anonymous requests. Fill the ReadOnly
check box to limit anonymous requests to read only.
5. Click apply.
Procedure
1. Navigate to Home > File Services > FTP Users to display the FTP Users page.
Field/Item Description
EVS / File System Label This field displays the EVS and File System where the FTP
users listed on the page have been configured.
Filter The filter button allows you to filter the users based on
user Name or Path.
File System Shows the file system containing the FTP user's initial
directory path.
Path Shows the path to the initial directory for the user after
logging in over FTP.
add Opens the Add FTP User page, allowing you to set up a
new user.
Import Users Opens the Import FTP Users page, allowing you to set
up new users by importing them from a file.
Procedure
1. Navigate to Home > File Services > FTP Users to display the FTP Users page.
2. Click add to display the Add User page.
Field/Item Description
User Name The name with which the user is to log in. To allow
anonymous logins to the mount point, specify the user
name as anonymous or ftp.
Initial Directory for The directory in which the user starts when logging in over
the user FTP. Click browse to navigate to and insert the path.
Path Options The Create path if it does not exist option creates the path
automatically when it does not already exist. If the file
system is mounted read-only, for example it is an object
replication target, it is not possible to create a new directory.
Select a path to an existing directory.
3. Enter the user name. To allow anonymous logins to the mount point, specify the
user name as anonymous or ftp.
The password authentication service that you use determines whether users must
log in with their NT domain name or UNIX user name.
4. In the Initial Directory for the user field, type the path to the directory in which the
user starts when he or she logs in over FTP.
5. To create the path automatically when it does not already exist, select the Create
path if it does not exist check box.
6. Click OK.
Procedure
1. Navigate to Home > File Services > FTP Users.
2. Click Import FTP Users to display the Import FTP Users page.
Field/Item Description
Filename The name of the file to import. Use the Choose File button to select
the file.
3. In the Filename field, enter the file name that contains the user details, or click
Browse to search for the file name.
The user details in the import file have the following syntax:
user_name file_system initial_directory
Each entry must be separated by at least one space. If either the user_name or
initial_directory contains spaces, the entry must be within double-quotes. For
example:
If you cannot be certain that the initial directory exists, you can create it
automatically by specifying the option ENSURE_PATH_EXISTS on a separate line in
the file. For example:
ENSURE_PATH_EXISTS true
carla Sales /Sales/Documents
miles Sales "/Sales/Sales Presentations"
ENSURE_PATH_EXISTS false
john Marketing /Marketing
In the first instance of the ENSURE_PATH_EXISTS option, the true attribute turns on
the option, and it applies to the two following entries until the option is turned off
by the second instance of the option, with the attribute false. The default for the
ENSURE_PATH_EXISTS option is true so that the initial directory is automatically
created.
To insert a comment in the file, precede it with a hash character (#).
4. Click Import.
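A minimal sketch of parsing this import-file format, assuming the quoting, comment, and ENSURE_PATH_EXISTS rules described above (the function name and tuple shape are illustrative):

```python
import shlex

def parse_ftp_users(text: str) -> list[tuple[str, str, str, bool]]:
    """Parse an FTP-user import file into
    (user_name, file_system, initial_directory, ensure_path) tuples.

    Quoted fields may contain spaces; '#' starts a comment;
    ENSURE_PATH_EXISTS true/false applies to the entries that
    follow it (the default is true).
    """
    ensure = True          # ENSURE_PATH_EXISTS defaults to true
    users = []
    for line in text.splitlines():
        fields = shlex.split(line, comments=True)  # handles quotes and '#'
        if not fields:
            continue
        if fields[0] == "ENSURE_PATH_EXISTS":
            ensure = fields[1].lower() == "true"
            continue
        user_name, file_system, initial_directory = fields
        users.append((user_name, file_system, initial_directory, ensure))
    return users
```

Run against the example file above, the entry for miles keeps its quoted directory `/Sales/Sales Presentations` with ensure set to true, while john's entry carries ensure false.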
Procedure
1. Navigate to Home > File Services > FTP Users.
2. Fill the check box next to the user to display or modify, and click details.
The following table describes the fields on this page:
Field/Item Description
File System Displays the file system which owns the FTP user.
Initial Directory for This directory is the location where the user starts after
the user logging in over FTP. You can change the directory by typing
the path to the new directory, or click the browse button to
find the required directory.
Path Options Click the Create path if it does not exist check box to create the
path automatically when it does not already exist. If the file
system is mounted read-only, for example it is an object
replication target, it is not possible to create a new directory.
Select a path to an existing directory.
3. Modify the settings as needed:
■ In the Initial Directory for the user field, you can change the directory by
typing the path to the new directory, or click the browse button to find the
desired directory. This directory is the location where the user starts after
logging in over FTP.
■ In the Path Options box, you can fill the check box Create path if it does not
exist to create the path automatically when it does not already exist.
4. Click OK.
FTP statistics
FTP statistics for the storage server (in 10-second time slices) are available for activity
since the previous reboot or since the point when statistics were last reset.
iSCSI support
To use iSCSI storage on the server, one or more iSCSI logical units (LUs) must be defined. iSCSI
LUs are blocks of SCSI storage that are accessed through iSCSI targets. iSCSI targets can
be found through an iSNS database or through a target portal. After an iSCSI target has
been found, an Initiator running on a Windows server can access the LU as a “local disk”
through its target. Security mechanisms can be used to prevent unauthorized access to
iSCSI targets.
On the server, iSCSI LUs are just regular files residing on a file system. As a result, iSCSI
benefits from file system management functions provided by the server, such as NVRAM
logging, snapshots, and quotas.
The contents of the iSCSI LUs are managed on the Windows server. Where the server
views the LUs as files containing raw data, Windows views each iSCSI target as a logical
disk, and manages it as a file system volume (typically using NTFS). As a result, individual
files inside of the iSCSI LUs can only be accessed from the Windows server. Server
services, such as snapshots, only operate on entire NTFS volumes and not on individual
files.
iSCSI MPIO
iSCSI MPIO (Multi-path Input/Output) uses redundant paths to create logical “paths”
between the client and iSCSI storage. In the event that one or more of these components
fails, causing the path to fail, multi-pathing logic uses an alternate path so that
applications can still access their data.
For example, clients with more than one Ethernet connection can use logical paths to
establish a multi-path connection to an iSCSI target on the server. Redundant paths
mean that iSCSI sessions can continue uninterrupted in the event of the failure of a
particular path. An iSCSI MPIO connection can also be used to load-balance
communication to boost performance.
If you intend to use an offload engine, make sure it is compatible with Microsoft multi-
path and load-balancing.
iSCSI MPIO is supported by Microsoft iSCSI Initiator 2.0.
iSCSI prerequisites
Note: Other iSCSI initiators and/or versions of the initiators listed above may
also work with the server, but have not been tested. Check with your Hitachi
representative for the latest list of supported iSCSI initiators.
Offload engines
The server currently supports the use of the Alacritech SES1001T and SES1001F offload
engines when used with the Microsoft iSCSI initiator version 1.06 or later. Check with
your Hitachi representative for the latest list of supported offload engines.
Configuring iSCSI
In order to configure iSCSI on the server, the following information must be specified:
■ iSNS servers
■ iSCSI LUs
Configuring iSNS
The Internet Storage Name Service (iSNS) is a network database of iSCSI initiators and
targets. If configured, the server can add its list of targets to iSNS, which allows Initiators
to easily find them on the network.
The iSNS server list can be managed through the iSNS Servers NAS Manager page. The
server registers its iSCSI targets with iSNS database when any of the following events
occurs:
■ A first iSNS server is added.
■ An iSCSI target is added or deleted.
■ The iSCSI service is started.
■ The iSCSI domain is changed.
■ A server IP address is added or removed.
Procedure
1. Navigate to Home > File Services > iSNS Servers to display the iSNS Servers page.
Field/Item Description
Note: Customer support recommends that all iSCSI LUs be placed within a
well-known directory, for example /.iscsi/. This provides a single
repository for the LUs in a known location.
Procedure
1. Navigate to Home > File Services > iSCSI Logical Units to display the iSCSI Logical
Units page.
Field/Item Description
EVS/File System Selector for EVS and File System where LUs reside, or where
LUs can be created. To switch to a different EVS/File System,
click change.
Filter Use Filter to display a subset of the iSCSI Logical Units. The
options are:
■ Alias - name of the LU.
■ File System - the file system for the LU.
■ Path - the path of the LU.
■ Used in Target - the target of the LU.
File System:Path The file system and path for the LU.
Size Size of the LU. This value cannot exceed the amount of
available free space on the configured file system.
add Opens the Add iSCSI Logical Unit page where you can
create a new iSCSI LU.
details Opens the iSCSI Logical Unit Details page for the selected
LU.
Procedure
1. Navigate to Home > File Services > iSCSI Logical Units and then click add to
display the Add iSCSI Logical Units page:
Field/Item Description
Path to File The path where the logical unit resides. browse
can be used to assist in finding the desired path of
a predefined Logical Unit.
File Already Exists Choose this option if the file already exists.
Create File Choose this option if the file does not exist.
Create path to file if it does Creates the path specified in the Path to File field.
not exist
Procedure
1. Navigate to Home > File Services > iSCSI Logical Units.
2. Select the check box next to the iSCSI logical unit to modify and then click details.
The following table describes the fields on this page:
Field/Item Description
EVS/File System Displays the EVS and file system hosting the LU.
File System Free The amount of free space available in the file system.
Capacity
Size Size of the LU. This value cannot exceed the amount of
available free space on the configured file system.
iSCSI Targets Opens the iSCSI Targets page, in which you can add,
modify, and delete iSCSI Targets.
Procedure
1. Navigate to Home > File Services > iSCSI Logical Units to display the iSCSI Logical
Units page.
2. Select the check box next to the logical unit to delete and then click delete.
3. Click OK to confirm the deletion.
Caution: If backing up the iSCSI LU from the server, ensure that the iSCSI
initiators are disconnected, or make the backup from a snapshot.
Procedure
1. Disconnect the iSCSI Initiator from the Target.
2. Unmount the iSCSI Logical Unit.
To unmount the iSCSI LU, you can use the following CLI command:
iscsi-lu unmount <name>
Where <name> is the name of the iSCSI LU.
For safety, you should either back up the iSCSI LU to a snapshot or to another
backup device.
Procedure
1. Disconnect the iSCSI Initiator from the Target.
2. Unmount the iSCSI logical unit.
Use the following CLI command: iscsi-lu unmount <name>, where name is the
name of the LU
3. Restore the logical unit from a snapshot or backup.
4. Mount the iSCSI logical unit.
Use the following CLI command: iscsi-lu mount <name>, where name is the
name of the LU.
5. Reconnect to the Target using the iSCSI Initiator.
6. If necessary, rescan disks in Computer Management.
Procedure
1. Navigate to Home > File Services > iSCSI Targets to display the iSCSI Targets page.
Field/Item Description
EVS iSCSI Domain Displays the iSCSI domain, which is the DNS domain
used when creating unique qualified names for
iSCSI targets. To set an iSCSI domain, enter a
domain name and click apply.
iSCSI Logical Units Advances to the iSCSI Logical Units settings page.
Procedure
1. Navigate to Home > File Services > iSCSI Targets to display the iSCSI Targets page.
2. Click add to display the Add iSCSI Target page.
Field/Item Description
iSCSI Domain The DNS domain used when creating the Globally
Unique Name of an iSCSI target.
Available Logical Units The list of Logical Units available to assign an iSCSI
target.
Selected Logical Units The list of Logical Units added to the target.
Logical Unit Number Enter a Logical Unit Number. The number can be any
unique number between 0 and 255.
Specific address or name (examples: 10.168.20.2,
client.dept.company.com): only clients with the specified names or
addresses can access the target. To deny access to a specific host, use
the no_access or noaccess qualifier; for example, 10.1.2.38 (no_access)
denies access to the host with the IP address 10.1.2.38.
4. Click OK.
Procedure
1. Navigate to Home > File Services > iSCSI Targets to display the iSCSI Targets page.
2. Select the check box next to the target and then click details to display the iSCSI
Target Details page.
The following table describes the fields on this page:
Field/Item Description
3. Select an LU from the Available Logical Units list, specify a number (0-255) in the
Logical Unit Number field, and then click the right arrow to move the LU to the
Selected Logical Units list.
Note: You should make sure that the LU is not already assigned to a
target.
4. Click OK.
Procedure
1. Navigate to Home > File Services > iSCSI Targets to display the iSCSI Targets page.
2. Select the check box next to the target to modify and then click details.
The following table describes the fields on this page:
Field/Item Description
EVS iSCSI Domain Displays the iSCSI Domain, which is the DNS domain used
when creating unique qualified names for iSCSI Targets.
Available logical The list of LUs available for assignment to the iSCSI Target.
units This list includes all LUs on the EVS. Some of these LUs may
already be assigned to other targets.
Selected LUN - The list of LUs selected to be part of the iSCSI Target.
LUN Name
Logical Unit The number assigned to the LU (the LUN). Enter a Logical
Number Unit Number in the range of 0-255.
3. The iSCSI Domain, Alias, Available LUs, and Logical Unit Numbers are required.
Optionally, you can specify the Comment, Secret, and/or Access Configuration for
the Target.
Note: Once set, the iSCSI Domain cannot be changed, but it will be
overridden/replaced if you later specify a new iSCSI Target with a
different iSCSI Domain in the same EVS. The most recently specified
iSCSI Domain overrides all previously-specified iSCSI Domains set for all
previously added iSCSI Targets in the EVS.
The following table describes what you can type in the Access Configuration field.
Specific address or name (examples: 10.168.20.2,
client.dept.company.com): only clients with the specified names or
addresses can access the target. To deny access to a specific host, use
the no_access or noaccess qualifier; for example, 10.1.2.38 (no_access)
denies access to the host with the IP address 10.1.2.38.
4. Click OK.
Procedure
1. Navigate to Home > File Services > iSCSI Targets.
2. Select the check box next to the target to remove and then click delete.
3. To confirm the deletion, click OK.
Procedure
1. Navigate to Home > File Services > iSCSI Initiator Authentication.
Field/Item Description
details Click to display the iSCSI Initiator Details page for the
selected initiator.
Check All Click to fill the check box of all initiators in the list.
Clear All Click to empty the check box of all initiators in the list.
Initiator Name Identifies the initiator with a globally unique name. This name
is displayed in the Change Initiator node name dialog of the
Microsoft iSCSI initiator.
Secret The Secret for the Initiator. This is the secret entered in the
CHAP Secret Setup dialog of the iSCSI Initiator. The secret is a
password used to protect the Initiator from unauthorized access.
It should be from 12 to 17 characters in length, but may be
between 1 and 255 characters.
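The stated length limits can be checked with a small helper; the function name and return shape are illustrative, not part of the product:

```python
def check_chap_secret(secret: str) -> tuple[bool, bool]:
    """Return (valid, recommended) for a CHAP secret.

    valid:       1 to 255 characters (the allowed range in the text)
    recommended: 12 to 17 characters (the suggested range in the text)
    """
    valid = 1 <= len(secret) <= 255
    recommended = 12 <= len(secret) <= 17
    return valid, recommended
```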
Procedure
1. Navigate to Home > File Services > iSCSI Initiators.
Field/Item Description
details Click to display the iSCSI Initiator Details page for the
selected initiator.
Check All Click to fill the check box of all initiators in the list.
Clear All Click to empty the check box of all initiators in the list.
Secret The Secret for the Initiator. This is the secret entered in the
CHAP Secret Setup dialog of the iSCSI Initiator. The secret is a
password used to protect the Initiator from unauthorized access.
It should be from 12 to 17 characters in length, but may be
between 1 and 255 characters.
d. Click OK to save the changed secret, or click cancel to return to the iSCSI
Initiator Authentication page.
4. Click OK to save the changed secret, or click cancel to return to the iSCSI Initiator
Authentication page.
5. Click details to display the iSCSI Initiator Details page.
Note:
■ For the latest version of Microsoft iSCSI Software Initiator, visit:
http://www.microsoft.com/.
■ The visible screens depend on the operating system version.
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system:
a. Start the Microsoft iSCSI Initiator.
b. Open the iSCSI Initiator Properties dialog.
c. Select the General tab.
d. Click Secret to display the CHAP Secret Setup dialog.
e. Enter a secret.
In the field, enter the secret which allows the target to authenticate with
initiators when performing mutual CHAP.
f. Click OK to save the secret and return to the General tab of the iSCSI Initiator
Properties dialog.
Note: Microsoft currently only supports creating a Basic Disk on an iSCSI LU.
To ensure data integrity, do not create a dynamic disk. For more information,
refer to the Microsoft iSCSI Initiator User Guide.
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system.
2. Open the iSCSI Initiator Properties dialog.
Note: After the iSNS servers have been added, all available iSCSI targets
that have been registered in iSNS will appear as available targets.
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system.
2. Open the iSCSI Initiator Properties dialog.
3. Select the Discovery tab.
4. In the Target Portals area, click Add to display the Add Target Portal dialog.
5. Enter the file services IP address of the EVS.
6. Click OK to save the IP address and return to the Discovery tab of the iSCSI
Initiator Properties dialog.
7. If necessary, add another target portal.
8. Save your changes.
Verify your settings, then click OK to save the list of target portals or Cancel to
decline.
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system.
2. Open the iSCSI Initiator Properties dialog.
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system.
2. Open the iSCSI Initiator Properties dialog.
3. Select the Targets tab.
4. Look at the Status column for the target.
The Status column for the target should display "Connected."
Procedure
1. Navigate to the iSCSI Initiator Properties on your Windows system.
2. Open the iSCSI Initiator Properties dialog.
3. Select the Targets tab.
4. Select the target with the connection you want to end.
5. Click Details to display the Target Properties dialog.
6. Select the session to terminate.
In the list of sessions, select the identifier for the session you want to end.
Note: When using a storage subsystem, there are commonly used Host Mode
Options (HMOs) and System Option Modes (SOMs) that should be set
correctly. Contact customer support for more information.
Note: It is strongly recommended that you always use thin provisioning with
HDP.
The NAS server reads the real space available in a DP pool. When you create or expand a
file system, the NAS server checks for available space, then pre-allocates the space
needed for that operation. If the DP pool has too little free space for the operation to
complete successfully, the NAS server safely aborts the creation or expansion of the file
system.
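The pre-allocation check described above can be sketched as follows. This is a simplified illustration only; the function and variable names are hypothetical and do not represent the NAS server's actual implementation:

```python
def create_or_expand_file_system(pool_free_bytes, required_bytes):
    """Sketch of the NAS server's HDP free-space check (hypothetical names).

    The server pre-allocates the full space needed for a create or
    expand operation, and safely aborts when the DP pool cannot
    satisfy it.
    """
    if pool_free_bytes < required_bytes:
        # Too little real space in the DP pool: abort safely,
        # leaving the pool unchanged.
        raise RuntimeError("DP pool has insufficient free space; aborting")
    # Pre-allocate the space and report what remains in the pool.
    return pool_free_bytes - required_bytes

# A 10 TiB expansion against a pool with 16 TiB of real free space
# succeeds and leaves 6 TiB of real free space.
TIB = 1 << 40
print(create_or_expand_file_system(16 * TIB, 10 * TIB) // TIB)  # -> 6
```

The point of the check is that the operation either completes with fully backed space or does not start at all; the file system is never left depending on space the pool cannot deliver.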
Every new storage pool should use a single stripeset that resides on a thinly provisioned
HDP pool. This way, storage can be expanded in small increments without loss of
performance, and all I/O will use all DP-Vols (and all their queue depth) and all physical
storage media. For more about queue depth, see the sd-queue-depth man page. To
use a single stripeset, follow the instructions below.
The process is as follows:
1. When provisioning a new NAS server storage pool, use just enough real disk space
to meet your immediate needs for performance and capacity.
2. Place all your parity groups into a single HDP pool, then create DP-Vols whose total
capacity roughly meets your expected needs for the next 18 to 24 months.
3. Create a NAS server storage pool on these DP-Vols, placing all the DP-Vols into a
single stripeset.
Use enough DP-Vols to provide adequate queue depth in the future, after you have
added enough parity groups to match the total capacity of the DP-Vols. Four DP-
Vols is the bare minimum, but eight DP-Vols will provide better performance than
four, and sixteen will be faster than eight. In practice, a storage pool
usually contains an even number of DP-Vols, and the capacity of each DP-Vol is 8
TiB.
Note: If using the CLI span-create command, list all the SDs in the
initial span-create command. Do not run a single span-create
command, then a series of span-expand commands.
Note: When using an application to create a storage pool, specify all the
available SDs when creating the storage pool; do not create a single
storage pool on a subset of the available SDs, then expand that storage
pool onto the rest of the available SDs.
If there are more than 32 available DP-Vols, create the minimum possible number of
NAS server stripesets consistent with making all stripesets identical, even if this
means creating slightly more or slightly fewer DP-Vols than would otherwise have
been created. For example, if you initially estimate that, in two years, you will need
fifty 8 TiB DP-Vols, you should instead create 48 DP-Vols and make two stripesets
of 24 DP-Vols each.
4. To expand the NAS server storage pool beyond the total capacity of the original DP-
Vols, simply add another, identical set of DP-Vols (refer to the span-expand man
page for more information).
Note: Every new storage pool contains one stripeset, and every expansion
(other than by adding storage to the underlying HDP pool) adds a further
stripeset.
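The sizing rule in step 3 (the minimum number of identical stripesets when more than 32 DP-Vols are expected) can be sketched as below. The 32-DP-Vol-per-stripeset limit and the rounding down to an even per-stripeset count are assumptions inferred from the example above, and plan_stripesets is a hypothetical helper, not a NAS server command:

```python
import math

def plan_stripesets(estimated_dp_vols, max_per_stripeset=32):
    """Choose the minimum number of identical stripesets.

    Assumes at most 32 DP-Vols per stripeset and rounds the
    per-stripeset count down to an even number, matching the
    guide's example (about 50 DP-Vols -> 2 stripesets of 24).
    """
    # Fewest stripesets that keeps each one within the limit.
    stripesets = math.ceil(estimated_dp_vols / max_per_stripeset)
    # Identical stripesets: divide the estimate evenly.
    per_stripeset = estimated_dp_vols // stripesets
    if per_stripeset % 2:  # prefer an even DP-Vol count per stripeset
        per_stripeset -= 1
    return stripesets, per_stripeset

# The guide's example: an estimate of fifty 8 TiB DP-Vols becomes
# two identical stripesets of 24 DP-Vols (48 in total).
print(plan_stripesets(50))  # -> (2, 24)
```

Adjusting the total slightly (48 instead of 50 here) is the "slightly more or slightly fewer DP-Vols" trade-off the guide describes: identical stripesets matter more than hitting the capacity estimate exactly.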