Operations Manager 3.8 Administration Guide
NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 4-NETAPP
Documentation comments: [email protected]
Information Web: http://www.netapp.com
Part number: 210-04384_A0
May 2009
Table of Contents | 3
Contents
Copyright information........................19
Trademark information........................21
About this guide........................23
Audience........................23
Terminology conventions in Operations Manager........................23
Command, keyboard, and typographic conventions........................24
Special messages........................25
Discovery process..........................................................................................35
Discovery by the DataFabric Manager server........................35
What host discovery is........................36
Ping methods in host discovery........................36
What host-initiated discovery is........................36
How DataFabric Manager server discovers vFiler units........................37
Discovery of storage systems........................37
Discovery of storage systems and networks........................38
Methods of adding storage systems and networks........................39
Guidelines for changing discovery options........................39
What SNMP is........................41
When to enable SNMP........................42
4 | Operations Manager Administration Guide For Use with DataFabric Manager Server 3.8
SNMP protocols to discover and monitor storage systems........................42
What the Preferred SNMP Version option is........................43
How DataFabric Manager chooses network credentials for discovery........................43
Discovery process using SNMPv1 or SNMPv3........................43
Monitoring process using SNMPv1........................44
Monitoring process using SNMPv3........................44
Setting SNMPv1 or SNMPv3 as the preferred version........................45
Setting SNMPv1 as the only SNMP version........................45
Setting SNMPv1 or SNMPv3 to monitor a storage system........................45
Modifying the network credentials and SNMP settings........................46
Deleting the SNMP settings for the network........................46
Addition of a storage system from an undiscovered network........................46
Diagnosis of SNMP connectivity........................47
Summary of the global group........................61
Who local users are........................61
What domain users are........................67
What Usergroups are........................69
What roles are........................72
What jobs display........................75
Information about the DataFabric Manager MIB........................92
Descriptions of events and their severity types........................93
Viewing events........................94
Managing events........................94
Operations on local configuration change events........................94
Alarm configurations........................95
Configuration guidelines........................95
Creating alarms........................95
Testing alarms........................96
Comments in alarm notifications........................97
Example of alarm notification in e-mail format........................97
Example of alarm notification in script format........................97
Example of alarm notification in trap format........................98
Response to alarms........................98
Deleting alarms........................98
Working with user alerts........................98
What user alerts are........................99
Differences between alarms and user alerts........................99
User alerts configurations........................100
E-mail addresses for alerts........................100
Domains in user quota alerts........................101
What the mailmap file is........................101
Guidelines for editing the mailmap file........................102
How the contents of the user alert are viewed........................102
How the contents of the e-mail alert are changed........................102
What the mailformat file is........................102
Guidelines for editing the mailformat file........................103
Introduction to DataFabric Manager reports........................103
Introduction to report options........................105
Introduction to report catalogs........................105
Different reports in Operations Manager........................105
What performance reports are........................109
Configuring custom reports........................109
Deleting custom reports........................110
Putting data into spreadsheet format........................111
What scheduling report generation is........................111
Methods to schedule a report........................113
What Schedules reports are........................116
What Saved reports are........................117
Data export in DataFabric Manager........................120
How to access the DataFabric Manager data........................121
Where to find the database schema for the views........................121
Two types of data for export........................122
Files and formats for storing exported data........................122
Format for exported DataFabric Manager data........................122
Format for exported Performance Advisor data........................123
Format for last updated timestamp........................123
Creating a new group of hosts........................154
Adding an FSRM path........................154
Adding a schedule........................154
Grouping the FSRM paths........................155
Viewing a report that lists the oldest files........................155
User quotas..................................................................................................157
About quotas........................157
Why you use quotas........................157
Overview of the quota process........................158
Differences between hard and soft quotas........................158
User quota management using Operations Manager........................158
Prerequisites for managing user quotas using Operations Manager........................159
Where to find user quota reports in Operations Manager........................159
Monitor interval for user quotas in Operations Manager........................160
Modification of user quotas in Operations Manager........................160
Prerequisites to edit user quotas in Operations Manager........................160
Editing user quotas using Operations Manager........................161
Configuring user settings using Operations Manager........................161
What user quota thresholds are........................162
What DataFabric Manager user thresholds are........................162
User quota thresholds in Operations Manager........................162
Ways to configure user quota thresholds in Operations Manager........................162
Precedence of user quota thresholds in DataFabric Manager........................163
List of tasks performed on the Host Agent Details page........................172
How storage systems, SAN hosts, and LUNs are grouped........................173
Granting access to storage systems, SAN hosts, and LUNs........................173
Introduction to deleting and undeleting SAN components........................173
Deleting a SAN component........................174
How a deleted SAN component is restored........................174
Where to configure monitoring intervals for SAN components........................174
Dependencies of a Snapshot copy........................192
Thresholds on Snapshot copies........................193
Storage chargeback reports........................193
When is data collected for storage chargeback reports........................194
Determine the current month's and the last month's values for storage chargeback report........................194
Chargeback reports in various formats........................194
The chargeback report options........................195
Specifying storage chargeback options at the global or group level........................196
The storage chargeback increment........................196
Currency display format for storage chargeback........................196
Specification of the annual charge rate for storage chargeback........................197
Specification of the Day of the Month for Billing for storage chargeback........................197
The formatted charge rate for storage chargeback........................198
What deleting storage objects for monitoring is........................198
Reports of deleted storage objects........................198
Undeleting a storage object for monitoring........................199
Console connection through Telnet........................207
Managing active/active configurations with DataFabric Manager........................207
Requirements for using the cluster console in Operations Manager........................208
Accessing the cluster console........................208
What the Takeover tool does........................208
What the Giveback tool does........................209
DataFabric Manager CLI to configure storage systems........................210
Remote configuration of a storage system........................210
Prerequisites for running remote CLI commands from Operations Manager........................211
Running commands on a specific storage system........................211
Running commands on a group of storage systems from Operations Manager........................211
Storage system management using FilerView........................212
What FilerView is........................212
Configuring storage systems by using FilerView........................212
Introduction to MultiStore and vFiler units........................213
Why monitor vFiler units with DataFabric Manager........................213
Requirements for monitoring vFiler units with DataFabric Manager........................213
vFiler unit management tasks........................214
Prerequisites for using Disaster Recovery Manager........................241
Tasks performed by using Disaster Recovery Manager........................242
What a policy is........................242
What a replication policy does........................242
What a failover policy does........................244
Policy management tasks........................244
Connection management........................245
Connection management tasks........................246
What the connection describes........................246
What multipath connections are........................247
Authentication of storage systems........................247
Authentication of discovered and unmanaged storage systems........................247
Addition of a storage system........................248
Modification of NDMP credentials........................248
Deletion of a storage system........................248
Volume or qtree SnapMirror relationship........................248
Decisions to make before adding a new SnapMirror relationship........................249
Addition of a new SnapMirror relationship........................250
Modification of an existing SnapMirror relationship........................250
Modification of the source of a SnapMirror relationship........................250
Reason to manually update a SnapMirror relationship........................250
Termination of a SnapMirror transfer........................251
SnapMirror relationship quiescence........................251
View of quiesced SnapMirror relationships........................251
Resumption of a SnapMirror relationship........................251
Disruption of a SnapMirror relationship........................251
View of a broken SnapMirror relationship........................251
Resynchronization of a broken SnapMirror relationship........................252
Deletion of a broken SnapMirror relationship........................252
What lag thresholds for SnapMirror are........................252
Where to change the lag thresholds........................253
Lag thresholds you can change........................253
Reasons for changing the lag thresholds........................253
What the job status report is........................253
Where to find information about DataFabric Manager commands........................256
What audit logging is........................256
Events audited in DataFabric Manager........................256
Global options for audit log files and their values........................257
Format of events in audit log file........................257
Permissions for accessing the audit log file........................259
What remote platform management interface is........................259
RLM card monitoring in DataFabric Manager........................260
Prerequisites for using the remote platform management interface........................260
Scripts overview........................260
Commands that can be used as part of the script........................261
Package of the script content........................261
What script plug-ins are........................261
What the script plug-in directory is........................262
What the configuration difference checker script is........................263
What backup scripts do........................263
What the DataFabric Manager database backup process is........................263
When to back up data........................264
Where to back up data........................264
Recommendations for disaster recovery........................265
Backup storage and sizing........................265
Limitation of Snapshot-based backups........................265
Access requirements for backup operations........................265
Changing the directory path for archive backups........................266
Starting database backup from Operations Manager........................266
Scheduling database backups from Operations Manager........................267
Specifying backup retention count........................267
Disabling database backup schedules........................267
Listing database backups........................268
Deleting database backups from Operations Manager........................268
Displaying diagnostic information from Operations Manager........................268
Exportability of a backup to a new location........................268
What the restore process is........................269
Restoring the database from the archive-based backup........................269
Restoring the database from the Snapshot copy-based backup........................269
Restoration of the database on different systems........................270
Disaster recovery configurations........................270
Disaster recovery using Protection Manager........................271
Disaster recovery using SnapDrive........................276
High traffic in HBA Port........................287
Import and export of configuration files........................288
How inconsistent configuration states are fixed........................288
Data ONTAP issues impacting protection on vFiler units........................288
List of events and severity types........................291
Report fields and performance counters........................313
Report Fields and Performance Counters for Filer Catalogs........................313
Report Fields and Performance Counters for vFiler Catalogs........................315
Report Fields and Performance Counters for Volume Catalogs........................316
Report Fields and Performance Counters for Qtree Catalogs........................318
Report Fields and Performance Counters for LUN Catalogs........................318
Report Fields and Performance Counters for Aggregate Catalogs........................319
Report Fields and Performance Counters for Disk Catalogs........................320
SAN management.......................................................................................323
Discovery of SAN hosts by DataFabric Manager........................323
SAN management using DataFabric Manager........................324
Prerequisites for SAN management with DataFabric Manager........................324
List of tasks performed for SAN management........................326
List of user interface locations to perform SAN management tasks........................326
Reports for monitoring SANs........................327
Location of SAN reports........................327
DataFabric Manager managed SAN data in spreadsheet format........................329
Where to find information for specific SAN components........................329
Where to view LUN details of SAN components........................329
Tasks performed on the LUN Details page for a SAN host........................329
Information about FCP Target on a SAN host........................330
Information about FCP switch of a SAN host........................331
Access to the FC Switch Details page........................331
Information about FC Switch on a SAN host........................331
Tasks performed on the FC Switch Details page for a SAN host........................331
Information about Host Agent on a SAN host........................332
Accessing the HBA Port Details page for a SAN host........................332
Details on the HBA Port Details page........................333
List of SAN management tasks........................333
LUN management........................333
Initiator group management........................334
FC switch management........................335
DataFabric Manager options........................335
DataFabric Manager options for SAN management........................335
Where to configure monitoring intervals for SAN components........................337
Deleting and undeleting SAN components........................337
Reasons for deleting and undeleting SAN components........................337
Process of deleting SAN components........................338
Process of undeleting SAN components........................338
How SAN components are grouped........................338
Restriction of SAN management access........................338
Access control on groups of SAN components........................339
Glossary........................341
Index........................347
Copyright information
Copyright 1994-2009 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp. The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark information
All applicable trademark attribution is listed here.

NetApp, the Network Appliance logo, the bolt design, NetApp-the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, VFM (Virtual File Manager), and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries.

gFiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries.

The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management; LockVault; NOW; ONTAPI; OpenKey; RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vFiler; Virtual File Manager; VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries.

NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.
Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks. NetApp, Inc. NetCache is certified RealSystem compatible.
Audience on page 23 Terminology conventions in Operations Manager on page 23 Command, keyboard, and typographic conventions on page 24 Special messages on page 25
Audience
Here you can learn whether this guide is right for you, based on your job, knowledge, and experience. This document is for system administrators and others interested in managing and monitoring storage systems with DataFabric Manager. It assumes that you are familiar with the following technologies:
- Data ONTAP operating system software
- The protocols that you use for file sharing or transfers, such as NFS, CIFS, iSCSI, FC, or HTTP
- The client-side operating systems (UNIX or Windows)
24 | Operations Manager Administration Guide For Use with DataFabric Manager Server 3.8

General storage system terminology
Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, or systems. The terms used in Operations Manager reflect one of these common usages. When the term appliance is used in Operations Manager, the information applies to all supported storage systems, NearStore systems, and FAS appliances, and in some cases to Fibre Channel switches. When the term filer is used, it can refer to any supported storage system, including FAS appliances and NearStore systems.
General terms
The term type means pressing one or more keys on the keyboard. The term enter means pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and typing information into it.
Typographic conventions The following table describes typographic conventions used in this document.
Convention: Type of information

Italic font: Words or characters that require special attention. Placeholders for information you must supply. For example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host. Book titles in cross-references.

Monospaced font: Command names, option names, keywords, and daemon names. Information displayed on the system console or other computer monitors. The contents of files.

Bold monospaced font: Words or characters you type. What you type is always shown in lowercase letters, unless you must type it in uppercase letters.
Special messages
This document might contain the following types of messages to alert you to conditions you need to be aware of. Danger notices and caution notices only appear in hardware documentation, where applicable.
Note: A note contains important information that helps you install or operate the system efficiently.

Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.

Caution: A caution notice warns you of conditions or procedures that can cause personal injury that is neither lethal nor extremely hazardous.
Overview of new and changed features on page 27 User interface changes on page 27 New and changed CLI commands on page 29
Host-initiated discovery
This discovery mechanism is based on the DNS SRV record (RFC 2782), in which the DataFabric Manager server details are maintained. Currently, host-initiated discovery is supported only by NetApp Host Agent.

Changes
Withdrawal of support for Solaris platforms: DataFabric Manager 3.8 and later do not support Solaris platforms.
Removal of NetCache support: DataFabric Manager 3.8 and later do not support NetCache-related features.
Terminology changes
All instances of the term "data set" in the Operations Manager Web-based UI are changed to "dataset." The following table describes the change in terminology.
Old terminology    New terminology
Data Set           Dataset
DataSet            Dataset
Changes to Web-based UI pages
The following modifications are made to the Web-based UI pages, based on task flow:
- The Protection Data Transfer category is added to the Report Categories list (Control Center > Reports > All). Backup, Mirror, and Dataset are groups under this category.
- The following new backup reports are added: DP Transfer Backup, Daily; DP Transfer Backup, Individual; DP Transfer Backup, Monthly; DP Transfer Backup, Quarterly; DP Transfer Backup, Weekly; DP Transfer Backup, Yearly
- The following new mirror reports are added: DP Transfer Mirror, Daily; DP Transfer Mirror, Individual; DP Transfer Mirror, Monthly; DP Transfer Mirror, Quarterly; DP Transfer Mirror, Weekly; DP Transfer Mirror, Yearly
- The following new dataset reports are added: DP Transfer Dataset, Daily; DP Transfer Dataset, Monthly; DP Transfer Dataset, Quarterly; DP Transfer Dataset, Weekly; DP Transfer Dataset, Yearly
- The following fields are added to the PrimaryDirectory and SnapmirrorRelationship custom reports: Imported, Orphan, and Redundant
What DataFabric Manager server does on page 31 What a license key is on page 32 Access to Operations Manager on page 32 Information to customize in Operations Manager on page 32 Administrator accounts on the DataFabric Manager server on page 33 Authentication methods on the DataFabric Manager server on page 33
- High Availability (HA) over Veritas Cluster Server (VCS)
- "hosts.equiv" file-based authentication
- APIs over HTTPS do not work for storage systems managed using IPv6 addresses when the option httpd.admin.access is set to a value other than legacy.
- Discovery of storage systems and host agents that exist on a remote network
- Protocols such as RSH and SSH do not support IPv6 link-local addresses to connect to storage systems and host agents.
Note: Link local address works with SNMP and ICMP only.
Related concepts
Authentication with native operating system on page 33 Authentication with LDAP on page 34
Based on the native operating system, the DataFabric Manager application supports the following authentication methods:
- For Windows: local and domain authentication
- For UNIX: local password files, and NIS or NIS+
Note: Ensure that the administrator name you are adding matches the user name specified in the
Discovery process
Discovery is the process that the DataFabric Manager server uses to find storage systems on your organization's network. Discovery is enabled by default; however, you might want to add other networks to the discovery process or to enable discovery on all networks. Depending on your network setup, you might want to disable discovery entirely. You can disable autodiscovery and use manual discovery only if you do not want SNMP network walking. When you install the DataFabric Manager software, the DataFabric Manager server attempts to discover storage systems on the local subnet.
Next topics
Discovery by the DataFabric Manager server on page 35 What host discovery is on page 36 Ping methods in host discovery on page 36 What host-initiated discovery is on page 36 How DataFabric Manager server discovers vFiler units on page 37 Discovery of storage systems on page 37 Discovery of storage systems and networks on page 38 Methods of adding storage systems and networks on page 39 What SNMP is on page 41
Manual addition is secondary to the discovery process. You typically need it only for the storage systems and the networks that you add after the server discovers the infrastructure.
Related information
When you disable this option, the server continues to monitor the discovered vFiler units. However, when the server discovers a vFiler unit, it does not add the network to which the vFiler unit belongs to its list of networks on which it runs host discovery. In addition, when you delete a network, the server continues to monitor the vFiler units present in that network. The server monitors the hosting storage system once every hour to discover new vFiler units that you configured on the storage system. The server deletes from the database the vFiler units that you destroyed on the storage system. You can change the default monitoring interval in Setup menu > Options > Monitoring options, or by using the following CLI command:
dfm option set vFilerMonInterval=1hour

Related tasks
Changing password for storage systems in DataFabric Manager on page 131 Changing passwords on multiple storage systems on page 132
Stage 2: If the SNMP GET request is successful, DataFabric Manager adds the discovered storage systems to its database and continues to Stage 4. If the storage system is a hosting storage system on which vFiler units are configured, DataFabric Manager also discovers those vFiler units.
Note: vFiler units are discovered only after you set the credentials for the hosting storage system.
Stage 3: DataFabric Manager repeats Stages 1 through 2 until it has sent queries to all the networks in its database. The minimum interval for repeating the cycle is set by the Discovery Interval option (the default is every 15 minutes) and the Discovery Timeout option (the default is 2 seconds). The actual interval depends on the number of networks to scan and their size.
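The interplay between the Discovery Interval and the Discovery Timeout can be sketched as follows. This is an illustrative calculation only, not DataFabric Manager source code; the function name and the worst-case timing model are assumptions.

```python
# Hypothetical sketch (not DataFabric Manager source): estimates how long one
# discovery pass can take, which is why the actual repeat interval may exceed
# the configured Discovery Interval on large networks.

def next_cycle_delay(num_networks, avg_hosts_per_network,
                     discovery_interval_s=15 * 60, discovery_timeout_s=2):
    """Return the delay, in seconds, before the next discovery pass starts.

    Worst case, every host times out, so one pass costs roughly
    num_networks * avg_hosts_per_network * discovery_timeout_s seconds.
    The next pass never starts sooner than discovery_interval_s.
    """
    worst_case_scan_s = num_networks * avg_hosts_per_network * discovery_timeout_s
    return max(discovery_interval_s, worst_case_scan_s)

# A small site finishes well inside the 15-minute minimum:
print(next_cycle_delay(num_networks=2, avg_hosts_per_network=50))    # 900
# A large site stretches the effective interval:
print(next_cycle_delay(num_networks=40, avg_hosts_per_network=200))  # 16000
```

This is why choosing a longer interval or a longer timeout trades discovery latency against network load.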
Stage 3: DataFabric Manager issues another SNMP GET request to routers that responded to the first SNMP request. The purpose of the request is to gather information about other networks to which these routers might be attached.
Stage 4: When DataFabric Manager receives replies, if it finds networks that are not included in its database, it adds the new networks to its database.
Stage 5: DataFabric Manager selects another network from its database and issues an SNMP GET request to all hosts on that network.
Stage 6: DataFabric Manager repeats Stages 2 through 5 until it has sent SNMP queries to all the networks in its database. By default, the minimum interval for repeating the network discovery cycle is set at every 15 minutes.
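Stages 3 through 6 amount to a breadth-first expansion of the network database. The following is an illustrative sketch of that idea only, not DataFabric Manager code; probe_router() is a hypothetical stand-in for the SNMP GET that asks a router about its attached networks, and the hop limit mirrors the Network Discovery Limit option (default 15).

```python
# Illustrative sketch only: a breadth-first walk like the one the stages
# above describe. probe_router() is a placeholder, not a real DFM API.

from collections import deque

def discover_networks(seed_network, probe_router, hop_limit=15):
    """Expand the network database outward from the local subnet.

    Networks more than hop_limit hops from the server are never probed,
    matching the Network Discovery Limit option (default 15).
    """
    known = {seed_network}
    queue = deque([(seed_network, 0)])
    while queue:
        network, hops = queue.popleft()
        if hops >= hop_limit:
            continue
        for neighbor in probe_router(network):
            if neighbor not in known:       # new network: add to database
                known.add(neighbor)
                queue.append((neighbor, hops + 1))
    return known

# Toy topology: each network's router reports the next one in a chain.
topology = {"net0": ["net1"], "net1": ["net2"], "net2": []}
print(sorted(discover_networks("net0", lambda n: topology.get(n, []))))
# ['net0', 'net1', 'net2']
```

Lowering the hop limit simply prunes this walk earlier, which is why a smaller limit saves probing cycles on distant networks.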
The number of networks and their size determines the actual interval. If you choose a longer interval, there might be a delay in discovering new storage systems, but the discovery process is less likely to affect the network load.

Discovery Timeout (2 seconds)
This option specifies the time interval after which DataFabric Manager considers a discovery query to have failed. Change the default value if you want to lengthen the time before considering a discovery query to have failed (for example, to avoid discovery queries failing on a local area network because of long storage system response times).

Host Discovery
This option enables the discovery of storage systems through SNMP. Change the default value if any of the following situations exist:
- All storage systems that you expected DataFabric Manager to discover have been discovered, and you do not want DataFabric Manager to keep scanning for new storage systems.
- You want to manually add storage systems to the DataFabric Manager database. Manually adding storage systems is faster than discovering storage systems in the following cases:
  - You want DataFabric Manager to manage a small number of storage systems.
  - You want to add a single new storage system to the DataFabric Manager database.
Network Discovery
This option enables the discovery of networks. Change the default value if you want the DataFabric Manager server to automatically discover storage systems on your entire network.
Note: When the Network Discovery option is enabled, the list of networks on the Networks to Discover page can expand considerably as DataFabric Manager discovers additional networks attached to previously discovered networks.

Network Discovery Limit (in hops) (15)
This option sets the boundary of network discovery as a maximum number of hops (networks) from the DataFabric Manager server. Increase the limit if the storage systems that you want DataFabric Manager to discover are connected to networks that are more than 15 hops (networks) away from the network to which the DataFabric Manager server is attached. The other method for discovering these storage systems is to add them manually.
Decrease the discovery limit if a smaller number of hops includes all the networks with storage systems you want to discover. For example, reduce the limit to six hops if there are no storage systems that must be discovered on networks beyond six hops. Reducing the limit prevents DataFabric Manager from using cycles to probe networks that contain no storage systems that you want to discover.

Networks to discover
This option allows you to manually add and delete networks that DataFabric Manager scans for new storage systems. Change the default value if you want to add a network that DataFabric Manager cannot discover automatically, or if you want to delete a network in which you no longer want storage systems to be discovered.

Host agent discovery
This option allows you to enable or disable the discovery of host agents. Change the default value if you want to disable the discovery of LUNs or storage area network (SAN) hosts.

Network Credentials
This option enables you to specify, change, or delete an SNMP community that DataFabric Manager uses for a specific network or host. Change the default value if storage systems and routers that you want to include in DataFabric Manager do not use the default SNMP community.
What SNMP is
Simple Network Management Protocol (SNMP) is an application-layer protocol that facilitates the exchange of management information between network devices. SNMP is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance; find and solve network problems; and plan for network growth.
Next topics
When to enable SNMP on page 42 SNMP protocols to discover and monitor storage systems on page 42 What the Preferred SNMP Version option is on page 43 How DataFabric Manager chooses network credentials for discovery on page 43 Discovery process using SNMPv1 or SNMPv3 on page 43 Monitoring process using SNMPv1 on page 44 Monitoring process using SNMPv3 on page 44 Setting SNMPv1 or SNMPv3 as the preferred version on page 45 Setting SNMPv1 as the only SNMP version on page 45
Setting SNMPv1 or SNMPv3 to monitor a storage system on page 45 Modifying the network credentials and SNMP settings on page 46 Deleting the SNMP settings for the network on page 46 Addition of a storage system from an undiscovered network on page 46 Diagnosis of SNMP connectivity on page 47
You can use SNMPv3 to discover and monitor storage systems if SNMPv1 is disabled.
Note: The user on the storage system whose credentials are specified in Operations Manager should
have the login-snmp capability to be able to use SNMPv3. The version specified in the Preferred SNMP Version option at the storage system level is used for monitoring the discovered storage system. If no version is specified at the storage system level, then either the network setting or the global setting is used. However, you can modify the SNMP version, if required.
Note: If the monitoring fails using the specified SNMP version, then the other SNMP version is not used for monitoring.
Related references
When DataFabric Manager is installed for the first time or updated, by default, the global and network setting uses SNMPv1 as the preferred version. However, you can configure the global and network setting to use SNMPv3 as the default version.
Related tasks
If the storage system is discovered using the preferred SNMP version (for example, SNMPv1), the discovered storage system is added with the preferred SNMP version set to Global/Network Default. This implies that the network or global settings are used for monitoring.
If the storage system is not discovered using SNMPv1, SNMPv3 is used for storage system discovery.
If the discovery succeeds using SNMPv3, SNMPv3 is set as the preferred version for monitoring.
When all or most of the storage systems in a network are running only a particular SNMP version, it is recommended that you specify only that version as the preferred SNMP version for the network. This speeds up the discovery of storage systems running only that SNMP version. You can also prevent a particular version of SNMP from being used for discovery. For example, if a particular version of SNMP is not in use in the network, you can disable that SNMP version. This speeds up the discovery process.
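The try-preferred-then-fall-back behavior described above can be sketched as follows. This is an illustrative model only, not DataFabric Manager code; discover_with_fallback() and try_discover() are hypothetical names standing in for the discovery probe.

```python
# Hedged sketch (not DFM source): try the preferred SNMP version first; if
# discovery fails, retry with the other version and pin that version as the
# system's preferred version for monitoring.

def discover_with_fallback(host, preferred, try_discover):
    other = "SNMPv3" if preferred == "SNMPv1" else "SNMPv1"
    if try_discover(host, preferred):
        # Network/global settings keep governing monitoring.
        return {"discovered": True, "monitor_version": "Global/Network Default"}
    if try_discover(host, other):
        # The version that worked is set as preferred for monitoring.
        return {"discovered": True, "monitor_version": other}
    return {"discovered": False, "monitor_version": None}

# A host that only speaks SNMPv3:
probe = lambda host, version: version == "SNMPv3"
print(discover_with_fallback("filer1", "SNMPv1", probe))
# {'discovered': True, 'monitor_version': 'SNMPv3'}
```

Disabling an unused version simply removes the second probe, which is the speedup the text above describes.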
If the Preferred SNMP Version option is set to SNMPv1 for the storage system, or the option is not set for the storage system and the global or network setting is SNMPv1:
- The community string set at the network level is used for SNMPv1 monitoring.
- If the community string is not specified at either the global or the network level, SNMPv1 is disabled and an event is generated to indicate the SNMP communication failure with the storage system.
If the Preferred SNMP Version option is set to SNMPv3 for the storage system, or the option is not set for the storage system and the global or network setting is SNMPv3:
- The login and the password specified for the storage system are used for SNMPv3 monitoring.
- If the storage system credentials are not specified, the login and the password specified at the network level are used for SNMPv3 monitoring.
- If no credentials are provided at the network level, the login and the password specified at the global level are used for SNMPv3 monitoring.
If no credentials are provided at the global level either, an event is generated to indicate the SNMP communication failure with the storage system.
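The storage-system, network-level, and global-level credential fallback above can be sketched as a simple lookup chain. This is an illustrative sketch only; the dict layout and function name are assumptions, not a DataFabric Manager data structure.

```python
# Illustrative sketch of the SNMPv3 credential fallback described above;
# not DFM code. Each scope dict may carry a "credentials" entry.

def snmpv3_credentials(system, network, global_settings):
    """Return (login, password) for SNMPv3 monitoring, or None.

    Falls back from storage-system credentials to network-level and then
    global-level credentials; with nothing set anywhere, monitoring fails
    and an SNMP communication failure event would be raised.
    """
    for scope in (system, network, global_settings):
        creds = scope.get("credentials")
        if creds:
            return creds
    return None  # caller generates the failure event

print(snmpv3_credentials({}, {"credentials": ("mon", "s3cret")}, {}))
# ('mon', 's3cret')
print(snmpv3_credentials({}, {}, {}))  # None
```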
1. Select the Network Credentials submenu from the Setup menu. Alternatively, select the Discovery submenu from the Setup menu and click the edit link corresponding to the Network Credentials option. 2. Provide values for each of the parameters requested. 3. Click Add.
1. Go to the Network Credentials page. 2. Click the edit link corresponding to the Edit field for the SNMPv3 enabled network. 3. In the Edit Network Credentials section, modify the value of the Preferred SNMP Version option to SNMPv1. 4. In the SNMPv3 Settings section, clear the Login and Password values. 5. Click Update. 6. If the storage system in the network has the Preferred SNMP Version option set to SNMPv3, then a) Go to the Edit Appliance Settings page of the corresponding storage system. b) Modify the value of the Preferred SNMP Version option to Global/Network Default.
Steps
1. Go to the Edit Appliance Settings page. 2. Set the Preferred SNMP Version option for the corresponding storage system. 3. Click Update.
1. Select the Network Credentials submenu from the Setup menu. Alternatively, select the Discovery submenu from the Setup menu and click the edit link corresponding to the Network Credentials option. 2. Click the edit link corresponding to the Edit field in the Network Credentials page. 3. Modify values for the parameters required. 4. Click Update.
1. Go to the Network Credentials page. 2. Select the check box corresponding to the Delete field for the required network. 3. Click Delete Selected.
In this case, discovery is not enabled on the storage system's network.
Use of the Diagnose Connectivity tool for a managed storage system on page 282 Use of the Diagnose Connectivity tool for unmanaged storage system on page 282 Where to find the Diagnose Connectivity tool in Operations Manager on page 283 Reasons why DataFabric Manager might not discover your network on page 283
What role-based access control is on page 49 Configuring vFiler unit access control on page 50 Logging in to DataFabric Manager on page 50 List of predefined roles in DataFabric Manager on page 51 Active Directory user group accounts on page 52 Adding administrative users on page 53 How roles relate to administrators on page 53 What an RBAC resource is on page 57 How reports are viewed for administrators and roles on page 59 What a global and group access control is on page 59 Management of administrator access on page 59
However, when you initiate an operation that requires specific privileges, DataFabric Manager prompts you to log in. For example, to create administrator accounts, you need to log in with Administrator account access. RBAC allows administrators to manage groups of users by defining roles. If you need to restrict access to the database to specific administrators, you must set up administrator accounts for them. Additionally, if you want to restrict the information that these administrators can view and the operations they can perform, you must apply roles to the administrator accounts you create.
The following restrictions are applicable to vFiler units' administrators: If a vFiler unit has a volume assigned to it, the vFiler administrator cannot view details or reports for the aggregate in which the volume exists. If a vFiler unit has a qtree assigned to it, the vFiler administrator cannot view details or reports for the volume in which the qtree exists.
Note: The full name of a qtree contains a volume name (for example, 10.72.184.212:/hemzvol/hagar_root_backup_test) even though the vFiler unit does not
contain the volume. This procedure describes how to configure access control that allows an administrator to view and monitor vFiler units.
Steps
1. Create a group that contains vFiler objects.
2. From the Edit Group Membership page, select vFiler units to add to the group.
3. From the Roles page, create a role for the vFiler administrator and assign it the following database operations: Delete, Read, and Write.
4. From the Edit Administrator Settings page, assign the role to the vFiler administrator.
2. Type your administrator name and password.
3. Click Log In.
Everyone account
access by default. If you upgrade to DataFabric Manager 3.3 and later, these legacy privileges are retained by the Everyone account and mapped to the GlobalRead and GlobalSRM roles.
Roles GlobalAlarm GlobalBackup GlobalConfigManagement GlobalDataProtection GlobalDataSet GlobalDelete GlobalEvent GlobalExecute GlobalFailover GlobalFullControl GlobalMirror GlobalPerfManagement GlobalProvisioning GlobalQuota GlobalRead GlobalReport GlobalResourceControl GlobalRestore GlobalSAN GlobalSDConfig GlobalSDDataProtection GlobalSDDataProtectionAndRestore GlobalSDFullControl GlobalSDSnapshot GlobalSDStorage GlobalSRM GlobalWrite
Everyone: No roles
To set up administrator accounts as a user group, use the following naming convention: <AD domain>\group_dfmadmins
In this example, all administrators who belong to group_dfmadmins can log in to DataFabric Manager and inherit the roles specified for that group.
1. Log in to the Administrator account. 2. In the Control Center, select Administrative Users from the Setup menu. 3. Type the name for the administrator or domain name for the group of administrators.
Note: The user you add must already exist locally.
4. Optionally, enter the e-mail address for the administrator or administrator group. 5. Optionally, enter the pager address, as an e-mail address or pager number, for the administrator or administrator group. 6. Click Add.
What inheritance roles are on page 55 What capabilities are on page 56 Role precedence and inheritance on page 56 Creating roles on page 56 Modifying roles on page 57
Related concepts
Role: Operations
GlobalPerfManagement: You can manage views, event thresholds, and alarms, apart from viewing performance information in Performance Advisor.
GlobalProvisioning: You can provision primary dataset nodes and attach resource pools to secondary or tertiary dataset nodes. You also have all the capabilities of the GlobalResourceControl, GlobalRead, and GlobalDataSet roles for dataset nodes that are configured with provisioning policies.
GlobalQuota: You can view user quota reports and events.
GlobalRead: You can view the DataFabric Manager database, backup configurations, events and alerts, and replication or failover policies.
GlobalReport: You can manage custom reports and report schedules.
GlobalResourceControl: You can add members to dataset nodes that are configured with provisioning policies.
GlobalRestore: You can perform restore operations from backups on secondary volumes.
GlobalSAN: You can create, expand, and destroy LUNs.
GlobalSDConfig: You can read, modify, and delete SnapDrive configuration.
GlobalSDDataProtection: You can manage backups and datasets with SnapDrive.
GlobalSDDataProtectionAndRestore: You can perform backup and restore operations with SnapDrive.
GlobalSDFullControl: You can perform operations specific to the GlobalSDConfig, GlobalSDSnapshot, and GlobalSDStorage roles.
GlobalSDSnapshot: You can list the snapshots and the objects inside them; create, modify, and delete snapshots; create clones of volumes, LUNs, and qtrees; and restore volumes, LUNs, and qtrees from snapshots.
GlobalSDStorage: You can list, create, modify, and delete storage objects and their attributes.
GlobalSRM: You can view information collected by SRM path walks.
GlobalWrite: You can view or write to the DataFabric Manager database.
Note: Super users are assigned the GlobalFullControl role in Operations Manager. For Linux, the super user is the root user. For Windows, super users belong to the administrators group.
When you view roles for an administrator, the settings are those explicitly set for the administrator at the group level. For example, if administrators have the GlobalRead role, they implicitly have the Read role on all groups. Similarly, if administrators have the Read role on a parent group, they implicitly have the Read role on all the subgroups of that parent group. Several other factors also affect the group role that is granted to an administrator:
- The capabilities granted to the "Everyone" administrator account
- The administrator's membership in Active Directory (AD) user groups that have been added to the DataFabric Manager server database
Group roles are named similarly to the global roles that are defined in the previous table.
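The precedence rules above can be sketched as a walk up the group hierarchy. This is an illustrative model only, not DataFabric Manager code; the mapping of GlobalRead to a group-level Read role and the dict layout are assumptions for illustration.

```python
# Hypothetical sketch of role precedence: a Global role implies the matching
# role on every group, and a role on a parent group is inherited by its
# subgroups. Not DFM code; the group tree layout is assumed.

def effective_roles(admin_roles, group, parents):
    """admin_roles: {scope: set(roles)} where scope is "Global" or a group
    name. parents: {group: parent_group or None}."""
    roles = set(admin_roles.get("Global", set()))
    node = group
    while node is not None:                 # walk up to the root group
        roles |= admin_roles.get(node, set())
        node = parents.get(node)
    return roles

admin = {"Global": {"Read"}, "ParentGroup": {"Write"}}
parents = {"SubGroup": "ParentGroup", "ParentGroup": None}
print(sorted(effective_roles(admin, "SubGroup", parents)))
# ['Read', 'Write']
```

The SubGroup administrator here holds Read globally and Write by inheritance from the parent group, matching the two implication rules stated above.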
Note: Roles created in versions prior to DataFabric Manager 3.3 are carried forward.
Creating roles
You can create roles from the Setup menu in Operations Manager.
Steps
1. Choose Roles from the Setup menu. 2. Click Add Capabilities... and, from the Capabilities window, select a resource from the resource tree. 3. Select the operations that you want to allow for the resource and click OK.
4. Optionally, to inherit capabilities from an existing role, select that role from the Inherit Capabilities list and click >> to move the role to the list at the right.
5. Click Add Role.
Modifying roles
You can edit the roles created from the Setup menu in Operations Manager.
Steps
1. Choose Roles from the Setup menu.
2. Find the role in the list of roles and click edit.
3. Optionally, modify the basic settings of the role, such as the name and description.
4. Click Update.
5. Modify role inheritance by doing one of the following:
- To disinherit a role, select it from the list at the right, and click << to remove it.
- To inherit a role, select it from the Inherit Capabilities list and click >> to move it to the list at the right.
Note: This step is optional.
6. Click Update.
A user with the Database Write capability in the global scope is assigned the Policy Write capability. A user with the Database Delete capability is assigned the Policy Delete capability.
Next topics
Granting restricted access to RBAC resources on page 58 Access check for application administrators on page 58
The following example shows you how to create a user role called EventRole using the CLI:
$ dfm role create EventRole
2. Add the following capabilities to the role created in Step 1:
- Read capability on the Global resource for events
- Write capability on the Global resource for events
Example
3. Assign the role created in Step 1 to the user Everyone, using the following command:
$ dfm user role add Everyone EventRole
Note: You can also use the Operations Manager GUI to perform Steps 1 through 3.
4. Open Operations Manager. You can now read and acknowledge events without logging in.
5. Ensure that the user Everyone does not have the DFM.Database.Write capability.
Note: A user with the capability DFM.Database.Write can delete all events.
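A capability check like the one the steps above rely on can be sketched as follows. This is an illustrative sketch only, not a DataFabric Manager API; the ROLE_CAPABILITIES table and the DFM.Event.* capability names are assumptions for illustration (only DFM.Database.Write appears in the text above).

```python
# Illustrative sketch (not a DFM API): deciding whether a user may act on
# events by checking the capabilities granted through that user's roles.
# Capability names other than DFM.Database.Write are hypothetical.

ROLE_CAPABILITIES = {
    "EventRole": {"DFM.Event.Read", "DFM.Event.Write"},
    "GlobalDelete": {"DFM.Database.Delete", "DFM.Database.Write"},
}

def has_capability(user_roles, capability):
    return any(capability in ROLE_CAPABILITIES.get(r, set())
               for r in user_roles)

# "Everyone" holds only EventRole, so events can be read and acknowledged,
# but the database cannot be written (and events cannot be deleted):
print(has_capability(["EventRole"], "DFM.Event.Write"))      # True
print(has_capability(["EventRole"], "DFM.Database.Write"))   # False
```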
When a user configures the client application, the Core AccessCheck capability has to be assigned to a role. Application administrators can check the access permissions of any user only if they have the permission to do so. A client application user configured on the DataFabric Manager server with this role allows the client application to check the access of all users.
Note: After upgrading to DataFabric Manager 3.6 or later, a user with the Database Read capability
administrators.
dfm report admin-roles: Lists all administrators and the roles they are assigned, sorted by role.

For information about how to use the CLI, see the DataFabric Manager man pages for the dfm report commands. The man pages specifically describe command organization and syntax.
By managing administrator access on storage systems and vFiler units, you can complete the following tasks:
- Manage and control access on storage systems and vFiler units from DataFabric Manager.
- Monitor and manage user groups, local users, domain users, and roles on storage systems and vFiler units.
- Create and modify identical local users, roles, and user groups on more than one storage system or vFiler unit.
- Edit user groups, local users, domain users, and roles on storage systems and vFiler units.
- Push user groups, local users, domain users, and roles from one storage system or vFiler unit to another storage system or vFiler unit.
- Modify passwords of local users on a storage system or vFiler unit.
Next topics
Prerequisites for managing administrator access on page 60 Limitations in managing administrator access on page 61 Summary of the global group on page 61 Who local users are on page 61 What domain users are on page 67 What Usergroups are on page 69 What roles are on page 72 What jobs display on page 75
Viewing a specific summary page on page 61
Viewing users on the host on page 61

Viewing a specific summary page
You can view the summary page specific to a storage system or vFiler unit.
Steps
1. Click Control Center > Home > Member Details > Appliances.
2. Choose the storage system under Appliance and click the link.
3. From the left pane, under Appliance Tools, click Host Users > Summary.

Viewing users on the host

You can view the users on a host using the Host Users report. The Host Users report displays information about the existing users on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Users, All from the Report drop-down list.
Viewing local users on the host on page 62
Viewing local user settings on the host on page 62
Adding local users to the host on page 63
Editing local user settings on the host on page 63
Users with Execute capability on page 64

Viewing local users on the host

You can view the local users on a host using the Host Local Users report. The Host Local Users report displays information about the existing local users on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Local Users, All from the Report drop-down list.

Viewing local user settings on the host

You can view the settings of local users on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the view link corresponding to the local user.
The following details of the selected local user appear:
Host Name: Name of the storage system or vFiler unit
User Name: Name of the local user
Description: Description of the local user
User Full-name: Full name of the local user
Usergroups: Usergroups that the user belongs to
Roles: Roles assigned to the local user
Capabilities: Capabilities of roles assigned to the local user as part of the user group
Minimum Password Age: Minimum number of days that a password must be used. This number must be less than or equal to the maximum password age.
Maximum Password Age: Maximum number of days (0 to 2^32-1) that a password can be used
Status: The current status of the user account (Enabled: the user account is enabled; Disabled: the user account is disabled; Expired: the user account is expired)
These rules apply to all accounts except for the root login. When the user fails to enter the correct password, even after the maximum retries, the user account is disabled. The user account is enabled again only when the administrator resets the password for the user. The user account expires if the user fails to change the password within the maximum password age. For more information about maximum retries, see the Data ONTAP System Administration Guide.
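The status rules above can be sketched as a small Python model. This is only an illustrative sketch, not the Data ONTAP or DataFabric Manager implementation; the MAX_RETRIES value and the field names are assumptions for the example.

```python
from dataclasses import dataclass

# Illustrative limit; the real value comes from Data ONTAP security options.
MAX_RETRIES = 5

@dataclass
class LocalUser:
    name: str
    failed_logins: int = 0
    days_since_password_change: int = 0
    status: str = "Enabled"

def update_status(user, max_password_age):
    """Apply the status rules described above (a sketch, not the ONTAP code):
    accounts other than root are disabled after too many failed logins, and
    an account expires if the password outlives the maximum password age."""
    if user.name != "root" and user.failed_logins > MAX_RETRIES:
        user.status = "Disabled"  # only an administrator password reset re-enables it
    elif max_password_age and user.days_since_password_change > max_password_age:
        user.status = "Expired"
    return user.status
```

A disabled account stays disabled until the administrator resets the password, which is why no code path here re-enables it.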
Related information
Data ONTAP System Administration Guide: http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml

Adding local users to the host

You can add a local user to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit on which the user is to be created
User Name: Name of the local user
Password: Password of the local user
Confirm Password: Confirm the password of the local user
User Full Name (optional): Full name of the local user
Description (optional): Description of the local user
Minimum Password Age (optional): Minimum number of days that a password must be used
Maximum Password Age (optional): Maximum number of days that a password can be used
Usergroup Membership: User groups you want the user to be a member of
3. Select one or more user groups from the list.
4. Click Add Local User.

Editing local user settings on the host

You can edit local user settings on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the user link in the Edit column corresponding to the local user.
3. Edit the parameters:
User Full-name: Full name of the local user
Description: Description of the local user
Minimum Password Age (in days): Minimum number of days that a password must be used
Maximum Password Age (in days): Maximum number of days that a password can be used
Usergroup Membership: Usergroups you want the user to be a member of
Note: You cannot edit Host Name and User Name in the Edit Local User section.
4. Select one or more user groups from the list.
5. Click Update.

Users with Execute capability

DataFabric Manager users with the Execute capability can reset the password of a local user on a storage system or vFiler unit using the credentials that are stored in the database. Users who do not have the Execute capability must use the credentials that they provide to modify the password.
Next topics
Pushing passwords to a local user on page 64
Deleting local users from the host on page 65
Pushing local users to hosts on page 66
Monitoring changes in local user configuration on page 66
Editing passwords on page 66

Pushing passwords to a local user

You can push an identical password to a local user on multiple storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. From the List of Existing Local Users section, click the password link in the Push column corresponding to the local user.
If the local user is on the storage system, the Storage System Passwords page is displayed, containing the Modify Password on Storage Systems section.
If the local user is on the vFiler unit, the vFiler Passwords page is displayed, containing the Modify Password on vFilers section.
Note: For more information about Storage System Passwords and vFiler Passwords, see the Operations Manager Help.
3. Specify the parameters:
User Name: Name of the local user
Old Password: Password of the local user
New Password: New password of the local user
Confirm New Password: Confirm the new password of the local user
Select groups and/or Storage systems: Select from the respective lists the storage systems on which the local user exists and the DataFabric Manager groups in which the local user exists
Apply to subgroups: Select the check box if the password change applies to the storage systems of the selected group and the subgroups of the selected group
4. Click Update.
Note: Pushing an identical password creates a job that is displayed in the Jobs tab of the Password Management window.
Deleting local users from the host

You can delete a local user from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. From the List of Existing Local Users section, select the local user that you want to delete.
3. Click Delete Selected.

Pushing local users to hosts

You can push a local user to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Select the DataFabric Manager group or storage system on which you want to push the local user.
3. Select OK in the Resources dialog box.
4. Click Push.
Note: Pushing local users to a host creates a job that is displayed in the Jobs tab of the Host User Management window.
Monitoring changes in local user configuration

You can monitor changes in local user configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host User Modified.

Editing passwords

You can edit the password of a local user on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Local Users.
2. Click the password link in the Edit column corresponding to the local user.
Note: You cannot edit Host Name and User Name in the Edit Password page.
3. Enter the old password.
4. Enter the new password.
5. Confirm the new password.
Viewing domain users on the host on page 67
Adding domain users to the host on page 67
Viewing domain user settings on the host on page 68
Editing domain user settings on the host on page 68
Removing domain users from all the user groups on page 69
Pushing domain users to hosts on page 69
Monitoring changes in domain user configuration on page 69

Viewing domain users on the host

You can view the domain users on a host using the Host Domain Users report. The Host Domain Users report displays information about the existing domain users on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Domain Users, All from the Report drop-down list.

Adding domain users to the host

You can add a domain user to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit from the drop-down list
User Identifier (domain-name\username or SID): Any one of the following: the domain user name or the Security Identifier (SID) of the domain user
Usergroup Membership: User groups you want the user to be a member of
3. Select one or more user groups from the list.
4. Click Add Domain User.

Viewing domain user settings on the host

You can view the domain user settings on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the view link corresponding to the domain user.
The following details of the selected domain user appear:
Host Name: Name of the storage system or vFiler unit
User Name: Name of the domain user
SID: Security Identifier of the domain user
Usergroups: Usergroups that the user belongs to
Roles: Roles assigned to the domain user
Capabilities: Capabilities of roles assigned to the domain user as part of the user group
Editing domain user settings on the host

You can edit a domain user on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the edit link corresponding to the domain user.
3. Edit Usergroup Membership: the usergroups you want the user to be a member of.
Note: You cannot edit Host Name and User Name in the Edit Domain User section.
4. Click Update.
Removing domain users from all the user groups

You can remove a domain user from all the user groups.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Select the domain user that you want to remove.
3. Click Remove From All Usergroups.

Pushing domain users to hosts

You can push a domain user to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Domain Users.
2. Click the push link corresponding to the domain user.
3. Select the DataFabric Manager group, storage system, or vFiler unit on which you want to push the domain user.
4. Select OK.
5. Click Push.

Monitoring changes in domain user configuration

You can monitor the changes in domain user configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Domain User Modified.
Viewing user groups on the host on page 70
Adding Usergroups to the host on page 70
Viewing Usergroup settings on the host on page 70
Editing Usergroup settings on the host on page 71
Deleting Usergroups from the host on page 71
Pushing Usergroups to hosts on page 72
Monitoring changes in Usergroup configuration on page 72

Viewing user groups on the host

You can view the user groups on a host using the Host Usergroups report. The Host Usergroups report displays information about the existing user groups on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Usergroups, All from the Report drop-down list.

Adding Usergroups to the host

You can add a user group to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit from the drop-down list
Usergroup Name: Name of the user group
Description: Description of the user group
Select Roles: Capabilities of roles
3. Select one or more roles.
4. Click Add Usergroup.

Viewing Usergroup settings on the host

You can view the user group settings on the storage systems or vFiler units.

Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the view link corresponding to the user group.
The following details of the selected user group appear:
Host Name: Name of the storage system or vFiler unit
Usergroup Name: Name of the user group
Description: Description of the user group
Roles: Roles assigned to the user group
Capabilities: Capabilities of the user group
Editing Usergroup settings on the host

You can edit user group settings on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the edit link corresponding to the user group that you want to edit.
3. Edit the parameters:
Usergroup Name: Name of the user group
Description: Description of the user group
Select Roles: Capabilities of roles
Note: You cannot edit Host Name in the Edit Usergroup section.
4. Select one or more roles.
5. Click Update.

Deleting Usergroups from the host

You can delete a user group from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Select the user group that you want to delete.
3. Click Delete Selected.
Pushing Usergroups to hosts

You can push identical user groups to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Usergroups.
2. Click the push link of the user group that you want to push to other storage systems or vFiler units.
3. Select the DataFabric Manager group, storage system, or vFiler unit on which you want to push the user group.
4. Select OK.
5. Click Push.

Monitoring changes in Usergroup configuration

You can monitor the changes in user group configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Usergroup Modified.
Viewing roles on the host on page 72
Adding roles to the host on page 73
Viewing role settings on the host on page 73
Editing role settings on the host on page 74
Deleting roles from the host on page 74
Pushing roles to the hosts on page 74
Monitoring changes in role configuration on page 75

Viewing roles on the host

You can view the role settings on the storage systems or vFiler units by using the Host Roles report.
1. From any page, click Management > Host Users.
2. Select Host Roles, All from the Report drop-down list.

Adding roles to the host

You can add a role to a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Specify the parameters:
Host Name: Name of the storage system or vFiler unit from the drop-down list
Role Name: Name of the role
Description: Description of the role
Capabilities: Capabilities of the role (click the Add Capabilities link)
3. Select one or more capabilities you want to add.
4. Click OK.
5. Click Add Role.

Viewing role settings on the host

You can view the roles on the storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the view link corresponding to the host role.
The following details of the selected host role appear:
Host Name: Name of the storage system or vFiler unit
Role Name: Name of the role
Description: Description of the role
Capabilities: Capabilities of the role
Editing role settings on the host

You can edit a role on a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the edit link corresponding to the host role that you want to edit.
3. Edit the parameters:
Description: Description of the role
Capabilities: Capabilities of the role (click the Edit link)
Note: You cannot edit Host Name and Role Name in the Edit Role section.
4. Select one or more capabilities you want to add.
5. Click OK.
6. Click Update.

Deleting roles from the host

You can delete a role from a storage system or vFiler unit.
Steps
1. From any page, select Management > Host Users > Roles.
2. Select the host role that you want to delete.
3. Click Delete Selected.

Pushing roles to the hosts

You can push identical roles to a group of storage systems or vFiler units.
Steps
1. From any page, select Management > Host Users > Roles.
2. Click the push link of the host role that you want to push to other storage systems or vFiler units.
3. Select the DataFabric Manager group or storage system on which you want to push the role.
4. Select OK.
5. Click Push.

Monitoring changes in role configuration

You can monitor the changes in role configuration on a storage system or vFiler unit.
Steps
1. Click Setup > Alarms.
2. Create a new alarm for the event Host Role Modified.
Pushing jobs on page 75
Deleting push jobs on page 75

Pushing jobs

To view the status of the push jobs, select Management > Host Users > Jobs.

Deleting push jobs

You can delete a push job.
Steps
1. Select the push job that you want to delete.
2. Click Delete.
Following is a set of considerations for creating groups:
- You can group similar or different objects in a group.
- An object can be a member of any number of groups.
- You can group a subset of group members to create a new group.
- You cannot create a group of groups.
- You can create any number of groups.
- You can copy a group or move a group in a group hierarchy.
Next topics
What group types are on page 78
What a Global group is on page 79
What hierarchical groups are on page 79
Creating groups on page 80
Creating groups from a report on page 80
What configuration resource groups are on page 81
Guidelines for managing groups on page 82
Guidelines for creating configuration resource groups on page 82
Guidelines for adding vFiler units to Appliance Resource group on page 82
Editing group membership on page 83
What group threshold settings are on page 83
What group reports are on page 84
What summary reports are on page 84
What subgroup reports are on page 84
What homogeneous groups are on page 78
What mixed-type groups are on page 79
LUN group ( ): contains LUNs only
Configuration resource group ( ): contains storage systems associated with one or more configuration files
Dataset ( ): the data that is stored in a collection of primary storage containers, including all the copies of the data in those containers
Resource pool ( ): a collection of storage objects from which other storage containers are allocated
Manager, but you cannot perform management tasks on the Global group.
You can select arguments for reports to be generated.
Creating groups
You can create a new group from the Edit Groups page.
Before You Begin
To create a group, you must be logged in as an administrator with a role having Database Write capability on the parent group. To create a group directly under the Global group, the administrator must have a role with Database Write capability on the Global group.
Steps
1. From the Control Center, click Edit Groups.
2. In the Group Name field, type the name of the group you want to create. See Naming conventions for groups.
3. From the list of groups, select the parent group for the group you are creating. You might need to expand the list to display the parent group you want.
4. Click Add.
The new group is created. The Current Groups list in the left-pane area is updated with the new group. You might need to expand the Current Groups list to display the new group.
Creating groups from a report

Steps

1. From the Control Center, click the Member Details tab.
2. Click one of the object tabs (for example, Aggregate, File Systems, or LUN).
3. To the left of the list of objects in the main window, select the check boxes for the objects that you want to add to the group.
4. At the bottom left of the main window, click Add to New Group.
5. In the Group Name field, type the name of the group you want to create. See Naming conventions for groups.
Groups and objects | 81

6. From the list of groups, select the parent group for the group you are creating. You might need to expand the list to display the parent group you want.
7. Click Add.
The new group is created. The Current Groups list in the left-pane area is updated with the new group. You might need to expand the Current Groups list to display the new group.
Besides specifying configuration settings by associating individual configuration files with a group of storage systems, you can also specify another configuration resource group from which to acquire configuration settings. Such a group is known as a parent group. For example, a previously created configuration resource group might already have most, or all, of the settings you require.
If you add a hosting storage system that is configured with vFiler units to a group, the vFiler units are also added as indirect members. When you remove a hosting storage system from a group, its vFiler units are also removed. If you add a storage resource assigned to a vFiler unit to a group, the vFiler unit is also added as an indirect member. If you remove the storage resources from a group, the vFiler unit is also removed.
Note: Indirect members are considered for determining the group status.
1. Go to the Groups area on the left side of Operations Manager and expand the list as needed to display the group to which you want to add members.
2. Click the name of the group to which you want to add members.
3. From the Current Group menu at the lower left of Operations Manager, click Edit Membership.
4. Select the object from the Choose from All Available list and click >> to move the object to the list at the right.
Operations Manager adds the selection to the group and updates the membership list displayed on the right side of the Edit Group Membership area.
the objects in the group. These new threshold values are not associated with the group. That is, if you add another object to the group after applying a threshold change, the threshold value of the new object is not changed. The threshold value of the new object does not change even if it is different from that of the current group. Additionally, if you apply threshold changes to an object that belongs to multiple groups, the threshold value is changed for this object across all groups. For information about how to change the thresholds for a group of objects, see the Operations Manager Help.
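The threshold behavior described above, where values are stored per object so a group-level change affects only current members but follows an object across all its groups, can be modeled with a short Python sketch. The class and attribute names are hypothetical, chosen only for illustration.

```python
# Thresholds live on objects, not groups, so a group-level change rewrites
# the value on each current member (an illustrative model, not DFM code).
class StorageObject:
    def __init__(self, name, threshold=90):
        self.name = name
        self.threshold = threshold   # a percent-full threshold, for example

class Group:
    def __init__(self, members=None):
        self.members = list(members or [])

    def apply_threshold(self, value):
        for obj in self.members:     # only objects that are members now
            obj.threshold = value

a, b = StorageObject("aggr1"), StorageObject("aggr2")
g1, g2 = Group([a, b]), Group([a])
g1.apply_threshold(80)               # 'a' also belongs to g2; its value changes there too
c = StorageObject("aggr3")
g1.members.append(c)                 # added after the change: keeps its own threshold
```

Because the value is stored on the object itself, `a` shows the new threshold in every group it belongs to, while the late-added `c` keeps its original value.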
the group and then clicking the appropriate Operations Manager tab.
about the aggregates in its subgroups. You do not see data about other object types in the parent group or the subgroups. If you run a report on a mixed-type object group, Operations Manager runs the report on group members of the applicable type (for example, qtrees for the Qtree Growth report). Operations Manager combines the results and then eliminates any duplicates.
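The combine-and-deduplicate behavior for subgroup reports can be sketched as follows. The member model of (type, name) pairs is an assumption made only for this example.

```python
def group_report(groups, object_type):
    """Run a report over a group and its subgroups: keep only members of the
    applicable type, combine the results, and drop duplicates. A sketch of
    the behavior described above, not the Operations Manager implementation."""
    seen, result = set(), []
    for group in groups:
        for kind, name in group:           # each member is a (type, name) pair
            if kind == object_type and name not in seen:
                seen.add(name)
                result.append(name)
    return result

parent = [("qtree", "q1"), ("volume", "v1")]
child = [("qtree", "q1"), ("qtree", "q2")]  # q1 appears in both groups
rows = group_report([parent, child], "qtree")
```

The volume member is skipped because it is not of the applicable type, and the duplicate qtree appears only once in the combined result.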
What monitoring is on page 87
Links to FilerView on page 88
Query intervals on page 89
What SNMP trap listener is on page 90
Descriptions of events and their severity types on page 93
Alarm configurations on page 95
Working with user alerts on page 98
Introduction to DataFabric Manager reports on page 103
Data export in DataFabric Manager on page 120
What monitoring is
Monitoring involves several processes. First, DataFabric Manager discovers the storage systems supported on your network. DataFabric Manager periodically monitors data that it collects from the discovered storage systems, such as CPU usage, interface statistics, free disk space, qtree usage, and chassis environmental status. DataFabric Manager generates events when it discovers a storage system, when the status is abnormal, or when a predefined threshold is breached. If configured to do so, DataFabric Manager sends a notification to a recipient when an event triggers an alarm. The following flow chart illustrates the DataFabric Manager monitoring process.
Links to FilerView
In DataFabric Manager 2.3 and later, UI pages displaying information about some DataFabric Manager objects contain links, indicated by an icon ( ), to FilerView, the Web-based UI for storage systems.
Storage monitoring and reporting | 89

When you click the icon, you are connected to the FilerView location, where you can view information about, and make changes to, the object whose icon you clicked. Depending on your setup, you might need to authenticate with the storage system whose FilerView you are connecting to, using one of the administrator user accounts on the storage system.
Query intervals
DataFabric Manager uses periodic SNMP queries to collect data from the storage systems it discovers. DataFabric Manager reports the data in the form of tabular and graphical reports and uses it for event generation. The time interval at which an SNMP query is sent depends on the data being collected. For example, although DataFabric Manager pings each storage system every minute to ensure that the storage system is reachable, the amount of free space on the disks of a storage system is collected every 30 minutes.
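The per-metric query intervals can be illustrated with a small scheduler simulation. The two intervals come from the text (reachability ping every minute, free disk space every 30 minutes); everything else is an illustrative sketch, not how DataFabric Manager is actually implemented.

```python
import heapq

def schedule(monitors, until):
    """Simulate periodic queries with per-metric intervals (sketch only;
    the real server issues SNMP queries on these schedules).
    'monitors' maps metric name -> query interval in minutes."""
    heap = [(interval, name) for name, interval in monitors.items()]
    heapq.heapify(heap)
    fired = []
    while heap and heap[0][0] <= until:
        t, name = heapq.heappop(heap)
        fired.append((t, name))                      # a query happens at minute t
        heapq.heappush(heap, (t + monitors[name], name))
    return fired

# Intervals quoted in the text above.
events = schedule({"ping": 1, "disk_free": 30}, until=60)
```

Over a simulated hour, the ping fires 60 times while the disk-space query fires only at minutes 30 and 60, which is exactly the asymmetry the paragraph describes.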
Next topics
What global monitoring options are on page 89
Considerations before changing monitoring intervals on page 89
What SNMP trap events are on page 90
How SNMP trap reports are viewed on page 91
When SNMP traps cannot be received on page 91
SNMP trap listener configuration requirements on page 91
How SNMP trap listener is stopped on page 92
Configuration of SNMP trap global options on page 92
Information about the DataFabric Manager MIB on page 92
If the severity of a trap is unknown, DataFabric Manager drops the trap.
Related concepts
with an error and the Warning event is displayed in the Events page.
DataFabric Manager can send only the traps that are available in the MIB.
Note: DataFabric Manager can send information to an SNMP trap host only when an alarm for which the trap host is specified is generated. DataFabric Manager cannot serve as an SNMP agent; that is, you cannot query DataFabric Manager for information from an SNMP trap host.
notification of events. Each event is associated with a severity type to help you determine priorities for taking corrective action, as follows.
Note: Performance Advisor uses only the Normal and Error events.
Severity types:
Normal: A previous abnormal condition for the event source returned to a normal state and the event source is operating within the desired thresholds. To view events with this severity type, you select the All option.
Information: The event is a normal occurrence; no action is required.
Warning: The event source experienced an occurrence that you should be aware of. Events of this severity do not cause service disruption, and corrective action might not be required.
Error: The event source is still performing; however, corrective action is required to avoid service disruption.
Critical: A problem occurred that might lead to service disruption if corrective action is not taken immediately.
Emergency: The event source unexpectedly stopped performing and experienced unrecoverable data loss. You must take corrective action immediately to avoid extended downtime.
Unknown: The event source is in an unknown state. To view events with this severity type, you select the All option.
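The severity levels form an ordered scale, which is what phrases such as "severity Information or higher" rely on. The following sketch expresses that ordering; it assumes the six ranked levels listed above and omits the special Unknown state, which is viewed through the All option rather than ranked.

```python
# Severity levels in increasing order of urgency (sketch; Unknown omitted).
SEVERITIES = ["Normal", "Information", "Warning", "Error", "Critical", "Emergency"]

def at_least(severity, floor):
    """True if 'severity' is the given level or worse."""
    return SEVERITIES.index(severity) >= SEVERITIES.index(floor)

# Example: alarms can be created for events of severity Information or higher.
alarmable = [s for s in SEVERITIES if at_least(s, "Information")]
```

With this ordering, an Error event qualifies for an alarm configured at the Warning level, while a Normal event does not.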
Next topics
Viewing events on page 94
Managing events on page 94
Operations on local configuration change events on page 94
Viewing events
You can view a list of all events that occurred and view detailed information about any event.
Step
1. View the events logged by Operations Manager in any of the following ways:
- Click the Events: Emergency, Critical, Error, Warning link located at the top of the Operations Manager main window.
- From the Control Center tab, click the Events tab located in the Group Summary page.
- Select the Details pages for storage systems, SAN hosts, FC switches, HBA ports, and FCP targets. The Details pages provide lists of events related to the specific component.
- From the Backup Manager tab or the Disaster Recovery Manager tab, click the Events tab.
Note: User quota threshold events can be viewed only with the User Quota Events report available
Managing events
If DataFabric Manager is not configured to trigger an alarm when an event is generated, you are not automatically notified about the event. However, you can identify the event by checking the events log on the server on which DataFabric Manager is installed.
Steps
1. From an Events view, select the check box for the event that you want to acknowledge. You can select multiple events. 2. Click Acknowledge Selected to acknowledge the event that caused the alarm. 3. Find out the cause of the event and take corrective action. 4. Delete the event.
Alarm configurations
DataFabric Manager uses alarms to tell you when events occur. DataFabric Manager sends the alarm notification to one or more specified recipients: an e-mail address, a pager number, an SNMP trap host, or a script that you write. You decide which events cause alarms, whether the alarm repeats until it is acknowledged, and how many recipients an alarm has. Not all events are severe enough to require alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, you should configure DataFabric Manager to repeat notification until an event is acknowledged, to avoid multiple responses to the same event. DataFabric Manager does not automatically send alarms for events; you must configure alarms for the events that you specify.
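The repeat-until-acknowledged behavior can be sketched as a loop that resends the notification to every recipient until acknowledgment arrives. This is only an illustrative model: the bounded round count, the callback, and the recipient address are assumptions, not DataFabric Manager behavior or API.

```python
def notify_until_acknowledged(recipients, is_acknowledged, max_rounds=5):
    """Resend the alarm to every recipient until the event is acknowledged
    (sketch of the repeat-notification option; bounded for illustration)."""
    sent = []
    for round_no in range(1, max_rounds + 1):
        if is_acknowledged():
            break
        sent.extend((round_no, r) for r in recipients)
    return sent

# Hypothetical recipient; the event is acknowledged after two rounds.
state = {"checks": 0}
def acked():
    state["checks"] += 1
    return state["checks"] > 2

log = notify_until_acknowledged(["[email protected]"], acked)
```

Acknowledging the event stops the resends, which is why acknowledgment matters when repeat notification is enabled.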
Next topics
Configuration guidelines on page 95
Creating alarms on page 95
Testing alarms on page 96
Comments in alarm notifications on page 97
Example of alarm notification in e-mail format on page 97
Example of alarm notification in script format on page 97
Example of alarm notification in trap format on page 98
Response to alarms on page 98
Deleting alarms on page 98
Configuration guidelines
When configuring alarms, you must follow a set of guidelines:
- Alarms must be created for a group, either an individual group or the Global group.
- If you want to set an alarm for a specific object, you must first create a group with that object as the only member. Then, create an alarm for the newly created group.
- Alarms you create for a specific event are triggered when that event occurs.
- Alarms you create for a type of event are triggered when any event of that severity level occurs.
- Alarms can be created for events of severity Information or higher.
Related concepts
Creating alarms
You can create alarms from the Alarms page in Operations Manager.
Steps
1. Click Control Center > Setup > Alarms.
2. From the Alarms page, select the group that you want Operations Manager to monitor. You might need to expand the list to display the one you want to select.
3. Specify what triggers the alarm: an event or the severity of an event.
4. Specify the recipient of the alarm notification.
Note: If you want to specify more than one recipient or configure repeat notification, continue with the following steps.

5. Click Add to set the alarm. If you want to configure additional options, continue with Step 6.
6. Click Advanced Version.
7. Optionally, if you want to specify a class of events that should trigger this alarm, specify the event class. You can use regular expressions.
8. Optionally, specify the recipients of the alarm notification. Formats include administrator names, e-mail addresses, pager addresses, or an IP address of the system to receive SNMP traps (or the port number to send the SNMP trap to).
9. Optionally, specify the period during which Operations Manager sends alarm notifications.
10. Optionally, select Yes to resend the alarm notification until the event is acknowledged, or No to notify the recipients only once.
11. Optionally, set the interval (in minutes) that Operations Manager waits before it tries to resend a notification.
12. Activate the alarm by selecting No in the Disable field.
13. Click Add.
Testing alarms
You can test the alarms from the Alarms page in Operations Manager.
Steps
1. From the Alarms page, click Test (to the right of the alarm you want to test).
2. Click Test.
DataFabric Manager generates an alarm and sends a notification to the recipient.
Response to alarms
When you receive an alarm, you should acknowledge the event and resolve the condition that triggered the alarm. In addition, if the repeat notification feature is enabled and the alarm condition persists, you continue to receive notifications.
Deleting alarms
You can delete alarms from the Alarms page in Operations Manager.
Steps
1. From the Alarms page, select the alarm for deletion.
2. Click Delete Selected.
What user alerts are on page 99
Differences between alarms and user alerts on page 99
User alerts configurations on page 100
E-mail addresses for alerts on page 100
Domains in user quota alerts on page 101
What the mailmap file is on page 101
Guidelines for editing the mailmap file on page 102
How the contents of the user alert are viewed on page 102
How the contents of the e-mail alert are changed on page 102
What the mailformat file is on page 102
Guidelines for editing the mailformat file on page 103
User alerts are sent to the user who exceeds the user quota thresholds. A user alert can take only the form of an e-mail message.
Alarms
- Alarms can be sent only to users listed as administrators on the Administrators page of Operations Manager.
- Alarms can be configured for any event with a severity of Information or higher.

User alerts
- User alerts are sent to any user with user quota information in the DataFabric Manager database.
- User alerts can be sent only when the following user quota events occur: User Disk Space Quota Almost Full, User Disk Space Quota Full, User Files Quota Almost Full, User Files Quota Full.
If you need to specify the e-mail addresses of many users, using the Edit User Settings page for each user might not be convenient. Therefore, DataFabric Manager provides a mailmap file that enables you to specify many e-mail addresses in one operation. If you do not specify an e-mail address, DataFabric Manager appends the configured default e-mail domain to the user name, and uses the resulting e-mail address to send the alert.
Note: DataFabric Manager uses only the part of the user name that is unique to the user. For
example, if the user name is company/joe, Operations Manager uses joe as the user name.
If a default e-mail domain is not configured, Operations Manager again uses the part of the user name that is unique to the user (without the domain information).
Note: If your SMTP server processes only e-mail addresses that contain the domain information,
you must configure the domain in DataFabric Manager to ensure that e-mail messages are delivered to their intended recipients.
Each mailmap entry contains the following fields:
- A case-sensitive keyword that must appear at the beginning of each entry
- The Windows or UNIX user name; if the name contains spaces, enclose the name in either double or single quotes
- The e-mail address to which the quota alert is sent when the user crosses a user quota threshold
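As a sketch of what such entries can look like, the following creates a sample mailmap file. The `mapuser` keyword and the file location are illustrative assumptions, not confirmed by this guide; verify the exact keyword against the mailmap documentation for your DataFabric Manager version.

```shell
# Create a sample mailmap file: each entry is a keyword, a Windows or
# UNIX user name, and the alert e-mail address.
# ASSUMPTION: "mapuser" is used here as a placeholder keyword; check
# your DataFabric Manager documentation for the actual keyword.
cat > /tmp/sample-mailmap <<'EOF'
mapuser company/joe [email protected]
mapuser "Mary Smith" [email protected]
mapuser unixuser1 [email protected]
EOF

# Names containing spaces must be quoted, as in the second entry.
grep -c '^mapuser' /tmp/sample-mailmap
```

Note how the second entry quotes a user name that contains a space, as the field description above requires.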
mail-headers: The SMTP headers to be sent in the DATA section of the SMTP message
body: The body of the e-mail
Any words that begin with DFM_ are treated as DataFabric Manager variables and are replaced by their values. The following table lists the valid variables.
DFM_EVENT_NAME: Name of the event
DFM_QUOTA_FILE_SYSTEM_NAME: Name of the file system (volume or qtree) that caused the quota event
DFM_QUOTA_FILE_SYSTEM_TYPE: Type of file system (volume or qtree)
DFM_QUOTA_PERCENT_USED: Percentage of quota used
DFM_QUOTA_USED: Amount of disk space or number of files used
DFM_QUOTA_LIMIT: Total disk space or files quota
DFM_QUOTA_TYPE: Type of quota (disk space or files), depending on whether the disk space or files quota threshold was exceeded
DFM_LINK_EVENT: Hyperlink to the event
DFM_QUOTA_USER_NAME: Name of the user exceeding the quota threshold
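The headers section, body, and DFM_ variables can be illustrated with a hypothetical mailformat file. The file contents below and the sed-based substitution are a sketch for illustration only; DataFabric Manager performs the variable replacement itself, not through sed.

```shell
# A hypothetical mailformat file: SMTP headers first, then a blank
# line, then the body. DFM_* placeholders are replaced by DataFabric
# Manager with real values when the alert is sent.
cat > /tmp/sample-mailformat <<'EOF'
Subject: Quota alert: DFM_EVENT_NAME
From: [email protected]

You have used DFM_QUOTA_PERCENT_USED of your DFM_QUOTA_TYPE quota
on DFM_QUOTA_FILE_SYSTEM_NAME (DFM_QUOTA_USED of DFM_QUOTA_LIMIT).
Event details: DFM_LINK_EVENT
EOF

# Simulate the substitution for two of the variables (illustration
# only; the server substitutes all DFM_* variables itself).
sed -e 's/DFM_EVENT_NAME/User Disk Space Quota Almost Full/' \
    -e 's/DFM_QUOTA_PERCENT_USED/91%/' /tmp/sample-mailformat
```

The output shows the alert as a recipient would see it, with the two simulated variables filled in and the rest left as placeholders.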
You can run reports and create custom reports from the CLI. However, DataFabric Manager provides reports in an easy-to-use Operations Manager interface, in which you can do the following:
- View a report.
- Save a report in Excel format.
- Print a report.
- Create a report.
- Delete a custom report. You cannot delete a standard report.
- Use a custom report as a template to create another custom report.
In DataFabric Manager 3.6 or later, you can search all the reports from Reports > All. All the reports are divided into the following categories:
- Recently Viewed
- Favorites
- Custom Reports
- Logical Objects
- Physical Objects
- Monitoring
- Performance
- Backup
- Disaster Recovery
- Data Protection Transfer
- Miscellaneous
For more information about these categories, see Operations Manager Help.
Note: The report category Performance contains the performance characteristics of objects. However,
you can view the complete reports under their respective report categories.
Next topics
Introduction to report options on page 105
Introduction to report catalogs on page 105
Different reports in Operations Manager on page 105
What performance reports are on page 109
Configuring custom reports on page 109
Deleting custom reports on page 110
Putting data into spreadsheet format on page 111
What scheduling report generation is on page 111
Methods to schedule a report on page 113
What Schedules reports are on page 116
What Saved reports are on page 117
Operations Manager also provides methods that let you create, delete, and view your custom reports. You can configure the report options in Operations Manager with respect to Name, Display tab, Catalogs, and Fields.
Every report that is generated by DataFabric Manager, including those you customize, is based on the catalogs. For more information about how to use the CLI to configure and run reports, use the dfm report help command. The command describes how to list a report catalog and its fields, and explains command organization and syntax.
you can view aggregate reports from Control Center > Home > Member Details > Aggregates > Report.

Appliances: The appliances report shows you information about your storage systems, such as storage space used, storage space available, and the chargeback reports. By default, you can view the appliances reports from Control Center > Home > Member Details > Appliances > Report.

Array LUNs: The array LUNs report shows you information about the LUNs residing on the third-party storage arrays that are attached to a V-Series system. Information such as model, vendor, serial number of the LUN, and size is available in these reports. By default, you can view array LUNs reports from Control Center > Home > Member Details > Appliances > Report.

Aggregate Array LUNs: The aggregate array LUNs report shows you information about array LUNs contained on the aggregates of a V-Series system. Information such as model, vendor, serial number of the LUN, and size is available in these reports. By default, you can view aggregate array LUNs reports from Control Center > Home > Member Details > Aggregates > Report.

Backup: The backup report shows you information about the data transfer during a backup.

Datasets: The datasets report shows you information about the resource, protection, and conformance status of the dataset. This report also displays information about the policy with which the dataset is associated.

Dataset: The dataset report shows you information about the data transfer from individual mirror and backup relationships within a dataset.

Disks: The disks report shows you information about the disks in your storage systems, such as model, vendor, and size. You can view the performance characteristics and sort these reports by broken or spare disks, as well as by size. By default, you can view disks reports along with the appliance reports in the Member Details tab.

Events: The events report shows you information about event severity, user quotas, and SNMP traps. Information about all events, including deleted, unacknowledged, and acknowledged events, in the DataFabric Manager database is available in these reports. By default, you can view event reports in the Group Status tab.

FC Link: The FC link report shows you information about the logical and physical links of your FC switches and fabric interfaces. By default, you can view FC link reports along with the SAN reports in the Member Details tab.

FC Switch: The FC switch report shows you FC switches that have been deleted, have user comments associated with them, or are not operating. By default, you can view FC switch reports along with the SAN reports in the Member Details tab.

FCP Target: The FCP target report shows you information about the status, port state, and topology of the target. The FCP target report also shows the name of the FC switch, the port to which the target connects, and the HBA ports that the target can access. By default, you can view FCP target reports in the Control Center > Home > Member Details > LUNs tab.

File System: The file system report shows you information about all file systems, and you can filter them into reports by volumes, Snapshot copies, space reservations, qtrees, and chargeback information. By default, you can view file system reports in the Control Center > Home > Member Details > File Systems tab.

Group Summary: The group summary report shows the status, storage space used, and storage space available for your groups. The group summary report includes storage chargeback reports that are grouped by usage and allocation. By default, you can view group reports in the Group Status tab.

Host Users: The host users report shows you information about the existing users on the host. By default, you can view host users reports from Management > Host Users > Report.

Host Local Users: The host local users report shows you information about the existing local users on the host. By default, you can view the host local users reports from Management > Host Local Users > Report.

Host Domain Users: The host domain users report shows you information about the existing domain users on the host. By default, you can view the host domain users reports from Management > Host Domain Users > Report.

Host Usergroups: The host usergroups report shows you information about the existing user groups on the host. By default, you can view the host usergroups reports from Management > Host Usergroups > Report.

Host Roles: The host roles report shows you information about the existing roles on the host. By default, you can view the host roles reports from Management > Host Roles > Report.

LUN: The LUN report shows you information and statistics about the LUNs and LUN initiator groups on the storage systems, along with performance characteristics. By default, you can view LUN reports in the Member Details tab.

History, Performance Events: The History, Performance Events report displays all the Performance Advisor events. By default, you can view the History, Performance Events reports from Group Status > Events > Report.

Mirror: The Mirror report displays information about data transfer in a mirrored relationship.

Performance Events: The Performance Events report displays all the current Performance Advisor events. By default, you can view the Performance Events reports from Control Center > Home > Group Status > Events > Report.

Quota: The quota report shows you information about user quotas that you can use for chargeback reports. By default, you can view quota reports along with the group summary reports in the Group Status tab.

Report Outputs: The Report Outputs report shows you information about the report outputs that are generated by the report schedules. By default, you can view Report Outputs reports in Reports > Schedule > Saved Reports.

Report Schedules: The Report Schedules report shows you information about the existing report schedules. A report schedule is an association between a schedule and a report so that the report is generated at that particular time. By default, you can view Report Schedules reports in Reports > Schedule > Report Schedules.

Resource Pools: The Resource Pools report shows you information about the storage capacity that is available and the capacity that is used by all the aggregates in the resource pool. This report also displays the time zone and the status of the resource pool.

SAN Host: The SAN host report shows you information about SAN hosts, including FCP traffic and LUN information, and the type and status of the SAN host. By default, you can view SAN host reports in the Member Details tab.

Schedules: The Schedules report shows you information about the existing schedules and the names of the report schedules that are using a particular schedule. By default, you can view Schedules reports in Reports > Schedule > Schedules. The Schedules tab displays all the schedules. Schedules are separate entities that can be associated with reports. The Manage Schedules link in the Reports - Add a Schedule and Reports - Edit a Schedule pages points to this page.

Scripts: The Scripts report shows you information about the script jobs and script schedules. By default, you can view the Scripts reports in the Member Details tab.

Spare Array LUNs: The Spare Array LUNs report shows you information about spare array LUNs of a V-Series system, such as model, vendor, serial number of the LUN, and size. By default, you can view the Spare Array LUNs reports from Control Center > Home > Member Details > Appliances > Report.

SRM: The SRM report shows you information about the SRM files, paths, directories, and host agents. The file space statistics reported by an SRM path walk differ from the "volume space used" statistics provided by the file system reports. By default, you can view SRM reports in the Group Status tab.

Storage Systems: The storage systems report shows you information about the capacity and operations of your storage systems, performance characteristics, and the releases and protocols running on them. By default, you can view storage system reports in the Control Center > Home > Member Details > Appliances tab.

User Quotas: The User Quotas report shows you information about the disk space usage and user quota thresholds collected from the monitored storage systems. By default, you can view User Quotas reports along with the SRM reports in the Group Status tab.

vFiler: The vFiler report shows you the status, available protocols, storage usage, and performance characteristics of the vFiler units that you are monitoring with DataFabric Manager. By default, you can view the vFiler reports in the Member Details tab.

Volume: The Volume report shows you all volumes with the following details, for the current month or for the past month: name, capacity, available space, Snapshot capacity, growth rates, expendability, and chargeback by usage or allocation. The reports also show the performance characteristics of volumes. By default, you can view the volume reports along with the file system reports in the Member Details tab.
Note: The FC link and FC switch reports are available only when the SAN license for DataFabric
Manager is installed. NetApp has announced the end of availability for the SAN license. To facilitate this transition, existing customers can continue to license the SAN option with DataFabric Manager.
Configuring custom reports
Steps
1. Select Custom from the Reports menu.
2. Enter a (short) name for the report, as you want it to display in the CLI.
3. Optionally, enter a (long) name for the report, as you want it to display in Operations Manager.
4. Optionally, add comments to the report description.
5. Select the catalog on which the available report fields are based.
6. Select where you want DataFabric Manager to display this report in Operations Manager.
7. Select the related catalog from which you want to choose fields. You might need to expand the list to display the catalog you want to select.
8. You can view two types of information fields in the Choose From Available Fields section:
   To view fields related to the usage and configuration metrics of the object, click Usage.
   To view fields related to the performance metrics of the object, click Performance.
9. Select a field from Choose From Available Fields.
10. Optionally, enter the name for the field, as you want it displayed on the report. Keep your field name abbreviated and as clear as possible. You must be able to view a field name in the reports and determine which field the information relates to.
11. Optionally, specify the format of the field. If you choose not to format the field, the default format is used.
12. Click Add to move the field to the Reported Fields list.
13. Repeat Steps 8 to 12 for each field you want to include in the report.
14. Optionally, click Move Up or Move Down to reorder the fields.
15. If you clicked Performance, select the required data consolidation method from the list.
16. Click Create.
17. To view this report, locate it in the list at the lower part of the page and click the display tab name. Then find the report in the Report drop-down list.
Deleting custom reports

Steps

1. Select Custom from the Reports menu.
2. Find the report in the list of configured reports and select the report you want to delete.
3. Click Delete.
Putting data into spreadsheet format
Reports about LUNs, SAN hosts, and FCP targets are available on the LUNs page of the Member Details tab.
Steps
1. Click Member Details on the LUNs page to view the reports.
2. Click the spreadsheet icon, on the right side of the Report drop-down list, to view the data in spreadsheet format.
You can use the data in the spreadsheet to create your own charts and graphs or to analyze the data statistically.
Next topics
What Report Archival Directory is on page 112
Additional capabilities for categories of reports on page 112
What Report Schedules reports are on page 112
Scheduling a report using the All submenu on page 112
Scheduling a report using the Schedule submenu on page 113
What Report Archival Directory is
Report Archival Directory is a repository where all the reports are archived. You can modify the location of the destination directory by using the following CLI command:

dfm options set reportsArchiveDir=<destination dir>
When you modify the Report Archival Directory location, DataFabric Manager checks whether the directory is writable so that the reports can be archived. On a Windows operating system, if the directory exists on the network, the destination directory must be a UNC path. In addition, to save the reports, the scheduler service must run with an account that has write permissions on the directory. The server service must run with an account that has read and delete permissions on the directory, to view and delete report output, respectively. The permissions for a service can be configured by using the Windows Service Configuration Manager.
Note: You require the Database Write capability on the Global group to modify the Report Archival
Directory option.
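On a UNIX-based DataFabric Manager server, you can check a prospective archive directory yourself before reconfiguring the option. The helper below is a sketch; only the dfm options set command comes from this guide, and it is shown commented out so the snippet is safe to run anywhere.

```shell
# Check that a prospective Report Archival Directory exists and has
# the read/write permissions described above (sketch for UNIX hosts;
# the directory path used here is only an example).
check_archive_dir() {
    dir="$1"
    if [ -d "$dir" ] && [ -w "$dir" ] && [ -r "$dir" ]; then
        echo "OK: $dir is readable and writable"
        return 0
    fi
    echo "FAIL: $dir is missing or lacks read/write permission"
    return 1
}

check_archive_dir /tmp

# Once the check passes, point DataFabric Manager at the directory
# (command from this guide; run it on the DataFabric Manager server):
# dfm options set reportsArchiveDir=/tmp/dfm-reports
```

On Windows, the equivalent check is against the service accounts described above rather than the invoking user's permissions.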
Additional capabilities for categories of reports
For categories of reports such as SRM Reports and Event Reports, you require a report-specific read capability on the object in addition to the Database Read capability. The capabilities that you require for the categories of reports are as follows:
SRM Reports: SRM Read capability
Event Reports: Event Read capability
Mirror Reports: Mirror Read capability
Policy Reports: Policy Read capability
BackupManager Reports: BackupManager Read capability
What Report Schedules reports are
The Report Schedules report shows you information about the existing report schedules. A report schedule is an association between a report and a schedule so that the report is generated at a particular time. By default, Report Schedules reports are displayed in Reports > Schedule > Report Schedules.

Scheduling a report using the All submenu
You can schedule a report using the All submenu from the Reports menu in Operations Manager.
Steps

1. From any page, click Reports > All to display the Report Categories page. By default, the Recently Viewed category appears.
2. Select a report of your choice.
3. Click Show to display the selected report.
4. Click the Schedule This Report icon, located in the upper right corner of the page.
5. In the Reports - Add a Schedule page, specify the report schedule parameters. For details about the report schedule parameters, see the Operations Manager Help.
6. Click Add.

Scheduling a report using the Schedule submenu
You can schedule a report using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Add New Report Schedule.
3. In the Reports - Add a Schedule page, specify the report schedule parameters. For details about the report schedule parameters, see the Operations Manager Help.
4. Click Add.
Next topics
Editing a report schedule on page 114
Deleting a report schedule on page 114
Enabling a report schedule on page 114
Disabling a report schedule on page 115
Running a report schedule on page 115
Retrieving the list of enabled report schedules on page 115
Retrieving the list of disabled report schedules on page 115
Listing all the run results of a report schedule on page 116

Editing a report schedule
You can edit a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click the report schedule that you want to edit. Alternatively, click Saved Reports to list all the report outputs, and then click the Report Schedules entry that you want to edit.
3. In the Reports - Edit a Schedule page, edit the report schedule parameters.
4. Click Update.

Deleting a report schedule
You can delete a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to delete.
3. Click Delete Selected.

Enabling a report schedule
You can enable a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to enable.
3. Click Enable Selected.
Disabling a report schedule
You can disable a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to disable.
3. Click Disable Selected.

Running a report schedule
You can run a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to run.
3. Click Run Selected.

Retrieving the list of enabled report schedules
You can retrieve the list of enabled report schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the Report Schedules, Enabled entry from the Report drop-down list.

Retrieving the list of disabled report schedules
You can retrieve the list of disabled report schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the Report Schedules, Disabled entry from the Report drop-down list.
Listing all the run results of a report schedule
You can list all the run results of a report schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click the Last Result Value of a report schedule to display the run result for that particular report schedule.
Next topics

Listing all the schedules on page 116
Adding a New Schedule on page 116
Editing a schedule on page 117
Deleting a schedule on page 117

Listing all the schedules
You can list all the schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.

Adding a New Schedule
You can add a new schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click the Schedules tab.
3. Click Add New Schedule.
4. In the Schedules - Add a Schedule page, specify the schedule parameters. For details about the schedule parameters, see the Operations Manager Help.
5. Click Add.

Editing a schedule
You can edit a schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.
3. Click the schedule you want to edit.
4. In the Schedules - Edit a Schedule page, edit the schedule parameters.
5. Click Update.

Deleting a schedule
You can delete a schedule using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Schedules.
3. Select the schedule that you want to delete.
Note: If the schedule is used by a report schedule, the schedule cannot be selected for deletion.
Next topics
Listing the report outputs on page 118
Listing the successful report outputs on page 118
Listing the failed report outputs on page 118
Viewing the output of report outputs from the Status column on page 119
Viewing the output of report outputs from the Output ID column on page 119
Viewing the output details of a particular report output on page 119

Listing the report outputs
You can list the report outputs that are generated by all the report schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports to display the list of report outputs.

Listing the successful report outputs
You can list the successful report outputs generated by all the report schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Successful entry from the Report drop-down list.

Listing the failed report outputs
You can list the failed report outputs that are generated by all the report schedules using the Schedule submenu from the Reports menu.
Steps
1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Failed entry from the Report drop-down list.
Viewing the output of report outputs from the Status column
There are two possible methods to view the output of a particular report output that is generated by a report schedule: from the Status column, as described here, or from the Output ID column in Operations Manager.
Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the link under the Status column corresponding to the report output.

Viewing the output of report outputs from the Output ID column
There are two possible methods to view the output of a particular report output that is generated by a report schedule.
Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the Output ID column entry of the report output.
4. Click the Output link to view the output.

Viewing the output details of a particular report output
You can view the output details of a particular report output, which is generated by a report schedule, in Operations Manager.
Steps
1. From any page, select Schedule from the Reports menu.
2. Click Saved Reports.
3. Click the Output ID entry of the report output.
Data export provides the following benefits:
- Saves effort in collecting up-to-date report data from different sources
- Provides database access to the historical data collected by the DataFabric Manager server
- Provides database access to the information provided by the custom report catalogs in the DataFabric Manager server
- Provides and validates the following interfaces to the exposed DataFabric Manager views: Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC)
- Enables you to export the Performance Advisor and DataFabric Manager data to text files, easing the loading of data to a user-specific database
- Allows you to schedule the export
- Allows you to customize the rate at which the performance counter data is exported
- Allows you to specify the list of the counters to be exported
- Allows you to consolidate the sample values of the data export
Next topics
How to access the DataFabric Manager data on page 121 Where to find the database schema for the views on page 121 Two types of data for export on page 122 Files and formats for storing exported data on page 122 Format for exported DataFabric Manager data on page 122 Format for exported Performance Advisor data on page 123 Format for last updated timestamp on page 123
All these operations can be performed only through the CLI. For more information about the CLI commands, see the DataFabric Manager manual (man) pages. By using a third-party reporting tool, you can connect to the DataFabric Manager database for accessing views. The connection parameters are as follows:

Database name: monitordb
User name: <database user name>
Password: <database user password>
Port: 2638
dobroad: none
Links: tcpip
Note: The .jar files required for iAnywhere and jConnect JDBC drivers are copied as part of
DataFabric Manager installation. A new folder dbconn, under misc in the install directory, is created to hold the new jar files.
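The connection parameters listed above can be assembled into a client connection string. The key names used below (DBN, UID, PWD, LINKS) are assumed SQL Anywhere-style ODBC syntax, not something this guide specifies; substitute whatever key names your ODBC or JDBC driver documentation expects.

```shell
# Assemble an ODBC-style connection string from the parameters listed
# above. ASSUMPTION: DBN/UID/PWD/LINKS are SQL Anywhere-style key
# names; verify them against your driver's documentation.
DB_NAME="monitordb"
DB_USER="dbuser"        # substitute your database user name
DB_PASS="dbpassword"    # substitute your database user password
DB_PORT=2638

CONN="DBN=${DB_NAME};UID=${DB_USER};PWD=${DB_PASS};LINKS=tcpip(PORT=${DB_PORT})"
echo "$CONN"
```

The same parameters feed a JDBC URL when you use the iAnywhere or jConnect drivers mentioned in the note above.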
You can export the data, either on demand or on schedule, to text files by using the CLI. For more information about the CLI commands, see the DataFabric Manager manual (man) pages. To schedule the data export, you must have the CoreControl capability.
Note: Database views are created within the DataFabric Manager embedded database. This might
increase the load on the database server if there are many accesses from the third-party tools to the exposed views.
The fields in each line of the file correspond to the columns in iGroupView.
Export Status: [Success | Failed | Canceled | Running]
Delimiter: [tab | comma]
Sampling Interval: <secs>
Consolidation Method: [average | min | max | last]
History: <secs>
DataFabric Manager Data Export Completion Timestamp: <timestamp>
Last PA data Export for following hosts at time <timestamp>
-----<host-name>-----
-----<host-name>-----
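A status block in this layout can be checked from a script. The snippet below writes a sample status block to a hypothetical file name (this guide does not specify where the status is written) and extracts the Export Status field with awk.

```shell
# Write a sample export-status block (the file name is hypothetical,
# for illustration only) and pull out the "Export Status" field.
cat > /tmp/export_status.txt <<'EOF'
Export Status: Success
Delimiter: tab
Sampling Interval: 300
Consolidation Method: average
History: 86400
DataFabric Manager Data Export Completion Timestamp: 2009-05-01 02:00:00
EOF

# Split each line on ": " and print the value of the status line.
awk -F': ' '/^Export Status/ {print $2}' /tmp/export_status.txt
```

A monitoring script could branch on whether the extracted value is Success, Failed, Canceled, or Running, per the allowed values shown above.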
Security configurations
You can configure Secure Sockets Layer (SSL) in DataFabric Manager to monitor and manage storage systems over a secure connection by using Operations Manager.
Next topics
Types of certificates in DataFabric Manager on page 125
Secure communications with DataFabric Manager on page 128
Managed host options on page 129
Changing password for storage systems in DataFabric Manager on page 131
Changing passwords on multiple storage systems on page 132
Issue with modification of passwords for storage systems on page 133
Authentication control in DataFabric Manager on page 133
You can create self-signed certificates and add trusted CA-signed certificates with DataFabric Manager.
Next topics
Self-signed certificates in DataFabric Manager on page 125
Trusted CA-signed certificates in DataFabric Manager on page 126
Creating self-signed certificates in DataFabric Manager on page 126
Obtaining a trusted CA-signed certificate on page 127
Enabling HTTPS on page 127
Self-signed certificates are not signed by a mutually trusted authority for secure Web services. When the DataFabric Manager server sends a self-signed certificate to a client browser, the browser has no way of verifying the identity of DataFabric Manager. As a result, the browser displays a warning and asks the user to accept a security exception. After the user accepts the certificate, the browser allows the user to permanently import the certificate into the browser. If you decide to issue self-signed certificates, you must safeguard access to, and communications with, DataFabric Manager and the file system that contains its SSL-related private files.
1. Log in to the DataFabric Manager server as a DataFabric Manager administrator.
2. In the CLI, enter the following command: dfm ssl server setup
3. Enter the following information when prompted:
   Key Size
   Certificate Duration
   Country Name
   State or Province
   Locality Name
   Organization Name
   Organizational Unit Name
   Common Name
   Mail Address
DataFabric Manager is initialized with a self-signed certificate and stores the private key in the conf/server.key file in the DataFabric Manager installation directory.
DataFabric Manager creates a CSR file.
2. Submit the CSR to a CA for signing.
3. Import the signed certificate by entering the following command:
dfm ssl import cert_filename
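Pieced together, the certificate workflow can be sketched as the following CLI session. Only the two dfm ssl commands shown in this guide are used, and the certificate filename is a placeholder for the file returned by your CA:

```
# Create a self-signed certificate (interactive prompts follow)
dfm ssl server setup

# After a CA has signed your CSR, import the returned certificate
dfm ssl import signed_cert.pem
```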
Enabling HTTPS
You can set the httpsEnabled option from the DataFabric Manager CLI to configure the DataFabric Manager server to provide HTTPS services.
Before You Begin
You must set up the SSL server using the dfm ssl server setup command.
Steps
The default HTTPS port is 8443.
3. Stop the Web server by using the following command:
dfm service stop http
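The steps above can be pieced together as the following sketch. Only the httpsEnabled option and the dfm service stop command appear in this guide; the syntax for setting the option, and the start command, are assumptions modeled on the dfm option set and dfm service commands documented elsewhere in this chapter:

```
dfm option set httpsEnabled=yes   # assumed syntax for enabling HTTPS
dfm service stop http             # stop the Web server
dfm service start http            # assumed counterpart to the documented 'stop'
```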
128 | Operations Manager Administration Guide For Use with DataFabric Manager Server 3.8
How clients communicate with DataFabric Manager on page 128
SecureAdmin for secure connection with DataFabric Manager clients on page 128
Requirements for security options in Operations Manager on page 128
Guidelines to configure security options in Operations Manager on page 129
If you want to enable secure connections from any browser, you must enable HTTPS transport on the DataFabric Manager server. You cannot disable both HTTP and HTTPS transports; DataFabric Manager does not allow that configuration. To completely disable access to Operations Manager, stop the HTTP service at the CLI by using the following command: dfm service stop http. You must select the port for each transport type that you have enabled, and the ports must be different from each other.
Where to find managed host options on page 129
Guidelines for changing managed host options on page 130
Comparison between global and storage system-specific managed host options on page 131
Limitations in managed host options on page 131
Command-line interface: dfm option list (to view); dfm option set (to set)

Appliance-specific
Operations Manager location: Edit Appliance Settings page (Appliances > appliance name > Appliance Tools > Edit Settings)
Command-line interface: dfm host get (to view); dfm host set (to set)
Change the default value if you want a secure connection for active/active configuration operations and for running commands on the storage system.

Administration transport: This option allows you to select a conventional (HTTP) or secure (HTTPS) connection to monitor and manage storage systems. Change the default value if you want a secure connection for monitoring and displaying the storage system UI (FilerView).

Administration port: This option allows you to configure the administration port used to monitor and manage storage systems. If you do not configure the port option at the appliance level, the default value for the corresponding protocol is used.

hosts.equiv option: This option allows storage systems to authenticate users when a user name and password are not provided. Change the default value if you have selected the global default option and you want to set authentication for a specific storage system.
Note: If you do not set the transport and port options for a storage system, then DataFabric Manager
uses SNMP to get appliance-specific transport and port options for communication. If SNMP fails, then DataFabric Manager uses the options set at the global level.
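As a sketch, the appliance-specific options can be inspected and set from the CLI with the commands listed above. The storage-system name and the option name here are placeholders for illustration only; check the output of dfm host get for the exact option names on your system:

```
dfm host get storage1                              # view per-system managed host options
dfm host set storage1 hostAdminTransport=https     # hypothetical option name
```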
If a global setting conflicts with a storage system-specific setting, the storage system-specific setting takes precedence.
Note: You must use storage system-specific managed host options if you plan to use SecureAdmin
Steps
1. Go to the Appliance Details page for the storage system or hosting storage system (of the vFiler unit), and choose Edit Settings from the Appliance Tools menu (at the lower left of Operations Manager). The Edit Appliance Settings page is displayed.
2. In the Login field, enter a user name that DataFabric Manager uses to authenticate the storage system or vFiler unit.
3. In the Password field, enter a password that DataFabric Manager uses to authenticate the storage system or vFiler unit.
4. Click Update.
1. Log in to Operations Manager.
2. Depending on the type of storage system for which you want to manage passwords, select one of the following:
   Storage system: Management > Storage System > Passwords
   vFiler unit: Management > vFiler > Passwords
3. Enter the user name.
4. Enter the old password of the local user on the host.
   Note: This field is mandatory for storage systems running Data ONTAP versions earlier than 7.0 and for all vFiler units.
5. Enter a new password for the storage system or groups of storage systems.
6. Reenter the new password exactly the same way in the Confirm New Password field.
7. Select the target storage system or target groups.
8. Click Update.
Using hosts.equiv to control authentication on page 133
Configuring http and monitor service to run as different user on page 134
1. Edit the /etc/hosts.equiv file on the storage system and provide either the host name or the IP address of the system running DataFabric Manager, as an entry in the following format:
<host-name-or-ip-address>
2. Edit the option on the Edit Appliance Settings page in Operations Manager. Alternatively, provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the DataFabric Manager CLI, in the following format:
<host-name-or-ip-address> <username>
3. Depending on the operating system, add the appropriate entry:

   Linux: Provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the HTTP service, in the following format: <host-name-or-ip-address> <username>

   Windows: Provide the host name or the IP address of the system running DataFabric Manager, and the user name of the user running the HTTP, server, scheduler, and monitor services, in the following format: <host-name-or-ip-address> <username>
Note:
By default, the HTTP service runs as nobody user on Linux. By default, the HTTP, server, scheduler, and monitor services run as LocalSystem user on Windows.
If DataFabric Manager is running on a host named DFM_HOST, and USER1 is running the dfm commands, then by default, on a Linux operating system, you need to provide the following entries:
DFM_HOST
DFM_HOST USER1
DFM_HOST nobody
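As a runnable sketch, the entries above could be generated as follows. The host and user names are placeholders, and the output is written to a local example file for review rather than directly to /etc/hosts.equiv:

```shell
DFM_HOST=dfm.example.com   # placeholder: host running DataFabric Manager
DFM_USER=dfmadmin          # placeholder: user running the dfm CLI
HTTP_USER=nobody           # default HTTP service user on Linux

# Write the three entries to an example file; append them to
# /etc/hosts.equiv on the storage system after reviewing them.
{
  printf '%s\n' "$DFM_HOST"
  printf '%s %s\n' "$DFM_HOST" "$DFM_USER"
  printf '%s %s\n' "$DFM_HOST" "$HTTP_USER"
} > hosts.equiv.example
```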
For more information about configuring the /etc/hosts.equiv file, see the Data ONTAP Storage Management Guide.
Related information
1. Depending on the operating system, enter the appropriate command:

   Linux: dfm service runas -u <user-name> http
   Windows: dfm service runas -u <user-name> -p <password> [http] [monitor]
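For example, on Linux the HTTP service might be switched to a dedicated non-root account like this (the user name is a placeholder):

```
dfm service runas -u dfmweb http
```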
Note: For security reasons, the <user-name> cannot be root on Linux. On Windows hosts, <user-name> must belong to the administrator group.
You can then make an intelligent choice about how to efficiently use your existing space.
Note: The File SRM tab in Operations Manager includes other storage monitoring utilities.
How FSRM monitoring works on page 138
What capacity reports are on page 138
Difference between capacity reports and file system statistics on page 138
Prerequisites for FSRM on page 139
Setting up FSRM on page 139
NetApp Host Agent software overview on page 140
Managing host agents on page 142
Configuring host agent administration settings on page 144
What FSRM paths are on page 145
which are not exported by CIFS or NFS. Host agents can also gather FSRM data about other file system paths that are not on a NetApp storage system: for example, local disk or third-party storage systems.
For example, you can determine the capacity of a volume by viewing the corresponding volume capacity report (Reports > All Volumes > Volume Capacity).
Related concepts
Setting up FSRM
To set up and configure FSRM, you should perform a set of tasks, such as identifying FSRM host agents, adding host agents, adding paths, and setting up path-walk schedules.
Steps
2. Add new host agents manually if they have not been discovered.
3. Set up host agent administration access on the hosts to be monitored. You can verify the host administration access by checking the SRM Summary page.
4. Add paths.
5. Set up path-walk schedules.
Related concepts
Configuring host agent administration settings on page 144
Enabling administration access for one or more host agents on page 145
Enabling administration access globally for all host agents on page 145
Adding SRM paths on page 148
Adding a schedule on page 154
Related references
NetApp Host Agent communication on page 141
NetApp Host Agent software passwords on page 141
NetApp Host Agent software passwords for monitoring tasks on page 141
NetApp Host Agent software passwords for administration tasks on page 141
Any sessions initiated by DataFabric Manager using this user name and password are limited to basic monitoring operations. If you later decide to change the guest password on the host agent, you must also set the same user name and password in Operations Manager, using the Host Agent Monitoring Password option on the Options page.
You specify the password in the host agent's configuration UI (http://name-of-agent:4092/). This user name and password allow full access to the host agent. After setting the administration user name and password in the host agent, you must also set the same user name and password in Operations Manager on the Options page (Setup menu > Options > Host Agent link).
Note: This process of password change applies globally. To change passwords for one or more individual host agents, use the pages described below.
Edit host agent settings (including passwords): Edit Host Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name > Edit Settings in Tools list), or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts > appliance host name > edit)

Edit host agent properties (for example, monitoring and management API passwords, the HTTP port, and HTTPS settings): Edit Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name > Manage Host Agent in Host Tools list)

Configure host agent administration access: For one or more host agents, Edit Agent Logins page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Edit Agent Logins); or create a global default by using the Host Agent Options page (Setup menu > Options > Host Agent)

List available host agents: Host Agents, SRM view (Group Status tab > File SRM > Report drop-down list: SRM Summary > Host Agents, SRM). If you have enabled a File SRM license on the workstation, DataFabric Manager automatically discovers all hosts that it can communicate with. Communication between the host agent and DataFabric Manager takes place over HTTP or HTTPS (port 4092 or port 4093, respectively).

Edit host agents: Add Host Agents or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts link), or Host Agent Options page (Setup menu > Options > Host Agent)

Obtain SRM host agent information: Host Agent Details page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name link)

Change the SRM host agent monitoring interval: Monitoring Options page (Setup menu > Options link > Monitoring Options link > Agent monitoring interval option)

Modify NetApp Host Agent software settings: Edit Agent Settings page (Group Status tab > File SRM > Report drop-down list: SRM Summary > host agent name link > Manage Host Agent in Host Tools list)

Configure host agent discovery: Host Agent Discovery option on the Options page (Setup menu > Options link > Discovery > Host Agent Discovery field), or Add Host Agents or Edit Host Agents page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Hosts link)
You must enable administration access to your host agents before you can use the FSRM feature to gather statistical data.
Considerations
Global options apply to all affected devices that do not have individual settings specified for them. For example, the Host Agent Login option applies to all host agents. The host agent access and communication options are globally set for all storage systems using the values specified in the Host Agent Options section on the Options page. Default values are initially supplied for these options. However, you should review and change the default values as necessary. To enable administration access, the passwords set in Operations Manager must match those set for the NetApp Host Agent software.
Next topics
Enabling administration access for one or more host agents on page 145
Enabling administration access globally for all host agents on page 145
1. From the SRM Summary page, click Edit Agent Logins in the Host Agents Total section.
2. Select the host agents for which you want to enable administration access.
3. Modify the fields, as needed, and then click Update.
1. From any Summary page, select Options from the Setup drop-down menu.
2. Select Host Agent from the Edit Options list (in the left pane).
3. Enter (or modify) the required information, and then click Update.
This option changes all host agent login names and passwords, unless a host agent already has a different login name or password specified for it. For example, if an administrator has specified a password other than Global Default in the Password field of the Edit Host Agent Settings page, changing the global password option does not change that host agent's password.
146 | Operations Manager Administration Guide For Use with DataFabric Manager Server 3.8
Note: The FSRM path-walk feature can cause performance degradation. However, you can schedule path walks to run at times of low activity.
Adding CIFS credentials on page 146
Path management tasks on page 147
Adding SRM paths on page 148
Path names for CIFS on page 148
Conventions for specifying paths from the CLI on page 149
Viewing file-level details for a path on page 149
Viewing directory-level details for a path on page 149
Editing SRM paths on page 149
Deleting SRM paths on page 150
Automatically mapping SRM path on page 150
What path walks are on page 151
SRM path-walk recommendations on page 151
What File SRM reports are on page 151
Access restriction to file system data on page 152
Identification of oldest files in a storage network on page 152
FSRM prerequisites on page 153
Verifying administrative access for using FSRM on page 153
Verifying host agent communication on page 153
Creating a new group of hosts on page 154
Adding an FSRM path on page 154
Adding a schedule on page 154
Grouping the FSRM paths on page 155
Viewing a report that lists the oldest files on page 155
1. Click Setup > Options > Host Agent.
2. In the Host Agent Options page, specify the CIFS account name in the Host Agent CIFS Account field.
3. In the Host Agent CIFS Password field, type the password for the CIFS account.
4. Click Update.
Edit SRM path-walk schedules: SRM Path Walk Schedule Times page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link > schedule name > Add Walk Times link)

Delete SRM paths: Add SRM Paths or Edit SRM Paths page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Paths link)

Delete SRM path-walk schedules: Edit SRM Path Walk Schedules page (Group Status tab > File SRM > Report drop-down list: SRM Summary > Add/Edit Schedules link > schedule name)
Delete SRM path-walk schedule times: SRM Path Walk Schedule Details page (schedule name)
1. From the SRM Summary page, click the Add/Edit Paths link in the SRM Paths Total section.
2. From the SRM Host drop-down list in the Add a New SRM Path section, select the name of the host agent that you want to monitor.
3. Type a path name, select a schedule, and then click Add SRM Path.

Valid path entries:
host:/u/earth/work
host:/usr/local/bin
host:/engineering/toi
host:C:\Program Files
For CIFS, you must specify the path as a UNC path, as follows: host:\\storage system\share\dir
The SRM feature does not convert mapped drives to UNC path names. For example, suppose that drive H: on the system host5 is mapped to the following path name:
\\abc\users\jones
The path entry host5:H:\ fails because the FSRM feature cannot determine what drive H: is mapped to. The following path entry is correct:
host5:\\abc\users\jones
UNIX requires that you double all backslashes, unless the argument is enclosed in double quotation marks. This convention also applies to spaces in file names. For example:

$ dfm srm path add inchon:C:\\Program\ Files
$ dfm srm path add "inchon:C:\Program Files"
$ dfm srm path add oscar:/usr/local
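The escaping rules can be checked with printf, independent of dfm: both forms below deliver the same single argument to the command.

```shell
# Backslash-escape the backslash and the space...
printf '%s\n' inchon:C:\\Program\ Files
# ...or enclose the whole argument in double quotation marks.
printf '%s\n' "inchon:C:\Program Files"
# Both print: inchon:C:\Program Files
```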
1. From the SRM Summary page, click a path name in the SRM Paths Total section.
1. From the SRM Summary page, click a path name in the SRM Paths Total section.
2. Click the Browse Directories link in the SRM Path Tools list (at the lower left of Operations Manager).
3. To view an expanded view of directory information that includes a listing of files by type and by user, click the Extended Details link (at the upper right corner of the File SRM tab window).
1. From the SRM Summary page, click Add/Edit Paths in the SRM Paths Total section.
2. Select the SRM path that you want to modify.
3. Modify the fields, as needed, and then click Update.
1. From the SRM Summary page, click Add/Edit Paths in the SRM Paths Total section.
2. Select the SRM paths that you want to delete, and then click Delete.
If you cannot access SRM data, you cannot automatically map storage objects to an SRM path. If the SRM path is not on a storage device monitored by DataFabric Manager, you cannot associate the SRM path with a storage object.
Requirements for automatically mapping SRM path
You can automatically create a new path for an object by using the Create new SRM Path for this object link on the Details page for the object. To create a new path for an object, you must ensure the following:

The host agent is set up and properly configured.
The host agent passwords match those set in DataFabric Manager.
The host agent has access to the volume, qtree, or LUN:
- If the host agent is a Windows host, you must ensure that the CIFS passwords match. If the object is a LUN on a Windows host, SnapDrive must be installed and the LUN must be managed by SnapDrive.
- If the host agent is a UNIX host, the volume or qtree must be NFS mounted. If the object is a LUN on a UNIX host, the LUN must be formatted and mounted directly into the file system (volume managers are not supported).
The Host Agent Login and Management Password are set correctly.
You can also manually map SRM paths to volumes, qtrees, and LUNs.
name extension that exactly matches the file type specification in Operations Manager. For example, files that end in .JPG do not match the .jpg file type if the host agent is on UNIX, even though they would match if the agent were on Windows. Running the host agent on Windows avoids this problem.
Viewing file system statistics You can view the File SRM report for a group by clicking on the File SRM tab.
Steps
1. Click the group for which you want a File SRM report.
2. Click File SRM and select a report from the Report drop-down list. The following reports are available:
   SRM Paths, All
   SRM Directories, Largest
   SRM Files, Largest
   SRM Files, Least Recently Accessed
   SRM Files, Least Recently Modified
   SRM Files, Recently Modified
   SRM File Types
   SRM File Owners
Each FSRM report page displays statistics for the users who have storage space allocated on the objects (storage systems, aggregates, volumes, or qtrees) in your selected group. You can list all available reports by running the CLI command dfm report list (without arguments).
FSRM prerequisites
Before using the FSRM feature for the first time, you must verify that all prerequisites are met. For details, see the difference between capacity reports and file system statistics.
1. Select Options from the Setup drop-down menu.
2. Click Host Agent in the Edit Options section.
3. Change the Host Agent Login to Admin.
4. Verify that the Host Agent Management Password is set.
5. Click Update to apply the changes.
6. Click Home to return to the Control Center.
7. Click File SRM to return to the SRM Summary page.
1. Click File SRM, and then select SRM Summary from the Report drop-down list.
2. Check the list of host agents to view their status. If the status is Unknown, the host agent login settings might not be properly configured.
3. If the status of one or more of the storage systems is Unknown, click Edit Agent Logins.
4. Select the engineering host agents that the administrator wants to communicate with.
5. Edit the login or password information.
6. Click Update.
7. Click File SRM to return to the SRM Summary page.
1. From the SRM Summary page, select Host Agents, SRM from the Report drop-down list.
2. Select the host agent in the engineering domain.
3. From the buttons at the bottom of the page, click Add To New Group.
4. When prompted, enter the name Eng, and click Add. DataFabric Manager refreshes.
5. Select SRM Summary from the Report drop-down list.
1. From the SRM Summary page, click Add/Edit Paths.
2. Select a host from the SRM Host drop-down list.
3. Enter the path to be searched in the Path field.
4. Click Add A Schedule.
Adding a schedule
To create the path-walk schedule, the administrator must complete the following tasks.
Steps
1. Click the Add/Edit Schedules link in the SRM Summary page (File SRM tab).
2. In the Add a New Schedule section, enter a meaningful name for the schedule.
3. In the Schedule Template list, select a schedule or select None.
4. Click Add.
5. In the SRM Path Walk Schedule Times section, select the days and times to start the SRM path walks.
6. Click Update.
7. Click Home to navigate back to the main window.
8. Click File SRM.
9. On the SRM Summary page, click Add/Edit Paths.
10. In the Add a New SRM Path section, select a host agent to associate the new schedule with.
11. In the Schedule field, select the schedule name that the administrator just created.
12. Click Add SRM Path.
1. Click File SRM.
2. Select SRM Paths, All from the Report drop-down list.
3. Select the SRM path that the administrator wants to group.
4. From the buttons at the bottom of the page, click New Group.
5. When prompted, enter a name for the group. DataFabric Manager adds the new group and refreshes.
1. Click Home.
2. Click File SRM.
3. Select the engineering group in the Groups section at the left side of the tab window.
4. Select the report SRM Files, Least Recently Accessed from the Report drop-down list.
5. Review the data.
User quotas
You can use user quotas to limit the amount of disk space or the number of files that a user can use.
Next topics
About quotas on page 157
Why you use quotas on page 157
Overview of the quota process on page 158
Differences between hard and soft quotas on page 158
User quota management using Operations Manager on page 158
Where to find user quota reports in Operations Manager on page 159
Modification of user quotas in Operations Manager on page 160
Configuring user settings using Operations Manager on page 161
What user quota thresholds are on page 162
About quotas
Quotas provide a way to restrict or track the disk space and number of files that are used by a user, group, or qtree. Quotas are specified using the /etc/quotas file, and are applied to a specific volume or qtree.
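As an illustrative sketch of /etc/quotas syntax, the fragment below shows one tree, one user, and one group quota. The targets, limits, and volume names are placeholders; see the Data ONTAP documentation for the authoritative format:

```
#Quota target   type              disk   files
/vol/vol1/qt1   tree              500M   75K
jdoe            user@/vol/vol1    100M   10K
writers         group@/vol/vol1   200M   -
```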
Configure alarms to notify administrators of user quota events.
Related concepts
Where to find user quota reports in Operations Manager on page 159
Monitor interval for user quotas in Operations Manager on page 160
What user quota thresholds are on page 162
You must use DataFabric Manager to configure the root login name and root password of a storage system on which you want to monitor and manage user quotas.
You must configure and enable quotas for each volume for which you want to view the user quotas.
You must log in to Operations Manager as an administrator with the quota privilege to view user quota reports and events, so that you can configure user quotas for volumes and qtrees.

Additional requirements for editing quotas:
Directives, such as QUOTA_TARGET_DOMAIN and QUOTA_PERFORM_USER_MAPPING, must not be present in the /etc/quotas file on the storage system.
The /etc/quotas file on the storage system must not contain any errors.
When you decrease the User Quota Monitoring Interval option to a low value, DataFabric Manager collects the information more frequently. However, decreasing the User Quota Monitoring Interval might negatively affect the performance of the storage systems and DataFabric Manager.
For more information about these fields, see the Operations Manager Help.
Next topics
Prerequisites to edit user quotas in Operations Manager on page 160
Editing user quotas using Operations Manager on page 161
Operations Manager edits vFiler unit quotas by running jobs. If a vFiler quota editing job fails, verify the quota file on the hosting storage system. In addition, to protect the quota file against damage or loss, DataFabric Manager creates a backup file named DFM (timestamp).bak before starting a job. If the job fails, you can recover the data by renaming the backup quota file.
Ensure that the storage system meets the prerequisites before you edit user quotas in Operations Manager.
Steps
1. Click Control Center > Home > Group Status > File SRM (or Quotas) > Report > User Quotas, All.
2. Click any quota-related field for the required quota.
1. Click Control Center > Home > Group Status > File SRM (or Quotas) > Report > User Quotas, All.
2. Click the Edit Settings link in the lower-left corner.
3. Edit the settings as needed: E-mail Address of the user, Send Quota Alerts Now, User Quota Full Threshold (%), User Quota Nearly Full Threshold (%), Owner E-mail, Owner Name, and Resource Tag. You can leave the e-mail address field blank if you want DataFabric Manager to use the default e-mail address of the user.
4. Click Update.
What DataFabric Manager user thresholds are on page 162
User quota thresholds in Operations Manager on page 162
Ways to configure user quota thresholds in Operations Manager on page 162
Precedence of user quota thresholds in DataFabric Manager on page 163
Apply user quota thresholds to all quotas on a specific file system (volume or qtree) or on a group of file systems: You can apply thresholds by using the Edit Quota Settings links on the lower left pane of the Details page for a specific volume or qtree. You can access the Volume Details page by clicking a volume name at Control Center > Home > Member Details > File Systems > Report > Volumes, All. Similarly, for the Qtree Details page, click the qtree name at Control Center > Home > Member Details > File Systems > Report > Qtrees, All. To apply settings to a group of file systems, select the group name from the Apply Settings To list on the quota settings page.

Apply user quota thresholds to all quotas for all users on all file systems (that is, all user quotas in the DataFabric Manager database): You can apply thresholds at Setup > Options > Edit Options: Default Thresholds.
Management of LUNs, Windows and UNIX hosts, and FCP targets
Existing customers can continue to license the SAN option with DataFabric Manager. DataFabric Manager customers should check with their NetApp sales representative regarding other NetApp SAN management solutions.
Next topics
Management of SAN components on page 165
SAN and NetApp Host Agent software on page 166
List of tasks performed using NetApp Host Agent software on page 166
List of tasks performed to monitor targets and initiators on page 167
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 169
Information available on the LUN Details page on page 169
Information about the FCP Target Details page on page 171
Information about the Host Agent Details page on page 171
How storage systems, SAN hosts, and LUNs are grouped on page 173
Introduction to deleting and undeleting SAN components on page 173
Where to configure monitoring intervals for SAN components on page 174
Related information
Data ONTAP Block Access Management Guide for iSCSI and FCP http://now.netapp.com/NOW/knowledge/docs/san/#ontap_san
After SAN components have been discovered, the DataFabric Manager server starts collecting pertinent data (for example, which LUNs exist on which storage systems). Data is collected periodically and reported through various Operations Manager reports. (The frequency of data collection depends on the values that are assigned to the DataFabric Manager server monitoring intervals.) The DataFabric Manager server monitors LUNs, FCP targets, and SAN hosts for a number of predefined conditions and thresholds, for example, when the state of an HBA port changes to online or offline, or when the traffic on an HBA port exceeds a specified threshold. If a predefined condition is met or a threshold is exceeded, the DataFabric Manager server generates and logs an event in its database. These events can be viewed through the details page of the affected object. Additionally, you can configure the DataFabric Manager server to send notification about such events (also known as alarms) to an e-mail address, a pager, an SNMP trap host, or a script you write. In addition to monitoring LUNs, FCP targets, and SAN hosts, you can use the DataFabric Manager server to manage these components. For example, you can create, delete, or expand a LUN.
Management of LUNs, Windows and UNIX hosts, and FCP targets | 167
Note: NetApp Host Agent is also used for File Storage Resource Management functions, with a File
SRM license. For more information about the NetApp Host Agent software, see the NetApp Host Agent Installation and Administration Guide.
Next topics
Prerequisites to manage targets and initiators on page 167
Prerequisites to manage SAN hosts on page 168
Related concepts
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 169
Information available on the LUN Details page on page 169
Tasks performed on the LUN Details page on page 170
Information about the FCP Target Details page on page 171
Information about the Host Agent Details page on page 171
List of tasks performed on the Host Agent Details page on page 172
SAN deployments are supported on specific hardware platforms running Data ONTAP 6.3 or later. For more information about the supported hardware platforms, see the Compatibility and Configuration Guide for FCP and iSCSI Products. For more information about specific software requirements, see the DataFabric Manager Installation and Upgrade Guide.
Related information
DataFabric Manager Installation and Upgrade Guide http://now.netapp.com/NOW/knowledge/docs/DFM_win/dfm_index.shtml
Compatibility and Configuration Guide for FCP and iSCSI Products http://now.netapp.com/NOW/products/interoperability/
LUNs inherit access control settings from the storage system, volume, and qtree they are contained in. Therefore, to perform LUN operations on storage systems, you must have appropriate privileges set up on those storage systems.
Related information
NetApp Host Agent Installation and Administration Guide http://now.netapp.com/NOW/knowledge/docs/nha/nha_index.shtml
The NOW site - http://now.netapp.com/
For descriptions of each report field, see the Operations Manager Help.
Storage system on which the LUN exists
Volume or qtree on which the LUN exists
Initiator groups to which the LUN is mapped
You can access all LUN paths that are mapped to an initiator group by clicking the name of the initiator group.
Note: If a LUN is mapped to more than one initiator group, when you click an initiator group, the displayed page lists all the LUN paths that are mapped to that initiator group. Additionally, the report contains all other LUN mappings (LUN paths to initiator groups) that exist for those LUN paths.
Size of the LUN
Serial number of the LUN
Description of the LUN
Events associated with the LUN
Groups to which the LUN belongs
Number of LUNs configured on the storage system on which the LUN exists, and a link to a report displaying those LUNs
Number of SAN hosts mapped to the LUN, and a link to the report displaying those hosts
Number of HBA ports that can access this LUN, and a link to the report displaying those ports
Time of the last sample collected and the configured polling interval for the LUN
Graphs that display the following information:
LUN bytes per second: Displays the rate of bytes (bytes per second) read from and written to the LUN over time
LUN operations per second: Displays the rate of total protocol operations (operations per second) performed on the LUN over time
Related concepts
Reports for monitoring LUNs, FCP targets, and SAN hosts on page 169
Run a Command
Runs a Data ONTAP command on the storage system on which this LUN exists
Note: To manage a shared LUN on MSCS, perform the operation on the active controller. Otherwise,
the operation fails. You must have appropriate authentication to run commands on the storage system from the DataFabric Manager server.
Node name (WWNN) and port name (WWPN) of the target
Name of the FC port to which the target connects
Number of other FCP targets on the storage system on which the target is installed (link to report)
Number of HBA ports (SAN host ports) that the target can access (link to report)
Time of the last sample collected and the configured polling interval for the FCP target
You can access the Details page for a NetApp Host Agent by clicking its name in any of the SAN Host reports. The Details page for a Host Agent on a SAN host contains the following information:
Status of the SAN host and the time since the host has been up
The operating system and the NetApp Host Agent version, in addition to protocols and features running on the SAN host
The MSCS configuration information about the SAN host, if any, such as the cluster name, cluster partner, and cluster groups to which the SAN host belongs
The events that occurred on this SAN host
The number of HBAs and HBA ports on the SAN host (links to report)
The devices related to the SAN host, such as the FC switch ports connected to it and the storage systems accessible from the SAN host
Graphs of information, such as the HBA port traffic per second or the HBA port frames for different time intervals
For more information about the SAN Host reports, see the Operations Manager Help.
Related concepts
1. To allow administrator access, go to the Administrators page by selecting the Setup menu > Administrative Users.
Option: To create a new administrator
Description: In the Administrators page, complete the Add a New Administrator option, and then select GlobalSAN from the Roles list.

Option: To grant access to an existing administrator
Description: In the Administrators page, from the List of administrators, click the Edit column of the administrator to be granted access, and then select GlobalSAN from the Roles list.
You cannot stop monitoring a specific FCP target or an HBA port unless you first stop monitoring the storage system (for the FCP target) or the SAN host (for the HBA port) on which the target or the port exists.
Note: When you delete a SAN component from any group except Global, the component is deleted
only from that group. The DataFabric Manager server does not stop collecting and reporting data about it. You must delete the SAN component from the Global group for the DataFabric Manager server to stop monitoring it altogether.
Next topics
Deleting a SAN component on page 174
How a deleted SAN component is restored on page 174
1. Select the component you want to delete by clicking the check boxes in the left-most column of a report.
2. Click the Delete Selected button at the bottom of the report to delete the selected component.
Access to storage-related reports on page 175
Storage capacity thresholds in Operations Manager on page 175
Management of aggregate capacity on page 177
Management of volume capacity on page 182
Management of qtree capacity on page 187
Volumes and qtrees monitored on a vFiler unit on page 190
What clone volumes are on page 191
Why Snapshot copies are monitored on page 191
Storage chargeback reports on page 193
The chargeback report options on page 195
What deleting storage objects for monitoring is on page 198
You can find storage-related reports on the tabs accessible from the Member Details tab: Appliances, vFilers, File Systems, Aggregates, SANs, and LUNs. Each tab has a Report drop-down list from which you can select the report you want to display. For information about specific storage-related reports, see the Operations Manager Help.
Storage capacity thresholds determine at what point you want DataFabric Manager to generate events about capacity problems. You can configure alarms to send notification whenever a storage event occurs. When DataFabric Manager is installed, the storage capacity thresholds for all aggregates, volumes, and qtrees are set to default values. You can change the settings as needed for an object, a group of objects, or the Global group.
Next topics
Modification of storage capacity thresholds settings on page 176
Changing storage capacity threshold settings for global group on page 176
Changing storage capacity threshold settings for an individual group on page 176
Changing storage capacity threshold settings for a specific aggregate, volume, or qtree on page 177
1. Select Setup > Options.
2. Click Default Thresholds in the left pane.
3. Edit the default settings as needed.
4. Click Update.
1. Select Control Center > Home > Member Details.
2. Click Aggregates to change aggregate options, or File Systems to change volume or qtree options.
3. Click the name of an aggregate, volume, or qtree.
File system management | 177
4. Click Edit Settings under the Tools section in the left pane.
5. Edit the settings as needed.
6. Select the name of the group from the Apply Settings to drop-down list.
7. Click Update.
8. Approve the change by clicking OK on the verification page.
Changing storage capacity threshold settings for a specific aggregate, volume, or qtree
Perform the following steps to change the storage capacity threshold settings for a specific aggregate, volume, or qtree.
Steps
1. Click Aggregates to change aggregate options, or File Systems to change volume or qtree options.
2. Click the name of an aggregate, volume, or qtree.
3. Click Edit Settings under the Tools section in the left pane.
4. Edit the desired settings.
Note: To revert to the default settings, leave the fields empty.
Volume space guarantees and aggregate overcommitment on page 177
Available space on an aggregate on page 178
Considerations before modifying aggregate capacity thresholds on page 178
Aggregate capacity thresholds and their events on page 179
a space guarantee of none or file so that the aggregate size does not limit the volume size. Each volume can be larger than its containing aggregate. You can use the storage space that the aggregate provides, as needed, by creating LUNs or adding data to volumes. By using aggregate overcommitment, the storage system can advertise more available storage than actually exists in the aggregate. With aggregate overcommitment, you can provision greater amounts of storage than would be used immediately. Alternatively, if you have several volumes that sometimes need to grow temporarily, the volumes can dynamically share the available space with each other.
Note: If you have overcommitted your aggregate, you must monitor its available space carefully
and add storage as needed to avoid write errors due to insufficient space. For details about volume space reservations and aggregate overcommitment, see the Data ONTAP Storage Management Guide.
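The overcommitment arithmetic described above can be sketched as follows; the function name and sample figures are assumptions for illustration, and the 100 and 95 percent figures are the Aggregate Overcommitted and Aggregate Nearly Overcommitted defaults documented later in this chapter.

```python
def committed_percent(aggregate_capacity_gb, volume_sizes_gb):
    """Committed space (sum of contained volume sizes) as a percentage of capacity."""
    return 100.0 * sum(volume_sizes_gb) / aggregate_capacity_gb

# A 1000-GB aggregate containing volumes totaling 1200 GB is 120 percent
# committed, which crosses both the Aggregate Nearly Overcommitted (95)
# and Aggregate Overcommitted (100) default thresholds.
pct = committed_percent(1000, [400, 500, 300])
```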
Also, review the capacity graphs of historical data to get a sense of how the amount of storage used changes over time.
Set the Aggregate Full and Aggregate Nearly Full thresholds: Set these thresholds so that you have time to take corrective action if storage usage approaches capacity. Because the aggregate is overcommitted, you might want to set the Aggregate Full and Aggregate Nearly Full thresholds to values lower than the default. Lowering the thresholds generates an event well before the storage fills completely. Early notification gives you more time to take corrective action, such as installing more storage, before the storage space is full and write errors occur. If you do not use aggregate overcommitment as a storage-management strategy, leave the Aggregate Overcommitted and Nearly Overcommitted threshold values unchanged from their defaults.
Set the Aggregate Nearly Full threshold: If an aggregate is routinely more than 80 percent full, set the Aggregate Nearly Full threshold to a value higher than the default.
Note: If you edit capacity thresholds for a particular aggregate, the edited thresholds override the
global thresholds. You can edit thresholds for a particular aggregate, from the Aggregate Details page.
as the only member. You can set the following aggregate capacity thresholds:

Aggregate Full (%)
Description: Specifies the percentage at which an aggregate is full.
Note: To reduce the number of Aggregate Full Threshold events generated, you can set the Aggregate Full Threshold Interval to a value greater than 0 seconds. This causes DataFabric Manager to generate an Aggregate Full event only if the condition persists for the specified time.
Default value: 90
Event generated: Aggregate Full
Event severity: Error
Corrective action: Take one or more of the following actions:
To free disk space, ask your users to delete files that are no longer needed from volumes contained in the aggregate that generated the event.
Add one or more disks to the aggregate that generated the event. Add disks with caution. After you add a disk to an aggregate, you cannot remove it without first destroying all flexible volumes present in the aggregate to which the disk belongs; you can destroy the aggregate itself only after all the flexible volumes are removed from it.
Temporarily reduce the Snapshot reserve. By default, the reserve is 20 percent of disk space. If the reserve is not in use, reducing it can free disk space, giving you more time to add a disk.
There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. It is, therefore, important to maintain a large enough reserve for Snapshot copies so that the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
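The threshold-interval behavior in the note above (an event fires only when the condition has persisted for the configured interval) can be sketched as follows; the function name and the (timestamp, percent) sampling format are assumptions for the sketch, not product internals.

```python
def persists_over_threshold(samples, threshold, interval):
    """samples: (timestamp_seconds, used_percent) pairs, oldest first.
    True only when the latest unbroken run of over-threshold samples
    spans at least `interval` seconds."""
    start = None
    for ts, pct in samples:
        if pct > threshold:
            if start is None:
                start = ts          # condition became true at this sample
        else:
            start = None            # condition cleared; reset the clock
    return start is not None and samples[-1][0] - start >= interval

# With the default threshold of 90 percent and a 300-second interval:
fires = persists_over_threshold([(0, 95), (100, 96), (400, 97)], 90, 300)  # True
quiet = persists_over_threshold([(0, 80), (100, 96), (200, 97)], 90, 300)  # False
```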
Aggregate Nearly Full (%)
Description: Specifies the percentage at which an aggregate is nearly full.
Default value: 80. The value for this threshold must be lower than the value for the Aggregate Full Threshold for DataFabric Manager to generate meaningful events.
Event generated: Aggregate Almost Full
Event severity: Warning
Corrective action: Same as Aggregate Full.

Aggregate Overcommitted (%)
Description: Specifies the percentage at which an aggregate is overcommitted.
Default value: 100
Event generated: Aggregate Overcommitted
Event severity: Error
Corrective action: Take one or more of the following actions:
Create new free blocks in the aggregate by adding one or more disks to the aggregate that generated the event.
Note: Add disks with caution. Once you add a disk to an aggregate, you
cannot remove it without first destroying all flexible volumes present in the
aggregate to which the disk belongs. You can destroy the aggregate itself only after all the flexible volumes are destroyed.
Temporarily free some already occupied blocks in the aggregate by taking unused flexible volumes offline.
Note: When you take a flexible volume offline, it returns any space it uses
to the aggregate. However, when you bring the flexible volume online again, it requires the space again.
Permanently free some already occupied blocks in the aggregate by deleting unnecessary files.
Aggregate Nearly Overcommitted (%)
Description: Specifies the percentage at which an aggregate is nearly overcommitted.
Default value: 95. The value for this threshold must be lower than the value for the Aggregate Overcommitted Threshold for DataFabric Manager to generate meaningful events.
Event generated: Aggregate Almost Overcommitted
Event severity: Warning
Corrective action: Same as Aggregate Overcommitted.

Aggregate Snapshot Reserve Nearly Full Threshold (%)
Description: Specifies the percentage of the Snapshot reserve on an aggregate that you can use before the system raises the Aggregate Snapshots Nearly Full event.
Default value: 80
Event generated: Aggregate Snapshot Reserve Almost Full
Event severity: Warning
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the aggregate Snapshot autodelete option, it is important to maintain a large enough reserve to ensure that there is always space available to create new files or modify existing ones. See the Operations Manager Help for instructions on how to identify Snapshot copies you can delete. For more information about the Snapshot reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.

Aggregate Snapshot Reserve Full Threshold (%)
Description: Specifies the percentage of the Snapshot reserve on an aggregate that you can use before the system raises the Aggregate Snapshots Full event.
Default value: 90
Event generated: Aggregate Snapshot Reserve Full
Event severity: Warning
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them.
Note: A newly created traditional volume is tightly coupled with its containing aggregate, so the capacity of the aggregate determines the capacity of the new traditional volume. For this reason, synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Volume capacity thresholds and events on page 182
Normal events for a volume on page 186
Modification of the thresholds on page 187
as the only member. You can set the following volume capacity thresholds:
you can set the Volume Full Threshold Interval to a non-zero value. By default, the Volume Full Threshold Interval is set to zero. A non-zero interval causes DataFabric Manager to generate a Volume Full event only if the condition persists for the specified period.
Default value: 90
Event generated: Volume Full
Event severity: Error
Corrective action: Take one or more of the following actions:
Ask your users to delete files that are no longer needed, to free disk space.
For flexible volumes, if the containing aggregate has enough space, you can increase the volume size.
For traditional volumes, if the containing aggregate has limited space, you can increase the size of the volume by adding one or more disks to the containing aggregate.
Note: Add disks with caution. Once you add a disk to an aggregate,
you cannot remove it without destroying the volume and its containing aggregate.
For traditional volumes, temporarily reduce the Snapshot copy reserve. By default, the reserve is 20 percent of disk space. If the reserve is not in use, reducing it frees disk space, giving you more time to add a disk.
There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. Therefore, it is important to maintain a large enough reserve for Snapshot copies so that the active file system always has space available to create new files or modify existing ones. For more information about the Snapshot copy reserve, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
Volume Nearly Full Threshold (%)
Description: Specifies the percentage at which a volume is considered nearly full.
Default value: 80. The value for this threshold must be lower than the value for the Volume Full Threshold for DataFabric Manager to generate meaningful events.
Event generated: Volume Almost Full
Event severity: Warning
Corrective action: Same as Volume Full.

Volume Space Reserve Nearly Depleted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that crosses this threshold is getting close to having write failures.
Default value: 80
Event generated: Volume Space Reservation Nearly Depleted
Event severity: Warning

Volume Space Reserve Depleted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed all its reserved blocks. This option applies to volumes with LUNs, Snapshot copies, no free blocks, and a fractional overwrite reserve of less than 100%. A volume that has crossed this threshold is getting dangerously close to having write failures.
Default value: 90
Event generated: Volume Space Reservation Depleted
Event severity: Error

When the status of a volume returns to normal after one of the preceding events, events with a severity of Normal are generated. Normal events do not generate alarms or appear in default event lists, which display events of Warning or worse severity.

Volume Quota Overcommitted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed the whole of the overcommitted space for that volume.
Default value: 100
Event generated: Volume Quota Overcommitted
Event severity: Error
Corrective action: Take one or more of the following actions:
Create new free blocks by increasing the size of the volume that generated the event.
Permanently free some of the already occupied blocks in the volume by deleting unnecessary files.
Volume Quota Nearly Overcommitted Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of the overcommitted space for that volume.
Default value: 95
Event generated: Volume Quota Almost Overcommitted
Event severity: Warning
Corrective action: Same as that of Volume Quota Overcommitted.

Volume Growth Event Minimum Change (%)
Description: Specifies the minimum change in volume size (as a percentage of total volume size) that is acceptable. If the change in volume size is more than the specified value, and the growth is abnormal with respect to the volume-growth history, the DataFabric Manager server generates a Volume Growth Abnormal event.
Default value: 1
Event generated: Volume Growth Abnormal

Volume Snap Reserve Full Threshold (%)
Description: Specifies the value (percentage) at which the space that is reserved for taking volume Snapshot copies is considered full.
Default value: 90
Event generated: Volume Snap Reserve Full
Event severity: Error
Corrective action: There is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them. If you disable the volume Snapshot autodelete option, it is important to maintain a large enough reserve to ensure that there is always space available to create new files or modify existing ones. For instructions on how to identify Snapshot copies you can delete, see the Operations Manager Help.

User Quota Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed all the allocated space (disk space or files used) as specified by the user's quota. The user's quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Full event or a User Files Quota Full event.
Default value: 90
Event generated: User Quota Full

User Quota Nearly Full Threshold (%)
Description: Specifies the value (percentage) at which a user is considered to have consumed most of the allocated space (disk space or files used) as specified by the user's quota.
The user's quota includes the hard limit in the /etc/quotas file. If this limit is exceeded, the DataFabric Manager server generates a User Disk Space Quota Almost Full event or a User Files Quota Almost Full event.
Default value: 80
Event generated: User Quota Almost Full
Volume No First Snapshot Threshold (%)
Description: Specifies the value (percentage) at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 90
Event generated: Volume No First Snapshot
Volume Nearly No First Snapshot Threshold (%)
Description: Specifies the value (percentage) at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 80
Event generated: Volume Almost No First Snapshot
Note: When a traditional volume is created, it is tightly coupled with its containing aggregate so
that its capacity is determined by the capacity of the aggregate. For this reason, you should synchronize the capacity thresholds of traditional volumes with the thresholds of their containing aggregates.
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Click the Events tab; then go to the Report drop-down list and select the History report.
Volume Snapshot copy thresholds and events on page 187
Qtree capacity thresholds and events on page 189
as the only member. You can set the following volume Snapshot thresholds:

Volume Snap Reserve Full Threshold (%)
Description: Specifies the percentage at which the space that is reserved for taking volume Snapshot copies is considered full.
Default value: 90
Event generated: Snapshot Reserve Full
Event severity: Warning
Corrective action:
1. Access the Volume Snapshot details report.
2. Select the Snapshot copies.
3. Click Compute Reclaimable.

Volume Nearly No First Snapshot Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed most of the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 80
Event generated: Nearly No Space for First Snapshot
Event severity: Warning

Volume No First Snapshot Threshold (%)
Description: Specifies the percentage at which a volume is considered to have consumed all the free space for its space reservation. This is the space that the volume needs when the first Snapshot copy is created. This option applies to volumes that contain space-reserved files, no Snapshot copies, a fractional overwrite reserve set to greater than 0, and where the sum of the space reservations for all LUNs in the volume is greater than the free space available to the volume.
Default value: 90
Event generated: No Space for First Snapshot
Event severity: Warning

Volume Snapshot Count Threshold
Description: Specifies the number of Snapshot copies which, if exceeded, is considered too many for the volume. A volume is allowed up to 255 Snapshot copies.
Default value: 250
Event generated: Too Many Snapshots
Event severity: Error

Volume Too Old Snapshot Threshold
Description: Specifies the age of a Snapshot copy which, if exceeded, is considered too old for the volume. The Snapshot age can be specified in seconds, minutes, hours, days, or weeks.
Default value: 52 weeks
Event generated: Too Old Snapshots
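The two count and age thresholds above reduce to simple comparisons. As a hedged sketch with invented function names (the 250-copy and 52-week figures are the documented defaults):

```python
from datetime import timedelta

def too_many_snapshots(count, threshold=250):
    """True once the Snapshot copy count exceeds the threshold (a volume allows up to 255)."""
    return count > threshold

def too_old_snapshot(age, threshold=timedelta(weeks=52)):
    """True once a Snapshot copy's age exceeds the threshold."""
    return age > threshold

crowded = too_many_snapshots(251)              # crosses the default of 250
stale = too_old_snapshot(timedelta(weeks=53))  # older than the 52-week default
```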
only member. You can set the following qtree capacity thresholds:

Qtree Full (%)
Description: Specifies the percentage at which a qtree is considered full.
Note: To reduce the number of Qtree Full Threshold events generated, you can set the Qtree Full Threshold Interval to a value greater than 0 seconds. This causes DataFabric Manager to generate a Qtree Full event only if the condition persists for the specified period.
Default value: 90
Event generated: Qtree Full
Event severity: Error
Corrective action: Take one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Increase the hard disk space quota for the qtree.
Qtree Nearly Full Threshold (%)
Description: Specifies the percentage at which a qtree is considered nearly full.
Default value: 80. The value for this threshold must be lower than the value for the Qtree Full Threshold for DataFabric Manager to generate meaningful events.
Event generated: Qtree Almost Full
Event severity: Warning
Corrective action: Take one or more of the following actions:
Ask users to delete files that are no longer needed, to free disk space.
Increase the hard disk space quota for the qtree.
Related tasks
How qtree quotas are monitored on page 190
Where to find vFiler storage resource details on page 190
Use DataFabric Manager to answer the following questions about Snapshot copies:
How much aggregate and volume space is used for Snapshot copies?
Is there adequate space for the first Snapshot copy?
Which Snapshot copies can be deleted?
Which volumes have high Snapshot growth rates?
Which volumes have Snapshot copy reserves that are nearing capacity?
Snapshot copy monitoring requirements on page 192
Detection of Snapshot copy schedule conflicts on page 192
Dependencies of a Snapshot copy on page 192
Thresholds on Snapshot copies on page 193
The Snapshots area of the Volume Details page displays information about up to 10 of the most recent Snapshot copies for the volume, including the time each Snapshot copy was last accessed and its dependencies, if any. This information helps you determine whether you can delete a Snapshot copy or, if the Snapshot copy has dependencies, what steps you need to take to delete it. To generate a page that lists dependent storage components for a Snapshot copy, and the steps you need to take to delete the copy, click the hyperlinked text in the Dependency column for that Snapshot copy. The link is not available when the dependency is due to SnapMirror or SnapVault, or to a FlexClone volume that is offline.
Determine the current month's and the last month's values for the storage chargeback report on page 194
Chargeback reports in various formats on page 194
Determine the current month's and the last month's values for the storage chargeback report
You can calculate the current and previous month's chargeback reports. When you select a Chargeback, This Month or Chargeback, Last Month view, the data displayed pertains to the current or the last billing cycle, respectively. The Day of the Month for Billing option determines when the current month begins and the last month ends, as described in the following example. Company A's DataFabric Manager system is configured for the billing cycle to start on the fifth day of every month. If Chris (an administrator at Company A) views the Chargeback, This Month report on April 3, the report displays data for the period of March 5 through midnight (GMT) of April 2. If Chris views the Chargeback, Last Month report on April 3, the report displays data for the period of February 5 through March 4. All chargeback reports contain Period Begin and Period End information that indicates when the billing cycle begins and ends for the displayed report.
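The billing-cycle boundary logic in the example above can be sketched as follows; the function name is an assumption, and the dates mirror Chris's example (billing day 5, report viewed on April 3).

```python
from datetime import date

def current_period_begin(today, billing_day):
    """First day of the billing cycle that contains the given viewing date."""
    if today.day >= billing_day:
        return date(today.year, today.month, billing_day)
    # Before the billing day, the current cycle began in the previous month.
    prev_month = today.month - 1 or 12
    prev_year = today.year - (1 if today.month == 1 else 0)
    return date(prev_year, prev_month, billing_day)

# Viewing on April 3 with billing day 5: the current cycle began on March 5.
begin = current_period_begin(date(2009, 4, 3), 5)
```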
groups-chargeback-last-month
groups-chargeback-allocation-this-month
groups-chargeback-allocation-last-month
filers-chargeback-this-month
filers-chargeback-last-month
filers-chargeback-allocation-this-month
filers-chargeback-allocation-last-month
volumes-chargeback-this-month
volumes-chargeback-last-month
volumes-chargeback-allocation-this-month
volumes-chargeback-allocation-last-month
qtrees-chargeback-this-month
qtrees-chargeback-last-month
qtrees-chargeback-allocation-this-month
qtrees-chargeback-allocation-last-month
For more information about using the dfm report command, see the DataFabric Manager man pages.
Specifying storage chargeback options at the global or group level on page 196
The storage chargeback increment on page 196
Currency display format for storage chargeback on page 196
Specification of the annual charge rate for storage chargeback on page 197
Specification of the Day of the Month for Billing for storage chargeback on page 197
The formatted charge rate for storage chargeback on page 198
Go to...
For all objects that DataFabric Manager manages: the Options page (Setup Options link); then select Chargeback in the Edit Options section.
For objects in a specific group: the Edit Group Settings page (click Edit Groups in the left pane); then click the Edit column for the group for which you want to specify an annual charge rate.
Monthly
Although a decimal separator is optional in the currency format, if you use it, you must specify at least one # character after the decimal separator, for example, $ #,###.# and JD #,###.###. You can optionally specify a thousands-separator. A thousands-separator separates digits in numbers into groups of three. For example, the comma (,) is the thousands-separator in the number 567,890,123. The symbol used as a thousands-separator depends on the type of currency. For example, a comma (,) is used for US dollars and a period (.) is used for Danish Kroner. You can use any currency symbol, such as EUR or €, to suit your needs. If the currency symbol you want to use is not part of the standard ASCII character set, use the code specified by the HTML Coded Character Set. For example, use &#165; for the Yen (¥) symbol. You can specify only one currency format per DataFabric Manager installation. For example, if you specify $ #,###.## as your currency format for a specific installation, this format is used for all chargeback reports generated by that installation.
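As an illustration of how such a pattern might be interpreted, the following sketch (a hypothetical helper, not DataFabric Manager code; it assumes a US-style pattern with the comma as the thousands-separator and the period as the decimal separator) derives the currency symbol and the number of decimal places from the pattern:

```python
def format_charge(amount, pattern="$ #,###.##"):
    # Everything before the first '#' is treated as the currency symbol.
    symbol = pattern[:pattern.index("#")]
    # The '#' characters after the decimal separator set the decimal places.
    decimals = len(pattern.rsplit(".", 1)[1]) if "." in pattern else 0
    # Python's ',' format specifier groups digits in threes.
    return symbol + f"{amount:,.{decimals}f}"
```

For example, format_charge(567890.123) renders as $ 567,890.12.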
Enter the value in the Annual Charge Rate box by using a period (.) as the decimal separator, even if you are specifying a currency format that uses a comma (,) as the decimal separator. For example, to specify 150,55 Danish Kroner, enter 150.55.
Specification of the Day of the Month for Billing for storage chargeback
You can specify the day of the month on which the billing cycle begins. The Day of the Month for Billing setting indicates that day; by default, this value is set to 1. The following values can be specified for this option:
1 through 28
These values specify the day of the month. For example, if you specify 15, it indicates the fifteenth day of the month.
-27 through 0
These values specify the number of days before the last day of the month. Therefore, 0 specifies the last day of the month. For example, if you want to bill on the fifth day before the month ends every month, specify -4.
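The mapping from this option to a concrete date can be sketched as follows (a hypothetical helper for illustration; the names are assumptions, not DataFabric Manager code):

```python
import calendar
from datetime import date

def billing_date(year, month, setting):
    # Resolve the Day of the Month for Billing setting to a date:
    # 1 through 28 mean that day of the month; -27 through 0 count
    # back from the last day of the month (0 = last day).
    last_day = calendar.monthrange(year, month)[1]
    if 1 <= setting <= 28:
        return date(year, month, setting)
    if -27 <= setting <= 0:
        return date(year, month, last_day + setting)
    raise ValueError("setting must be 1..28 or -27..0")
```

A setting of -4 in April (30 days) resolves to April 26, the fifth day before the month ends.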
When you delete a storage object from a group, the object is removed only from that group; DataFabric Manager does not stop collecting and reporting data about it. You must delete the object from the Global group for DataFabric Manager to stop monitoring it.
Next topics
Reports of deleted storage objects on page 198 Undeleting a storage object for monitoring on page 199
Aggregates, Deleted; Fibre Channel Switches, Deleted; SAN Hosts, Deleted; LUNs, Deleted
Note: These reports are accessible from the Report drop-down list on the Member Details tab for
each storage object (Storage Systems, vFiler units, File Systems, Aggregates, SANs, and LUNs).
1. Select the check box next to each object you want to return to the database.
2. Click Undelete.
Management tasks performed using Operations Manager on page 201
Operations Manager components for managing your storage system on page 202
Storage system groups on page 202
Custom comment fields in Operations Manager on page 203
Consolidated storage system and vFiler unit data and reports on page 203
Where to find information about a specific storage system on page 204
Managing active/active configurations with DataFabric Manager on page 207
Remote configuration of a storage system on page 210
Storage system management using FilerView on page 212
Introduction to MultiStore and vFiler units on page 213
Link to FilerView for a selected storage system or vFiler unit
Insert values into custom comment fields
View user, qtree, and group quotas
Edit user quotas
Storage system management | 203 To display information about a specific group of systems, select the desired group name in the Groups pane on the left side of Operations Manager. To manage your storage systems effectively, you should organize them into smaller groups so that you can view information only about objects in which you are interested. You can group your storage systems to meet your business needs, for example, by geographic location, operating system version, and storage system platform.
You can view global and group information and select individual system data in detail using the Member Details report pages.
Next topics
Tasks performed by using the storage systems and vFiler unit report pages on page 204
What Appliance Tools of Operations Manager is on page 204
Tasks performed by using the storage systems and vFiler unit report pages
You can view system data for all groups, generate spreadsheet reports, get information about storage systems, and launch FilerView. Similar to the other Operations Manager Control Center tab pages, the Appliances and vFiler reports enable you to view a wide variety of details in one place. You can perform the following tasks:
View system data for all or a group of monitored systems
Generate spreadsheet reports
Obtain detailed information about a specific storage system
Launch FilerView
Tasks performed from a Details page of Operations Manager on page 205
Editable options for storage system or vFiler unit settings on page 205
What the Diagnose Connectivity tool does on page 206
The Refresh Monitoring Samples tool on page 206
The Run a Command tool on page 206
The Run Telnet tool on page 207
Console connection through Telnet on page 207
Authentication
Threshold values
The threshold values indicate the level of activity that must be reached on the storage system before an event is triggered. By using these options, you can set specific storage system or group thresholds. For example, the Appliance CPU Too Busy threshold indicates the highest level of activity the CPU can reach before a CPU Too Busy event is triggered. Threshold values specified on this page supersede any global values specified on the Options page.
Threshold intervals
The threshold interval is the period of time during which a specific threshold condition must persist before an event is triggered. For example, if the monitoring cycle time is 60 seconds and the threshold interval is 90 seconds, the event is generated only if the condition persists for two monitoring cycles. You can configure threshold intervals only for specific thresholds, as listed on the Options page.
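The relationship between the monitoring cycle and the threshold interval can be expressed as a simple calculation (illustrative only; the helper name is an assumption, not part of the product):

```python
import math

def cycles_required(cycle_seconds, interval_seconds):
    # The condition must persist for the whole threshold interval,
    # so the number of consecutive monitoring cycles rounds up.
    return math.ceil(interval_seconds / cycle_seconds)
```

With a 60-second monitoring cycle and a 90-second threshold interval, the condition must persist for two cycles before the event is generated, as in the example above.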
Related information
Prerequisite
DataFabric Manager uses the following connection protocols for communication:
Remote Shell (RSH) connection for running a command on a storage system
To establish an RSH connection and run a command on a storage system, DataFabric Manager must authenticate itself to the storage system. Therefore, you must enable RSH access to the storage system and configure the login and password credentials that are used to authenticate to Data ONTAP.
Secure Socket Shell (SSH) connection for running a command on an RLM card, if the installed card provides a CLI
Restrictions
The following restrictions exist:
Several Data ONTAP commands that are available on storage systems are restricted in DataFabric Manager. For a list of restricted commands, see the Operations Manager Help.
You cannot run a command on the Global group.
Related concepts
Remote configuration of a storage system on page 210
DataFabric Manager CLI to configure storage systems on page 210
Prerequisites for running remote CLI commands from Operations Manager on page 211
What remote platform management interface is on page 259
Related tasks
Running commands on a specific storage system on page 211
Running commands on a group of storage systems from Operations Manager on page 211
The cluster console enables you to view the status of an active/active configuration (controller and its partner) and perform takeover and giveback operations between the controllers. For detailed information about active/active configurations, see the Data ONTAP Storage Management Guide.
Next topics
Requirements for using the cluster console in Operations Manager on page 208
Accessing the cluster console on page 208
What the Takeover tool does on page 208
What the Giveback tool does on page 209
DataFabric Manager CLI to configure storage systems on page 210
Related information
1. Click Control Center > Home > Member Details > Appliances.
2. Click the Report drop-down list and select Active/Active Controllers, All.
3. Click the appliance.
4. Click View Cluster Console under Appliance Tools.
After you select Takeover, the Takeover page is displayed. The Takeover page enables you to select the type of takeover you want the controller to perform. You can select one of the following options:
Take Over Normally
This option is the equivalent of running the cf takeover command, in which the controller takes over its partner in a normal manner. The controller allows its partner to shut down its services before taking over. This option is used by default.
Take Over Immediately
This option is the equivalent of running the cf takeover -f command, in which the controller takes over its partner without allowing the partner to gracefully shut down its services.
Force a Takeover
This option is the equivalent of running the cf forcetakeover -f command, in which the controller takes over its partner even in cases when takeover of the partner is normally not allowed. Such a takeover might cause data loss.
Takeover After a Disaster
This option is for MetroClusters only and is the equivalent of running the cf forcetakeover -f -d command. Use this option if the partner is unrecoverable.
Note: The Force a Takeover and Takeover After a Disaster options are also available when the interconnect between the controller and its partner is down; they enable you to manually take over the partner.
After you have made a selection, the Status option on the Cluster Console page displays the status of the takeover operation. When the takeover operation is complete, the Cluster Console page displays the updated controller-icon colors. The Cluster Console page also displays the status of each controller. The Tools list of each controller is adjusted appropriately to indicate the active/active configuration operation each controller can now perform.
This option is the equivalent of running the cf giveback -f command, in which the controller does not gracefully shut down the services of the taken-over controller.
After you have selected an option, the Status option on the Cluster Console page displays the status of the giveback operation. When the giveback operation is complete, the Cluster Console page displays the updated controller-icon colors. The Cluster Console page also displays the status of each controller. The Tools list of each controller is adjusted appropriately to indicate the active/active configuration operation each controller can now perform.
You can remotely configure the following DataFabric Manager features:
Host users management
User quota management
Password management
Roles management
Next topics
Prerequisites for running remote CLI commands from Operations Manager on page 211
Running commands on a specific storage system on page 211
Running commands on a group of storage systems from Operations Manager on page 211
1. Click Control Center > Home > Member Details > Appliances.
2. Click the appliance to go to the Appliance Details page for the storage system or hosting storage system (of a vFiler unit) that you want to run a command on.
3. Select Run a Command under Appliance Tools.
4. Enter the command in the Appliance Command box.
5. Click Run.
1. In the left pane of the Operations Manager window, select the group that you want to run a command on.
The Group Summary page is displayed.
2. Select Run a Command from the Appliance Tools menu.
The Run Command page is displayed.
3. Enter the command in the Appliance Command box.
4. Click Run.
What FilerView is on page 212
Configuring storage systems by using FilerView on page 212
What FilerView is
Operations Manager enables you to view information about storage systems and vFiler units from a Web-based UI called FilerView. In DataFabric Manager 2.3 and later, pages displaying information about storage systems and vFiler units provide access to FilerView. To access FilerView for a selected storage system or vFiler unit, click the storage system icon next to the storage system or vFiler unit name in the details pages for events, storage systems, vFiler units, aggregates, LUNs, qtrees, and volumes.
1. Do one of the following, depending on the version of DataFabric Manager you are running:
If you are running DataFabric Manager 3.3 or later: In the Appliance Details or vFiler Details page, click the icon before the system name. Go to Step 3.
If you are running DataFabric Manager 2.3 or later: On the Appliances page, click the FilerView icon next to the name of the storage system you want to configure. Go to Step 3.
If you are running DataFabric Manager 2.2 or earlier: On the Appliances page, select the name of the storage system that you want to configure.
2. On the Appliance Details page, click the FilerView icon.
3. When prompted, provide your user name and password.
Why monitor vFiler units with DataFabric Manager on page 213
Requirements for monitoring vFiler units with DataFabric Manager on page 213
vFiler unit management tasks on page 214
Related information
Data ONTAP release support: The DataFabric Manager MultiStore monitoring feature supports hosting storage systems running Data ONTAP 6.5 or later.
Note: To run a command on a vFiler unit using a Secure Socket Shell (SSH) connection, the vFiler unit must be running Data ONTAP 7.2 or later.
Network connectivity: To monitor a vFiler unit, DataFabric Manager and the hosting storage system must be part of the same routable network that is not separated by firewalls.
Hosting storage system discovery and monitoring: You must first discover and monitor the hosting storage system before discovering and monitoring vFiler units. If you do not have access to the hosting storage system (for example, by acquiring a vFiler unit through an SSP), you cannot monitor your vFiler unit by using DataFabric Manager.
NDMP discovery: DataFabric Manager uses NDMP as the discovery method to manage SnapVault and SnapMirror relationships between vFiler units. To use NDMP discovery, you must first enable SNMP and HTTPS discovery.
Monitoring the default vFiler unit: When you license MultiStore, Data ONTAP automatically creates a default vFiler unit on the hosting storage system, called vFiler0. Operations Manager does not provide vFiler0 details.
Editing user quotas: To edit user quotas that are configured on vFiler units, you must have Data ONTAP 6.5.1 or later installed on the hosting storage systems of the vFiler units.
Monitoring backup relationships: DataFabric Manager collects details of vFiler unit backup relationships from the hosting storage system. DataFabric Manager then displays them if the secondary storage system is assigned to the vFiler group, even if the primary system is not assigned to the same group.
Monitoring SnapMirror relationships: DataFabric Manager collects details of vFiler SnapMirror relationships from the hosting storage system and displays them if the destination vFiler unit is assigned to the vFiler group, even if the source vFiler unit is not assigned to the same group.
Management of storage system configuration files on page 217
What a configuration resource group is on page 220
Configuring multiple storage systems or vFiler units on page 223
Prerequisites to apply configuration files to storage systems and vFiler units on page 217
List of access roles to manage storage system configuration files on page 218
List of tasks for configuration management on page 218
What configuration files are on page 219
What a configuration plug-in is on page 219
Comparison of configurations on page 219
Verification of a successful configuration push on page 220
Set the login and password for the storage system or vFiler unit before you set up configuration groups.
Obtain a Data ONTAP plug-in for each version of the configuration file that you use in DataFabric Manager.
You must have write privileges for a group to push configurations to it.
You can download the plug-ins from the NOW site.
Note: The DataFabric Manager storage system configuration management feature supports storage
systems running Data ONTAP 6.5.1 or later when you install the appropriate Data ONTAP plug-in with DataFabric Manager.
Related information
Configuration of storage systems | 219
Import and export configuration files
Remove an existing configuration file from a group's configuration list
Change the order of files in the configuration list
Specify configuration overrides for a storage system or a vFiler unit assigned to a group
Exclude configuration settings from being pushed to a storage system or a vFiler unit
View a group's configuration summary for a version of Data ONTAP
Push configuration files to a storage system or a group of storage systems, or to vFiler units or a group of vFiler units
Delete push configuration jobs
View the status of push configuration jobs
Related concepts
Comparison of configurations
Operations Manager enables you to compare your configuration file settings against those of a template configuration. You can also compare storage systems, vFiler units, or groups of storage systems or vFiler units against a configuration file, and create jobs to obtain the comparison results. Use Operations Manager to access the comparison job results. You can view the configuration comparison results in a report format. Use this report to identify the configuration settings that do not conform to those of the standard template.
List of tasks for managing configuration groups on page 220
Considerations when creating configuration groups on page 221
Creating configuration resource groups on page 221
Parent configuration resource groups on page 222
(default), to complete the configuration push job. You cannot reconfigure the number of retries from Operations Manager, but you can use the following CLI command to specify a new retry limit: dfm config push -R.
Exclude configuration settings from being pushed to a storage system or a vFiler unit
View a group's configuration summary for a version of Data ONTAP
Push configuration files to a storage system or a group of storage systems, or to vFiler units or a group of vFiler units
Delete push configuration jobs
View the status of push configuration jobs
1. Create an empty group.
2. From the Groups pane, select the group you want to edit.
3. From the Current Group pane, select Edit Membership.
4. Populate the group from the available members.
5. From the Current Group pane, select Edit Storage System Configuration to add one or more configuration files to the group.
After configuration files have been associated with the group, an icon is attached to the group name so that you can identify the group as a configuration resource group.
Parent group considerations on page 222
When to assign parent groups on page 222
Properties of configuration files acquired from a parent on page 223
Parent group considerations
You should review a set of considerations before assigning a parent group:
When you assign a parent, you inherit only the parent group's configuration files. You do not inherit the storage systems in the member group.
A parent group might also have its own parent, and so on. The configuration settings of all parents are added to the beginning of the child's configuration settings. There is no limit to the potential length of these parent chains.
Note: Review the settings in a parent group to ensure that they do not have unintended effects.
When to assign parent groups
You should assign a parent group if you want to control all or most of the configuration settings of a storage system from Operations Manager. Remember that when you assign a parent group, you inherit all configuration settings in the parent group. Therefore, you should carefully scan a parent's configuration for any undesirable settings before assigning it. You would probably not want to assign a parent if you want to use only a few of a parent group's settings. For example, if an existing group contains most of the access control list (ACL) rules you require, you can assign the group as a parent and then add more ACL rules in another configuration file.
Properties of configuration files acquired from a parent
One of the properties of configuration files acquired from parent groups is that they are initially read-only. When you include configuration files from another group, consider the following points:
A configuration resource group can include configuration files from only one parent group.
Configuration files acquired from parent groups are always read first. You cannot change the order in which the acquired files are read unless you reorder the configuration files from within the parent group.
1. Pull a configuration file from a storage system or a vFiler unit.
2. Click Management > Storage System or vFiler > Configuration Files > Edit Configuration File.
3. Edit the file settings.
4. Click Compare Configuration Files to compare your storage system or vFiler configuration file against a standard template configuration.
5. Create a configuration resource group by adding a configuration file.
6. If necessary, click Edit Storage System Configuration or Edit vFiler Configuration > Edit Configuration Pushed for Appliance to specify configuration overrides for a specific storage system or vFiler unit, or to exclude configuration settings from being pushed to the storage system or vFiler units.
7. Click Edit Storage System Configuration or Edit vFiler Configuration and push the configuration file or files out to the storage systems or to the group.
8. Verify that the configuration changes have taken effect by reviewing the status of the push jobs.
Backup Manager
You can manage disk-based backups for your storage systems using Backup Manager. You can access it from the Backup tab in Operations Manager. Backup Manager provides tools for selecting data for backup, scheduling backup jobs, backing up data, and restoring data.
Note: Backup Manager does not support IPv6.
Next topics
Backup management deployment scenario on page 225
System requirements for backup on page 226
What backup scripts do on page 263
What the Backup Manager discovery process is on page 227
SnapVault services setup on page 229
Management of SnapVault relationships on page 230
What backup schedules are on page 232
Management of discovered relationships on page 235
What lag thresholds are on page 235
List of CLI commands to configure SnapVault backup relationships on page 237
Primary directory format on page 240
Secondary volume format on page 240
The configuration provides data protection between two NetApp storage systems and between a NetApp storage system and a UNIX or a Windows system.
Data ONTAP 6.4 or later for storage systems and Data ONTAP 6.5 or later for vFiler units
SnapVault primary license
SnapVault and NDMP enabled (configured by using Data ONTAP commands or FilerView)
Open Systems SnapVault module for open systems platforms, such as UNIX, Linux, and Windows
Requirements
Data ONTAP 6.4 or later for storage systems and Data ONTAP 6.5 or later for vFiler units
SnapVault secondary license
SnapVault and NDMP enabled (configured by using Data ONTAP commands or FilerView)
Licenses for open systems platforms that are backing up data to secondary volumes on the secondary storage system
Methods of storage system discovery on page 227
What SnapVault relationship discovery is on page 228
New directories for backup on page 228
Viewing directories that are not backed up on page 228
if the discovered storage system is a primary or a secondary storage system. If DataFabric Manager attempts to connect to an NDMP server and the NDMP server rejects the authentication credentials, DataFabric Manager does not add the storage system to its database. Therefore, DataFabric Manager avoids spamming NDMP servers with Login failed errors. When DataFabric Manager cannot authenticate a storage system with NDMP, it uses SNMP to discover primary and secondary storage systems. DataFabric Manager then adds the storage systems to its database without NDMP authentication credentials. After authentication, Backup Manager communicates with the primary and secondary storage systems to perform backup and restore operations.
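The NDMP-then-SNMP fallback described above can be sketched as follows (a hypothetical outline only; try_ndmp and try_snmp stand in for the real probes, and the returned record is an assumption for illustration):

```python
def discover(host, try_ndmp, try_snmp):
    # Try NDMP authentication first.
    if try_ndmp(host):
        return {"host": host, "ndmp_credentials": True}
    # On NDMP rejection, fall back to SNMP and record the system
    # without NDMP credentials, avoiding repeated failed logins.
    if try_snmp(host):
        return {"host": host, "ndmp_credentials": False}
    # Neither method succeeded; the system is not added.
    return None
```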
1. You can view the directories that are not backed up by using either the CLI or the GUI.
By using the command-line interface (CLI): execute the command dfbm report primary-dirs-discovered.
By using the graphical user interface (GUI): go to the Directories Not Scheduled For Backup view.
2. To disable the discovery of directories that are not backed up on Open Systems SnapVault hosts, execute the dfbm primary dir ignore all command at the CLI.
To use the Run a Command tool, you must first enable RSH on the storage system.
Next topics
Configuring the SnapVault license on page 229
Enabling NDMP backups on page 229
1. Enter license add sv_primary_license.
2. Enter license add sv_secondary_license.
3. Enter options snapvault.enable on.
4. Enter options snapvault.access host=snapvault_secondary to name the secondary storage systems that you want to designate for backups.
5. Enter options snapvault.access host=snapvault_primary to name the primary storage systems that you want to back up.
1. Enter ndmpd on to enable NDMP service on each primary and secondary storage system.
2. Enter options ndmpd.access host=dfm_server_host to let DataFabric Manager perform backup and restore operations.
Until you add open systems platforms to the DataFabric Manager database for backup management, you cannot manage these platforms with DataFabric Manager.
Next topics
Adding secondary storage systems on page 230
Adding secondary volumes on page 231
Adding primary storage systems on page 231
Selecting primary directories or qtrees for backup on page 232
The SnapVault server feature must be licensed on the secondary storage system.
Steps
1. Click Backup > Storage Systems.
2. Select All Secondary Storage Systems from the View drop-down list.
3. In the Secondary Storage Systems page, enter the name (or IP address) of the secondary storage system.
4. Enter the NDMP user.
5. Obtain the NDMP password by entering the following command on the storage system:
ndmpd password username
6. From the Secondary Storage Systems page, type the NDMP password.
7. Click Add.
Ensure that you have added the secondary storage system to Backup Manager so that the volumes of the secondary storage system are automatically discovered by DataFabric Manager and added to its database.
Steps
1. Click Backup > Backup, and then click the icon next to Secondary Volume.
2. From the Secondary Volumes page, select the secondary storage system.
3. Select a volume on the secondary storage system.
Note: If DataFabric Manager has not yet discovered a volume, you might need to click Refresh.
4. Click Add.
1. From the Primary Storage Systems page, enter the name (or IP address) of the primary storage system.
2. Enter the NDMP user name used to authenticate the primary storage system in the NDMP User field.
3. Obtain the NDMP password by entering the following command on the storage system:
ndmpd password username
4. If the primary storage system you are adding is an Open Systems platform, you can configure a nondefault value for the NDMP port on which DataFabric Manager communicates with the system.
5. From the Primary Storage Systems page, enter the NDMP password.
6. Click Add.
After you have added a primary storage system to Backup Manager, DataFabric Manager automatically discovers the primary directories and adds them to its database. DataFabric Manager also discovers primary qtrees on primary storage systems that support qtrees.
Before baseline transfers can start, you must add the primary directory to Backup Manager and configure its volumes or qtrees for backups to the secondary volume.
Steps
1. Click Backup > Storage Systems.
2. Select Qtrees Not Scheduled For Backup from the View drop-down list.
3. Select the qtree that you want to back up.
4. Click Back Up.
All backup relationships associated with a secondary volume must use the same backup schedule.
Next topics
Best practices for creating backup relationships on page 232
Snapshot copies and retention copies on page 233
Requirements to create a backup schedule on page 233
Creating backup schedules on page 233
Local data protection with Snapshot copies on page 234
Snapshot copy schedule interaction on page 234
Following are the recommendations for creating backup relationships:
Create backup relationships during off-peak hours so that any performance impact does not affect users. Alternatively, you can specify a limit on the amount of bandwidth that a backup transfer can use.
Note: If a baseline backup transfer starts on a storage system when it is busy providing file
services (NFS and CIFS), then the performance of file services is not impacted. These services are given higher priority than the backup. However, the backup takes longer to complete, because the storage system's resources are being consumed by services of higher priority. Avoid creating multiple backup relationships at the same time, to avoid initiating multiple baseline transfers.
not created on open systems platforms. Instead, entire changed files are transferred.
When you create a backup schedule, you can identify it as the default backup schedule. Any secondary volumes subsequently added to Backup Manager are then automatically associated with this default backup schedule.
Steps
1. From the Backup page, click the icon to open the Backup Schedules page.
2. Type a name for the schedule.
3. Create a schedule by using one of the following methods:
• Select a template.
• Select None to create a schedule without using a template.
4. Click Add to add the schedule to the DataFabric Manager database.
The Schedule Details page is displayed.
5. Optionally, select the Use as Default for New Secondary Volumes check box to apply the schedule to all secondary volumes subsequently added to Backup Manager.
6. Optionally, enter the retention count for the hourly, weekly, and nightly schedules.
7. Optionally, click Add a schedule to open the Edit Schedule page and configure backup times for each hourly, weekly, and nightly schedule.
8. Click Update.
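Step 6 sets a retention count for each schedule tier. As a rough sketch of what retention counts imply (an illustration only, with hypothetical names, not DataFabric Manager's actual implementation), pruning keeps only the newest N backups in each tier:

```python
def prune_backups(backups, retention):
    """backups: list of (tier, timestamp) tuples; retention: {tier: count}.

    Returns the backups to keep: the newest `count` in each tier,
    sorted oldest-first."""
    kept_per_tier = {}
    keep = []
    # Walk newest-first so the first `count` seen in a tier are the newest.
    for tier, ts in sorted(backups, key=lambda b: b[1], reverse=True):
        n = kept_per_tier.get(tier, 0)
        if n < retention.get(tier, 0):
            kept_per_tier[tier] = n + 1
            keep.append((tier, ts))
    return sorted(keep, key=lambda b: b[1])
```

For example, with a retention count of 2 for the hourly tier, only the two newest hourly backups survive a pruning pass.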
the secondary storage system. Although such a situation does not lead to any data loss, it causes the primary and secondary storage systems to make unnecessary transfers, consuming resources on those storage systems and using network bandwidth that is needed for the backup transfers.
1. Enable NDMP on the primary and secondary storage systems. 2. Enter the NDMP credentials for the primary and the secondary storage systems. 3. Associate a backup schedule with the secondary volume.
Note: Turn off all Snapshot copy schedules and policies defined for the imported backup relationship that were created using the Data ONTAP snapvault snap sched command.
Next topics
Setting global thresholds on page 236 Setting local thresholds on page 236 Bandwidth limitation for backup transfers on page 237 Configuring backup bandwidth on page 237
1. Select Setup Options and choose Backup Default Thresholds from the Edit Options menu at the left side of Operations Manager. 2. In the SnapVault Replica Nearly Out of Date Threshold field, enter the limit at which the backups on a secondary volume are considered nearly obsolete. 3. In the SnapVault Replica Out of Date Threshold field, enter the limit at which the backups on a secondary volume are considered obsolete. 4. Click Update.
1. From any Backup Manager report, click the secondary volume name to access the Secondary Volume Details page.
2. In the Lag Warning Threshold field, specify the lag warning threshold limit in weeks, days, hours, minutes, or seconds. The Lag Warning threshold specifies the lag time after which the DataFabric Manager server generates the SnapVault Replica Nearly Out of Date event.
3. In the Lag Error Threshold field, specify the lag error threshold limit in weeks, days, hours, minutes, or seconds. The Lag Error threshold specifies the lag time after which the DataFabric Manager server generates the SnapVault Replica Out of Date event.
4. Click Update.
For the restore operations, the maximum available bandwidth is always used.
1. Select the directory for which you want to configure a bandwidth limit by doing one of the following: For a new backup relationship, select the Backup tab to open the Backup page. For an existing backup relationship, select a primary directory name from any view to open the Primary Directory Details page.
2. Enter the bandwidth limit that you want to impose on the backup transfers for this relationship. 3. Click Update.
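The limit entered in step 2 is expressed in kilobytes per second. A small hypothetical helper illustrates the lower bound such a cap places on a transfer's duration; restore operations, as noted above, always run uncapped:

```python
def minimum_transfer_seconds(size_kb, limit_kbps=None):
    """Lower bound on a backup transfer's duration imposed by a
    per-relationship bandwidth cap (in kilobytes per second).

    limit_kbps=None means no cap, as with restore operations."""
    if not limit_kbps:
        return 0.0  # bounded only by available bandwidth, not by a cap
    return size_kb / float(limit_kbps)
```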
The following dfbm CLI commands are available:
dfbm backup list
dfbm backup ls
dfbm backup start
dfbm event list
dfbm job abort
dfbm job detail
dfbm job list
dfbm job purge
dfbm ndmp add
dfbm ndmp delete
dfbm ndmp modify
dfbm ndmp list
dfbm option list
dfbm option set
dfbm primary dir add
dfbm primary dir delete
dfbm primary dir discovered
dfbm primary dir ignore
dfbm primary dir list
dfbm primary dir modify
dfbm primary dir relinquish
dfbm primary dir unignore
dfbm primary host add
dfbm primary host delete
dfbm primary host list
dfbm primary host modify
The dfbm ndmp commands manage the list of user names and passwords used for Network Data Management Protocol (NDMP) discovery. The dfbm option commands manage the global backup options that control the operation of Backup Manager.
dfbm reports events
dfbm reports events-error
dfbm reports events-unack
dfbm reports events-warning
dfbm reports jobs
dfbm reports jobs-1d
dfbm reports jobs-30d
dfbm reports jobs-7d
dfbm reports jobs-aborted
dfbm reports jobs-aborting
dfbm reports jobs-completed
dfbm reports jobs-failed
dfbm reports jobs-running
dfbm reports backups-by-primary
dfbm reports backups-by-secondary
dfbm schedule add
dfbm schedule create
dfbm schedule delete
dfbm schedule destroy
dfbm schedule diag
dfbm schedule modify
dfbm secondary host add
dfbm secondary host delete
dfbm secondary host list
dfbm secondary host modify
dfbm secondary volume add
dfbm secondary volume delete
dfbm secondary volume list
dfbm secondary volume modify
Supported storage systems
For a primary directory called engineering/projects in volume vol1 of a storage system named jupiter, enter:
jupiter:/vol1/engineering/projects
Windows system
For a primary directory called engineering\projects on the D drive of a Windows system named mars, enter:
mars:D:\engineering\projects
(although your capitalization could be different).
UNIX system
For a primary directory /usr/local/share on a UNIX system named mercury, enter:
mercury:/usr/local/share
A SnapMirror relationship is the replication relationship between a source storage system or a vFiler unit and a destination storage system or a vFiler unit by using the Data ONTAP SnapMirror feature. Disaster Recovery Manager provides a simple, Web-based method of monitoring and managing SnapMirror relationships between volumes and qtrees on your supported storage systems and vFiler units. You can view and manage all SnapMirror relationships through the Disaster Recovery tab of Operations Manager. You can also configure SnapMirror thresholds, so that Disaster Recovery Manager generates an event and notifies the designated recipients of the event. For more information about SnapMirror, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
Next topics
Prerequisites for using Disaster Recovery Manager on page 241 Tasks performed by using Disaster Recovery Manager on page 242 What a policy is on page 242 Connection management on page 245 Authentication of storage systems on page 247 Volume or qtree SnapMirror relationship on page 248 What lag thresholds for SnapMirror are on page 252
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
You must have the Business Continuance Management license. If you do not have this license, contact your sales representative. The SnapMirror destination storage systems must be running Data ONTAP 6.2 or later.
Note: Disaster Recovery Manager can discover and monitor only volume and qtree SnapMirror
relationships in which the SnapMirror destinations have Data ONTAP 6.2 or later installed. The source and destination storage systems must have Data ONTAP 6.5 installed to perform any of the SnapMirror management tasks. The source and destination storage systems configured with vFiler units must be running Data ONTAP 6.5 or later to perform any of the SnapMirror relationship management and monitoring tasks.
What a policy is
A policy is a collection of configuration settings that you can apply to one or more SnapMirror relationships. The ability to apply a policy to more than one SnapMirror relationship makes a policy useful when managing many SnapMirror relationships. There are two types of policies that you can create and apply to SnapMirror relationships:
• Replication
• Failover
Next topics
What a replication policy does on page 242 What a failover policy does on page 244 Policy management tasks on page 244
List of parameters for an asynchronous replication policy on page 243
List of parameters for a synchronous replication policy on page 243
List of parameters for an asynchronous replication policy
You must set a list of parameters for an asynchronous replication policy.
Schedule: Specifies when an automatic update occurs. You can specify the schedule using Operations Manager, or you can enter the schedule in the cron format.
Note: For more information about scheduling using the cron format, see the na_snapmirror.conf(5) man page for Data ONTAP.
Maximum Transfer Speed: Specifies the maximum transfer speed, in kilobytes per second.
Restart: Specifies the restart mode that SnapMirror uses to continue an incremental transfer from a checkpoint if it is interrupted.
Lag Warning Threshold: Specifies the limit at which the SnapMirror destination contents are considered nearly obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Nearly Out of Date event.
Lag Error Threshold: Specifies the limit at which the SnapMirror destination contents are considered obsolete. If this limit is exceeded, Disaster Recovery Manager generates a SnapMirror Out of Date event.
TCP Window Size: Specifies, in bytes, the amount of data that a source can send on a connection before it requires acknowledgment from the destination that the data was received.
Checksum: Specifies the use of a Cyclic Redundancy Check (CRC) checksum algorithm. Use the checksum option if the error rate of your network is high enough to cause an undetected error.
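The two lag thresholds drive event generation. A minimal sketch (the function and threshold representation are assumptions, not product code) of how a destination's lag maps to the two events:

```python
NEARLY = "SnapMirror Nearly Out of Date"
OUT_OF_DATE = "SnapMirror Out of Date"

def lag_event(lag_seconds, warning_threshold, error_threshold):
    """Return the event a destination's lag would trigger, or None.

    Checks the error threshold first, since a lag that exceeds it
    necessarily exceeds the warning threshold too."""
    if lag_seconds > error_threshold:
        return OUT_OF_DATE
    if lag_seconds > warning_threshold:
        return NEARLY
    return None
```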
List of parameters for a synchronous replication policy
You must set a list of parameters for a synchronous replication policy.
Fully synchronous: Specifies full synchronicity between the source and the destination.
Semi-synchronous: Specifies the level of synchronicity between the source and the destination. The destination can be behind the source by 0 to 60 seconds or by 0 to 500 write operations.
Visibility interval: Specifies a time interval after which the transferred data becomes visible on the destination.
TCP Window Size: Specifies the amount of data that a source can send on a connection before it requires acknowledgment from the destination that the data was received.
Checksum: Specifies the use of a Cyclic Redundancy Check (CRC) checksum algorithm. Use the checksum option if the error rate of your network is high enough to cause an undetected error.
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Replication Policies icon
Note: Remove the policy from SnapMirror relationships before deleting it; otherwise, Disaster Recovery Manager sends an error message.
Click...
Disaster Recovery > Add a Mirror > Manage Failover Policies icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Failover Policies icon
Disaster Recovery > Add a Mirror > Edit Selected Failover Policy icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Policy name > Manage Failover Policies icon
Disaster Recovery tab > Add a Mirror link > Manage Failover Policies icon
OR
Disaster Recovery > Mirrors-Mirrored > SnapMirror Relationship Source > Manage Failover Policies icon
Note: Remove the policy from SnapMirror relationships before deleting it; otherwise, Disaster Recovery Manager sends an error message.
Note: All replication policies in earlier versions that are not assigned to any mirror relationships are deleted while upgrading DataFabric Manager from versions earlier than 3.2 to 3.2 or later.
Connection management
You can specify one or two specific network paths between a source storage system or a vFiler unit, and a destination storage system or a vFiler unit, using connection management. The advantages of multiple paths between source and destination storage systems or vFiler units are as follows:
• Increased transfer bandwidth
• Networking failover capability
Note: Asynchronous SnapMirror does not support multiple paths in Data ONTAP 6.5.
For more information, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
Next topics
Connection management tasks on page 246 What the connection describes on page 246
What multipath connections are on page 247
Related information
Data ONTAP Data Protection Online Backup and Recovery Guide http://now.netapp.com/NOW/knowledge/docs/ontap/ontap_index.shtml
Authentication of discovered and unmanaged storage systems on page 247 Addition of a storage system on page 248 Modification of NDMP credentials on page 248 Deletion of a storage system on page 248
or later.
system.
Disaster Recovery Manager | 249
If the new SnapMirror relationship is a qtree replication, ensure that the volume on the destination storage system where you want to replicate a qtree with SnapMirror is online and not restricted. Do not manually create a destination qtree. After upgrading to DataFabric Manager 3.3, in a qtree SnapMirror relationship, you can select a qtree directly belonging to the vFiler unit by selecting the volume belonging to the storage system.
Next topics
Decisions to make before adding a new SnapMirror relationship on page 249 Addition of a new SnapMirror relationship on page 250 Modification of an existing SnapMirror relationship on page 250 Modification of the source of a SnapMirror relationship on page 250 Reason to manually update a SnapMirror relationship on page 250 Termination of a SnapMirror transfer on page 251 SnapMirror relationship quiescence on page 251 View of quiesced SnapMirror relationships on page 251 Resumption of a SnapMirror relationship on page 251 Disruption of a SnapMirror relationship on page 251 View of a broken SnapMirror relationship on page 251 Resynchronization of a broken SnapMirror relationship on page 252 Deletion of a broken SnapMirror relationship on page 252
Connection Policies
Related concepts
Selecting the destination storage system and destination volume or qtree Selecting the connection Selecting the type of replication and failover policy
Related concepts
For more information on the thresholds and the default values, see Operations Manager Help. Disaster Recovery Manager automatically generates SnapMirror events based on these thresholds. If you want to receive notifications in the form of e-mail messages, pager alerts, or SNMP traps when a SnapMirror event occurs, you can set up alarms.
Note: Disaster Recovery Manager does not generate events when lag thresholds of SnapMirror
sources are crossed. Only lag thresholds of SnapMirror destinations are used for generating events.
Next topics
Where to change the lag thresholds on page 253 Lag thresholds you can change on page 253 Reasons for changing the lag thresholds on page 253 What the job status report is on page 253
Accessing the CLI on page 255 Where to find information about DataFabric Manager commands on page 256 What audit logging is on page 256 What remote platform management interface is on page 259 Scripts overview on page 260 What the DataFabric Manager database backup process is on page 263 What the restore process is on page 269 Disaster recovery configurations on page 270
The audit log file resides in the default log directory of DataFabric Manager.
Next topics
Events audited in DataFabric Manager on page 256 Global options for audit log files and their values on page 257 Format of events in audit log file on page 257 Permissions for accessing the audit log file on page 259
Authorization failures: DataFabric Manager logs each authorization failure and the user name associated with it.
CLI commands: DataFabric Manager logs execution of each command. The complete command line (including options and arguments) is recorded in the audit log file. DataFabric Manager also logs the name of the user who executed the command; the failure status of the command, if any; and the type of request: Web or CLI.
API calls: DataFabric Manager logs invocation of any API by using the DataFabric Manager service. The complete details of the API call, and the name of the authenticated user on whose behalf the API was invoked, are recorded in the audit log file.
Scheduled actions: When the scheduler starts a job by invoking a CLI, DataFabric Manager logs the scheduled action and the user affiliated with it in the audit log file.
In addition, a timestamp is recorded for each event. In the case of APIs, the IP address of the appliance from which the requests are received is logged. In the case of CLI requests, the IP address is always that of DataFabric Manager.
3 MB in size) can grow excessively. You must ensure that you have enough space on DataFabric Manager to keep the audit log files forever. For audit logging, the dfm option list command requires the global read capability, and the dfm option set command requires the global write capability. You must have the Core Control capability to modify the auditLogEnable and auditLogForever global options. License features required: the dfm option set command requires an Operations Manager license.
Apr 23 14:56:40 [dfm:NOTIC]:root:WEB:in:[ABCD:EF01:2345:6789:ABCD:EF01:2345:6789]:dfm report view backups: <ss><dfm-backup-directory>opt </dfm-backup-directory><dfm-backup-verify-settings>
The <application-name> value denotes the application invoking the audit-log facility. For example, it can be dfm, dfbm, dfdrm, or dfcm if a CLI or Operations Manager request is being audit-logged from the dfm, dfbm, dfdrm, or dfcm executable. In the case of APIs, <application-name> is the actual name of the application that called the API (for example, dfm and sdu). If the API was called by an external application other than dfm, and it does not pass the name of the application in the API input parameters, the field :: is not printed in the log message.
The message priority field <priority> can have one of the following values:
• EMERG: unusable system
• ALERT: action required
• CRIT: critical conditions
• ERROR: error conditions
• WARN: warning conditions
• NOTIC: normal, but significant condition
• INFO: informational messages
• DEBUG: debug-level messages
The <username> field logs the names of the users who invoked the CLIs and APIs.
The <protocol> field describes the source of the event being logged. The protocol label can have one of the following values:
• API: in the case of an API invocation
• CMD: in the case of CLI commands
• LOG: when an event is explicitly logged by the system
• WEB: in the case of an Operations Manager request
The <label> field describes the type of the event being logged. The message label can have one of the following values:
• IN: for input
• OUT: for output (for example, in the case of API calls)
• ERR: for error (for example, in the case of login failures)
• ACTION: for actions initiated by the user
Maintenance and management | 259
The <ip-address> value for APIs is the IP address of the system from which the API is invoked. In the case of the CLI, it is the IP address of the DataFabric Manager server. For requests coming through Operations Manager, it is the IP address of the workstation on which Operations Manager is installed.
The <intent> field describes the following information:
• If the protocol is API, it conveys the intention of the API.
• If the protocol is CMD, it is the actual command used.
• If the protocol is WEB, it is the URL of the Web page.
• If the audit-log API is called, this field remains blank.
The <message> field content depends on the value of <protocol>, as follows:
• If <protocol> is API, the XML input to the API is logged, excluding the <netapp> element.
• If <protocol> is CMD, it contains output or an error message.
• If <protocol> is WEB, it is empty.
Following is an example:
July 04 22:11:59 [dfm:NOTIC]:NETAPP\tom:CMD:in:127.0.0.1:dfm host password set -p ****** jameel:Started job:2
July 06 13:27:15 [dfm:NOTIC]:NETAPP\tom:WEB:in:127.0.0.1:dfm user login username=tom password=******:Logged in as <B>NETAPP\tom</B><BR>
July 06 14:42:55 [dfm:NOTIC]:NETAPP\tom:API:in:127.0.0.1:Add a role to a user:<rbac-admin-role-add><role-name-or-id>4</role-name-or-id><admin-name-or-id>TOM-XP\dfmuser</admin-name-or-id></rbac-admin-role-add>
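Assuming the colon-separated layout described above, a rough parser for such lines might look like the following (illustrative only; lines with bracketed IPv6 addresses, as in the earlier example, would need extra handling):

```python
import re

# Illustrative parser for: timestamp [app:priority]:user:protocol:label:ip:intent:message
LINE_RE = re.compile(
    r"^(?P<ts>\w+ \d+ \d+:\d+:\d+) "
    r"\[(?P<app>[^:\]]+):(?P<priority>[^\]]+)\]:"
    r"(?P<user>[^:]*):(?P<protocol>API|CMD|LOG|WEB):"
    r"(?P<label>in|out|err|ACTION):"
    r"(?P<ip>[^:]+):"
    r"(?P<rest>.*)$")

def parse_audit_line(line):
    """Split one audit-log line into its fields, or return None."""
    m = LINE_RE.match(line)
    if not m:
        return None
    fields = m.groupdict()
    # The intent runs up to the next colon; the message may itself
    # contain colons (for example, "Started job:2").
    intent, _, message = fields.pop("rest").partition(":")
    fields["intent"], fields["message"] = intent, message
    return fields
```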
The following maintenance tasks can be performed by using the interface on DataFabric Manager:
• Control system power, such as powering the storage system on or off
• Reset the system firmware
• View system and event logs
• Obtain system sensor status
You can use the Run a Command tool in Operations Manager or use the dfm run cmd -r command on the CLI of DataFabric Manager to execute the remote platform management commands. By using the interface, you can access the Remote LAN Module (RLM) cards on the 30xx, 31xx, and 60xx storage systems.
Next topics
RLM card monitoring in DataFabric Manager on page 260 Prerequisites for using the remote platform management interface on page 260
For procedures to configure an RLM card IP address, see the Operations Manager help.
Scripts overview
DataFabric Manager supports the installation, management, and execution of your scripts.
You begin creating a script by writing the script. You can use any scripting language, but keep in mind that your choice affects your network administrators: the network administrator must install the interpreter you require on the system where DataFabric Manager is installed. It is recommended that you use a scripting language such as Perl, which is typically installed with the operating system on Linux and Windows workstations. In addition, the dfm report -F perl command in DataFabric Manager generates reports for direct inclusion in Perl scripts, so you have a way of importing information from DataFabric Manager reports into a Perl script. For more information, see the dfm report command man page. See the Operations Manager Help for task procedures and option descriptions.
Next topics
Commands that can be used as part of the script on page 261 Package of the script content on page 261 What script plug-ins are on page 261 What the script plug-in directory is on page 262 What the configuration difference checker script is on page 263 What backup scripts do on page 263
Scripts are installed by using a ZIP file that must contain the script, any data files that are needed by the script, and a file named package.xml. The package.xml file contains information about the script and might optionally contain definitions of new event types that your script generates. After you have your ZIP file ready, you can install it on the DataFabric Manager server to verify that your packaging functions correctly. After you verify the functionality, you can use, or distribute, the script. The following tasks can be performed by using the script plug-in framework:
• Manage the scripts you add to DataFabric Manager
• Create and manage script jobs
• Create and manage schedules for the script jobs you have created
• Define new event classes during script installation, or generate these events during script execution
For more information about creating scripts and the contents of the script ZIP file, see the Operations Manager Help.
Configuration difference checker script
You can use the Configuration Difference Checker script to detect when the configuration of a storage system changes from a previously known configuration. You can manage the Configuration Difference Checker script with the functionality that is provided to you as part of the script plug-in framework. When the Configuration Difference Checker script runs, if a configuration change is detected, DataFabric Manager generates an event notifying you of the configuration changes. Using job reports, it stores a history of the configuration changes, and it stores one configuration obtained from the last time the script ran.
A Configuration Difference Checker script called config_diff.zip is provided for you. You can obtain this script from the ToolChest, accessible from http://now.netapp.com/. The ToolChest contains many frequently used tools, including downloadable tools and utilities, Web applications, third-party tools, and miscellaneous useful links. You can add, schedule, and run the Configuration Difference Checker script by using the script plug-in pages and commands listed in
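The idea behind such a checker can be sketched as a plain dictionary diff between two captured configurations (this illustrates the concept only; it is not the config_diff.zip script):

```python
def config_diff(previous, current):
    """Report options added, removed, or changed between two captured
    configurations (plain dicts of option name -> value)."""
    added = {k: v for k, v in current.items() if k not in previous}
    removed = {k: v for k, v in previous.items() if k not in current}
    changed = {k: (previous[k], current[k])
               for k in previous.keys() & current.keys()
               if previous[k] != current[k]}
    return added, removed, changed
```

If any of the three results is non-empty, the configuration has changed since the last run, which is the condition under which an event would be raised.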
Next topics
When to back up data on page 264 Where to back up data on page 264 Recommendations for disaster recovery on page 265 Backup storage and sizing on page 265 Limitation of Snapshot-based backups on page 265 Access requirements for backup operations on page 265 Changing the directory path for archive backups on page 266 Starting database backup from Operations Manager on page 266 Scheduling database backups from Operations Manager on page 267 Specifying backup retention count on page 267 Disabling database backup schedules on page 267 Listing database backups on page 268 Deleting database backups from Operations Manager on page 268 Displaying diagnostic information from Operations Manager on page 268 Exportability of a backup to a new location on page 268
You can also specify a different target directory. The backup file has an added file name extension of .db, .gz, .zip, or .ndb, depending on the version of the DataFabric Manager server that you are running.
Note: The current version of DataFabric Manager uses the .ndb format.
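Because the extension varies with the server version, a small hypothetical helper can classify a backup file; the "current" and "legacy" labels are this sketch's own, not product terms:

```python
def backup_format(filename):
    """Classify an archive backup file by extension: .ndb is written by
    the current server version; .db, .gz, and .zip by older versions."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return {"ndb": "current", "db": "legacy",
            "gz": "legacy", "zip": "legacy"}.get(ext)
```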
The Snapshot-based backups are volume Snapshot copies. Therefore, unlike with archive backups, you do not have to specify a target directory for Snapshot-based backups.
• Abort
• Set schedule
• Enable schedule
• Export
• Diagnose
To list the backups on a directory, and to get status and schedules of backups, you log in to Operations Manager with the GlobalRead role.
1. Select Setup > Options.
2. Click Database Backup under Edit Options.
3. In the Archive Backup Destination Directory field, change the directory path if you want to back up to a different location.
4. Click Update.
1. Select Setup > Database Backup.
2. Select the Backup Type. You can choose between archive-based and Snapshot-based backups.
3. Click Back Up Now. The Database Backup Confirmation page is displayed if the Backup Type is Archive.
4. Click Run Backup.
After You Finish Note: To stop a backup that is in progress, click Abort Backup.
When you schedule database backups in the archive format, hourly backups and multiple backups in a day are not feasible, because backups in the archive format take time.
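That constraint can be expressed as a small validity check (a hypothetical helper, not a DataFabric Manager API):

```python
def schedule_allowed(backup_type, frequency):
    """Archive backups take too long for hourly scheduling, so hourly
    schedules are only sensible for Snapshot-based backups.

    backup_type: 'archive' or 'snapshot'."""
    if frequency == "hourly":
        return backup_type == "snapshot"
    return True
```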
Steps
1. Select Setup > Database Backup.
2. Select the Backup Type. You can choose between archive-based and Snapshot-based backups.
3. Select Enable Schedule to activate the schedule.
4. Select the frequency to run the backup, and enter the time to run it. Entries are based on hourly, less frequent than hourly, daily, and weekly schedules.
5. Click Update Schedule.
6. Verify that the settings are correct, then click Update Data.
1. Select Setup > Options.
2. Click Database Backup.
3. In the Database Backup Retention Count field, enter the count.
4. Click Update.
1. Select Setup > Database Backup.
2. Deselect Enable Schedule to disable the schedule.
3. Click Update Schedule.
1. Select Setup > Database Backup.
2. Click List Database Backups.
3. Select the backups that you want to delete.
4. Select Delete Selected.
1. Click Setup > Database Backup.
2. Click Update Schedule after you schedule a database backup.
3. Click Diagnose.
To export the backup file to a new path, use the dfm backup export <backup_name> [target_filepath] command. If target_filepath is not specified, the archive-based backup is created by default in the directory specified in the Archive Backup Destination Directory field in Operations Manager.
If some commands are run, they can interfere with the restore or upgrade operation by locking database tables and causing the operation to fail.
Next topics
Restoring the database from the archive-based backup on page 269 Restoring the database from the Snapshot copy-based backup on page 269 Restoration of the database on different systems on page 270
1. Copy the backup file into the databaseBackupDir directory.
2. Type the following command at the command line: dfm backup restore <backup_name>
backup_name is the name of the file to which you saved your DataFabric Manager database. A "Completed restore" message is displayed when the restore process finishes successfully.
Steps
1. Type the following command at the command line to display the names of backup copies of the database: dfm backup list
2. Type the following command at the command line: dfm backup restore <backup_name>
backup_name is the name of the backup copy in the database. A "Completed restore" message is displayed when the restore process finishes successfully.
Services for assistance. You can usually avoid this by ensuring that a domain user (who has permission to log in to both systems) exists in DataFabric Manager with the GlobalFullControl role before migrating the database.
Disaster recovery support enables you to recover the DataFabric Manager services quickly on another site. Disaster recovery support prevents any data loss due to disasters that might result in a total site failure.
A disaster recovery plan typically involves deploying remote failover architecture. This remote failover architecture allows a secondary data center to take over critical operations when there is a disaster in the primary data center.
Next topics
Disaster recovery using Protection Manager on page 271 Disaster recovery using SnapDrive on page 276
Limitation of disaster recovery support on page 271
Prerequisites for disaster recovery support on page 271
Setting up DataFabric Manager on page 272
Recovering DataFabric Manager services on page 273
Recovering DataFabric Manager services using the dfm datastore mirror connect command on page 274
Failing back DataFabric Manager services on page 275
Limitation of disaster recovery support
You must use SnapDrive to configure DataFabric Manager on Linux for disaster recovery. For more information, see the technical report on the NetApp website at http://media.netapp.com/documents/tr-3655.pdf.
Prerequisites for disaster recovery support
A set of prerequisites must be met for disaster recovery support for DataFabric Manager data. The following are the prerequisites:
• You must be a Windows domain administrator.
• You must have a Protection Manager license.
• You must have SnapDrive for Windows 6.0 or later. DataFabric Manager depends on SnapDrive for Windows to provide the disaster recovery support.
• You must be using the same version of Data ONTAP on both the source and destination storage systems. To ensure that you have the required Data ONTAP version for SnapDrive, see the SnapDrive/Data ONTAP Compatibility Matrix page at the NOW site.
• You must have a SnapMirror license for both the source and destination storage systems.
• You must configure Snapshot-based backup for DataFabric Manager.
• To grant root access to the Windows domain account that is used by the SnapDrive service, you must configure the source and destination storage systems. You can configure the storage systems by setting the wafl.map_nt_admin_priv_to_root option to "on" in the CLI.
• You are advised to have a dedicated flexible volume for the DataFabric Manager data, because volume SnapMirror is used for mirroring the data.
• DataFabric Manager data must be stored in a LUN.
• The source and destination storage systems must be managed by Protection Manager.
• The Windows domain account that is used by the SnapDrive service must be a member of the local built-in group or local administrators group on both the source and destination storage systems.
• The Windows domain account used to administer SnapDrive must have full access to the Windows domain to which both the source and destination storage systems belong.
Related information
The NOW site -- http://now.netapp.com/

Setting up DataFabric Manager
You must perform a set of tasks to set up DataFabric Manager for disaster recovery.
Steps
1. Install or upgrade DataFabric Manager on the primary site by completing the following steps:
a) Install DataFabric Manager.
b) Install SnapDrive for Windows and configure it with Protection Manager.
c) Create an FCP-based or iSCSI-based LUN on the storage system using SnapDrive.
d) Run the dfm datastore setup command to migrate the data to a directory in the FCP-based or iSCSI-based LUN.
2. Configure a schedule for Snapshot-based backup by following the steps described in the procedure for scheduling Snapshot-based backups.
3. Run the dfm datastore mirror setup command to create the application dataset.
4. Configure DataFabric Manager disaster recovery using Protection Manager features, by completing the following steps:
a) Create a volume on the destination storage system with the same size and space configuration as the primary volume.
Maintenance and management | 273
If you have either the Protection Manager Disaster Recovery license or the Provisioning Manager license, secondary volume provisioning can take advantage of the policies provided by that license. If you do not have either of these licenses, provision the secondary volume manually.
b) Assign the provisioned secondary volume to the application dataset. For more information about how to use Protection Manager to assign resources to a dataset, see the NetApp Management Console Help.
c) Assign a schedule to the application dataset. For more information about how to use Protection Manager to assign schedules to a dataset, see the NetApp Management Console Help.
d) Ensure that there are no conformance issues.
5. Run the dfm backup diag command and note the Mirror location information from the command output. You will need this information when you use the dfm datastore mirror connect command during the process of recovering DataFabric Manager.
6. Install DataFabric Manager on the secondary site.
7. Run the dfm service disable command on the secondary site to disable all DataFabric Manager services. DataFabric Manager services must be enabled only during disaster recovery, by using the dfm datastore mirror connect command.
Recovering DataFabric Manager services You can recover DataFabric Manager services on the secondary site if a disaster occurs at the primary site.
Steps
1. Connect to the LUN using the Microsoft Management Console (MMC) Connect Disk wizard on the secondary storage system by completing the following procedure:
a) Expand the Storage option in the left panel of the MMC.
b) Double-click SnapDrive List, if you are managing multiple instances of SnapDrive. Otherwise, double-click SnapDrive.
c) Double-click the name of the SnapDrive host you want to manage.
d) Right-click Disks.
e) Select the Connect Disk option and follow the instructions in the Connect Disk wizard.
For more information about connecting virtual disks, see the SnapDrive for Windows Installation and Administration Guide.
2. Run the dfm service enable command to enable the services.
3. Run the dfm datastore setup -n command to configure DataFabric Manager to use the mirrored data.
4. Run the dfm service start command to start DataFabric Manager services.
5. Using the Protection Manager UI, change the dataset created for DataFabric Manager data from the current mode to suspended mode.
6. Run the dfm options set command to reset the localHostName global option on the secondary site, if the primary site is clustered using MSCS.
7. Run the dfm service disable command to disable the services on the primary site. If the primary site is clustered using MSCS, take the services offline before disabling them.
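Steps 2 through 4 above are an ordered CLI sequence; as an illustration, the sketch below wraps that order in a function, with a placeholder run() callback standing in for however you invoke the dfm CLI:

```python
def recover_dfm_services(run):
    """Issue the recovery CLI sequence from steps 2 through 4, in order."""
    commands = [
        "dfm service enable",      # step 2: enable the services
        "dfm datastore setup -n",  # step 3: point DataFabric Manager at the mirrored data
        "dfm service start",       # step 4: start DataFabric Manager services
    ]
    for cmd in commands:
        run(cmd)
    return commands

# Record the commands instead of executing them:
issued = []
recover_dfm_services(issued.append)
```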
Related information
SnapDrive for Windows Installation and Administration Guide -- http://now.netapp.com/NOW/knowledge/docs/client_filer_index.shtml

Recovering DataFabric Manager services using the dfm datastore mirror connect command
You can use the dfm datastore mirror connect command to recover the DataFabric Manager services on the secondary site if a disaster occurs at the primary site.
Steps
1. Run the dfm datastore mirror connect command on the DataFabric Manager server at the secondary site to start DataFabric Manager services using the mirrored data.
The dfm datastore mirror connect command performs the following operations:
Breaks the mirror relationship between the source and destination DataFabric Manager.
Connects to the mirrored volume or LUN using SnapDrive for Windows.
Enables the services using the dfm service enable command.
Configures DataFabric Manager to use the data from the mirrored location.
Starts the DataFabric Manager services.
Puts the dataset created for the DataFabric Manager data in suspended mode.
2. Run the dfm options set command to reset the localHostName global option on the secondary site, if the primary site is clustered using MSCS.
3. Run the dfm service disable command to disable the services on the primary site. If the primary site is clustered using MSCS, take the services offline before disabling them.
Failing back DataFabric Manager services You must complete a list of tasks to fail back DataFabric Manager services to the primary site.
Steps
1. Ensure that the DataFabric Manager data at the source storage system is in synchronization with the data at the destination storage system by completing the following steps:
a) Run the dfdrm mirror list command to find the relationships between the source and destination storage systems.
b) Run the dfdrm mirror resync -r command to resynchronize the mirror relationships. This command reverses the mirror direction and starts updates.
c) Run the snapmirror resync command to resynchronize the data at the storage system level, if the SnapMirror relationship was removed during the process of recovering DataFabric Manager services.
d) Run the dfdrm mirror initialize command to create a new relationship from the secondary storage system to the new primary storage system, if the primary storage system was destroyed during the disaster.
2. Run the dfm service disable command to stop and disable the services at the secondary site.
3. Start DataFabric Manager services using the mirrored data on the primary site by using one of the following methods:
Run the dfm datastore mirror connect command at the CLI.
Alternatively, perform the following procedure:
a) Connect to the LUN using the MMC Connect Disk wizard on the primary storage system, by completing the procedure described in the section about recovering DataFabric Manager services.
b) Run the dfm service enable command to enable the services.
c) Run the dfm datastore setup -n command to configure DataFabric Manager to use the mirrored data on the LUN.
d) Run the dfm service start command to start DataFabric Manager services.
Note: The dfm datastore mirror connect command does not support shared storage. Therefore, the command should not be used if the primary system is clustered using MSCS.
4. Run the dfdrm mirror resync -r command to resynchronize the mirror relationships so that they are no longer reversed.
5. Run the snapmirror resync command to resynchronize the data at the storage system level, if the SnapMirror relationship was removed during the failback process.
6. Run the dfm host discover command to discover the reversed relationships on the primary site, if they are not discovered already.
7. Run the dfdrm mirror list command to ensure that these relationships are discovered.
8. Run the dfm datastore mirror destroy command to destroy the application dataset created for DataFabric Manager data.
9. Run the dfm datastore mirror setup command to create a new application dataset for DataFabric Manager data.
10. Using the Protection Manager UI, import the SnapMirror relationship already established for DataFabric Manager data into the new application dataset. For more information about how to use Protection Manager to import SnapMirror relationships, see the NetApp Management Console Help.
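The choice in step 3 of the failback procedure hinges on MSCS clustering, because dfm datastore mirror connect does not support shared storage. A small sketch of that decision, with command strings taken from the steps in this section:

```python
def failback_start_commands(mscs_clustered: bool) -> list:
    """Return the CLI sequence for starting DFM with mirrored data (step 3)."""
    if mscs_clustered:
        # Shared storage: connect the LUN via the MMC Connect Disk wizard
        # first, then run the manual command sequence.
        return [
            "dfm service enable",
            "dfm datastore setup -n",
            "dfm service start",
        ]
    # Non-clustered primary: the single combined command suffices.
    return ["dfm datastore mirror connect"]
```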
AutoSupport in DataFabric Manager on page 277
DataFabric Manager logs on page 279
Common DataFabric Manager problems on page 281
How discovery issues are resolved on page 281
Troubleshooting network discovery issues on page 283
Troubleshooting appliance discovery issues with Operations Manager on page 284
How configuration push errors are resolved on page 285
How File Storage Resource Manager (FSRM) issues are resolved on page 285
Issues related to SAN events on page 286
Import and export of configuration files on page 288
How inconsistent configuration states are fixed on page 288
Data ONTAP issues impacting protection on vFiler units on page 288
Reasons for using AutoSupport on page 278
Types of AutoSupport messages in DataFabric Manager on page 278
Protection of private data by using AutoSupport on page 278
Configuring AutoSupport on page 278
Weekly report
Note: If you are using a DataFabric Manager demonstration license, DataFabric Manager does not
Configuring AutoSupport
You can configure AutoSupport using Operations Manager.
1. Click Setup > Options > AutoSupport.
2. On the AutoSupport Settings page, identify the administrator to be designated as the sender of the notification.
3. Specify the type of AutoSupport content that messages should contain.
Note: If this setting is changed from "complete" to "minimal," any complete AutoSupport message not yet sent is cleared from the outgoing message spool, and a notification of this is displayed on the console.
4. Enter the comma-delimited list of recipients for the AutoSupport email notification. Up to five email addresses are allowed, or the list can be left empty.
5. Select Yes to enable AutoSupport notification to NetApp.
6. Specify the type of delivery (HTTP, HTTPS, or SMTP) for AutoSupport notification to NetApp.
Note: By default, AutoSupport uses port 80 for HTTP, 443 for HTTPS, and 25 for SMTP.
7. Enter the number of times the NetApp system tries to resend the AutoSupport notification before giving up, if previous attempts have failed.
8. Enter the time to wait before trying to resend a failed AutoSupport notification.
9. Select Include to include the Performance Advisor AutoSupport data along with the DataFabric Manager AutoSupport data.
10. Select Include to include the Provisioning Manager AutoSupport data along with the DataFabric Manager AutoSupport data.
11. Click Update.
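The default ports in the note above can be captured as a simple lookup table; this is an illustration of the stated defaults, which your environment may override:

```python
# Default AutoSupport delivery ports, per the note above.
DEFAULT_AUTOSUPPORT_PORTS = {"HTTP": 80, "HTTPS": 443, "SMTP": 25}

def autosupport_port(protocol: str) -> int:
    """Return the default port for an AutoSupport delivery protocol."""
    return DEFAULT_AUTOSUPPORT_PORTS[protocol.upper()]
```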
Access to logs on page 280
Accessing the logs through the DataFabric Manager command-line interface on page 280
Access to the SAN log on page 280
Apache and Sybase log rotation in DataFabric Manager on page 280
Access to logs
You can access DataFabric Manager logs by using the Operations Manager GUI or the command-line interface.
You can access the DataFabric Manager logs through the Diagnostics page. To access this page, use the following URL: http://mgmt_station:8080/dfm/diag
In the preceding URL, mgmt_station is the name or IP address of the workstation on which DataFabric Manager is installed. You must scroll down to the Logs section to find the available DataFabric Manager logs.
You can also access the DataFabric Manager logs through the CLI, in different directories depending on whether you are using a Windows or a UNIX workstation:
On a Windows workstation, enter installation_directory\dfm\log.
On a UNIX workstation, enter installation_directory/log.
Communication issues between DataFabric Manager and routers on page 281
E-mail alerts not working in DataFabric Manager on page 281
Related concepts
DataFabric Manager provides a Diagnose Connectivity tool that automates frequently used steps of the troubleshooting process for connectivity issues. Use this tool when you want to troubleshoot discovery problems.
This tool queries the DataFabric Manager database about a selected storage system, runs connectivity tests, and displays the information and test outcomes. The sequence of steps depends on whether the selected storage system is managed or unmanaged. A managed storage system is one that is in the DataFabric Manager database; an unmanaged storage system is one that is not.
Next topics
Use of the Diagnose Connectivity tool for a managed storage system on page 282
Use of the Diagnose Connectivity tool for an unmanaged storage system on page 282
Where to find the Diagnose Connectivity tool in Operations Manager on page 283
Reasons why DataFabric Manager might not discover your network on page 283
command line.
1. Ensure that the Network Discovery Enabled option on the Options page is set to Yes. Also ensure that the router is within the maximum number of hops set in the Network Discovery Limit option.
2. From the command line, run the Diagnose Connectivity tool against the IP address of the router of the network to determine whether DataFabric Manager can communicate with the router through SNMP.
If you changed the IP address for the router, you must change the primary IP address stored in DataFabric Manager on the Edit Settings page. You can also modify the primary IP address by entering the following CLI command:
dfm host set host-id hostPrimaryAddress=ip-address
3. Determine whether an SNMP community string other than the default (public) is required for the network device to which the undiscovered network is attached. To set an SNMP community string in DataFabric Manager, click the Options link (in the Banner area), find Discovery Options, and then click the edit link beside Network Credentials. On the Network Credentials page, click the edit link at the right of the SNMP Community whose string you want to set.
4. If Steps 1 through 3 are not successful, add the network manually by using the Networks To Discover option on the Options page under Discovery Options.
1. Ensure that the Host Discovery Enabled option on the Options page is set to Enabled.
2. Click the edit link of the Networks to Discover option to check whether the network to which this appliance is attached has been discovered. If the network to which this storage system is attached has not been discovered, follow the troubleshooting guidelines.
3. Determine whether an SNMP community string other than the default (public) is required for the network device to which the undiscovered network is attached. To set an SNMP community string in DataFabric Manager, click the Options link (in the Banner area), find Discovery Options, and then click the edit link beside Network Credentials. On the Network Credentials page, click the edit link at the right of the SNMP Community whose string you want to set.
4. If Steps 1 through 3 are not successful, add the network manually by using the Networks to Discover option on the Options page under Discovery Options.
If a path walk takes a long time to complete: A path walk can take many hours to complete. You can monitor the status of a path walk that is in progress from the SRM Path Details page (Control Center > Home > Group Status > File SRM > SRM path name).
Offline FC Switch Port or Offline HBA Port on page 286
Faulty FC Switch Port or HBA Port Error on page 286
Offline LUNs on page 287
Snapshot copy of LUN not possible on page 287
High traffic in HBA Port on page 287
Corrective action:
1. If the port state was not changed by an administrator, see the administration guide for your switch to diagnose the problem.
2. Ensure that the cable is connected securely to the port.
3. Replace the cables.
Offline LUNs
A LUN can go offline if an administrator takes it offline or if its serial number conflicts with that of another LUN.
Causes: A LUN typically goes offline in one of two situations:
An administrator might have changed the LUN status, to perform maintenance or to apply changes to the LUN, such as modifying its size.
The LUN has a serial number that conflicts with that of another LUN.
Corrective action:
1. If an administrator did not change the LUN status, bring the LUN online from the storage system console.
2. Check for a conflicting serial number and resolve the issue.
Troubleshooting in Operations Manager | 289
Issue: If the snapvault.access and snapmirror.access options on the source storage system allow access only to the destination vFiler unit, then relationship creation, scheduled backups, on-demand backups, SnapMirror updates, and SnapMirror resync from DataFabric Manager fail. DataFabric Manager displays the following error message: Request denied by the source storage system. Check access permissions on the source.
Workaround: To allow access to the destination hosting storage system, set the snapmirror.access and snapvault.access options on the source system.
Issue: If the ndmpd.preferred_interfaces option is not set on the source hosting storage system, the backups from DataFabric Manager might not use the correct network interface.
Workaround: Set the ndmpd.preferred_interfaces option on the source hosting storage system.
Issue: The backups and SnapMirror updates from DataFabric Manager fail with the error message Source unknown. This issue occurs when both of the following conditions are met:
A relationship between two vFiler units is imported into DataFabric Manager by autodiscovery or added manually.
The destination hosting storage system is not able to contact the IP address of the source vFiler unit.
Workaround: Ensure that the host name or IP address of the source system that is used to create relationships can be reached from the destination hosting storage system.
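As a sketch of the first workaround above, the access options can be set from the source storage system's console; "dfm_dest" is a placeholder for the destination hosting storage system's host name:

```
source> options snapmirror.access host=dfm_dest
source> options snapvault.access host=dfm_dest
```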
Agent
Event name | Severity
Down | Error
Login Failed | Warning
Login OK | Normal
Up | Normal
Aggregate
Event name | Severity
Almost Full | Warning
Almost Overcommitted | Warning
Deleted | Information
Discovered | Information
Failed | Error
Full | Error
Nearly Over Deduplicated | Warning
Not Over Deduplicated | Normal
Not Overcommitted | Normal
Offline | Error
Online | Normal
Overcommitted | Error
Over Deduplicated | Error
Restricted | Normal
Snapshot Reserve Almost Full | Warning
Alarm
Event name | Severity
Created | Information
Deleted | Information
Modified | Information
CFO Interconnect
Event name | Severity
Down | Error
Not Present | Warning
Partial Failure | Error
Up | Normal
CFO Partner
Event name | Severity
Dead | Warning
May Be Down | Warning
OK | Normal
CFO Settings
Event name | Severity
Disabled | Normal
Enabled | Normal
Not Configured | Normal
Takeover Disabled | Normal
Configuration Changed
Event name | Severity
Config Group | Information
CPU
Event name | Severity
Load Normal | Normal
Too Busy | Warning
Data Protection
Event name | Severity
Job Started | Information
Policy Created | Information
Policy Modified | Information
Schedule Created | Information
Schedule Modified | Information
Database
Event name | Severity
Backup Failed | Error
Dataset
Event name | Severity
Backup Aborted | Warning
Backup Completed | Normal
Backup Failed | Error
Created | Information
Deleted | Information
DR State Ready | Information
DR State Failover Over | Warning
DR State Failed Over | Information
DR State Failover Error | Error
DR Status Normal | Information
DR Status Warning | Warning
DR Status Error | Error
Initializing | Information
Job Failure | Warning
Member Clone Snapshot Discovered | Information
Member Clone Snapshot Status OK | Information
Member Dedupe Operation Failed | Error
Member Dedupe Operation Succeeded | Normal
Member Destroyed | Information
Member Destroy Operation Failed | Information
Member Resized | Information
Member Resize Operation Failed | Information
Event name | Severity
Modified | Information
Protected | Normal
Protection Failed | Error
Protection Lag Error | Error
Protection Lag Warning | Warning
Protection Suspended | Warning
Protection Uninitialized | Normal
Provisioning Failed | Error
Provisioning OK | Normal
Space Status: Normal | Normal
Space Status: Warning | Warning
Space Status: Error | Error
Write Guarantee Check - Member Resize Required | Warning
Write Guarantee Check - Member Size OK | Normal
Dataset Conformance
Event name | Severity
Conformant | Normal
Conforming | Information
Initializing | Information
Nonconformant | Warning
Disks
Event name | Severity
No Spares | Warning
None Failed | Normal
None Reconstructing | Normal
Some Failed | Error
Enclosures
Event name | Severity
Active | Information
Disappeared | Warning
Failed | Error
Found | Normal
Inactive | Warning
OK | Normal
Fans
Event name | Severity
Many Failed | Error
Normal | Normal
One Failed | Error
Filer Configuration
Event name | Severity
Changed | Warning
OK | Normal
Push Error | Warning
Global Status
Event name | Severity
Critical | Critical
Non Critical | Error
Non Recoverable | Emergency
OK | Normal
Other | Warning
Unknown | Warning
HBA Port
Event name | Severity
Offline | Warning
Online | Normal
Port Error | Error
Traffic High | Warning
Traffic OK | Normal
Host
Event name | Severity
Cluster Configuration Error | Error
Cluster Configuration OK | Normal
Cold Start | Information
Deleted | Information
Discovered | Information
Down | Critical
Identity Conflict | Warning
Identity OK | Normal
Event name
Login Failed
Login OK
Modified
Name Changed
SNMP Not Responding
SNMP OK
System ID Changed
Up
Host Agent
Event name | Severity
Down | Error
Up | Normal
Host Agent: Login Failed | Warning
Inodes
Event name | Severity
Almost Full | Warning
Full | Error
Utilization Normal | Normal
Interface Status
Event name | Severity
Down | Error
Testing | Normal
Unknown | Normal
Up | Normal
LUN
Event name | Severity
Offline | Warning
Online | Normal
Snapshot Not Possible | Warning
Snapshot Possible | Normal
Management Station
Event name | Severity
Enough Free Space | Normal
File System File Size Limit Reached | Error
License Expired | Error
License Nearly Expired | Warning
License Not Expired | Normal
Load OK | Normal
Load Too High | Warning
Node Limit Nearly Reached | Warning
Node Limit OK | Normal
Node Limit Reached | Error
Not Enough Free Space | Error
Provisioning Manager Node Limit Nearly Reached | Warning
Provisioning Manager Node Limit Ok | Normal
Provisioning Manager Node Limit Reached | Error
Protection Manager Node Limit Nearly Reached | Warning
Protection Manager Node Limit Ok | Normal
Protection Manager Node Limit Reached | Error
NDMP
Event name | Severity
Credentials Authentication Failed | Warning
Credentials Authentication Succeeded | Normal
Communication Initialization Failed | Warning
Communication Initialization Succeeded | Normal
Down | Warning
Up | Normal
Network
Event name | Severity
OK | Normal
Too Large | Warning
Network Services
Event name | Severity
CIFS Service - Up | Normal
Event name
CIFS Service - Down
NFS Service - Up
NFS Service - Down
iSCSI Service - Up
iSCSI Service - Down
FCP Service - Up
FCP Service - Down
No Schedule Conflict
Event name | Severity
Between Snapshot and SnapMirror Schedules | Normal
Between Snapshot and SnapVault Schedules | Normal
NVRAM Battery
Event name | Severity
Discharged | Error
Fully Charged | Normal
Low | Warning
Missing | Error
Normal | Normal
Old | Warning
Overcharged | Warning
Replace | Error
Unknown Status | Warning
Power Supplies
Event name | Severity
Many Failed | Error
Normal | Normal
One Failed | Error
Primary
Event name | Severity
Host Discovered | Information
Protection Policy
Event name | Severity
Created | Information
Deleted | Information
Modified | Information
Protection Schedule
Event name | Severity
Created | Information
Deleted | Information
Modified | Information
Provisioning Policy
Event name | Severity
Created | Information
Qtree
Event name | Severity
Almost Full | Warning
Files Almost Full | Warning
Files Full | Error
Files Utilization Normal | Normal
Full | Error
Growth Rate Abnormal | Warning
Growth Rate OK | Information
Space Normal | Normal
Resource Group
Event name | Severity
Created | Information
Deleted | Information
Modified | Information
Resource Pool
Event name | Severity
Created | Information
Deleted | Information
Script
Event name | Severity
Critical Event | Critical
Emergency Event | Emergency
Error Event | Error
Information Event | Information
Normal Event | Normal
Warning Event | Warning
SnapMirror
Event name | Severity
Abort Completed | Normal
Abort Failed | Error
Break Completed | Normal
Break Failed | Error
Date OK | Normal
Delete Aborted | Warning
Delete Completed | Information
Delete Failed | Error
Initialize Aborted | Warning
Event name | Severity
Initialize Completed | Normal
Initialize Failed | Error
Nearly Out of Date | Warning
Not Scheduled | Normal
Not Working | Error
Off | Normal
Out of Date | Error
Possible Problem | Warning
Quiesce Aborted | Warning
Quiesce Completed | Normal
Quiesce Failed | Error
Resume Completed | Normal
Resume Failed | Error
Resync Aborted | Warning
Resync Completed | Normal
Resync Failed | Error
Unknown State | Warning
Update Aborted | Warning
Update Completed | Normal
Update Failed | Error
Working | Normal
Snapshot(s)
Event name | Severity
Age Normal | Normal
Age Too Old | Warning
Count Normal | Normal
Count OK | Normal
Event name | Severity
Count Too Many | Error
Created | Normal
Failed | Error
Full | Warning
Schedule Conflicts with the SnapMirror Schedule | Warning
Schedule Conflicts with the SnapVault Schedule | Warning
Schedule Modified | Information
Scheduled Snapshots Disabled | Warning
Scheduled Snapshots Enabled | Normal
SnapVault
Event name | Severity
Backup Aborted | Warning
Backup Completed | Information
Backup Failed | Error
Host Discovered | Information
Relationship Create Aborted | Warning
Relationship Create Completed | Information
Relationship Create Failed | Error
Relationship Delete Aborted | Warning
Relationship Delete Completed | Information
Relationship Delete Failed | Error
Relationship Discovered | Information
Relationship Modified | Information
Replica Date OK | Normal
Replica Nearly Out of Date | Warning
Replica Out of Date | Error
Restore Aborted | Warning
Sync
Event name | Severity
SnapMirror In Sync | Information
SnapMirror Out of Sync | Warning
Temperature
Event name | Severity
Hot | Critical
Normal | Normal
Unprotected Item
Event name | Severity
Discovered | Information
vFiler Unit
Event name | Severity
Deleted | Information
Discovered | Information
Hosting Storage System Login Failed | Warning
IP Address Added | Information
IP Address Removed | Information
Renamed | Information
Storage Unit Added | Information
Storage Unit Removed | Information
Volume
Event name | Severity
Almost Full | Warning
Automatically Deleted | Information
Autosized | Information
Clone Deleted | Information
Clone Discovered | Information
Destroyed | Information
First Snapshot OK | Normal
Full | Error
Growth Rate Abnormal | Warning
Growth Rate OK | Normal
Maxdirsize Limit Nearly Reached | Information
Maxdirsize Limit Reached | Information
Nearly No Space for First Snapshot | Warning
Nearly Over Deduplicated | Warning
New Snapshot | Normal
Next Snapshot Not Possible | Warning
Next Snapshot Possible | Normal
No Space for First Snapshot | Warning
Not Over Deduplicated | Normal
Offline | Warning
Offline or Destroyed | Warning
Online | Normal
Over Deduplicated | Error
Event name | Severity
Quota Overcommitted | Error
Quota Almost Overcommitted | Warning
Restricted | Restricted
Snapshot Automatically Deleted | Information
Snapshot Deleted | Normal
Space Normal | Normal
Space Reserve Depleted | Error
Space Reservation Nearly Depleted | Error
Space Reservation OK | Normal
Report Fields and Performance Counters for Filer Catalogs on page 313
Report Fields and Performance Counters for vFiler Catalogs on page 315
Report Fields and Performance Counters for Volume Catalogs on page 316
Report Fields and Performance Counters for Qtree Catalogs on page 318
Report Fields and Performance Counters for LUN Catalogs on page 318
Report Fields and Performance Counters for Aggregate Catalogs on page 319
Report Fields and Performance Counters for Disk Catalogs on page 320
Field | Name/description | Performance counter
Filer.NFSv3Avglatency | Storage System NFSv3 Avg Latency (millisec) | nfsv3:nfsv3_avg_op_latency
Filer.NFS4Avglatency | Storage System NFSv4 Avg Latency (millisec) | nfsv4:nfsv4_avg_op_latency
Filer.CPUBusy | Storage System CPU Busy (%) | system:cpu_busy
Filer.iSCSIReadOps | Storage System iSCSI Read Ops/Sec | iscsi:iscsi_read_ops
Filer.iSCSIWriteOps | Storage System iSCSI Write Operations | iscsi:iscsi_write_ops
Filer.CIFSLatency | Storage System CIFS Latency (millisec) | cifs:cifs_latency
Filer.NFSReadLatency | Storage System NFS Read Latency (millisec) | nfsv3:nfsv3_read_latency
Filer.NFSWriteLatency | Storage System NFS Write Latency (millisec) | nfsv3:nfsv3_write_latency
Filer.iSCSIReadLatency | Storage System iSCSI Read Latency (millisec) | iscsi:iscsi_read_latency
Filer.iSCSIWriteLatency | Storage System iSCSI Write Latency (millisec) | iscsi:iscsi_write_latency
Filer.FCPReadLatency | Storage System FCP Read Latency (millisec) | fcp:fcp_read_latency
Filer.FCPWriteLatency | Storage System FCP Write Latency (millisec) | fcp:fcp_write_latency
Filer.NASThroughput | Storage System NAS Throughput (KB/Sec) | system:nas_throughput
Filer.SANThroughput | Storage System SAN Throughput (KB/Sec) | system:san_throughput
Filer.DiskThroughput | Storage System Disk Throughput (KB/Sec) | system:disk_throughput
Filer.NetThroughput | Storage System Network Throughput (MB/Sec) | system:load_total_mbps
Filer.LoadInboundMbps | Storage System Total Data Received (MB/Sec) | system:load_inbound_mbps
Filer.LoadOutboundMbps | Storage System Total Data Sent (MB/Sec) | system:load_outbound_mbps
Field | Name/description | Performance counter
Filer.NetDataSent | Storage System Network Data Sent (KB/Sec) | system:net_data_sent
Filer.NetDataRecv | Storage System Network Data Receive (KB/Sec) | system:net_data_recv
Filer.LoadReadBytesRatio | Storage System Ratio of disk data read and load outbound | system:load_read_bytes_ratio
Filer.LoadWriteBytesRatio | Storage System Ratio of disk data write and load inbound | system:load_write_bytes_ratio
Filer.DiskDataRead | Storage System Disk Data Read (KB/Sec) | system:disk_data_read
Filer.DiskDataWritten | Storage System Disk Data Written (KB/Sec) | system:disk_data_written
Filer.FCPWriteData | Storage System FCP Write Data (B/Sec) | fcp:fcp_write_data
Filer.FCPReadData | Storage System FCP Read Data (B/Sec) | fcp:fcp_read_data
Filer.iSCSIWriteData | Storage System iSCSI Write Data (B/Sec) | iscsi:iscsi_write_data
Filer.iSCSIReadData | Storage System iSCSI Read Data (B/Sec) | iscsi:iscsi_read_data
Filer.ProcessorBusy | Storage System Processor Busy (%) | system:avg_processor_busy
Filer.NFSLatency | Storage System NFS Latency (millisec) | nfsv3:nfsv3_avg_op_latency
Filer.PerfViolationCount | Storage System Perf Threshold Violation Count | Not Applicable
Filer.PerfViolationPeriod | Storage System Perf Threshold Violation Period (Sec) | Not Applicable
Field | Name/description | Performance counter
vFiler.TotalOps | vFiler Total Ops/Sec | vfiler:vfiler_total_ops
vFiler.ReadOps | vFiler Read Ops/Sec | vfiler:vfiler_read_ops
vFiler.WriteOps | vFiler Write Ops/Sec | vfiler:vfiler_write_ops
vFiler.MiscOps | vFiler Miscellaneous Ops/Sec | vfiler:vfiler_misc_ops
vFiler.NetThroughput | vFiler Network Throughput (KB/Sec) | vfiler:vfiler_nw_throughput
vFiler.ReadBytes | vFiler Number of Bytes Read (KB/Sec) | vfiler:vfiler_read_bytes
vFiler.WriteBytes | vFiler Number of Bytes Write (KB/Sec) | vfiler:vfiler_write_bytes
vFiler.NetDataRecv | vFiler Network Data Received (KB/Sec) | vfiler:vfiler_net_data_recv
vFiler.NetDataSent | vFiler Network Data Sent (KB/Sec) | vfiler:vfiler_net_data_sent
vFiler.DataTransferred | vFiler Total Data Transferred (KB/Sec) | vfiler:vfiler_data_transferred
vFiler.PerfViolationCount | vFiler Perf Threshold Violation Count | Not Applicable
vFiler.PerfViolationPeriod | vFiler Perf Threshold Violation Period (Sec) | Not Applicable
Field                     Name/description                       Performance counter
Volume.SANOtherOps        Volume SAN Other Ops/Sec               volume:san_other_ops
Volume.ReadOps            Volume Read Ops/Sec                    volume:read_ops
Volume.WriteOps           Volume Write Ops/Sec                   volume:write_ops
Volume.OtherOps           Volume Other Ops/Sec                   volume:other_ops
Volume.NFSReadOps         Volume NFS Read Ops/Sec                volume:nfs_read_ops
Volume.NFSWriteOps        Volume NFS Write Ops/Sec               volume:nfs_write_ops
Volume.NFSOtherOps        Volume NFS Other Ops/Sec               volume:nfs_other_ops
Volume.CIFSReadOps        Volume CIFS Read Ops/Sec               volume:cifs_read_ops
Volume.CIFSWriteOps       Volume CIFS Write Ops/Sec              volume:cifs_write_ops
Volume.CIFSOtherOps       Volume CIFS Other Ops/Sec              volume:cifs_other_ops
Volume.FlexCacheReadOps   Volume FlexCache Read Ops/Sec          volume:flexcache_read_ops
Volume.FlexCacheWriteOps  Volume FlexCache Write Ops/Sec         volume:flexcache_write_ops
Volume.FlexCacheOtherOps  Volume FlexCache Other Ops/Sec         volume:flexcache_other_ops
Volume.Latency            Volume Latency (millisec)              volume:avg_latency
Volume.CIFSLatency        Volume CIFS Latency (millisec)         volume:cifs_latency
Volume.NFSLatency         Volume NFS Latency (millisec)          volume:nfs_latency
Volume.SANLatency         Volume SAN Latency (millisec)          volume:san_latency
Volume.ReadLatency        Volume Read Latency (millisec)         volume:read_latency
Volume.WriteLatency       Volume Write Latency (millisec)        volume:write_latency
Volume.OtherLatency       Volume Other Latency (millisec)        volume:other_latency
Volume.CIFSReadLatency    Volume CIFS Read Latency (millisec)    volume:cifs_read_latency
Volume.CIFSWriteLatency   Volume CIFS Write Latency (millisec)   volume:cifs_write_latency
Volume.CIFSOtherLatency   Volume CIFS Other Latency (millisec)   volume:cifs_other_latency
Volume.SANReadLatency     Volume SAN Read Latency (millisec)     volume:san_read_latency
Volume.SANWriteLatency    Volume SAN Write Latency (millisec)    volume:san_write_latency
Volume.SANOtherLatency    Volume SAN Other Latency (millisec)    volume:san_other_latency
Field                       Name/description                              Performance counter
Volume.NFSReadLatency       Volume NFS Read Latency (millisec)            volume:nfs_read_latency
Volume.NFSWriteLatency      Volume NFS Write Latency (millisec)           volume:nfs_write_latency
Volume.NFSOtherLatency      Volume NFS Other Latency (millisec)           volume:nfs_other_latency
Volume.DataThroughput       Volume Throughput (KB/Sec)                    volume:throughput
Volume.PerfViolationCount   Volume Perf Threshold Violation Count         Not Applicable
Volume.PerfViolationPeriod  Volume Perf Threshold Violation Period (Sec)  Not Applicable
Field                    Name/description                           Performance counter
LUN.ReadOps              LUN Read Ops/Sec
LUN.WriteOps             LUN Write Ops/Sec
LUN.OtherOps             LUN Other Ops/Sec
LUN.Latency              LUN Latency (millisec)
LUN.Throughput           LUN Throughput (KB/Sec)
LUN.ReadData             LUN Read Data (KB/Sec)
LUN.WriteData            LUN Write Data (KB/Sec)
LUN.PerfViolationCount   LUN Perf Threshold Violation Count         Not Applicable
LUN.PerfViolationPeriod  LUN Perf Threshold Violation Period (Sec)  Not Applicable
Aggregate Perf Threshold Violation Count         Not Applicable
Aggregate Perf Threshold Violation Period (Sec)  Not Applicable
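The field-to-counter mappings in the preceding tables are mechanical enough to encode directly. A minimal sketch, using a handful of entries taken from the tables above (the function name and dictionary are illustrative, not part of the product):

```python
from typing import Optional

# A few report-field-to-counter mappings taken from the tables above.
# Fields whose counter column reads "Not Applicable" are simply absent.
PERF_COUNTER_BY_FIELD = {
    "Filer.NetDataSent": "system:net_data_sent",
    "Filer.ProcessorBusy": "system:avg_processor_busy",
    "vFiler.TotalOps": "vfiler:vfiler_total_ops",
    "Volume.ReadOps": "volume:read_ops",
    "Volume.Latency": "volume:avg_latency",
}

def counter_for(field: str) -> Optional[str]:
    """Return the Data ONTAP performance counter backing a report field,
    or None for fields with no backing counter (Not Applicable)."""
    return PERF_COUNTER_BY_FIELD.get(field)
```

For example, `counter_for("Volume.ReadOps")` yields `volume:read_ops`, while the threshold-violation fields resolve to None.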
DataFabric Manager server communication on page 321 DataFabric Manager access to storage systems on page 321 DataFabric Manager access to host agents on page 322 DataFabric Manager access to Open Systems SnapVault agents on page 322
You can restart the HTTP service from the CLI. Use the following command to restart the HTTP service: dfm service start http.
SAN management
You can use DataFabric Manager to monitor and manage components of your NetApp storage area networks (SANs), such as logical unit numbers (LUNs), Fibre Channel (FC) switches, and Windows and UNIX SAN hosts. The NetApp SANs are storage networks that have been installed in compliance with the "SAN setup guidelines" by NetApp. For information about setting up a NetApp SAN, see the Data ONTAP Block Access Management Guide for iSCSI and FC.
Note: NetApp has announced the end of availability for the SAN license for DataFabric Manager.
Existing customers can continue to license the SAN option with DataFabric Manager. DataFabric Manager customers should check with their sales representative regarding other SAN management solutions.
Next topics
Discovery of SAN hosts by DataFabric Manager on page 323 SAN management using DataFabric Manager on page 324 Reports for monitoring SANs on page 327 DataFabric Manager options on page 335 DataFabric Manager options for SAN management on page 335 How SAN components are grouped on page 338
Related information
Data ONTAP Block Access Management Guide for iSCSI and FC http://now.netapp.com/NOW/knowledge/docs/san/#ontap_san
You can specify the protocol to use for communication in DataFabric Manager and when you install the NetApp Host Agent software on your SAN host. By default, both the Host Agent and DataFabric Manager are configured to use HTTP.
Note: If you choose to use HTTPS for communication between DataFabric Manager and a SAN
host, ensure that both DataFabric Manager and the NetApp Host Agent software on the SAN host are configured to use HTTPS. If the Host Agent is configured to use HTTP and DataFabric Manager is configured to use HTTPS, communication between the SAN host and DataFabric Manager does not occur. Conversely, if the NetApp Host Agent software is configured to use HTTPS and DataFabric Manager is configured to use HTTP, communication between the two occurs, but over HTTP. For more information about the NetApp Host Agent software, see the NetApp Host Agent Installation and Administration Guide.
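The compatibility rules in this note amount to a small truth table. A sketch of that logic (the function and its return convention are illustrative, not part of the product):

```python
def effective_transport(dfm: str, host_agent: str):
    """Protocol actually used between DataFabric Manager and a NetApp
    Host Agent, given each side's configured transport.

    Returns "http" or "https" when communication occurs, or None when it
    does not (Host Agent on HTTP while DataFabric Manager uses HTTPS).
    """
    dfm, host_agent = dfm.lower(), host_agent.lower()
    if dfm == host_agent:
        return dfm                # both sides agree on the protocol
    if dfm == "https":
        return None               # HTTPS-only server, HTTP-only agent
    return "http"                 # agent allows HTTPS, server uses HTTP
```

For example, `effective_transport("HTTP", "HTTPS")` reflects the fallback described above: communication occurs, but HTTP is used.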
Related information
Prerequisites for SAN management with DataFabric Manager on page 324 List of tasks performed for SAN management on page 326 List of user interface locations to perform SAN management tasks on page 326
For NetApp storage systems (targets)
DataFabric Manager does not report any data for your SAN if the SAN is not set up according to the guidelines specified by NetApp. SAN deployments are supported on specific hardware platforms running Data ONTAP 6.3 or later. For information about the supported hardware platforms, see the SAN Configuration Guide at http://now.netapp.com/.
For FC switches
To enable discovery of FC switches, the following settings must be enabled:
discoverEnabled (available from the CLI only)
Host Discovery (Setup > Options > Edit Options: Discovery)
SAN Device Discovery (Setup > Options > Edit Options: Discovery)
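As a quick check of the rule above, FC switch discovery happens only when all three of the listed settings are enabled. A trivial sketch (function and parameter names are illustrative; the setting names come from the list above):

```python
def fc_switch_discovery_enabled(discover_enabled: bool,
                                host_discovery: bool,
                                san_device_discovery: bool) -> bool:
    """FC switch discovery requires discoverEnabled, Host Discovery,
    and SAN Device Discovery to all be enabled."""
    return all((discover_enabled, host_discovery, san_device_discovery))
```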
DataFabric Manager can discover and monitor only FC switches (specifically, Brocade SilkWorm switches) configured in a SAN set up as specified in the SAN Setup Overview for FCP guide.
Note: For a list of supported Brocade switches, see the SAN Configuration Guide at
http://now.netapp.com/.
All FC switches to be managed by DataFabric Manager must be connected to a TCP/IP network either known to or discoverable by DataFabric Manager. The FC switches must be connected to the network through an Ethernet port and must have a valid IP address.
Certain FC switch monitoring reports in DataFabric Manager require that the storage systems connected to an FC switch run Data ONTAP 6.4 or later. For example, a report displaying storage systems that are connected to an FC switch displays only storage systems that are running Data ONTAP 6.4 or later.
For SAN hosts (initiators)
All SAN hosts to be managed by DataFabric Manager must be connected to a TCP/IP network either known to or discoverable by DataFabric Manager. The SAN hosts must be connected to the network through an Ethernet port and must each have a valid IP address.
Each SAN host must have the NetApp Host Agent software installed on it. The NetApp Host Agent software is required for discovering, monitoring, and managing SAN hosts. For more information about the Host Agent software, see the NetApp Host Agent Installation and Administration Guide.
The Windows SAN hosts must have the proper version of SnapDrive software installed for LUN management by using DataFabric Manager. To find out which SnapDrive version you must have installed, see the DataFabric Manager software download pages at http://now.netapp.com/.
Note: LUN management on UNIX SAN hosts by using DataFabric Manager is not currently available.
Related information
Options link (Setup > Options): To enable and disable the discovery of SAN components and to change the monitoring intervals for FC switches, LUNs, and SAN hosts in the DataFabric Manager database
Events tab (Control Center > Home > Group Status > Events): To view events for SAN components
Alarms link (Control Center > Home > Group Status > Alarms): To configure alarms for SAN events
Location of SAN reports on page 327 DataFabric Manager managed SAN data in spreadsheet format on page 329 Where to find information for specific SAN components on page 329 Where to view LUN details of SAN components on page 329 Tasks performed on the LUN Details page for a SAN host on page 329 Information about FCP Target on a SAN host on page 330 Information about FCP switch of a SAN host on page 331 Access to the FC Switch Details page on page 331 Information about FC Switch on a SAN host on page 331 Tasks performed on the FC Switch Details page for a SAN host on page 331 Information about Host Agent on a SAN host on page 332 Accessing the HBA Port Details page for a SAN host on page 332 Details on the HBA Port Details page on page 333 List of SAN management tasks on page 333 LUN management on page 333 Initiator group management on page 334 FC switch management on page 335
LUNs, Comments
Fibre Channel Switches, All
Fibre Channel Switches, Deleted
Fibre Channel Switches, Comments
Fibre Channel Switches, Compact
Fibre Channel Switches, Down
Fibre Channel Switch Environmentals
Fibre Channel Switch Locations
Fibre Channel Switch Firmware
Fibre Channel Switches, Up
Fibre Channel Switches, Uptime
Fibre Channel Switch Ports
Fibre Channel Links, Physical
Fibre Channel Links, Logical
FCP Targets
HBA Ports, All
HBA Ports, FCP
HBA Ports, iSCSI
SAN Hosts, Comments
SAN Hosts, All
SAN Hosts, FCP
SAN Hosts, iSCSI
SAN Hosts, Deleted
SAN Hosts Traffic, FCP
SAN Host Cluster Groups
SAN Host LUNs, All
SAN Host LUNs, iSCSI
SAN Host LUNs, FCP
LUNs, All
LUNs, Comments
LUNs, Deleted
LUNs, Unmapped
LUN Statistics
LUN Initiator Groups
Initiator Groups
The LUN Details page provides the following details:
The status and size of a LUN
Events associated with the LUN
All groups to which the LUN belongs
In addition, you can obtain graphs of information about the SAN components, access the management tools for these components, and view events associated with these components.
Refresh Monitoring Samples: Obtains current monitoring samples from the storage system on which this LUN exists
FilerView: Launches FilerView, the Web-based UI of the storage system on which the LUN exists
Manage LUNs with FilerView: Launches FilerView and displays a page where you can manage the LUN
Run a Command: Runs a Data ONTAP command on the storage system on which this LUN exists
Note: You must have appropriate authentication set up to run commands on the storage system.
Specifics about the target, such as hardware version, firmware version, and speed of the target
FC topology of the target. Topology can be one of the following:
Fabric
Point-To-Point
Loop
Unknown
WWNN and WWPN of the target Other FCP targets on the storage system on which the target is installed
Time of the last sample collected and the configured polling interval for the FCP target
Number of devices connected to the switch and a link to a report that lists those devices
The DataFabric Manager groups to which the FC switch belongs
Time when the last data sample was collected and the configured polling interval for the switch
Graph of FC traffic per second on the switch. You can view the traffic over a period of one day, one week, one month, one quarter (three months), or one year.
The Tools list on the FC Switch Details page enables you to select the following tasks to perform for the switch whose Details page you are on:
Edit Settings: Displays the Edit FC Switch Settings page, where you configure the login and password information in DataFabric Manager for the switch. DataFabric Manager requires the login and password information to connect to a switch using the Telnet program.
Refresh Monitoring Samples: Obtains current monitoring samples from the FC switch.
Invoke FabricWatch: Connects you to FabricWatch, the Web-based UI of the SilkWorm switch. You might want to connect to FabricWatch to manage and configure the switch.
Run Telnet: Connects to the CLI of the switch using the Telnet program. DataFabric Manager requires the login and password information to connect to a switch using the Telnet program.
2. Click on the name of the HBA port. The HBA Port Details page is displayed.
LUN management
You can manage a LUN in two ways by using DataFabric Manager.
Use the LUN management options available in DataFabric Manager. The Host Agent Details page provides a Create a LUN option in the Tools list to create LUNs. When you select the Create a LUN option, a wizard is launched that steps you through the process of creating a LUN. The LUN Details page provides the following LUN management options in the Tools list: Expand this LUN and Destroy this LUN. These options launch wizards specific to their function; the wizards step you through the process of expanding or destroying a LUN.
By using a wizard available in the Tools list on the Host Agent Details page and the LUN Details page, you can create, expand, and destroy a LUN. Before you run the wizard, ensure the following:
The SAN host management options are appropriately set on the Options page or the Edit Host Agent Settings page.
To manage a shared LUN on an MSCS cluster, you perform the operation on the active node of the cluster. Otherwise, the operation fails.
Connect to FilerView, the Web-based management interface of your storage system. The LUN Details page provides a Manage LUNs with FilerView option in the Tools list. This option enables you to access FilerView, the Web-based UI of your storage system. You can perform the following LUN management functions from FilerView:
Add or delete a LUN
Modify configuration settings such as the size or status of a LUN
Map a LUN to or unmap a LUN from initiator groups
Create or delete initiator groups
The Tools list on the LUN Details page displays two options for FilerView. The Invoke FilerView option connects you to the main window of the UI of your storage system and the Manage LUNs with FilerView option connects you directly to the Manage LUNs window. LUNs inherit access control settings from the storage system, volume, and qtree they are contained in. Therefore, to perform LUN operations on storage systems, you must have appropriate privileges set up on those storage systems.
FC switch management
The FC Switch Details page provides the Invoke FabricWatch option in the Tools list. You can use this option to connect to FabricWatch, the Web-based management tool for the Brocade SilkWorm switches.
Examples of DataFabric Manager objects are storage systems, LUNs, FC switches, and user quotas. When DataFabric Manager is installed, these options are assigned default values; however, you can change these values. The options can be changed globally, to apply to all objects in the DataFabric Manager database, or locally, to apply to a specific object or a group of objects in the DataFabric Manager database. Some options can be set globally, but not locally. When both global and local options are specified for an object, the local options override the global options.
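The precedence rule above reads naturally as a fallback lookup. A minimal sketch (names are illustrative, not part of the product):

```python
def effective_option(global_value, local_value=None):
    """Resolve an option value per the precedence rule above: a local
    (object- or group-level) setting overrides the global one; otherwise
    the global default applies."""
    return global_value if local_value is None else local_value
```

For example, an object with no local setting inherits the global value, while a group-level override wins for its members.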
For SAN Host, 5 minutes
Global and local options:
Host Agent Login: This option specifies the user name that is used to authenticate to the NetApp Host Agent software for SAN monitoring and management. By default, SAN monitoring is enabled; therefore, the user name guest is used. If you want to enable SAN management in addition to monitoring, you must select the user name admin.
Host Agent Monitoring Password: This option specifies the password that is used for the user name guest to authenticate to the Host Agent software for SAN monitoring. By default, public is used as the password; however, you can change it. If you change the password in DataFabric Manager, ensure that you change the password to the same value in the Host Agent software running on the SAN hosts. Otherwise, DataFabric Manager is not able to communicate with the SAN host.
Host Agent Management Password: This option specifies the password that is used for the user name admin to authenticate to the NetApp Host Agent software for SAN monitoring and management. There is no default value for the management password; therefore, you must specify a value for this option before you can use the LUN management features through DataFabric Manager. The password you specify for this option must match the password specified in the Host Agent software running on the SAN hosts. Otherwise, DataFabric Manager is not able to communicate with the SAN host.
Host Agent Administration Transport: This option specifies the protocol, HTTP or HTTPS, used to connect to the NetApp Host Agent software. By default, this option is set to HTTP.
Host Agent Administration Port: This option specifies the port that is used to connect to the NetApp Host Agent software. By default, port 4092 is used for HTTP and port 4093 for HTTPS.
HBA Port Too Busy Threshold: This threshold specifies the value, as a percentage, at which an HBA port has so much incoming and outgoing traffic that its optimal performance is hindered.
If this threshold is crossed, DataFabric Manager generates an HBA Port Traffic High event. By default, this threshold is set to 90 for all HBA ports.
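The threshold behavior can be sketched as follows. The utilization formula (combined traffic as a percentage of port capacity) is an assumption for illustration; the default threshold of 90 percent and the resulting HBA Port Traffic High event are described above:

```python
def hba_port_traffic_high(in_mbps: float, out_mbps: float,
                          capacity_mbps: float,
                          threshold_pct: float = 90.0) -> bool:
    """Return True when an HBA port's combined incoming and outgoing
    traffic crosses the HBA Port Too Busy Threshold, the condition under
    which DataFabric Manager generates an HBA Port Traffic High event.

    The percentage-of-capacity formula is assumed for illustration.
    """
    utilization_pct = 100.0 * (in_mbps + out_mbps) / capacity_mbps
    return utilization_pct > threshold_pct
```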
Next topics
Where to configure monitoring intervals for SAN components on page 337 Deleting and undeleting SAN components on page 337
Reasons for deleting and undeleting SAN components on page 337 Process of deleting SAN components on page 338 Process of undeleting SAN components on page 338
only from that group; DataFabric Manager does not stop collecting and reporting data about it. You must delete the SAN component from the Global group for DataFabric Manager to stop monitoring it altogether.
Restriction of SAN management access on page 338 Access control on groups of SAN components on page 339
Glossary
backup relationship: A persistent association between a primary directory and a secondary volume for disk-based data backup and restore using the Data ONTAP SnapVault feature.
baseline transfer: An initial backup (also known as a level-0 backup) of a primary directory to a secondary volume in which the entire contents of the primary directory are transferred.
broken-off mirror: A mirror that no longer replicates data from a source volume or qtree to a destination volume or qtree.
Note: The destination is readable and writable; therefore, the destination can be used or can be resynchronized to the source.
connection: A connection is the physical path used to replicate data from a source storage system or vFiler unit to a destination storage system or vFiler unit. You can define up to two paths for a particular SnapMirror relationship. You can also set the connection mode for SnapMirror relationships that use multiple paths. See Managing connections for more details.
counter: A statistical measurement of activity on a storage system or storage subsystem that is provided by Data ONTAP.
DataFabric Manager: DataFabric Manager provides infrastructure services such as discovery, monitoring, role-based access control, auditing, and logging for products in the NetApp storage and data suites. DataFabric Manager software runs on a separate server; it does not run on the storage systems. DataFabric Manager has a CLI for scripting commands that might otherwise be performed using a Web-based user interface, known as Operations Manager.
dataset: A collection of storage sets along with configuration information associated with data. The storage sets associated with a dataset include a primary storage set used to export data to clients, and the set of replicas and archives that exist on other storage sets. Datasets represent exportable user data.
destination volume or qtree: A read-only volume or qtree to which you are replicating data from a source volume or qtree. The destination volume or qtree is usually on a storage system or vFiler unit that is separate from the storage system on which the original volume or qtree resides. Users access the destination volumes and qtrees only in the following cases:
A disaster takes down the source volumes or qtrees
The administrator uses SnapMirror commands to make the replicated data at the destination accessible and writable
failover and failback: Failover is the process of making the standby DataFabric Manager server, at the secondary site, operational. Failback is the process of restoring the dataset to its original operational state.
historical data: Data that is archived by the performance monitoring server on the DataFabric Manager server. All the data that is included in the Performance Advisor default views is also archived as historical data. The historical data is accessible to any Performance Advisor that can connect to the workstation. Historical data is collected on an ongoing basis, independent of whether a client has the associated performance view open or not. Historical data can be used for diagnosing past performance problems or for short-term trend analysis.
host: A workstation or server running the NetApp Host Agent software is called a host. If the host has SAN hardware, it can also be referred to as a SAN host. If you are collecting file-system data from a host, it can be referred to as an SRM host.
hosting storage system: The physical storage system on which one or more vFiler units are configured. Some counters that Performance Advisor tracks apply to both storage systems and vFiler units. Other counters (for example, CPU usage) apply only to storage systems and the associated host of a vFiler unit. The hosting storage system is also referred to as the vFiler host.
incremental transfer: A subsequent backup, after a baseline transfer has occurred, of a primary directory in which only the new and changed data since the last backup (baseline or incremental) is transferred. The transfer time of incremental transfers can be significantly less than the baseline transfer.
logical objects: Object types that represent storage containers, such as volumes, qtrees, LUNs, vFiler units, and datasets, are known as logical objects.
logical hierarchy: The hierarchy that displays only the logical objects and instances when selected.
managed object: A managed object represents any object that has an identity and a name in the DataFabric Manager object table. A managed object is an object that is contained within a DataFabric Manager group. Volumes, aggregates, qtrees, and LUNs are examples of managed objects.
NetApp Management Console: NetApp Management Console is the client platform for the Java-based NetApp Management Software applications. NetApp Management Console runs on a Windows or a Linux workstation, separate from the system on which the DataFabric Manager server is installed.
object: Typically there is an object associated with each hardware or software subsystem within Data ONTAP. Examples of hardware objects are processor, disk, NVRAM, and networking card objects. FCP, iSCSI, CIFS, and NFS are examples of software protocol objects. WAFL, RAID, and target are examples of internal objects specific to Data ONTAP. Virtual objects like the system object capture key statistics across all the other objects in one single place. Examples of system objects are avg_processor_busy, nfs_ops, cifs_ops, and net_data_recv.
Open Systems platform: A server running AIX, Solaris, HP-UX, Linux, Windows NT, Windows 2000, or Windows 2003, whose data can be backed up to a SnapVault secondary storage system.
Operations Manager: Operations Manager is the Web-based user interface of DataFabric Manager, from which you can monitor and manage multiple storage systems, clusters, and other appliances. Operations Manager is used for day-to-day monitoring, alerting, and reporting on storage infrastructure.
Performance Advisor: The Performance Advisor component installed on the NetApp Management Console platform enables you to monitor the performance of storage systems and vFiler units. The user interface of Performance Advisor contains only performance monitoring information. This Performance Advisor interface is distinct from Operations Manager, which contains other DataFabric Manager information.
physical objects: Object types that represent the physical resources in a storage system, such as disks, aggregates, memory, network interfaces, resource pools, and RAID groups, are known as physical objects.
physical hierarchy: The hierarchy that displays only the physical objects and instances when selected.
policy: A collection of configuration settings that you can apply to multiple SnapMirror relationships. You can decide on the following using a policy:
How a SnapMirror relationship replicates data from a source storage system or vFiler unit to a destination storage system or vFiler unit
What a SnapMirror relationship does if the storage system or vFiler unit fails
primary directory: A directory on a primary storage system that is to be backed up.
primary storage system: A system whose data is to be backed up.
primary system qtree: A qtree on a primary storage system whose data is backed up to a secondary qtree on a secondary storage system.
quiesced mirror: A mirror that is in a stable state; that is, no data transfers are occurring, and any future data transfers to the destination volume or qtree are blocked. You can quiesce only volumes and qtrees that are online and that are SnapMirror destinations. You cannot quiesce a restricted or offline volume or a qtree in a restricted or offline volume.
resource pool: A managed object in DataFabric Manager, containing storage provisioning resources such as storage systems, aggregates, and spare disks.
secondary storage system: A storage system or NearStore to which data is backed up.
secondary system qtree: A qtree on a secondary storage system to which data from a primary qtree on a primary storage system is backed up.
secondary volume: A volume on a secondary storage system to which data is backed up from one or more primary directories. The primary directories being backed up to a secondary volume might exist on one or many primary storage systems.
snapshot-based backup: A backup of DataFabric Manager data stored as a volume Snapshot copy having an .sndb extension.
SnapMirror relationship: The replication relationship between a source and destination storage system or vFiler unit by using the Data ONTAP SnapMirror feature.
SnapVault: SnapVault is a disk-based storage backup feature of Data ONTAP that enables data stored on multiple storage systems to be backed up to a central, secondary storage system quickly and efficiently as read-only Snapshot copies. For detailed information about SnapVault and Snapshot copies, see the Data ONTAP Data Protection Online Backup and Recovery Guide.
SnapVault baseline transfer: An initial complete backup of a primary storage qtree or an Open Systems platform directory to a corresponding qtree on the secondary storage system.
SnapVault relationship: The backup relationship between a qtree on a primary system or a directory on an Open Systems primary platform and its corresponding secondary system qtree.
SnapVault incremental transfer: A follow-up backup to the secondary storage system that contains only the changes to the primary storage data between the current and last transfer actions.
source volume or qtree: A writable volume or qtree whose data is to be replicated. The source volumes and qtrees are the objects normally visible, accessible, and writable by the storage system clients.
storage set: Containers that are used for delegation, replication, and in some cases, sub storage provisioning. The only container of merit in a storage set is a volume (flexible or traditional). A storage set contains a group of volumes, whereas a volume should be in at most one storage set.
storage system: An appliance that is attached to a computer network and is used for data storage. FAS appliances and NearStore systems are examples of storage systems.
unmanaged object: Objects apart from the managed objects belong to the class of unmanaged objects. An unmanaged object does not have a unique identity in the DataFabric Manager object table.
vFiler unit: One or more virtual storage systems that can be configured on a single physical storage system licensed for the MultiStore feature. DataFabric Manager 3.4 and later enables monitoring and management of vFiler units.
Index
A
access check, for application administrator 49
access control
  configuring vFiler units 50
  on groups of SAN components 339
  precedence of, global and group 56
access roles
  manage configuration files 218
accessing CLI 255
accessing DataFabric Manager CLI 255
Active Directory user group accounts 52
active/active configuration
  managing with DataFabric Manager 207
add
  administration access for host agents 145
  administration passwords, host agents 141
  automapping SRM paths 150
  SRM paths 145
adding primary, secondary storage systems 230
Administration Transport option 130
Administrator access
  managing 60
administrators
  accounts, default accounts, everyone account 51
  creating accounts 59
  types of access controls 59
Aggregate Full threshold 179
Aggregate Nearly Full threshold 178
Aggregate Nearly Overcommitted threshold 179
Aggregate Overcommitted threshold 178, 179
aggregates
  capacity graphs 178
  chargeback (billing). See storage chargeback 193
  historical data 178
  name format 178
  relation to traditional volume 182
aggregates space availability 178
alarm notification
  customize 97
alarms
  acknowledging 98
  alerts and alarms, differences 99
  configuration guidelines 95
  creating 96
  e-mail messages 99, 102
  SNMP traps as 90
  testing 96
alerts
  See user alerts 100
annual charge rate, storage chargeback reports 197
Appliance Details page
  tasks 205
appliance management. See appliances, managing 201
Appliance Tools 204
appliances
  commands, running 211
  configuration changes 210
  console, connecting 207
  grouping 202
  managing, administrative access 205
archived reports
  Report Archival Directory 112
assign parent groups 222
authentication requirements
  NDMP 235
B
backing up
   access requirements 265
   deleting from Web-based user interface 268
   directory location 264
   disabling schedules 267
   displaying information about 268
   process described 263
   restoring from backup file 269
   scheduling 267
   starting from Web-based user interface 266
   storage and sizing 265
348 | Operations Manager Administration Guide For Use with DataFabric Manager Server 3.8
backup
   requirements 233
   retention copies 233
   Snapshot copies 233
backup management 225, 227, 263, 341
   hot mode for databases 227, 263
   scripts, pre- and postbackup 227, 263
Backup Manager 227, 229, 230, 231, 232, 235, 236, 240
   adding primary storage systems 231
   adding secondary storage systems 230
   adding secondary volumes 231
   authentication 227
   discovering 227
   discovery process 227
   initial setup 229
   NDMP SNMP 227
   primary directory format 240
   secondary volume format 240
   select primary directories, qtrees 232
   SnapVault relationships 229
backup relationships
   bandwidth limitation 237
   baseline transfer 237
   configuring bandwidth limit 237
   create 233
   defined 341
backup schedules
   creating 233
   Snapshot copies 232
BackupManager reports 112
baseline transfer 341
billing cycle, storage chargeback reports 197
billing reports. See storage chargeback 193
Business Continuance license 226
Business Continuance Management license 241

C

CA-signed certificate 126
capacity reports 138
capacity thresholds 187
catalogs
   See reports 105
Certificate Signing Request 126
certificates
   definition of, CA-signed 126
   obtaining, CA-signed 127
   role of, in secure connections 125
   self-signed, generating 126
   self-signed, security precautions for 125
CIFS, SRM requirements 148
CLI 255
CLI, accessing 256
Clone List report 191
clone volumes
   Clone List report 191
   identifying 191
   parents, relation to 191
cluster console
   requirements 208
configuration files
   acquired from parent groups 223
   compare configurations 219
   manage 217
   prerequisites, for storage systems and vFiler units 217
   properties 223
   pull 219
   tasks to manage 218
   template configuration 219
   verify successful push 220
configuration group
   remove configuration file 220
   tasks to manage 220
configuration management
   storage system 217
configuration plug-ins
   Data ONTAP plug-in 219
configuration resource groups 81, 82, 222
   assign parent group 222
   creating 82
   see also configuration files
configuration settings 220
configuration using DataFabric Manager
   data protection 225
configuring multiple storage systems 223
connect storage system
   Run Telnet tool 207
considerations
   assigning parent group, inherit configuration files of parent group 222
console access
   to DataFabric Manager 255
Create a LUN wizard 172
CSR 126
currency format, storage chargeback reports 196
custom comment fields 97, 98, 203
   alarm notification by e-mail 97
   script format 97
   trap format 98

D

database backup
   about Backup process 263
   DataFabric Manager 263
   Restore process 269
database scripts, pre- and postbackup 227, 263
DataFabric Manager
   database backup 263
   deleting and undeleting objects 198
   dfm backup command (CLI) 269
   groups objects 77
   logging in to 50
   restoring backup file 269
DataFabric Manager Host Agent software
   administration access 144
   capabilities 165
   communication ports 141
   overview 140
   passwords 141
DataFabric Manager options
   global and local options 335
   objects 335
DataFabric Manager server
   access to host agents 322
   access to Open Systems SnapVault agents 322
   access to storage systems 321
   communication 321
   HTTP, HTTPS 321
   protocols and port numbers 321, 322
Day of the Month for Billing option 197
default backup schedule 233
default role 53
delete
   SRM paths 150
deleting 75, 173
   SAN components 173
deleting and undeleting 198
directories not backed up
   viewing 228
Disaster Recovery Management
   policy management 244
Disaster Recovery Manager
   connection management 245
   defined 241
   managing SnapMirror relationships 241
   storage system authentication 247
   tasks, general 241
discoveries
   new, qtrees 228
   Open Systems SnapVault monitor 228
discovery
   DataFabric Manager Host Agent software 165
   SAN host 165
Disk space hard limit option 160
Disk space soft limit option 160
Domain users
   Pushing 69
   Viewing settings 68
E
edit
   SRM paths 149
editing user quotas 213
Event Reports 112
events
   Aggregate Almost Full 179
   Aggregate Almost Overcommitted 179
   Aggregate Full 179
   Aggregate Overcommitted 179
   clearing configuration events 94
   defined 93
   list of, complete 291
   managing 94
   Qtree Full 189
   Qtree Nearly Full 189
   severity types of 93
   user, notification of 99
   viewing 94
   Volume Almost Full 182
Everyone, administrator account 51
example
   assigning parent groups 222
F
failover mode, of multiple paths 247
FC switch
   managing 335
FCP Target Details page 171
FCP targets 171
FCP topology 171
File Storage Resource Management, FSRM 137
FilerView 89, 212
   configuring storage systems 212
   links 89
Files hard limit option 160
Files soft limit option 160

G

giveback 209
global access control
   applying to administrator accounts 59
   description of 59
   precedence over group access control 53
global and local options
   HBA Port Too Busy Threshold 335
   Host Agent Administration Port 335
   Host Agent Administration Transport 335
   Host Agent login 335
   Host Agent Management Password 335
   Host Agent Monitoring Password 335
   SAN management 335
Global Delete 218
global groups 79
global privileges
   creating accounts 59
Global Read 218
Global SAN role 54
global thresholds
   setting 236
Global Write 218
Global-only options
   LUN monitoring interval 335
   managing SAN 335
   SAN device discovery 335
global, group information
   location 203
GlobalBackup role 54
GlobalDataProtection 54
GlobalDataSet 54
GlobalDelete role 54
GlobalDistribution role 54
GlobalEvent role 54
GlobalFullControl role 54
GlobalMirror role 54
GlobalPerfManag 54
GlobalQuota role 54
GlobalRead role 54
GlobalRestore role 54
GlobalSRM role 54
GlobalWrite role 54
graphs
   LUNs 169
Graphs
   SAN hosts 171
group access control
   applying 59
   description 59
   precedence over global access control 53
group access privileges
   accounts 59
groups
   changing thresholds 83
   definition of 77
   global 79
   guidelines for creating 82
   reports (views) about 84
   See also configuration resource groups 82
H
hierarchical groups 79
host agent
   administration access 144
   passwords 144
Host Agent Details page 171
host agents
   types of 140
host discovery 36
hosts.equiv 133
hosts.equiv file 133
hot backup mode 227, 263
HTTPS
   enabling 127
   requirements for 128
I
incremental transfer, defined 341
initiator group
   connecting to FilerView 334
   managing 334
   viewing LUNs mapped to 169
installing licenses 229
J
jobs 75
L
lag thresholds 235, 252
   SnapMirror 252
level-0 backup 341
license key
   Business Continuance Management 241
   SRM 139
licenses, permissions 229
limitations
   managed host options 131
local threshold setting 236
Local users 61, 62, 63, 64, 65, 66
   Adding 63, 66
   Deleting 65
   Editing passwords 66
   Pushing 66
   Pushing passwords 64
   Viewing settings 62
login and password
   SAN hosts 172
Login Protocol option, for managed hosts 130
LUN
   connecting to FilerView 334
   managing 334
LUN Details page 169, 170
LUNs
   creating 172
   deleting and undeleting 173
   destroying 170
   expanding 170
   graphs 169
   initiator groups mapped to 169
   reports in spreadsheet form 111
   status of 169
   stop monitoring 173
M

mailformat file 102
mailmap file 100
manage discovered relationships
   enable DataFabric Manager 235
Managed host options
   limitations 131
   where to find 129
managing administrator access
   domain users 67
Mirror Reports 112
modification of passwords issue 133
monitoring 77
monitoring intervals
   See options 89
monitoring options 89
   guidelines for changing 89
   location to change 89
monitoring passwords, host agents 141
monitoring process
   flow chart 87
multiple paths
   failover mode 247
   multiplexing mode 247
MultiStore 213

N

NDMP service
   enabling 229

O

Open Systems SnapVault hosts 228
Operations Manager
   deleting a backup from 268
   starting a backup from 266
options 89, 130, 160
   Administration Transport, managed hosts 130
   Disk space hard limit 160
   Disk space soft limit 160
   Files hard limit 160
   Files soft limit 160
   Login Protocol, for managed hosts 130
order of applying privileges 56
overcommitment strategy 178
Overview 60
P
page 161, 178
passwords for host agents 141
path walks
   recommendations 151
paths
   adding paths 148
   CIFS requirements 148
   CLI quoting conventions 149
   UNC requirements 148
   valid path formats 148
paths, managing
   automapping 150
   deleting 150
   editing 149
   quick reference 147
   viewing details, directory-level 149
Permanently stop monitoring 337
Policy Reports 112
potential length, parent chains 222
predefined roles 51
prerequisites 139, 159, 160, 226, 241
   editing user quotas 160
   managing user quotas 159
   managing, SnapMirror relationships 241
   SRM components 139
   system backup 226
primary directories 341
primary storage systems 341
   defined 341
push jobs 223

Q

quick reference for tasks 142
quiesced SnapMirror relationships 251
quota thresholds 162
quotas
   hard 158
   process 158
   soft 158
   why you use 157
Quotas tab renamed to SRM 139

R

RBAC access 58
RBAC resource 57
remote configuration feature 217
remote platform management
   RLM card management 260
   system requirements 260
Remote Platform Management
   configuring RLM card IP address 260
report categories
   capabilities 112
report generation
   scheduling 111
report outputs
   list 118
report schedule
   deleting 114
   disabling 115
   editing 114
   listing results 116
   retrieving 115
   running 115
Report Schedules report
   existing report schedules 112
reports
   aggregate 105
   aggregate array LUNs 105
   appliance 105
   array LUNs 105
   catalogs 105
   configuring 109
   customizing options 105
   datasets 105
   deleting custom 110
   disks 105
   events 105
   FC link 105
   FC switch 105
   FCP target 105
   file system 105
   group summary 105
   groups 84
   History performance reports 105
   Host domain users 105
   Host local users 105
   Host roles 105
   Host usergroups 105
   Host users 105
   LUN 105
   performance events 105
   putting data in spreadsheets 111
   Quotas 105
   Report outputs 105
   Report schedules 105
   resource pools 105
   SAN hosts 105
   schedules 105
   scripts 105
   spare array LUNs 105
   SRM 105
   storage systems 105
   types 104
   user quotas 105
   vFiler 105
   viewing, tasks performed 204
   volume 105
requirements
   SRM 139
   SRM license requirements 159
restoring
   DataFabric Manager database 269
restricting access to SRM data 152
roles
   create 56
   global 53
   GlobalWrite, GlobalDelete 217
   group 56
   modify 57
   order applying 56
Roles 72, 73
   configuring 73
run commands on storage systems
   Run a Command tool 206
running commands on storage systems 211

S

SAN components
   access control 339
   configuring monitoring intervals 337
   deleting and undeleting 337
   deleting process 338
   grouping 338
   reasons, deleting, and undeleting 337
   undeleting process 338
SAN hosts
   administration transport 172
   editing settings for 172
   host agents 172
   port for communication 172
SAN management
   FC switches 324
   Global and local options 335
   Global-only options 335
   GlobalSAN role 338
   license key 324
   list of tasks 333
   prerequisites 324
   SAN hosts, initiators 324
   tasks performed 326
   tasks performed, UI locations 326
SAN reports 327
saved reports 117, 118, 119
   list of failed report outputs 118
   list of successful report outputs 118
   viewing the output of a particular report 119
Saved reports
   Viewing the output details of a particular report output 119
   Viewing the output of a particular report output 119
scenario
   example, identify oldest files 152
schedule report
   defined 116
   methods 113
schedules
   adding 116
   deleting 117
   editing 117
   list 116
scheduling report 113
scripts, pre- and postbackup 227, 263
secondary storage systems
   defined 341
secondary volumes 341
   defined 341
secure connections
   components of 128
   enabling HTTPS for 127
   global options versus appliance options 131
   protocols used for 128
   requirements for, on workstation 128
secure shell 125
Secure Sockets Layer 128
See Volumes 182
server console access 255
set up, steps for FSRM 139
Setting up the DataFabric Manager database backup 263
severity types for events 93
SnapMirror
   Business Continuance Management license key 241
   lag thresholds 252, 253
   license requirement for managing 241
   relationships, managing 241
   synchronous, modes 247
SnapMirror Lag Error threshold 253
SnapMirror Lag Warning threshold 253
SnapMirror relationships
   resynchronize 252
Snapshot copies
   local data protection 234
   schedules, interaction 234
Snapshot copy monitoring
   requirements 192
Snapshot-based backups
   limitations 265
SnapVault 229, 341
   defined 341
SnapVault backup relationships
   CLI commands 237
   configure 237
SnapVault relationship
   NDMP 228
SnapVault relationships
   configuration of DataFabric Manager 230
SNMP 90, 91, 92, 223
   traps 90, 91, 92
spreadsheets
   LUN reports, putting into 111
SRM
   comparison to capacity reports 138
   license keys 139
   monitoring, requirements for 138
   password, setting 145
   passwords, setting 145
SRM hosts
   administration access configuration settings 144
   described 140
   enabling administration access 145
   passwords, types of 141
   quick reference of management tasks 142
SRM Reports 112
SSH 128
SSL 128
stop monitoring a LUN 173
storage-related report 175
Subgroup reports 85
Summary reports 84
Symbols
   /etc/quotas file 160, 161
   quotas file 158
synchronous SnapMirror
   failover mode 247
   multiplexing mode 247
T
takeover, tool 208
targets, See FCP Targets 171
templates
   customizing reports 104
Temporarily stop monitoring 337
thresholds
   Aggregate Full 179
   Aggregate Full interval 179
   Aggregate Nearly Full 179
   Aggregate Nearly Overcommitted 179
   Aggregate Overcommitted 179
   changing group 83
   descriptions of 162
   editing 176
   global and group levels 176
   overriding global settings 178
   precedence of 163
   SnapMirror 252
   user quota 162
   ways and locations to apply 162
traditional volumes 182
U
undeleting (restoring) objects 198, 199
   SAN components 173
user accounts 51
user alert
   e-mail message 99
   mailformat file 102
   mailmap file 100
user alerts 99, 101, 102
   alarms and alerts, differences 99
   contents of e-mail message 102
   Default Email Domain for Quota Alerts option 101
   defined 99
user capabilities 56
user quota
   about editing 160
   mailformat file 102
   mailmap file 101
User Quota Monitoring Interval option 160
user quotas
   alarms and alerts, differences 99
   alert, See user alert 99
Usergroups 69, 70, 71, 72
   Adding 70
   Deleting 71
   Editing settings 71
   Pushing 72
   Viewing settings 70
using DataFabric Manager 217
using SnapVault 341

V

vFiler administrator
   configuring 50
vFiler unit 213, 214
   management tasks 214
vFiler units
   access control, administrators 50
   adding to Resource groups 82
vFiler Units
   editing quotas 159
view storage system details
   Refresh Monitoring Samples tool 206
viewing data 151
volume capacity thresholds
   modify 187
Volume Full Threshold 182
Volume Nearly Full threshold 182
Volume Nearly No First Snapshot Threshold 187
Volume Snap Reserve Full Threshold 187
Volume Snapshot Count Threshold 187
Volume Snapshot No First Count Threshold 187
volume thresholds
   Snapshot copy, events 187
Volume Too Old Snapshot Threshold 187
volumes
   chargeback (billing). See storage chargeback 193
W
workstation
   command line access (CLI) 255
   Web-based access (GUI) 255