
OnCommand Unified Manager

Installation and Setup Guide


For Use with Core Package 5.2.1 - October 2014

Overview of manageability software components


OnCommand Unified Manager Core Package provides backup and restore capabilities, monitoring,
and provisioning for a storage environment. OnCommand Unified Manager Core Package brings
together multiple products, including Operations Manager, protection capabilities, and provisioning
capabilities, into a single framework that provides integrated, policy-based data and storage
management.
Core Package 5.1 and later supports Data ONTAP operating in 7-Mode environments or clustered
environments. However, 5.1 and later does not support management of both modes from the same
instance. During the installation process, you are prompted to select either a 7-Mode environment or
a clustered environment.
Note: NetApp has announced the end of availability of the OnCommand Unified Manager Host
Package.

Architecture
OnCommand Unified Manager Core Package includes interaction among front-end user interfaces
(such as the OnCommand console, the NetApp Management Console, and the Operations Manager
console) and back-end servers or services (such as the DataFabric Manager server and storage
systems).
You can use the Operations Manager console and the NetApp Management Console to manage your
physical environment.

Contents of the OnCommand Unified Manager Core Package
Understanding what components compose the OnCommand Unified Manager Core Package and
what these components enable you to do helps you determine which components you want to enable
during the installation and setup process.

Components installed with the OnCommand Unified Manager Core Package
Understanding the different components of the OnCommand Unified Manager Core Package helps
you determine which components you want to enable during the installation and setup process.
The following components are installed on your system:
DataFabric Manager server
Enabled by default.
DataFabric Manager server services
Enabled by default.
NetApp Management Console with protection, provisioning, and Performance Advisor capabilities
Bundled with the Core Package but must be installed separately.

Functionality available with OnCommand Unified Manager Core Package
You can manage physical storage objects on primary and secondary storage after installing
OnCommand Unified Manager Core Package software by using the OnCommand console, the
Operations Manager console, NetApp Management Console, and a separate set of PowerShell
cmdlets, all of which are installed with OnCommand Unified Manager Core Package.
The Core Package includes the graphical user interface (GUI) consoles from which you can access
storage management functionality that was previously accessible through separate NetApp software
products. It delivers access to this functionality through three GUI consoles and a separate set of
PowerShell cmdlets:


OnCommand console
The OnCommand console enables you to perform the following tasks:
View a set of dashboard panels that provide high-level status of physical objects and support
drill-down capabilities.
Launch other capabilities in the Core Package.
Export, share, schedule, sort, filter, hide, and print data in the reports for physical objects.
Operations Manager console
The Operations Manager console enables you to perform the following tasks:
Manage users and roles.
Monitor clusters, nodes, and vFiler units.
Monitor physical objects for performance issues and failures.
Manage storage systems and vFiler units, and virtual servers.
Schedule and manage scripts.
Track storage usage and available capacity.
NetApp Management Console
The NetApp Management Console enables you to perform the following tasks:
Provision physical resources.
Back up and restore physical objects.
Manage space on secondary storage.
Provide disaster recovery for physical objects (automated failover and manual failback).
Monitor performance.
View dashboards for physical objects.
Create and edit storage services.

System requirements
Before you install the software, you must ensure that your host system conforms to all supported
platform requirements. Servers running OnCommand Unified Manager Core Package must meet
specific software, hardware, and operating system requirements.

Browser support, requirements, and limitations


To ensure that you can install and launch the software successfully, you must follow the
requirements and limitations for the Microsoft Internet Explorer and Mozilla Firefox browsers
supported by the management software.
Supported browsers
The management software supports the following browsers, based on the operating system and the
GUI console used:

Browser requirements and limitations


Mozilla Firefox
Mozilla Firefox versions 17.0, 18.0, and 19.0 are not supported. You should disable the
automatic upgrade feature in Firefox to avoid installing an unsupported version.
Microsoft Internet Explorer, version 8
To avoid browser display issues, you must disable the Compatibility View feature
before launching the OnCommand console.
For details, see the Microsoft support site.
To ensure that the OnCommand console can be launched, you must ensure that active
scripting and binary and script behaviors are enabled.
If enhanced security is enabled in Internet Explorer 8, you might have to add
http://DataFabric_Manager_server_IP_address:8080 to the browser's list of trusted
sites so that you can access the server.
You might also have to add port 8443 to the list of trusted sites if you are using SSL.
You can add ports and URLs on the Security tab under Tools > Internet Options.
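As a small illustration, the two trusted-site entries described above can be derived from the server address. This helper only prints the addresses to add; the address passed in is a placeholder, not a real DataFabric Manager server.

```shell
#!/bin/sh
# Sketch: print the addresses to add to Internet Explorer's trusted sites,
# per the notes above. The server address argument is a placeholder.

trusted_sites() {
    printf 'http://%s:8080\n' "$1"     # needed when enhanced security is on
    printf 'https://%s:8443\n' "$1"    # only needed when SSL is in use
}

trusted_sites 192.0.2.10
```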
Microsoft Internet Explorer, version 9
If enhanced security is enabled in Internet Explorer 9, the OnCommand console might not load,
so you must disable it.

License requirements
Each of the components has specific licensing requirements.
Core Package
OnCommand Unified Manager Core Package does not require a license.
DataFabric Manager server
The DataFabric Manager server requires one core license key, which is free and is used
only to establish a unique serial number for the server.
Data ONTAP requirements
Certain functionality requires other types of licenses for Data ONTAP.
NetApp has announced the end of availability for SAN licenses. DataFabric Manager
server customers should check with their NetApp sales representative regarding other
NetApp SAN management solutions.
OnCommand Unified Manager Core Package 5.2.1 does not support the Business
Continuance Option (BCO) license because NetApp has announced its end of support.
If you are using the BCO license to monitor and manage your data protection
environment, use the data protection capability of the NetApp Management Console in
Core Package 5.2.1. Otherwise, use Core Package 5.1 or earlier.

License requirements for data protection


If you want to use the Business Continuance Option (BCO) license for data protection, you should
use Core Package 5.1 or earlier. If you want to upgrade to Core Package 5.2R1, you must use the
NetApp Management Console to monitor and manage your data protection environment.
In OnCommand Unified Manager Core Package 5.2R1, you cannot use the BCO license to monitor
and manage your data protection environment. OnCommand Unified Manager Core Package 5.2R1
does not support the BCO license because NetApp has announced end of support for this feature.
Important: Before upgrading to OnCommand Unified Manager Core Package 5.2R1, all BCO

functionality must be transferred to protection capability or removed.

Network storage requirements for database files


To enable optimal database access and performance results, DataFabric Manager server requires that
the DataFabric Manager server database files be installed on a server using either SAN or iSCSI to
connect to the network.
Sybase and DataFabric Manager server do not support accessing the DataFabric Manager server
Sybase database files on NAS.
You should not delete the SQL files that are installed in the /tmp directory. If the SQL files are
deleted from the /tmp directory, the DataFabric Manager server cannot start.

Software required prior to installing OnCommand Unified Manager Core Package
Before installing OnCommand Unified Manager Core Package, you must install Adobe Flash Player
8.0 or later on the machine from which you launch the OnCommand console.
You can download the software from the Adobe downloads site.
Before you download Flash Player, you should ensure that file downloads are enabled in your web
browser and, if you are using Microsoft Internet Explorer, verify that the security settings for
ActiveX controls are enabled.
You must install Adobe Flash Player from each browser type that you intend to use with the
OnCommand console, even if the browsers are on the same system. For example, if you have both
Mozilla Firefox and Microsoft Internet Explorer on the same system and you think you might use
both browsers to access the OnCommand console, you must install Adobe Flash Player using the
Firefox browser, and then install Adobe Flash Player using the Internet Explorer browser.

Software required for Open Systems SnapVault


You must separately download and install Open Systems SnapVault software if you intend to back
up and restore data residing on non-NetApp physical storage systems; otherwise, you cannot back up
and restore data on those storage environments.
OnCommand Unified Manager Core Package supports the use of Open Systems SnapVault to back
up and restore virtual machines in a non-NetApp storage environment, but it is not required.
OnCommand Unified Manager Core Package supports Open Systems SnapVault 2.6.1, 3.0, and
3.0.1.

Software required for NetApp Host Agent (7-Mode environments only)


You must separately download and install NetApp Host Agent software if you want to monitor SAN
hosts.
The Host Agent software collects information such as operating system name and version, HBA port
details, and file-system metadata, and then sends that information to the DataFabric Manager server.
The NetApp Host Agent software must be installed on any Windows or Linux SAN hosts that you
want to monitor with NetApp OnCommand management software.
NetApp Host Agent is also required if you want to remotely start, stop, or restart Open Systems
SnapVault software by using NetApp Management Console. In this case, the Host Agent must be
installed on the same machine as Open Systems SnapVault.
The minimum version supported by OnCommand Unified Manager Core Package is NetApp Host
Agent version 2.7.

Hardware requirements for Windows Server with 1 to 25 storage systems
You must meet certain software and hardware requirements when you use systems running Windows
64-bit OS on x64 hardware.

Operating system requirements


The software requirements are as follows:
Microsoft Windows Server 2008, Enterprise or Standard edition
Microsoft Windows Server 2008 R2, Enterprise or Standard edition
Microsoft Windows Server 2012, Datacenter or Standard edition
Microsoft Windows Server 2008 or 2008 R2 running on VMware ESX 4.0, ESX 4.1, ESXi 4.0,
ESXi 4.1, ESXi 5.0, ESXi 5.1, or ESXi 5.5
Microsoft Windows Server 2012 Standard edition or 2012 R2 running on VMware ESX 4.0, ESX
4.1, ESXi 4.0, ESXi 4.1, ESXi 5.0, ESXi 5.1, or ESXi 5.5

Installation requirements specific to 7-Mode environments
Some installation and setup features are specific to 7-Mode environments and are noted as 7-Mode
only in the documentation. All other installation information, requirements, and instructions apply to
both 7-Mode and clustered environments.
When you begin installing in a 7-Mode environment, you must select 7-Mode when prompted.
If you change your environment from clustered to 7-Mode, you must delete the clustered objects
from the DataFabric Manager server.

Installation requirements specific to clustered environments
OnCommand Unified Manager Core Package supports both clustered and 7-Mode environments;
however, there are some minor distinctions in the installation process in the clustered environment
of which you should be aware.
When you begin installing in a clustered environment, you must select clustered environment when
prompted. The software can monitor up to 250 controllers in the clustered environment.
If you are installing or upgrading in a clustered environment, you can install only the Standard
edition of the software.


When you upgrade your 7-Mode environment to a clustered environment, you must delete the
7-Mode objects from the DataFabric Manager server.
Upgrading from the Express edition directly to a clustered environment is not allowed. If you
currently have the Express edition installed, you must first upgrade to the Standard edition of
version 5.0.
Host services and NetApp Host Agent are not supported in a clustered environment. If you are
managing host services when upgrading to a clustered environment, you are notified that the host
services will be removed. Information related to host services and NetApp Host Agent features is
noted as 7-Mode only in the documentation. All other installation information, requirements, and
instructions apply to both 7-Mode and clustered environments.

Downloading OnCommand Unified Manager Core Package
Before installing OnCommand Unified Manager Core Package, you must download the software
package from the NetApp Support Site.
Before you begin

You must have an account on the NetApp Support Site.


About this task

You can choose from the executable files based on your operating system.
You must not change the default location of the local TempFolder directory, or the installation fails.
You can install OnCommand Unified Manager Core Package 5.2.1 on 64-bit systems only.
Note: You must install Core Package 5.1 or earlier to use a 32-bit Windows system or Linux

system.

Installing OnCommand Unified Manager Core Package on Windows
After you have met the guidelines, requirements, and restrictions for installing OnCommand Unified
Manager Core Package, you can follow the prompts in the installation wizard to install the software.
Before you begin

You must have administrator privileges for the Windows computer on which you are installing
the Core Package.
You must have downloaded the setup file.
You must have the following items:
The DataFabric Manager server license key
Credentials for network access
The IP address of the server on which you are installing the software
The path of the directory on which you want to install, if different from the default location
In addition, your antivirus software must include the following changes:
Either the antivirus software is disabled or an exclusion is added for the DataFabric Manager
server. If this condition is not met, the installation fails.
The Sybase ASA files are excluded to avoid both DataFabric Manager server performance
issues and the possibility of database corruption.
About this task

If you have to manage both 7-Mode and clustered Data ONTAP server environments, you must
install two separate Core Packages on two Windows servers.
Steps

1. Start the Core Package installation wizard by running the appropriate setup file.
2. Choose the environment: 7-Mode or Cluster-Mode.
Attention: After the Core Package installation is complete, you cannot change the
environment.
3. Select the installation location, if different from the default.
Note: Do not change the default location of the local TempFolder directory, or the installation
fails. The installer automatically extracts the installation files to the %TEMP% location.

4. Review the summary screen and consider whether you want to make changes before completing
the installation, and then click Install.
5. When the Installation Complete screen is displayed, click Next to continue.
6. If you want to start the OnCommand console, clear your browser cache, and then select Launch
OnCommand console.
7. Click Finish.
After you finish

During the installation process, the installer creates some temporary folders that are automatically
deleted the next time you reboot the system. You can delete these folders without adversely affecting
the installation of the Core Package.

Installing OnCommand Unified Manager Core Package with a script
You can quickly deploy OnCommand Unified Manager Core Package using a scripted, unattended
installation. The installation script contains the installation settings for the Core Package.
Before you begin

You must have administrator privileges for the Windows computer on which you are installing
the Core Package.
The script must contain the following required information:
The DataFabric Manager server license key
Credentials for network access
IP address of the server on which you are installing
Directory path where you want to install if different from the default location
About this task

The installation script can reside in one of the following locations:


Default installation script
FTP
HTTP/HTTPS
NFS
Local disk
USB flash drive
Steps

1. Create a script using the supported commands.


2. Edit the installation script as required to change the options that are unique for each installation.
3. Save the script to the location from which you want to run it.
4. Run the scripted installation or set a schedule for when the script is to run.
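The guide does not show the script format itself, so here is a hedged illustration in shell that assembles the required settings listed above into a single unattended-install command line. The setup file name and the /q, LICENSEKEY=, and INSTALLDIR= switches are hypothetical placeholders; consult the Core Package installer documentation for the real silent-install options.

```shell
#!/bin/sh
# Sketch of a wrapper for a scripted, unattended Core Package install.
# SETUP_FILE, /q, LICENSEKEY=, and INSTALLDIR= are assumptions for
# illustration only; substitute the actual installer file and switches.

SETUP_FILE="occore-setup-5-2-1.exe"       # hypothetical setup file name
LICENSE_KEY="ABC-DEF-GHI"                 # placeholder DataFabric Manager server license key
INSTALL_DIR="C:\\Program Files\\NetApp\\DataFabricManager"

# Assemble the command line; printing instead of executing keeps the
# sketch safe to dry-run before scheduling it.
build_install_cmd() {
    printf '%s /q LICENSEKEY=%s INSTALLDIR="%s"\n' \
        "$SETUP_FILE" "$LICENSE_KEY" "$INSTALL_DIR"
}

build_install_cmd
```

Printing the command line first makes the dry run reviewable before you schedule the script to run unattended.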

Setting up Web security after restoring a database on a new OnCommand Unified Manager Core Package installation
You can restore a database backup from another DataFabric Manager server instance to the new
DataFabric Manager server installation, for instance, when you want to upgrade your hardware;
however, database backups do not include the key and certificate file, so these must be generated or
imported, and HTTPS must be enabled if it was set on the old system.
About this task

Perform these steps from a console session on the new DataFabric Manager server after you install
the OnCommand Unified Manager Core Package.
Steps

1. Perform one of the following actions:


Enter the dfm ssl service setup command to create new client certificates.
Enter dfm ssl server import to import an existing certificate.
2. If the HTTPS service was enabled on the system from which the database backup was made, you
must also enable the HTTPS service on the new system by entering
dfm option set httpsEnabled=Yes.
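The decision above can be captured in a small shell helper that prints the dfm commands to run after a restore, depending on whether HTTPS was enabled on the old system. The command names come from the steps above; the helper only prints them and assumes you are generating a new certificate rather than importing one.

```shell
#!/bin/sh
# Sketch: print the post-restore SSL setup commands from the steps above.
# Swap in "dfm ssl server import <certificate>" for the first line if you
# are importing an existing certificate instead of creating a new one.

plan_ssl_restore() {
    https_was_enabled="$1"    # "yes" if HTTPS was enabled on the old system

    echo "dfm ssl service setup"
    if [ "$https_was_enabled" = "yes" ]; then
        echo "dfm option set httpsEnabled=Yes"
    fi
}

plan_ssl_restore yes
```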

Enabling FIPS on DataFabric Manager server


You can enable the Federal Information Processing Standard (FIPS) 140-2 mode on the DataFabric
Manager server by using the DataFabric Manager global options. By default, FIPS 140-2 is disabled
on OnCommand Unified Manager Core Package.
Before you begin

Your storage systems must be running clustered Data ONTAP.


About this task

You must perform this task from a console session on the DataFabric Manager server after you
install OnCommand Unified Manager Core Package.
Steps

1. Enable FIPS by running the following command from a console session:


dfm option set sslFipsEnabled=Yes

2. When prompted, restart all the DataFabric Manager server services by running the
following commands:
dfm service stop
dfm service start

3. Verify that the DataFabric Manager server is operating in FIPS mode by choosing one of the
following options:
Check log files such as dfmmonitor.log, dfmserver.log, dfmeventd, and error.log.
Use the DataFabric Manager global option sslFipsEnabled.
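Checking the log files for FIPS can be scripted. The grep pattern below is an assumption (the exact wording of the log message is not given here), so adjust it to match what your dfmserver.log actually prints; the sample log line merely stands in for real output.

```shell
#!/bin/sh
# Sketch: scan a DataFabric Manager log file for a FIPS-mode indication.
# The "fips" pattern is an assumed marker; verify against your real logs.

fips_enabled_in_log() {
    grep -qi "fips" "$1"
}

# Demonstrate against a temporary sample file standing in for dfmserver.log.
LOG=$(mktemp)
echo "2014-10-01 12:00:00 INFO SSL FIPS mode enabled" > "$LOG"

if fips_enabled_in_log "$LOG"; then
    echo "FIPS mode: on"
fi
rm -f "$LOG"
```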

Installing NetApp Management Console


You can download and install NetApp Management Console through the OnCommand console.
NetApp Management Console is required to perform many of your physical storage tasks. You must
install NetApp Management Console 3.3, which contains fixes for bugs found in the 3.2 version.
Before you begin

You must be authorized to perform all the steps of this task; your RBAC administrator can confirm
your authorization in advance.
About this task

During this task, the OnCommand console launches the Operations Manager console. Depending on
your browser configuration, you can return to the OnCommand console by using the Alt-Tab key
combination or clicking the OnCommand console browser tab. After the completion of this task, you
can leave the Operations Manager console open, or you can close it to conserve bandwidth.

Steps

1. Log in to the OnCommand console if necessary.


2. Click the File menu, and then click Download Management Console.
A separate browser tab or window opens to the Management Console Software page in the
Operations Manager console.
3. Click the download link for the Linux or Windows installation.
4. In the download dialog box, click Save File.
The executable file is downloaded to your local system, from the system on which the
OnCommand Unified Manager Core Package was installed.
5. From the download directory, run the nmconsolesetupxxx.xxx executable file.
The NetApp Management Console installation wizard opens.
6. Follow the prompts to install NetApp Management Console.
Result

After installation, you can access NetApp Management Console from the following locations:
On Windows systems, the default installation path is C:\Program Files\NetApp\Management Console.
You can launch the console from the NetApp directory on the Start menu.
On Linux systems, the default installation path is /usr/lib/NetApp/management_console/.
You can launch the console from /usr/bin.

Determining whether a storage system belongs to a workgroup or a domain
The storage system that you use for the Core Package installation must belong to a domain rather
than a workgroup. Prior to installing the Core Package, you must determine if the system belongs to a
workgroup or a domain.
Step

1. Right-click My Computer, click Properties, and then click the Computer Name tab.
For details, see the documentation for your Windows operating system.
The Computer Name tab displays either a Workgroup label or a Domain label.

Enabling secure communication between the DataFabric Manager server and Data ONTAP


You should configure the storage system running Data ONTAP and the DataFabric Manager server
to enable secure communication.
Before you begin

You must have enabled secure communication on the storage system running Data ONTAP by
using OnCommand System Manager or the Data ONTAP command-line interface.
You must have enabled SNMPv3 on the storage system running Data ONTAP and on the
DataFabric Manager server.
For more information about enabling SNMP, see the Data ONTAP Network Management Guide
and the OnCommand Unified Manager Operations Manager Administration Guide.
Steps

1. Initialize the DataFabric Manager server private key and generate a self-signed certificate by
running the following command and following the prompts:
dfm ssl server setup -f

2. Restart the HTTP service by running the following commands:


dfm service stop http
dfm service start http

3. Enable HTTPS by running the following command:


dfm option set httpsEnabled=Yes

4. Request a signed certificate from a well-known CA by running the following command:
dfm ssl server req -f -o server.csr
The server.csr file should be signed by a CA.

5. Import the signed certificate to the DataFabric Manager server by running the following
command:
dfm ssl server import server.crt

6. Restart the HTTP service by running the following commands:


dfm service stop http
dfm service start http

7. Enter the certificate information for a CA setup by running the following command and following
the prompt:
dfm ssl self setup -f

The CA is ready to sign requests.


8. If the DataFabric Manager server is running a private CA, perform the following steps:
a. Run the following command to allow certificate signing requests:
dfm ssl self sign -f -o server.crt server.csr

b. Import the signed certificate to the DataFabric Manager server by running the following
command:
dfm ssl server import server.crt

9. Change the communication options by running the following commands:


dfm service stop http
dfm option set httpsEnabled=yes
dfm option set httpEnabled=no
dfm option set httpsPort=8443
dfm option set hostLoginProtocol=ssh
dfm option set hostAdminTransport=https
dfm option set perfAdvisorTransport=httpsOk
dfm service start http

10. Verify that secure communication is enabled with the host by running the following command:
dfm host diag hostID_or_hostIP

You should be able to connect to the OnCommand console by using the following URL:
https://DataFabric_Manager_server_IP_or_hostname:httpsPort/
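A quick way to confirm that the HTTPS settings took effect is to assemble the console URL from the options set in step 9 and probe it. The helper below only builds the URL; the host name used is a placeholder.

```shell
#!/bin/sh
# Sketch: assemble the OnCommand console URL from the DataFabric Manager
# server address and the httpsPort option (8443, per step 9 above).

console_url() {
    printf 'https://%s:%s/\n' "$1" "$2"
}

console_url dfm-server.example.com 8443
# To probe it from a client (placeholder host, not a real server):
#   curl -k "$(console_url dfm-server.example.com 8443)"
```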

Upgrading to OnCommand Unified Manager Core Package
You can upgrade to OnCommand Unified Manager Core Package to use the monitoring tools and
dashboards of the OnCommand console.
If you are upgrading on a Windows 64-bit server from DataFabric Manager server 4.x to
OnCommand Unified Manager Core Package 5.0 or later, you must back up the DataFabric Manager
server 4.x database before the upgrade. When you restore the database after the upgrade, the
installation directory C:\Program Files\NetApp\DataFabric Manager\DFM\ is chosen based
on the 64-bit Core Package.
Note: If you do not back up before upgrading, the installer chooses the Program Files (x86)
directory, which is the location for 32-bit applications.

Changes that affect upgrade to OnCommand Unified Manager Core Package
While planning your upgrade to OnCommand Unified Manager Core Package 5.2.1, you must
consider the changes in the product that affect the upgrade, such as changes to the database. Also,
OnCommand Unified Manager Core Package 5.1 or later requires two separate DataFabric Manager
servers to monitor and manage 7-Mode and cluster objects.

Upgrading OnCommand Unified Manager Core Package on Windows
You can upgrade to the latest version by installing OnCommand Unified Manager Core Package.
Before you begin

You must have administrator privileges for the Windows computer on which you are installing
the Core Package.
If you are upgrading from the Express edition of OnCommand Unified Manager Core Package to
clustered Data ONTAP, you must have first upgraded to OnCommand Unified Manager Core
Package 5.0 and installed the Standard edition.
You must have downloaded the setup file.
You must have the following items:
Credentials for network access
The IP address of the server on which you are installing the software
Directory path for the installation

If you are upgrading to OnCommand Unified Manager Core Package while migrating to a
new system, you should use the original DataFabric Manager server installation path. You
must not change this installation path because it might disrupt some functionality.
For example, if the original installation path is C:\Program Files (x86)\NetApp
\DataFabric Manager, you should use this path. Do not change this path to
D:\Application\NetApp\DataFabric Manager.
When you upgrade a DataFabric Manager server that was earlier managing both 7-Mode and
clustered Data ONTAP objects to a 7-Mode environment, a message is displayed after you select the
mode: Purging objects of undesired mode from the database. During a restore
operation, you are prompted to delete the objects.
Steps

1. Start the Core Package installation wizard by running the appropriate setup file.
2. Review and accept the license and AutoSupport agreements.
You cannot install the Core Package unless you accept the license agreement.
3. Select the Data ONTAP environment: 7-Mode or Cluster-Mode.
Note: If you are upgrading from the Express edition, this prompt is not displayed.

4. When prompted, confirm if you want to back up the database.


Backing up the database can take several minutes to many hours, depending on the size of your
database.
5. Review the summary screen and consider whether you want to make changes before completing
the installation, and then click Install.
6. When the Installation Complete screen is displayed, click Next.
7. Click Finish to close the wizard.
After you finish

You should clear the browser cache before you first start the OnCommand console and when you
upgrade to a new version of the software.

Upgrading from Core Package 32-bit to Core Package 64-bit


OnCommand Unified Manager Core Package 5.2 does not support installation on a 32-bit server.
Before you upgrade to Core Package 5.2 on your Windows or Linux server, you must back up and
restore your 32-bit DataFabric Manager server database.
Steps

1. Create a backup of the 32-bit DataFabric Manager server database by running the following
command:
dfm backup create backup_filename

2. Install the OnCommand Unified Manager Core Package 5.2 on your 64-bit server.
3. Copy the 32-bit database backup to the 64-bit server.

4. Restore the 32-bit database backup on the 64-bit server by running the following command:
dfm backup restore backup_filename

5. Run the following command to view the old paths for the setup options:
dfm option list setup_option_name

You should check the path for the following setup options: databaseBackupDir,
dataExportDir, dfmencKeysDir, perfArchiveDir, perfExportDir, pluginsDir,
reportDesignPath, reportsArchiveDir, and scriptDir.
Example

Run the following command:


dfm option list dataExportDir

Result: The command displays the path in which the dataExportDir data is located:
installation_directory/data/ in Windows and /opt/NTAPdfm/data/ in Linux.
6. Copy the data for each option to the new 64-bit install directory.
Example

Copy the data from the installation_directory/data/ location to the new installation
directory in Windows.
Alternatively, copy the data from the /opt/NTAPdfm/data/ location to the new installation
directory in Linux.
7. Restart the DataFabric Manager server services by running the following commands:
dfm service stop
dfm service start
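Step 6 above can be sketched as a small shell loop that copies each setup option's data directory from the old installation root to the new one. The directory names passed in are examples only; use the actual paths reported by dfm option list for your own setup options.

```shell
#!/bin/sh
# Sketch of step 6: copy setup-option data directories from the old 32-bit
# installation root to the new 64-bit installation root. The directory
# names are illustrative; take the real ones from "dfm option list".

copy_option_data() {
    old_root="$1"; new_root="$2"; shift 2
    for d in "$@"; do
        if [ -d "$old_root/$d" ]; then
            cp -R "$old_root/$d" "$new_root/"
        fi
    done
}

# Example with throwaway directories standing in for the install roots:
OLD=$(mktemp -d); NEW=$(mktemp -d)
mkdir -p "$OLD/data" && echo "exported report" > "$OLD/data/report.txt"
copy_option_data "$OLD" "$NEW" data script plugins
ls "$NEW/data"    # report.txt
rm -rf "$OLD" "$NEW"
```

Missing directories are skipped silently, so you can pass the full list of option directories even if some were never populated on the old server.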

Upgrading Core Package to manage 7-Mode and clustered Data ONTAP environments
You can install two separate instances of OnCommand Unified Manager Core Package and migrate
data from earlier versions of DataFabric Manager server to enable management of both 7-Mode and
clustered Data ONTAP environments. The software does not support management of both 7-Mode
and clustered environments from the same server.
Before you begin

You must have two DataFabric Manager server license keys, one for each server instance.
You must back up your existing database before you upgrade.
Steps

1. Download and install OnCommand Unified Manager Core Package.


2. When you are prompted, choose 7-Mode as your environment, and complete the installation.
3. On a second server, perform a new installation of OnCommand Unified Manager Core Package,
choosing the clustered environment.
4. On the second server, restore the database that you backed up before the upgrade.
A warning message is displayed that all 7-Mode data will be deleted.
5. Disable the license key on the second server by entering the following command at the CLI:
dfm license disable all

After the restore, the two DataFabric Manager server installations share the same license key;

therefore, you must disable one license to avoid conflicts.


6. Add a new license key to the second server by entering the following command:
dfm license install NewLicenseKey

7. Verify that the new license key is installed by entering the following command:
dfmlicenselist

The second server installation is complete.
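The license handoff on the second server (steps 5 through 7) can be summarized as the following CLI session; the key value is a placeholder:

```
dfm license disable all              # release the key restored with the 7-Mode database
dfm license install <NewLicenseKey>  # install the key for the clustered-environment instance
dfm license list                     # confirm that the new key is active
```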
Upgrade path from DataFabric Manager 3.5 to OnCommand Unified Manager Core
Package 5.x
You can upgrade to OnCommand Unified Manager Core Package 5.x from DataFabric Manager
server 4.0 or later. If you want to upgrade from DataFabric Manager server 3.5 to OnCommand
Unified Manager Core Package 5.x, you must first upgrade to DataFabric Manager server 4.0.
Upgrading from DataFabric Manager 3.7.1
If you have created custom reports with GUILink and SecureGUILink as fields in DataFabric
Manager 3.7.1 or earlier, upgrading to DataFabric Manager 3.8 or later causes the dfm report
view command to fail. You must open the custom report in the Operations Manager console and save
the report to view it.
Uninstalling OnCommand Unified Manager Core Package
If you are no longer using the package, or if you need additional space, you can uninstall and
remove it. When you uninstall the Core Package from your system, the installer automatically
removes all components.
Uninstalling OnCommand Unified Manager Core Package from Windows
You can uninstall OnCommand Unified Manager Core Package, for instance, when a Core Package
installation is unsuccessful, or when you want to reconfigure your system with a fresh installation.
You uninstall the Core Package using the Control Panel application for your operating system.
Before you begin
You must have ensured that there are no other dependencies on the Core Package, because the wizard
uninstalls all associated components.
Steps
1. On the Windows server where you installed the Core Package, navigate to Control Panel >
Programs and Features.
For details, see the documentation for your Windows operating system.
2. Scroll through the list of installed programs to find the program that you want to remove.
3. Click the program that you want to uninstall, and then click Uninstall/Change or Change/
Remove, depending on your operating system.
The NetApp install wizard opens.
4. Select Remove, and then click Next.
5. Click Uninstall.
6. If requested, reboot the system.
A system reboot is required when, during the uninstallation process, the \DataFabricManager
\DFM program directory is not moved to a new directory. The new directory is created with a
name that indicates the date and time at which you performed the uninstallation: for example,
\DataFabricManager\DFM20110622140520\, which specifies that OnCommand Unified
Manager Core Package was uninstalled on June 22, 2011, at 2:05:20 PM. When this
uninstallation directory is not created, you must reboot to complete the uninstallation process
before you reinstall OnCommand Unified Manager Core Package.
Troubleshooting installation and setup
If you encounter unexpected behavior during installation or when using OnCommand Unified
Manager, you can use specific troubleshooting procedures to identify and resolve the cause of such
issues.
Address already in use
Description
This message occurs when the Windows computer has run out of outbound ports. A
Transmission Control Protocol (TCP) connection has closed, causing the socket pair
associated with the connection to go into a TIME-WAIT state. This prevents other
connections from using the TCP protocol, source Internet Protocol (IP) address,
destination IP address, source port, and destination port for an unknown period of time.
Corrective action
Reduce the length of the TCP TIME-WAIT delay.
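On Windows, the TIME-WAIT delay is controlled by the TcpTimedWaitDelay registry value under the TCP/IP parameters key; one way to reduce it is sketched below (the 30-second value is an example, and a reboot is required for the change to take effect):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
    /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```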
There is a problem with this Windows Installer package
Description
This message occurs when you uninstall an application by using the Add or Remove
Programs tool in Windows server. The Windows Installer service manages the installation
and removal of programs. If there is a problem with the registration of the Microsoft
installation engine, you might not be able to remove programs that you have installed by
using the Windows installer.
Corrective action
Unregister and reregister the Windows Installer service.
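A sketch of the sequence, using the standard Windows Installer command-line options, run from an elevated command prompt:

```
msiexec /unregister    # unregister the Windows Installer service
msiexec /regserver     # reregister the Windows Installer service
```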
Cursor displays in multiple locations in the same dashboard panel
Cause
This problem occurs when you use the Firefox browser to open the OnCommand console.
Corrective action
Disable a browser setting in Firefox, as follows:
1. Open the Firefox browser and click the Tools menu, then click Options.
2. Click the Advanced tab.
3. Clear the Always use the cursor keys to navigate within pages option.
4. Restart the Firefox browser.
No related objects are displayed for a virtual object
Issue
No physical servers corresponding to a virtual object are displayed in the Related Objects
list in the Server tab of the OnCommand console.
Cause
This problem occurs when you add or register a new host service but the mapping
between the physical servers and virtual objects does not occur.
Corrective action
1. Refresh the monitor by opening a console session and typing the following command:
dfm host discover -m share <storagesystem>
2. To view the results, type the following command:
dfm details <storagesystem>
3. Search for the shareTimestamp value to ensure that discovery for the storage system
is complete.
4. Click Rediscover in the Host Services tab of the OnCommand console to rediscover
the host service.
5. Verify that the physical servers are displayed in the Related Objects list in the Server
tab.
Error 1067 while starting the DFM WebUI service
Description
This message occurs during the installation of the Core Package, when the server has a Java
installation (older, newer, or the same version) that sets Java-specific environment variables.
This results in the DFM WebUI service failing to start when the installation is complete.
Corrective action
You can uninstall the tool or application that sets the Java-specific environment variables,
or you can delete these variables when the DFM WebUI service fails to start after you
install the Core Package. You can start the service manually by typing the dfm service
start webui/http command.
Verifying host service to DataFabric Manager server communication
Issue
You can use the dfm hs diag command to verify that communication is enabled
between the host service and the DataFabric Manager server, and to verify that the host
service can communicate with the VMware plug-in. You can use this command to
troubleshoot connectivity issues or to verify that communication is enabled before starting
a host service backup.
Corrective action
1. On the DataFabric Manager server console, type the following command:
dfm hs diag <hostservice>
OnCommand Unified Manager
Operations Manager Administration Guide
For Use with Core Package 5.2.1 - March 2015
Introduction to Operations Manager
Operations Manager is a Web-based UI of the DataFabric Manager server.
You can use Operations Manager for the following day-to-day activities on storage systems:
Discover storage systems
Monitor the device or the object health, the capacity utilization, and the performance
characteristics of a storage system
View or export reports
Configure alerts and thresholds for event management
Group devices, vFiler units, host agents, volumes, qtrees, and LUNs
Run Data ONTAP CLI commands simultaneously on multiple systems
Configure role-based access control (RBAC)
Manage host users, user groups, domain users, local users, and host roles
Note: DataFabric Manager server 3.8 and later supports not only IPv4, but also IPv6.
What DataFabric Manager server does
The DataFabric Manager server provides infrastructure services such as discovery, monitoring,
role-based access control (RBAC), auditing, and logging for products in the NetApp Storage and
Data suites.
You can script commands using the command-line interface (CLI) of the DataFabric Manager server
software that runs on a separate server. The software does not run on the storage systems.
What a license key is
To use the DataFabric Manager server, you must enable the OnCommand Core Package license by
using the license key. The license key is a character string that is supplied by NetApp.
If you are installing the software for the first time, you enter the license key during installation. You
can enter the license key in the Options window under Licensed Features. You must enable additional

licenses to use other features, such as disaster recovery and backup.
Access to Operations Manager
You can access Operations Manager and the CLI from the IP address or Domain Name System
(DNS) name of the DataFabric Manager server.

After successfully installing the DataFabric Manager server software, the DataFabric Manager server
starts discovering, monitoring, collecting, and saving information about objects in its database.
Objects are entities such as storage systems; vFiler units, disks, aggregates, volumes, and qtrees on
these storage systems; LUNs; and user quotas.
If the server is on Windows, Operations Manager is launched automatically and a welcome page
appears.
You can use one of the following URLs to access Operations Manager:
http://server_ip_address:8080/dfm
http://server_dnsname:8080
Depending on your DNS setup, you might have to use the fully qualified name in this URL; for
example, you should use tampa.florida.com instead of tampa.
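For example, from a client you can check that the Operations Manager web server is reachable by requesting the first URL form; the host name below is a placeholder:

```
curl -I http://dfm-server.example.com:8080/dfm
```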
Information to customize in the DataFabric Manager server
You can use the DataFabric Manager server to configure storage system IP addresses or names,
administrator access control, and alarms; to set up SNMP communities and administrator accounts;
and to create groups.
DataFabric Manager server 3.8 and later supports IPv6 along with IPv4. However, the following
DataFabric Manager server features lack IPv6 support:
LUN management
Snapshot-based backups (because SnapDrive for Windows and SnapDrive for UNIX do not
support IPv6 addressing)
Disaster recovery
High Availability (HA) over Veritas Cluster Servers (VCS)
"hosts.equiv" file based authentication
APIs over HTTPS do not work for storage systems managed using IPv6 addresses, when the
option httpd.admin.access is set to a value other than legacy.
Discovery of storage systems and host agents that exist on a remote network
Protocols such as RSH and SSH do not support using an IPv6 link-local address to connect to
storage systems and host agents.
Note: Link local address works with SNMP and ICMP only.
Administrator accounts on the DataFabric Manager server
You can use Operations Manager to set up administrator accounts on the DataFabric Manager server.
You can grant capabilities such as read, write, delete, backup, restore, distribution, and full control to
administrators.
The DataFabric Manager server software provides the following two administrator accounts:
Administrator: grants full access for the administrator who installed the software
Everyone: allows users to have read-only access without logging in
Authentication methods on the management server
The management server uses the information available in the native operating system for
authentication. The server does not maintain its own database of the administrator names and the
passwords.

You can also configure the management server to use Lightweight Directory Access Protocol
(LDAP). If you configure LDAP, then the server uses it as the preferred method of authentication.
Regardless of the authentication method used, the server maintains its own database of user names
and passwords for local users. (A local user might or might not be an administrator.) For local
users, the server does not use the native operating system for authentication; it performs
authentication itself.

Authentication with native operating system
You do not need to configure any options to enable the DataFabric Manager server to use the native
operating system for authentication.
Based on the native operating system, the DataFabric Manager server application supports the
following authentication methods:
For Windows: local and domain authentication
For UNIX: local password files, and NIS or NIS+
Note: Ensure that the administrator name you are adding matches the user name specified in the
native operating system.
Authentication with LDAP
You can enable LDAP authentication on the management server and configure the management
server to communicate with your LDAP servers to retrieve information about LDAP users.
The management server provides predefined templates for the most common LDAP server types.
These templates provide predefined LDAP settings that make the management server compatible
with your LDAP server.
The following LDAP servers are compatible with the management server:
Microsoft Active Directory
OpenLDAP
IBM Lotus LDAP
Netscape LDAP Server
What the discovery process is
The DataFabric Manager server discovers all the storage systems in your organization's network by
default. You can add other networks to the discovery process or enable discovery on all the networks.
Depending on your network setup, you can disable discovery entirely. You can disable auto-discovery
if you do not want SNMP network walking.
Discovery by the DataFabric Manager server
The DataFabric Manager server depends on Simple Network Management Protocol (SNMP) to
discover and periodically monitor storage systems.
If your storage systems are not SNMP-enabled, you must enable SNMP so that the server can
discover them. You can enable SNMP on storage systems by using either FilerView or the Data
ONTAP CLI.
If the routers, switches, or storage systems use SNMP communities other than public, you must
specify the appropriate communities on the Edit Network Credentials page.

The server has to locate and identify storage systems so that it can add them to its database. The
server can monitor and manage only systems and networks that are in the database.
Automatic discovery is typically the primary process that the server uses to discover storage systems
and networks. In this process, the server and the systems (storage systems, vFiler units, and Host
Agents) communicate automatically with each other.
Manual addition is secondary to the discovery process. You typically require it only for the storage
systems and the networks that you add after the server discovers the infrastructure.
What SNMP is
Simple Network Management Protocol (SNMP) is an application-layer protocol that facilitates the
exchange of management information between network devices.
SNMP is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP
enables network administrators to manage network performance; find and solve network problems;
and plan for network growth.
When to enable SNMP
You must enable SNMP on your storage systems before you install the DataFabric Manager server, if
you want the DataFabric Manager server to discover the storage systems immediately.
SNMP is normally enabled on storage systems. You can verify this by entering the options
snmp.enable command at the storage system prompt.
If SNMP is not enabled on your storage system, enable it by entering options
snmp.enable on. SNMPv1 uses community strings for authentication. Storage systems normally
allow read-only access with the 'public' community.
You can also wait until after installing the software to enable SNMP on storage systems. However,
this delays the server's discovery of the storage systems.
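A 7-Mode console session for checking and enabling SNMP might look like the following sketch; the prompt name is an example:

```
storage01> options snmp.enable       # display the current setting
storage01> options snmp.enable on    # enable SNMP if it is off
storage01> snmp community            # list the configured community strings
```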
SNMP versions to discover and monitor storage systems
The DataFabric Manager server uses the SNMP protocol versions to discover and monitor the storage
systems.
By default, the DataFabric Manager server uses SNMPv1, with public as the community string to
discover the storage systems. To use a specific configuration on a network, you must add the
networks required.
SNMPv1 is a widely used simple request/response protocol. SNMPv3 is an interoperable
standards-based protocol with security and remote configuration capabilities.
SNMPv3 provides user-based security with separate authentication and authorization. It is a method
to specify common credentials.
Note: SNMPv3 support is available only on storage systems running Data ONTAP 7.3 or later.
You can use SNMPv3 to discover and monitor storage systems if SNMPv1 is disabled.
Note: The user on the storage system whose credentials are specified in Operations Manager
should have the login-snmp capability to be able to use SNMPv3.
The version specified in the Preferred SNMP Version option at the storage system level is used for
monitoring the discovered storage system. If no version is specified at the storage system level, then
either the network setting or the global setting is used. However, you can modify the SNMP version,
if required.
Note: If the monitoring fails using the specified SNMP version, then the other SNMP version is
not used for storage system monitoring.
What the Preferred SNMP Version option is
The Preferred SNMP Version option is a global or network-specific option that specifies the
SNMP protocol version to be used first for discovery.
You can use Operations Manager to configure the option with values such as SNMPv1 or
SNMPv3.
What host discovery is
The DataFabric Manager server automatically discovers storage systems and Host Agents that are in
the same subnet as the server. When you install the DataFabric Manager server software, the Host
Discovery option is enabled by default.
The discovery of networks, when enabled, is integrated with the discovery of storage systems and
Host Agents. The discovery occurs at the same time.
Ping methods in host discovery
The DataFabric Manager server uses SNMP queries for host discovery. You must enable SNMP on
your storage systems and the routers for the DataFabric Manager server to monitor and manage
systems.
By default, SNMP is enabled on storage systems.
Ping methods might include ICMP echo, HTTP, NDMP, or ICMP echo and SNMP. The latter ping
method does not use HTTP to ping a host. Therefore, if a storage system (behind a transparent HTTP
cache) is down and the HTTP cache responds, the server does not mistakenly conclude that the
storage system is running. The ICMP echo and SNMP ping method is the default for new installations.
Note: When you select the ICMP echo and SNMP ping method, the server uses ICMP echo first,
and then SNMP, to determine whether the storage system is running.
Methods of adding storage systems and networks
You can apply a combination of methods to efficiently add storage systems and networks to the
DataFabric Manager server.
You can retain the defaults for the discovery options of Host Discovery (Enabled), and Network
Discovery (Disabled).
You can start the discovery process by manually adding one storage system from each network
that has storage systems.
When you add a storage system, its network is also added. Then, other storage systems on the
network are found automatically.
After verifying that all of the storage systems have been added, you should disable host discovery
to save network resources.
After adding a new storage system to your network, you must add hosts by using either
Operations Manager or the dfm host add command at the CLI.
If you set up a new network of storage systems, you must add one storage system so that its
network and all other storage systems on it are found.
You can add storage systems that do not have login credentials to the DataFabric Manager server.
However, some attributes, such as AutoSupport, are not correctly updated on the DataFabric
Manager server. You can set the credentials of such storage systems in the DataFabric Manager
server.
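Adding a system manually and then setting its login credentials can be sketched from the DataFabric Manager server CLI as follows; the host name and credential values are placeholders:

```
dfm host add storage01.example.com
dfm host set storage01.example.com hostLogin=root hostPassword=<password>
```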
Setting the login credentials of storage systems
You can identify the storage systems that do not have login credentials and set the credentials in the
DataFabric Manager server.
Steps
1. Click Control Center > Management > Storage System > Passwords.
2. In the Storage Systems with empty credentials on the DataFabric Manager server section of the
Storage System Passwords page, select the storage system for which you want to set the login
credentials.
3. Specify the login name and password for the administrator.
4. Click Update.
Discovery of a cluster by Operations Manager (clustered environments only)
The DataFabric Manager server automatically discovers a cluster that is in the same network as the
DataFabric Manager server host. If the cluster is in a different network, you can discover it by
specifying the appropriate SNMP version in the Preferred SNMP Version option in the Network
Credentials page.
For a specified IP address and the SNMP version in the Preferred SNMP Version option, the
following objects are queried:
Name of the system (sysName)
ID of the system object (sysObjectId)
Cluster ID (clusterIdentityUuid)
If the query fails, you can use another version of SNMP to send queries. When the query succeeds,
Operations Manager identifies the cluster, based on the sysObjectId. For example, if the sysObjectId
is netappProducts.netappCluster, the cluster is added to the network.
Note: For both SNMPv1 and SNMPv3, Operations Manager uses per-network configuration
settings or the default network settings.
Adding clusters (clustered environments only)
By using Operations Manager, you can add a cluster by specifying the IP address of the given cluster
management logical interface.
About this task
Operations Manager uses the default SNMP version to discover a cluster. If you want to change the
default version, you can specify the appropriate SNMP version in the Preferred SNMP Version
option in the Network Credentials page.
Steps
1. Click Control Center > Home > Member Details > Physical Systems.
2. In the New storage system field, enter the host name or IP address of the cluster that you want to
add.
3. Click Add.
Result
The cluster is added and displayed in the Clusters, All report.
Cluster monitoring tasks using Operations Manager (clustered environments only)
Using Operations Manager, you can perform various monitoring tasks on cluster nodes, including
monitoring clusters and generating reports on the cluster and its objects.
You can perform the following tasks on a cluster by using Operations Manager:
Discover a cluster, either automatically or manually.
Monitor the cluster.
Configure the cluster to receive alerts.
Generate various reports on the cluster and its components.
Run commands remotely.
Unsupported tasks for cluster monitoring in Operations Manager (clustered environments only)
You cannot perform certain monitoring tasks related to a cluster due to lack of support for certain
APIs, SNMP objects, and interfaces in clustered Data ONTAP systems.
The following features are not supported in Operations Manager:
Management of the cluster administrator user profile and password
Configuration management of clusters, controllers, and Storage Virtual Machines (SVMs)
Configuration of the high-availability checker script
Management of volume SnapMirror relationships, schedules, and jobs
Receiving SNMP traps from a cluster
SnapLock reports
Role-based access control in the DataFabric Manager server
The DataFabric Manager server uses role-based access control (RBAC) for user login and role
permissions.

What role-based access control does
Role-based access control (RBAC) enables administrators to manage groups of users by defining
roles. If you need to restrict access for specific functionality to selected administrators, you must set
up administrator accounts for them. If you want to restrict the information that administrators can
view and the operations they can perform, you must apply roles to the administrator accounts you
create.
The management server uses role-based access control (RBAC) for user login and role permissions.
If you have not changed the management server's default settings for administrative user access,
you do not need to log in to view them.
When you initiate an operation that requires specific privileges, the management server prompts
you to log in. For example, to create administrator accounts, you must log in with Administrator
account access.
Logging in to the DataFabric Manager server
You can log in to the DataFabric Manager server by entering the administrator name and password on
the Operations Manager interface.
Steps
1. From the Control Center, select Log In.
2. Type your administrator name and password.
3. Click Log In.
What default administrator accounts are
The DataFabric Manager server uses administrator accounts to manage access control and maintain
security. When you install the DataFabric Manager server software, default administrator accounts
are created: the Administrator and Everyone accounts. Administrator accounts have predefined
roles assigned to them.
Administrator account
The Administrator has superuser privileges and can perform any operation in the
DataFabric Manager server database and add other administrators. The Administrator
account is given the same name as the name of the administrator who installed the
software. Therefore, if you install the DataFabric Manager server on a Linux workstation,
the administrator account is called root.
Everyone account
After installing the DataFabric Manager server, you must log in as the Administrator and
set up the Everyone account to grant view permission on this account. This is optional.
Note: Changes made will not be seen in the audit log.
Note: Prior to DataFabric Manager server 3.3, the Everyone account was assigned Read access by
default. If you upgrade to DataFabric Manager server 3.3 or later, these legacy privileges are
retained by the Everyone account and mapped to the GlobalRead role.
List of predefined roles in the DataFabric Manager server
You must be aware of the roles assigned to different administrator accounts in the DataFabric
Manager server.
Active Directory user group accounts
The DataFabric Manager server recognizes two types of users, namely Administrator and User,
thereby allowing domain administrators to define roles based on a company's
organizational hierarchy.
To set up administrator accounts as a user group, use the following naming convention: <AD
domain>\group_dfmadmins.
In this example, all administrators who belong to group_dfmadmins can log in to the DataFabric
Manager server and inherit the roles specified for that group.
Adding administrative users
You can create administrator accounts from the Operations Manager console. Administrator accounts
are either an individual administrator or a group of administrators.
Before you begin
The DataFabric Manager server user must be a local operating system user, or a domain user
reachable by LDAP.
Steps
1. Log in to the Administrator account.
2. In the Operations Manager console, click Setup > Administrative users.
3. Type the name for the administrative user or domain name for the group of administrators.
The user that you add must be available locally.
4. If you have already created a role that you want to assign to this user or group of users, select the
role in the left column of the displayed table and use the arrow button to move the role to the
column on the right.
Roles in the column on the right are assigned to the user that you are creating.
5. Type the email address for the administrator or administrator group.
6. Enter the pager number for the administrator or administrator group.
7. Click Add.
What an RBAC resource is
An RBAC resource is an object on which an operation can be performed. In the DataFabric Manager
server RBAC system, these resources include Aggregates, Controllers, Clusters, Volumes, Virtual
servers, LUNs, Protection policies, Provisioning policies, vFiler templates, Hosts, and DataFabric
Manager server Groups (except configuration groups).
A user with the Policy Write capability in the global scope can create schedules and throttles. A user
with the Policy Write capability on the policy can modify data protection policies. Similarly, a user
with the Policy Delete capability on a policy can delete that policy.
Note: On upgrading to DataFabric Manager server 3.6 or later, a user has the following
capabilities:
A user with the Database Write capability in the global scope is assigned the Policy Write
capability.
A user with the Database Delete capability is assigned the Policy Delete capability.
Viewing a specific summary page (7-Mode environments only)
You can view the summary page that is specific to a storage system or vFiler unit.
Steps
1. Click Control Center > Home > Member Details > Physical Systems.
2. Click the required storage system or controller link.
3. From the left pane, under Storage Controller Tools, click Host Users Summary.
Viewing users on the host (7-Mode environments only)
You can view the users on a host by using the Host Users report. The Host Users report displays
information about the existing users on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Users, All from the Report drop-down list.
Who local users are (7-Mode environments only)
Local users are the user roles created on storage systems and vFiler units.
Viewing local users on the host (7-Mode environments only)
You can view the local users on a host by using the Host Local Users report. The Host Local Users
report displays information about the existing local users on the host.
Steps
1. From any page, select Management > Host Users.
2. Select Host Local Users, All from the Report drop-down list.
What groups and objects are
A group is a collection of the DataFabric Manager server objects. Storage system elements that are
monitored by the DataFabric Manager server, such as storage systems, aggregates, file systems
(volumes and qtrees), and logical unit numbers (LUNs), are referred to as objects.
You can group objects based on characteristics such as the operating system of a storage system
(Data ONTAP version), the storage systems at a location, or all the file systems that belong to a
specific project or group in your organization.
You can add the following list of DataFabric Manager server objects to a resource group:
Host (can include storage systems, Host Agents, virtual servers (clustered environments only),
and vFiler units (7-Mode environments only))
Volume
Qtree
Configuration
LUN path
Aggregate
Disk
Dataset (7-Mode environments only)
Resource pool (7-Mode environments only)
You should consider the following information when creating groups:
You can group similar or different objects in a group.
An object can be a member of any number of groups.
You can group a subset of group members to create a new group.
You cannot create a group of groups.
You can create any number of groups.
You can copy a group or move a group in a group hierarchy.
What group types are
The DataFabric Manager server automatically determines the type of a group based on the objects it
contains. If you place your cursor over the icon to the left of a group name, on the left side of
the Operations Manager main window, you can quickly find out the type of objects that the group
contains.
What homogeneous groups are
You can group objects into sets of objects with common characteristics. For example, they might be
running the same operating system or belong to a specific project or group in your organization.
You can create the following types of groups:
Appliance resource group: contains storage systems, vFiler units, and host agents
Aggregate resource group: contains aggregates only
File system resource group: contains volumes and qtrees
LUN resource group: contains LUNs only
Configuration resource group: contains storage systems associated with one or more
configuration files
Dataset: the data that is stored in a collection of primary storage containers, including all the
copies of the data in those containers
Resource pool: a collection of storage objects from which other storage containers are
allocated
What mixed-type groups are
You can add objects of different types to the same group. Grouping objects from different
homogeneous groups also constitutes a mixed-type group. For example, a mixed-type group can have
a group of vFiler units and volumes. You can also group objects based on their geographical
location, or by the client they support.
Configuration resource groups can contain only storage systems, vFiler units, and configurations.
Once created, you cannot add any objects to the configuration resource group in the DataFabric
Manager server. The elements of a configuration resource group cannot be a part of any other
homogeneous group. The DataFabric Manager server prevents you from adding other object types to
configuration resource groups. If a group already contains objects other than hosts, you cannot
add a configuration file to the group.

What a Global group is


All objects monitored by DataFabric Manager server belong to the global group. By default, a group
called global exists in the DataFabric Manager server database. All subgroups created also belong to
the global group.
You cannot delete or rename the global group. When you delete an object, the DataFabric Manager
server stops monitoring and reporting data for that object. Data collection and reporting is not
resumed until the object is added back (recovered) to the database.

What hierarchical groups are

In addition to creating groups of objects, you can create subgroups within groups to establish a
hierarchy of groups.
Hierarchical groups help you manage administrative privileges, because privileges granted to a parent group are implicitly granted to all its subgroups. In addition, hierarchical groups provide the following benefits:
You can determine the capacity of the group and the chargeback options.
You can keep a record of trending, that is, the data growth rate of the group.
You can select arguments for the reports to be generated.
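The privilege inheritance described above can be sketched in a few lines of Python. This is an illustrative model only, an assumption made for clarity; it is not DataFabric Manager code, and the class and role names are hypothetical.

```python
# Hypothetical model of hierarchical groups: privileges granted on a parent
# group are implicitly granted to all of its subgroups.

class Group:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.privileges = set()  # privileges granted directly on this group

    def grant(self, privilege):
        self.privileges.add(privilege)

    def effective_privileges(self):
        # Walk up the hierarchy: a subgroup inherits everything granted above it.
        inherited = self.parent.effective_privileges() if self.parent else set()
        return self.privileges | inherited

global_group = Group("Global")
datacenter = Group("Datacenter-A", parent=global_group)
project = Group("Project-X", parent=datacenter)

datacenter.grant("Database Write")
print(sorted(project.effective_privileges()))  # ['Database Write']
```

Granting "Database Write" on the parent is enough for the subgroup; nothing was granted on Project-X directly.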
Creating groups
You can create a new group from the Edit Groups page. You can group objects based on storage
systems at a location, or all file systems that belong to a specific project or group in your
organization.
Before you begin

To create a group, you must be logged in as an administrator with a role that has Database Write capability on the parent group. To create a group directly under the Global group, the administrator must have a role with Database Write capability on the Global group.

Steps

1. From the Control Center, click Edit Groups.
2. In the Group Name field, type the name of the group you want to create.
3. From the list of groups, select the parent group for the group you are creating.
You might need to expand the list to display the parent group you want.
4. Click Add.
Result

The new group is created. The Current Groups list in the left-pane area is updated with the new group. You might need to expand the Current Groups list to display the new group.

Creating groups from a report

You can create a new group from a report in Operations Manager.
Steps

1. From the Control Center, click the Member Details tab.
2. Click one of the following tabs: Aggregate, File Systems, or LUN.
3. To the left of the list of objects in the main window, select the check boxes for the objects that you
want to add to the group.
4. At the bottom left of the main window, click Add to New Group.
5. In the Group Name field, type the name of the group you want to create.
6. From the list of groups, select the parent group for the group you are creating.
You might have to expand the list to display the parent group that you want.
7. Click Add.
Result

The Current Groups list in the left-pane area is updated and displays the new group. You might have to expand the Current Groups list to display the new group.

What configuration resource groups are

A configuration resource group is a group of storage systems that share a set of common
configuration settings. A configuration resource group allows you to designate groups of managed
storage systems that can be remotely configured to share the same configuration settings.
A configuration resource group must contain one or more storage systems and have one or more files containing the desired configuration settings. These configuration settings are listed in files
called configuration files. Configuration files exist independently of groups, and can be shared
between groups. You can use Operations Manager to create configuration files and to specify the
configuration settings that you want to include in them.
With Operations Manager, you can create and manage configuration files that contain configuration
settings you want to apply to a storage system and vFiler unit or groups of storage systems and vFiler
units. By using the storage system configuration management feature, you can pull configuration
settings from one storage system and vFiler unit and push the same or a partial set of settings to other
storage systems or groups of storage systems and vFiler units. When pushing configuration settings, you must ensure that the storage system and vFiler unit configuration conforms to the configuration pushed to it from Operations Manager.

Guidelines for creating groups

Before creating a group, you should consider how you want to configure the group, what the purpose of the group is, and how the group affects your environment. For example, if you create too many groups, your environment could slow down.
You should consider the following guidelines:
You can group similar or mixed-type objects in a group.
An object can be a member of any number of groups.
You can group a subset of group members to create a new group.
You can copy or move a group within a group hierarchy. However, a parent group cannot be moved to its child group.

Guidelines for creating configuration resource groups

You must use a set of guidelines when you create Configuration Resource groups.
Use the following guidelines when you create Configuration Resource groups:
You can include storage systems and vFiler units with different model types and software versions.
A storage system or vFiler unit can be a member of only one configuration resource group, but can still be a member of multiple groups for monitoring purposes.
You cannot create a group of Configuration Resource groups.
To apply settings to a Configuration Resource group, you must associate one or more configuration files with the group.
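The membership rule above, that a system can belong to only one configuration resource group while monitoring-group membership is unrestricted, can be sketched as follows. This is a hypothetical illustration, not product code; the function and group names are invented.

```python
# Hypothetical sketch of the membership rule: a storage system or vFiler unit
# can belong to only one configuration resource group at a time, but to any
# number of monitoring groups.

config_group_of = {}    # system name -> its single configuration resource group
monitoring_groups = {}  # system name -> set of monitoring groups (unrestricted)

def add_to_config_group(system, group):
    existing = config_group_of.get(system)
    if existing is not None and existing != group:
        raise ValueError(
            f"{system} is already in configuration resource group '{existing}'")
    config_group_of[system] = group

def add_to_monitoring_group(system, group):
    # Monitoring membership is unrestricted: any number of groups is allowed.
    monitoring_groups.setdefault(system, set()).add(group)

add_to_config_group("filer1", "nyc-config")
add_to_monitoring_group("filer1", "all-filers")
add_to_monitoring_group("filer1", "nyc-site")
```

Adding "filer1" to a second configuration resource group would raise an error, while it can keep joining monitoring groups freely.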

What summary reports are

Summary reports are available for all groups, including the Global group. Summary reports include the following information:
Status
Group members
Storage capacity used and available
Events
Storage chargeback information
Monitored devices
Physical space
Storage system operating systems
Storage system disks
Capacity graphs
Note: You can view additional reports that focus on the objects in a group by clicking the name of the group and then clicking the appropriate Operations Manager tab.

What subgroup reports are

If you run a report on a group with subgroups, the data displayed includes data on applicable objects
in the subgroups. You do not see data about other object types in the parent group or the subgroups.
If you display the Aggregate Capacity Graph report on a parent group containing aggregates, you see
data about the aggregates in the parent group. You can also see data about the aggregates in its
subgroups.
If you run a report on a mixed-type object group, Operations Manager runs the report on group
members of the applicable type. For example, qtrees for the Qtree Growth report. Operations
Manager combines the results, and then eliminates duplicates, if any.
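The report behavior just described can be sketched as a small recursive collection. This is an illustration only, an assumed model rather than the actual report engine: objects of the applicable type are gathered from the group and all its subgroups, and duplicates are eliminated.

```python
# Illustrative sketch: running a report on a group collects objects of the
# applicable type from the group and all of its subgroups, then removes
# duplicates (here, by collecting into a set).

def collect(group, obj_type):
    objects = {o for o in group["members"] if o[0] == obj_type}
    for sub in group.get("subgroups", []):
        objects |= collect(sub, obj_type)
    return objects

parent = {
    "members": [("qtree", "q1"), ("volume", "v1")],
    "subgroups": [
        {"members": [("qtree", "q1"), ("qtree", "q2")]},  # q1 is a duplicate
    ],
}
print(sorted(collect(parent, "qtree")))  # [('qtree', 'q1'), ('qtree', 'q2')]
```

A Qtree Growth report on `parent` would therefore cover q1 and q2 once each, and ignore the volume.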

What cluster-related objects are (clustered environments only)

Operations Manager enables you to include cluster-related objects, such as nodes and Storage Virtual
Machines (SVMs, formerly known as Vservers), in a group. This enables you to easily monitor
cluster-related objects that belong to a particular group.
The cluster-related objects are as follows:
SVM
A single file-system namespace. SVMs have separate network access and provide the
same flexibility and control as a dedicated node. Each SVM has its own user domain and
security domain. It can span multiple physical nodes.
SVMs have a root volume that constitutes the top level of the namespace hierarchy;
additional volumes are mounted to the root volume to extend the namespace. Each SVM
is associated with one or more logical interfaces (LIFs) through which clients access the
data on the storage server. Clients can access the SVM from any node in the cluster, but
only through the logical interfaces that are associated with the SVM.
Namespace
Every SVM has a namespace associated with it. All the volumes associated with an SVM
are accessed under the server's namespace. A namespace provides a context for the
interpretation of the junctions that link together a collection of volumes.
Junction
A junction points from a directory in one volume to the root directory of another volume.
Junctions are transparent to NFS and CIFS clients.
Logical interface
An IP address with associated characteristics, such as a home port, a list of ports to fail
over to, a firewall policy, a routing group, and so on. Each logical interface is associated
with a maximum of one SVM to provide client access to it.
Cluster
A group of connected storage systems that share a global namespace and that you can manage as a single SVM or multiple SVMs, providing performance, reliability, and scalability benefits.
Storage controller
The component of a storage system that runs the Data ONTAP operating system and
controls its disk subsystem. Storage controllers are also sometimes called controllers,
storage appliances, appliances, storage engines, heads, CPU modules, or controller
modules.
Ports
A port represents a physical Ethernet connection. In a Data ONTAP cluster, ports are
classified into the following three types:
Data ports
Provide data access to NFS and CIFS clients.
Cluster ports
Provide communication paths for cluster nodes.
Management ports
Provide access to the Data ONTAP management utility.
Data LIF
A logical network interface mainly used for data transfers and operations. A data LIF is
associated with a node or SVM in a Data ONTAP cluster.
Node management LIF
A logical network interface mainly used for node management and maintenance
operations. A node management LIF is associated with a node and does not fail over to a
different node.
Cluster management LIF
A logical network interface used for cluster management operations. A cluster
management LIF is associated with a cluster and can fail over to a different node.
Interface group
A single virtual network interface that is created by grouping together multiple physical
interfaces.

Creating a group of cluster objects (clustered environments only)

By using Operations Manager, you can create a group of cluster objects for easier administration and
access control. You can add objects such as clusters, controllers, aggregates, volumes, and Storage Virtual Machines (SVMs) to a group.
Steps

1. Click Control Center > Home > Member Details > Physical Systems > Report.
Example

You can create a group of SVMs by clicking Control Center > Home > Member Details >
Virtual Systems > Report > Storage Virtual Machines, All.
2. Depending on the cluster objects you want to group, select the appropriate report from the Report
drop-down list.
3. In the resulting report, select the cluster objects you want to include in a group.
4. Click Add To New Group.
5. In the Group Name field, enter a name for the group.
6. Select an appropriate parent for your new group.
7. Click Add.

Storage monitoring and reporting

Monitoring and reporting functions in the DataFabric Manager server depend on event generation.
You must configure the settings in Operations Manager to customize monitoring and to specify how
and when you want to receive event notifications.
Operations Manager allows you to generate summary and detailed reports. Depending on which tab
you select, Operations Manager returns the appropriate graph or selection of reports (for example,
reports about storage systems, volumes, and disks).

The DataFabric Manager server monitoring process

The DataFabric Manager server discovers the storage systems supported on your network. The
DataFabric Manager server periodically monitors data that it collects from the discovered storage systems, such as CPU usage, interface statistics, free disk space, qtree usage, and chassis environmental information. The DataFabric Manager server generates events when it discovers a storage system,
when the status is abnormal, or when a predefined threshold is breached. If configured to do so, the
DataFabric Manager server sends a notification to a recipient when an event triggers an alarm.

Cluster monitoring with Operations Manager (clustered environments only)

You can use Operations Manager to discover and monitor storage systems in cluster environments
and generate reports. Operations Manager uses SNMP or XML APIs for cluster monitoring.

What the cluster-management LIF is (clustered environments only)

The cluster-management LIF is a virtual network interface that enables you to perform cluster
management operations. This LIF is associated with a cluster and can be failed over to a different
node.
The cluster-management LIF is associated with a cluster management server to provide a detailed
view of the cluster. Operations Manager monitors clusters by using the appropriate SNMP version or
APIs. You can gather information about the cluster resources, controller resources, and Storage
Virtual Machine (SVM) resources from the cluster-management LIF.

Information available on the Cluster Details page (clustered environments only)

The Cluster Details page for a cluster provides information such as the cluster hierarchy, status of the cluster, number of logical interfaces, and ports. You can access the Cluster Details page by clicking the cluster name from any of the cluster reports.
The Cluster Details page displays the following cluster-related information:
Status of the cluster
Serial number
Uptime
Primary IP address
Number of controllers
Number of SVMs
Number of ports
Number of logical interfaces
Contact and location
Current events that have the cluster as the source
Storage capacity
Groups to which the cluster belongs
List of the most recent polling sample date and the polling interval of all the events monitored for
the cluster
Graphs that display the following information:
Volume capacity used
Volume capacity used versus total capacity
Aggregate capacity used
Aggregate capacity used versus total capacity
Aggregate space usage versus committed space
CPU usage (in percentage)
NFS Operations/second
NFS and CIFS Operations/second
CIFS Operations/second
Network Traffic/second
Logical Interface Traffic/second

Tasks performed from the Cluster Details page (clustered environments only)
You can perform various cluster management tasks from the Cluster Details page, such as viewing the resource utilization and gathering information about cluster objects. You can access the Cluster Details page by clicking the cluster name from any of the cluster reports.
You can perform the following tasks from the Cluster Details page:
Exploring the physical view of the system
You can gather information about cluster objects through reports. For example, by clicking the
number corresponding to the Controllers link, you can view the details of those controllers from
the Controllers, All report.
Viewing the total utilization of physical resources
You can view the total utilization of physical resources, including CPU usage, network traffic at
the cluster level, and volume capacity used. You can also view, in graphical format, information
about the corresponding resources.
Browsing the cluster and its objects
You can browse through the cluster and its objects from the Cluster Hierarchy section in the
Cluster Details page. You can also browse to the corresponding report of a particular object. For
example, you can expand the cluster, view the list of Storage Virtual Machines (SVMs), and then
click a specific SVM to view its details on the corresponding Storage Virtual Machine Details
page.

Viewing the utilization of resources

You can view the graphical representation of the utilization of various physical and logical resources
from the Details page in Operations Manager.
Viewing the utilization of logical resources (clustered environments only)
By using Operations Manager, you can view, in graphical format, the utilization of your logical
resources, such as Storage Virtual Machines (SVMs), and logical interface traffic to the SVM and
volumes. You can also configure alarms to send notification whenever the utilization exceeds preset
thresholds.
Steps

1. Click Control Center > Home > Member Details > Virtual Systems > Report > Storage
Virtual Machines, All.
2. Click the name of the SVM for which you want to view the utilization of logical resources.
3. In the Storage Virtual Machine Details page, select the appropriate graph from the drop-down
menu.
You can view the utilization graph on a daily, weekly, monthly, quarterly, or yearly basis.
Example

To view the SVM volume capacity used for one year, you can select Volume Capacity Used and
click 1y.
Viewing the utilization of physical resources
By using Operations Manager, you can view the graphical representation of the utilization of your
physical resources such as CPU, network traffic to the controller, and so on.
Steps

1. Click Control Center > Home > Member Details > Physical Systems > Report > Controllers,
All.
2. Click the name of the controller for which you want to view the utilization of physical resources.
3. In the Storage Controller Details page, select the appropriate graph from the drop-down menu.
You can view the utilization graph on a daily, weekly, monthly, quarterly, or yearly basis. For example, to view the controller's CPU usage for a period of one year, you can select CPU Usage (%) and click 1y.

What the SNMP trap listener is

In addition to periodically sending out SNMP queries, DataFabric Manager server 3.1 and later
include an SNMP trap listener as part of the server service. Event generation and alerting is faster
than with SNMP queries because the proper monitoring mechanism is started immediately after the
SNMP trap is received. In addition, monitoring is performed asynchronously, instead of waiting for
the monitoring interval.
The SNMP trap listener listens for SNMP traps from monitored storage systems that have been manually configured to send traps to the DataFabric Manager server (over UDP port 162).
Note: The SNMP trap listener can receive SNMP traps only from storage systems that are supported on the DataFabric Manager server. Traps from other sources are dropped.

What SNMP trap events are

When the SNMP trap listener receives an SNMP trap, the DataFabric Manager server issues an
Information event, but does not change the status of the host.
Instead, the corresponding monitor associated with the trap generates the proper event and continues
to monitor the host to report status changes. The name associated with the SNMP trap Information
event indicates the severity of the trap: for example, Error Trap. The trap severities are deduced from
the last digit of the trap ID, as specified in the custom MIB. The SNMP traps received by the SNMP
trap listener are specified in the custom MIB. For a complete list of traps and associated trap IDs, see
the Data ONTAP Network Management Guide for 7-Mode.
The following list describes the SNMP trap Information event types:
Emergency Trap Received
Alert Trap Received
Critical Trap Received
Error Trap Received
Warning Trap Received
Notification Trap Received
Information Trap Received
If the severity of a trap is unknown, the DataFabric Manager server drops the trap.
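The dispatch described above can be sketched in Python. The actual digit-to-severity mapping lives in the custom MIB, so the table below is a placeholder assumption; the function name is also hypothetical.

```python
# Sketch of trap classification: the severity is deduced from the last digit of
# the trap ID. The digit-to-severity table here is an ASSUMED placeholder; the
# real mapping is defined in the custom MIB.

SEVERITY_BY_LAST_DIGIT = {
    1: "Emergency", 2: "Alert", 3: "Critical", 4: "Error",
    5: "Warning", 6: "Notification", 7: "Information",
}

def classify_trap(trap_id):
    """Return the Information-event name for a trap, or None to drop it."""
    severity = SEVERITY_BY_LAST_DIGIT.get(trap_id % 10)
    if severity is None:
        return None  # unknown severity: the server drops the trap
    return f"{severity} Trap Received"

print(classify_trap(824))  # 'Error Trap Received' under the assumed mapping
print(classify_trap(829))  # None: unknown severity, trap is dropped
```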

How SNMP trap reports are viewed

You can use the Events tab to view reports about the SNMP traps that are received by the DataFabric
Manager server.
The Events tab enables you to view a listing of all current SNMP traps or to sort them by severity.
Each view provides information about each SNMP trap, for example, the name of the trap, the
severity, and the condition that led to the error.

When SNMP traps cannot be received

The DataFabric Manager server cannot receive SNMP traps if any of the following conditions exist:
A system has not been configured to send traps to the DataFabric Manager server.
The host is not a supported storage system.
The DataFabric Manager server version is earlier than 3.1.
Additionally, the DataFabric Manager server cannot receive Debug traps.

How the SNMP trap listener is stopped

The SNMP trap listener is enabled by default. If you want to start or stop the SNMP trap listener, use the snmpTrapListenerEnabled CLI option.

Configuration of SNMP trap global options

You can configure the SNMP trap global options by accessing the SNMP Trap Listener options and the Event and Alert options on the Options page.
Configuration of the SNMP trap global options is not necessary at start-up. However, you might want
to modify the global default settings.
The following global default settings can be modified:
Enable SNMP trap listener
Use this option to enable or disable the SNMP trap listener.
SNMP Trap Listener Port
Use this option to specify the UDP port on which the SNMP Manager Trap Listener receives
traps. Supported storage systems can send SNMP traps only over UDP port 162.
SNMP Maximum Traps Received per window and SNMP Trap Window Size
Use these two options to limit the number of SNMP traps that can be received by the trap listener
within a specified period.
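The effect of the last two options can be sketched as a sliding-window limiter. This is an assumed model for illustration, not the server's implementation; the class name and behavior on overflow are hypothetical.

```python
# Minimal sketch (assumed behavior) of limiting how many traps are accepted
# within a time window, as the "SNMP Maximum Traps Received per window" and
# "SNMP Trap Window Size" options do.

from collections import deque

class TrapWindowLimiter:
    def __init__(self, max_traps, window_seconds):
        self.max_traps = max_traps
        self.window = window_seconds
        self.arrivals = deque()  # timestamps of accepted traps

    def accept(self, now):
        # Drop timestamps that have aged out of the window.
        while self.arrivals and now - self.arrivals[0] >= self.window:
            self.arrivals.popleft()
        if len(self.arrivals) >= self.max_traps:
            return False  # over the limit: this trap is discarded
        self.arrivals.append(now)
        return True

limiter = TrapWindowLimiter(max_traps=2, window_seconds=60)
print([limiter.accept(t) for t in (0, 10, 20, 70)])  # [True, True, False, True]
```

The third trap at t=20 is discarded because two traps were already accepted within the 60-second window; by t=70 both have aged out.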

What events are

Events are generated automatically when a predefined condition occurs or when an object crosses a
threshold. These events enable you to take action to prevent issues that can lead to poor performance
and system unavailability. Events include an impact area, severity, and impact level.
Events are categorized by the type of impact area they encompass. Impact areas include availability,
capacity, configuration, or protection. Events are also assigned a severity type and impact level that
assist you in determining if immediate action is required.
You can configure alarms to send notification automatically when specific events or events of a
specific severity occur.
Events are automatically logged and retained for a default of 180 days.
Event types are predetermined. You can manage event notification, but you cannot add or delete
event types. However, you can modify the event severity by using the command-line interface.

Viewing events
You can view a list of all events and view detailed information about any event.
Step

1. Click Emergency, Critical, Error, or Warning on the Operations Manager main window to view reports of the respective event severity type.

Managing events
If the DataFabric Manager server is not configured to trigger an alarm when an event is generated, you are not notified of the event. However, you can identify the event by checking the events log on the DataFabric Manager server.
Steps

1. From an Events view, select the check box for the event that you want to acknowledge. You can
select multiple events.
2. Click Acknowledge Selected to acknowledge the event that caused the alarm.
3. Find out the cause of the event and take corrective action.
4. Delete the event.

Alarm configurations
The DataFabric Manager server uses alarms to tell you when events occur. The DataFabric Manager
server sends the alarm notification to one or more specified recipients: an e-mail address, a pager
number, an SNMP traphost, or a script that you write.
You decide which events cause alarms, whether an alarm repeats until it is acknowledged, and how many recipients an alarm has. Not all events are severe enough to require alarms, and not all alarms are important enough to require acknowledgment. Nevertheless, you should configure the DataFabric Manager server to repeat notification until an event is acknowledged, to avoid multiple responses to the same event.
The DataFabric Manager server does not automatically send alarms for events. You must configure alarms for the events that you specify.

Guidelines for configuring alarms

When configuring alarms, you must follow a set of guidelines.
Alarms must be created for a group, either an individual group or the Global group.
If you want to set an alarm for a specific object, you must first create a group with that object as
the only member. You can then configure an alarm for the newly created group.
Alarms you create for a specific event are triggered when that event occurs.
Alarms you create for a type of event are triggered when any event of that severity level occurs.
Alarms can be configured based on events, event severity, or event class.

Creating alarms
You can create alarms from the Alarms page in Operations Manager.
Steps

1. Click Control Center > Setup > Alarms.
2. From the Alarms page, select the group that you want Operations Manager to monitor.
You might need to expand the list to display the one you want to select.
3. Specify what triggers the alarm: an event or the severity of event.
4. Specify the recipient of the alarm notification.
Note: If you want to specify more than one recipient or configure repeat notification, continue
to Step 5.

5. Click Add to set the alarm. If you want to configure additional options, continue with Step 6.
6. Click Advanced Version.
7. Optionally, if you want to specify a class of events that should trigger this alarm, specify the event
class.
You can use regular expressions.
8. Optionally, specify the recipients of the alarm notification.
Formats include administrator names, e-mail addresses, pager addresses, or an IP address of the
system to receive SNMP traps (or port number to send the SNMP trap to).
9. Optionally, specify the period during which Operations Manager sends alarm notifications.
10. Optionally, select Yes to resend the alarm notification until the event is acknowledged or No to
notify the recipients only once.
11. Optionally, set the interval (in minutes) that Operations Manager waits before it tries to resend a
notification.
12. Activate the alarm by selecting No in the Disable field.
13. Click Add.
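Steps 9 through 11 configure repeat notification. The resulting schedule can be sketched as follows; this is a hypothetical illustration of the behavior, not product code.

```python
# Sketch (assumed behavior) of repeat notification: an unacknowledged alarm is
# re-sent every `interval` minutes until the event is acknowledged.

def notification_times(event_minute, acked_minute, interval):
    """Minutes at which notifications go out, given the acknowledgment time."""
    times = []
    t = event_minute
    while t < acked_minute:
        times.append(t)
        t += interval
    return times

# Event at minute 0, acknowledged at minute 45, resend interval of 15 minutes:
print(notification_times(event_minute=0, acked_minute=45, interval=15))  # [0, 15, 30]
```

With repeat notification disabled (Step 10 set to No), only the first notification at minute 0 would be sent.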

Working with user alerts

The DataFabric Manager server can send an alert to you whenever it detects a condition or problem in your systems that requires attention. You can configure the mail server so that the DataFabric Manager server can send alerts to specified recipients when an event occurs.

What user alerts are

By default, the DataFabric Manager server sends out user alerts (e-mail messages) to all users who
exceed their quota limits.
Whenever a user event related to disk or file quotas occurs, the DataFabric Manager server sends an
alert to the user who caused the event. The alert is in the form of an e-mail message that includes
information about the file system (volume or qtree) on which the user exceeded the threshold for a
quota.
You can disable the alerts for all users or for the users who have quotas.

Differences between alarms and user alerts

The DataFabric Manager server uses alarms to tell you when events occur. By default, the DataFabric
Manager server sends out alerts to all users who exceed their quota limits.

User alerts configurations

To receive user alerts, you must enable the UserQuotaAlerts option, configure e-mail addresses, and configure the e-mail domain. Optionally, you can configure the mailmap file.
If you want to enable or disable alerts to all users in the DataFabric Manager server database, use the EnableUserQuotaAlerts option in the Users section on the Options page.
You can enable or disable alerts to only the users who have quotas configured on a specific file system or a group of file systems by using the EnableUserQuotaAlerts option on the Edit Volume Settings or Edit Qtree Settings page of that volume or qtree.

How you specify email addresses for alerts

You can specify an email address on DataFabric Manager server to receive alerts. The email address
that is used to send alerts depends on the DataFabric Manager server configuration.
The following list provides the different ways to specify an email address:
Using the dfm quota user command
Using the mailmap file
If you have to specify email addresses of many users, using the Edit User Settings page for each user might not be convenient. Therefore, the DataFabric Manager server provides a mailmap file that enables you to specify many email addresses in one operation.
The following list details what occurs if you do not specify an email address:
If you do not specify an email address, the DataFabric Manager server appends the configured default email domain to the user name. The resulting email address is used to send the alert.

Note: The DataFabric Manager server uses only the part of the user name that is unique to the user. For example, if the user name is company/joe, Operations Manager uses joe as the user name.
If a default email domain is not configured, Operations Manager uses the part of the user name that is unique to the user (without the domain information).
Note: If your SMTP server processes only email addresses that contain the domain information, you must configure the domain in the DataFabric Manager server to ensure that email messages are delivered to their intended recipients.
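The address derivation described above can be sketched as a small function. This is an assumed model for illustration only; the function name and the exact handling of Windows and NIS user names are hypothetical.

```python
# Sketch (assumed behavior) of how an alert address is derived when no explicit
# email address is configured for a user.

def derive_alert_address(user_name, default_domain=None):
    # Keep only the part of the user name that is unique to the user,
    # for example "company/joe" -> "joe".
    short = user_name.replace("\\", "/").split("/")[-1]
    # Drop any NIS domain suffix such as "joe@nisdomain1" before building
    # the address.
    short = short.split("@")[0]
    if default_domain:
        return f"{short}@{default_domain}"
    return short  # no default email domain configured

print(derive_alert_address("company/joe", "example.com"))  # joe@example.com
print(derive_alert_address("company/joe"))                 # joe
```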

What the mailmap file is

The mailmap file is a simple text file that contains a mapping between user names and the e-mail addresses of those users.
The file is imported into the DataFabric Manager server database by using the dfm mailmap import command. After the file has been imported, the information in the database is used to find the e-mail addresses of the users.
For more information about the dfm mailmap import command, see the DataFabric Manager server man pages.
Example of mailmap file
# Start mailmap
USER windows_domain\joe joe@company.com
USER jane@nisdomain1 jane@company.com
USER chris@nisdomain1 chris
# End mailmap

USER
A case-sensitive keyword that must appear at the beginning of each entry
user_name
The Windows or UNIX user name. If the name contains spaces, enclose the name in either
double or single quotes.
e-mail address
The e-mail address to which the quota alert is sent when the user crosses a user quota threshold.

Guidelines for editing the mailmap file

You should follow a set of guidelines for editing the mailmap file.
When specifying a user name for a UNIX user, you must specify the full NIS domain after the
user name. For example, for UNIX user joe in NIS domain nisdomain1, specify joe@nisdomain1
as the user name.
The specified NIS domain must match the one configured on the storage system. If no domain
information is configured on the storage system, you must also leave the domain in the mailmap
file empty.
Use one or more spaces or tabs to separate the fields in the file.
Use the # character at the beginning of a line that is a comment.
Blank lines in the file are ignored.
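The format and editing rules above can be sketched as a small parser. This is an illustration only, an assumed reading of the rules; the real importer is the dfm mailmap import command, and this sketch does not handle backslash-containing Windows names.

```python
# Illustrative parser for the mailmap format described above: USER entries,
# '#' comments, ignored blank lines, and quoted user names containing spaces.

import shlex

def parse_mailmap(text):
    """Map user names to e-mail addresses; comments and blank lines are ignored."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # shlex honors single/double quotes around user names with spaces.
        fields = shlex.split(line)
        if len(fields) == 3 and fields[0] == "USER":
            mapping[fields[1]] = fields[2]
    return mapping

sample = """
# Start mailmap
USER "jane doe@nisdomain1" jane@example.com
USER joe@nisdomain1 joe@example.com
# End mailmap
"""
print(parse_mailmap(sample))
```

Quoted names with spaces, the comment markers, and the blank line are all handled per the guidelines.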

Introduction to DataFabric Manager server reports

The DataFabric Manager server provides standard reports that you can view from the command-line
interface (CLI) or Operations Manager.
You can run reports and create custom reports from the CLI. The DataFabric Manager server also
provides reports in Operations Manager interface, which enables you to perform the following
operations:
View a report.
Save a report in Excel format.
Print a report.
Create a report.
Delete a custom report.
You cannot delete a standard report.
Use a custom report as a template to create another custom report.
By using DataFabric Manager server 3.6 or later, you can search all the reports from the Reports menu > All. All the reports are divided into the following categories:
Recently Viewed
Favorites
Custom Reports
Logical Objects
Physical Objects
Monitoring
Performance (7-Mode environments only)
Backup
Disaster Recovery
Data Protection Transfer
Miscellaneous
For more information about these categories, see Operations Manager Help.
Note: (7-Mode environments only) The report category Performance contains the performance
characteristics of objects. You can view the complete reports under their respective report
categories.

Introduction to report options


The DataFabric Manager server provides standard reports that you can view from the CLI or
Operations Manager.
The dfm report command specifies the report catalog object that you can modify to create the
custom report. The custom report object has the following attributes that you can set:
A short name (for CLI output)
A long name (for GUI output)
Field description
The fields to display
The report catalog it was created from
The custom report also has methods that let you create, delete, and view your custom reports. You
can configure the report options in Operations Manager with respect to Name, Display tab, Catalogs,
and Fields.

Introduction to report catalogs


The DataFabric Manager server provides report catalogs that you use to customize reports. You can
set basic report properties from the CLI or Operations Manager.


Following are the basic report properties:
A short report name (for CLI output)
Long report name (for Operations Manager output)
Field description
The fields to display
The report catalog it was created from
Every report that is generated by the DataFabric Manager server, including those you customize, is
based on the catalogs.
For more information about how to use the CLI to configure and run reports, use the dfm report
help command. The command describes how to list a report catalog and its fields, as well as the
command organization and syntax.

Custom reports you can create in Operations Manager


You can use Operations Manager to create different types of reports, including reports on aggregates,
array LUNs, datasets, controllers, clusters, events, ports, quotas, and backup, mirror, and
other schedules.
Aggregates
The aggregate report displays information about the space utilization, capacity,
performance characteristics (7-Mode environments only), and availability of the volumes
on your aggregates. By default, you can view aggregate reports from Control Center >
Home > Member Details > Aggregates > Report.
Array LUNs
The array LUNs report displays information about the LUNs residing on third-party
storage arrays that are attached to a V-Series system, including information such as the
model, vendor, serial number of the LUN, and size. By default, you can view array LUNs
reports from Control Center > Home > Member Details > Physical Systems > Report.
Aggregate Array LUNs
The aggregate array LUNs report displays information about array LUNs contained on the
aggregates of a V-Series system, including information such as the model, vendor, serial
number of the LUN, and size. By default, you can view aggregate array LUNs reports
from Control Center > Home > Member Details > Aggregates > Report.
Backup
(7-Mode environments only) The backup report displays information about the data
transfer during a backup.
Clusters
(clustered environments only) The clusters report displays information about the clusters,
such as the status, the serial number of the cluster, and the number of Storage Virtual
Machines (SVMs) and controllers associated with the cluster. You can view the reports from
Control Center > Home > Member Details > Physical Systems > Report.
Controllers
The controllers report displays information about the cluster to which the controller
belongs, including the model and serial number of the controller, and the system ID of the
cluster. You can view the report from Control Center > Home > Member Details >
Physical Systems > Report.
Dataset
(7-Mode environments only) The dataset report displays information about the resource,
protection, and the conformance status of the dataset. This report also displays
information about the policy with which the dataset is associated.
Disks
The disks report displays information about the disks in your storage system, such as the
model, vendor, and size. You can view the performance characteristics (7-Mode
environments only) and sort these reports by broken or spare disks, as well as by size. By
default, you can view disks reports along with the controller reports in the Member Details
tab.
Events
The events report displays information about the event severity, user quotas, and SNMP
traps. The information about all the events, including deleted, unacknowledged, and
acknowledged events, in the DataFabric Manager server database is available in these
reports. By default, you can view event reports in the Group Status tab.
FC Link
The FC link report displays information about the logical and physical links of your FC
switches and fabric interfaces. By default, you can view FC link reports along with the
SAN reports in the Member Details tab.
FC Switch
The FC switch report displays FC switches that are deleted, that include user comments,
or are not operating. By default, you can view FC switch reports along with the SAN
reports in the Member Details tab.
FCP Target
The FCP target report displays information about the status, port state, and topology of the
target. The FCP target also reports the name of the FC switch, the port to which the target
connects and the HBA ports that the target can access. By default, you can view FCP
target reports in the Control Center > Home > Member Details > LUNs tab.
File System
The file system report displays information about all the file systems, and you can filter
them into reports by volumes, Snapshot copies, space reservations, qtrees, and chargeback
information. By default, you can view file system reports in the Control Center > Home
> Member Details > File systems tab.
Group Summary
The group summary report displays the status, and the used and available storage space
for your groups. The group summary report includes storage chargeback reports that are
grouped by usage and allocation. By default, you can view group reports in the Group
Status tab.
Host Users

(7-Mode environments only) The host users report shows you information about the
existing users on the host. By default, you can view host users reports from Management
> Host Users > Report.
Host Local Users
(7-Mode environments only) The host local users report displays information about the
existing local users on the host. By default, you can view the host local users reports from
Management > Host Local Users > Report.
Host Domain Users
(7-Mode environments only) The host domain users report displays information about the
existing domain users on the host. By default, you can view the host domain users reports
from Management > Host Domain Users > Report.
Host Usergroups
(7-Mode environments only) The host usergroups report displays information about the
existing user groups on the host. By default, you can view the host usergroups reports
from Management > Host Usergroups > Report.
Host Roles
(7-Mode environments only) The host roles report displays information about the existing
roles on the host. By default, you can view the host roles reports from Management >
Host Roles > Report.
Interface Groups
The interface groups report displays information about all the defined cluster interface
groups, ports, controllers, and active state of the group. You can view the interface groups
report from Control Center > Home > Member Details > Physical Systems > Report.
Logical Interfaces
The logical interfaces report displays information about the server, the current port that the
logical interface uses, the status of the logical interface, the network address and mask, the
type of role, and whether the interface is at the home port. You can view the logical
interfaces report from Control Center > Home > Member Details > Physical Systems >
Report.
LUN
The LUN report displays information and statistics about the LUNs and LUN initiator
groups on the storage system, along with the performance characteristics (7-Mode
environments only). By default, you can view LUN reports in the Member Details tab.
Performance Events History
The performance events history report displays all the Performance Advisor events. By
default, you can view the performance events history reports from Group Status > Events
> Report.
Mirror
The mirror report displays information about data transfer in a mirrored relationship.
Performance Events
The performance events report displays all the current Performance Advisor events. By

default, you can view the performance events reports from Control Center > Home >
Group Status > Events > Report.
Ports
The ports report displays information about the controllers that are connected, the type of
role of the port, the status of the port, and sizes of data moving in and out of the port. You
can view the ports report from Control Center > Home > Member Details > Physical
Systems > Report.
Quotas
The quota report displays information about user quotas that you can use for chargeback
reports. By default, you can view quota reports along with the group summary reports in
the Group Status tab.
Report Outputs
The report outputs report displays information about the report outputs that are generated
by the report schedules. By default, you can view report outputs reports from Reports >
Schedule > Saved Reports.
Report Schedules
The report schedules report displays information about the existing report schedules. By
default, you can view report schedules reports from Reports > Schedule > Report
Schedules.
A report schedule is an association between a schedule and a report so that the report is
generated at that particular time.
Resource Pools
The resource pools report displays information about the storage capacity that is available
and the capacity that is used by all the aggregates in the resource pool. This report also
displays the time zone and the status of the resource pool.
SAN Host
(7-Mode environments only) The SAN host report displays information about SAN hosts,
including FCP traffic and LUN information, and the type and status of the SAN host. By
default, you can view SAN host reports in the Member Details tab.
Schedules
The schedules report displays information about the existing schedules and the names of
the report schedules that are using a particular schedule. By default, you can view
schedules reports from Reports > Schedule > Schedules. The Schedules tab displays all
the schedules. Schedules are separate entities that can be associated with reports.
Scripts
The scripts report displays information about the script jobs and script schedules. By
default, you can view the scripts reports in the Member Details tab.
Spare Array LUNs
The spare array LUNs report displays information about spare array LUNs of a V-Series
system, such as the model, vendor, serial number of the LUN, and size. By default, you
can view the spare array LUNs reports from Control Center > Home > Member Details
> Physical Systems > Report.

Storage Systems
The storage systems report displays information about the capacity and operations of your
storage systems, performance characteristics (7-Mode environments only), and the
releases and protocols running on them. By default, you can view storage systems reports
in the Control Center > Home > Member Details > Physical Systems tab.
Storage Virtual Machines
(clustered environments only) The Storage Virtual Machines report displays information
about the associated cluster, the root volume on which the SVM resides, name of the
service switch, NIS domain, and the status of the SVM. You can view this report from
Control Center > Home > Member Details > Virtual Systems > Report.
User Quotas
The User Quotas report displays information about the disk space usage and user quota
thresholds collected from the monitored storage systems.
vFiler
(7-Mode environments only) The vFiler report displays the status, available protocols,
storage usage, and performance characteristics (7-Mode environments only) of vFiler
units that you are monitoring with the DataFabric Manager server. By default, you can
view the vFiler reports in the Member Details tab.
Volume
The Volume report displays all the volumes with the following details, for the current
month or for the past month:
Name
Capacity
Available space
Snapshot capacity
Growth rates
Expandability
Chargeback by usage or allocation
Performance characteristics (7-Mode environments only)
By default, you can view the volume reports along with the file system reports in the
Member Details tab.
Note: The FC link and FC switch reports are available only when the SAN license for the

DataFabric Manager server is installed. NetApp has announced the end of availability for the SAN
license. To facilitate this transition, existing customers can continue to license the SAN option with
the DataFabric Manager server.

What performance reports do (7-Mode environments only)


Performance reports in Operations Manager provide information about the performance
characteristics of an object.
You can perform the following operations with performance reports:
You can view the performance characteristics of an object over the last one day, one week,
one month, three months, or one year.
You can view the performance counters related to various fields in catalogs.
You can use data consolidation, which is a statistical method to analyze data.

Data consolidation is available only if you select the Performance option. You can view the
average, minimum, maximum, or median value for the performance metrics over a period. This
field is set to Average by default.
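The four consolidation methods named above can be illustrated with a short sketch. This is not DataFabric Manager code; the sample latency values are invented for demonstration.

```python
# Illustrative sketch of the four data consolidation methods (average,
# minimum, maximum, median) applied to a series of performance samples.
import statistics

def consolidate(samples, method="average"):
    """Reduce a list of counter samples to a single consolidated value."""
    funcs = {
        "average": statistics.mean,   # the default method
        "minimum": min,
        "maximum": max,
        "median": statistics.median,
    }
    return funcs[method](samples)

latency_ms = [4.0, 6.0, 5.0, 9.0]   # hypothetical samples over a period
print(consolidate(latency_ms))             # average: 6.0
print(consolidate(latency_ms, "median"))   # median: 5.5
```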

Configuring custom reports


You can configure custom reports in Operations Manager to suit your needs. For example, you can
configure a custom report with a relevant name, add a comment to help you remember the purpose
of the report, and select the relevant fields for your report.
Steps

1. Select Custom from the Reports menu.


2. Enter a (short) name for the report, depending on how you want it displayed in the command-line
interface (CLI).
3. Enter a (long) name for the report, depending on how you want it displayed in Operations
Manager.
4. Add comments to the report description.
5. Select the catalog from which the available report fields are based.
6. Select where you want the DataFabric Manager server to display this report in Operations
Manager.
7. Select the related catalog from which you want to choose fields.
You might have to expand the list to display the catalog you want to select.
8. In the Choose From Available Fields section, choose the fields you want to view:
To view fields related to the usage and configuration metrics of the object, click Usage.
(7-Mode environments only) To view fields related to performance metrics of the object, click
Performance.
9. Select a field from Choose From Available Fields.
10. Enter a name for the field, depending on how you want it displayed on the report.
Keep the field name short and as clear as possible. You should be able to look at a field name
in the reports and determine which field the information relates to.
11. Specify the format of the field.
If you do not format the field, the default format is used.
12. Click Add to move the field to the Reported Fields list.
13. Repeat Steps 8 to 12 for each field that you want to include in the report.
14. Click Move Up or Move Down to reorder the fields.
15. (7-Mode environments only) If you selected Performance, select the required data consolidation
method from the list.
16. Click Create.
17. To view this report, locate this report in the list at the lower part of the page and click the Display
tab name.
18. Locate the report from the Report drop-down list.

What scheduling report generation is


Operations Manager allows you to schedule the generation of reports.
The report can include the following statistics:
Volume capacity used
CPU usage
Storage system capacity
Storage system up time
What Report Schedules reports are
The Report Schedules report shows you information about the existing report schedules. A report

schedule is an association between a report and a schedule for the report to be generated at a
particular time.
By default, Report Schedules reports display in Reports > Schedule > Report Schedules.
Scheduling a report using the All submenu
You can schedule a report using the All submenu from the Reports menu in Operations Manager.
Steps

1. From any page, click Reports > All to display the Report Categories page.
By default, the Recently Viewed category appears.
2. Select a report of your choice.
3. Click Show to display the selected report.
4. Click the Schedule This Report icon in the upper-right corner of the page.
5. In the Reports - Add a Schedule page, specify the report schedule parameters.
For details about the report schedule parameters, see the Operations Manager Help.
6. Click Add.
Scheduling a report using the Schedule submenu
You can schedule a report using the Schedule submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Add New Report Schedule.
3. In the Reports - Add a Schedule page, specify the report schedule parameters.
For details about the report schedule parameters, see the Operations Manager Help.
4. Click Add.

Methods to schedule a report


You can schedule a report from the Reports menu by using either of two methods:
Using the Schedule submenu from the Reports menu
Using the All submenu from the Reports menu
Editing a report schedule
You can edit a report schedule using the Schedule submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click the report schedule that you want to edit.
Alternatively, click Saved Reports to list all the report outputs, and then click the Report
Schedules entry that you want to edit.
3. In the Reports - Edit a Schedule page, edit the report schedule parameters.
4. Click Update.
Running a report schedule
You can run a report schedule using the Schedule submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Select the report schedule that you want to run.
3. Click Run Selected.
Listing all the run results of a report schedule
You can list all the run results of a report schedule using the Schedule submenu from the Reports

menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Last Result Value of a report schedule to display the run result for that particular report
schedule.

What Saved reports are


The Saved reports display information about report outputs such as Status, Run Time, and the
corresponding report schedule, which generated the report output.
By default, Saved reports display in Reports > Schedule > Saved Reports. The Saved Reports tab
displays the list of all the report outputs that are generated by the report schedules.
Listing the report outputs
You can list the report outputs that are generated by all the report schedules using the Schedule
submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports to display the list of report outputs.
Listing the successful report outputs
You can list the successful report outputs generated by all the report schedules using the Schedule
submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Successful entry from the Report drop-down list.
Listing the failed report outputs
You can list the failed report outputs that are generated by all the report schedules using the Schedule
submenu from the Reports menu.
Steps

1. From any page, click Reports > Schedule to display all the report schedules.
2. Click Saved Reports.
3. Select the Report Outputs, Failed entry from the Report drop-down list.

Data export in the DataFabric Manager server


By using third-party tools, you can create customized reports from the data you export from the
DataFabric Manager server and Performance Advisor.
Operations Manager reports are detailed reports about storage system configuration and utilization.
You can create customized reports to perform the following tasks:
Forecast future capacity and bandwidth utilization requirements
Present capacity and bandwidth utilization statistics
Generate performance graphs
Present monthly service level agreement (SLA) reports
Data export provides the following benefits:
Saves effort in collecting up-to-date report data from different sources
Provides database access to the historical data that is collected by the DataFabric Manager server
Provides database access to the information that is provided by the custom report catalogs in the
DataFabric Manager server
Provides and validates the following interfaces to the exposed DataFabric Manager server views:

Open Database Connectivity (ODBC)


Java Database Connectivity (JDBC)
Enables you to export the Performance Advisor and DataFabric Manager server data to text files,
which eases the loading of data to a user-specific database
Allows you to schedule the export
Allows you to specify the list of the counters to be exported
Allows you to consolidate the sample values of the data export
Allows you to customize the rate at which the performance counter data is exported (7-Mode
environments only)

User quotas
You can use user quotas to limit the amount of disk space or the number of files that a user can use.
Quotas provide a way to restrict or track the disk space and the number of files used by a user,
group, or qtree. Quotas are applied to a specific volume or qtree.

Why you use quotas


You can use quotas to limit resource usage, to provide notification when resource usage reaches
specific levels, or to track resource usage.
You specify a quota for the following reasons:
To limit the amount of disk space or the number of files that can be used by a user or group, or
that can be contained by a qtree
To track the amount of disk space or the number of files used by a user, group, or qtree, without
imposing a limit
To warn users when their disk usage or file usage is high

Overview of the quota process


Quotas can be soft or hard. Soft quotas cause Data ONTAP to send a notification when specified
thresholds are exceeded, and hard quotas prevent a write operation from succeeding when specified
thresholds are exceeded.
When Data ONTAP receives a request to write to a volume, it checks to see whether quotas are
activated for that volume. If so, Data ONTAP determines whether any quota for that volume (and, if
the write is to a qtree, for that qtree) would be exceeded by performing the write operation. If any
hard quota is exceeded, the write operation fails, and a quota notification is sent. If any soft quota is
exceeded, the write operation succeeds, and a quota notification is sent.
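The decision described above can be sketched in a few lines. This is a minimal model of the documented behavior, not Data ONTAP internals; the function and its parameters are hypothetical.

```python
# Minimal sketch of the quota check: a hard quota blocks the write and
# sends a notification; a soft quota lets the write succeed but still
# sends a notification.

def check_write(used, write_size, hard_limit=None, soft_limit=None):
    """Return (allowed, notifications) for a proposed write operation."""
    notifications = []
    new_used = used + write_size
    if hard_limit is not None and new_used > hard_limit:
        notifications.append("hard quota exceeded")
        return False, notifications   # write operation fails
    if soft_limit is not None and new_used > soft_limit:
        notifications.append("soft quota exceeded")
    return True, notifications        # write operation succeeds

print(check_write(used=90, write_size=20, hard_limit=100))  # blocked
print(check_write(used=90, write_size=5, soft_limit=80))    # allowed, notified
```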

User quota management using Operations Manager


You can view user quota summary reports, chargeback reports, user details, quota events, and so on.
You can perform the following user quota management tasks by using Operations Manager:
View summary reports (across all storage systems) and per-quota reports with data, about files
and disk space that is used by users, hard and soft quota limits, and the projected time when users
exceed their quota limit
View graphs of total growth and per-quota growth of each user

View details about a user


Obtain chargeback reports for users
Edit user quotas; when you edit user quotas through Operations Manager, the /etc/quotas file
is updated on the storage system on which the quota is located.
Configure and edit user quota thresholds for individual users, volumes, and qtrees. When you
configure a user quota threshold for a volume or qtree, the settings apply to all user quotas on that
volume or qtree.
Notify users when they exceed the user quota thresholds configured in the DataFabric Manager
server
Configure the monitoring interval for user quota monitoring
View and respond to user quota events
Configure alarms to notify administrators of the user quota events

Prerequisites for managing user quotas using Operations Manager


To monitor and manage user quotas by using Operations Manager, you must ensure that your storage
system meets certain prerequisites.
The storage systems on which you want to monitor the user quotas must have Data ONTAP 6.3 or
later installed.
The storage systems on which you want to manage user quotas must have Data ONTAP 6.4 or
later installed.
Following are the prerequisites to monitor and edit user quotas assigned to vFiler units:
The hosting storage system must be running Data ONTAP 6.5.1 or later.
You should enable RSH or SSH access to the storage system and configure login and
password credentials that are used to authenticate the DataFabric Manager server.
You must use the DataFabric Manager server to configure the root login name and root password
of a storage system on which you want to monitor and manage user quotas.
You must configure and enable quotas for each volume for which you want to view the user
quotas.
You must log in to Operations Manager as an administrator with quota privilege to view user
quota reports and events so that you can configure user quotas for volumes and qtrees.
Directives, such as QUOTA_TARGET_DOMAIN and QUOTA_PERFORM_USER_MAPPING,
must not be present in the /etc/quotas file on the storage system.
The /etc/quotas file on the storage system must not contain any errors.

Where to find user quota reports in Operations Manager


You can view user quota reports in Operations Manager at Control Center > Home > Quotas >
Report.

Monitor interval for user quotas in Operations Manager


You can use Operations Manager to view the monitoring interval at which the DataFabric Manager
server is monitoring a user quota on a storage system.
The User Quota Monitoring Interval option on the Options page (Setup > Options >
Monitoring) determines how frequently the DataFabric Manager server collects the user quota
information from the monitored storage systems. By default, the user quota information is collected
once every day; however, you can change this monitoring interval.
Note: The process of collecting the user quota information from storage systems is resource

intensive. When you decrease the User Quota Monitoring Interval option to a low value, the
DataFabric Manager server collects the information more frequently. However, decreasing the

User Quota Monitoring Interval might negatively affect the performance of the storage systems
and the DataFabric Manager server.

Modification of user quotas in Operations Manager


You can edit disk space threshold, disk space hard limit, disk space soft limit, and so on for a user
quota in Operations Manager.
When you edit the options for a user quota, the /etc/quotas file on the storage system where the
quota exists is updated accordingly. You can edit the following settings:
Disk space threshold
Disk space hard limit
Disk space soft limit
Files hard limit
Files soft limit

Prerequisites to edit user quotas in Operations Manager


If you want to edit user quota in Operations Manager, ensure that your storage system meets the
prerequisites.
You must configure the root login name and root password in the DataFabric Manager server for
the storage system on which you want to monitor and manage user quotas.
You must configure and enable quotas for each volume for which you want to view the user
quotas.
Operations Manager performs vFiler quota editing by using jobs. If a vFiler quota editing job
fails, verify the quota file on the hosting storage system.
In addition, to protect the quota file against damage or loss, the DataFabric Manager server
creates a backup file named DFM(timestamp).bak before starting a job. If the job fails, you can
recover data by renaming the backup quota file.

Editing user quotas using Operations Manager


You can edit user quotas by using the Edit Quota Settings page in Operations Manager to increase or
decrease the limits associated with them.
Before you begin

You must have ensured that the storage system meets the prerequisites.
Steps

1. Click Control Center > Home > Group Status > Quotas > Report > User Quotas, All.
2. Click any quota-related fields for the required quota and modify the values.

Configuring user settings using Operations Manager


You can configure user settings, such as the email address of users and quota alerts, and set user
quota threshold using Operations Manager.
Steps

1. Click Control Center > Home > Group Status > Quotas > Report > User Quotas, All.
2. Click the Edit Settings link in the lower left corner.
3. Edit settings such as email address of the user, quota alerts and user quota alerts for full threshold
and nearly full threshold, resource tag, and so on.
You can leave the email address field blank if you want the DataFabric Manager server to use the
default email address of the user.
4. Click Update.

What user quota thresholds are


User quota thresholds are the values that the DataFabric Manager server uses to evaluate whether
the space consumption by a user is nearing, or has reached, the limit set by the user's quota.
If these thresholds are crossed, the DataFabric Manager server generates user quota events.
By default, the DataFabric Manager server sends user alerts in the form of e-mail messages to the
users who cause user quota events. Additionally, you can configure alarms that notify the specified
recipients (DataFabric Manager server administrators, a pager address, or an SNMP trap host) of user
quota events.
The DataFabric Manager server can also send a user alert when users exceed their soft quota limit;
however, no thresholds are defined in the DataFabric Manager server for the soft quotas. The
DataFabric Manager server uses the soft quota limits set in the /etc/quotas file of a storage system
to determine whether a user has crossed the soft quota.

What the DataFabric Manager server user thresholds are


The DataFabric Manager server user quota thresholds are a percentage of the Data ONTAP hard
limits (files and disk space) configured in the /etc/quotas file of a storage system.
The user quota threshold helps the user stay within the hard limit for the user quota. Therefore, the
user quota thresholds are crossed before users exceed their hard limits for user quotas.
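The arithmetic behind a percentage-based threshold can be sketched as follows. The 90% default and the megabyte values are invented for illustration; they are not product defaults.

```python
# Hedged sketch: a user quota threshold expressed as a percentage of the
# Data ONTAP hard limit, so the threshold event fires before the hard
# limit itself is reached. All numbers here are illustrative.

def threshold_crossed(used_mb, hard_limit_mb, threshold_pct=90):
    """True when usage crosses the threshold percentage of the hard limit."""
    return used_mb >= hard_limit_mb * threshold_pct / 100

# With a 1000-MB hard limit and a 90% threshold, the event fires at 900 MB,
# well before the 1000-MB hard limit blocks writes.
print(threshold_crossed(used_mb=920, hard_limit_mb=1000))  # True
print(threshold_crossed(used_mb=850, hard_limit_mb=1000))  # False
```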

User quota thresholds


You can set a user quota threshold to all the user quotas present in a volume or a qtree.
When you configure a user quota threshold for a volume or qtree, the settings apply to all user quotas
on that volume or qtree.
The DataFabric Manager server uses the user quota thresholds to monitor the hard and soft quota
limits configured in the /etc/quotas file of each storage system.

Ways to configure user quota thresholds in Operations Manager


You can configure user quota thresholds by applying the thresholds to all quotas of a specific user or
on a specific file system or on a group of file systems using Operations Manager.
Apply user quota thresholds to all quotas of a specific user
Apply user quota thresholds to all quotas on a specific file system (volume or qtree) or a group of
file systems
You can apply thresholds by using the Edit Quota Settings links on the lower-left pane of the
Details page for a specific volume or qtree. You can access the Volume Details page by clicking
a volume name at Control Center > Home > Member Details > File Systems > Report >
Volumes, All. Similarly, you can access the Qtree Details page by clicking the qtree name at
Control Center > Home > Member Details > File Systems > Report > Qtrees, All.
To apply settings to a group of file systems, select the group name from the Apply Settings To list
on the quota settings page.
Apply user quota thresholds to all quotas on all users on all file systems: that is, all user quotas in
the DataFabric Manager server database
You can apply thresholds at Setup > Options > Edit Options: Default Thresholds.

Precedence of user quota thresholds in the DataFabric Manager server


The DataFabric Manager server prioritizes user quota thresholds based on a specific user, a specific
volume or qtree, and all users in the DataFabric Manager server.
The following list specifies the order in which user quota thresholds are applied:
1. User quota thresholds specified for a specific user
2. User quota thresholds specified for a specific file system (volume or qtree)
3. Global user quota thresholds specified for all users in the DataFabric Manager server database
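The precedence list above amounts to a first-match lookup. A minimal sketch, assuming hypothetical function and data-structure names (this is not the server's actual implementation):

```python
# Hypothetical sketch of the precedence rules above; the function name and
# data structures are illustrative, not the server's actual implementation.

def effective_threshold(user: str, file_system: str,
                        user_thresholds: dict, fs_thresholds: dict,
                        global_threshold: float) -> float:
    """Return the user quota threshold (in percent) that applies; first match wins."""
    if user in user_thresholds:          # 1. threshold for this specific user
        return user_thresholds[user]
    if file_system in fs_thresholds:     # 2. threshold for this volume or qtree
        return fs_thresholds[file_system]
    return global_threshold              # 3. global default for all users

# jdoe has a per-user setting, so it overrides the volume and global values.
print(effective_threshold("jdoe", "/vol/vol1",
                          user_thresholds={"jdoe": 95},
                          fs_thresholds={"/vol/vol1": 85},
                          global_threshold=90))
# 95
```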

Management of LUNs, Windows and UNIX hosts, and FCP targets

You can use Operations Manager to monitor and manage LUNs, Windows and UNIX hosts, and FCP
targets in your SANs. SANs on the DataFabric Manager server are storage networks that are installed
in compliance with the specified SAN setup guidelines.
Note: NetApp has announced the end of availability for the SAN license for the DataFabric
Manager server. Existing customers can continue to license the SAN option with the DataFabric
Manager server. DataFabric Manager server customers should check with their NetApp sales
representative about other NetApp SAN management solutions.

Management of SAN components


To monitor and manage LUNs, FCP targets, and SAN hosts, the DataFabric Manager server must
first discover them.
The DataFabric Manager server uses SNMP to discover storage systems, but SAN hosts must already
have the NetApp Host Agent software installed and configured on them before the DataFabric
Manager server can discover them.
After SAN components have been discovered, the DataFabric Manager server starts collecting
pertinent data, for example, which LUNs exist on which storage systems. Data is collected
periodically and reported through various Operations Manager reports. (The frequency of data
collection depends on the values that are assigned to the DataFabric Manager server monitoring
intervals.)
The DataFabric Manager server monitors LUNs, FCP targets, and SAN hosts for a number of
predefined conditions and thresholds: for example, when the state of an HBA port changes to online
or offline, or when the traffic on an HBA port exceeds a specified threshold. If a predefined condition
is met or a threshold is exceeded, the DataFabric Manager server generates and logs an event in its
database. These events can be viewed through the Details page of the affected object. Additionally,
you can configure the DataFabric Manager server to send notification about such events (also known
as alarms) to an email address, a pager, an SNMP trap host, or a script you write.
In addition to monitoring LUNs, FCP targets, and SAN hosts, you can use the DataFabric Manager
server to manage these components. For example, you can create, delete, or expand a LUN.
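A hedged sketch of the kind of per-object check this monitoring describes: compare an object's state and counters against its predefined conditions and thresholds, and return the events to log. The function name and event strings are invented for illustration, not the server's actual event model:

```python
# Hypothetical illustration of condition/threshold monitoring; the names
# and event strings are invented, not DataFabric Manager's actual events.

def check_hba_port(previous_state: str, current_state: str,
                   traffic_mbps: float, traffic_threshold_mbps: float):
    """Return the events one monitoring pass would log for a single HBA port."""
    events = []
    if current_state != previous_state:          # state changed (online/offline)
        events.append(f"hba-port.state.{current_state}")
    if traffic_mbps > traffic_threshold_mbps:    # traffic threshold exceeded
        events.append("hba-port.traffic.exceeded")
    return events

print(check_hba_port("online", "offline", 120.0, 100.0))
# ['hba-port.state.offline', 'hba-port.traffic.exceeded']
```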

SAN and NetApp Host Agent software (7-Mode environments only)

The DataFabric Manager server can automatically discover SAN hosts; however, it does not use
SNMP to poll for new hosts.

NetApp Host Agent software discovers, monitors, and manages SANs on SAN hosts. You must
install the NetApp Host Agent software on each SAN host that you want to monitor and manage with
the DataFabric Manager server.
Note: To modify the global host agent monitoring interval for SAN hosts, you must change the
SAN Host Monitoring Interval (Setup > Options > Monitoring).

List of tasks you can perform using NetApp Host Agent software (7-Mode environments only)

After you install NetApp Host Agent software on a client host along with the DataFabric Manager
server, you can perform various management tasks, such as monitoring system information for SAN
hosts, and creating and managing LUNs.
You can perform the following tasks:
Monitor basic system information for SAN hosts and related devices.
Perform management functions, such as creating, modifying, or expanding a LUN.
View detailed HBA and LUN information.

List of tasks performed to monitor targets and initiators


You can use Operations Manager to perform management tasks such as view reports; monitor,
manage, and group LUNs; and respond to LUN and SAN host events.
Following is a list of tasks you can perform to monitor targets and initiators:
View reports that provide information about LUNs, FCP targets, and SAN hosts.
View details about a specific LUN, FCP target on a storage system, or SAN host.
Group LUNs, storage systems in a SAN, or SAN hosts for efficient monitoring and management.
Change the monitoring intervals for LUNs and SAN hosts.
View and respond to LUN and SAN host events.
Configure the DataFabric Manager server to generate alarms to notify recipients of LUN and
SAN host events.

Reports for monitoring LUNs, FCP targets, and SAN hosts

Reports about the LUNs, SAN hosts, and FCP targets that the DataFabric Manager server monitors
are available on the LUNs page of the Member Details tab.
You can view reports by selecting from the Report drop-down list. If you want to view a report
about a specific group, click the group name in the left pane of Operations Manager. You can view
the following reports from the LUNs page:
FCP Targets
SAN Hosts, Comments
SAN Hosts, All
SAN Hosts, FCP
SAN Hosts, iSCSI
SAN Hosts, Deleted
SAN Hosts Traffic, FCP
SAN Host Cluster Groups
SAN Host LUNs, All
SAN Host LUNs, iSCSI
SAN Host LUNs, FCP
LUNs, All
LUNs, Comments
LUNs, Deleted
LUNs, Unmapped
LUN Statistics
LUN Initiator Groups
Initiator Groups
