GPFS V3.2 Administration and Programming Reference

SA23-2221-01
Note: Before using this information and the product it supports, be sure to read the general information under Notices on page 407.
This edition applies to version 3 release 2 of IBM General Parallel File System Multiplatform (program number 5724-N94), IBM General Parallel File System for POWER (program number 5765-G66), and to all subsequent releases and modifications until otherwise indicated in new editions. Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change.

IBM welcomes your comments. A form for your comments may be provided at the back of this publication, or you may send your comments to this address:

  International Business Machines Corporation
  Department H6MA, Mail Station P181
  2455 South Road
  Poughkeepsie, NY 12601-5400
  United States of America

  FAX (United States and Canada): 1+845+432-9405
  FAX (Other Countries): Your International Access Code +1+845+432-9405
  IBMLink (United States customers only): IBMUSM10(MHVRCFS)
  Internet e-mail: [email protected]

If you would like a reply, be sure to include your name, address, telephone number, or FAX number.

Make sure to include the following in your comment or note:
v Title and order number of this book
v Page number or topic related to your comment

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

Copyright International Business Machines Corporation 1998, 2007. All rights reserved.
US Government Users Restricted Rights. Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Tables   ix

About this publication   xi
  Who should read this publication   xi
  Conventions used in this publication   xi
  Prerequisite and related information   xii
  ISO 9000   xii
  Using LookAt to look up message explanations   xii
  How to send your comments   xii
Summary of changes   xiii

Chapter 1. Performing GPFS administration tasks   1
  Requirements for administering a GPFS file system   1

Chapter 2. Managing your GPFS cluster   3
  Creating your GPFS cluster   3
  Displaying GPFS cluster configuration information   3
  Specifying nodes as input to GPFS commands   4
  Adding nodes to a GPFS cluster   5
  Deleting nodes from a GPFS cluster   5
  Changing the GPFS cluster configuration data   6
  Node quorum considerations   10
  Node quorum with tiebreaker considerations   10
  Displaying and changing the file system manager node   11
  Determining how long mmrestripefs takes to complete   12
  Starting and stopping GPFS   12
Chapter 3. Managing file systems   15
  Mounting a file system   15
  Mounting a file system on multiple nodes   15
  GPFS-specific mount options   15
  Unmounting a file system   16
  Unmounting a file system on multiple nodes   17
  Deleting a file system   17
  Determining which nodes have a file system mounted   17
  Checking and repairing a file system   18
  Listing file system attributes   19
  Modifying file system attributes   20
  Querying and changing file replication attributes   21
  Querying file replication   21
  Changing file replication attributes   21
  Using Direct I/O on a file in a GPFS file system   21
  Restriping a GPFS file system   22
  Querying file system space   23
  Querying and reducing file system fragmentation   24
  Querying file system fragmentation   24
  Reducing file system fragmentation   25
  Backing up a file system   26
  Required level of Tivoli Storage Manager   26
  Setting up TSM for use by GPFS   26
  Specifying TSM parameters with the mmbackup command   27
  Using APIs to develop backup applications   27
Chapter 4. Managing disks   29
  Displaying disks in a GPFS cluster   29
  Adding disks to a file system   30
  Deleting disks from a file system   31
  Replacing disks in a GPFS file system   32
  Additional considerations for managing disks   33
  Displaying GPFS disk states   33
  Availability   34
  Status   34
  Changing GPFS disk states and parameters   34
  Changing your NSD configuration   36
  Changing NSD server usage and failback   37
  Enabling and disabling Persistent Reserve   37

Chapter 5. Managing GPFS quotas   39
  Enabling and disabling GPFS quota management   39
  Default quotas   40
  Explicitly establishing and changing quotas   40
  Checking quotas   41
  Listing quotas   42
  Activating quota limit checking   43
  Deactivating quota limit checking   43
  Creating file system quota reports   44
  Restoring quota files   44

Chapter 6. Managing GPFS access control lists and NFS export   47
  Traditional GPFS ACL administration   47
  Setting traditional GPFS access control lists   48
  Displaying traditional GPFS access control lists   49
  Applying an existing traditional GPFS access control list   49
  Changing traditional GPFS access control lists   50
  Deleting traditional GPFS access control lists   50
  NFS V4 ACL administration   50
  NFS V4 ACL Syntax   51
  NFS V4 ACL translation   53
  Setting NFS V4 access control lists   53
  Displaying NFS V4 access control lists   54
  Applying an existing NFS V4 access control list   54
  Changing NFS V4 access control lists   54
  Deleting NFS V4 access control lists   54
  GPFS exceptions and limitations to NFS V4 ACLs   54
  NFS and GPFS   54
  Exporting a GPFS file system using NFS   55
  NFS usage of GPFS cache   57
  Synchronous writing using NFS   57
  Unmounting a file system after NFS export   57
  NIS automount considerations   57
  Clustered NFS and GPFS on Linux   58
Chapter 7. Communicating file access patterns to GPFS   59

Chapter 8. GPFS commands   61
  mmadddisk Command   64
  mmaddnode Command   68
  mmapplypolicy Command   71
  mmauth Command   76
  mmbackup Command   81
  mmchattr Command   84
  mmchcluster Command   87
  mmchconfig Command   90
  mmchdisk Command   97
  mmcheckquota Command   101
  mmchfileset Command   104
  mmchfs Command   106
  mmchmgr Command   110
  mmchnode Command   112
  mmchnsd Command   116
  mmchpolicy Command   119
  mmcrcluster Command   121
  mmcrfileset Command   125
  mmcrfs Command   127
  mmcrnsd Command   134
  mmcrsnapshot Command   139
  mmcrvsd Command   142
  mmdefedquota Command   147
  mmdefquotaoff Command   149
  mmdefquotaon Command   151
  mmdefragfs Command   154
  mmdelacl Command   157
  mmdeldisk Command   159
  mmdelfileset Command   162
  mmdelfs Command   165
  mmdelnode Command   167
  mmdelnsd Command   170
  mmdelsnapshot Command   172
  mmdf Command   174
  mmeditacl Command   177
  mmedquota Command   180
  mmexportfs Command   183
  mmfsck Command   185
  mmfsctl Command   190
  mmgetacl Command   194
  mmgetstate Command   197
  mmimportfs Command   200
  mmlinkfileset Command   203
  mmlsattr Command   205
  mmlscluster Command   207
  mmlsconfig Command   209
  mmlsdisk Command   211
  mmlsfileset Command   214
  mmlsfs Command   217
  mmlsmgr Command   221
  mmlsmount Command   223
  mmlsnsd Command   226
  mmlspolicy Command   229
  mmlsquota Command   231
  mmlssnapshot Command   234
  mmmount Command   236
  mmnsddiscover Command   238
  mmpmon Command   240
  mmputacl Command   245
  mmquotaoff Command   248
  mmquotaon Command   250
  mmremotecluster Command   252
  mmremotefs Command   255
  mmrepquota Command   258
  mmrestorefs Command   261
  mmrestripefile Command   264
  mmrestripefs Command   267
  mmrpldisk Command   271
  mmshutdown Command   275
  mmsnapdir Command   277
  mmstartup Command   280
  mmtracectl Command   282
  mmumount Command   285
  mmunlinkfileset Command   287
Chapter 9. GPFS programming interfaces   291
  gpfs_acl_t Structure   293
  gpfs_close_inodescan() Subroutine   294
  gpfs_cmp_fssnapid() Subroutine   295
  gpfs_direntx_t Structure   297
  gpfs_fcntl() Subroutine   298
  gpfs_fgetattrs() Subroutine   301
  gpfs_fputattrs() Subroutine   303
  gpfs_fputattrswithpathname() Subroutine   305
  gpfs_free_fssnaphandle() Subroutine   307
  gpfs_fssnap_handle_t Structure   308
  gpfs_fssnap_id_t Structure   309
  gpfs_fstat() Subroutine   310
  gpfs_get_fsname_from_fssnaphandle() Subroutine   312
  gpfs_get_fssnaphandle_by_fssnapid() Subroutine   313
  gpfs_get_fssnaphandle_by_name() Subroutine   315
  gpfs_get_fssnaphandle_by_path() Subroutine   317
  gpfs_get_fssnapid_from_fssnaphandle() Subroutine   319
  gpfs_get_pathname_from_fssnaphandle() Subroutine   321
  gpfs_get_snapdirname() Subroutine   323
  gpfs_get_snapname_from_fssnaphandle() Subroutine   325
  gpfs_getacl() Subroutine   327
  gpfs_iattr_t Structure   329
  gpfs_iclose() Subroutine   332
  gpfs_ifile_t Structure   334
  gpfs_igetattrs() Subroutine   335
  gpfs_igetfilesetname() Subroutine   337
  gpfs_igetstoragepool() Subroutine   339
  gpfs_iopen() Subroutine   341
  gpfs_iread() Subroutine   343
  gpfs_ireaddir() Subroutine   345
  gpfs_ireadlink() Subroutine   347
  gpfs_ireadx() Subroutine   349
  gpfs_iscan_t Structure   351
  gpfs_next_inode() Subroutine   352
  gpfs_opaque_acl_t Structure   354
  gpfs_open_inodescan() Subroutine   355
  gpfs_prealloc() Subroutine   358
  gpfs_putacl() Subroutine   361
  gpfs_quotactl() Subroutine   363
  gpfs_quotaInfo_t Structure   366
  gpfs_seek_inode() Subroutine   368
  gpfs_stat() Subroutine   370
  gpfsAccessRange_t Structure   372
  gpfsCancelHints_t Structure   373
  gpfsClearFileCache_t Structure   374
  gpfsDataShipMap_t Structure   375
  gpfsDataShipStart_t Structure   377
  gpfsDataShipStop_t Structure   380
  gpfsFcntlHeader_t Structure   382
  gpfsFreeRange_t Structure   383
  gpfsGetFilesetName_t Structure   384
  gpfsGetReplication_t Structure   385
  gpfsGetSnapshotName_t Structure   387
  gpfsGetStoragePool_t Structure   388
  gpfsMultipleAccessRange_t Structure   389
  gpfsRestripeData_t Structure   391
  gpfsSetReplication_t Structure   392
  gpfsSetStoragePool_t Structure   394

Chapter 10. GPFS user exits   397
  mmsdrbackup User exit   398
  nsddevices User exit   399
  syncfsconfig User exit   400
Chapter 11. Considerations for GPFS applications   401
  Exceptions to Open Group technical standards   401
  Determining if a file system is controlled by GPFS   401
  GPFS exceptions and limitations to NFS V4 ACLs   402
Chapter 12. File system format changes between versions of GPFS   403

Accessibility features for GPFS   405
  Accessibility features   405
  Keyboard navigation   405
  IBM and accessibility   405
Tables
1. Typographic conventions   xi
2. Configuration attributes on the mmchconfig command   8
3. Removal of a file with ACL entries DELETE and DELETE_CHILD   52
4. GPFS commands   61
5. The mmeditacl command for POSIX and NFS V4 ACLs   177
6. The mmgetacl command for POSIX and NFS V4 ACLs   194
7. The mmputacl command for POSIX and NFS V4 ACLs   245
8. GPFS programming interfaces   291
9. GPFS user exits   397
To find out which version of GPFS is running on a particular Linux node, enter:
rpm -qa | grep gpfs
Throughout this publication you will see various command and component names beginning with the prefix mm. This is not an error. GPFS shares many components with the related products IBM Multi-Media Server and IBM Video Charger.
Table 1. Typographic conventions (continued)

Typographic convention   Usage
<>          Angle brackets (less-than and greater-than) enclose the name of a key on the
            keyboard. For example, <Enter> refers to the key on your terminal or
            workstation that is labeled with the word Enter.
...         An ellipsis indicates that you can repeat the preceding item one or more times.
<Ctrl-x>    The notation <Ctrl-x> indicates a control character sequence. For example,
            <Ctrl-c> means that you hold down the control key while pressing <c>.
\           The continuation character is used in programming examples in this
            publication for formatting purposes.
ISO 9000
ISO 9000 registered quality systems were used in the development and manufacturing of this product.
Summary of changes
The following sections summarize changes to the GPFS licensed program and the GPFS library for version 3 release 2. Within each book in the library, a vertical line to the left of text and illustrations indicates technical changes or additions made to the previous edition of the book.

Summary of changes for GPFS Version 3 Release 2 as updated, October 2007

Changes to GPFS and to the GPFS library for version 3 release 2 include:

v New information:

  In the past, migrating to a new release of GPFS required shutting down GPFS and upgrading all nodes before GPFS could be restarted. GPFS V3.2 supports rolling upgrades and a limited form of backward compatibility:
  - Rolling upgrades enable you to install new GPFS code one node at a time without shutting down GPFS on other nodes. It is expected that all nodes will be upgraded within a short time. Some features become available on each node as soon as the node is upgraded, while other features become available as soon as all participating nodes are upgraded.
  - Backward compatibility allows running with a mixture of old and new nodes. Multi-cluster environments may be able to upgrade the local cluster while still allowing mounts from remote nodes in other clusters that have not yet been upgraded.

  You can designate up to eight NSD servers to simultaneously service I/O requests from different clients. Each of these NSD servers must have physical access to the same logical unit number (LUN). Different servers can serve I/O to different non-intersecting sets of clients for a variety of reasons, such as load balancing on the server, network partitioning by balancing the load on different networks, or workload partitioning. Multiple NSD server functions require all (peer) NSD servers to be part of the same GPFS cluster. The existing subnet functions in GPFS determine which NSD server should serve a particular client. The assumption is that nodes within a subnet are connected via high-speed networks.

  GPFS for Linux offers Clustered NFS (CNFS) to enable highly-available NFS exporting of file systems. Some or all nodes in an existing GPFS cluster also serve NFS and are part of the NFS cluster. All of the nodes in the NFS cluster export the same file systems to the NFS clients. This support includes the following:
  - Monitoring. Every node in the NFS cluster runs an NFS monitoring utility that monitors GPFS and the NFS and networking components on the node. After failure detection, and based on customer configuration, the monitoring utility may invoke a failover.
  - Failover. The automatic failover procedure transfers the NFS serving load from the failing node to another node in the NFS cluster. The failure is managed by the GPFS cluster, including lock and state recovery.
  - Load balancing. The IP address is the load unit that can be moved from one node to another because of failure or load balancing needs. This solution supports a failover of all of the node's load (all NFS IP addresses) as one unit to another node. However, if no locks are outstanding, individual IP addresses can be moved to other nodes for load balancing purposes.

  The GPFS InfiniBand Remote Direct Memory Access (RDMA) code uses RDMA for NSD client file I/O requests. RDMA transfers data directly between the NSD client memory and the NSD server memory instead of sending and receiving the data over the TCP socket. Using RDMA may improve bandwidth and decrease CPU utilization.

  GPFS can use the SCSI-3 (Persistent Reserve) standard to provide fast failover with improved recovery times. To exploit this functionality, the file system has to be created on SCSI-3 capable disks.
  To enable this function, you must set up PR when you create each NSD. The basic requirements for PR are that the NSD server must be an AIX node and that the disks must be regular AIX hdisks. To enable PR, use the new mmchconfig command option called usePersistentReserve.

  Monitoring is enabled with the SNMP-based management application, Net-SNMP. SNMP requires a Linux node to be installed in order to collect the data. Monitoring involves gaining a view of the GPFS system. Monitored information can be roughly grouped into the categories of configuration, status, and performance:
  - Configuration denotes the initially customized aspects of the system's current state.
  - Status information is dynamic information that expresses the current health of nodes, disks, and other hardware, including any reported error conditions, disk utilization, and fragmentation.
  - Performance information includes quantitative measurement of the workings of a system.

  Performance enhancements include support for parallel defragmentation of a file system, larger pagepool support, and directory-locking:
  - Defragmentation of a file system can now be run in parallel across nodes in a cluster.
  - Maximum pagepool support is increased to 256 GB.
  - Directory-locking improvements for concurrent file creates and deletes from multiple nodes (this function will be available in APAR IZ01431).

  When a socket connection breaks due to a network failure, GPFS now tries to re-establish the connection rather than immediately initiating node expulsion procedures.

  You can now mount up to 256 file systems.

  GPFS V3.2 extends Information Lifecycle Management (ILM) functionality to integrate with HSM products. A single set of policies can be used to move data across different storage pools of a file system and to move data from GPFS storage pools to near-line storage and from near-line storage to GPFS storage pools. Additional enhancements include the ability to use policies for backup and restore.
  - Subroutine gpfs_fputattrswithpathname for backup and restore functions has been added. This subroutine sets all the extended file attributes for a file and invokes the policy engine for restoring files.
  - Subroutines gpfs_fgetattrs and gpfs_fputattrs have been enhanced with new flags.

  GPFS V3.2 enables the policy code to run in parallel across all nodes in the home cluster that have the file system mounted. The policy evaluation can then scale with the size of the cluster.

  Administration enhancements, which include:

  New commands:
  - mmchnode, which changes node attributes
  - mmnsddiscover, which rediscovers paths to the specified network shared disks on the specified nodes
  - mmtracectl, which sets up and enables GPFS tracing

  Updated commands:
  - mmapplypolicy to add the -M, -N, and -S options
  - mmchconfig to add several new configuration parameters
  - mmchdisk to add the -F option
  - mmchfs to add full or compat to the -V option
  - mmchmgr to add the -c option
  - mmchnsd to state that you can specify up to 8 NSD servers in the disk descriptors
  - mmcrfs:
    v To remove the mandatory MountPoint positional parameter
    v To add the -L and -T options
    v To change the default for the -D option from posix to nfs4
    v To change the default for the -k option from posix to all
    v To add the --version Version option
    v To change the default values for the -R and -M options to 2
  - mmcrnsd to state that you can specify up to 8 NSD servers in the disk descriptors and to change PrimaryServer and BackupServer to serverList
  - mmdefragfs to add the -P and -N options and to remove the -v option
  - mmdf to remove the -q option
  - mmedquota to add Device:Fileset to the -j option
  - mmlscluster to add the --cnfs option, which displays cluster NFS information
  - mmlsfs to add the all_local and all_remote parameters
  - mmlsmgr to add the -c and -C parameters
  - mmmount to add the all_local and all_remote parameters

  Tracing is improved to increase the reliability of trace data gathering. It enhances the ability to acquire accurate and reliable problem determination information. With the new mmtracectl trace command, you can:
  - Turn trace on or off on the next session to automatically start trace when GPFS starts
  - Allow for predefined trace levels: io, all, def, and user-specified trace levels
  - Change the size of trace buffers
  - Allow a user-defined directory for keeping trace files
  - Control trace recycling during daemon termination (off, local, global, or globalOnShutdown)

v Changed information:
  The terminology "cluster configuration manager" or "configuration manager" has been changed to "cluster manager".

v Deleted information:
  The mmsanrepairfs command has been removed. All references to SANergy have been removed.
GPFS cluster information
========================
  GPFS cluster name:         cluster1.kgn.ibm.com
  GPFS cluster id:           680681562214606028
  GPFS UID domain:           cluster1.kgn.ibm.com
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    k164sn06.kgn.ibm.com
  Secondary server:  k164n04.kgn.ibm.com

 Node  Daemon node name     IP address     Admin node name       Designation
--------------------------------------------------------------------------------
   1   k164n04.kgn.ibm.com  198.117.68.68  k164n04.kgn.ibm.com   quorum
   2   k164n05.kgn.ibm.com  198.117.68.69  k164n05.kgn.ibm.com   quorum
   3   k164n06.kgn.ibm.com  198.117.68.70  k164sn06.kgn.ibm.com  quorum-manager
See the mmlscluster Command on page 207 for complete usage information.
clientnodes
    All nodes that do not participate in file system administration activities.
managernodes
    All nodes in the pool of nodes from which cluster managers, file system managers, and token managers are selected.
mount
    For commands involving a file system, all of the nodes in the GPFS cluster on which the file system is mounted.
nonquorumnodes
    All of the non-quorum nodes in the GPFS cluster.
nsdnodes
    All of the NSD server nodes in the GPFS cluster.
quorumnodes
    All of the quorum nodes in the GPFS cluster.
NodeFile
    A file that contains a list of nodes. A node file can contain individual nodes or node ranges.
Not every GPFS command supports all of the above node specification options. To learn which kinds of node specifications are supported by a particular GPFS command, see the relevant command description in Chapter 8, GPFS commands, on page 61.
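For illustration, here are hypothetical sketches of the different node specification styles, using the mmgetstate command (the node names and file path are assumptions, not from this book):

mmgetstate -N quorumnodes        (a node class)
mmgetstate -N k164n04,k164n05    (an explicit node list)
mmgetstate -N /tmp/nodefile      (a file containing a list of nodes)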
See the mmaddnode Command on page 68 and the mmlscluster Command on page 207 for complete usage information.
v A node being deleted cannot be the primary or secondary GPFS cluster configuration server unless you intend to delete the entire cluster. Verify this by issuing the mmlscluster command. If a node to be deleted is one of the servers and you intend to keep the cluster, issue the mmchcluster command to assign another node as a configuration server before deleting the node.
v A node that is being deleted cannot be designated as an NSD server for any disk in the GPFS cluster, unless you intend to delete the entire cluster. Verify this by issuing the mmlsnsd command. If a node that is to be deleted is an NSD server for one or more disks, move the disks to nodes that will remain in the cluster. Use the mmchnsd command to assign new NSD servers for those disks.
v GPFS must be shut down on the nodes being deleted. Use the mmshutdown command.

To delete the nodes listed in a file called nodes_to_delete, enter:
mmdelnode -N /tmp/nodes_to_delete
where nodes_to_delete contains the nodes k164n01 and k164n02. The system displays information similar to:
Verifying GPFS is stopped on all affected nodes ... mmdelnode: Command successfully completed mmdelnode: 6027-1371 Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
See the mmdelnode Command on page 167 and the mmlscluster Command on page 207 for complete usage information. Exercise caution when shutting down GPFS on quorum nodes or deleting quorum nodes from the GPFS cluster. If the number of remaining quorum nodes falls below the requirement for a quorum, you will be unable to perform file system operations. See the General Parallel File System: Concepts, Planning, and Installation Guide and search on quorum.
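As a hypothetical sketch of the preparations described above (node and disk names are assumptions): if node k164n01 were the secondary cluster configuration server and the NSD server for disk gpfs1nsd, you might issue commands similar to the following before deleting it. Note that mmchnsd may impose additional requirements, such as the affected file system being unmounted; see its command description.

mmchcluster -s k164n05
mmchnsd "gpfs1nsd:k164n06"
mmshutdown -N k164n01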
v Change the primary or secondary GPFS cluster configuration server nodes. The primary or secondary server may be changed to another node in the GPFS cluster. That node must be available for the command to be successful.
  Attention: If during the change to a new primary or secondary GPFS cluster configuration server, one or both of the old server nodes are down, it is imperative that you run the mmchcluster -p LATEST command as soon as the old servers are brought back online. Failure to do so may lead to disruption in GPFS operations.
v Synchronize the primary GPFS cluster configuration server node. If an invocation of the mmchcluster command fails, you will be prompted to reissue the command and specify LATEST on the -p option to synchronize all of the nodes in the GPFS cluster. Synchronization instructs all nodes in the GPFS cluster to use the most recently specified primary GPFS cluster configuration server.
v Change the remote shell and remote file copy programs to be used by the nodes in the cluster. These commands must adhere to the syntax forms of the rsh and rcp commands, but may implement an alternate authentication mechanism.

For example, to change the primary server for the GPFS cluster data, enter:
mmchcluster -p k164n06
Attention: The mmchcluster command, when issued with either the -p or -s option, is designed to operate in an environment where the current primary and secondary GPFS cluster configuration servers are not available. As a result, the command can run without obtaining its regular serialization locks. To assure smooth transition to a new cluster configuration server, no other GPFS commands (mm... commands) should be running when the command is issued nor should any other command be issued until the mmchcluster command has successfully completed. See the mmchcluster Command on page 87 and the mmlscluster Command on page 207 for complete usage information.
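For example, a cluster configured with rsh and rcp can be switched to ssh and scp (a sketch, assuming OpenSSH is installed at these paths on every node):

mmchcluster -r /usr/bin/ssh -R /usr/bin/scp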
You may be able to tune your cluster for better performance by reconfiguring one or more attributes. Before you change any attributes, consider how the changes will affect the operation of GPFS. For a detailed discussion see the General Parallel File System: Concepts, Planning, and Installation Guide and the mmcrcluster command.

Table 2 details the GPFS cluster configuration attributes which can be changed by issuing the mmchconfig command. Variations under which these changes take effect are noted:
1. Take effect immediately and are permanent (-i).
2. Take effect immediately but do not persist when GPFS is restarted (-I).
3. Require that the GPFS daemon be stopped on all nodes for the change to take effect.
4. May be applied to only a subset of the nodes in the cluster.
Table 2. Configuration attributes on the mmchconfig command

Attribute name                  Description
autoload                        Starting GPFS automatically
automountDir                    Name of the automount directory
cipherList                      When set, GPFS security using OpenSSL is enabled
cnfsMountdPort                  The port number to be used for rpc.mountd
cnfsNFSDprocs                   The number of nfsd kernel threads
cnfsSharedRoot                  Directory to be used by the clustered NFS subsystem
cnfsVIP                         Virtual DNS name for the list of CNFS IP addresses
dataStructureDump               Path for the storage of dumps
defaultMountDir                 Default parent directory for GPFS file systems
dmapiEventTimeout               DMAPI attribute
dmapiMountTimeout               DMAPI attribute
failureDetectionTime            Indicates the amount of time it will take to detect that a
                                node has failed (when persistent reserve (PR) is enabled).
                                When using this option, the GPFS daemon must be shut
                                down on all nodes.
maxblocksize                    Maximum file system block size allowed
maxFilesToCache                 Number of inodes to cache for recently used files
maxMBpS                         I/O throughput estimate
maxStatCache                    Number of inodes to keep in stat cache
nsdServerWaitTimeForMount       Number of seconds to wait for an NSD server to come up
nsdServerWaitTimeWindowOnMount  Time window to determine if quorum is to be considered
                                recently formed
pagepool                        Size of buffer cache on each node
prefetchThreads                 Maximum number of threads dedicated to prefetching data
subnets                         List of subnets to be used for most efficient
                                daemon-to-daemon communication
tiebreakerDisks                 List of tiebreaker disks (NSDs)
uidDomain                       The UID domain name for the cluster
usePersistentReserve            Enables or disables persistent reserve (PR) on the disks
verbsPorts                      Specifies InfiniBand device names and port numbers
verbsRdma                       Enables or disables InfiniBand RDMA using the Verbs API
worker1Threads                  Maximum number of concurrent file operations

Whether each attribute accepts the -i or -I option, whether GPFS must be stopped on all nodes, whether a list of node names is allowed, and when the change takes effect (immediately, on restart of the daemon, or for new file systems) vary by attribute; see the mmchconfig Command on page 90 for the per-attribute values.
Specify the nodes you want to target for change and the attributes with their new values on the mmchconfig command. For example, to change the pagepool value for each node in the GPFS cluster immediately, enter:
mmchconfig pagepool=100M -i
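Because some attributes can be applied to a subset of the nodes, an attribute assignment can also be combined with a node list. A hypothetical sketch (the node names are assumptions):

mmchconfig pagepool=512M -i -N k164n04,k164n05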
For a discussion on node quorum, see General Parallel File System: Concepts, Planning, and Installation Guide and search on node quorum.
For a discussion on node quorum with tiebreaker, see General Parallel File System: Concepts, Planning, and Installation Guide and search on node quorum with tiebreaker.

When using node quorum with tiebreaker, define one, two, or three disks to be used as tiebreaker disks when any quorum node is down. Issue this command:

mmchconfig tiebreakerDisks="nsdName;nsdName;nsdName"

Consider these points:
v You are not permitted to change a GPFS cluster configuration to use node quorum with tiebreaker if there are more than eight existing quorum nodes.
v You can have a maximum of three tiebreaker disks.
v The disks must have one of these types of attachments to the quorum nodes:
  - Fibre-channel SAN
  - Virtual shared disks
v The GPFS daemons must be down on all nodes in the cluster when running mmchconfig tiebreakerDisks.

If you are using node quorum with tiebreaker and want to change to using node quorum, issue this command:

mmchconfig tiebreakerDisks=DEFAULT
The output shows the device name of the file system and the file system manager's node number and name:
file system      manager node           [from 19.134.68.69 (k164n05)]
---------------- ------------------
fs1              19.134.68.70 (k164n06)
See the mmlsmgr Command on page 221 for complete usage information.

You can change the file system manager node for an individual file system by issuing the mmchmgr command. For example, to change the file system manager node for the file system fs1 to k145n32, enter:
mmchmgr fs1 k145n32
The output shows the file system manager's node number and name, in parentheses, as recorded in the GPFS cluster data:
GPFS: 6027-628 Sending migrate request to current manager node 19.134.68.69 (k145n30). GPFS: 6027-629 Node 19.134.68.69 (k145n30) resigned as manager for fs1. GPFS: 6027-630 Node 19.134.68.70 (k145n32) appointed as manager for fs1.
See the mmchmgr Command on page 110 for complete usage information.
Assuming that you have enough nodes to saturate the disk servers, and have to move all of the data, the time to read and write every block of data is roughly:

2 * fileSystemSize / averageDiskserverDataRate

As an upper bound, due to the overhead of scanning all of the metadata, this time should be doubled. If other jobs are loading the virtual shared disk servers heavily, this time may increase even more.

Note: There is no particular reason to stop all other jobs while the mmrestripefs command is running. The CPU load of the command is minimal on each node, and only the files that are being restriped at any moment are locked to maintain data integrity.
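As a hypothetical worked example (the capacity and data rate are assumptions, not from this book): for a 10 TB file system whose disk servers sustain an aggregate 1 GB per second, the estimate is 2 * 10,240 GB / (1 GB/s) = 20,480 seconds, or roughly 5.7 hours; doubling for the metadata-scan overhead gives an upper bound of about 11.4 hours.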
Check the messages recorded in /var/adm/ras/mmfs.log.latest on one node for verification. Look for messages similar to this:
mmfsd initializing ... GPFS: 6027-300 mmfsd ready
This indicates that quorum has been formed and this node has successfully joined the cluster, and is now ready to mount file systems.
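A minimal sketch of this verification step (mmstartup -a starts GPFS on all nodes; the log path is the one given above):

mmstartup -a
tail /var/adm/ras/mmfs.log.latest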
If GPFS does not start, see the General Parallel File System: Concepts, Planning, and Installation Guide and search for the GPFS daemon will not come up. See the mmstartup Command on page 280 for complete usage information. If it becomes necessary to stop GPFS, you can do so from the command line by issuing the mmshutdown command:
mmshutdown -a
See the mmshutdown Command on page 275 for complete usage information.
 9. Restriping a GPFS file system on page 22
10. Querying file system space on page 23
11. Querying and reducing file system fragmentation on page 24
12. Backing up a file system on page 26
13. Managing filesets, storage pools and policies. See Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
where device is the name of the file system. For example, to mount the file system fs1, enter:
mmmount fs1
To mount a file system only on a specific set of nodes, use the -N flag of the mmmount command. Note: When using GPFS file systems, you are not required to use the GPFS mmmount or mmumount commands. The operating system mount and umount (or unmount) commands also work.
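For example (a sketch; the node names are assumptions, not from this book), to mount fs1 on just two nodes:

mmmount fs1 -N k164n04,k164n05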
recorded in the GPFS configuration files and are passed as default options to subsequent mount commands on all nodes in the cluster. Options specified with the mmmount or mount commands override the existing default settings, and are not persistent.

All of the mount options can be specified using the -o parameter. Multiple options should be separated only by a comma. If an option is specified multiple times, the last instance is the one that takes effect. Certain options can also be set with specifically designated command flags. Unless otherwise stated, mount options can be specified as:

option or option=1 or option=yes - to enable the option
nooption or option=0 or option=no - to disable the option

The option={1 | 0 | yes | no} syntax should be used for options that can be intercepted by the mount command and not passed through to GPFS. An example is the atime option in the Linux environment.

The GPFS-specific mount options are:

atime
    Update inode access time for each access. This is the default. This option can also be controlled with the -S option on the mmcrfs and mmchfs commands.
mtime
    Always return accurate file modification times. This is the default. This option can also be controlled with the -E option on the mmcrfs and mmchfs commands.
noatime
    Do not update inode access times on this file system. This option can also be controlled with the -S option on the mmcrfs and mmchfs commands.
nomtime
    Update file modification times only periodically. This option can also be controlled with the -E option on the mmcrfs and mmchfs commands.
nosyncnfs
    Do not commit metadata changes coming from the NFS daemon synchronously. Normal file system synchronization semantics apply. This is the default.
syncnfs
    Synchronously commit metadata changes coming from the NFS daemon.
useNSDserver={always | asfound | asneeded | never}
    Controls the initial disk discovery and failover semantics for NSD disks. The possible values are:
    always
        Always access the disk using the NSD server. Local dynamic disk discovery is disabled.
    asfound
        Access the disk as found (the first time the disk was accessed). No change of disk access from local to NSD server, or the other way around, is performed by GPFS.
    asneeded
        Access the disk any way possible. This is the default.
    never
        Always use local disk access.
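For example, a hypothetical mmmount invocation combining two of the options above (whether these settings are appropriate depends on your configuration):

mmmount fs1 -o syncnfs,useNSDserver=always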
where device is the name of the file system. For example, to unmount the file system fs1, enter:
mmumount fs1
To unmount a file system only on a specific set of nodes, use the -N flag of the mmumount command. Note: When using GPFS file systems, you are not required to use the GPFS mmmount or mmumount commands. The operating system mount and umount (or unmount) commands also work.
See the mmdelfs Command on page 165 for complete usage information. See the mmdelnsd command for removing the NSD definitions after deleting the file system.
File system fs1 (mnsd.cluster:fs1) is mounted on 5 nodes:
  9.114.132.101   c5n101   mnsd.cluster
  9.114.132.100   c5n100   mnsd.cluster
  9.114.132.106   c5n106   mnsd.cluster
  9.114.132.97    c5n97    cluster1.cluster
  9.114.132.92    c5n92    cluster1.cluster
You cannot run the mmfsck command on a file system that has disks in a down state. You must first run the mmchdisk command to change the state of the disks to unrecovered or up. To display the status of the disks in the file system, issue the mmlsdisk command. For example, to check the file system fs1 without making any changes to the file system, enter:
mmfsck fs1
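If mmlsdisk reports disks in a down state, a minimal sketch of bringing them back before the check (assuming file system fs1 and that the disks are recoverable):

mmlsdisk fs1
mmchdisk fs1 start -a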
See the mmchdisk Command on page 97, mmcheckquota Command on page 101, mmfsck Command on page 185, and mmlsdisk Command on page 211 for complete usage information.
flag value                description
---- -------------------- -----------------------------------------------------
 -f  512                  Minimum fragment size in bytes
 -i  512                  Inode size in bytes
 -I  8192                 Indirect block size in bytes
 -m  2                    Default number of metadata replicas
 -M  2                    Maximum number of metadata replicas
 -r  1                    Default number of data replicas
 -R  2                    Maximum number of data replicas
 -j  cluster              Block allocation type
 -D  posix                File locking semantics in effect
 -k  all                  ACL semantics in effect
 -a  -1                   Estimated average file size
 -n  1000                 Estimated number of nodes that will mount file system
 -B  16384                Block size
 -Q  user;group;fileset   Quotas enforced
     none                 Default quotas enabled
 -F  3000096              Maximum number of inodes
 -V  10.00 (3.2.0.0)      File system version. Highest supported version: 10.00
 -u  yes                  Support for large LUNs?
 -z  no                   Is DMAPI enabled?
 -L  524288               Logfile size
 -E  yes                  Exact mtime mount option
 -S  no                   Suppress atime mount option
 -K  whenpossible         Strict replica allocation option
 -P  system               Disk storage pools in file system
 -d  hd3n97;hd4n97;hd5n98;hd6n98;hd7vsdn97;hd8vsdn97  Disks in file system
 -A  no                   Automatic mount option
 -o  none                 Additional mount options
 -T  /fs2                 Default mount point

See the mmlsfs Command on page 217 for complete usage information. See the General Parallel File System: Concepts, Planning, and Installation Guide and search on GPFS architecture and file system creation considerations for a detailed discussion of file system attributes.
See the mmchfs Command on page 106 and the mmlsfs Command on page 217 for complete usage information. See the General Parallel File System: Concepts, Planning, and Installation Guide and search on GPFS architecture and file system creation considerations for a detailed discussion of file system attributes.
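For example (a hypothetical sketch), to change the default number of data replicas for new files in fs1 with the mmchfs command:

mmchfs fs1 -r 2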
See the mmlsattr Command on page 205 for complete usage information. See the General Parallel File System: Concepts, Planning, and Installation Guide and search on GPFS architecture and file system creation considerations for a detailed discussion of file system attributes.
See the mmchattr Command on page 84 and the mmlsattr Command on page 205 for complete usage information. See the General Parallel File System: Concepts, Planning, and Installation Guide and search on GPFS architecture and file system creation considerations for a detailed discussion of file system attributes.
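For example (a sketch; the file name is an assumption), to query and then change the replication attributes of a single file:

mmlsattr /fs1/project4.sched
mmchattr -m 2 -r 2 /fs1/project4.sched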
This caching policy bypasses file cache and transfers data directly from disk into the user space buffer, as opposed to using the normal cache policy of placing pages in kernel memory. Applications with poor cache hit rates or very large I/Os may benefit from the use of Direct I/O. Direct I/O may also be specified by supplying the O_DIRECT file access mode on the open() of the file.
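As a quick command-line illustration (the path and block size are assumptions, not from this book): on Linux, GNU dd opens its input file with O_DIRECT when iflag=direct is specified, giving an effect similar to an application supplying O_DIRECT on open():

dd if=/gpfs/fs1/bigfile of=/dev/null bs=4M iflag=direct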
mmrestripefs fs2 -b GPFS: 6027-589 Scanning file system metadata, phase 1 ... 48 % complete on Wed Aug 16 16:47:53 2000 96 % complete on Wed Aug 16 16:47:56 2000 100 % complete on Wed Aug 16 16:47:56 2000 GPFS: 6027-552 Scan completed successfully. GPFS: 6027-589 Scanning file system metadata, phase 2 ... GPFS: 6027-552 Scan completed successfully. GPFS: 6027-589 Scanning file system metadata, phase 3 ... 98 % complete on Wed Aug 16 16:48:02 2000 100 % complete on Wed Aug 16 16:48:02 2000 GPFS: 6027-552 Scan completed successfully. GPFS: 6027-565 Scanning user file metadata ... GPFS: 6027-552 Scan completed successfully.
Note: Rebalancing of files is an I/O intensive and time consuming operation, and is important only for file systems with large files that are mostly invariant. In many cases, normal file update and creation will rebalance your file system over time, without the cost of the rebalancing. See the mmrestripefs Command on page 267 for complete usage information.
Inode Information
-----------------
Number of used inodes:
Number of free inodes:
Number of allocated inodes:
Maximum number of inodes:
See the mmdf Command on page 174 for complete usage information.
disk        disk size    free subblk      free subblk    % free    % blk
name        in nSubblk   in full blocks   in fragments   blk       util
---------   ----------   --------------   ------------   -------   -------
gpfs68nsd     4390912          4270112            551    97.249    99.544
gpfs69nsd     4390912          4271360            490    97.277    99.590
            ----------   --------------   ------------             -------
(total)       8781824          8541472           1041              99.567
See the mmdefragfs Command on page 154 for complete usage information.
free subblk free in full subblk in % % blocks blk fragments free blk blk util before after freed before after before after before after ----------------------- ----------------- ------------ -----------2192320 2192320 0 216 216 98.57 98.57 99.99 99.99 1082272 1082272 0 200 200 97.50 97.50 99.98 99.98 1077056 1077056 0 173 173 97.03 97.03 99.98 99.98 1082496 1082496 0 400 400 97.52 97.52 99.96 99.96 1077120 1077120 0 120 120 97.04 97.04 99.99 99.99 1077344 1077344 0 246 246 97.06 97.06 99.98 99.98 1084032 1084032 0 336 336 97.66 97.66 99.97 99.97 1078272 1078272 0 217 217 97.14 97.14 99.98 99.98 1080000 1080000 0 263 263 97.30 97.30 99.98 99.98 1109216 1109216 0 110 110 99.93 99.93 99.99 99.99 2196544 2196544 0 306 306 98.76 98.76 99.99 99.99 1083168 1083168 0 246 246 97.58 97.58 99.98 99.98 1079616 1079616 0 200 200 97.07 97.07 99.98 99.98 1079808 1079808 0 229 229 97.08 97.08 99.98 99.98 1085056 1085056 0 205 205 97.56 97.56 99.98 99.98 1085408 1085408 0 237 237 97.59 97.59 99.98 99.98 1085312 1085312 0 193 193 97.58 97.58 99.98 99.98 1085792 1085792 0 260 260 97.62 97.62 99.98 99.98 1111936 1111936 0 53 53 99.97 99.97 100.00 100.00 1111936 1111936 0 53 53 99.97 99.97 100.00 100.00 1078912 1078912 0 292 292 97.00 97.00 99.97 99.97 1079392 1079392 0 249 249 97.05 97.05 99.98 99.98 2192448 2192448 0 208 208 98.58 98.58 99.99 99.99 1082944 1082944 0 222 222 97.56 97.56 99.98 99.98 1082400 1082400 0 242 242 97.51 97.51 99.98 99.98 1081856 1081856 0 216 216 97.46 97.46 99.98 99.98 2192608 2192608 0 280 280 98.57 98.57 99.99 99.99 ----------------------- ---------------------------33735264 33735264 0 5972 5972 99.98 99.98
Defragmentation complete, full block utilization is 99.96%. Re-issue command to try to reach target utilization of 100.00%.
See the mmdefragfs Command on page 154 for complete usage information.
3. The TSM server code and the TSM client code must be at the same level.
4. Storage pools must be configured on the TSM server.
5. The TSM clients must be configured to communicate with the TSM server.
6. TSM must be made aware that the various TSM clients are all working on the same file system, not different file systems having the same name on different client machines. This is accomplished by coding the same value for the nodename keyword in the dsm.sys file located in the Tivoli client directory (/usr/tivoli/tsm/client/ba/bin for AIX, /opt/tivoli/tsm/client/ba/bin for Linux) on each client; see the dsm.sys sketch at the end of this section.
7. Restoration of backed up data must be done using TSM interfaces. This can be done with the client command line interface or the TSM web client. The TSM web client interface must be made operational if you desire to use this interface for restoring data to the file system from the TSM server.

Attention: If you are using the TSM Backup Archive client, you must use caution when you unlink filesets that contain data backed up by TSM. TSM tracks files by pathname and does not track filesets. As a result, when you unlink a fileset, it appears to TSM that you deleted the contents of the fileset. Therefore, the TSM Backup Archive client inactivates the data on the TSM server, which may result in the loss of backup data during the expiration process.

You can view or download the TSM documentation at the IBM Tivoli Storage Manager Info Center: http://publib.boulder.ibm.com/infocenter/tivihelp/index.jsp?toc=/com.ibm.itstorage.doc/toc.xml
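As an illustration of item 6 above, the following dsm.sys fragment is a hedged sketch (the server and node names are invented for this example); the same nodename value would appear on every TSM client node that backs up the shared file system:

servername        tsmserver1
   commmethod        tcpip
   tcpserveraddress  tsmserver1.example.com
   nodename          gpfs_fs1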
The system displays information similar to:

File system   Disk name    NSD servers
---------------------------------------------------------------------------
fs2           hd3n97       c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd4n97       c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd5n98       c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd6n98       c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd7vsdn97    c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd8vsdn97    c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd9vsdn97    c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd10vsdn98   c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com,c5n99g.ppd.pok.ibm.com
fs2           hd11vsdn98   c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com
fs2           hd12vsdn98   c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com
fs2           sdbnsd       c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
fs2           sdcnsd       c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
fs2           sddnsd       c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
fs2           sdensd       c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
fs2           sdgnsd       c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
To find out the local device names for the disks, use the mmlsnsd command with the -m option. For example, issuing mmlsnsd -m produces output similar to this:

Disk name    NSD volume ID      Device            Node name                Remarks
---------------------------------------------------------------------------------------
hd10vsdn98   0972846245C8E93C   /dev/hd10vsdn98   c5n97g.ppd.pok.ibm.com   server node
hd10vsdn98   0972846245C8E93C   /dev/hd10vsdn98   c5n98g.ppd.pok.ibm.com   server node
hd11vsdn98   0972846245C8E93F   /dev/hd11vsdn98   c5n97g.ppd.pok.ibm.com   server node
hd11vsdn98   0972846245C8E93F   /dev/hd11vsdn98   c5n98g.ppd.pok.ibm.com   server node
hd12vsdn98   0972846245C8E941   /dev/hd12vsdn98   c5n97g.ppd.pok.ibm.com   server node
hd12vsdn98   0972846245C8E941   /dev/hd12vsdn98   c5n98g.ppd.pok.ibm.com   server node
hd2n97       0972846145C8E924   /dev/hdisk2       c5n97g.ppd.pok.ibm.com   server node
hd2n97       0972846145C8E924   /dev/hdisk2       c5n98g.ppd.pok.ibm.com   server node
hd3n97       0972846145C8E927   /dev/hdisk3       c5n97g.ppd.pok.ibm.com   server node
hd3n97       0972846145C8E927   /dev/hdisk3       c5n98g.ppd.pok.ibm.com   server node
hd4n97       0972846145C8E92A   /dev/hdisk4       c5n97g.ppd.pok.ibm.com   server node
hd4n97       0972846145C8E92A   /dev/hdisk4       c5n98g.ppd.pok.ibm.com   server node
hd5n98       0972846245EB501C   /dev/hdisk5       c5n97g.ppd.pok.ibm.com   server node
hd5n98       0972846245EB501C   /dev/hdisk5       c5n98g.ppd.pok.ibm.com   server node
hd6n98       0972846245DB3AD8   /dev/hdisk6       c5n97g.ppd.pok.ibm.com   server node
hd6n98       0972846245DB3AD8   /dev/hdisk6       c5n98g.ppd.pok.ibm.com   server node
hd7vsdn97    0972846145C8E934   /dev/hd7vsdn97    c5n97g.ppd.pok.ibm.com   server node
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
6 % complete on Wed Aug 16 15:14:45 2000
100 % complete on Wed Aug 16 15:14:46 2000
GPFS: 6027-552 Scan completed successfully.
Done
Note: Rebalancing of files is an I/O intensive and time consuming operation, and is important only for file systems with large files that are mostly invariant. In many cases, normal file update and creation will rebalance your file system over time, without the cost of an explicit rebalancing operation.

When using an IBM eServer High Performance Switch (HPS) in your configuration, it is suggested that you process your disks in two steps:
1. Create virtual shared disks on each physical disk with the mmcrvsd command.
2. Using the rewritten disk descriptors from the mmcrvsd command, create NSDs with the mmcrnsd command.
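The two HPS-related steps can be sketched as follows; disk.desc is an illustrative descriptor file name, and mmcrvsd rewrites it in the form that mmcrnsd expects:

mmcrvsd -F disk.desc
mmcrnsd -F disk.desc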
Refer to mmdeldisk Command on page 159 for syntax and usage information.
Replacing hd3n97 ...
The following disks of fs2 will be formatted on node k145n03:
    hd2n97: size 142028570 KB
Extending Allocation Map
0 % complete on Fri Feb 3 16:12:49 2006
1 % complete on Fri Feb 3 16:12:54 2006
40 % complete on Fri Feb 3 16:14:59 2006
91 % complete on Fri Feb 3 16:17:10 2006
100 % complete on Fri Feb 3 16:17:30 2006
Completed adding disks to file system fs2.
Scanning system storage pool
Scanning file system metadata, phase 1 ...
2 % complete on Fri Feb 3 16:17:44 2006
10 % complete on Fri Feb 3 16:17:47 2006
62 % complete on Fri Feb 3 16:18:06 2006
78 % complete on Fri Feb 3 16:18:13 2006
100 % complete on Fri Feb 3 16:18:19 2006
Scan completed successfully.
Scanning file system metadata, phase 2 ...
67 % complete on Fri Feb 3 16:18:25 2006
100 % complete on Fri Feb 3 16:18:26 2006
Scan completed successfully.
Scanning file system metadata, phase 3 ...
22 % complete on Fri Feb 3 16:18:29 2006
40 % complete on Fri Feb 3 16:18:36 2006
74 % complete on Fri Feb 3 16:18:49 2006
100 % complete on Fri Feb 3 16:18:56 2006
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
4 % complete on Fri Feb 3 16:19:00 2006
26 % complete on Fri Feb 3 16:19:07 2006
100 % complete on Fri Feb 3 16:19:31 2006
Scan completed successfully.
Done
mmrpldisk: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
Refer to mmlsdisk Command on page 211 for syntax and usage information.
Availability
A disk's availability determines whether GPFS is able to read and write to the disk. There are four possible values for availability:

up
   The disk is available to GPFS for normal read and write operations.
down
   No read and write operations can be performed on the disk.
recovering
   An intermediate state for disks coming up, during which GPFS verifies and corrects data. Read operations can be performed while a disk is in this state, but write operations cannot.
unrecovered
   Not all disks were successfully brought up.

Disk availability is automatically changed from up to down when GPFS detects repeated I/O errors. You can also change the availability of a disk by issuing the mmchdisk command.
Status
Disk status controls data placement and migration. Status changes as a result of a pending delete operation, or when the mmchdisk command is issued to allow file rebalancing or re-replicating prior to disk replacement or deletion.

Disk status has five possible values, three of which are transitional:

ready
   Normal status.
suspended
   Indicates that data is to be migrated off this disk.
being emptied
   Transitional status in effect while a disk deletion is pending.
replacing
   Transitional status in effect for the old disk while replacement is pending.
replacement
   Transitional status in effect for the new disk while replacement is pending.

GPFS allocates space only on disks with a status of ready or replacement. GPFS migrates data off disks with a status of being emptied, replacing, or suspended, onto disks with a status of ready or replacement. During disk deletion or replacement, data is automatically migrated as part of the operation. The mmrestripefs command must be issued to initiate data migration from a suspended disk.

See Deleting disks from a file system on page 31, Replacing disks in a GPFS file system on page 32, and Restriping a GPFS file system on page 22.
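For example, to begin migrating data off a suspended disk in file system fs1, one option is -m, which moves critical data off suspended disks (a sketch; see the mmrestripefs command description for the full set of options):

mmrestripefs fs1 -m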
Issue the mmchdisk command with one of the following four options to change disk state:

suspend
   Instructs GPFS to stop allocating space on the specified disk. Place a disk in this state prior to disk deletion or replacement. This is a user-initiated state that GPFS will never use without an explicit command to change disk state.

   Note: A disk remains suspended until it is explicitly resumed. Restarting GPFS or rebooting nodes does not restore normal access to a suspended disk.

resume
   Informs GPFS that a disk previously suspended is now available for allocating new space. Resume a disk only when you have suspended it and then decided not to delete or replace it. If the disk is currently in a stopped state, it remains stopped until you specify the start option. Otherwise, normal read and write access to the disk resumes.

stop
   Instructs GPFS to stop any attempts to access the specified disk. Use this option to inform GPFS that a disk has failed or is currently inaccessible because of maintenance. A disk's availability remains down until it is explicitly started with the start option.

start
   Informs GPFS that a disk previously stopped is now accessible. GPFS does this by first changing the disk availability from down to recovering. The file system metadata is then scanned and any missing updates (replicated data that was changed while the disk was down) are repaired. If this operation is successful, the availability is then changed to up. If the metadata scan fails, availability is set to unrecovered. This could occur if other disks remain in recovering or an I/O error has occurred. Repair all disks and paths to disks (see mmfsck); the metadata scan can then be re-initiated at a later time by issuing the mmchdisk start command again.

   If more than one disk in the file system is down, they should all be started at the same time by using the -a option. If you start them separately and metadata is stored on any disk that remains down, the mmchdisk start command fails.

For example, to suspend the hd8vsdn100 disk in the file system fs1, enter:

mmchdisk fs1 suspend -d hd8vsdn100
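Similarly, if several disks are down, they can all be started in one operation with the -a option, as recommended above:

mmchdisk fs1 start -a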
You can also use the mmchdisk command with the change option to change the Disk Usage and Failure Group parameters for one or more disks in a GPFS file system. This can be useful in situations where, for example, a file system that contains only RAID disks is being upgraded to add conventional disks that are better suited to storing metadata. After adding the disks using the mmadddisk command, the metadata currently stored on the RAID disks would have to be moved to the new disks to achieve the desired performance improvement. To accomplish this, first the mmchdisk change command would be issued to change the Disk Usage parameter for the RAID disks to dataOnly. Then the mmrestripefs command would be used to restripe the metadata off the RAID device and onto the conventional disks. For example, to specify that metadata should no longer be stored on disk hd8vsdn100, enter:
mmchdisk fs1 change -d "hd8vsdn100:::dataOnly"
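The metadata can then be migrated off the disk with the mmrestripefs command; the -r option, which restores replication and placement, is a typical choice here (a sketch, not the only possibility):

mmrestripefs fs1 -r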
See the mmchdisk Command on page 97 and the mmlsdisk Command on page 211 for complete usage information.
You must follow these rules when changing NSDs:
v You must identify the disks by the NSD names that were given to them by the mmcrnsd command.
v You must explicitly specify values for all NSD servers in the list, even if you are only changing one of the values.
v The file system that contains the NSD being changed must be unmounted prior to issuing the mmchnsd command.
v The NSD must be properly connected to the new nodes prior to issuing the mmchnsd command.
v This command cannot be used to change the DiskUsage or FailureGroup for an NSD. You must issue the mmchdisk command to change these.
v You cannot change the name of the NSD.

For example, to assign node k145n07 as an NSD server for disk gpfs47nsd:
1. Make sure that k145n07 is not already assigned to the server list by issuing the mmlsnsd command:

   mmlsnsd -d "gpfs47nsd"

   The system displays information similar to:

   File system   Disk name    NSD server nodes
   ---------------------------------------------------------------
   fs1           gpfs47nsd    k145n09

2. Unmount the file system on all nodes and ensure that the disk is connected to the new node (k145n07).
3. Issue the mmchnsd command:

   mmchnsd "gpfs47nsd:k145n09,k145n07"

4. Verify the changes by issuing the mmlsnsd command:

   mmlsnsd -d gpfs47nsd

   The system displays information similar to:
   File system   Disk name    NSD servers
   ---------------------------------------------------------------------------
   fs2           gpfs47nsd    k145n09.ppd.pok.ibm.com,k145n07.ppd.pok.ibm.com
To change the disk discovery behavior of a file system that is already mounted: cleanly unmount it, wait for the unmount to complete, and then mount the file system again using the desired -o useNSDserver option.
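As a hedged sketch of that sequence (the file system name and the useNSDserver value are illustrative):

mmumount fs1
mmmount fs1 -o useNSDserver=always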
GPFS can use Persistent Reserve (PR) functionality to improve failover times. However, the following restrictions apply to the use of PR:
v PR is only supported on AIX nodes.
v PR is only supported on NSDs built directly on hdisks.
v The disk subsystems must support PR.
v GPFS supports a mix of PR disks and other disks. However, you will only realize improved failover times if all disks in the cluster support PR.
v GPFS only supports PR in the local cluster. Remote mounts must access the disks through an NSD server.
v When you enable or disable PR, you must stop GPFS on all nodes.
To enable (or disable) Persistent Reserve, issue the command:

mmchconfig usePersistentReserve={yes|no}

For fast recovery times with Persistent Reserve, you should also set the failureDetectionTime configuration parameter. For fast recovery, a recommended value would be 10. You can set this by issuing the command:
mmchconfig failureDetectionTime=10
Example

To determine if the disks on the servers and the disks of a specific node have PR enabled, issue the following command from the node:

mmlsnsd -X

The system responds with something similar to:
Disk name   NSD volume ID      Device        Devtype  Node name             Remarks
----------------------------------------------------------------------------------------------
gpfs10nsd   09725E5E43035A99   /dev/hdisk6   hdisk    k155n14.kgn.ibm.com   server node,pr=yes
gpfs10nsd   09725E5E43035A99   /dev/hdisk8   hdisk    k155n16.kgn.ibm.com   server node,pr=yes
gpfs10nsd   09725E5E43035A99   /dev/hdisk6   hdisk    k155n17.kgn.ibm.com   directly attached pr=yes
If the GPFS daemon has been started on all the nodes in the cluster and the file system has been mounted on all nodes that have direct access to the disks, then pr=yes should be on all hdisks. If you do not see this, there is a problem. Refer to the General Parallel File System: Problem Determination Guide for additional information on Persistent Reserve errors.
To disable quota management:
1. Unmount the file system everywhere.
2. Run the mmchfs -Q no command.
3. Remount the file system, deactivating the quota files. All subsequent mounts obey the new quota setting.

See the mmcheckquota Command on page 101, the mmchfs Command on page 106, the mmcrfs Command on page 127, and the mmedquota Command on page 180 for complete usage information. For additional information on quotas, see the General Parallel File System: Concepts, Planning, and Installation Guide.
Default quotas
Applying default quotas provides minimum quota limits for all new users, groups of users, or filesets for a file system. If default quota values for a file system are not enabled, a new user, group of users, or fileset for that file system has a quota limit of zero, which establishes no limit to the amount of space they can use.

To enable default quota values for a file system:
1. The file system must have been created or changed with the -Q yes option. See the mmcrfs and mmchfs commands.
2. Enable default quotas with the mmdefquotaon command.
3. Establish default quota values for new users, groups, and filesets by issuing the mmdefedquota command.

To apply different quota values for a particular user, group, or fileset, the system administrator must explicitly configure those values using the mmedquota command. If, after explicit quotas have been established for a user, group, or fileset, it becomes necessary to reapply the default limits, issue the mmedquota -d command for that user, group, or fileset. The default quotas can be deactivated by issuing the mmdefquotaoff command.

For example, to create default quotas for users of the file system fs0, enter:
mmdefedquota -u fs0
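Before this command is useful, quota enforcement and default quotas must have been activated for fs0. A hedged sketch of those earlier steps (run them subject to the unmount requirements described above):

mmchfs fs0 -Q yes
mmdefquotaon fs0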
See the mmdefedquota Command on page 147, mmdefquotaoff Command on page 149, mmdefquotaon Command on page 151, and mmedquota Command on page 180 commands for complete usage information.
The mmedquota command opens a session using your default editor, and prompts you for soft and hard limits for inodes and blocks. For example, to set user quotas for user jesmith, enter:
mmedquota -u jesmith
Note: A zero quota limit indicates that no quota limits have been established. The current (in use) inode and block usage is for display only; it cannot be changed. When establishing a new quota, zeros appear as limits. Replace the zeros, or the old values if you are changing existing limits, with values based on the user's needs and the resources available. When you close the editor, GPFS checks the values and applies them. If an invalid value is specified, GPFS generates an error message. If this occurs, reenter the mmedquota command.

You may find it helpful to maintain a quota prototype, a set of limits that you can apply by name to any user, group, or fileset without entering the individual values manually. This makes it easy to set the same limits for all. The mmedquota command includes the -p option for naming a prototypical user, group, or fileset on which limits are to be based. For example, to set group quotas for all users in a group named blueteam to the prototypical values established for prototeam, issue:
mmedquota -g -p prototeam blueteam
You may also reestablish default quotas for a specified user, group of users, or fileset when using the -d option on the mmedquota command. See the mmedquota Command on page 180 for complete usage information.
Checking quotas
The mmcheckquota command counts inode and space usage for a file system and writes the collected data into quota files. You must use the mmcheckquota command if:
1. Quota information is lost due to node failure. Node failure could leave users unable to open files or deny them disk space that their quotas should allow.
2. The in doubt value approaches the quota limit. To see the in doubt value, use the mmlsquota or mmrepquota commands. As the sum of the in doubt value and the current usage may not exceed the hard limit, the actual block space and number of files available to the user, group, or fileset may be constrained by the in doubt value. Should the in doubt value approach a significant percentage of the quota, use the mmcheckquota command to account for the lost space and files.

When issuing the mmcheckquota command on a mounted file system, negative in doubt values may be reported if the quota server processes a combination of up-to-date and back-level information. This is a transient situation and may be ignored.

During the normal operation of file systems with quotas enabled (not running mmcheckquota online), the usage data reflects the actual usage of the blocks and inodes in the sense that if you delete files you should see the usage amount decrease. The in doubt value does not reflect how much the user has used
already; it is just the amount of quota that the quota server has assigned to its clients. The quota server does not know whether the assigned amount has been used or not. The only situation where the in doubt value is important to the user is when the sum of the usage and the in doubt value is greater than the user's quota hard limit. In this case, the user is not allowed to allocate more blocks or inodes unless the usage is brought down.

For example, to check quotas for the file system fs1 and report differences between calculated and recorded disk quotas, enter:
mmcheckquota -v fs1
The information displayed shows that the quota information for USR7 was corrected. Due to a system failure, this information was lost at the server, which recorded 0 subblocks and 0 files. The current usage data counted is 96 subblocks and 3 files. This is used to update the quota:
/dev/fs1: quota-check found following differences:
USR7: 96 subblocks counted (was 0); 3 inodes counted (was 0)
See the mmcheckquota Command on page 101 for complete usage information.
Listing quotas
The mmlsquota command displays the file system quota limits, default quota limits, and current usage information. GPFS quota management takes replication into account when reporting on, and determining whether, quota limits have been exceeded for both block and file usage. In a file system that has either type of replication set to a value of two, the values reported by both the mmlsquota command and the mmrepquota command, and the values used to determine whether quota limits have been exceeded, will be double the value reported by the ls command.

When issuing the mmlsquota command on a mounted file system, negative in doubt values may be reported if the quota server processes a combination of up-to-date and back-level information. This is a transient situation and may be ignored.

Specify the quota information for one user, group of users, or fileset with the mmlsquota command. If neither -g, -u, nor -j is specified, quota data is displayed for the user who entered the command.

For example, to display default quota information for users of all the file systems in the cluster, enter:
mmlsquota -d -u
This output shows that for file system fs1, a default quota limit of 10240K for users has been established. For file systems fs2 and fs3, no default quotas for users have been established.

If you issue the mmlsquota command with the -e option, the quota system collects updated information from all nodes before returning output. If the node to which in-doubt space was allocated should fail before updating the quota system about its actual usage, this space might be lost. Should the amount of space in doubt approach a significant percentage of the quota, run the mmcheckquota command to account for the lost space.

To collect and display updated quota information about a group named blueteam, specify the -g and -e options:
mmlsquota -g blueteam -e
Disk quotas for group blueteam (gid 100):
                        Block Limits                       File Limits
Filesystem type    KB      quota    limit   in_doubt     limit   in_doubt   grace
fs1        GRP     45730   52000    99000       1335       990         19    none
See the mmlsquota Command on page 231 for complete usage information.
See the mmquotaon Command on page 250 and the mmlsfs Command on page 217 for complete usage information.
See the mmquotaoff Command on page 248 and the mmlsfs Command on page 217 for complete usage information.
See the mmrepquota Command on page 258 for complete usage information.
1. To restore the user quota file for the file system fs1 from the backup file userQuotaInfo, enter:
mmcheckquota -u userQuotaInfo fs1
2. This will restore the user quota limits set for the file system, but the usage information will not be current. To bring the usage information to current values, the command must be reissued:
mmcheckquota fs1
If no backup files are available and the quota files are to be restored using a new file, follow these steps:
1. Remove the existing corrupted quota files:
   a. Unmount the file system.
   b. Disable quota management:
      mmchfs fs1 -Q no
   c. Remount the file system.
   d. Remove the user.quota, group.quota, and fileset.quota files.
2. Enable quota management:
   a. Unmount the file system.
   b. Issue the following command:
      mmchfs fs1 -Q yes
   c. Remount the file system.
3. Reestablish quota limits by issuing the mmedquota command or the mmdefedquota command.
4. Gather the current quota usage values by issuing the mmcheckquota command.
In this ACL:
v The first two lines are comments showing the file's owner, jesmith, and group name, team_A.
v The next three lines contain the base permissions for the file. These three entries are the minimum necessary for a GPFS ACL:
  1. The permissions set for the file owner (user), jesmith
  2. The permissions set for the owner's group, team_A
  3. The permissions set for other groups or users outside the owner's group and not belonging to any named entry
v The next line, with an entry type of mask, contains the maximum permissions allowed for any entries other than the owner (the user entry) and those covered by other in the ACL.
v The last three lines contain additional entries for specific users and groups. These permissions are limited by those specified in the mask entry, but you may specify any number of additional entries, up to a memory page (approximately 4 K) in size.

Traditional GPFS ACLs are fully compatible with the base operating system permission set. Any change to the base permissions, using the chmod command, for example, modifies the corresponding GPFS ACL as well. Similarly, any change to the GPFS ACL is reflected in the output of commands such as ls -l. Note that the control (c) permission is GPFS specific. There is no comparable support in the base operating system commands. As a result, the (c) permission is visible only with the GPFS ACL commands.
Each GPFS file or directory has an access ACL that determines its access privileges. These ACLs control who is allowed to read or write at the file or directory level, as well as who is allowed to change the ACL itself.

In addition to an access ACL, a directory may also have a default ACL. If present, the default ACL is used as a base for the access ACL of every object created in that directory. This allows a user to protect all files in a directory without explicitly setting an ACL for each one.

When a new object is created, and the parent directory has a default ACL, the entries of the default ACL are copied to the new object's access ACL. After that, the base permissions for user, mask (or group if mask is not defined), and other are changed to their intersection with the corresponding permissions from the mode parameter in the function that creates the object.

If the new object is a directory, its default ACL is set to the default ACL of the parent directory. If the parent directory does not have a default ACL, the initial access ACL of newly created objects consists only of the three required entries (user, group, other). The values of these entries are based on the mode parameter in the function that creates the object and the umask currently in effect for the process.

Administrative tasks associated with traditional GPFS ACLs are:
1. Setting traditional GPFS access control lists
2. Displaying traditional GPFS access control lists on page 49
3. Changing traditional GPFS access control lists on page 50
4. Deleting traditional GPFS access control lists on page 50
In the project2.acl file above:
v The first three lines are the required ACL entries setting permissions for the file's owner, the owner's group, and for processes that are not covered by any other ACL entry.
v The last three lines contain named entries for specific users and groups.
v Because the ACL contains named entries for specific users and groups, the fourth line contains the required mask entry, which is applied to all named entries (entries other than the user and other).

Once you are satisfied that the correct permissions are set in the ACL file, you can apply them to the target file with the mmputacl command. For example, to set permissions contained in the file project2.acl for the file project2.history, enter:
mmputacl -i project2.acl project2.history
Although you can issue the mmputacl command without using the -i option to specify an ACL input file, and make ACL entries through standard input, you will probably find the -i option more useful for avoiding errors when creating a new ACL. See the mmputacl Command on page 245 and the mmgetacl Command on page 194 for complete usage information.
The first two lines are comments displayed by the mmgetacl command, showing the owner and owning group. All entries containing permissions that are not allowed (because they are not set in the mask entry) display with a comment showing their effective permissions. See the mmgetacl Command on page 194 for complete usage information.
See the mmgetacl Command on page 194 and the mmputacl Command on page 245 for complete usage information.
The current ACL entries are displayed using the default editor, provided that the EDITOR environment variable specifies a complete path name. When the file is saved, the system displays information similar to:
mmeditacl: 6027-967 Should the modified ACL be applied? (yes) or (no)
After responding yes, the ACLs are applied. See the mmeditacl Command on page 177 for complete usage information.
You cannot delete the base permissions. These remain in effect after this command is executed. See the mmdelacl Command on page 157 and the mmgetacl Command on page 194 for complete usage information.
Depending on the value (posix | nfs4 | all) of the -k parameter, one or both ACL types can be allowed for a given file system. Since ACLs are assigned on a per-file basis, this means that within the same file system one file may have an NFS V4 ACL, while another has a POSIX ACL. The type of ACL can be changed by using the mmputacl or mmeditacl command to assign a new ACL, or by the mmdelacl command (causing the permissions to revert to the mode, which is in effect a POSIX ACL). At any point in time, only a single ACL can be associated with a file. Access evaluation is done as required by the ACL type associated with the file.

NFS V4 ACLs are represented in a completely different format than traditional ACLs. For detailed information on NFS V4 and its ACLs, refer to the paper, NFS Version 4 Protocol, and other information found at: www.nfsv4.org.

In the case of NFS V4 ACLs, there is no concept of a default ACL. Instead, there is a single ACL and the individual ACL entries can be flagged as being inherited (either by files, directories, both, or neither). Consequently, specifying the -d flag on the mmputacl command for an NFS V4 ACL is an error.
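To check which ACL types a given file system currently allows, the -k setting can be displayed with the mmlsfs command; for example (the file system name is illustrative):

mmlsfs fs1 -k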
deny
   Means to not allow (or deny) those permissions that have been selected with an X.
v The fourth and final part is a list of flags indicating inheritance. Valid flag values are:
   FileInherit
      Indicates that the ACL entry should be included in the initial ACL for files created in this directory.
   DirInherit
      Indicates that the ACL entry should be included in the initial ACL for subdirectories created in this directory (as well as the current directory).
   InheritOnly
      Indicates that the current ACL entry should NOT apply to the directory, but SHOULD be included in the initial ACL for objects created in this directory.

As in traditional ACLs, users and groups are identified by specifying the type and name. For example, group:staff or user:bin. NFS V4 provides for a set of special names that are not associated with a specific local UID or GID. These special names are identified with the keyword special followed by the NFS V4 name. These names are recognized by the fact that they end with the character @. For example, special:owner@ refers to the owner of the file, special:group@ the owning group, and special:everyone@ applies to all users.

The next two lines provide a list of the available access permissions that may be allowed or denied, based on the ACL type specified on the first line. A permission is selected using an X. Permissions that are not specified by the entry should be left marked with - (minus sign).

These are examples of NFS V4 ACLs.
1. An ACL entry that explicitly allows READ, EXECUTE and READ_ATTR to the staff group on a file is similar to this:
group:staff:r-x-:allow
  (X)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR (-)READ_NAMED
  (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED
2. A directory ACL is similar to this. It may include inherit ACL entries that do not apply to the directory itself, but instead become the initial ACL for any objects created within the directory.
special:group@:----:deny:DirInherit:InheritOnly
  (X)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR (-)READ_NAMED
  (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED
Removal of a file includes renaming the file, moving the file from one directory to another even if the file name remains the same, and deleting it.
Table 3. Removal of a file with ACL entries DELETE and DELETE_CHILD

                                               ACL Allows  ACL Denies  DELETE not  UNIX mode
                                               DELETE      DELETE      specified   bits only
ACL Allows DELETE_CHILD                        Permit      Permit      Permit      Permit
ACL Denies DELETE_CHILD                        Permit      Deny        Deny        Deny
DELETE_CHILD not specified                     Permit      Deny        Deny        Deny
UNIX mode bits only - wx permissions allowed   Permit      Permit      Permit      Permit
UNIX mode bits only - no w or no x
  permissions allowed                          Permit      Deny        Deny        Deny
The UNIX mode bits are used in cases where the ACL is not an NFS V4 ACL.
The lines that follow the first one are then processed according to the rules of the expected ACL type. An NFS V4 ACL is similar to this:
#NFSv4 ACL
#owner:root
#group:system
special:owner@:rwxc:allow
  (X)READ/LIST (X)WRITE/CREATE (-)MKDIR (X)SYNCHRONIZE (X)READ_ACL (-)READ_ATTR (-)READ_NAMED
  (X)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED

special:owner@:----:deny
  (-)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (X)READ_NAMED
  (-)DELETE (X)DELETE_CHILD (X)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (X)WRITE_NAMED

user:guest:r-xc:allow
  (X)READ/LIST (-)WRITE/CREATE (-)MKDIR (X)SYNCHRONIZE (X)READ_ACL (-)READ_ATTR (-)READ_NAMED
  (X)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED
user:guest:----:deny
  (-)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (X)READ_NAMED
  (-)DELETE (X)DELETE_CHILD (X)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED
This ACL shows four ACL entries (an allow and a deny entry for each of owner@ and guest). In general, constructing NFS V4 ACLs is more complicated than constructing traditional ACLs. Users new to NFS V4 ACLs may find it useful to start with a traditional ACL and allow either mmgetacl or mmeditacl to provide the NFS V4 translation, using the -k nfs4 flag, as a starting point when creating an ACL for a new file.
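For example, to see the NFS V4 translation of the traditional ACL already set on the project2.history file used earlier, enter:

mmgetacl -k nfs4 project2.history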
v Synchronous writing using NFS on page 57 v Unmounting a file system after NFS export on page 57 v NIS automount considerations on page 57
Export considerations
Keep these points in mind when exporting a GPFS file system to NFS. The operating system being used and the version of NFS might require special handling or consideration.

Linux export considerations: For Linux nodes only, issue the exportfs -ra command to initiate a reread of the /etc/exports file.

Starting with Linux kernel version 2.6, an fsid value must be specified for each GPFS file system that is exported on NFS. The format of the entry in /etc/exports for the GPFS directory /gpfs/dir1 looks like this:

/gpfs/dir1 cluster1(rw,fsid=745)

The administrator must assign fsid values subject to the following conditions:
1. The values must be unique for each file system.
2. The values must not change after reboots. The file system should be unexported before any change is made to an already assigned fsid.
3. Entries in the /etc/exports file are not necessarily file system roots. You can export multiple directories within a file system. In the case of different directories of the same file system, the fsid should be the same. For example, in the GPFS file system /gpfs, if two directories are exported (dir1 and dir2), the entries may look like this:
/gpfs/dir1 cluster1(rw,fsid=745)
/gpfs/dir2 cluster1(rw,fsid=745)

4. If a GPFS file system is exported from multiple nodes, the fsids should be the same on all nodes.

Large installations with hundreds of compute nodes and a few login nodes or NFS-exporting nodes require tuning of the GPFS parameters maxFilesToCache and maxStatCache with the mmchconfig command. The general suggestion is for the compute nodes to set maxFilesToCache to about 200. The login or NFS nodes should set this parameter much higher, with maxFilesToCache set to 1000 and maxStatCache set to 50000. This tuning is required for the GPFS token manager (file locking), which can handle approximately 1,000,000 files in memory. The default value of maxFilesToCache is 1000 and the default value of maxStatCache is 4 * maxFilesToCache, so that by default, each node holds 5000 tokens. The token manager must keep track of a total number of tokens, which equals 5000 * number of nodes. This will exceed the memory limit of the token manager on large configurations.
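A hedged sketch of such tuning follows; the node list name is illustrative, and the new values take effect only after GPFS is restarted on the affected nodes (see the mmchconfig command for which attributes may be set on a per-node basis):

mmchconfig maxFilesToCache=1000,maxStatCache=50000 -N nfsServerNodes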
GPFS does not support NFS V4 exporting of GPFS file systems from Linux nodes. NFS V3 is acceptable.

If you are running at SLES 9 SP 1, the kernel defines the sysctl variable fs.nfs.use_underlying_lock_ops, which determines whether the NFS lockd is to consult the file system when granting advisory byte-range locks. For distributed file systems like GPFS, this must be set to true (the default is false).
You can query the current setting by issuing the command:

sysctl fs.nfs.use_underlying_lock_ops

Alternatively, the fs.nfs.use_underlying_lock_ops = 1 record can be added to /etc/sysctl.conf. This record must be applied after initially booting the node, and after each reboot, by issuing the command:

sysctl -p

Because the fs.nfs.use_underlying_lock_ops variable is currently not available in SLES 9 SP 2 or later, when NFS-exporting a GPFS file system, ensure that your NFS server nodes are at the SP 1 level (unless this variable is made available in later service packs).

For additional considerations when NFS exporting your GPFS file system, refer to File system creation considerations in GPFS: Concepts, Planning, and Installation Guide.

AIX export considerations: AIX does not allow a file system to be exported by NFS V4 unless it supports NFS V4 ACLs.

NFS export considerations for versions prior to NFS V4: For NFS exported file systems, the version of NFS you are running with may have an impact on the number of inodes you need to cache, as set by both the maxStatCache and maxFilesToCache parameters on the mmchconfig command. The implementation of the ls command differs from NFS V2 to NFS V3. The performance of the ls command in NFS V3 in part depends on the caching ability of the underlying file system. Setting the cache large enough will prevent rereading inodes to complete an ls command, but will put more of a CPU load on the token manager.

Also, the clocks of all nodes in your GPFS cluster must be synchronized. If this is not done, NFS access to the data, as well as other GPFS file system operations, may be disrupted.

NFS V4 export considerations: For information on NFS V4, see the paper, NFS Version 4 Protocol, and other information found at: www.nfsv4.org.
To export a GPFS file system using NFS V4, there are two file system settings that must be in effect. These attributes can be queried using the mmlsfs command, and set using the mmcrfs and mmchfs commands.
1. The -D nfs4 flag is required. Conventional NFS access would not be blocked by concurrent file system reads or writes (this is the POSIX semantic). NFS V4, however, not only allows its requests to block if conflicting activity is happening, it insists on it. Since this is an NFS V4 specific requirement, it must be set before exporting a file system.
flag  value           description
----  --------------  -----------------------------------------------------
 -D   nfs4            File locking semantics in effect
2. The -k nfs4 or -k all flag is required. Initially, a file system has the -k posix setting, and only traditional GPFS ACLs are allowed. To export a file system using NFS V4, NFS V4 ACLs must be enabled. Since NFS V4 ACLs are vastly different and affect several characteristics of the file system objects (directories and individual files), they must be explicitly enabled. This is done either exclusively, by specifying -k nfs4, or by allowing all ACL types to be stored.
flag  value           description
----  --------------  -----------------------------------------------------
 -k   all             ACL semantics in effect
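Both attributes can be set with the mmchfs command before exporting; for example, assuming a file system named fs1 (a sketch; run it subject to the requirements described for mmchfs):

mmchfs fs1 -D nfs4 -k nfs4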
NFS can be restarted after the unmount completes. On Linux, issue this command:
/etc/rc.d/init.d/nfs start
not work implicitly, since the mount request is passed to JFS, which then produces an error. When specifying -fstype mmfs, the local soft-mount works because the mount is then passed to GPFS instead of JFS. A GPFS soft-mount does not automatically unmount. Setting -fstype nfs3 causes the local server mounts to always go through NFS. This allows you to have the same auto.map file on all nodes whether the server is local or not, and the automatic unmount will occur. If you want local soft-mounts of GPFS file systems while other nodes perform NFS mounts, you should have different auto.map files on the different classes of nodes. This should improve performance on the GPFS nodes, as they will not have to go through NFS.
In addition to the traditional exporting of GPFS file systems using NFS, GPFS allows you to configure a subset of the nodes in the cluster to provide a highly available solution for exporting GPFS file systems via NFS.

The participating nodes are designated as Cluster NFS (CNFS) member nodes and the entire setup is frequently referred to as CNFS or a CNFS cluster.

In this solution, all CNFS nodes export the same file systems to the NFS clients. When one of the CNFS nodes fails, the NFS serving load moves from the failing node to another node in the CNFS cluster. Failover is done using recovery groups to help choose the preferred node for takeover.

For more information about CNFS, see General Parallel File System: Advanced Administration Guide.
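As an illustrative sketch of how a node is designated as a CNFS member (the IP address and node name are invented for the example; see the Advanced Administration Guide for the supported procedure):

mmchnode --cnfs-interface=198.51.100.10 -N k145n07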
mmchnode Command on page 112
mmchnsd Command on page 116
mmchpolicy Command on page 119
mmcrcluster Command on page 121
mmcrfileset Command on page 125
mmcrfs Command on page 127
mmcrnsd Command on page 134
mmcrsnapshot Command on page 139
mmcrvsd Command on page 142
mmdefedquota Command on page 147
mmdefquotaoff Command on page 149
mmdefquotaon Command on page 151
mmdefragfs Command on page 154
mmdelacl Command on page 157
mmdeldisk Command on page 159
mmdelfileset Command on page 162
mmdelfs Command on page 165
mmdelnode Command on page 167
mmdelnsd Command on page 170
mmdelsnapshot Command on page 172
mmdf Command on page 174
Table 4. GPFS commands (continued)

Command                                  Purpose
mmeditacl Command on page 177            Creates or changes a GPFS access control list.
mmedquota Command on page 180            Sets quota limits.
mmexportfs Command on page 183           Export a file system from the cluster.
mmfsck Command on page 185               Checks and repairs a GPFS file system.
mmfsctl Command on page 190              Issues a file system control request.
mmgetacl Command on page 194             Displays the GPFS access control list of a file or directory.
mmgetstate Command on page 197           Displays the state of the GPFS daemon on one or more nodes.
mmimportfs Command on page 200           Import a file system into the cluster.
mmlinkfileset Command on page 203        Creates a junction that references the root directory of a GPFS fileset.
mmlsattr Command on page 205             Queries file attributes.
mmlscluster Command on page 207          Displays the current configuration data for a GPFS cluster.
mmlsconfig Command on page 209           Displays the configuration data for a GPFS cluster.
mmlsdisk Command on page 211             Displays the current configuration and state of disks in a file system.
mmlsfileset Command on page 214          Displays status and attributes of GPFS filesets.
mmlsfs Command on page 217               Displays file system attributes.
mmlsmgr Command on page 221              Displays which node is the file system manager for the specified file systems.
mmlsmount Command on page 223            Lists the nodes that have a given GPFS file system mounted.
mmlsnsd Command on page 226              Display NSD information for the GPFS cluster.
mmlspolicy Command on page 229           Displays policy information.
mmlsquota Command on page 231            Displays quota information for a user, group, or fileset.
mmlssnapshot Command on page 234         Displays GPFS snapshot information for the specified file system.
mmmount Command on page 236              Mounts GPFS file systems on one or more nodes in the cluster.
mmnsddiscover Command on page 238        Rediscovers paths to the specified network shared disks.
mmpmon Command on page 240               Monitors GPFS performance on a per-node basis.
mmputacl Command on page 245             Sets the GPFS access control list for the specified file or directory.
mmquotaoff Command on page 248           Deactivates quota limit checking.
mmquotaon Command on page 250            Activates quota limit checking.
mmremotecluster Command on page 252      Manages information about remote clusters.
mmremotefs Command on page 255           Manages the information needed for mounting remote GPFS file systems.
mmrepquota Command on page 258           Displays file system user, group, and fileset quotas.
mmrestorefs Command on page 261          Restores a file system from a GPFS snapshot.
mmrestripefile Command on page 264       Performs a repair operation over the specified list of files.
mmrestripefs Command on page 267         Rebalances or restores the replication factor of all files in a file system.
Table 4. GPFS commands (continued)

Command                                  Purpose
mmrpldisk Command on page 271            Replaces the specified disk.
mmshutdown Command on page 275           Unmounts all GPFS file systems and stops GPFS on one or more nodes.
mmsnapdir Command on page 277            Creates and deletes invisible directories that connect to the snapshots of a GPFS file system, and changes the name of the snapshots subdirectory.
mmstartup Command                        Starts the GPFS subsystem on one or more nodes.
mmtracectl Command on page 282           Sets up and enables GPFS tracing.
mmumount Command on page 285             Unmounts GPFS file systems on one or more nodes in the cluster.
mmunlinkfileset Command on page 287      Removes the junction to a GPFS fileset.
mmadddisk Command
Name
mmadddisk - Adds disks to a GPFS file system.
Synopsis
mmadddisk Device {DiskDesc[;DiskDesc...] | -F DescFile} [-a] [-r] [-v {yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmadddisk command to add disks to a GPFS file system. This command optionally rebalances an existing file system after adding disks when the -r flag is specified. The mmadddisk command does not require the file system to be unmounted before issuing the command. The file system can be in use while the command is run.

Device must be the first parameter. The -N parameter can be used only in conjunction with the -r option.

To add disks to a GPFS file system, you first must decide if you will:
1. Create new disks using the mmcrnsd command. You should also decide whether to use the rewritten disk descriptor file produced by the mmcrnsd command, or create a new list of disk descriptors. When using the rewritten file, the Disk Usage and Failure Group specifications will remain the same as specified on the mmcrnsd command.
2. Select disks no longer in use in any file system. Issue the mmlsnsd -F command to display the available disks.
Parameters
Device
   The device name of the file system to which the disks are added. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.

DiskDesc
   A descriptor for each disk to be added. Each descriptor is delimited by a semicolon (;) and the entire list must be enclosed in quotation marks (' or "). The current maximum number of disk descriptors that can be defined for any single file system is 268 million. However, to achieve this maximum limit, you must recompile GPFS. The actual number of disks in your file system may be constrained by products other than GPFS that you have installed. Refer to the individual product documentation.

   A disk descriptor is defined as (second, third and sixth fields reserved):
DiskName:::DiskUsage:FailureGroup::StoragePool
DiskName
   You must specify the name of the NSD previously created by the mmcrnsd command. For a list of available disks, issue the mmlsnsd -F command.

DiskUsage
   Specify a disk usage or accept the default:
   dataAndMetadata
      Indicates that the disk contains both data and metadata. This is the default for disks in the system pool.

   dataOnly
      Indicates that the disk contains data and does not contain metadata.

   metadataOnly
      Indicates that the disk contains metadata and does not contain data.

   descOnly
      Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.

FailureGroup
   A number identifying the failure group to which this disk belongs. You can specify any value from -1 to 4000 (where -1 indicates that the disk has no point of failure in common with any other disk). If you do not specify a failure group, the value defaults to the node number of the first NSD server defined in the NSD server list plus 4000. If you do not specify an NSD server list, the value defaults to -1. GPFS uses this information during data and metadata placement to assure that no two replicas of the same block are written in such a way as to become unavailable due to a single failure. All disks that are attached to the same NSD server or adapter should be placed in the same failure group.

   If replication of -m or -r is set to 2 for the file system, storage pools must have two failure groups for the commands to work properly.
StoragePool
   Specifies the storage pool to which the disk is to be assigned. If this name is not provided, the default is system.

   Only the system storage pool can contain metadataOnly, dataAndMetadata, or descOnly disks. Disks in other storage pools must be dataOnly.

-F DescFile
   Specifies a file containing a list of disk descriptors, one per line. You may use the rewritten DiskDesc file created by the mmcrnsd command or create your own file. When using the DiskDesc file created by the mmcrnsd command, the values supplied on input to the command for Disk Usage and FailureGroup are used. When creating your own file, you must specify these values or accept the system defaults.

-N {Node[,Node...] | NodeFile | NodeClass}
   Specifies the nodes that are to participate in the restripe of the file system after the specified disks have been made available for use by GPFS. This parameter can be used only in conjunction with the -r option. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the restripe of the file system).

   For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-a
   Specifies asynchronous processing. If this flag is specified, the mmadddisk command returns after the file system descriptor is updated and the rebalancing scan is started; it does not wait for rebalancing to finish. If no rebalancing is requested (the -r flag not specified), this option has no effect.

-r
   Rebalance all existing files in the file system to make use of new disks.
   Note: Rebalancing of files is an I/O intensive and time consuming operation, and is important only for file systems with large files that are mostly invariant. In many cases, normal file update and creation will rebalance your file system over time, without the cost of the rebalancing.

-v {yes | no}
   Verify that specified disks do not belong to an existing file system. The default is -v yes. Specify -v no only when you want to reuse disks that are no longer needed for an existing file system. If the command is interrupted for any reason, you must use the -v no option on the next invocation of the command.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmadddisk command.

You may issue the mmadddisk command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To add the disks whose descriptors are located in ./disk_pools to the file system fs1, and to rebalance the existing files after they are added, issue this command:
mmadddisk fs1 -F ./disk_pools -r
Restriping fs1 ...
Thu Feb 16 13:58:48 est 2006: mmcommon pushSdr_async: propagation started
Thu Feb 16 13:58:52 est 2006: mmcommon pushSdr_async: propagation completed; mmdsh rc = 0
GPFS: 6027-589 Scanning file system metadata, phase 1
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
68 % complete on Thu Feb 16 13:59:06 2006
100 % complete on Thu Feb 16 13:59:07 2006
GPFS: 6027-552 Scan completed successfully.
Done
See also
mmchdisk Command on page 97
mmcrnsd Command on page 134
mmdeldisk Command on page 159
mmlsdisk Command on page 211
mmlsnsd Command on page 226
Location
/usr/lpp/mmfs/bin
mmaddnode Command
Name
mmaddnode - Adds nodes to a GPFS cluster.
Synopsis
mmaddnode -N {NodeDesc[,NodeDesc...] | NodeFile}
Description
Use the mmaddnode command to add nodes to an existing GPFS cluster. On each new node, a mount point directory and character mode device is created for each GPFS file system.

You must follow these rules when adding nodes to a GPFS cluster:
v You may issue the command only from a node that already belongs to the GPFS cluster.
v While a node may mount file systems from multiple clusters, the node itself may only be added to a single cluster using the mmcrcluster or mmaddnode command.
v The nodes must be available for the command to be successful. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and reissue the command to add those nodes.
Parameters
-N NodeDesc[,NodeDesc...] | NodeFile
   Specifies node descriptors, which provide information about nodes to be added to the cluster.

   NodeFile
      Specifies a file containing a list of node descriptors (see below), one per line, to be added to the cluster.

   NodeDesc[,NodeDesc...]
      Specifies the list of nodes and node designations to be added to the GPFS cluster. Node descriptors are defined as:
NodeName:NodeDesignations:AdminNodeName
where:
1. NodeName is the hostname or IP address to be used by the GPFS daemons for node to node communication.

   The hostname or IP address must refer to the communications adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the host command to that original address. You may specify a node using any of these forms:

   Format            Example
   Short hostname    k145n01
   Long hostname     k145n01.kgn.ibm.com
   IP address        99.119.19.102

2. NodeDesignations is an optional, '-' separated list of node roles.
v manager | client
  Indicates whether a node is part of the pool of nodes from which configuration managers, file system managers, and token managers are selected. The default is client.
v quorum | nonquorum
  Indicates whether a node is counted as a quorum node. The default is nonquorum.
3. AdminNodeName is an optional field that consists of a node name to be used by the administration commands to communicate between nodes. If AdminNodeName is not specified, the NodeName value is used.
You must provide a NodeDesc for each node to be added to the GPFS cluster.
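The node descriptors can also be placed in a file. For example, a descriptor file named /tmp/nodes.list (the file name and node names here are illustrative) might contain:
k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com
k164n06.kgn.ibm.com:quorum-manager
To add the nodes listed in the file, you could then issue:
mmaddnode -N /tmp/nodes.list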
Options
NONE
Exit status
0  Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmaddnode command. You may issue the mmaddnode command from any node in the GPFS cluster. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To add nodes k164n06 and k164n07 as quorum nodes, designating k164n06 to be available as a manager node, issue this command:
mmaddnode -N k164n06:quorum-manager,k164n07:quorum
Node  Daemon node name     IP address     Admin node name      Designation
---------------------------------------------------------------------------
   1  k164n04.kgn.ibm.com  198.117.68.68  k164n04.kgn.ibm.com  quorum
   2  k164n07.kgn.ibm.com  198.117.68.71  k164n07.kgn.ibm.com  quorum
   3  k164n06.kgn.ibm.com  198.117.68.70  k164n06.kgn.ibm.com  quorum-manager
See also
mmchconfig Command on page 90
mmcrcluster Command on page 121
mmchcluster Command on page 87
mmdelnode Command on page 167
mmlscluster Command on page 207
Location
/usr/lpp/mmfs/bin
mmapplypolicy Command
Name
mmapplypolicy - Deletes files or migrates file data between storage pools in accordance with policy rules.
Synopsis
mmapplypolicy {Device | Directory} [-I {yes | defer | test}] [-N {Node[,Node...] | NodeFile | NodeClass}] [-P PolicyFile] [-D yyyy-mm-dd[@hh:mm[:ss]]] [-L n] [-M Name=Value...] [-S snapshotName] [-s WorkDirectory]
Description
Use the mmapplypolicy command to manage the placement and replication of data within GPFS storage pools. It can also be used to delete files from GPFS. You may issue the mmapplypolicy command from any node in the GPFS cluster that has the file system mounted. Any given file is a potential candidate for at most one MIGRATE or DELETE operation during one invocation of the mmapplypolicy command. A file that matches an EXCLUDE rule is not subject to any subsequent MIGRATE or DELETE rules. You should carefully consider the order of rules within a policy to avoid unintended consequences. For detailed information on GPFS policies, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
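For example, a minimal policy file sketching the effect of rule order might look like this (the rule names, pool names, and name patterns are illustrative only):
RULE 'keep' EXCLUDE WHERE NAME LIKE '%.keep'
RULE 'cleanup' DELETE WHERE NAME LIKE '%.tmp'
RULE 'mig' MIGRATE FROM POOL 'system' TO POOL 'sp1' WHERE (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '30' DAYS
Because the EXCLUDE rule comes first, files matching %.keep are shielded from the subsequent DELETE and MIGRATE rules.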
Parameters
Device
   Specifies the device name of the file system from which files are to be deleted or migrated. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. If specified, this must be the first parameter.
Directory
   Specifies the fully-qualified path name of a GPFS file system subtree from which files are to be deleted or migrated. If specified, this must be the first parameter.
-I {yes | defer | test}
   Specifies what actions the mmapplypolicy command performs on files:
   yes
      Indicates that all applicable MIGRATE and DELETE policy rules are run, and the data movement between pools is done during the processing of the mmapplypolicy command. This is the default action.
   defer
      Indicates that all applicable MIGRATE and DELETE policy rules are run, but actual data movement between pools is deferred until the next mmrestripefs or mmrestripefile command.
   test
      Indicates that all policy rules are evaluated, but the mmapplypolicy command only displays the actions that would be performed had -I defer or -I yes been specified. There is no actual deletion of files or data movement between pools. This option is intended for testing the effects of particular policy rules.
-N {Node[,Node...] | NodeFile | NodeClass}
   Specifies the list of nodes that will run parallel instances of policy code in the GPFS home cluster. This command supports all defined node classes. The default is to run on the node where the mmapplypolicy command is running.
   For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
-P PolicyFile
   Specifies the name of the file containing the policy rules to be applied. If not specified, the policy rules currently in effect for the file system are used.
Options
-D yyyy-mm-dd[@hh:mm[:ss]]
   Specifies a date and optionally a (UTC) time as year-month-day at hour:minute:second. The mmapplypolicy command evaluates policy rules as if it were running on the date and time specified by the -D flag. This can be useful for planning or testing policies, to see how the mmapplypolicy command would act in the future. If this flag is omitted, the mmapplypolicy command uses the current date and (UTC) time. If a date is specified but not a time, the time is assumed to be 00:00:00.
-L n
   Controls the level of information displayed by the mmapplypolicy command. Larger values indicate the display of more detailed information. These terms are used:
   candidate file
      A file that matches a MIGRATE or DELETE policy rule.
   chosen file
      A candidate file that has been scheduled for migration or deletion.
   These are the valid values for n:
   0  Displays only serious errors.
   1  Displays some information as the command runs, but not for each file. This is the default.
   2  Displays each chosen file and the scheduled migration or deletion action.
   3  All of the above, plus displays each candidate file and the applicable rule.
   4  All of the above, plus displays each explicitly EXCLUDEed file, and the applicable rule.
   5  All of the above, plus displays the attributes of candidate and EXCLUDEed files.
   6  All of the above, plus displays files that are not candidate files, and their attributes.
   For examples and more information on this flag, see the section: The mmapplypolicy -L command in General Parallel File System: Problem Determination Guide.
-M Name=Value...
   Indicates a string substitution that will be made in the text of the policy rules before the rules are interpreted. This allows the administrator to reuse a single policy rule file for incremental backups without editing the file for each backup.
-S snapshotName
   Specifies the name of a snapshot. The name appears as a subdirectory of the .snapshots directory in the file system root.
-s WorkDirectory
   Specifies the directory to be used for temporary storage during mmapplypolicy command processing. The default directory is /tmp. The mmapplypolicy command stores lists of candidate and chosen files in temporary files within this directory.
   When you execute mmapplypolicy, it creates several temporary files and file lists. If the specified file system or directories contain many files, this can require a significant amount of temporary storage. The required storage is proportional to the number of files (NF) being acted on and the average length of the path name to each file (AVPL). To make a rough estimate of the space required, estimate NF and assume an AVPL of 80 bytes. With an AVPL of 80, the space required is roughly (300 × NF) bytes of temporary space. Although /tmp is the default path name to a temporary directory, an alternative path name can be specified using the -s command line option.
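   For example, if mmapplypolicy is expected to act on roughly one million files, the temporary space needed in the work directory would be roughly 300 × 1,000,000 bytes, or about 300 MB.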
Exit status
0  Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmapplypolicy command. You may issue the mmapplypolicy command from any node in the GPFS cluster that has the file systems mounted. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. This command displays the actions that would occur if a policy were applied, but does not apply the policy at this time:
mmapplypolicy fs1 -P policyfile -I test
Chose to migrate 16KB: 2 of 2 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 16KB: 2 of 2 candidates;
0KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
sp1      2528    9765632  0.025887%
system   154896  9765632  1.586134%
GPFS Current Data Pool Utilization in KB and %
system  165376  17795840  0.929296%
Loaded policy rules from policyfile.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2007-06-20@18:44:26 UTC
parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules
RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/dsmcmd' OPTS '-server k164n04.kgn.ibm.com'
RULE 'move data to hsm' MIGRATE TO POOL 'hsm' WHERE name like '%tf%'
.
.
.
4. Additional examples of GPFS policies and using the mmapplypolicy command are in the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
See also
mmchpolicy Command on page 119
mmcrsnapshot Command on page 139
mmlspolicy Command on page 229
mmlssnapshot Command on page 234
mmsnapdir Command on page 277
Location
/usr/lpp/mmfs/bin
mmauth Command
Name
mmauth Manages secure access to GPFS file systems.
Synopsis
mmauth genkey {new | commit}
Or,
mmauth add RemoteClusterName -k KeyFile -l CipherList
Or,
mmauth update RemoteClusterName -C NewClusterName -k KeyFile [-l CipherList]
Or,
mmauth delete {RemoteClusterName | all}
Or,
mmauth grant {RemoteClusterName | all} -f {Device | all} [-a {rw | ro}] [-r {uid:gid | no}]
Or,
mmauth deny {RemoteClusterName | all} -f {Device | all}
Or,
mmauth show [RemoteClusterName | all]
Description
The mmauth command prepares a cluster to grant secure access to file systems owned locally. The mmauth command also prepares a cluster to receive secure access to file systems owned by another cluster. Use the mmauth command to generate a public/private key pair for the local cluster. A public/private key pair must be generated on both the cluster owning the file system and the cluster desiring access to the file system. The administrators of the clusters are responsible for exchanging the public portion of the public/private key pair. Use the mmauth command to add or delete permission for a cluster to mount file systems owned by the local cluster.
When a cluster generates a new public/private key pair, administrators of clusters participating in remote file system mounts are responsible for exchanging their respective public key file /var/mmfs/ssl/id_rsa.pub generated by this command. The administrator of a cluster desiring to mount a file system from another cluster must provide the received key file as input to the mmremotecluster command. The administrator of a cluster allowing another cluster to mount a file system must provide the received key file to the mmauth command.
The keyword appearing after mmauth determines which action is performed:
add
   Adds a cluster and its associated public key to the list of clusters authorized to connect to this cluster for the purpose of mounting file systems owned by this cluster.
delete
   Deletes a cluster and its associated public key from the list of clusters authorized to mount file systems owned by this cluster.
deny
   Denies a cluster the authority to mount a specific file system owned by this cluster.
genkey {new | commit}
   new
      Generates a new public/private key pair for this cluster. The key pair is placed in /var/mmfs/ssl. This must be done at least once before cipherList, the GPFS configuration parameter that enables GPFS with OpenSSL, is set. The new key is in addition to the currently in effect committed key. Both keys are accepted until the administrator runs mmauth genkey commit.
   commit
      Commits the new public/private key pair for this cluster. Once mmauth genkey commit is run, the old key pair will no longer be accepted, and remote clusters that have not updated their keys (by running mmauth update or mmremotecluster update) will be disconnected.
grant
   Allows a cluster to mount a specific file system owned by this cluster.
show
   Shows the list of clusters authorized to mount file systems owned by this cluster.
update
   Updates the public key and other information associated with a cluster authorized to mount file systems owned by this cluster. When the local cluster name (or '.') is specified, mmauth update -l can be used to set the cipherList value for the local cluster. Note that you cannot use this command to change the name of the local cluster. Use the mmchcluster command for this purpose.
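For example, when rotating the key pair of the cluster that owns the file systems, the sequence might look like this; the commit should be issued only after all remote clusters have obtained the new public key and run mmauth update or mmremotecluster update:
mmauth genkey new
mmauth genkey commit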
Parameters
RemoteClusterName Specifies the remote cluster name requesting access to local GPFS file systems. The value all indicates all remote clusters defined to the local cluster.
Options
-a {rw | ro}
   The type of access allowed:
   ro  Specifies read-only access.
   rw  Specifies read/write access. This is the default.
-C NewClusterName
   Specifies a new, fully-qualified cluster name for the already-defined cluster RemoteClusterName.
-f Device
   The device name for a file system owned by this cluster. The Device argument is required. If all is specified, the command applies to all file systems owned by this cluster at the time that the command is issued.
-k KeyFile
   Specifies the public key file generated by the mmauth command in the cluster requesting to remotely mount the local GPFS file system.
-l CipherList
   Specifies the cipher list to be associated with the cluster specified by RemoteClusterName, when connecting to this cluster for the purpose of mounting file systems owned by this cluster.
   See the Frequently Asked Questions at: publib.boulder.ibm.com/infocenter/clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html for a list of the ciphers supported by GPFS.
-r {uid:gid | no}
   Specifies a root credentials remapping (root squash) option. The UID and GID of all processes with root credentials from the remote cluster will be remapped to the specified values. The default is not to remap the root UID and GID. The uid and gid must be specified as unsigned integers or as symbolic names that can be resolved by the operating system to a valid UID and GID. Specifying no, off, or DEFAULT turns off the remapping. For more information, see General Parallel File System: Advanced Administration Guide and search on root squash.
Exit status
0  Successful completion. After a successful completion of the mmauth command, the configuration change request will have been propagated to all nodes in the cluster.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmauth command. You may issue the mmauth command from any node in the GPFS cluster.
Examples
1. This is an example of an mmauth genkey new command:
mmauth genkey new
mmauth: Propagating the changes to all affected nodes. This is an asynchronous process.
6. This is an example of how to set or change the cipher list for the local cluster:
mmauth update . -l NULL-SHA
For clustB.kgn.ibm.com, the mmauth genkey new command has been issued, but the mmauth genkey commit command has not yet been issued. For more information on the SHA digest, see General Parallel File System: Problem Determination Guide and search on SHA digest.
8. This is an example of an mmauth deny command:
mmauth deny clustA.kgn.ibm.com -f all
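9. To allow the cluster clustA.kgn.ibm.com to mount the file system fs1 owned by this cluster with read-only access (the cluster and file system names are illustrative), a command of this form could be used:
mmauth grant clustA.kgn.ibm.com -f fs1 -a ro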
See also
mmremotefs Command on page 255
mmremotecluster Command on page 252
Accessing GPFS file systems from other GPFS clusters in General Parallel File System: Advanced Administration Guide.
Location
/usr/lpp/mmfs/bin
mmbackup Command
Name
mmbackup Backs up a GPFS file system to a backup server.
Synopsis
mmbackup Device -n ControlFile [-t {full | incremental}] [-s SortDir]
Or,
mmbackup Device -R [-s SortDir]
Description
Use the mmbackup command to back up a GPFS file system to a backup server. mmbackup takes a temporary snapshot named .mmbuSnapshot of the specified file system, and backs up this snapshot to a back-end data store. Accordingly, the files backed up by the command will be stored in the directory /Device/.snapshots/.mmbuSnapshot in the remote data store. This command may be issued from any GPFS node in the cluster to which the file system being backed up belongs, and on which the file system is mounted.
Parameters
Device
   The device name for the file system to be backed up. This must be the first parameter and must be fully-qualified, such as /dev/fs0. The device name must be specified; there is no default value.
-n ControlFile
   Specifies the file containing the backup control information. The file must be in the present working directory or the path must be fully qualified. Each piece of control information must be on a separate line and correctly qualified. Comment lines are allowed and must begin with a # in column 1. Empty lines may not contain any blank characters. Valid lines either contain a # in column 1 indicating a comment, an = indicating a value is being set, or no characters at all.
   This option may be specified only if the backup type is full or incremental. If the -R option has been specified, this information is obtained from the control information specified on the earlier full or incremental mmbackup command that completed with partial success.
   The allowable qualifiers in the control file are:
   serverName
      The name of the node specified as the backup server, qualified with serverName=. The backup server node may or may not be a GPFS node, although performance may be improved if it is also a backup client node. You may specify only one backup server.
   clientName
      The backup clients, one per line, qualified with clientName=. The backup client nodes must be members of the GPFS cluster where the file system is mounted. For improved performance it is suggested that multiple backup client nodes be specified. The maximum number of backup clients supported is 32.
   numberOfProcessesPerClient
      The number of processes per client, qualified with numberOfProcessesPerClient=. The number of processes per client may be specified only once.
Options
-R
   Indicates to resume the previous backup that failed with a return code of 1 (partial success). If the previous backup failed with a return code of 2 or succeeded with a return code of 0, this option does not succeed and a new full or incremental backup must be initiated.
-s SortDir
   Specifies the directory to be used by the sort command for temporary data. The default is /tmp.
-t {full | incremental}
   Specifies whether to perform a full backup of all of the files in the file system, or an incremental backup of only those files that have changed since the last backup was performed. The default is -t incremental.
Exit status
0  Successful completion.
1  Partially successful completion. Not all of the eligible files were successfully backed up. The command may be resumed by specifying the -R option.
2  A failure occurred that cannot be corrected by resuming the backup. A new backup must be initiated.
Security
You must have root authority to run the mmbackup command. You may issue the mmbackup command from any node in the cluster where the file system being backed up is mounted. When using the rcp and rsh commands for remote communication, a properly-configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
These examples use a control file named inputctrl32, which specifies a backup server, three backup clients, and two processes per client as shown here:
# backup server
serverName=k145n06.kgn.ibm.com
# backup clients
clientName=k14n04.kgn.ibm.com
clientName=k14n05.kgn.ibm.com
clientName=k14n06.kgn.ibm.com
# number of processes per client
numberOfProcessesPerClient=2
1. To perform a full backup of the file system /dev/fs0 from node k145n04, issue this command:
mmbackup /dev/fs0 -n inputctrl32 -t full
2. To perform an incremental backup of the file system /dev/fs0 from node k145n04, issue this command:
mmbackup /dev/fs0 -n inputctrl32 -t incremental
3. In an unsuccessful attempt to perform a full backup of the file system /dev/fs0 from node k145n04, where the user had issued this command:
mmbackup /dev/fs0 -n inputctrl32 -t full
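If the command ended with a return code of 1 (partial success), the backup could later be resumed, rather than restarted, with a command of this form:
mmbackup /dev/fs0 -R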
Location
/usr/lpp/mmfs/bin
mmchattr Command
Name
mmchattr Changes the replication attributes, storage pool assignment, and I/O caching policy for one or more GPFS files.
Synopsis
mmchattr [-m MetadataReplicas] [-M MaxMetadataReplicas] [-r DataReplicas] [-R MaxDataReplicas] [-P DataPoolName] [-D {yes | no}] [-I {yes | defer}] Filename [Filename...]
Description
Use the mmchattr command to change the replication attributes, storage pool assignment, and I/O caching policy for files in the GPFS file system.
The replication factor must be less than or equal to the maximum replication factor for the file. If insufficient space is available in the file system to increase the number of replicas to the value requested, the mmchattr command ends. However, some blocks of the file may have their replication factor increased after the mmchattr command ends. If additional free space becomes available in the file system at a later time (when, for example, you add another disk to the file system), you can then issue the mmrestripefs command with the -r or -b option to complete the replication of the file. The mmrestripefile command can be used in a similar manner. You can use the mmlsattr command to display the replication values.
Data of a file is stored in a specific storage pool. A storage pool is a collection of disks or RAIDs with similar properties. Because these storage devices have similar properties, you can manage them as a group. You can use storage pools to:
v Partition storage for the file system
v Assign file storage locations
v Improve system performance
v Improve system reliability
The Direct I/O caching policy bypasses file cache and transfers data directly from disk into the user space buffer, as opposed to using the normal cache policy of placing pages in kernel memory. Applications with poor cache hit rates or very large I/Os may benefit from the use of Direct I/O.
The mmchattr command can be run against a file in use.
You must have write permission for the files whose attributes you are changing.
Parameters
Filename [Filename ...] The name of one or more files to be changed. Delimit each file name by a space. Wildcard characters are supported in file names, for example, project*.sched.
Options
-D {yes | no}
   Enable or disable the Direct I/O caching policy for files.
-I {yes | defer}
   Specifies if replication and migration between pools is to be performed immediately (-I yes), or deferred until a later call to mmrestripefs or mmrestripefile (-I defer). By deferring the updates to more than one file, the data movement may be done in parallel. The default is -I yes.
-m MetadataReplicas
   Specifies how many copies of the file system's metadata to create. Enter a value of 1 or 2, but not greater than the value of the MaxMetadataReplicas attribute of the file.
-M MaxMetadataReplicas
   Specifies the maximum number of copies of indirect blocks for a file. Space is reserved in the inode for all possible copies of pointers to indirect blocks. Enter a value of 1 or 2. This value cannot be less than the value of the DefaultMetadataReplicas attribute of the file.
-P DataPoolName
   Changes the file's assigned storage pool to the specified DataPoolName. The caller must have superuser or root privileges to change the assigned storage pool.
-r DataReplicas
   Specifies how many copies of the file data to create. Enter a value of 1 or 2. This value cannot be greater than the value of the MaxDataReplicas attribute of the file.
-R MaxDataReplicas
   Specifies the maximum number of copies of data blocks for a file. Space is reserved in the inode and indirect blocks for all possible copies of pointers to data blocks. Enter a value of 1 or 2. This value cannot be less than the value of the DefaultDataReplicas attribute of the file.
Exit status
0  Successful completion.
nonzero  A failure has occurred.
Security
You must have write access to the file to run the mmchattr command. You may issue the mmchattr command only from a node in the GPFS cluster where the file system is mounted.
Examples
1. To change the metadata replication factor to 2 and the data replication factor to 2 for the project7.resource file in file system fs1, issue this command:
mmchattr -m 2 -r 2 /fs1/project7.resource
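To also enable the Direct I/O caching policy for the same file (the file name is reused from the previous example for illustration), a command of this form could be used:
mmchattr -D yes /fs1/project7.resource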
2. Migrating data from one storage pool to another using the mmchattr command with the -I defer option, or the mmapplypolicy command with the -I defer option, will cause the data to be ill-placed. This means that the storage pool assignment for the file has changed, but the file data has not yet been migrated to the assigned storage pool. The mmlsattr -L command will show ill-placed flags on the files that are ill-placed. The mmrestripefs or mmrestripefile command can be used to migrate data to the correct storage pool, and the ill-placed flag will be cleared. This is an example of an ill-placed file:
mmlsattr -L file1
See also
mmcrfs Command on page 127
mmlsattr Command on page 205
mmlsfs Command on page 217
Location
/usr/lpp/mmfs/bin
mmchcluster Command
Name
mmchcluster Changes GPFS cluster configuration data.
Synopsis
mmchcluster {[-p PrimaryServer] [-s SecondaryServer]}
Or,
mmchcluster -p LATEST
Or,
mmchcluster {[-r RemoteShellCommand] [-R RemoteFileCopyCommand]}
Or,
mmchcluster -C ClusterName
Description
The mmchcluster command serves several purposes. You can use it to:
1. Change the primary or secondary GPFS cluster configuration server.
2. Synchronize the primary GPFS cluster configuration server.
3. Change the remote shell and remote file copy programs to be used by the nodes in the cluster.
4. Change the cluster name.
5. Specify node interfaces to be used by the GPFS administration commands.
Note that the mmchnode command replaces the mmchcluster -N command for changes to node data related to the cluster configuration.
To display current system information for the cluster, issue the mmlscluster command.
For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
When issuing the mmchcluster command with the -p or -s options, the specified nodes must be available in order for the command to succeed. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and reissue the command.
Attention: The mmchcluster command, when issued with either the -p or -s option, is designed to operate in an environment where the current primary and secondary cluster configuration servers are not available. As a result, the command can run without obtaining its regular serialization locks. To assure a smooth transition to a new cluster configuration server, no other GPFS commands (mm commands) should be running when the command is issued, nor should any other command be issued until the mmchcluster command has successfully completed.
Parameters
-C ClusterName
   Specifies a new name for the cluster. If the user-provided name contains dots, it is assumed to be a fully-qualified domain name. Otherwise, to make the cluster name unique, the domain of the primary configuration server will be appended to the user-provided name.
   Since each cluster is managed independently, there is no automatic coordination and propagation of changes between clusters like there is between the nodes within a cluster. This means that if you change the name of the cluster, you should notify the administrators of all other GPFS clusters that can mount your file systems so that they can update their own environments. See the mmauth, mmremotecluster, and mmremotefs commands.
-p PrimaryServer
   Change the primary server node for the GPFS cluster data. This may be specified as a short or long node name, an IP address, or a node number.
   LATEST
      Synchronize all of the nodes in the GPFS cluster, ensuring they are using the most recently specified primary GPFS cluster configuration server. If an invocation of the mmchcluster command fails, you are prompted to reissue the command and specify LATEST on the -p option to synchronize all of the nodes in the GPFS cluster. Synchronization provides for all nodes in the GPFS cluster to use the most recently specified primary GPFS cluster configuration server.
-s SecondaryServer
   Change the secondary server node for the GPFS cluster data. To remove the secondary GPFS server and continue operating without it, specify a null string ("") as the parameter. This may be specified as a short or long node name, an IP address, or a node number.
Options
-R RemoteFileCopyCommand
   Specifies the fully-qualified path name for the remote file copy program to be used by GPFS. The remote copy command must adhere to the same syntax format as the rcp command, but may implement an alternate authentication mechanism.
-r RemoteShellCommand
   Specifies the fully-qualified path name for the remote shell program to be used by GPFS. The remote shell command must adhere to the same syntax format as the rsh command, but may implement an alternate authentication mechanism.
Exit status
0  Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmchcluster command. You may issue the mmchcluster command from any node in the GPFS cluster. A properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To change the primary GPFS server for the cluster, issue this command:
mmchcluster -p k164n06
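Similarly, to designate ssh and scp as the remote shell and remote file copy programs (the installed path names shown are typical but may differ on your system), a command of this form could be used:
mmchcluster -r /usr/bin/ssh -R /usr/bin/scp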
See also
mmaddnode Command on page 68
mmchnode Command on page 112
mmcrcluster Command on page 121
mmdelnode Command on page 167
mmlscluster Command on page 207
mmremotecluster Command on page 252
Location
/usr/lpp/mmfs/bin
mmchconfig Command
Name
mmchconfig Changes GPFS configuration parameters.
Synopsis
mmchconfig Attribute=value[,Attribute=value...] [-i | -I] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmchconfig command to change the GPFS configuration attributes on a single node, a set of nodes, or globally for the entire cluster. The Attribute=value options must come before any operand.
When changing both maxblocksize and pagepool, the command fails unless these conventions are followed:
v When increasing the values, pagepool must be specified first.
v When decreasing the values, maxblocksize must be specified first.
Results
The configuration is updated on each node in the GPFS cluster.
Parameters
-N {Node[,Node...] | NodeFile | NodeClass}
   Specifies the set of nodes to which the configuration changes apply. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
   The -N flag is valid only for the automountDir, dataStructureDump, designation, dmapiEventTimeout, dmapiMountTimeout, dmapiSessionFailureTimeout, maxblocksize, maxFilesToCache, maxStatCache, nsdServerWaitTimeWindowOnMount, nsdServerWaitTimeForMount, pagepool, prefetchThreads, unmountOnDiskFail, and worker1Threads attributes.
   This command does not support a NodeClass of mount.
Options
Attribute=value
   Specifies the name of the attribute to be changed and its associated value. More than one attribute and value pair, in a comma-separated list, can be changed with one invocation of the command.
   To restore the GPFS default setting for any given attribute, specify DEFAULT as its value.
-I
   Specifies that the changes take effect immediately, but do not persist when GPFS is restarted. This option is valid only for the dataStructureDump, dmapiEventTimeout, dmapiMountTimeout, dmapiSessionFailureTimeout, maxMBpS, pagepool, unmountOnDiskFail, and verbsRdma attributes.
-i
   Specifies that the changes take effect immediately and are permanent. This option is valid only for the dataStructureDump, dmapiEventTimeout, dmapiMountTimeout, dmapiSessionFailureTimeout, maxMBpS, pagepool, unmountOnDiskFail, and verbsRdma attributes.
autoload
   Starts GPFS automatically whenever the nodes are rebooted. Valid values are yes or no.
automountDir
   Specifies the directory to be used by the Linux automounter for GPFS file systems that are being mounted automatically. The default directory is /gpfs/automountdir. This parameter does not apply to AIX environments.
cipherList
   Controls whether GPFS network communications are secured. If cipherList is not specified, or if the value DEFAULT is specified, GPFS does not authenticate or check authorization for network connections. If the value AUTHONLY is specified, GPFS does authenticate and check authorization for network connections, but data sent over the connection is not protected. Before setting cipherList for the first time, you must establish a public/private key pair for the cluster by using the mmauth genkey new command.
   See the Frequently Asked Questions at: publib.boulder.ibm.com/infocenter/clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html for a list of the ciphers supported by GPFS.
cnfsMountdPort
   Specifies the port number to be used for rpc.mountd. See General Parallel File System: Advanced Administration Guide for restrictions and additional information.
cnfsNFSDprocs
   Specifies the number of nfsd kernel threads. The default is 32.
cnfsSharedRoot
   Specifies a directory in a GPFS file system to be used by the clustered NFS subsystem. See General Parallel File System: Advanced Administration Guide for restrictions and additional information.
cnfsVIP
   Specifies a virtual DNS name for the list of CNFS IP addresses assigned to the nodes with the mmchnode command. This allows NFS clients to be distributed among the CNFS nodes using DNS round robin. For additional information, see General Parallel File System: Advanced Administration Guide.
dataStructureDump
   Specifies a path for the storage of dumps. The default is to store dumps in /tmp/mmfs. Specify no to not store dumps.
   It is suggested that you create a directory for the placement of certain problem determination information. This can be a symbolic link to another location if more space can be found there. Do not place it in a GPFS file system, because it might not be available if GPFS fails. If a problem occurs, GPFS may write 200 MB or more of problem determination data into the directory. These files must be manually removed when problem determination is complete. This should be done promptly so that a NOSPACE condition is not encountered if another failure occurs.
defaultMountDir
   Specifies the default parent directory for GPFS file systems. The default value is /gpfs. If an explicit mount directory is not provided with the mmcrfs, mmchfs, or mmremotefs command, the default mount point will be set to DefaultMountDir/DeviceName.
designation
   This option is no longer valid. Use the mmchnode Command on page 112 instead.
dmapiEventTimeout
   Controls the blocking of file operation threads of NFS, while in the kernel waiting for the handling of a DMAPI synchronous event. The parameter value is the maximum time, in milliseconds, the thread will block. When this time expires, the file operation returns ENOTREADY, and the event continues asynchronously. The NFS server is expected to repeatedly retry the operation, which eventually will find the response of the original event and continue. This mechanism applies only to read, write, and truncate event types, and only when such events come from NFS server threads.
   The timeout value is given in milliseconds. The value 0 indicates immediate timeout (fully asynchronous event). A value greater than or equal to 86400000 (which is 24 hours) is considered infinity (no timeout, fully synchronous event). The default value is 86400000.
   For further information regarding DMAPI for GPFS, see General Parallel File System: Data Management API Guide.
dmapiMountTimeout
   Controls the blocking of mount operations, waiting for a disposition for the mount event to be set. This timeout is activated, at most once on each node, by the first external mount of a file system that has DMAPI enabled, and only if there has never before been a mount disposition. Any mount operation on this node that starts while the timeout period is active will wait for the mount disposition. The parameter value is the maximum time, in seconds, that the mount operation will wait for a disposition. When this time expires and there is still no disposition for the mount event, the mount operation fails, returning the EIO error.
   The timeout value is given in full seconds. The value 0 indicates immediate timeout (immediate failure of the mount operation). A value greater than or equal to 86400 (which is 24 hours) is considered infinity (no timeout, indefinite blocking until there is a disposition). The default value is 60.
   For further information regarding DMAPI for GPFS, see General Parallel File System: Data Management API Guide.
dmapiSessionFailureTimeout
   Controls the blocking of file operation threads, while in the kernel, waiting for the handling of a DMAPI synchronous event that is enqueued on a session that has experienced a failure. The parameter value is the maximum time, in seconds, the thread will wait for the recovery of the failed session. When this time expires and the session has not yet recovered, the event is cancelled and the file operation fails, returning the EIO error.
   The timeout value is given in full seconds. The value 0 indicates immediate timeout (immediate failure of the file operation). A value greater than or equal to 86400 (which is 24 hours) is considered infinity (no timeout, indefinite blocking until the session recovers). The default value is 0.
   For further information regarding DMAPI for GPFS, see General Parallel File System: Data Management API Guide.
failureDetectionTime
   Used when persistent reserve (PR) is enabled; it indicates to GPFS the amount of time it will take to detect that a node has failed.
maxblocksize
   Changes the maximum file system block size. Valid values are 64 KB, 256 KB, 512 KB, 1 MB, 2 MB, and 4 MB. The default value is 1 MB. Specify this value with the character K or M, for example 512K. File systems with block sizes larger than the specified value cannot be created or mounted unless the block size is increased.
maxFilesToCache
   Specifies the number of inodes to cache for recently used files that have been closed. Storing a file's inode in cache permits faster re-access to the file. The default is 1000, but increasing this number may improve throughput for workloads with high file reuse. However, increasing this number excessively may cause paging at the file system manager node. The value should be large enough to handle the number of concurrently open files plus allow caching of recently used files.
maxMBpS
   Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. The default is 150 MB per second. The value is used in calculating the amount of I/O that can be done to effectively prefetch data for readers and write-behind data from writers. By lowering this value, you can artificially limit how much I/O one node can put on all of the disk servers. This is useful in environments in which a large number of nodes can overrun a few virtual shared disk servers. Setting this value too high usually does not cause problems because of other limiting factors, such as the size of the pagepool, the number of prefetch threads, and so forth.
maxStatCache
   Specifies the number of inodes to keep in the stat cache. The stat cache maintains only enough inode information to perform a query on the file system. The default value is 4 × maxFilesToCache.
nsdServerWaitTimeForMount
   When mounting a file system whose disks depend on NSD servers, this option specifies the number of seconds to wait for those servers to come up. The decision to wait is controlled by the criteria managed by the nsdServerWaitTimeWindowOnMount option.
   Valid values are between 0 and 1200 seconds. The default is 300. A value of zero indicates that no waiting is done. The interval for checking is 10 seconds. If nsdServerWaitTimeForMount is 0, nsdServerWaitTimeWindowOnMount has no effect.
   The mount thread waits when the daemon delays for safe recovery. The mount wait for NSD servers to come up, which is covered by this option, occurs after expiration of the recovery wait allows the mount thread to proceed.
nsdServerWaitTimeWindowOnMount
   Specifies a window of time (in seconds) during which a mount can wait for NSD servers as described for the nsdServerWaitTimeForMount option. The window begins when quorum is established (at cluster startup or subsequently), or at the last known failure times of the NSD servers required to perform the mount. Valid values are between 1 and 1200 seconds. The default is 600. If nsdServerWaitTimeForMount is 0, nsdServerWaitTimeWindowOnMount has no effect.
   When a node rejoins the cluster after having been removed for any reason, the node resets all the failure time values that it knows about. Therefore, when a node rejoins the cluster it believes that the NSD servers have not failed. From the node's perspective, old failures are no longer relevant.
   GPFS checks the cluster formation criteria first. If that check falls outside the window, GPFS then checks for NSD server fail times being within the window.
pagepool
   Changes the size of the cache on each node. The default value is 64 MB. The minimum allowed value is 4 MB. The maximum GPFS pagepool size can be as large as 256 GB on 64-bit Linux systems and 64-bit AIX systems. Specify this value with the character M, for example, 128M.
prefetchThreads
   Controls the maximum possible number of threads dedicated to prefetching data for files that are read sequentially, or to handle sequential write-behind. Functions in the GPFS daemon dynamically determine the actual degree of parallelism for prefetching data. The default value is 72. The minimum value is 2. The maximum value of prefetchThreads plus worker1Threads is:
   v 164 on 32-bit kernels
   v 550 on 64-bit kernels
subnets
   Specifies subnets used to communicate between nodes in a GPFS cluster or a remote GPFS cluster. The subnets option must use the following format:
subnets="Subnet[/ClusterName[;ClusterName...][ Subnet[/ClusterName[;ClusterName...]...]"
   The order in which you specify the subnets determines the order that GPFS uses these subnets to establish connections to the nodes within the cluster. For example, subnets=192.168.2.0 refers to IP addresses 192.168.2.0 through 192.168.2.255.
   This feature cannot be used to establish fault tolerance or automatic failover. If the interface corresponding to an IP address in the list is down, GPFS does not use the next one on the list. For more information about subnets, see General Parallel File System: Advanced Administration Guide and search on Using remote access with public and private IP addresses.
tiebreakerDisks
   Controls whether GPFS will use the node quorum with tiebreaker algorithm in place of the regular node based quorum algorithm. See General Parallel File System: Concepts, Planning, and Installation Guide and search for node quorum with tiebreaker. To enable this feature, specify the names of one or three disks. Separate the NSD names with a semicolon (;) and enclose the list in quotes. The disks do not have to belong to any particular file system, but must be directly accessible from the quorum nodes. For example:
tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd"
   When changing the tiebreakerDisks, GPFS must be down on all nodes in the cluster.
uidDomain
   Specifies the UID domain name for the cluster. A detailed description of the GPFS user ID remapping convention is contained in the UID Mapping for GPFS in a Multi-Cluster Environment white paper at http://www.ibm.com/systems/clusters/library/wp_lit.html.
unmountOnDiskFail
   Controls how the GPFS daemon will respond when a disk failure is detected. Valid values are yes or no.
   When unmountOnDiskFail is set to no, the daemon marks the disk as failed and continues as long as it can without using the disk. All nodes that are using this disk are notified of the disk failure. The disk can be made active again by using the mmchdisk command. This is the suggested setting when metadata and data replication are used because the replica can be used until the disk is brought online again.
   When unmountOnDiskFail is set to yes, any disk failure will cause only the local node to force-unmount the file system that contains that disk. Other file systems on this node and other nodes continue to function normally, if they can. The local node can try and remount the file system when the disk problem has been resolved. This is the suggested setting when using SAN-attached disks in large multinode configurations, and when replication is not being used. This setting should also be used on a node that hosts descOnly disks. See Establishing disaster recovery for your GPFS cluster in General Parallel File System: Advanced Administration Guide.
usePersistentReserve
   Specifies whether to enable or disable Persistent Reserve (PR) on the disks. Valid values are yes or no (no is the default). GPFS must be stopped on all nodes when setting this attribute.
   v PR is only supported on AIX nodes.
   v PR is only supported on NSDs that are built directly on hdisks.
   v The disk subsystem must support PR.
   v GPFS supports a mix of PR disks and other disks. However, you will only realize improved failover times if all the disks in the cluster support PR.
   v GPFS only supports PR in the home cluster. Remote mounts must access the disks using an NSD server.
   For more information, see Reduced recovery time using Persistent Reserve in the General Parallel File System: Concepts, Planning, and Installation Guide.
verbsPorts
   Specifies the InfiniBand device names and port numbers used for RDMA transfers between an NSD client and server. You must enable verbsRdma to enable verbsPorts. The format for verbsPorts is:
verbsPorts="device/port[ device/port ...]"
In this format, device is the HCA device name (such as mthca0) and port is the one-based port number (such as 1 or 2). If you do not specify a port number, GPFS uses port 1 as the default. For example:
verbsPorts="mthca0/1 mthca0/2"
will create two RDMA connections between the NSD client and server using both ports of a dual ported adapter.
verbsRdma
   Enables or disables InfiniBand RDMA using the Verbs API for data transfers between an NSD client and NSD server. Valid values are enable or disable.
worker1Threads
   Controls the maximum number of concurrent file operations at any one instant. If there are more requests than that, the excess will wait until a previous request has finished. This attribute is primarily used for random read or write requests that cannot be pre-fetched, random I/O requests, or small file activity. The default value is 48. The minimum value is 1. The maximum value of prefetchThreads plus worker1Threads is:
   v 164 on 32-bit kernels
   v 550 on 64-bit kernels
Exit status
0  Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmchconfig command. You may issue the mmchconfig command from any node in the GPFS cluster. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To change the maximum file system block size allowed to 4 MB, issue this command:
mmchconfig maxblocksize=4M
To confirm the change, issue this command:
mmlsconfig
The system displays information similar to:
Configuration data for cluster cluster1.kgn.ibm.com:
----------------------------------------------------
clusterName cluster1.kgn.ibm.com
clusterId 680681562214606028
clusterType lc
autoload yes
minReleaseLevel 3.2.0.0
pagepool 512m
maxblocksize 4M
cipherList AUTHONLY

File systems in cluster cluster1.kgn.ibm.com:
---------------------------------------------
/dev/fs1
/dev/fs2
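Similarly, to change the pagepool to 256 MB with the change taking effect immediately and persisting across restarts (the value shown is illustrative), a command of this form could be used:
mmchconfig pagepool=256M -i
To later restore the attribute to its default value:
mmchconfig pagepool=DEFAULT -i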
See also
mmaddnode Command on page 68
mmchnode Command on page 112
mmcrcluster Command on page 121
mmdelnode Command on page 167
mmlsconfig Command on page 209
mmlscluster Command on page 207
Location
/usr/lpp/mmfs/bin
mmchdisk Command
Name
mmchdisk Changes state or parameters of one or more disks in a GPFS file system.
Synopsis
mmchdisk Device {suspend | resume | stop | start | change} {-d "DiskDesc[;DiskDesc...]" | -F DescFile} [-N {Node[,Node...] | NodeFile | NodeClass}]
Or,
mmchdisk Device {resume | start} -a [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmchdisk command to change the state or the parameters of one or more disks in a GPFS file system.
The state of a disk is a combination of its status and availability, displayed with the mmlsdisk command. Disk status is normally either ready or suspended. A transitional status such as replacing, replacement, or being emptied might also appear if a disk is being deleted or replaced.
A suspended disk is one that the user has decided not to place any new data on. Existing data on a suspended disk may still be read or updated. Typically, a disk is suspended prior to restriping a file system. Suspending a disk tells the mmrestripefs command that data is to be migrated off that disk. Disk availability is either up or down.
Be sure to use stop before you take a disk offline for maintenance. You should also use stop when a disk has become temporarily inaccessible due to a disk failure that is repairable without loss of data on that disk (for example, an adapter failure or a failure of the disk electronics).
The Disk Usage (dataAndMetadata, dataOnly, metadataOnly, or descOnly) and Failure Group parameters of a disk are adjusted with the change option. See the General Parallel File System: Concepts, Planning, and Installation Guide and search for recoverability considerations. The mmchdisk change command does not move data or metadata that resides on the disk. After changing disk parameters, in particular, Disk Usage, you may have to issue the mmrestripefs command with the -r option to relocate data so that it conforms to the new disk parameters.
The mmchdisk command can be issued for a mounted or unmounted file system.
When maintenance is complete or the failure has been repaired, use the mmchdisk command with the start option. If the failure cannot be repaired without loss of data, you can use the mmdeldisk command.
Note:
1. The mmchdisk command cannot be used to change the NSD servers associated with the disk. Use the mmchnsd command for this purpose.
2. Similarly, the mmchdisk command cannot be used to change the storage pool for the disk. Use the mmdeldisk and mmadddisk commands to move a disk from one storage pool to another.
Parameters
Device The device name of the file system to which the disks belong. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
-d "DiskDesc[;DiskDesc...]"
   A descriptor for each disk to be changed.
   Specify only disk names when using the suspend, resume, stop, or start options. Delimit multiple disk names with semicolons and enclose the list in quotation marks. For example, "gpfs1nsd;gpfs2nsd"
   When using the change option, include the disk name and any new Disk Usage and Failure Group positional parameter values in the descriptor. Delimit descriptors with semicolons and enclose the list in quotation marks, for example, "gpfs1nsd:::dataOnly;gpfs2nsd:::metadataOnly:12"
   A disk descriptor is defined as (second, third, sixth, and seventh fields reserved):
DiskName:::DiskUsage:FailureGroup:::
DiskName
   For a list of disks that belong to a particular file system, issue the mmlsnsd -f, the mmlsfs -d, or the mmlsdisk command. The mmlsdisk command will also show the current disk usage and failure group values for each of the disks.
DiskUsage
   If a value is not specified, the disk usage remains unchanged:
   dataAndMetadata
      Indicates that the disk contains both data and metadata. This is the default.
   dataOnly
      Indicates that the disk contains data and does not contain metadata.
   metadataOnly
      Indicates that the disk contains metadata and does not contain data.
   descOnly
      Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.
FailureGroup
   A number identifying the failure group to which this disk belongs. You can specify any value from -1 (where -1 indicates that the disk has no point of failure in common with any other disk) to 4000. If you do not specify a failure group, the value remains unchanged. GPFS uses this information during data and metadata placement to assure that no two replicas of the same block are written in such a way as to become unavailable due to a single disk failure. All disks that are attached to the same NSD server or adapter should be placed in the same failure group.
-F DescFile
   Specifies a file containing a list of disk descriptors, one per line.
-a
   Specifies to change the state of all of the disks belonging to the file system, Device. This operand is valid only on the resume and start options.
-N {Node[,Node...] | NodeFile | NodeClass}
   Specifies the nodes that are to participate in the restripe of the file system after the state or parameters of the disks have been changed. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the restripe of the file system). For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
change
   Instructs GPFS to change the DiskUsage parameter, the FailureGroup parameter, or both, according to the values specified in the DiskDesc.
resume
   Informs GPFS that a disk previously suspended is now available for allocating new space. If the disk is currently in a stopped state, it remains stopped until you specify the start option. Otherwise, normal read and write access to the disk resumes.
start
   Informs GPFS that disks previously stopped are now accessible. This is accomplished by first changing the disk availability from down to recovering. The file system metadata is then scanned and any missing updates (replicated data that was changed while the disk was down) are repaired. If this operation is successful, the availability is then changed to up. If the metadata scan fails, availability is set to unrecovered. This could occur if too many other disks are down. The metadata scan can be re-initiated at a later time by issuing the mmchdisk start command again.
   If more than one disk in the file system is down, they must all be started at the same time by issuing the mmchdisk Device start -a command. If you start them separately and metadata is stored on any disk that remains down, the mmchdisk start command fails.
stop
   Instructs GPFS to stop any attempts to access the specified disks. Use this option to tell the file system manager that a disk has failed or is currently inaccessible because of maintenance. A disk remains stopped until it is explicitly started by the mmchdisk command with the start option. Restarting the GPFS Server daemon or rebooting does not restore normal access to a stopped disk.
suspend
   Instructs GPFS to stop allocating space on the specified disk. Place a disk in this state when you are preparing to restripe the file system off this disk because of faulty performance. This is a user-initiated state that GPFS never uses without an explicit command to change disk state. Existing data on a suspended disk may still be read or updated. A disk remains suspended until it is explicitly resumed. Restarting GPFS or rebooting nodes does not restore normal access to a suspended disk.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmchdisk command.

You may issue the mmchdisk command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To suspend active disk gpfs2nsd, issue this command:
mmchdisk fs0 suspend -d gpfs2nsd
2. To specify that metadata should no longer be stored on disk gpfs1nsd, issue this command:
mmchdisk fs0 change -d "gpfs1nsd:::dataOnly"
See also
Displaying GPFS disk states on page 33 mmadddisk Command on page 64 mmchnsd Command on page 116 mmdeldisk Command on page 159 mmlsdisk Command on page 211 mmlsnsd Command on page 226 mmrpldisk Command on page 271
Location
/usr/lpp/mmfs/bin
mmcheckquota Command
Name
mmcheckquota Checks file system user, group and fileset quotas.
Synopsis
mmcheckquota [-v] {Device [Device...] | -a}

Or,

mmcheckquota {[-u UserQuotaFileName] | [-g GroupQuotaFileName] | [-j FilesetQuotaFileName]} Device
Description
The mmcheckquota command serves two purposes:
1. Count inode and space usage in a file system by user, group and fileset, and write the collected data into quota files.
2. Replace either the user, group, or fileset quota files, for the file system designated by Device, thereby restoring the quota files for the file system. These files must be contained in the root directory of Device. If a backup copy does not exist, an empty file is created when the mmcheckquota command is issued.

The mmcheckquota command counts inode and space usage for a file system and writes the collected data into quota files.

Indications that you should run the mmcheckquota command include:
v MMFS_QUOTA error log entries. This error log entry is created when the quota manager has a problem reading or writing the quota file.
v Quota information is lost due to a node failure. A node failure could leave users unable to open files or deny them disk space that their quotas should allow.
v The in-doubt value is approaching the quota limit. The sum of the in-doubt value and the current usage may not exceed the hard limit. Consequently, the actual block space and number of files available to the user or the group may be constrained by the in-doubt value. Should the in-doubt value approach a significant percentage of the quota, use the mmcheckquota command to account for the lost space and files.
v User, group, or fileset quota files are corrupted.

The mmcheckquota command is I/O intensive and should be run when the system load is light. When issuing the mmcheckquota command on a mounted file system, negative in-doubt values may be reported if the quota server processes a combination of up-to-date and back-level information. This is a transient situation and may be ignored.
Parameters
-a
  Checks all GPFS file systems in the cluster from which the command is issued.
Device
  The device name of the file system. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
-g GroupQuotaFileName
  Replace the current group quota file with the file indicated. When replacing quota files with the -g option:
  v The quota file must be in the root directory of the file system.
  v The file system must be unmounted.
-j FilesetQuotaFileName
  Replace the current fileset quota file with the file indicated. When replacing quota files with the -j option:
  v The quota file must be in the root directory of the file system.
  v The file system must be unmounted.
-u UserQuotaFileName
  Replace the current user quota file with the file indicated. When replacing quota files with the -u option:
  v The quota file must be in the root directory of the file system.
  v The file system must be unmounted.
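A hedged illustration of the replacement form (the backup file name user.quota.save and the device fs0 are assumptions): the file system must be unmounted and the replacement file must already reside in its root directory before issuing:

mmcheckquota -u user.quota.save fs0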
Options
-v Reports discrepancies between calculated and recorded disk quotas.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmcheckquota command. GPFS must be running on the node from which the mmcheckquota command is issued.
Examples
1. To check quotas for file system fs0, issue this command:
mmcheckquota fs0
The system displays information only if a problem is found.
2. To check quotas for all file systems, issue this command:
mmcheckquota -a
The system displays information only if a problem is found or if quota management is not enabled for a file system:
fs2:  no quota management installed
fs3:  no quota management installed
3. To report discrepancies between calculated and recorded disk quotas, issue this command:
mmcheckquota -v fs1
The system displays information similar to:

USR 60012: 7248 subblocks counted (was -1619); 2013 inodes counted (was 2217)
USR 60013: 6915 subblocks counted (was -616); 1773 inodes counted (was 2297)
USR 60014: 6553 subblocks counted (was -1124); 1885 inodes counted (was 2533)
USR 60020: 7045 subblocks counted (was -2486); 2050 inodes counted (was 1986)
GRP 0: 98529 subblocks counted (was 6406); 17437 inodes counted (was 15910)
GRP 100: 116038 subblocks counted (was -65884); 26277 inodes counted (was 30656)
FILESET 0: 214567 subblocks counted (was -60842); 43714 inodes counted (was 46661)
See also
mmedquota Command on page 180 mmfsck Command on page 185 mmlsquota Command on page 231 mmquotaon Command on page 250 mmquotaoff Command on page 248 mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmchfileset Command
Name
mmchfileset Changes the attributes of a GPFS fileset.
Synopsis
mmchfileset Device {FilesetName | -J JunctionPath} {[-j NewFilesetName] | [-t NewComment]}
Description
The mmchfileset command changes the name or comment for an existing GPFS fileset. For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device
  The device name of the file system that contains the fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
  Specifies the name of the fileset.
-J JunctionPath
  Specifies the junction path name for the fileset. A junction is a special directory entry that connects a name in a directory of one fileset to the root directory of another fileset.
-j NewFilesetName
  Specifies the new name that is to be given to the fileset. This name must be less than 256 characters in length. This flag may be specified along with the -t flag.
-t NewComment
  Specifies an optional comment that appears in the output of the mmlsfileset command. This comment must be less than 256 characters in length. This flag may be specified along with the -j flag.
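Because a fileset can be identified either by name or by its junction path, the rename shown in the Examples section below could equally be written with -J. A sketch, where the junction path /gpfs1/fset1 is an assumption:

mmchfileset gpfs1 -J /gpfs1/fset1 -j fset2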
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmchfileset command.

You may issue the mmchfileset command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This command renames fileset fset1 to fset2 and gives it the comment "the first fileset":

mmchfileset gpfs1 fset1 -j fset2 -t "the first fileset"
See also
mmdelfileset Command on page 162 mmcrfileset Command on page 125 mmlinkfileset Command on page 203 mmlsfileset Command on page 214 mmunlinkfileset Command on page 287
Location
/usr/lpp/mmfs/bin
mmchfs Command
Name
mmchfs Changes the attributes of a GPFS file system.
Synopsis
mmchfs Device [-A {yes | no | automount}] [-E {yes | no}] [-D {nfs4 | posix}] [-F MaxNumInodes[:NumInodesToPreallocate]] [-k {posix | nfs4 | all}] [-K {no | whenpossible | always}] [-m DefaultMetadataReplicas] [-o MountOptions] [-Q {yes | no}] [-r DefaultDataReplicas] [-S {yes | no}] [-T Mountpoint] [-V {full | compat}] [-z {yes | no}]

Or,

mmchfs Device [-W NewDeviceName]
Description
Use the mmchfs command to change the attributes of a GPFS file system.
Parameters
Device The device name of the file system to be changed. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. However, file system names must be unique across GPFS clusters. This must be the first parameter.
Options
-A {yes | no | automount}
  Indicates when the file system is to be mounted:
  yes        When the GPFS daemon starts.
  no         Manual mount.
  automount  When the file system is first accessed.
-D {nfs4 | posix}
  Specifies whether a deny-write open lock will block writes, which is expected and required by NFS V4. File systems supporting NFS V4 must have -D nfs4 set. The option -D posix allows NFS writes even in the presence of a deny-write open lock. If you intend to export the file system using NFS V4 or Samba, you must use -D nfs4. For NFS V3 (or if the file system is not NFS exported at all) use -D posix.
-E {yes | no}
  Specifies whether to report exact mtime values. If -E no is specified, the mtime value is periodically updated. If you desire to always display exact modification times, specify the -E yes option.
-F MaxNumInodes[:NumInodesToPreallocate]
  MaxNumInodes specifies the maximum number of files that can be created. Allowable values range from the current number of created inodes (determined by issuing the mmdf command with the -F option), through the maximum number of files possibly supported, as constrained by the formula:

  maximum number of files = (total file system space/2) / (inode size + subblock size)

  If your file system has additional disks added or the number of inodes was insufficiently sized at file system creation, you can change the number of inodes and hence the maximum number of files that can be created.
  For file systems that will be doing parallel file creates, if the total number of free inodes is not greater than 5% of the total number of inodes, there is the potential for slowdown in file system access. Take this into consideration when changing your file system.
  NumInodesToPreallocate specifies the number of inodes that will be pre-allocated by the system right away. If this number is not specified, GPFS allocates inodes dynamically as needed.
  The MaxNumInodes and NumInodesToPreallocate values can be specified with a suffix, for example 100K or 2M.
-k {posix | nfs4 | all}
  Specifies the type of authorization supported by the file system:
  posix  Traditional GPFS ACLs only (NFS V4 ACLs are not allowed). Authorization controls are unchanged from earlier releases.
  nfs4   Support for NFS V4 ACLs only. Users are not allowed to assign traditional GPFS ACLs to any file system objects (directories and individual files).
  all    Any supported ACL type is permitted. This includes traditional GPFS (posix) and NFS V4 ACLs (nfs4). The administrator is allowing a mixture of ACL types. For example, fileA may have a posix ACL, while fileB in the same file system may have an NFS V4 ACL, implying different access characteristics for each file depending on the ACL type that is currently assigned.
  Neither nfs4 nor all should be specified here unless the file system is going to be exported to NFS V4 clients. NFS V4 ACLs affect file attributes (mode) and have access and authorization characteristics that are different from traditional GPFS ACLs.
-K {no | whenpossible | always}
  Specifies whether strict replication is to be enforced:
  no    Strict replication is not enforced. GPFS will try to create the needed number of replicas, but will still return EOK as long as it can allocate at least one replica.
  whenpossible
        Strict replication is enforced provided the disk configuration allows it. If there is only one failure group, strict replication will not be enforced.
  always
        Strict replication is enforced.
-m DefaultMetadataReplicas
  Changes the default number of metadata replicas. Valid values are 1 and 2, but cannot exceed the value of MaxMetadataReplicas set when the file system was created.
  Changing the default replication settings using the mmchfs command does not change the replication setting of existing files. After running the mmchfs command, the mmrestripefs command with the -R option can be used to change all existing files, or you can use the mmchattr command to change a small number of existing files.
-o MountOptions
  Specifies the mount options to pass to the mount command when mounting the file system. For a detailed description of the available mount options, see GPFS-specific mount options on page 15.
-Q {yes | no}
  If -Q yes is specified, quotas are activated automatically when the file system is mounted. If -Q no is specified, the quota files remain in the file system, but are not used. Before you activate or deactivate quotas, you must unmount the file system from the cluster.
  For additional information, refer to Enabling and disabling GPFS quota management on page 39.
-r DefaultDataReplicas
  Changes the default number of data replicas. Valid values are 1 and 2, but cannot exceed the value of MaxDataReplicas set when the file system was created.
  Changing the default replication settings using the mmchfs command does not change the replication setting of existing files. After running the mmchfs command, the mmrestripefs command with the -R option can be used to change all existing files, or you can use the mmchattr command to change a small number of existing files.
-S {yes | no}
  Suppress the periodic updating of the value of atime as reported by the gpfs_stat(), gpfs_fstat(), stat(), and fstat() calls. If yes is specified, these calls report the last time the file was accessed when the file system was mounted with the -S no option.
-T Mountpoint
  Change the mount point of the file system starting at the next mount of the file system. The file system must be unmounted on all nodes prior to issuing the command.
-V {full | compat}
  After migration, changes the file system format to the latest format supported by the currently installed level of GPFS. This may cause the file system to become permanently incompatible with earlier releases of GPFS.
  Before issuing the -V option, see the Migration, coexistence and compatibility topic in the General Parallel File System: Concepts, Planning, and Installation Guide. You must ensure that all nodes in the cluster have been migrated to the latest level of GPFS code and that you have successfully run the mmchconfig release=LATEST command. For information about specific file system format and function changes when upgrading to the current release, see Chapter 12, File system format changes between versions of GPFS, on page 403.
  full    Enables all new functionality that requires different on-disk data structures. Nodes in remote clusters running an older GPFS version will no longer be able to mount the file system. If there are any nodes running an older GPFS version that have the file system mounted at the time the command is issued, the mmchfs command will fail.
  compat  Enables only backwardly compatible format changes. Nodes in remote clusters that were able to mount the file system before will still be able to do so.
-W NewDeviceName
  Assign NewDeviceName to be the device name for the file system.
-z {yes | no}
  Enable or disable DMAPI on the file system. For further information on DMAPI for GPFS, see the General Parallel File System: Data Management API Guide.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmchfs command.

You may issue the mmchfs command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To change the default replicas for metadata to 2 and the default replicas for data to 2 for new files created in the fs0 file system, issue this command:
mmchfs fs0 -m 2 -r 2
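The -F option described above can likewise be used to raise the inode limit after disks are added; an illustrative invocation, where the 2M maximum and 1M preallocation values are assumptions rather than values from this manual:

mmchfs fs0 -F 2M:1M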
See also
mmcrfs Command on page 127 mmdelfs Command on page 165 mmdf Command on page 174 mmfsck Command on page 185 mmlsfs Command on page 217 | mmrestripefs Command on page 267
Location
/usr/lpp/mmfs/bin
mmchmgr Command
Name
| mmchmgr Assigns a new file system manager node or cluster manager node.
Synopsis
| mmchmgr {Device | -c} [Node]
Description
| The mmchmgr command assigns a new file system manager node or cluster manager node.
Parameters
Device
  The device name of the file system for which the file system manager node is to be changed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
-c
  Changes the cluster manager node.
Node
  The target node to be appointed as either the new file system manager or cluster manager node. Target nodes for manager functions must be selected according to use:
  v Target nodes for cluster manager function must be specified from the list of quorum nodes.
  v Target nodes for file system manager function should be specified from the list of manager nodes.
  If Node is not specified, the new manager is selected automatically. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
NONE
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmchmgr command.

You may issue the mmchmgr command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
Assume the file system manager for the file system gpfs1 is currently k164n05. To migrate the file system manager responsibilities to k164n06, issue this command:
mmchmgr gpfs1 k164n06
To change the cluster manager node, issue the command:

mmchmgr -c c5n107

The system displays information similar to:

Appointing node 9.114.132.107 (c5n107) as cluster manager
Node 9.114.132.107 (c5n107) has taken over as cluster manager

To verify the change, issue the command:

mmlsmgr -c

The system displays information similar to:

Cluster manager node: 9.114.132.107 (c5n107)

See also

mmlsmgr Command on page 221
Location
/usr/lpp/mmfs/bin
mmchnode Command
Name

mmchnode Changes node attributes.

Synopsis

mmchnode change-options -N {Node[,Node...] | NodeFile | NodeClass}

Or,

mmchnode {-S Filename | --spec-file=Filename}

Description

Use the mmchnode command to change one or more attributes on a single node or on a set of nodes. If conflicting node designation attributes are specified for a given node, the last value is used. If any of the attributes represent a node-unique value, the -N option must resolve to a single node.

Parameters

-N {Node[,Node...] | NodeFile | NodeClass}
  Specifies the nodes whose states are to be changed. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
-S Filename | --spec-file=Filename
  Specifies a file with a detailed description of the changes to be made. Each line represents the changes to an individual node and has the following format:

  node-identifier change-options

change-options
  A blank-separated list of attribute[=value] pairs. The following attributes can be specified:
  --admin-interface={hostname | ip_address}
    Specifies the name of the node to be used by GPFS administration commands when communicating between nodes. The admin node name must be specified as an IP address or a hostname that is resolved by the host command to the desired IP address. If the keyword DEFAULT is specified, the admin interface for the node is set to be equal to the daemon interface for the node.
  --client
    Specifies that the node should not be part of the pool of nodes from which cluster managers, file system managers, and token managers are selected.
  --cnfs-disable
    Temporarily disables the CNFS functionality of a CNFS member node.
  --cnfs-enable
    Enables a previously-disabled CNFS member node.
  --cnfs-groupid=groupid
    Specifies a failover recovery group for the node. If the keyword DEFAULT is specified, the CNFS recovery group for the node is set to zero. For additional information, refer to Implementing a clustered NFS using GPFS on Linux in the GPFS: Advanced Administration Guide.
  --cnfs-interface=ip_address_list
    A comma-separated list of host names or IP addresses to be used for GPFS cluster NFS serving.
    The specified IP addresses can be real or virtual (aliased). These addresses must be configured to be static (not DHCP) and to not start at boot time. The GPFS daemon interface for the node cannot be a part of the list of CNFS IP addresses.
    If the keyword DEFAULT is specified, the CNFS IP address list is removed and the node is no longer considered a member of CNFS.
    For additional information, refer to Implementing a clustered NFS using GPFS on Linux in the GPFS: Advanced Administration Guide.
  --daemon-interface={hostname | ip_address}
    Specifies the host name or IP address to be used by the GPFS daemons for node-to-node communication. The host name or IP address must refer to the communication adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the host command to that original address. When changing the daemon interface, GPFS must be stopped on all nodes in the cluster. The keyword DEFAULT is not allowed for this attribute.
  --manager
    Designates the node as part of the pool of nodes from which cluster managers, file system managers, and token managers are selected.
  --nonquorum
    Designates the node as a non-quorum node. If two or more quorum nodes are downgraded at the same time, GPFS must be stopped on all nodes in the cluster. GPFS does not have to be stopped if the nodes are downgraded one at a time.
  --nosnmp-agent
    Stops the SNMP subagent and specifies that the node should no longer serve as an SNMP collector node. For additional information, see GPFS SNMP support in the Advanced Administration Guide.
  --quorum
    Designates the node as a quorum node.
  --snmp-agent
    Designates the node as an SNMP collector node. If the GPFS daemon is active on this node, the SNMP subagent will be started as well. For additional information, see GPFS SNMP support in the Advanced Administration Guide.
Exit status

0 Successful completion.
nonzero A failure has occurred.

Security

You must have root authority to run the mmchnode command.

You can issue the mmchnode command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must make certain that:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password and without any extraneous messages.

Examples

To change nodes k145n04 and k145n05 to be both quorum and manager nodes, issue this command:

mmchnode --quorum --manager -N k145n04,k145n05

The system displays information similar to:
Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n04.kgn.ibm.com
Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n05.kgn.ibm.com
mmchnode: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
To change nodes k145n04 and k145n05 to be both quorum and manager nodes, and node k145n06 to be a non-quorum node, issue this command:

mmchnode -S /tmp/specFile

where the contents of /tmp/specFile are:
k145n04 --quorum --manager
k145n05 --quorum --manager
k145n06 --nonquorum
To confirm the changes, issue this command:

mmlscluster

The system displays information similar to:

Primary server:   k145n04.kgn.ibm.com
Secondary server: k145n06.kgn.ibm.com

 Node  Daemon node name      IP address    Admin node name       Designation
-----------------------------------------------------------------------
   1   k145n04.kgn.ibm.com   9.114.68.68   k145n04.kgn.ibm.com   quorum-manager
   2   k145n05.kgn.ibm.com   9.114.68.69   k145n05.kgn.ibm.com   quorum-manager
   3   k145n06.kgn.ibm.com   9.114.68.70   k145n06.kgn.ibm.com
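Because --admin-interface is a node-unique value, the -N option must resolve to a single node when changing it. A sketch, where the admin host name k145n04a.kgn.ibm.com is an assumption:

mmchnode --admin-interface=k145n04a.kgn.ibm.com -N k145n04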
See also

mmchconfig Command on page 90
mmlscluster Command on page 207

Location

/usr/lpp/mmfs/bin
mmchnsd Command
Name
mmchnsd Changes Network Shared Disk (NSD) configuration parameters.
Synopsis
mmchnsd {DiskDesc[;DiskDesc...] | -F DescFile}
Description
The mmchnsd command serves several purposes. You can use it to:
v Specify a server list for an NSD that does not have one.
v Change the NSD server nodes specified in the server list.
v Delete the server list. The disk must now be SAN-attached to all nodes in the cluster on which the file system will be mounted.

You must follow these rules when changing NSDs:
v You must identify the disks by the NSD names that were given to them by the mmcrnsd command.
v You must explicitly specify values for all NSD servers on the list, even if you are only changing one of the values.
v The file system that contains the NSD being changed must be unmounted prior to issuing the mmchnsd command.
v The NSD must be properly connected to the new nodes prior to issuing the mmchnsd command.
v This command cannot be used to change the DiskUsage or FailureGroup for an NSD. You must issue the mmchdisk command to change these.
v To move a disk from one storage pool to another, use the mmdeldisk and mmadddisk commands.
v You cannot change the name of the NSD.
Parameters
DiskDesc
  A descriptor for each NSD to be changed. Each descriptor is separated by a semicolon (;). The entire list must be enclosed in single or double quotation marks.
  Note: You can specify up to eight NSD servers in a disk descriptor.
-F DescFile
  Specifies a file containing a list of disk descriptors, one per line.

Each disk descriptor must be specified in the form:
DiskName:ServerList
DiskName
  Is the NSD name that was given to the disk by the mmcrnsd command.
ServerList
  Is a comma-separated list of NSD server nodes. You can specify up to eight NSD servers in this list. The defined NSD will preferentially use the first server on the list. If the first server is not available, the NSD will use the next available server on the list.
  If you do not define a ServerList, GPFS assumes that the disk is SAN-attached to all nodes in the cluster. If all nodes in the cluster do not have access to the disk, or if the file system to which the disk belongs is to be accessed by other GPFS clusters, you must specify a value for ServerList.
To remove the NSD server list, do not specify a value for ServerList. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
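Putting the descriptor form above into a file keeps longer server lists manageable. A sketch, where the file name and node names are assumptions:

# contents of /tmp/nsdDesc
gpfs1nsd:k145n09,k145n07
gpfs2nsd:k145n07,k145n09

mmchnsd -F /tmp/nsdDesc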
Options
NONE
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmchnsd command.

You may issue the mmchnsd command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
If the disk gpfs1nsd is currently defined with k145n05 as the first server and k145n07 as the second server, and you want to change the first server to k145n09, issue this command:

mmchnsd "gpfs1nsd:k145n09,k145n07"

To confirm the changes, issue this command:

mmlsnsd -d gpfs1nsd

The system displays information similar to:
File system   Disk name   NSD servers
---------------------------------------------------------------------
fs2           gpfs1nsd    k145n09.ppd.pok.ibm.com,k145n07.ppd.pok.ibm.com
See also

mmchdisk Command on page 97
mmcrcluster Command on page 121
mmcrnsd Command on page 134
mmlsnsd Command on page 226
Location
/usr/lpp/mmfs/bin
mmchpolicy Command
Name
mmchpolicy - Establish policy rules for a GPFS file system.
Synopsis
mmchpolicy Device PolicyFileName [-t DescriptiveName ] [-I {yes | test} ]
Description
Use the mmchpolicy command to establish the rules for policy-based lifecycle management of the files in a given GPFS file system. Some of the things that can be controlled with the help of policy rules are:
v File placement at creation time
v Replication factors
v Movement of data between storage pools
v File deletion

The mmapplypolicy command must be run to move data between storage pools or delete files. Policy changes take effect immediately on all nodes that have the affected file system mounted. For nodes that do not have the file system mounted, policy changes take effect upon the next mount of the file system.

For information on GPFS policies, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device
  Specifies the device name of the file system for which policy information is to be established or changed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter.
PolicyFileName
  Specifies the name of the file that contains the policy rules. If you specify DEFAULT, GPFS replaces the current policy file with a single policy rule that assigns all newly-created files to the system storage pool.
Options
-I {yes | test}
  Specifies whether to activate the rules in the policy file PolicyFileName.
  yes   The policy rules are validated and immediately activated. This is the default.
  test  The policy rules are validated, but not installed.
-t DescriptiveName Specifies an optional descriptive name to be associated with the policy rules. The string must be less than 256 characters in length. If not specified, the descriptive name defaults to the base name portion of the PolicyFileName parameter.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmchpolicy command. You may issue the mmchpolicy command from any node in the GPFS cluster.
Examples
This command validates a policy file before it is installed:
mmchpolicy fs2 policyfile -I test
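Once the rules validate cleanly, the same file can be installed; because -I yes is the default, a sketch of the activating form would be:

mmchpolicy fs2 policyfile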
See also
mmapplypolicy Command on page 71 mmlspolicy Command on page 229
Location
/usr/lpp/mmfs/bin
mmcrcluster Command
Name
mmcrcluster Creates a GPFS cluster from a set of nodes.
Synopsis
mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile} -p PrimaryServer [-s SecondaryServer] [-r RemoteShellCommand] [-R RemoteFileCopyCommand] [-C ClusterName] [-U DomainName] [-A] [-c ConfigFile]
Description
Use the mmcrcluster command to create a GPFS cluster.

Upon successful completion of the mmcrcluster command, the /var/mmfs/gen/mmsdrfs and the /var/mmfs/gen/mmfsNodeData files are created on each of the nodes in the cluster. Do not delete these files under any circumstances. For further information, see the General Parallel File System: Concepts, Planning, and Installation Guide.

You must follow these rules when creating your GPFS cluster:
v While a node may mount file systems from multiple clusters, the node itself may only be added to a single cluster using the mmcrcluster or mmaddnode command.
v The nodes must be available for the command to be successful. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and issue the mmaddnode command to add those nodes.
v You must designate at least one but not more than seven nodes as quorum nodes. You are strongly advised to designate the cluster configuration servers as quorum nodes. How many quorum nodes altogether you will have depends on whether you intend to use the node quorum with tiebreaker algorithm or the regular node based quorum algorithm. For more details, see the General Parallel File System: Concepts, Planning, and Installation Guide and search for designating quorum nodes.
Parameters
-A Specifies that GPFS daemons are to be automatically started when nodes come up. The default is not to start daemons automatically.
-C ClusterName
  Specifies a name for the cluster. If the user-provided name contains dots, it is assumed to be a fully qualified domain name. Otherwise, to make the cluster name unique, the domain of the primary configuration server will be appended to the user-provided name.
  If the -C flag is omitted, the cluster name defaults to the name of the primary GPFS cluster configuration server.
-c ConfigFile
  Specifies a file containing GPFS configuration parameters with values different than the documented defaults. A sample file can be found in /usr/lpp/mmfs/samples/mmfs.cfg.sample. See the mmchconfig command for a detailed description of the different configuration parameters.
  The -c ConfigFile parameter should be used only by experienced administrators. Use this file to set up only those parameters that appear in the mmfs.cfg.sample file. Changes to any other values may be ignored by GPFS. When in doubt, use the mmchconfig command instead.
-N NodeDesc[,NodeDesc...] | NodeFile
  NodeFile specifies the file containing the list of node descriptors (see below), one per line, to be included in the GPFS cluster.
  NodeDesc[,NodeDesc...] specifies the list of nodes and node designations to be included in the GPFS cluster. Node descriptors are defined as:
NodeName:NodeDesignations:AdminNodeName
where:
1. NodeName is the hostname or IP address to be used by the GPFS daemon for node to node communication. The hostname or IP address must refer to the communications adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the host command to that original address. You may specify a node using any of these forms:

   Format          Example
   Short hostname  k145n01
   Long hostname   k145n01.kgn.ibm.com
   IP address      99.119.19.102

2. NodeDesignations is an optional, -separated list of node roles.
   v manager | client - Indicates whether a node is part of the pool of nodes from which cluster managers, file system managers, and token managers are selected. The default is client.
   v quorum | nonquorum - Indicates whether a node is counted as a quorum node. The default is nonquorum.
3. AdminNodeName is an optional field that consists of a node name to be used by the administration commands to communicate between nodes. If AdminNodeName is not specified, the NodeName value is used.

You must provide a descriptor for each node to be added to the GPFS cluster.

-p PrimaryServer
  Specifies the primary GPFS cluster configuration server node used to store the GPFS configuration data. This node must be a member of the GPFS cluster.
-R RemoteFileCopy
  Specifies the fully-qualified path name for the remote file copy program to be used by GPFS. The default value is /usr/bin/rcp. The remote copy command must adhere to the same syntax format as the rcp command, but may implement an alternate authentication mechanism.
-r RemoteShellCommand
  Specifies the fully-qualified path name for the remote shell program to be used by GPFS. The default value is /usr/bin/rsh. The remote shell command must adhere to the same syntax format as the rsh command, but may implement an alternate authentication mechanism.
-s SecondaryServer
  Specifies the secondary GPFS cluster configuration server node used to store the GPFS cluster data. This node must be a member of the GPFS cluster.
  It is suggested that you specify a secondary GPFS cluster configuration server to prevent the loss of configuration data in the event your primary GPFS cluster configuration server goes down. When the GPFS daemon starts up, at least one of the two GPFS cluster configuration servers must be accessible. If your primary GPFS cluster configuration server fails and you have not designated a secondary server, the GPFS cluster configuration files are inaccessible, and any GPFS administration commands that are issued fail. File system mounts or daemon startups also fail if no GPFS cluster configuration server is available.
-U DomainName
  Specifies the UID domain name for the cluster.
  A detailed description of the GPFS user ID remapping convention is contained in UID Mapping for GPFS In a Multi-Cluster Environment at www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmcrcluster command.

You may issue the mmcrcluster command from any node in the GPFS cluster.

A properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To create a GPFS cluster made of all of the nodes listed in the file /u/admin/nodelist, using node k164n05 as the primary server, and node k164n04 as the secondary server, issue:
mmcrcluster -N /u/admin/nodelist -p k164n05 -s k164n04
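The node descriptor format described above determines what /u/admin/nodelist contains. An illustrative file, where the host names and role assignments are assumptions:

k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com:quorum-manager
k164n06.kgn.ibm.com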
See also
mmaddnode Command on page 68 mmchconfig Command on page 90 mmdelnode Command on page 167 mmlscluster Command on page 207 mmlsconfig Command on page 209
Location
/usr/lpp/mmfs/bin
mmcrfileset Command
Name
mmcrfileset Creates a GPFS fileset.
Synopsis
mmcrfileset Device FilesetName [-t Comment]
Description
The mmcrfileset command constructs a new fileset with the specified name. The new fileset is empty except for a root directory, and does not appear in the directory name space until the mmlinkfileset command is issued. The mmcrfileset command is separate from the mmlinkfileset command to allow the administrator to establish policies and quotas on the fileset before it is linked into the name space.

For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.

The maximum number of filesets that GPFS supports is 1000 filesets per file system.
Parameters
Device
  The device name of the file system to contain the new fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
  Specifies the name of the newly created fileset.
-t Comment
  Specifies an optional comment that appears in the output of the mmlsfileset command. This comment must be less than 256 characters in length.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmcrfileset command.

You may issue the mmcrfileset command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This example creates a fileset in file system gpfs1:
mmcrfileset gpfs1 fset1
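A comment can be attached at creation time with the -t flag; a second, purely illustrative invocation, where the fileset name and comment text are assumptions:

mmcrfileset gpfs1 fset2 -t "project scratch space"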
See also
mmchfileset Command on page 104 mmdelfileset Command on page 162 mmlinkfileset Command on page 203 mmlsfileset Command on page 214 mmunlinkfileset Command on page 287
Location
/usr/lpp/mmfs/bin
mmcrfs Command
Name
mmcrfs Creates a GPFS file system.
Synopsis
mmcrfs Device {DiskDesc[;DiskDesc...] | -F DescFile} [-A {yes | no | automount}] [-D {nfs4 | posix}] [-B BlockSize] [-E {yes | no}] [-j {cluster | scatter}] [-k {posix | nfs4 | all}] [-K {no | whenpossible | always}] [-L LogFileSize] [-m DefaultMetadataReplicas] [-M MaxMetadataReplicas] [-n NumNodes] [-N NumInodes[:NumInodesToPreallocate]] [-Q {yes | no}] [-r DefaultDataReplicas] [-R MaxDataReplicas] [-S {yes | no}] [-T MountPoint] [-v {yes | no}] [-z {yes | no}] [--version Version]
Description
Use the mmcrfs command to create a GPFS file system. The first two parameters must be Device and either DiskDescList or DescFile, and they must be in that order. The block size and replication factors chosen affect file system performance. A maximum of 256 file systems can be mounted in a GPFS cluster at one time, including remote file systems.

When deciding on the maximum number of files (number of inodes) in a file system, consider that for file systems that will be doing parallel file creates, if the total number of free inodes is not greater than 5% of the total number of inodes, there is the potential for slowdown in file system access. The total number of inodes can be increased using the mmchfs command.

When deciding on a block size for a file system, consider these points:
1. Supported block sizes are 16 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, and 4 MB.
2. The GPFS block size determines:
   v The minimum disk space allocation unit. The minimum amount of space that file data can occupy is a sub-block. A sub-block is 1/32 of the block size.
   v The maximum size of a read or write request that GPFS sends to the underlying disk driver.
3. From a performance perspective, it is recommended that you set the GPFS block size to match the application buffer size, the RAID stripe size, or a multiple of the RAID stripe size. If the GPFS block size does not match the RAID stripe size, performance may be severely degraded, especially for write operations.
4. In file systems with a high degree of variance in the size of files within the file system, using a small block size would have a large impact on performance when accessing large files. In this kind of system it is suggested that you use a block size of 256 KB (8 KB sub-block). Even if only 1% of the files are large, the amount of space taken by the large files usually dominates the amount of space used on disk, and the waste in the sub-block used for small files is usually insignificant. For further performance information, see the GPFS white papers at http://www.ibm.com/systems/clusters/library/wp_lit.html.
5. The effect of block size on file system performance largely depends on the application I/O pattern.
   v A larger block size is often beneficial for large sequential read and write workloads.
   v A smaller block size is likely to offer better performance for small file, small random read and write, and metadata-intensive workloads.
6. The efficiency of many algorithms that rely on caching file data in a GPFS page pool depends more on the number of blocks cached than on the absolute amount of data. For a page pool of a given size, a larger file system block size would mean fewer blocks cached. Therefore, when you create file systems with a block size larger than the default of 256 KB, it is recommended that you increase the page pool size in proportion to the block size.
7. The file system block size must not exceed the value of the GPFS maxblocksize configuration parameter. The maxblocksize parameter is set to 1 MB by default. If a larger block size is desired, use the mmchconfig command to increase the maxblocksize before starting GPFS.
Results
Upon successful completion of the mmcrfs command, these tasks are completed on all GPFS nodes:
v Mount point directory is created.
v File system is formatted.
Parameters
Device
  The device name of the file system to be created. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. However, file system names must be unique within a GPFS cluster. Do not specify an existing entry in /dev.
-D {nfs4 | posix}
  Specifies whether a deny-write open lock will block writes, which is expected and required by NFS V4. File systems supporting NFS V4 must have -D nfs4 set. The option -D posix allows NFS writes even in the presence of a deny-write open lock. If you intend to export the file system using NFS V4 or Samba, you must use -D nfs4. For NFS V3 (or if the file system is not NFS exported at all) use -D posix. The default is -D nfs4.
-F DescFile
  Specifies a file containing a list of disk descriptors, one per line. You may use the rewritten DiskDesc file created by the mmcrnsd command, create your own file, or enter the disk descriptors on the command line. When using the DiskDesc file created by the mmcrnsd command, the values supplied on input to the command for Disk Usage and FailureGroup are used. When creating your own file or entering the descriptors on the command line, you must specify these values or accept the system defaults.
DiskDesc[;DiskDesc...]
  A descriptor for each disk to be included. Each descriptor is separated by a semicolon (;). The entire list must be enclosed in quotation marks (' or "). A disk descriptor is defined as (second, third and sixth fields reserved):
DiskName:::DiskUsage:FailureGroup::StoragePool:
DiskName
  You must specify the name of the NSD previously created by the mmcrnsd command. For a list of available disks, issue the mmlsnsd -F command.
DiskUsage
  Specify a disk usage or accept the default:
  dataAndMetadata
    Indicates that the disk contains both data and metadata. This is the default for disks in the system pool.
  dataOnly
    Indicates that the disk contains data and does not contain metadata. This is the default for disks in storage pools other than the system pool.
  metadataOnly
    Indicates that the disk contains metadata and does not contain data.
  descOnly
    Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.
FailureGroup
  A number identifying the failure group to which this disk belongs. You can specify any value from -1 (where -1 indicates that the disk has no point of failure in common with any other disk) to 4000. If you do not specify a failure group, the value defaults to the node number of the first NSD server defined in the NSD server list plus 4000. If you do not specify an NSD server list, the value defaults to -1. GPFS uses this information during data and metadata placement to assure that no two replicas of the same block are written in such a way as to become unavailable due to a single failure. All disks that are attached to the same NSD server or adapter should be placed in the same failure group. If replication of -m or -r is set to 2, storage pools must have two failure groups for the commands to work properly.
StoragePool
  Specifies the storage pool to which the disk is to be assigned. If this name is not provided, the default is system. Only the system pool may contain descOnly, metadataOnly or dataAndMetadata disks.
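Tying the descriptor fields together, a hypothetical descriptor (the NSD name, failure group, and pool name are assumptions) for a data-only disk in failure group 1 and a user storage pool named pool1 would be:

hd5n98:::dataOnly:1::pool1: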
Options
-A {yes | no | automount}
  Indicates when the file system is to be mounted:
  yes        When the GPFS daemon starts. This is the default.
  no         Manual mount.
  automount  When the file system is first accessed.
-B BlockSize
  Size of data blocks. Must be 16 KB, 64 KB, 256 KB (the default), 512 KB, 1 MB, 2 MB, or 4 MB. Specify this value with the character K or M, for example 512K.
-E {yes | no}
  Specifies whether to report exact mtime values (-E yes), or to periodically update the mtime value for a file system (-E no). If it is more desirable to display exact modification times for a file system, specify or use the default -E yes option.
-j {cluster | scatter}
  Specifies the block allocation map type. When allocating blocks for a given file, GPFS first uses a round-robin algorithm to spread the data across all disks in the file system. After a disk is selected, the location of the data block on the disk is determined by the block allocation map type. If cluster is specified, GPFS attempts to allocate blocks in clusters. Blocks that belong to a particular file are kept adjacent to each other within each cluster. If scatter is specified, the location of the block is chosen randomly.
  The cluster allocation method may provide better disk performance for some disk subsystems in relatively small installations. The benefits of clustered block allocation diminish when the number of nodes in the cluster or the number of disks in a file system increases, or when the file system's free space becomes fragmented. The cluster allocation method is the default for GPFS clusters with eight or fewer nodes and for file systems with eight or fewer disks.
  The scatter allocation method provides more consistent file system performance by averaging out performance variations due to block location (for many disk subsystems, the location of the data relative to the disk edge has a substantial effect on performance). This allocation method is appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file systems with more than eight disks.
  The block allocation map type cannot be changed after the file system has been created.
-k {posix | nfs4 | all}
  Specifies the type of authorization supported by the file system:
  posix  Traditional GPFS ACLs only (NFS V4 ACLs are not allowed). Authorization controls are unchanged from earlier releases.
  nfs4   Support for NFS V4 ACLs only. Users are not allowed to assign traditional GPFS ACLs to any file system objects (directories and individual files).
  all    Any supported ACL type is permitted. This includes traditional GPFS (posix) and NFS V4 ACLs (nfs4). The administrator is allowing a mixture of ACL types. For example, fileA may have a posix ACL, while fileB in the same file system may have an NFS V4 ACL, implying different access characteristics for each file depending on the ACL type that is currently assigned.
  The default is -k all.
  Neither nfs4 nor all should be specified here unless the file system is going to be exported to NFS V4 clients. NFS V4 ACLs affect file attributes (mode) and have access and authorization characteristics that are different from traditional GPFS ACLs.
-K {no | whenpossible | always}
  Specifies whether strict replication is to be enforced:
  no    Indicates that strict replication is not enforced. GPFS will try to create the needed number of replicas, but will still return EOK as long as it can allocate at least one replica.
  whenpossible
        Indicates that strict replication is enforced provided the disk configuration allows it. If the number of failure groups is insufficient, strict replication will not be enforced. This is the default value.
  always
        Indicates that strict replication is enforced.
-L LogFileSize
  Specifies the size of the internal log file. The default size is 4 MB, the minimum size is 256 KB, and the maximum size is 16 MB. Specify this value with the K or M character, for example: 8M. This value cannot be changed after the file system has been created.
  In most cases, allowing the log file size to default works well. Increasing the log file size is useful for sites that have a large amount of metadata activity, such as creating and deleting many small files or performing extensive block allocation and deallocation of large files.
-m DefaultMetadataReplicas
  Specifies the default number of copies of inodes, directories, and indirect blocks for a file. Valid values are 1 and 2, but cannot be greater than the value of MaxMetadataReplicas. The default is 1.
-M MaxMetadataReplicas
  Specifies the default maximum number of copies of inodes, directories, and indirect blocks for a file. Valid values are 1 and 2, but cannot be less than the value of DefaultMetadataReplicas. The default is 2.
-n NumNodes
  The estimated number of nodes that will mount the file system. This is used as a best guess for the initial size of some file system data structures. The default is 32. This value cannot be changed after the file system has been created.
When you create a GPFS file system, you might want to overestimate the number of nodes that will mount the file system. GPFS uses this information for creating data structures that are essential for achieving maximum parallelism in file system operations (see Appendix A: GPFS architecture in General Parallel File System: Concepts, Planning, and Installation Guide). Although a large estimate consumes additional memory, underestimating the data structure allocation can reduce the efficiency of a node when it processes some parallel requests such as the allotment of disk space to a file. If you cannot predict the number of nodes that will mount the file system, allow the default value to be applied. If you are planning to add nodes to your system, you should specify a number larger than the default. However, do not make estimates that are not realistic. Specifying an excessive number of nodes may have an adverse affect on buffer operations. | -N NumInodes[:NumInodesToPreallocate] Specifies the maximum number of files in the file system. This value defaults to the size of the file | system at creation divided by 1M and is constrained by the formula: | | | | | | | | | | maximum number of files = (total file system space/2) / (inode size + subblock size) For file systems that will be creating parallel files, if the total number of free inodes is not greater than 5% of the total number of inodes, file system access might slow down. Take this into consideration when creating your file system. The parameter NumInodesToPreallocate specifies the number of inodes that the system will immediately preallocate. If you do not specify a value for NumInodesToPreallocate, GPFS will dynamically allocate inodes as needed. You can specify the NumInodes and NumInodesToPreallocate values with a suffix, for example 100K or 2M. -Q {yes | no} Activates quotas automatically when the file system is mounted. The default is -Q no. To activate GPFS quota management after the file system has been created: 1. Mount the file system. 2. To establish default quotas: a. Issue the mmdefedquota command to establish default quota values. b. Issue the mmdefquotaon command to activate default quotas. 3. To activate explicit quotas: a. Issue the mmedquota command to activate quota values. b. Issue the mmquotaon command to activate quota enforcement. -r DefaultDataReplicas Specifies the default number of copies of each data block for a file. Valid values are 1 and 2, but cannot be greater than the value of MaxDataReplicas. The default is 1. -R MaxDataReplicas Specifies the default maximum number of copies of data blocks for a file. Valid values are 1 and 2. The value cannot be less than the value of DefaultDataReplicas. The default is 2. | -S {yes | no} Suppresses the periodic updating of the value of atime as reported by the gpfs_stat(), gpfs_fstat(), stat(), and fstat() calls. The default value is -S no. Specifying -S yes for a new file system results in reporting the time the file system was created. | -T MountPoint Specifies the mount point directory of the GPFS file system. If it is not specified, the mount point | will be set to DefaultMountDir/Device. The default value for DefaultMountDir is /gpfs but, it can be | changed with the mmchconfig command. | -v {yes | no} Verifies that specified disks do not belong to an existing file system. The default is -v yes. Specify
-v no only when you want to reuse disks that are no longer needed for an existing file system. If the command is interrupted for any reason, you must use the -v no option on the next invocation of the command.

-z {yes | no}
Enable or disable DMAPI on the file system. The default is -z no. For further information on DMAPI for GPFS, see General Parallel File System: Data Management API Guide.

--version Version
Enable only the file system features that are compatible with the specified release. The allowed Version values are 3.1.0.0 and 3.2.0.0. The default is 3.2.0.0, which will enable all currently available features, but will prevent nodes that are running earlier GPFS releases from accessing the file system.
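The following worked illustration of the -N formula uses assumed values that are not taken from this manual: a 512-byte inode size and a 256 KB block size, which gives an 8 KB sub-block (a sub-block is 1/32 of a block). For a 1 TB file system:

maximum number of files = (1099511627776 / 2) / (512 + 8192)
                        = 549755813888 / 8704
                        = approximately 63 million files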
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmcrfs command.

You may issue the mmcrfs command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This example shows how to create a file system named gpfs1 using three disks, with a block size of 512 KB, allowing metadata and data replication to be 2, turning quotas on, and defining /gpfs1 as the mount point. To complete this task, issue the command:
mmcrfs gpfs1 "hd2n97;hd3n97;hd4n97" -B 512K -M 2 -R 2 -Q yes -T /gpfs1
The system displays output similar to:

100 % complete on Wed May 16 16:03:52 2007
Completed creation of file system /dev/gpfs1.
mmcrfs: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.

See also

mmchfs Command on page 106
mmdelfs Command on page 165
mmdf Command on page 174
mmedquota Command on page 180
mmfsck Command on page 185
mmlsfs Command on page 217
Location
/usr/lpp/mmfs/bin
mmcrnsd Command
Name
mmcrnsd Creates Network Shared Disks (NSDs) used by GPFS.
Synopsis
mmcrnsd -F DescFile [-v {yes | no}]
Description
The mmcrnsd command is used to create cluster-wide names for NSDs used by GPFS.

This is the first GPFS step in preparing a disk for use by a GPFS file system. A disk descriptor file supplied to this command is rewritten with the new NSD names and that rewritten disk descriptor file can then be supplied as input to the mmcrfs, mmadddisk or mmrpldisk commands.

The name created by the mmcrnsd command is necessary since disks connected at multiple nodes may have differing disk device names in /dev on each node. The name uniquely identifies the disk. This command must be run for all disks that are to be used in GPFS file systems. The mmcrnsd command is also used to assign an NSD server list that can be used for I/O operations on behalf of nodes that do not have direct access to the disk.

To identify that the disk has been processed by the mmcrnsd command, a unique NSD volume ID is written on sector 2 of the disk. All of the NSD commands (mmcrnsd, mmlsnsd, and mmdelnsd) use this unique NSD volume ID to identify and process NSDs.

After the NSDs are created, the GPFS cluster data is updated and they are available for use by GPFS.

When using an IBM eServer High Performance Switch (HPS) in your configuration, it is suggested you process your disks in two steps:
1. Create virtual shared disks on each physical disk through the mmcrvsd command.
2. Using the rewritten disk descriptors from the mmcrvsd command, create NSDs through the mmcrnsd command.
Results
Upon successful completion of the mmcrnsd command, these tasks are completed:
v NSDs are created.
v The DescFile contains NSD names to be used as input to the mmcrfs, mmadddisk, or the mmrpldisk commands.
v A unique NSD volume ID to identify the disk as an NSD has been written on sector 2.
v An entry for each new disk is created in the GPFS cluster data.
Parameters
-F DescFile
Specifies the file containing the list of disk descriptors, one per line.

Note: You can specify up to eight NSD servers in a disk descriptor.

Disk descriptors have this format:
DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
DiskName
Is the block device name appearing in /dev for the disk you want to define as an NSD. Examples of disks that are accessible through a block device are SAN-attached disks or virtual shared disks. If an NSD server node is specified, DiskName must be the /dev name for the disk device of the first listed NSD server node. See the Frequently Asked Questions at publib.boulder.ibm.com/infocenter/clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html for the latest supported disk types.

GPFS provides the mmcrvsd helper command to ease configuration of /dev disk devices. In an AIX environment, this command can be used to configure virtual shared disks and make them accessible to nodes connected over a high performance switch. The output disk descriptor file from an mmcrvsd command can be used as input to the mmcrnsd command, since the virtual shared disk names enumerated in that file will appear as /dev block devices on switch attached nodes.
ServerList
Is a comma-separated list of NSD server nodes. You may specify up to eight NSD servers in this list. The defined NSD will preferentially use the first server on the list. If the first server is not available, the NSD will use the next available server on the list.

If you do not define a ServerList, GPFS assumes that the disk is SAN-attached to all nodes in the cluster. If not all nodes in the cluster have access to the disk, or if the file system to which the disk belongs is to be accessed by other GPFS clusters, you must specify a ServerList.

DiskUsage
Specify a disk usage or accept the default. This field is ignored by the mmcrnsd command, and is passed unchanged to the output descriptor file produced by the mmcrnsd command. Possible values are:
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the system pool.

dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for disks in storage pools other than the system pool.

metadataOnly
Indicates that the disk contains metadata and does not contain data.

descOnly
Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.

FailureGroup
Is a number identifying the failure group to which this disk belongs. You can specify any value from -1 to 4000 (where -1 indicates that the disk has no point of failure in common with any other disk). If you do not specify a failure group, the value defaults to the node number plus 4000 for the first NSD server defined in the server list. If you do not specify an NSD server list, the value defaults to -1. GPFS uses this information during data and metadata placement to assure that no two replicas of the same block are written in such a way as to become unavailable due to a single failure. All disks that are attached to the same NSD server or adapter should be placed in the same failure group.
DesiredName
Specify the name you desire for the NSD to be created. This name must not already be used as another GPFS disk name, and it must not begin with the reserved string gpfs.

Note: This name can contain only the following characters: A through Z, a through z, 0 through 9, or _ (the underscore). All other characters are not valid.

If a desired name is not specified, the NSD is assigned a name according to the convention:
gpfsNNnsd, where NN is a unique nonnegative integer not used in any prior NSD.

StoragePool
Specifies the name of the storage pool that the NSD is assigned to. This field is ignored by the mmcrnsd command, and is passed unchanged to the output descriptor file produced by the mmcrnsd command.

Upon successful completion of the mmcrnsd command, the DescFile file is rewritten to contain the created NSD names in place of the device name. NSD servers defined in the ServerList and the DesiredName are omitted from the rewritten disk descriptor, and all other fields, if specified, are copied without modification. The original lines, as well as descriptor lines in error, are commented out and preserved for reference. The rewritten disk descriptor file can then be used as input to the mmcrfs, mmadddisk, or the mmrpldisk commands.

You must have write access to the directory where the DescFile file is located in order to rewrite the created NSD information.

The Disk Usage and Failure Group specifications in the disk descriptor are preserved only if you use the rewritten file produced by the mmcrnsd command. If you do not use this file, you must either accept the default values or specify new values when creating disk descriptors for other commands.
Options
-v {yes | no}
Verify the disk is not already formatted as an NSD. A value of -v yes specifies that the NSD is to be created only if the disk has not been formatted by a previous invocation of the mmcrnsd command, as indicated by the NSD volume ID on sector 2 of the disk. A value of -v no specifies that the disk is to be formatted irrespective of its previous state. The default is -v yes.
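For example, to register the disks listed in a descriptor file while bypassing the check for a previous NSD format, a command of this form could be used; the descriptor file name is illustrative:

mmcrnsd -F nsdesc -v no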
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmcrnsd command.

You may issue the mmcrnsd command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
The NSD ServerList uses the form:
server1[,server2,...,server8]
To create your NSDs from the descriptor file nsdesc containing:

sdav1:k145n05,k145n06::dataOnly:4::poolA
sdav2:k145n06,k145n05::dataAndMetadata:5:ABC

issue this command:
mmcrnsd -F nsdesc
These descriptors translate as:

Disk Name      sdav1
Server List    k145n05,k145n06
Disk Usage     dataOnly
Failure Group  4
Desired Name   none specified; defaults to gpfs20nsd
Storage Pool   poolA

and

Disk Name      sdav2
Server List    k145n06,k145n05
Disk Usage     dataAndMetadata, allowing both
Failure Group  5
Desired Name   ABC
Storage Pool   system

nsdesc is rewritten as:

#sdav1:k145n05,k145n06::dataOnly:4::poolA
gpfs20nsd:::dataOnly:4::poolA
#sdav2:k145n06,k145n05::dataAndMetadata:5:ABC
ABC:::dataAndMetadata:5
The command displays output similar to:

mmcrnsd: Processing disk sdav1
mmcrnsd: Processing disk sdav2
mmcrnsd: 6027-1371 Propagating the changes to all affected nodes.
This is an asynchronous process.
See also
mmadddisk Command on page 64
mmcrfs Command on page 127
mmdeldisk Command on page 159
mmdelnsd Command on page 170
mmlsnsd Command on page 226
mmrpldisk Command on page 271
Location
/usr/lpp/mmfs/bin
mmcrsnapshot Command
Name
mmcrsnapshot Creates a snapshot of an entire GPFS file system at a single point in time.
Synopsis
mmcrsnapshot Device Directory
Description
Use the mmcrsnapshot command to create a snapshot of an entire GPFS file system at a single point in time.

A snapshot is a copy of the changed user data in the file system. System data and existing snapshots are not copied. The snapshot function allows a backup or mirror program to run concurrently with user updates and still obtain a consistent copy of the file system as of the time the copy was created. Snapshots are exact copies of changed data in the active files and directories of a file system. Snapshots of a file system are read-only and they appear in a .snapshots directory located in the file system root directory. The files and attributes of the file system may be changed only in the active copy.

There is a maximum limit of 31 snapshots per file system. Snapshots may be deleted only by issuing the mmdelsnapshot command. The .snapshots directory cannot be deleted, though it can be renamed with the mmsnapdir command using the -s option.

If the mmcrsnapshot command is issued while a conflicting command is running, the mmcrsnapshot command waits for that command to complete. If the mmcrsnapshot command is running while a conflicting command is issued, the conflicting command waits for the mmcrsnapshot command to complete. Conflicting operations include:
1. Other snapshot commands
2. Adding, deleting, replacing disks in the file system
3. Rebalancing, repairing, reducing disk fragmentation in a file system

If quorum is lost before the mmcrsnapshot command completes, the snapshot is considered partial and will be deleted when quorum is achieved again.

Because snapshots are not full, independent copies of the entire file system, they should not be used as protection against media failures. For protection against media failures, see General Parallel File System: Concepts, Planning, and Installation Guide and search on recoverability considerations.

For more information on snapshots, see Creating and maintaining snapshots of GPFS file systems in General Parallel File System: Advanced Administration Guide.
Parameters
Device The device name of the file system for which the snapshot is to be created. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter. Directory Specifies the name given to the snapshot. The name appears as a subdirectory of the .snapshots directory in the file system root. Each snapshot must have a unique name.
If you do not want to traverse the file system's root directory to access the snapshot, a more convenient mechanism can be enabled with the -a option of the mmsnapdir command, which creates a connection in each directory of the active file system.
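For example, assuming the file system fs1 used in the examples below, a sketch of that alternative is:

mmsnapdir fs1 -a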
Options
NONE
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmcrsnapshot command.

You may issue the mmcrsnapshot command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To create a snapshot snap1, for the file system fs1, issue this command:
mmcrsnapshot fs1 snap1
Before issuing the command, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
If a second snapshot were to be created at a later time, the first snapshot would remain as is. Snapshots are made only of active file systems, not existing snapshots. For example:
mmcrsnapshot fs1 snap2
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userA/file2
/fs1/.snapshots/snap2/userA/file3
See also
mmdelsnapshot Command on page 172
mmlssnapshot Command on page 234
mmrestorefs Command on page 261
mmsnapdir Command on page 277
Location
/usr/lpp/mmfs/bin
mmcrvsd Command
Name
mmcrvsd Creates virtual shared disks for use by GPFS.
Synopsis
mmcrvsd [-f FanoutNumber] [-y] [-c] -F DescFile
Description
The mmcrvsd command can be used to create virtual shared disks for subsequent use by the mmcrnsd command. The IBM Virtual shared disk subsystem permits application programs that are running on different nodes of an RSCT peer domain to access a raw logical volume as if it were local at each of the nodes.

Virtual shared disks created with mmcrvsd follow the convention of one local volume group, one local logical volume, one global volume group, and one virtual shared disk per physical volume. After the virtual shared disk is created, it is configured and started on each node with a defined virtual shared disk adapter. See the updatevsdnode command in the correct manual for your environment at: publib.boulder.ibm.com/clresctr/windows/public/rsctbooks.html.

The mmcrvsd command can be used only in the AIX environment.

Where possible, the mmcrvsd command creates and starts virtual shared disk components in parallel. For instance, when multiple physical disk servers are specified in the disk descriptor file, their LVM components are created in parallel. Starting of all virtual shared disks, on all nodes, always occurs in parallel. The mmcrvsd command may also be restarted should one of the steps fail.
Results
Upon successful completion of the mmcrvsd command:
v Virtual shared disks are created. If a desired name vsdname is specified on the disk descriptor, mmcrvsd uses that name for the name of the virtual shared disk. If a desired name is not specified, the virtual shared disk is assigned a name according to the convention:
  gpfsNNvsd, where NN is a unique nonnegative integer not used in any prior virtual shared disk named with this convention.
v Virtual shared disks are synchronously started on all nodes.
v If a desired name vsdname is specified on the disk descriptor, mmcrvsd uses that name as the basis for the names of the global volume group, local logical volume, and local volume group according to the convention:
  vsdnamegvg   the global volume group
  vsdnamelv    the local logical volume
  vsdnamevg    the local volume group
v If a desired name is not specified, the global volume group, local logical volume, and local volume group for the virtual shared disk are named according to the convention:
  gpfsNNgvg    the global volume group
  gpfsNNlv     the local logical volume
  gpfsNNvg     the local volume group
  where gpfsNNvsd was the name chosen for the virtual shared disk.
v The primary server is configured and the volume group is varied online there.
v The backup server is configured and the volume group is imported there, but varied off.
v The DescFile file is rewritten to contain the created virtual shared disk names in place of any disk descriptors containing physical disk or vpath names. Primary and backup servers are omitted from the rewritten disk descriptor and all other fields, if specified, are copied without modification. The rewritten disk descriptor file can then be used as input to the mmcrnsd command.
Error recovery
Each step of the mmcrvsd process is enumerated during command execution. For example at step 0, the mmcrvsd command prints:
Step 0: Setting up environment
As each step is started, its corresponding number is recorded in the DescFile file as a comment at the end. This comment serves as restart information to subsequent invocations of the mmcrvsd command. For example, at step 0, the recorded comment would be:
#MMCRVSD_STEP=0
Upon failure, appropriate error messages from the failing system component are displayed along with mmcrvsd error messages. After correcting the failing condition and restarting the mmcrvsd command with the same descriptor file, the command prompts you to restart at the last failing step. For example, if a prior invocation of mmcrvsd failed at step one, the prompt would be:
A prior invocation of this command has recorded a partial completion in the file (/tmp/DescFile). Should we restart at prior failing step(1)?[y]/n=>
Parameters
-F DescFile
The file containing the list of disk descriptors, one per line, in the form:
DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
DiskName
The device name of the disk you want to use to create a virtual shared disk. This can be either an hdisk name or a vpath name for an SDD device.

GPFS performance and recovery processes function best with one disk per virtual shared disk. If you want to create virtual shared disks with more than one disk, refer to the correct manual for your environment at: publib.boulder.ibm.com/clresctr/windows/public/rsctbooks.html

PrimaryServer
The name of the virtual shared disk server node. This can be in any recognizable form.
BackupServer
The backup server name. This can be specified in any recognizable form or allowed to default to none.

DiskUsage
Specify a disk usage or accept the default (see General Parallel File System: Concepts, Planning, and Installation Guide and search on recoverability considerations). This field is ignored by the mmcrvsd command and is passed unchanged to the output descriptor file produced by the mmcrvsd command.

dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default.

dataOnly
Indicates that the disk contains data and does not contain metadata.

metadataOnly
Indicates that the disk contains metadata and does not contain data.

descOnly
Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.

Disk usage considerations:
1. The DiskUsage parameter is not utilized by the mmcrvsd command but is copied intact to the output file that the command produces. The output file may then be used as input to the mmcrnsd command.
2. RAID devices are not well-suited for performing small block writes. Since GPFS metadata writes are often smaller than a full block, you may find using non-RAID devices for GPFS metadata better for performance.

FailureGroup
A number identifying the failure group to which this disk belongs. You can specify any value from -1 (where -1 indicates that the disk has no point of failure in common with any other disk) to 4000. All disks that have a common point of failure, such as all disks that are attached to the same virtual shared disk server node, should be placed in the same failure group. The value is passed unchanged to the output descriptor file produced by the mmcrvsd command.

DesiredName
Specify the name you desire for the virtual shared disk to be created. This name must not already be used as another GPFS or AIX disk name, and it must not begin with the reserved string gpfs.

Note: This name can contain only the following characters: A through Z, a through z, 0 through 9, or _ (the underscore). All other characters are not valid. The maximum size of this name is 13 characters.

StoragePool
Specifies the name of the storage pool that the NSD is assigned to. This field is ignored by the mmcrvsd command, and is passed unchanged to the output descriptor file produced by the mmcrvsd command.
Options
-f FanoutNumber
The maximum number of concurrent nodes to communicate with during parallel operations. The default value is 10.

-y
Specifies no prompting for any queries the command may produce. All default values are accepted.

-c
Specifies to create Concurrent Virtual Shared Disks. This option is valid only for disk descriptors that specify both a primary and a backup virtual shared disk server.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmcrvsd command.

You may issue the mmcrvsd command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To create a virtual shared disk with the descriptor file vsdesc containing:
hdisk2:k145n01:k145n02:dataOnly:4 hdisk3:k145n06::dataAndMetadata:5:ABC
issue this command:

mmcrvsd -F vsdesc

These descriptors translate as:

Disk Name           hdisk2
Server Name         k145n01
Backup Server Name  k145n02
Disk Usage          dataOnly
Failure Group       4
Desired Name        Name defaults to gpfs20vsd

and
Disk Name           hdisk3
Server Name         k145n06
Backup Server Name  none
Disk Usage          dataAndMetadata
Failure Group       5
Desired Name        ABC

The low level components of the virtual shared disk gpfs20vsd are created:

gpfs20gvg   global volume group
gpfs20lv    local logical volume
gpfs20vg    local volume group

The low level components of the virtual shared disk ABC are created:

ABCgvg   global volume group
ABClv    local logical volume
ABCvg    local volume group
See also
mmcrnsd Command on page 134
Location
/usr/lpp/mmfs/bin
mmdefedquota Command
Name
mmdefedquota Sets default quota limits for a file system.
Synopsis
mmdefedquota {-u | -g | -j} Device
Description
Use the mmdefedquota command to set or change default quota limits for new users, groups, and filesets for a file system. Default quota limits for a file system may be set or changed only if the file system was created with the -Q yes option on the mmcrfs command or changed with the mmchfs command.

The mmdefedquota command displays the current values for these limits, if any, and prompts you to enter new values using your default editor:
v Current block usage (display only)
v Current inode usage (display only)
v Inode soft limit
v Inode hard limit
v Block soft limit
  Displayed in KB, but may be specified using k, K, m, M, g, or G. If no suffix is provided, the number is assumed to be in bytes.
v Block hard limit
  Displayed in KB, but may be specified using k, K, m, M, g, or G. If no suffix is provided, the number is assumed to be in bytes.

Note: A block or inode limit of 0 indicates no limit.

The mmdefedquota command waits for the edit window to be closed before checking and applying new values. If an incorrect entry is made, you must reissue the command and enter the correct values.

When setting quota limits for a file system, replication within the file system should be considered. GPFS quota management takes replication into account when reporting on and determining if quota limits have been exceeded for both block and file usage. In a file system that has either type of replication set to a value of two, the values reported on by both the mmlsquota command and the mmrepquota command are double the value reported by the ls command.

The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
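As a hedged numeric illustration of the replication effect described above (the figures are assumptions, not taken from this command's output): in a file system with data replication set to 2, a file that the ls command reports as 10 MB consumes roughly 20 MB of block quota, so a 100 MB block soft limit would be reached after about 50 MB of ls-reported data.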
Parameters
Device The device name of the file system to have default quota values set for. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-g
Specifies that the default quota value is to be applied for new groups accessing the specified file system.

-j
Specifies that the default quota value is to be applied for new filesets in the specified file system.
-u
Specifies that the default quota value is to be applied for new users accessing the specified file system.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdefedquota command. GPFS must be running on the node from which the mmdefedquota command is issued.
Examples
To set default quotas for new users of the file system fs1, issue this command:
mmdefedquota -u fs1
See also
mmcheckquota Command on page 101 mmdefquotaoff Command on page 149 mmdefquotaon Command on page 151 mmedquota Command on page 180 mmlsquota Command on page 231 mmquotaoff Command on page 248 mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmdefquotaoff Command
Name
mmdefquotaoff Deactivates default quota limit usage for a file system.
Synopsis
mmdefquotaoff [-u] [-g] [-j] [-v] {Device [Device...] | -a}
Description
The mmdefquotaoff command deactivates default quota limits for file systems. If default quota limits are deactivated, new users or groups for that file system will then have a default quota limit of 0, indicating no limit.

If none of the -u, -j, or -g options is specified, the mmdefquotaoff command deactivates all default quotas.

If the -a option is not used, Device must be the last parameter specified.
Parameters
Device The device name of the file system to have default quota values deactivated. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a
Deactivates default quotas for all GPFS file systems in the cluster. When used in combination with the -g option, only group quotas are deactivated. When used in combination with the -u or -j options, only user or fileset quotas, respectively, are deactivated.

-g
Specifies that default quotas for groups are to be deactivated.

-j
Specifies that default quotas for filesets are to be deactivated.

-u
Specifies that default quotas for users are to be deactivated.

-v
Prints a message for each file system in which default quotas are deactivated.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdefquotaoff command. GPFS must be running on the node from which the mmdefquotaoff command is issued.
Examples
1. To deactivate default user quotas on file system fs0, issue this command:
mmdefquotaoff -u fs0
2. To deactivate default group quotas on all file systems, issue this command:
mmdefquotaoff -g -a
See also
mmcheckquota Command on page 101
mmdefedquota Command on page 147
mmdefquotaon Command on page 151
mmedquota Command on page 180
mmlsquota Command on page 231
mmquotaoff Command on page 248
mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmdefquotaon Command
Name
mmdefquotaon Activates default quota limit usage for a file system.
Synopsis
mmdefquotaon [-u] [-g] [-j] [-v] [-d] {Device [Device... ] | -a}
Description
The mmdefquotaon command activates default quota limits for a file system. If default quota limits are not applied, new users, groups, or filesets for that file system will have a quota limit of 0, indicating no limit.

To use default quotas, the file system must have been created or changed with the -Q yes option. See the mmcrfs and mmchfs commands.

If none of the -u, -j, or -g options is specified, the mmdefquotaon command activates all default quota limits.

If the -a option is not used, Device must be the last parameter specified.

Default quotas are established for new users, groups of users or filesets by issuing the mmdefedquota command.

Under the -d option, all users without an explicitly set quota limit will have a default quota limit assigned.
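For example, a sketch combining the -u and -d options described below, using an illustrative file system name, to activate default user quotas and also assign the default limits to existing users that have no explicit quota entries:

mmdefquotaon -u -d fs0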
Parameters
Device The device name of the file system to have default quota values activated. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a
Activates default quotas for all GPFS file systems in the cluster. When used in combination with the -g option, only group quotas are activated. When used in combination with the -u or -j options, only user or fileset quotas, respectively, are activated.

-d
Specifies that existing users, groups of users, or filesets with no established quota limits will have default quota values assigned when the mmdefedquota command is issued. If this option is not chosen, existing quota entries remain in effect and are not governed by the default quota rules.

-g
Specifies that only a default quota value for group quotas is to be activated.

-j
Specifies that only a default quota value for fileset quotas is to be activated.

-u
Specifies that only a default quota value for users is to be activated.

-v
Prints a message for each file system in which default quotas are activated.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdefquotaon command. GPFS must be running on the node from which the mmdefquotaon command is issued.
Examples
1. To activate default user quotas on file system fs0, issue this command:
mmdefquotaon -u fs0
2. To activate default group quotas on all file systems in the cluster, issue this command:
mmdefquotaon -g -a

To confirm the change, issue this command individually for each file system:
mmlsfs fs1 -Q
---- -------------- ------------------------------
 -Q  group          Quotas enforced
     group          Default quotas enabled
3. To activate user, group and fileset default quotas on file system fs2, issue this command:
mmdefquotaon fs2
See also
mmcheckquota Command on page 101
mmchfs Command on page 106
mmcrfs Command on page 127
mmdefedquota Command on page 147
mmdefquotaoff Command on page 149
mmedquota Command on page 180
mmlsquota Command on page 231
mmquotaoff Command on page 248
mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmdefragfs Command
Name
mmdefragfs Reduces disk fragmentation by increasing the number of full free blocks available to the file system.
Synopsis
mmdefragfs Device [-i] [-u BlkUtilPct] [-P PoolName] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmdefragfs command to reduce fragmentation of a file system. The mmdefragfs command moves existing file system data within a disk to make more efficient use of disk blocks. The data is migrated to unused sub-blocks in partially allocated blocks, thereby increasing the number of free full blocks.

The mmdefragfs command can be run against a mounted or unmounted file system. However, best results are achieved when the file system is unmounted. When a file system is mounted, allocation status may change causing retries to find a suitable unused sub-block.

Note: On a file system that has a very low level of fragmentation, negative numbers can be seen in the output of mmdefragfs for free sub-blocks. This indicates that the block usage has in fact increased after running the mmdefragfs command. If negative numbers are seen, it does not indicate a problem and you do not need to rerun the mmdefragfs command.
Parameters
Device
The device name of the file system to have fragmentation reduced. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.

-P PoolName
Specifies the pool name to use.

-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that can be used in this disk defragmentation. This parameter supports all defined node classes. The default is all, which means that all nodes in the GPFS cluster will participate in the disk defragmentation.

For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-i Specifies to query the current disk fragmentation state of the file system. Does not perform the actual defragmentation of the disks in the file system.
-u BlkUtilPct The average block utilization goal for the disks in the file system. The mmdefragfs command reduces the number of allocated blocks by increasing the percent utilization of the remaining blocks. The command automatically goes through multiple iterations until BlkUtilPct is achieved on all of the disks in the file system or until no progress is made in achieving BlkUtilPct from one iteration to the next, at which point it exits.
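For example, to iterate until the disks of file system fs0 reach an average block utilization of 99 percent (the target value here is only illustrative), a command of this form could be used:

mmdefragfs fs0 -u 99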
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdefragfs command.

You may issue the mmdefragfs command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To query the fragmentation state of file system fs0, issue this command:
mmdefragfs fs0 -i
2. To reduce fragmentation of the file system fs0 on all defined, accessible disks that are not stopped or suspended, issue this command:
mmdefragfs fs0
3. To reduce fragmentation of all files in the fs1 file system until the disks have 100% full block utilization, issue this command:

mmdefragfs fs1 -u 100

The system displays information similar to:

Defragmenting file system fs1...

Defragmenting until full block utilization is 100.00%, currently 99.96%
 1 % complete on Wed May 16 15:21:49 2007
 6 % complete on Wed May 16 15:21:53 2007
12 % complete on Wed May 16 15:21:56 2007
21 % complete on Wed May 16 15:21:59 2007
32 % complete on Wed May 16 15:22:02 2007
   ...

                  free subblk                free subblk      % free blk       % blk util
disk              in full blocks             in fragments
name                before     after  freed  before  after    before  after    before  after
---------------   --------  --------  -----  ------  ------   ------  ------   ------  ------
hd16vsdn10         2192320   2192320      0     216     216    98.57   98.57    99.99   99.99
hd3vsdn01          1082272   1082272      0     200     200    97.50   97.50    99.98   99.98
hd4vsdn01          1077056   1077056      0     173     173    97.03   97.03    99.98   99.98
hd20vsdn02         1082496   1082496      0     400     400    97.52   97.52    99.96   99.96
hd6vsdn01          1077120   1077120      0     120     120    97.04   97.04    99.99   99.99
hd7vsdn01          1077344   1077344      0     246     246    97.06   97.06    99.98   99.98
hd2vsdn01          1084032   1084032      0     336     336    97.66   97.66    99.97   99.97
hd9vsdn01          1078272   1078272      0     217     217    97.14   97.14    99.98   99.98
hd10vsdn01         1080000   1080000      0     263     263    97.30   97.30    99.98   99.98
hd11vsdn02         1109216   1109216      0     110     110    99.93   99.93    99.99   99.99
hd18vsdn10         2196544   2196544      0     306     306    98.76   98.76    99.99   99.99
hd18vsdn02         1083168   1083168      0     246     246    97.58   97.58    99.98   99.98
hd24n01            1079616   1079616      0     200     200    97.07   97.07    99.98   99.98
hd25n01            1079808   1079808      0     229     229    97.08   97.08    99.98   99.98
hd26n01            1085056   1085056      0     205     205    97.56   97.56    99.98   99.98
hd27n01            1085408   1085408      0     237     237    97.59   97.59    99.98   99.98
hd28n01            1085312   1085312      0     193     193    97.58   97.58    99.98   99.98
hd29n01            1085792   1085792      0     260     260    97.62   97.62    99.98   99.98
hd30n01            1111936   1111936      0      53      53    99.97   99.97   100.00  100.00
hd31n01            1111936   1111936      0      53      53    99.97   99.97   100.00  100.00
hd21n01            1078912   1078912      0     292     292    97.00   97.00    99.97   99.97
hd23n01            1079392   1079392      0     249     249    97.05   97.05    99.98   99.98
hd15vsdn10         2192448   2192448      0     208     208    98.58   98.58    99.99   99.99
hd13vsdn02         1082944   1082944      0     222     222    97.56   97.56    99.98   99.98
hd8vsdn01          1082400   1082400      0     242     242    97.51   97.51    99.98   99.98
hd5vsdn01          1081856   1081856      0     216     216    97.46   97.46    99.98   99.98
hd33n09            2192608   2192608      0     280     280    98.57   98.57    99.99   99.99
---------------   --------  --------  -----  ------  ------   ------  ------   ------  ------
(total)           33735264  33735264      0    5972    5972                     99.98   99.98
Defragmentation complete, full block utilization is 99.96%. Re-issue command to try to reach target utilization of 100.00%.
See also
mmdf Command on page 174
Location
/usr/lpp/mmfs/bin
mmdelacl Command
Name
mmdelacl Deletes a GPFS access control list.
Synopsis
mmdelacl [-d] Filename
Description
Use the mmdelacl command to delete the extended entries of an access ACL of a file or directory, or to delete the default ACL of a directory.
Parameters
Filename The path name of the file or directory for which the ACL is to be deleted. If the -d option is specified, Filename must contain the name of a directory.
Options
-d Specifies that the default ACL of a directory is to be deleted. Since there can be only one NFS V4 ACL (no separate default), specifying the -d flag for a file with an NFS V4 ACL is an error. Deleting an NFS V4 ACL necessarily removes both the ACL and any inheritable entries contained in it.
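For example, to delete only the extended ACL entries of a file while leaving its base permissions intact, a command of this form could be used (the path name is hypothetical):

mmdelacl /gpfs1/project2/data1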
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
The mmdelacl command may be issued only by the file or directory owner, the root user, or by someone with control (c) authority in the ACL for the file. You may issue the mmdelacl command only from a node in the GPFS cluster where the file system is mounted.
Examples
To delete the default ACL for a directory named project2, issue this command:
mmdelacl -d project2
See also
mmeditacl Command on page 177 mmgetacl Command on page 194 mmputacl Command on page 245
Location
/usr/lpp/mmfs/bin
mmdeldisk Command
Name
mmdeldisk Deletes disks from a GPFS file system.
Synopsis
mmdeldisk Device {DiskName[;DiskName...] | -F DescFile} [-a] [-c] [-r] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
The mmdeldisk command migrates all data that would otherwise be lost to the remaining disks in the file system. It then removes the disks from the file system descriptor and optionally rebalances the file system after removing the disks. Run the mmdeldisk command when system demand is low.

If a replacement for a failing disk is available, use the mmrpldisk command in order to keep the file system balanced. Otherwise, use one of these procedures to delete a disk:
v If the disk is not failing and GPFS can still read from it:
  1. Suspend the disk.
  2. Restripe to rebalance all data onto other disks.
  3. Delete the disk.
v If the disk is permanently damaged and the file system is replicated:
  1. Suspend and stop the disk.
  2. Restripe and restore replication for the file system, if possible.
  3. Delete the disk from the file system.
v If the disk is permanently damaged and the file system is not replicated, or if the mmdeldisk command repeatedly fails, see the General Parallel File System: Problem Determination Guide and search for Disk media failure.

A sketch of the first procedure appears after this description.

If the last disk in a storage pool is deleted, the storage pool is deleted. The mmdeldisk command is not permitted to delete the system storage pool. A storage pool must be empty in order for it to be deleted.
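The following hedged sketch illustrates the first procedure for a readable, non-failing disk; the file system and disk names are illustrative, and the mmchdisk and mmrestripefs commands are documented elsewhere in this chapter:

mmchdisk fs0 suspend -d gpfs2nsd   # stop new block allocations to the disk
mmrestripefs fs0 -b                # rebalance, migrating data off the suspended disk
mmdeldisk fs0 gpfs2nsd             # delete the now-empty disk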
Results
Upon successful completion of the mmdeldisk command, these tasks are completed:
v Data that has not been replicated from the target disks is migrated to other disks in the file system.
v Remaining disks are rebalanced, if specified.
Parameters
Device
The device name of the file system to delete the disks from. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.

DiskName[;DiskName...]
Specifies the names of the disks to be deleted from the file system. If there is more than one disk to be deleted, delimit each name with a semicolon (;) and enclose the list in quotation marks.

-F DescFile
Specifies a file that contains the names of the disks (one name per line) to be deleted from the file system.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that participate in the restripe of the file system after the specified disks have been removed. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the restripe of the file system).

For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-a
Specifies that the mmdeldisk command not wait for rebalancing to complete before returning. When this flag is specified, the mmdeldisk command runs asynchronously and returns after the file system descriptor is updated and the rebalancing scan is started, but it does not wait for rebalancing to finish. If no rebalancing is requested (the -r option is not specified), this option has no effect.

-c
Specifies that processing continues even in the event that unreadable data exists on the disks being deleted. Data that has not been replicated is lost. Replicated data is not lost as long as the disks containing the replication are accessible.

-r
Rebalance all existing files in the file system to make more efficient use of the remaining disks.

Note: Rebalancing of files is an I/O intensive and time consuming operation, and is important only for file systems with large files that are mostly invariant. In many cases, normal file update and creation will rebalance your file system over time, without the cost of the rebalancing.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdeldisk command.

You may issue the mmdeldisk command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To delete gpfs2nsd and gpfs3nsd from file system fs0 and rebalance the files across the remaining disks, issue this command:
mmdeldisk fs0 "gpfs2nsd;gpfs3nsd" -r
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
   2 % complete on Tue Feb 14 14:45:18 2006
   3 % complete on Tue Feb 14 14:45:21 2006
   9 % complete on Tue Feb 14 14:45:24 2006
  18 % complete on Tue Feb 14 14:45:27 2006
  27 % complete on Tue Feb 14 14:45:35 2006
  28 % complete on Tue Feb 14 14:45:39 2006
  29 % complete on Tue Feb 14 14:45:43 2006
  30 % complete on Tue Feb 14 14:45:52 2006
  34 % complete on Tue Feb 14 14:46:04 2006
  37 % complete on Tue Feb 14 14:46:18 2006
  45 % complete on Tue Feb 14 14:46:22 2006
  51 % complete on Tue Feb 14 14:46:26 2006
  94 % complete on Tue Feb 14 14:46:29 2006
 100 % complete on Tue Feb 14 14:46:32 2006
Scan completed successfully.
tsdeldisk64 completed.
mmdeldisk: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
See also
mmadddisk Command on page 64
mmchdisk Command on page 97
mmlsdisk Command on page 211
mmrpldisk Command on page 271
Location
/usr/lpp/mmfs/bin
mmdelfileset Command
Name
mmdelfileset Deletes a GPFS fileset.
Synopsis
mmdelfileset Device FilesetName [-f]
Description
The mmdelfileset command deletes a GPFS fileset. The mmdelfileset command fails if the fileset is currently linked into the name space. By default, the mmdelfileset command fails if the fileset contains any contents except for an empty root directory. The root fileset cannot be deleted.

If the deleted fileset is included in a snapshot, the fileset is deleted from the active file system, but remains part of the file system in a deleted state. Filesets in the deleted state are displayed by the mmlsfileset command with their names in parentheses. If the -L flag is specified, the latest including snapshot is also displayed. A deleted fileset's contents are still available in the snapshot (that is, through some path name containing a .snapshots component), since they were saved when the snapshot was created. The mmlsfileset command illustrates the display of a deleted fileset. When the last snapshot that includes the fileset has been deleted, the fileset is fully removed from the file system.

The delete operation fails if the fileset being deleted is not empty. You need to specify the -f option to delete a non-empty fileset. When -f is specified, all of a fileset's child filesets are unlinked, but their content is unaffected.

For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device The device name of the file system that contains the fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. FilesetName Specifies the name of the fileset to be deleted.
Options
-f Forces the deletion of the fileset. All fileset contents are deleted. Any child filesets are first unlinked.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdelfileset command.

You may issue the mmdelfileset command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. This sequence of commands illustrates what happens when attempting to delete a fileset that is linked.
   a. mmlsfileset gpfs1
      The system displays output similar to:

      Filesets in file system gpfs1:
      Name                 Status    Path
      fset1                Linked    /gpfs1/fset1
      fset2                Unlinked  --
2. This sequence of commands illustrates what happens when attempting to delete a fileset that contains user files.
   a. mmlsfileset gpfs1
      The system displays output similar to:

      Filesets in file system gpfs1:
      Name                 Status    Path
      fset1                Linked    /gpfs1/fset1
      fset2                Unlinked  --
   Deleting the fileset with the -f option (mmdelfileset gpfs1 fset2 -f) displays output similar to:

      91 % complete on Tue Dec 17 ...
      94 % complete on Tue Dec 17 ...
      97 % complete on Tue Dec 17 ...
     100 % complete on Tue Dec 17 ...
     Fileset fset2 deleted.
See also
mmchfileset Command on page 104
mmcrfileset Command on page 125
mmlinkfileset Command on page 203
mmlsfileset Command on page 214
mmunlinkfileset Command on page 287
Location
/usr/lpp/mmfs/bin
mmdelfs Command
Name
mmdelfs Removes a GPFS file system.
Synopsis
mmdelfs Device [-p]
Description
The mmdelfs command removes all the structures for the specified file system from the nodes in the cluster. Before you can delete a file system using the mmdelfs command, you must unmount it on all nodes.
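A hedged illustration of this prerequisite, using an illustrative file system name:

mmumount fs0 -a   # unmount the file system on all nodes
mmdelfs fs0       # then remove the file system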
Results
Upon successful completion of the mmdelfs command, these tasks are completed on all nodes:
v Deletes the character device entry from /dev.
v Removes the mount point directory where the file system had been mounted.
Parameters
Device The device name of the file system to be removed. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
Options
-p Indicates that the disks are permanently damaged and the file system information should be removed from the GPFS cluster data even if the disks cannot be marked as available.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdelfs command.

You may issue the mmdelfs command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To delete file system fs0, issue this command:
mmdelfs fs0
See also
mmcrfs Command on page 127
mmchfs Command on page 106
mmlsfs Command on page 217
Location
/usr/lpp/mmfs/bin
mmdelnode Command
Name
mmdelnode Removes one or more nodes from a GPFS cluster.
Synopsis
mmdelnode {-a | -N Node[,Node...] | NodeFile | NodeClass}
Description
Use the mmdelnode command to delete one or more nodes from the GPFS cluster. You may issue the mmdelnode command on any GPFS node.

You must follow these rules when deleting nodes:
1. The node being deleted cannot be the primary or secondary GPFS cluster configuration server unless you intend to delete the entire cluster. Verify this by issuing the mmlscluster command. If a node to be deleted is one of the servers and you intend to keep the cluster, issue the mmchcluster command to assign another node as the server before deleting the node.
2. A node being deleted cannot be defined as an NSD server for any disk unless you intend to delete the entire cluster. Verify this by issuing the mmlsnsd command. If a node to be deleted is an NSD server for one or more disks, use the mmchnsd command to assign another node as an NSD server for the affected disks.
3. Unless all nodes in the cluster are being deleted, run the mmdelnode command from a node that will remain in the cluster.
4. Before you can delete a node, you must unmount all of the GPFS file systems and stop GPFS on the node to be deleted.
5. Exercise caution when shutting down GPFS on quorum nodes. If the number of remaining quorum nodes falls below the requirement for a quorum, you will be unable to perform file system operations. See the General Parallel File System: Concepts, Planning, and Installation Guide and search for quorum.

Note: Since each cluster is managed independently, there is no automatic coordination and propagation of changes between clusters like there is between the nodes within a cluster. This means that if you permanently delete nodes that are being used as contact nodes by other GPFS clusters that can mount your file systems, you should notify the administrators of those GPFS clusters so that they can update their own environments.
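The following hedged sequence illustrates these rules for a single node; the node name k145n04 is hypothetical:

mmlscluster               # confirm the node is not a cluster configuration server
mmlsnsd                   # confirm the node is not an NSD server for any disk
mmumount all -N k145n04   # unmount all GPFS file systems on the node
mmshutdown -N k145n04     # stop GPFS on the node
mmdelnode -N k145n04      # delete the node from the cluster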
Results
Upon successful completion of the mmdelnode command, the specified nodes are deleted from the GPFS cluster.
Parameters
-a
Delete all nodes in the cluster.

-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the set of nodes to be deleted from the cluster. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.

This command does not support a NodeClass of mount.
Options
None.
Exit status
0        Successful completion.
nonzero  A failure has occurred.
Security
You must have root authority to run the mmdelnode command.

You may issue the mmdelnode command from any node that will remain in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To delete all of the nodes in the cluster, issue this command:
mmdelnode -a
See also
mmaddnode Command on page 68
mmcrcluster Command on page 121
mmchconfig Command on page 90
mmlsfs Command on page 217
mmlscluster Command on page 207
Location
/usr/lpp/mmfs/bin
mmdelnsd Command
Name
mmdelnsd Deletes Network Shared Disks (NSDs) from the GPFS cluster.
Synopsis
mmdelnsd {DiskName[;DiskName...] | -F DiskFile}

Or,

mmdelnsd -p NSDId [-N {Node[,Node]}]
Description
The mmdelnsd command serves two purposes:
1. Delete NSDs from the GPFS cluster.
2. Remove the unique NSD volume ID left on the disk after the failure of a previous invocation of the mmdelnsd command. The NSD had been successfully deleted from the GPFS cluster but there was a failure to clear sector 2 of the disk.

The NSD being deleted cannot be part of any file system. Either the mmdeldisk or mmdelfs command must be issued prior to deleting the NSD from the GPFS cluster.

The NSD being deleted cannot be a tiebreaker disk. Use the mmchconfig command to assign new tiebreaker disks prior to deleting the NSD from the cluster. For information on tiebreaker disks, see the General Parallel File System: Concepts, Planning, and Installation Guide and search on quorum.
Results
Upon successful completion of the mmdelnsd command, these tasks are completed:
v All references to the disk are removed from the GPFS cluster data.
v Sector 2 of the disk is cleared of the unique NSD volume ID.
Parameters
DiskName[;DiskName...]
   Specifies the names of the NSDs to be deleted from the GPFS cluster. Specify the names generated when the NSDs were created. Use the mmlsnsd -F command to display disk names. If there is more than one disk to be deleted, delimit each name with a semicolon (;) and enclose the list of disk names in quotation marks.

-F DiskFile
   Specifies a file containing the names of the NSDs, one per line, to be deleted from the GPFS cluster.

-N Node[,Node...]
   Specifies the nodes to which the disk is attached. If no nodes are listed, the disk is assumed to be directly attached to the local node. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.

-p NSDId
   Specifies the NSD volume ID of an NSD that needs to be cleared from the disk, as indicated by the failure of a previous invocation of the mmdelnsd command.
Options
None.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmdelnsd command.

You may issue the mmdelnsd command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To delete gpfs47nsd from the GPFS cluster, issue this command:

mmdelnsd "gpfs47nsd"

The system displays output similar to:

mmdelnsd: Processing disk gpfs47nsd
mmdelnsd: 6027-1371 Propagating the changes to all affected nodes. This is an asynchronous process.

See also

mmcrnsd Command on page 134
mmlsnsd Command on page 226
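If a previous invocation of mmdelnsd removed the NSD from the cluster but failed to clear sector 2 of the disk, the -p form can clear the leftover NSD volume ID. The NSD volume ID and node name in this sketch are hypothetical placeholders, not values from this book:

mmdelnsd -p 0910001B0D85E1E5 -N k145n06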
Location
/usr/lpp/mmfs/bin
mmdelsnapshot Command
Name
mmdelsnapshot Deletes a GPFS snapshot.
Synopsis
mmdelsnapshot Device Directory
Description
Use the mmdelsnapshot command to delete a GPFS snapshot. Once the mmdelsnapshot command has been issued, the snapshot is marked for deletion and cannot be recovered.

If the node from which the mmdelsnapshot command is issued fails, you must reissue the command from another node in the cluster to complete the deletion. Prior to reissuing a subsequent mmdelsnapshot command, the file system may be recovered and mounted, updates may continue to be made, and the mmcrsnapshot command may be issued. However, the mmrestorefs and mmdelsnapshot commands may not be issued on other snapshots until the present snapshot is successfully deleted.

If the mmdelsnapshot command is issued while a conflicting command is running, the mmdelsnapshot command waits for that command to complete. Conflicting operations include:
1. Other snapshot commands on the same snapshot
2. Adding, deleting, or replacing disks in the file system
3. Rebalancing, repairing, or reducing disk fragmentation in a file system

Any files open in the snapshot will be forcibly closed. The user will receive an errno of ESTALE on the next file access.
Parameters
Device
   The device name of the file system for which the snapshot is to be deleted. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter.

Directory
   The name of the snapshot to be deleted.
Options
None.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmdelsnapshot command.

You may issue the mmdelsnapshot command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To delete the snapshot snap1, for the file system fs1, issue this command:
mmdelsnapshot fs1 snap1
Before issuing the command, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots
See also
mmcrsnapshot Command on page 139
mmlssnapshot Command on page 234
mmrestorefs Command on page 261
mmsnapdir Command on page 277
Location
/usr/lpp/mmfs/bin
mmdf Command
Name
mmdf Queries available file space on a GPFS file system.
Synopsis
mmdf Device [-d | -F | -m] [-P PoolName]
Description
Use the mmdf command to display available file space on a GPFS file system. For each disk in the GPFS file system, the mmdf command displays this information, by failure group and storage pool:
v The size of the disk.
v The failure group of the disk.
v Whether the disk is used to hold data, metadata, or both.
v Available space in full blocks.
v Available space in fragments.

Displayed values are rounded down to a multiple of 1024 bytes. If the fragment size used by the file system is not a multiple of 1024 bytes, then the displayed values may be lower than the actual values. This can result in the display of a total value that exceeds the sum of the rounded values displayed for individual disks. The individual values are accurate if the fragment size is a multiple of 1024 bytes.

For the file system, the mmdf command displays:
v The total number of inodes and the number available.

The mmdf command may be run against a mounted or unmounted file system.

Note: The command is I/O intensive and should be run when the system load is light.
Parameters
Device The device name of the file system to be queried for available file space. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
Options
-d
   List only disks that can hold data.

-F
   List the number of inodes and how many of them are free.

-m
   List only disks that can hold metadata.

-P PoolName
   Lists only disks that belong to the requested storage pool. The default is to list all disks.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
If you are a root user:
1. You may issue the mmdf command from any node in the GPFS cluster.
2. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or mmchcluster command, you must ensure:
   a. Proper authorization is granted to all nodes in the GPFS cluster.
   b. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.

If you are a non-root user, you may specify only file systems that belong to the same cluster as the node on which the mmdf command was issued.
Examples
1. To query all disks in the fs2 file system that can hold data, issue this command:
mmdf fs2 -d
2. To query all disks in the fs2 file system that can hold metadata, issue this command:

mmdf fs2 -m

The system displays information similar to:
disk          disk size  failure holds    holds  free KB in     free KB in
name              in KB    group metadata data   full blocks    fragments
-----------   ---------  ------- -------- -----  -------------  ----------
Disks in storage pool: system
gpfs1001nsd     8897968     4001 yes      no     8738816 (98%)  1520 (0%)
              ---------                          -------------  ----------
(pool total)    8897968                          8738816 (98%)  1520 (0%)

Disks in storage pool: sp1
              ---------                          -------------  ----------
(pool total)          0                                0 ( 0%)     0 ( 0%)

              =========                          =============  ==========
(data)                0                                0 ( 0%)     0 ( 0%)
(metadata)      8897968                          8738816 (98%)  1520 (0%)
              =========                          =============  ==========
(total)         8897968                          8738816 (98%)  1520 (0%)
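3. To restrict the report to a single storage pool, the -P option can be used. For example, using the sp1 pool shown in the previous output:

mmdf fs2 -P sp1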
See also
mmchfs Command on page 106
mmcrfs Command on page 127
mmdelfs Command on page 165
mmlsfs Command on page 217
Location
/usr/lpp/mmfs/bin
mmeditacl Command
Name
mmeditacl Creates or changes a GPFS access control list.
Synopsis
mmeditacl [-d] [-k {nfs4 | posix | native}] Filename
Description
Use the mmeditacl command for interactive editing of the ACL of a file or directory. This command uses the default editor, specified in the EDITOR environment variable, to display the current access control information, and allows the file owner to change it. The command verifies the change request with the user before making permanent changes. The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
For information about NFS V4 ACLs, see Chapter 6, Managing GPFS access control lists and NFS export, on page 47 and NFS and GPFS on page 54.

Users may need to see ACLs in their true form as well as how they are translated for access evaluations. There are four cases:
1. By default, mmeditacl returns the ACL in a format consistent with the file system setting, specified using the -k flag on the mmcrfs or mmchfs commands.
   v If the setting is posix, the ACL is shown as a traditional ACL.
   v If the setting is nfs4, the ACL is shown as an NFS V4 ACL.
   v If the setting is all, the ACL is returned in its true form.
2. The command mmeditacl -k nfs4 always produces an NFS V4 ACL.
3. The command mmeditacl -k posix always produces a traditional ACL.
4. The command mmeditacl -k native always shows the ACL in its true form regardless of the file system setting.

Table 5 describes how mmeditacl works.
Table 5. The mmeditacl command for POSIX and NFS V4 ACLs

Command               ACL    mmcrfs -k  Display         -d (default)
mmeditacl             posix  posix      Access ACL      Default ACL
mmeditacl             posix  nfs4       NFS V4 ACL      Error[1]
mmeditacl             posix  all        Access ACL      Default ACL
mmeditacl             nfs4   posix      Access ACL[2]   Default ACL[2]
mmeditacl             nfs4   nfs4       NFS V4 ACL      Error[1]
mmeditacl             nfs4   all        NFS V4 ACL      Error[1]
mmeditacl -k native   posix  any        Access ACL      Default ACL
mmeditacl -k native   nfs4   any        NFS V4 ACL      Error[1]
mmeditacl -k posix    posix  any        Access ACL      Default ACL
mmeditacl -k posix    nfs4   any        Access ACL[2]   Default ACL[2]
mmeditacl -k nfs4     any    any        NFS V4 ACL      Error[1]
[1] NFS V4 ACLs include inherited entries. Consequently, there cannot be a separate default ACL.
[2] Only the mode entries (owner, group, everyone) are translated. The rwx values are derived from the NFS V4 file mode attribute. Since the NFS V4 ACL is more granular in nature, some information is lost in this translation.
In the case of NFS V4 ACLs, there is no concept of a default ACL. Instead, there is a single ACL and the individual access control entries can be flagged as being inherited (either by files, directories, both, or neither). Consequently, specifying the -d flag for an NFS V4 ACL is an error. By its nature, storing an NFS V4 ACL implies changing the inheritable entries (the GPFS default ACL) as well.

Depending on the file system's -k setting (posix, nfs4, or all), mmeditacl may be restricted. The mmeditacl command is not allowed to store an NFS V4 ACL if -k posix is in effect, and is not allowed to store a POSIX ACL if -k nfs4 is in effect. For more information, see the description of the -k flag for the mmchfs, mmcrfs, and mmlsfs commands.
Parameters
Filename The path name of the file or directory for which the ACL is to be edited. If the -d option is specified, Filename must contain the name of a directory.
Options
-d
   Specifies that the default ACL of a directory is to be edited.

-k {nfs4 | posix | native}
   nfs4   Always produces an NFS V4 ACL.
   posix  Always produces a traditional ACL.
   native Always shows the ACL in its true form regardless of the file system setting.

   This option should not be used for routine ACL manipulation. It is intended to provide a way to show the translations that are done, for example, how a posix ACL is translated to NFS V4 format. Be aware that if the -k nfs4 flag is used, but the file system does not allow NFS V4 ACLs, you will not be able to store the ACL that is returned. If the file system does support NFS V4 ACLs, the -k nfs4 flag is an easy way to convert an existing posix ACL to nfs4 format.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You may issue the mmeditacl command only from a node in the GPFS cluster where the file system is mounted. The mmeditacl command may be used to display an ACL. POSIX ACLs may be displayed by any user with access to the file or directory. NFS V4 ACLs have a READ_ACL permission that is required for non-privileged users to be able to see an ACL. To change an existing ACL, the user must either be the owner, the root user, or someone with control permission (WRITE_ACL is required where the existing ACL is of type NFS V4).
Examples
To edit the ACL for a file named project2.history, issue this command:
mmeditacl project2.history
The current ACL entries are displayed using the default editor, provided that the EDITOR environment variable specifies a complete path name. When the file is saved, the system displays information similar to:
mmeditacl: 6027-967 Should the modified ACL be applied? (yes) or (no)
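If the file system supports NFS V4 ACLs, an existing POSIX ACL can be converted by editing it in NFS V4 format, as described under the -k option. A sketch using the same file as above:

mmeditacl -k nfs4 project2.history

When the edited ACL is saved and the change is confirmed, the ACL is stored in nfs4 format.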
See also
mmdelacl Command on page 157
mmgetacl Command on page 194
mmputacl Command on page 245
Location
/usr/lpp/mmfs/bin
mmedquota Command
Name
mmedquota Sets quota limits.
Synopsis
mmedquota {-u [-p ProtoUser] User... | -g [-p ProtoGroup] Group... | -j [-p ProtoFileset] Device:Fileset... | -d {-u User... | -g Group... | -j Device:Fileset...} | -t {-u | -g | -j}}
Description
The mmedquota command serves two purposes:
1. Sets or changes quota limits or grace periods for users, groups, and filesets in the cluster from which the command is issued.
2. Reestablishes user, group, or fileset default quotas for all file systems with default quotas enabled in the cluster.

The mmedquota command displays the current values for these limits, if any, and prompts you to enter new values using your default editor:
v Current block usage (display only)
v Current inode usage (display only)
v Inode soft limit
v Inode hard limit
v Block soft limit. Displayed in KB, but may be specified using g, G, k, K, m, or M. If no suffix is provided, the number is assumed to be in bytes.
v Block hard limit. Displayed in KB, but may be specified using g, G, k, K, m, or M. If no suffix is provided, the number is assumed to be in bytes.

Note: A block or inode limit of 0 indicates no limit.

The mmedquota command waits for the edit window to be closed before checking and applying new values. If an incorrect entry is made, you must reissue the command and enter the correct values.

You can also use the mmedquota command to change the file system-specific grace periods for block and file usage if the default of one week is unsatisfactory. The grace period is the time during which users can exceed the soft limit. If the user, group, or fileset does not show reduced usage below the soft limit before the grace period expires, the soft limit becomes the new hard limit.

When setting quota limits for a file system, replication within the file system should be considered. GPFS quota management takes replication into account when reporting on and determining if quota limits have been exceeded for both block and file usage. In a file system that has either type of replication set to a value of two, the values reported by both the mmlsquota command and the mmrepquota command are double the value reported by the ls command.

The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
Parameters
User Name or user ID of target user for quota editing.
Options
-d
   Reestablish default quota limits for a specific user, group, or fileset that has had an explicit quota limit set by a previous invocation of the mmedquota command.

-g
   Sets quota limits or grace times for groups.

-j
   Sets quota limits or grace times for filesets.

-p
   Applies already-established limits to a particular user, group, or fileset.
   When invoked with the -u option, ProtoUser limits are automatically applied to the specified User or space-delimited list of users.
   When invoked with the -g option, ProtoGroup limits are automatically applied to the specified Group or space-delimited list of groups.
   When invoked with the -j option, ProtoFileset limits are automatically applied to the specified fileset or space-delimited list of fileset names.
   You can specify any user as a ProtoUser for another User, any group as a ProtoGroup for another Group, or any fileset as a ProtoFileset for another Fileset.

-t
   Sets the grace period during which quotas can exceed the soft limit before it is imposed as a hard limit. The default grace period is one week. This flag is followed by one of the following flags: -u, -g, or -j, to specify whether the changes apply to users, groups, or filesets, respectively.

-u
   Sets quota limits or grace times for users.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmedquota command. GPFS must be running on the node from which the mmedquota command is issued.
Examples
1. To set user quotas for userid paul, issue this command:
mmedquota -u paul
2. To reset default group quota values for the group blueteam, issue this command:
mmedquota -d -g blueteam
3. To change the grace periods for all users, issue this command:
mmedquota -t -u
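4. To copy established quota limits from one user to another, the -p option can be used. In this sketch, the limits already set for user paul are applied to user mary (the user name mary is illustrative and does not appear elsewhere in this book):

mmedquota -u -p paul mary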
See also
mmcheckquota Command on page 101
mmdefedquota Command on page 147
mmdefquotaoff Command on page 149
mmdefquotaon Command on page 151
mmlsquota Command on page 231
mmquotaoff Command on page 248
mmquotaon Command on page 250
mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmexportfs Command
Name
mmexportfs - Retrieves the information needed to move a file system to a different cluster.
Synopsis
mmexportfs {Device | all} -o ExportFilesysData
Description
The mmexportfs command, in conjunction with the mmimportfs command, can be used to move one or more GPFS file systems from one GPFS cluster to another GPFS cluster, or to temporarily remove file systems from the cluster and restore them at a later time. The mmexportfs command retrieves all relevant file system and disk information and stores it in the file specified with the -o parameter. This file must later be provided as input to the mmimportfs command. When running the mmexportfs command, the file system must be unmounted on all nodes.

When all is specified in place of a file system name, any disks that are not associated with a file system will be exported as well.

Exported file systems remain unusable until they are imported back with the mmimportfs command to the same or a different GPFS cluster.
Results
Upon successful completion of the mmexportfs command, all configuration information pertaining to the exported file system and its disks is removed from the configuration data of the current GPFS cluster and is stored in the user-specified file ExportFilesysData.
Parameters
Device | all
   The device name of the file system to be exported. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. Specify all to export all GPFS file systems, as well as all disks that do not belong to a file system yet. This must be the first parameter.

-o ExportFilesysData
   The path name of a file to which the file system information is to be written. This file must be provided as input to the subsequent mmimportfs command.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmexportfs command. You may issue the mmexportfs command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To export all file systems in the current cluster, issue this command:
mmexportfs all -o /u/admin/exportfile
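To export only the file system fs1, writing its configuration to a file of your choosing (the path here is illustrative), a command of this form could be used:

mmexportfs fs1 -o /u/admin/fs1file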
See also
mmimportfs Command on page 200
Location
/usr/lpp/mmfs/bin
mmfsck Command
Name
mmfsck Checks and repairs a GPFS file system.
Synopsis
mmfsck Device [-n | -y] [-c | -o] [-t Directory] [-v | -V] [-N {Node[,Node...] | NodeFile | NodeClass}]

The file system must be unmounted before you can run the mmfsck command with any option other than -o.
Description
The mmfsck command in offline mode is intended to be used only in situations where there have been disk or communications failures that have caused MMFS_FSSTRUCT error log entries to be issued, or where it is known that disks have been forcibly removed or are otherwise permanently unavailable for use in the file system, and other unexpected symptoms are seen by users. In general it is unnecessary to run mmfsck in offline mode unless under the direction of the IBM Support Center.

If neither the -n nor -y flag is specified, the mmfsck command runs interactively, prompting you for permission to repair each consistency error as reported. It is suggested that in all but the most severely damaged file systems, you run the mmfsck command interactively (the default).

The occurrence of I/O errors, or the appearance of a message telling you to run the mmfsck command, may indicate file system inconsistencies. If either situation occurs, use the mmfsck command to check file system consistency and interactively repair the file system. For information about file system maintenance and repair, see Checking and repairing a file system on page 18.

The mmfsck command checks for these inconsistencies:
v Blocks marked allocated that do not belong to any file. The corrective action is to mark the block free in the allocation map.
v Files for which an inode is allocated and no directory entry exists (orphaned files). The corrective action is to create directory entries for these files in a lost+found subdirectory of the fileset to which the orphaned file or directory belongs. The index number of the inode is assigned as the name. If you do not allow the mmfsck command to reattach an orphaned file, it asks for permission to delete the file.
v Directory entries pointing to an inode that is not allocated. The corrective action is to remove the directory entry.
v Incorrectly formed directory entries. A directory file contains the inode number and the generation number of the file to which it refers. When the generation number in the directory does not match the generation number stored in the file's inode, the corrective action is to remove the directory entry.
v Incorrect link counts on files and directories. The corrective action is to update them with accurate counts.
v Invalid policy files. The corrective action is to delete the file.
v Various problems related to filesets: missing or corrupted fileset metadata, inconsistencies in directory structure related to filesets, missing or corrupted fileset root directory, and other problems in internal data structures.

If you are repairing a file system due to node failure and the file system has quotas enabled, it is suggested that you run the mmcheckquota command to recreate the quota files.

Indications leading you to the conclusion that you should run the mmfsck command include:
v An MMFS_FSSTRUCT along with an MMFS_SYSTEM_UNMOUNT error log entry on any node, indicating some critical piece of the file system is inconsistent.
v Disk media failures.
v Partial disk failure.
v EVALIDATE=214, Invalid checksum or other consistency check failure on a disk data structure, reported in error logs or returned to an application.

For further information on recovery actions and how to contact the IBM Support Center, see the General Parallel File System: Problem Determination Guide.

If you are running the online mmfsck command to free allocated blocks that do not belong to any files, plan to make file system repairs when system demand is low. This is an I/O intensive activity and it can affect system performance.
Results
If the file system is inconsistent, the mmfsck command displays information about the inconsistencies and (depending on the option entered) may prompt you for permission to repair them. The mmfsck command tries to avoid actions that may result in loss of data. In some cases, however, it may indicate the destruction of a damaged file.

If there are no file system inconsistencies to detect, the mmfsck command reports this information for the file system:
v Number of files
v Used blocks
v Free blocks

All corrective actions, with the exception of recovering lost disk blocks (blocks that are marked as allocated but do not belong to any file), require that the file system be unmounted on all nodes. If the mmfsck command is run on a mounted file system, lost blocks are recovered but any other inconsistencies are only reported, not repaired.

If a bad disk is detected, the mmfsck command stops the disk and writes an entry to the error log. The operator must manually start and resume the disk when the problem is fixed.

The file system must be unmounted on all nodes before the mmfsck command can repair file system inconsistencies.
Parameters
Device
   The device name of the file system to be checked and repaired. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.

-N {Node[,Node...] | NodeFile | NodeClass}
   Specify the nodes to participate in the check and repair of the file system. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the check and repair of the file system). For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-c
   When the file system log has been lost and the file system is replicated, this option specifies that the mmfsck command attempt corrective action by comparing the replicas of metadata and data. If this error condition occurs, it is indicated by an error log entry. The -c and -o flags are mutually exclusive.

-n
   Specifies a no response to all prompts from the mmfsck command. The option reports inconsistencies but it does not change the file system. To save this information, redirect it to an output file when you issue the mmfsck command.

-o
   Specifies that the file system can be mounted during the operation of the mmfsck command. Online mode does not perform a full file system consistency check, but blocks marked as allocated that do not belong to a file are recovered. The -c and -o flags are mutually exclusive.

-y
   Specifies a yes response to all prompts from the mmfsck command. Use this option only on severely damaged file systems. It allows the mmfsck command to take any action necessary for repairs.

-t Directory
   Specifies the directory that GPFS uses for temporary storage during mmfsck command processing. Although you can issue the command from any node, you must specify a temporary storage directory on the file system manager node. In addition to the location requirement, the storage directory has a minimum space requirement. The minimum space required (in bytes) is equal to the maximum number of inodes in the file system multiplied by 12. The default directory for mmfsck processing is /tmp.

-v
   Specifies that the output is verbose.

-V
   Specifies that the output is verbose and contains information for debugging purposes.
Exit status
0
   Successful completion.

2
   The command was interrupted before it completed checks or repairs.

4
   The command changed the file system and it must now be restarted.

8
   The file system contains damage that has not been repaired.
Security
You must have root authority to run the mmfsck command.

You may issue the mmfsck command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To run the mmfsck command on the fs1 file system, receive a report, but not fix inconsistencies, issue this command:
mmfsck fs1 -n
Checking "fs1" Checking inodes Checking inode map file Checking directories and files Checking log files Checking extended attributes file Checking allocation summary file Checking policy file Validated policy for stripe group fs1: parsed 3 Placement Rules, 0 Migrate/Delete/Exclude Rules Checking filesets metadata Checking file reference counts Checking file system replication status 1212416 87560 0 0 0 0 0 0 7211746 227650 0 0 0 inodes allocated repairable repaired damaged deallocated orphaned attached subblocks allocated unreferenced deletable deallocated
mmfsck found no inconsistencies in this file system.

2. To run the mmfsck command on the /dev/fs2 file system, receive a report, and fix inconsistencies, issue this command:
mmfsck /dev/fs2 -y
File inode 6912 is not referenced by any directory.
Reattach inode to lost+found? yes
Checking file system replication status

     33792 inodes
        46 allocated
         0 repairable
         0 repaired
         0 damaged
         0 deallocated
         1 orphaned
         1 attached

   3332520 subblocks
     19762 allocated
         0 unreferenced
         0 deletable
           deallocated
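3. As a further illustration, a file system can be checked while it remains mounted by using online mode, which recovers lost blocks but only reports other inconsistencies:

mmfsck fs1 -o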
See also
mmcheckquota Command on page 101
mmcrfs Command on page 127
mmdelfs Command on page 165
mmdf Command on page 174
mmlsfs Command on page 217
Location
/usr/lpp/mmfs/bin
mmfsctl Command
Name
mmfsctl Issues a file system control request.
Synopsis
mmfsctl Device {suspend | resume}

Or,

mmfsctl Device {exclude | include} {-d DiskName[;DiskName...] | -F DiskFile | -G FailureGroup}

Or,

mmfsctl Device syncFSconfig {-n RemoteNodesFile | -C remoteClusterName} [-S SpecFile]
Description
Use the mmfsctl command to issue control requests to a particular GPFS file system. The command is used to temporarily suspend the processing of all application I/O requests, and later resume them, as well as to synchronize the file system's configuration state between peer clusters in disaster recovery environments. See Establishing disaster recovery for your GPFS cluster in General Parallel File System: Advanced Administration Guide.

Before creating a FlashCopy image of the file system, the user must run mmfsctl suspend to temporarily quiesce all file system activity and flush the internal buffers on all nodes that mount this file system. The on-disk metadata will be brought to a consistent state, which provides for the integrity of the FlashCopy snapshot. If a request to the file system is issued by the application after the invocation of this command, GPFS suspends this request indefinitely, or until the user issues mmfsctl resume. Once the FlashCopy image has been taken, the mmfsctl resume command can be issued to resume normal operation and complete any pending I/O requests.

The mmfsctl syncFSconfig command extracts the file system's related information from the local GPFS configuration data, transfers this data to one of the nodes in the peer cluster, and attempts to import it there. Once the GPFS file system has been defined in the primary cluster, users run this command to import the configuration of this file system into the peer recovery cluster. After producing a FlashCopy image of the file system and propagating it to the peer cluster using Peer-to-Peer Remote Copy (PPRC), users similarly run this command to propagate any relevant configuration changes made in the cluster after the previous snapshot.

The primary cluster configuration server of the peer cluster must be available and accessible using remote shell and remote copy at the time of the invocation of the mmfsctl syncFSconfig command. Also, the peer GPFS clusters should be defined to use the same remote shell and remote copy mechanism, and they must be set up to allow nodes in peer clusters to communicate without the use of a password.

Not all administrative actions performed on the file system necessitate this type of resynchronization. It is required only for those actions that modify the file system information maintained in the local GPFS configuration data, which includes:
v Additions, removals, and replacements of disks (commands mmadddisk, mmdeldisk, mmrpldisk)
v Modifications to disk attributes (command mmchdisk)
v Changes to the file system's mount point (command mmchfs -T)
v Changes to the file system's device name (command mmchfs -W)

The process of synchronizing the file system configuration data can be automated by utilizing the syncfsconfig user exit.

The mmfsctl exclude command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur. The mmfsctl exclude command can be used to manually override the file system descriptor quorum after a site-wide disaster. See Establishing disaster recovery for your GPFS cluster in General Parallel File System: Advanced Administration Guide. This command enables users to restore normal access to the file system with less than a quorum of available file system descriptor replica disks, by effectively excluding the specified disks from all subsequent operations on the file system descriptor. After repairing the disks, the mmfsctl include command can be issued to restore the initial quorum configuration.
Parameters
Device
   The device name of the file system. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. If all is specified with the syncFSconfig option, this command is performed on all GPFS file systems defined in the cluster.

exclude
   Instructs GPFS to exclude the specified group of disks from all subsequent operations on the file system descriptor, and to change their availability state to down, if the conditions in the Note below are met. If necessary, this command assigns additional disks to serve as the disk descriptor replica holders, and migrates the disk descriptor to the new replica set. The excluded disks are not deleted from the file system, and still appear in the output of the mmlsdisk command.
   Note: The mmfsctl exclude command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur.

include
   Informs GPFS that the previously excluded disks have become operational again. This command writes the up-to-date version of the disk descriptor to each of the specified disks, and clears the excl tag.

resume
   Instructs GPFS to resume the normal processing of I/O requests on all nodes.

suspend
   Instructs GPFS to flush the internal buffers on all nodes, bring the file system to a consistent state on disk, and suspend the processing of all subsequent application I/O requests.

syncFSconfig
   Synchronizes the configuration state of a GPFS file system between the local cluster and its peer in two-cluster disaster recovery configurations.

-C remoteClusterName
   Specifies the name of the GPFS cluster that owns the remote GPFS file system.

-d DiskName[;DiskName...]
   Specifies the names of the NSDs to be included or excluded by the mmfsctl command. Separate the names with semicolons (;) and enclose the list of disk names in quotation marks.
-F DiskFile
   Specifies a file containing the names of the NSDs, one per line, to be included or excluded by the mmfsctl command.

-G FailureGroup
   A number identifying the failure group for disks to be included or excluded by the mmfsctl command.

-n RemoteNodesFile
   Specifies a list of contact nodes in the peer recovery cluster that GPFS uses when importing the configuration data into that cluster. Although any node in the peer cluster can be specified here, users are advised to specify the identities of the peer cluster's primary and secondary cluster configuration servers, for efficiency reasons.

-S SpecFile
   Specifies the description of changes to be made to the file system in the peer cluster during the import step. The format of this file is identical to that of the ChangeSpecFile used as input to the mmimportfs command. This option can be used, for example, to define the assignment of the NSD servers for use in the peer cluster.
Options
None.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Results
The mmfsctl command returns 0 if successful.
Security
You must have root authority to run the mmfsctl command.

You may issue the mmfsctl command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This sequence of commands creates a FlashCopy image of the file system and propagates this image to the recovery cluster using the Peer-to-Peer Remote Copy technology. The following configuration is assumed:
Site                        LUNs
Primary cluster (site A)    lunA1, lunA2
Recovery cluster (site B)   lunB1
lunA1   FlashCopy source
lunA2   FlashCopy target, PPRC source
lunB1   PPRC target

A single GPFS file system named fs0 has been defined in the primary cluster over lunA1.

1. In the primary cluster, suspend all file system I/O activity and flush the GPFS buffers:
mmfsctl fs0 suspend
2. Establish a FlashCopy pair using lunA1 as the source and lunA2 as the target.
3. Resume the file system I/O activity:
mmfsctl fs0 resume
4. Establish a Peer-to-Peer Remote Copy (PPRC) path and a synchronous PPRC volume pair lunA2-lunB1 (primary-secondary). Use the copy entire volume option and leave the permit read from secondary option disabled.
5. Wait for the completion of the FlashCopy background task. Wait for the PPRC pair to reach the duplex (fully synchronized) state.
6. Terminate the PPRC volume pair lunA2-lunB1.
7. If this is the first time the snapshot is taken, or if the configuration state of fs0 changed since the previous FlashCopy snapshot, propagate the most recent configuration to site B:
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
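As a separate illustration of overriding the file system descriptor quorum after a disaster, the surviving disks could be retained and the disconnected ones excluded. The disk name in this sketch is a hypothetical placeholder:

mmfsctl fs0 exclude -d "gpfs10nsd"

After the failed disks are repaired and reconnected, issuing mmfsctl fs0 include with the same disk list restores the initial quorum configuration.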
Location
/usr/lpp/mmfs/bin
mmgetacl Command
Name
mmgetacl Displays the GPFS access control list of a file or directory.
Synopsis
mmgetacl [-d] [-o OutFilename] [-k {nfs4 | posix | native}] Filename
Description
Use the mmgetacl command to display the ACL of a file or directory.

For information about NFS V4 ACLs, see Chapter 6, Managing GPFS access control lists and NFS export, on page 47 and NFS and GPFS on page 54.

Users may need to see ACLs in their true form as well as how they are translated for access evaluations. There are four cases:
1. By default, mmgetacl returns the ACL in a format consistent with the file system setting, specified using the -k flag on the mmcrfs or mmchfs commands.
   v If the setting is posix, the ACL is shown as a traditional ACL.
   v If the setting is nfs4, the ACL is shown as an NFS V4 ACL.
   v If the setting is all, the ACL is returned in its true form.
2. The command mmgetacl -k nfs4 always produces an NFS V4 ACL.
3. The command mmgetacl -k posix always produces a traditional ACL.
4. The command mmgetacl -k native always shows the ACL in its true form regardless of the file system setting.

Table 6 describes how mmgetacl works.
Table 6. The mmgetacl command for POSIX and NFS V4 ACLs

Command              ACL    mmcrfs -k  Display         -d (default)
mmgetacl             posix  posix      Access ACL      Default ACL
mmgetacl             posix  nfs4       NFS V4 ACL      Error[1]
mmgetacl             posix  all        Access ACL      Default ACL
mmgetacl             nfs4   posix      Access ACL[2]   Default ACL[2]
mmgetacl             nfs4   nfs4       NFS V4 ACL      Error[1]
mmgetacl             nfs4   all        NFS V4 ACL      Error[1]
mmgetacl -k native   posix  any        Access ACL      Default ACL
mmgetacl -k native   nfs4   any        NFS V4 ACL      Error[1]
mmgetacl -k posix    posix  any        Access ACL      Default ACL
mmgetacl -k posix    nfs4   any        Access ACL[2]   Default ACL[2]
mmgetacl -k nfs4     any    any        NFS V4 ACL      Error[1]
[1] NFS V4 ACLs include inherited entries. Consequently, there cannot be a separate default ACL.
[2] Only the mode entries (owner, group, everyone) are translated. The rwx values are derived from the NFS V4 file mode attribute. Since the NFS V4 ACL is more granular in nature, some information is lost in this translation.
Parameters
Filename The path name of the file or directory for which the ACL is to be displayed. If the -d option is specified, Filename must contain the name of a directory.
Options
-d
   Specifies that the default ACL of a directory is to be displayed.

-k {nfs4 | posix | native}
   nfs4   Always produces an NFS V4 ACL.
   posix  Always produces a traditional ACL.
   native Always shows the ACL in its true form regardless of the file system setting.

-o OutFilename
   The path name of a file to which the ACL is to be written.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have read access to the directory where the file exists to run the mmgetacl command. You may issue the mmgetacl command only from a node in the GPFS cluster where the file system is mounted.
Examples
1. To display the ACL for a file named project2.history, issue this command:
mmgetacl project2.history
2. This is an example of an NFS V4 ACL displayed using mmgetacl. Each entry consists of three lines, reflecting the greater number of permissions in a text format. An entry is either an allow entry or a deny entry. An X indicates that the particular permission is selected; a minus sign (-) indicates that it is not selected. The following access control entry explicitly allows READ, EXECUTE and READ_ATTR to the staff group on a file:
group:staff:r-x-:allow
 (X)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED
3. This is an example of a directory ACL, which may include inherit entries (the equivalent of a default ACL). These do not apply to the directory itself, but instead become the initial ACL for any objects created within the directory. The following access control entry explicitly denies READ/LIST, READ_ATTR, and EXEC/SEARCH to the sys group:
group:sys:----:deny:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED
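4. To write an ACL to a file instead of standard output, the -o option can be used, for example to keep a copy of the ACL for later reuse. The output file name here is illustrative:

mmgetacl -o project2.acl project2.history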
See also
mmdelacl Command on page 157
mmeditacl Command on page 177
mmputacl Command on page 245
Location
/usr/lpp/mmfs/bin
mmgetstate Command
Name
mmgetstate Displays the state of the GPFS daemon on one or more nodes.
Synopsis
mmgetstate [-L] [-s] [-v] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmgetstate command to show the state of the GPFS daemon on one or more nodes.
Parameters
-a
   List all nodes in the GPFS cluster. The option does not display information for nodes that cannot be reached. You may obtain more information if you specify the -v option.

-N {Node[,Node...] | NodeFile | NodeClass}
   Directs the mmgetstate command to return GPFS daemon information for a set of nodes. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4. This command does not support a NodeClass of mount.
Options
-L
   Display quorum, number of nodes up, total number of nodes, and other extended node information.

-s
   Display summary information such as: number of local and remote nodes that have joined in the cluster, number of quorum nodes.

-v
   Display intermediate error messages.
The GPFS states recognized and displayed by this command are:

active
   GPFS is ready for operations.

arbitrating
   A node is trying to form a quorum with the other available nodes.

down
   GPFS daemon is not running on the node.

unknown
   Unknown value. Node cannot be reached or some other error occurred.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmgetstate command.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To display the quorum, the number of nodes up, and the total number of nodes for the GPFS cluster, issue:
mmgetstate -a -L
The 3 under the Quorum column means that you must have three quorum nodes up to achieve quorum.

2. This is an example of a cluster using node quorum with tiebreaker disks. Note the * in the Quorum field, which indicates that tiebreaker disks are being used:
mmgetstate -a -L
The system displays output similar to:

 Number of remote nodes joined in this cluster:    0
 Number of quorum nodes defined in the cluster:    5
 Number of quorum nodes active in the cluster:     5
 Quorum = 3, Quorum achieved
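3. To display the GPFS daemon state for a particular set of nodes, the -N form can be used. The node names in this sketch are illustrative only:

mmgetstate -N k145n04,k145n05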
See also
mmchconfig Command on page 90
mmcrcluster Command on page 121
mmshutdown Command on page 275
mmstartup Command on page 280
Location
/usr/lpp/mmfs/bin
mmimportfs Command
Name
mmimportfs - Imports into the cluster one or more file systems that were created in another GPFS cluster.
Synopsis
mmimportfs {Device | all} -i ImportfsFile [-S ChangeSpecFile]
Description
The mmimportfs command, in conjunction with the mmexportfs command, can be used to move into the current GPFS cluster one or more file systems that were created in another GPFS cluster. The mmimportfs command extracts all relevant file system and disk information from the ExportFilesysData file specified with the -i parameter. This file must have been created by the mmexportfs command. When all is specified in place of a file system name, any disks that are not associated with a file system will be imported as well.

If the file systems being imported were created on nodes that do not belong to the current GPFS cluster, the mmimportfs command assumes that all disks have been properly moved, and are online and available to the appropriate nodes in the current cluster. If any node in the cluster, including the node on which you are running the mmimportfs command, does not have access to one or more disks, use the -S option to assign NSD servers to those disks.

The mmimportfs command attempts to preserve any NSD server assignments that were in effect when the file system was exported. If the file system was exported from a cluster created with a version of GPFS prior to 2.3, it is possible that the disks of the file system are not NSDs. Such disks will be automatically converted into NSDs by the mmimportfs command.

After the mmimportfs command completes, use mmlsnsd to display the NSD server names that are assigned to each of the disks in the imported file system. Use mmchnsd to change the current NSD server assignments as needed. Similarly, use mmlsdisk to display the failure groups to which each disk belongs, and use mmchdisk to make adjustments if necessary.

If you are importing file systems into a cluster that already contains GPFS file systems, it is possible to encounter name conflicts. You must resolve such conflicts before the mmimportfs command can succeed. You can use the mmchfs command to change the device name and mount point of an existing file system. If there are disk name conflicts, use the mmcrnsd command to define new disks and specify unique names (rather than let the command generate names). Then replace the conflicting disks using mmrpldisk and remove them from the cluster using mmdelnsd.
Results
Upon successful completion of the mmimportfs command, all configuration information pertaining to the file systems being imported is added to configuration data of the current GPFS cluster.
Parameters
Device | all
   The device name of the file system to be imported. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. Specify all to import all GPFS file systems, as well as all disks that do not belong to a file system yet. This must be the first parameter.

-i ImportfsFile
   The path name of the file containing the file system information. This file must have previously been created with the mmexportfs command.

-S ChangeSpecFile
   The path name of an optional file containing disk descriptors, one per line, in the format:
DiskName:ServerList
DiskName
   Is the name of a disk from the file system being imported.

ServerList
   Is a comma-separated list of NSD server nodes. You can specify up to eight NSD servers in this list. The defined NSD will preferentially use the first server on the list. If the first server is not available, the NSD will use the next available server on the list. If you do not define a server list, GPFS assumes that the disk is SAN-attached to all nodes in the cluster. If all nodes in the cluster do not have access to the disk, or if the file system to which the disk belongs is to be accessed by other GPFS clusters, you must specify a server list.

Note:
1. You cannot change the name of a disk. You cannot change the disk usage or failure group assignment with the mmimportfs command. Use the mmchdisk command for this purpose.
2. All disks that do not have descriptors in ChangeSpecFile are assigned the NSD servers that they had at the time the file system was exported. All disks with NSD servers that are not valid are assumed to be SAN-attached to all nodes in the cluster. Use the mmchnsd command to assign new or change existing NSD server nodes.
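As an illustration, a ChangeSpecFile passed with -S might contain lines such as the following, where the disk and node names are hypothetical placeholders. The first disk is given a primary NSD server and one backup; the second is assigned a single NSD server:

gpfs20nsd:k145n06,k145n07
gpfs21nsd:k145n07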
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmimportfs command.

You may issue the mmimportfs command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To import all file systems in the current cluster, issue this command:
mmimportfs all -i /u/admin/exportfile
mmimportfs: Processing file system fs2 ...
mmimportfs: Processing disk gpfs1nsd1
mmimportfs: Processing disk gpfs5nsd

mmimportfs: Processing disks that do not belong to any file system ...
mmimportfs: Processing disk gpfs6nsd
mmimportfs: Processing disk gpfs1001nsd

mmimportfs: Committing the changes ...

mmimportfs: The following file systems were successfully imported:
        fs1
        fs2
mmimportfs: 6027-1371 Propagating the changes to all affected nodes.
This is an asynchronous process.
See also
mmexportfs Command on page 183
Location
/usr/lpp/mmfs/bin
mmlinkfileset Command
Name
mmlinkfileset Creates a junction that references the root directory of a GPFS fileset.
Synopsis
mmlinkfileset Device FilesetName [-J JunctionPath]
Description
The mmlinkfileset command creates a junction at JunctionPath that references the root directory of FilesetName. The junction is a special directory entry, much like a POSIX hard link, that connects a name in a directory of one fileset, the parent, to the root directory of a child fileset. From the user's viewpoint, a junction always appears as if it were a directory, but the user is not allowed to issue the unlink or rmdir commands on a junction. Instead, the mmunlinkfileset command must be used to remove a junction.

If JunctionPath is not specified, the junction is created in the current directory with the name FilesetName. The user may use the mv command to move the junction to a new location within the parent fileset, but the mv command is not allowed to move the junction to a different fileset.

For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device
   The device name of the file system that contains the fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.

FilesetName
   Specifies the name of the fileset to be linked. It must not already be linked into the namespace.

-J JunctionPath
   Specifies the name of the junction. The name must not refer to an existing file system object.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have root authority to run the mmlinkfileset command.

You may issue the mmlinkfileset command from any node in the GPFS cluster.

When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This command links fileset fset1 in file system gpfs1 to junction path /gpfs1/fset1:
mmlinkfileset gpfs1 fset1 -J /gpfs1/fset1
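If -J is omitted, the junction is created in the current directory and takes the name of the fileset. For example, with fset2 being a hypothetical unlinked fileset in gpfs1, this command would create the junction ./fset2:

mmlinkfileset gpfs1 fset2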
See also
mmchfileset Command on page 104
mmcrfileset Command on page 125
mmdelfileset Command on page 162
mmlsfileset Command on page 214
mmunlinkfileset Command on page 287
Location
/usr/lpp/mmfs/bin
mmlsattr Command
Name
mmlsattr Queries file attributes.
Synopsis
mmlsattr [-L] FileName [FileName...]
Description
Use the mmlsattr command to display attributes of a file.
Results
For the specified file, the mmlsattr command lists:
v Current number of copies of data for a file and the maximum value
v Number of copies of the metadata for a file and the maximum value
v Whether the Direct I/O caching policy is in effect for a file
Parameters
FileName
   The name of the file to be queried. You must enter at least one file name.
Options
-L
   Displays additional file attributes:
   v The file's assigned storage pool name.
   v The name of the fileset that includes the file.
   v The name of the snapshot that includes the file. If the file is not part of a snapshot, an empty string is displayed.
   v Whether the file is illplaced. This word would appear under flags.
Exit status
0
   Successful completion.

nonzero
   A failure has occurred.
Security
You must have read access to run the mmlsattr command. You may issue the mmlsattr command only from a node in the GPFS cluster where the file system is mounted.
Examples
1. To list the attributes of a file named project4.sched, issue this command:
mmlsattr -L /fs0/project4.sched
The command displays the file name and, with -L, these additional fields: metadata replication, data replication, flags, storage pool name, fileset name, and snapshot name.

2. To show the attributes for all files in the root directory of file system fs0, issue this command:

mmlsattr /fs0/*
See also
mmchattr Command on page 84
Location
/usr/lpp/mmfs/bin
mmlscluster Command
Name
mmlscluster Displays the current configuration information for a GPFS cluster.
Synopsis
mmlscluster [--cnfs]
Description
Use the mmlscluster command to display the current configuration information for a GPFS cluster.

For the GPFS cluster, the mmlscluster command displays:
v The cluster name
v The cluster ID
v The GPFS UID domain
v The remote shell command being used
v The remote file copy command being used
v The primary GPFS cluster configuration server
v The secondary GPFS cluster configuration server
v A list of nodes belonging to the GPFS cluster
For each node, the command displays:
v The node number assigned to the node by GPFS
v GPFS daemon node interface name
v Primary network IP address
v GPFS administration node interface name
v Remarks, such as whether the node is a quorum node or not
Parameters
NONE
Options
| --cnfs Displays information about clustered NFS.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmlscluster command. You may issue the mmlscluster command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To display the current configuration information for the GPFS cluster, issue this command:
mmlscluster
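The response has the following general shape (an illustrative sketch; the cluster name, configuration servers, IP addresses, and node entries are hypothetical):

GPFS cluster information
========================
  GPFS cluster name:         cluster1.kgn.ibm.com
  GPFS cluster id:           680681562214606028
  GPFS UID domain:           cluster1.kgn.ibm.com
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    c5n97g.ppd.pok.ibm.com
  Secondary server:  c5n98g.ppd.pok.ibm.com

 Node  Daemon node name        IP address   Admin node name         Designation
--------------------------------------------------------------------------------
   1   c5n97g.ppd.pok.ibm.com  9.114.68.97  c5n97g.ppd.pok.ibm.com  quorum
   2   c5n98g.ppd.pok.ibm.com  9.114.68.98  c5n98g.ppd.pok.ibm.com  quorum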
See also
mmaddnode Command on page 68
mmchcluster Command on page 87
mmcrcluster Command on page 121
mmdelnode Command on page 167
Location
/usr/lpp/mmfs/bin
mmlsconfig Command
Name
mmlsconfig Displays the current configuration data for a GPFS cluster.
Synopsis
mmlsconfig
Description
Use the mmlsconfig command to display the current configuration data for a GPFS cluster. Depending on your configuration, additional information that is set by GPFS may be displayed to assist in problem determination when contacting the IBM Support Center. If a configuration parameter is not shown in the output of this command, the default value for that parameter, as documented in the mmchconfig command, is in effect.
Parameters
NONE
Options
NONE
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmlsconfig command. You may issue the mmlsconfig command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To display the current configuration data for the GPFS cluster that you are running on, issue this command:
mmlsconfig
Configuration data for cluster cluster1.kgn.ibm.com:
----------------------------------------------------
clusterName cluster1.kgn.ibm.com
clusterId 680681562214606028
clusterType lc
autoload yes
minReleaseLevel 3.2.0.0
pagepool 512m
maxblocksize 4M
cipherList AUTHONLY

File systems in cluster cluster1.kgn.ibm.com:
---------------------------------------------
/dev/fs1
/dev/fs2
See also
mmchcluster Command on page 87
mmchconfig Command on page 90
mmcrcluster Command on page 121
Location
/usr/lpp/mmfs/bin
mmlsdisk Command
Name
mmlsdisk Displays the current configuration and state of the disks in a file system.
Synopsis
mmlsdisk Device [-d DiskName[;DiskName...]] [-e] [-L]
Or,
mmlsdisk Device [-d DiskName[;DiskName...]] {-m | -M}
Description
Use the mmlsdisk command to display the current state of the disks in the file system. The mmlsdisk command may be run against a mounted or unmounted file system.
For each disk in the list, the mmlsdisk command displays:
v disk name
v driver type
v sector size
v failure group
v whether it holds metadata
v whether it holds data
v current status:
   ready Normal status
   suspended Indicates that data is to be migrated off this disk
   being emptied Transitional status in effect while a disk deletion is pending
   replacing Transitional status in effect for old disk while replacement is pending
   replacement Transitional status in effect for new disk while replacement is pending
v availability:
   up Disk is available to GPFS for normal read and write operations
   down No read and write operations can be performed on this disk
   recovering An intermediate state for disks coming up, during which GPFS verifies and corrects data. Read operations can be performed while a disk is in this state, but write operations cannot.
   unrecovered The disk was not successfully brought up.
v disk ID
v storage pool that the disk is assigned to
Parameters
Device
   The device name of the file system to which the disks belong. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
-d DiskName[;DiskName...]
   The name of the disks for which you want to display current configuration and state information. When you enter multiple values for DiskName, you must separate them with semicolons and enclose the list in quotation marks.
"gpfs3nsd;gpfs4nsd;gpfs5nsd"
Options
-e Display all of the disks in the file system that do not have an availability of up and a status of ready. If all disks in the file system are up and ready, the message displayed is:
6027-623 All disks up and ready
-L Displays an extended list of the disk parameters, including the disk ID field and the remarks field. The remarks column shows the current file system descriptor quorum assignments, and displays the excluded disks. The remarks field contains desc for all disks assigned as the file system descriptor holders and excl for all excluded disks.
-M Displays whether I/O requests to the disk are satisfied on the local node, or using an NSD server. If the I/O is done using an NSD server, shows the NSD server name and the underlying disk name on that server node.
-m Displays whether I/O requests to the disk are satisfied on the local node, or using an NSD server. The scope of this option is the node on which the mmlsdisk command is issued.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user:
1. You may issue the mmlsdisk command from any node in the GPFS cluster.
2. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
   a. Proper authorization is granted to all nodes in the GPFS cluster.
   b. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
3. As root, you can also issue the mmlsdisk command on remote file systems.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node on which the mmlsdisk command was issued.
The mmlsdisk command does not work if GPFS is down.
Examples
1. To display the current state of gpfs2nsd, issue this command:
mmlsdisk /dev/fs0 -d gpfs2nsd
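The response for a single disk resembles the following sketch (the driver type, sector size, and failure group shown are illustrative):

disk         driver   sector failure holds    holds
name         type       size   group metadata data  status        availability
------------ -------- ------ ------- -------- ----- ------------- ------------
gpfs2nsd     nsd         512       1 yes      yes   ready         up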
2. To display the current states of gpfs2nsd, gpfs3nsd, and gpfs4nsd, and display their respective disk IDs and the descriptor quorum assignment, issue this command:
mmlsdisk /dev/fs0 -d "gpfs2nsd;gpfs3nsd;gpfs4nsd" -L
3. To display whether the I/O is performed locally or using an NSD server, the NSD server name, and the underlying disk name for the file system named test, issue this command:
mmlsdisk test -M
4. To display the same information as in the previous example, but limited to the node on which the command is issued, issue this command:
mmlsdisk test -m
See also
mmadddisk Command on page 64 mmchdisk Command on page 97 mmdeldisk Command on page 159 mmrpldisk Command on page 271
Location
/usr/lpp/mmfs/bin
mmlsfileset Command
Name
mmlsfileset Displays attributes and status for GPFS filesets.
Synopsis
mmlsfileset Device {[Fileset[,Fileset...]] [-J Junction[,Junction...]] | -F FileName} [-L] [-d] [-i]
Description
Use the mmlsfileset command to display information for the filesets that belong to a given GPFS file system. The default is to display information for all filesets in the file system. You may choose to display information for only a subset of the filesets.
The operation of the -L flag omits the attributes listed without it, namely status and junction path. In addition, if the fileset has status deleted, then -L also displays the name of the latest snapshot that includes the fileset in place of the root inode number and parent fileset identifier.
The attributes displayed are:
v Name of the fileset
v Status of the fileset (when the -L flag is omitted)
v Junction path to the fileset (when the -L flag is omitted)
v Fileset identifier (when the -L flag is included)
v Root inode number, if not deleted (when the -L flag is included)
v Parent fileset identifier, if not deleted (when the -L flag is included)
v Latest including snapshot, if deleted (when the -L flag is included)
v Creation time (when the -L flag is included)
v Number of inodes in use (when the -i flag is included)
v Data size (in KB) (when the -d flag is included)
v Comment (when the -L flag is included)
For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device
   The device name of the file system that contains the fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
Fileset
   Specifies a comma-separated list of fileset names.
-J Junction
   Specifies a comma-separated list of path names. They are not restricted to fileset junctions, but may name any file or directory within the filesets to be listed.
-F FileName
   Specifies the name of a file containing either fileset names or path names. Each line must contain a single entry. All path names must be fully-qualified.
Options
-d Display the number of blocks in use for the fileset.
-i Display the number of inodes in use for the fileset.
-L Display additional information for the fileset. This includes:
   v Fileset identifier
   v Root inode number
   v Parent identifier
   v Fileset creation time
   v User defined comments, if any
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You may issue the mmlsfileset command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. This command displays fileset information for all filesets in file system gpfs1:
mmlsfileset gpfs1
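The response resembles the following sketch (the fileset names and paths other than gpfs1 are hypothetical):

Filesets in file system 'gpfs1':
Name      Status    Path
root      Linked    /gpfs1
fset1     Linked    /gpfs1/fset1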
2. This command displays all filesets in file system gpfs1 that are listed in Filesetlist:
mmlsfileset gpfs1 -F Filesetlist
3. These commands display information for a file system with filesets and snapshots. Note that deleted filesets that are saved in snapshots are displayed with the name enclosed in parentheses.
c. mmlsfileset fs1 TestF2,TestF5 -J /gpfs/test-f4/dir1,/gpfs/test-f4/dir1/f3/dir2/...
The system displays information similar to:
Filesets in file system fs1:
Name     Status     Path
TestF2   Unlinked   --
TestF5   Linked     <TestF2>/subdir/f5
TestF4   Linked     /gpfs/test-f4
TestF3   Linked     /gpfs/test-f4/dir1/f3
See also
mmchfileset Command on page 104
mmcrfileset Command on page 125
mmdelfileset Command on page 162
mmlinkfileset Command on page 203
mmunlinkfileset Command on page 287
Location
/usr/lpp/mmfs/bin
mmlsfs Command
Name
mmlsfs Displays file system attributes.
Synopsis
| mmlsfs {Device | all | all_local | all_remote} [-A] [-a] [-B] [-D] [-d] [-E] [-F] [-f] [-I] [-i] [-j] [-k] [-K] [-L] [-M] [-m] [-n] [-o] [-P] [-Q] [-R] [-r] [-S] [-T] [-u] [-V] [-z]
Description
Use the mmlsfs command to list the attributes of a file system. Depending on your configuration, additional information that is set by GPFS may be displayed to assist in problem determination when contacting the IBM Support Center.
Results
If you do not specify any options, all attributes of the file system are displayed. When you specify options, only those attributes specified are listed, in the order issued in the command. Some parameters are preset for optimum performance and, although they display in the mmlsfs command output, you cannot change them.
Parameters
| Device | all | all_local | all_remote
|    This must be the first parameter.
|    Device
|       Indicates the device name of the file system for which information is displayed. File system names do not need to be fully qualified. fs0 is as acceptable as /dev/fs0.
|    all
|       Indicates all file systems that are known to this cluster.
|    all_local
|       Indicates all file systems that are owned by this cluster.
|    all_remote
|       Indicates all file systems that are owned by another cluster.
Options
-A Automatically mounts the file system when the GPFS daemon starts.
-a Displays the estimated average file size, in bytes.
-B Displays the size of the data block, in bytes.
-D Displays the type of file locking semantics that are in effect (nfs4 or posix).
-d Displays the names of all of the disks in the file system.
-E Displays the exact mtime values reported.
-F Displays the maximum number of files that are currently supported.
-f Displays the minimum fragment size, in bytes.
-I Displays the indirect block size, in bytes.
-i Displays the inode size, in bytes.
| -j Displays the block allocation type.
-k Displays the type of authorization supported by the file system.
| -K Displays the strict replication enforcement.
| -L Displays the internal log file size.
-M Displays the maximum number of metadata replicas.
-m Displays the default number of metadata replicas.
-n Displays the estimated number of nodes for mounting the file system.
-o Displays the additional mount options.
-P Displays the storage pools defined within the file system.
-Q Displays which quotas are currently enforced on the file system.
-R Displays the maximum number of data replicas.
-r Displays the default number of data replicas.
-S Displays whether the updating of atime is suppressed for the gpfs_stat(), gpfs_fstat(), stat(), and fstat() calls.
-T Displays the default mount point.
-u Displays whether support for large LUNs is enabled.
-V Displays the current format version of the file system.
-z Displays whether DMAPI is enabled for this file system.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user:
1. You may issue the mmlsfs command from any node in the GPFS cluster.
2. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
   a. Proper authorization is granted to all nodes in the GPFS cluster.
   b. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
3. As root, you can also issue the mmlsfs command on remote file systems.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node on which the mmlsfs command was issued.
Examples
| If you issue the mmlsfs command with no options for the file system gpfs2:
| mmlsfs gpfs2
The system displays information similar to this (output appears in the order that the options were specified in the command):
flag value              description
---- ------------------ -----------------------------------------------------
 -f  512                Minimum fragment size in bytes
 -i  512                Inode size in bytes
 -I  8192               Indirect block size in bytes
 -m  2                  Default number of metadata replicas
 -M  2                  Maximum number of metadata replicas
 -r  1                  Default number of data replicas
 -R  2                  Maximum number of data replicas
 -j  cluster            Block allocation type
 -D  posix              File locking semantics in effect
 -k  all                ACL semantics in effect
 -a  -1                 Estimated average file size
 -n  1000               Estimated number of nodes that will mount file system
 -B  16384              Block size
 -Q  user;group;fileset Quotas enforced
     none               Default quotas enabled
 -F  3000096            Maximum number of inodes
 -V  10.00 (3.2.0.0)    File system version. Highest supported version: 10.00
 -u  yes                Support for large LUNs?
 -z  no                 Is DMAPI enabled?
 -L  524288             Logfile size
 -E  yes                Exact mtime mount option
 -S  no                 Suppress atime mount option
 -K  whenpossible       Strict replica allocation option
 -P  system             Disk storage pools in file system
 -d  hd3n97;hd4n97;hd5n98;hd6n98;hd7vsdn97;hd8vsdn97  Disks in file system
 -A  no                 Automatic mount option
 -o  none               Additional mount options
 -T  /gpfs2             Default mount point
| If you issue the mmlsfs command with the all option:
| mmlsfs all -A
| The system displays information similar to:
File system attributes for /dev/fs1:
====================================
flag value          description
---- -------------- -----------------------------------------------------
 -A  yes            Automatic mount option

File system attributes for /dev/fs2:
====================================
flag value          description
---- -------------- -----------------------------------------------------
 -A  yes            Automatic mount option

File system attributes for /dev/fs3:
====================================
flag value          description
---- -------------- -----------------------------------------------------
 -A  no             Automatic mount option
| See also
mmcrfs Command on page 127
mmchfs Command on page 106
mmdelfs Command on page 165
Location
/usr/lpp/mmfs/bin
mmlsmgr Command
Name
mmlsmgr Displays which node is the file system manager for the specified file systems.
Synopsis
mmlsmgr [Device [Device...]]
Or,
| mmlsmgr -C ClusterName
Or,
| mmlsmgr -c
Description
| Use the mmlsmgr command to display which node is the file system manager or cluster manager for the file system.
If you do not provide a Device operand, file system managers for all file systems within the current cluster for which a file system manager has been appointed are displayed.
Parameters
Device
   The device names of the file systems for which the file system manager information is displayed. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. If no file system is specified, information about all file systems is displayed.
| -C ClusterName
   Displays the names of the nodes that are file system managers in cluster ClusterName.
| -c Displays the current cluster manager node.
Options
NONE
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user: 1. You may issue the mmlsmgr command from any node in the GPFS cluster.
2. When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
   a. Proper authorization is granted to all nodes in the GPFS cluster.
   b. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
3. As root, you can also issue the mmlsmgr command on remote file systems.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node on which the mmlsmgr command was issued.
Examples
1. To display the file system manager node information for all the file systems, issue this command:
mmlsmgr
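An illustrative response follows (the device names, node numbers, and node names are hypothetical):

file system      manager node
---------------- ------------------
gpfs1            2 (c5n98g)
gpfs2            1 (c5n97g)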
The output shows the device name of the file system and the file system manager's node number and name, in parentheses, as they are recorded in the GPFS cluster data.
2. To display the file system manager information for file systems gpfs2 and gpfs3, issue this command:
mmlsmgr gpfs2 gpfs3
See also
mmchmgr Command on page 110
Location
/usr/lpp/mmfs/bin
mmlsmount Command
Name
mmlsmount Lists the nodes that have a given GPFS file system mounted.
Synopsis
mmlsmount {Device | all | all_local | all_remote} [-L] [-C {all | all_remote | ClusterName[,ClusterName...]}]
Description
The mmlsmount command reports if a file system is in use at the time the command is issued. A file system is considered to be in use if it is explicitly mounted with the mount or mmmount command, or if it is mounted internally for the purposes of running some other GPFS command. For example, when you run the mmrestripefs command, the file system will be internally mounted for the duration of the command. If mmlsmount is issued in the interim, the file system will be reported as being in use by the mmlsmount command but, unless it is explicitly mounted, will not show up in the output of the mount or df commands.
Parameters
Device | all | all_local | all_remote
   This must be the first parameter.
   Device
      Indicates the device name of the file system for which information is displayed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
   all
      Indicates all file systems known to this cluster.
   all_local
      Indicates all file systems owned by this cluster.
   all_remote
      Indicates all file systems owned by another cluster.
Options
-C {all | all_remote | ClusterName[,ClusterName...]}
   Specifies the clusters for which mount information is requested. If one or more ClusterName is specified, only the names of nodes that belong to these clusters and have the file system mounted are displayed. The dot character (.) can be used in place of the cluster name to denote the local cluster. Option -C all_remote denotes all clusters other than the one from which the command was issued. Option -C all refers to all clusters, local and remote, that can have the file system mounted. Option -C all is the default.
-L Specifies to list the nodes that have the file system mounted.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user, you may issue the mmlsmount command from any node in the GPFS cluster. If you are a non-root user, you may specify only file systems that belong to the same cluster as the node on which the mmlsmount command was issued.
Examples
1. To see how many nodes have file system fs2 mounted, issue this command:
mmlsmount fs2
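A typical response is a single summary line of this shape (the node count shown is illustrative):

File system fs2 is mounted on 3 nodes.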
See also
mmmount Command on page 236
mmumount Command on page 285
Location
/usr/lpp/mmfs/bin
mmlsnsd Command
Name
mmlsnsd Displays the current Network Shared Disk (NSD) information in the GPFS cluster.
Synopsis
mmlsnsd [-a | -F | -f Device | -d DiskName[;DiskName...] ] [-L | -m | -M | -X] [-v]
Description
Use the mmlsnsd command to display the current information for the NSDs belonging to the GPFS cluster. The default is to display information for all NSDs defined to the cluster (-a). Otherwise, you may choose to display the information for a particular file system (-f) or for all disks that do not belong to any file system (-F).
Parameters
-a Display information for all of the NSDs belonging to the GPFS cluster. This is the default.
-f Device
   The device name of the file system for which you want NSD information displayed. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
-F Display the NSDs that do not belong to any file system in the GPFS cluster.
-d DiskName[;DiskName...]
   The name of the NSDs for which you want information displayed. When you enter multiple DiskNames, you must separate them with semicolons and enclose the entire string of disk names in quotation marks:
"gpfs3nsd;gpfs4nsd;gpfs5nsd"
Options
-L Displays the information in a long format that shows the NSD identifier.
-m Maps the NSD name to its disk device name in /dev on the local node and, if applicable, on the NSD server nodes.
-M Maps the NSD name to its disk device name in /dev on all nodes. This is a slow operation and its usage is suggested for problem determination only.
-v Specifies that the output should contain error information, where available.
| -X Maps the NSD name to its disk device name in /dev on the local node and, if applicable, on the NSD server nodes. The -X option also displays extended information for the NSD volume ID and information such as NSD server status and Persistent Reserve (PR) enablement in the Remarks field. Using the -X option is a slow operation and is recommended only for problem determination.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to issue the mmlsnsd command. You may issue the mmlsnsd command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To display the default information for all of the NSDs belonging to the cluster, issue this command:
mmlsnsd
2. To display all of the NSDs attached to the node from which the command is issued, issue this command:
mmlsnsd -m
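The response resembles the following sketch (the volume ID, device path, and node name in this row are hypothetical):

 Disk name    NSD volume ID      Device       Node name                Remarks
---------------------------------------------------------------------------------
 gpfs2nsd     0972846145C8E924   /dev/hdisk2  c5n97g.ppd.pok.ibm.com   server node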
3. To display all of the NSDs in the GPFS cluster in extended format, issue this command:
mmlsnsd -L

The system displays information similar to:

 File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
 fs2           hd3n97      0972846145C8E927   c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com
 fs2           hd4n97      0972846145C8E92A   c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com
 fs2           hd5n98      0972846245EB501C   c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com
 fs2           hd6n98      0972846245DB3AD8   c5n98g.ppd.pok.ibm.com,c5n97g.ppd.pok.ibm.com
 fs2           sdbnsd      0972845E45C8E8ED   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sdcnsd      0972845E45C8E8F6   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sddnsd      0972845E45F83FDB   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sdensd      0972845E45C8E909   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sdgnsd      0972845E45C8E912   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sdfnsd      0972845E45F02E81   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 fs2           sdhnsd      0972845E45C8E91C   c5n94g.ppd.pok.ibm.com,c5n96g.ppd.pok.ibm.com
 gpfs1         hd2n97      0972846145C8E924   c5n97g.ppd.pok.ibm.com,c5n98g.ppd.pok.ibm.com

4. To display extended disk information about disks hd3n97, sdfnsd, and hd5n98, issue this command:
mmlsnsd -X -d "hd3n97;sdfnsd;hd5n98"

The system displays information similar to:

 Disk name   NSD volume ID      Device        Devtype  Node name                Remarks
---------------------------------------------------------------------------------------------------
 hd3n97      0972846145C8E927   /dev/hdisk3   hdisk    c5n97g.ppd.pok.ibm.com   server node,pr=no
 hd3n97      0972846145C8E927   /dev/hdisk3   hdisk    c5n98g.ppd.pok.ibm.com   server node,pr=no
 hd5n98      0972846245EB501C   /dev/hdisk5   hdisk    c5n97g.ppd.pok.ibm.com   server node,pr=no
 hd5n98      0972846245EB501C   /dev/hdisk5   hdisk    c5n98g.ppd.pok.ibm.com   server node,pr=no
 sdfnsd      0972845E45F02E81   /dev/sdf      generic  c5n94g.ppd.pok.ibm.com   server node
 sdfnsd      0972845E45F02E81   /dev/sdm      generic  c5n96g.ppd.pok.ibm.com   server node
See also
mmcrnsd Command on page 134
mmdelnsd Command on page 170
Location
/usr/lpp/mmfs/bin
mmlspolicy Command
Name
mmlspolicy Displays policy information.
Synopsis
mmlspolicy Device [-L]
Description
The mmlspolicy command displays policy information for a given file system. This information is displayed:
v When the policy file was installed.
v The user who installed the policy file.
v The node on which the policy file was installed.
v The first line of the original policy file.
For information on GPFS policies, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device The device name of the file system for which policy information is to be displayed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-L Displays the entire original policy file. If this flag is not specified, only the first line of the original policy file is displayed.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You may issue the mmlspolicy command from any node in the GPFS cluster.
Examples
1. This command displays basic information for the policy installed for file system fs2:
mmlspolicy fs2
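The response is a short summary along these lines (a sketch only; the layout, timestamp, node name, and policy file name shown here are hypothetical):

Policy file for file system '/dev/fs2':
   Installed by root@c5n97g.ppd.pok.ibm.com on Mon Apr  2 15:57:45 2007.
   First line from original file 'policyfile':
   /* Policy file for fs2 */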
2. This command displays extended information for the policy installed for file system fs2:
mmlspolicy fs2 -L
See also
mmapplypolicy Command on page 71
mmchpolicy Command on page 119
Location
/usr/lpp/mmfs/bin
mmlsquota Command
Name
mmlsquota Displays quota information for a user, group, or fileset.
Synopsis
| mmlsquota [-u User | -g Group | -j Fileset] [-v | -q] [-e] [-C ClusterName] [Device [Device...]]
Or,
| mmlsquota -d {-u | -g | -j} [-C ClusterName] [Device [Device...]]
Description
For the specified User, Group, or Fileset the mmlsquota command displays information about quota limits and current usage on each file system in the cluster. This information is displayed only if quota limits have been established and the user has consumed some amount of storage. If you want quota information for a User, Group, or Fileset that has no file system storage allocated at the present time, you must specify the -v option.
If none of: -g, -u, or -j is specified, the default is to display only user quotas for the user who issues the command.
For each file system in the cluster, the mmlsquota command displays:
1. Block limits:
   v quota type (USR or GRP or FILESET)
   v current usage in KB
   v soft limit in KB
   v hard limit in KB
   v space in doubt
   v grace period
2. File limits:
   v current number of files
   v soft limit
   v hard limit
   v files in doubt
   v grace period
Because the sum of the in-doubt value and the current usage may not exceed the hard limit, the actual block space and number of files available to the user, group, or fileset may be constrained by the in-doubt value. If the in-doubt value approaches a significant percentage of the quota, run the mmcheckquota command to account for the lost space and files.
GPFS quota management takes replication into account when reporting on and determining if quota limits have been exceeded for both block and file usage. In a file system that has either type of replication set to a value of two, the values reported on by both the mmlsquota command and the mmrepquota command are double the value reported by the ls command.
When issuing the mmlsquota command on a mounted file system, negative in-doubt values may be reported if the quota server processes a combination of up-to-date and back-level information. This is a transient situation and may be ignored.
Parameters
-C ClusterName
   Specify the name of the cluster from which the quota information is obtained (from the file systems within that cluster). If this option is omitted, the local cluster is assumed.
| Device
   Specifies the device name of the file system for which quota information is to be displayed. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
Options
-d Display the default quota limits for user, group, or fileset quotas.
-e Specifies that mmlsquota is to collect updated quota usage data from all nodes before displaying results. If this option is not specified, there is the potential to display negative usage values as the quota server may process a combination of up-to-date and back-level information.
-g Group
   Display quota information for the user group or group ID specified in the Group parameter.
-j Fileset
   Display quota information for the named fileset.
-q Prints a terse message containing information only about file systems with usage over quota.
-u User
   Display quota information for the user name or user ID specified in the User parameter.
-v Display quota information on file systems where the User, Group or Fileset limit has been set, but the storage has not been allocated.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user, you may view quota information for all users, groups, and filesets. If you are a non-root user, you may view only fileset quota information, your own quota information, and quota information for any groups to which you belong. You must be a root user to use the -d option. GPFS must be running on the node from which the mmlsquota command is issued.
Examples
Userid paul issued this command:
mmlsquota
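Based on the values discussed in the next paragraph, the response would resemble the following (a reconstructed illustration; the column layout is approximate):

                   Block Limits                                 |    File Limits
Filesystem type   KB     quota    limit  in_doubt  grace | files  quota  limit  in_doubt  grace
fsn        USR    728   100096   200192      4880   none |    35     30     50        10  6days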
This output shows the quotas for user paul in file system fsn set to a soft limit of 100096 KB, and a hard limit of 200192 KB. 728 KB is currently allocated to paul. 4880 KB is also in doubt, meaning that the quota system has not yet been updated as to whether this space has been used by the nodes, or whether it is still available. No grace period appears because the user has not exceeded his quota. If the user had exceeded the soft limit, the grace period would be set and the user would have that amount of time to bring his usage below the quota values. If the user failed to do so, the user would not be allocated any more space. The soft limit for files (inodes) is set at 30 and the hard limit is 50. 35 files are currently allocated to this user, and the quota system does not yet know whether the 10 in doubt have been used or are still available. A grace period of six days appears because the user has exceeded his quota. The user would have this amount of time to bring his usage below the quota values. If the user fails to do so, the user is not allocated any more space.
See also
mmcheckquota Command on page 101
mmdefedquota Command on page 147
mmdefquotaoff Command on page 149
mmdefquotaon Command on page 151
mmedquota Command on page 180
mmrepquota Command on page 258
mmquotaon Command on page 250
mmquotaoff Command on page 248
Location
/usr/lpp/mmfs/bin
mmlssnapshot Command
Name
mmlssnapshot Displays GPFS snapshot information for the specified file system.
Synopsis
mmlssnapshot Device [-d] [-Q]
Description
Use the mmlssnapshot command to display GPFS snapshot information for the specified file system. You may optionally display the amount of storage used by the snapshot and if quotas were set for automatic activation upon mounting of the file system at the time the snapshot was taken.
Parameters
Device The device name of the file system for which snapshot information is to be displayed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter.
Options
-d Display the amount of storage used by the snapshot.
-Q Display whether quotas were set to be automatically activated upon mounting of the file system at the time the snapshot was taken.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
| You must be a root user to use the -d and -Q options.
| If you are a root user, you can issue the mmlssnapshot command from any node in the GPFS cluster.
| If you are a non-root user, you may only specify file systems that belong to the same cluster as the node on which the mmlssnapshot command was issued.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To display the snapshot information for the file system fs1 additionally requesting storage information, issue this command:
mmlssnapshot fs1 -d
See also
mmcrsnapshot Command on page 139
mmdelsnapshot Command on page 172
mmrestorefs Command on page 261
mmsnapdir Command on page 277
Location
/usr/lpp/mmfs/bin
mmmount Command
Name
mmmount Mounts GPFS file systems on one or more nodes in the cluster.
Synopsis
| mmmount {Device | DefaultMountPoint | all | all_local | all_remote} [-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Or,
mmmount Device MountPoint [-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Description
The mmmount command mounts the specified GPFS file system on one or more nodes in the cluster. If no nodes are specified, the file systems are mounted only on the node from which the command was issued. A file system can be specified using its device name or its default mount point, as established by the mmcrfs, mmchfs or mmremotefs commands. When all is specified in place of a file system name, all GPFS file systems will be mounted. This also includes remote GPFS file systems to which this cluster has access.
Parameters
| Device | all | all_local | all_remote
|    This must be the first parameter.
|    Device
|       The device name of the file system to be mounted. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
|    all
|       Indicates all file systems known to this cluster.
|    all_local
|       Indicates all file systems owned by this cluster.
|    all_remote
|       Indicates all file systems owned by another cluster to which this cluster has access.
DefaultMountPoint
   The mount point associated with the file system as a result of the mmcrfs, mmchfs, or mmremotefs commands.
MountPoint
   The location where the file system is to be mounted. If not specified, the file system is mounted at its default mount point. This option can be used to mount a file system at a mount point other than its default mount point.
Options
-a Mount the file system on all nodes in the GPFS cluster. -N {Node[,Node...] | NodeFile | NodeClass} Specifies the nodes on which the file system is to be mounted. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4. This command does not support a NodeClass of mount.
-o MountOptions Specifies the mount options to pass to the mount command when mounting the file system. For a detailed description of the available mount options, see GPFS-specific mount options on page 15.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmmount command. You may issue the mmmount command from any node in the GPFS cluster.
Examples
1. To mount all GPFS file systems on all of the nodes in the cluster, issue this command:
mmmount all -a
2. To mount file system fs2 read-only on the local node, issue this command:
mmmount fs2 -o ro
3. To mount file system fs1 on all NSD server nodes, issue this command:
mmmount fs1 -N nsdsnodes
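4. The second form of the command mounts a file system at an explicit mount point. For example, to mount file system fs2 at an alternate location on the local node (the mount point /gpfs2test is hypothetical), a command of this form could be used:
mmmount fs2 /gpfs2test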
See also
mmumount Command on page 285
mmlsmount Command on page 223
Location
/usr/lpp/mmfs/bin
mmnsddiscover Command
| Name
| mmnsddiscover Rediscovers paths to the specified network shared disks on the specified nodes.
| Synopsis
| mmnsddiscover [-a | -d Disk[;Disk...] | -F DiskFile] [-C ClusterName] [-N {Node[,Node...] | NodeFile | NodeClass}]
| Description
| The mmnsddiscover command is used to rediscover paths for GPFS NSDs on one or more nodes. If you do not specify a node, GPFS rediscovers NSD paths on the node from which you issued the command.
| On server nodes, mmnsddiscover causes GPFS to rediscover access to disks, thus restoring paths which may have been broken at an earlier time. On client nodes, mmnsddiscover causes GPFS to refresh its choice of which NSD server to use when an I/O operation occurs.
| Parameters
| -a Rediscovers paths for all NSDs. This is the default.
| -d DiskName[;DiskName...]
|    Specifies a list of NSDs whose paths are to be rediscovered.
| -F DiskFile
|    Specifies a file that contains the names of the NSDs whose paths are to be rediscovered.
| -C ClusterName
|    Specifies the name of the cluster to which the NSDs belong. This defaults to the local cluster if not specified.
| -N {Node[,Node...] | NodeFile | NodeClass}
|    Specifies the nodes on which the rediscovery is to be done.
|    For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
| Exit status
| 0 Successful completion.
| nonzero A failure has occurred.
| Security
| You must have root authority to run the mmnsddiscover command.
| You can issue the mmnsddiscover command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly-configured .rhosts file must exist in the home directory for the root user on each node in the GPFS cluster. If you have designated the use of a different remote communication program using the mmcrcluster command or the mmchcluster command, you must make sure that:
v Proper authorization is granted to all nodes in the GPFS cluster.
v The nodes in the GPFS cluster can communicate without the use of a password and without any extraneous messages.
| Examples
| 1. To rediscover the paths for all of the NSDs in the local cluster on the local node, issue the command:
mmnsddiscover
| The system displays output similar to:
mmnsddiscover: Attempting to rediscover the disks.
This may take a while ...
mmnsddiscover: Finished.
| 2. To rediscover the paths for all of the NSDs in the local cluster on all nodes in the local cluster, issue the command:
mmnsddiscover -a -N all
| The system displays output similar to:
mmnsddiscover: Attempting to rediscover the disks.
This may take a while ...
mmnsddiscover: Finished.
| See also
| mmchnsd Command on page 116
| mmcrnsd Command on page 134
| mmdelnsd Command on page 170
| mmlsnsd Command on page 226
| Location
| /usr/lpp/mmfs/bin
mmpmon Command
Name
mmpmon Manages performance monitoring and displays performance information.
Synopsis
mmpmon [-i CommandFile] [-d IntegerDelayValue] [-p] [-r IntegerRepeatValue] [-s] [-t IntegerTimeoutValue]
Description
Before attempting to use mmpmon, IBM suggests that you review this command entry, then read the entire chapter, Monitoring GPFS I/O performance with the mmpmon command in General Parallel File System: Advanced Administration Guide. Use the mmpmon command to manage GPFS performance monitoring functions and display performance monitoring data. The mmpmon command reads requests from an input file or standard input (stdin), and writes responses to standard output (stdout). Error messages go to standard error (stderr). Prompts, if not suppressed, go to stderr. When running mmpmon in such a way that it continually reads input from a pipe (the driving script or application never intends to send an end-of-file to mmpmon), set the -r option value to 1 (or use the default value of 1) to prevent mmpmon from caching the input records. This avoids unnecessary memory consumption.
Results
The performance monitoring request is sent to the GPFS daemon running on the same node that is running the mmpmon command. All results from the request are written to stdout. There are two output formats: v Human readable, intended for direct viewing. In this format, the results are keywords that describe the value presented, followed by the value. For example:
disks: 2
v Machine readable, an easily parsed format intended for further analysis by scripts or applications. In this format, the results are strings with values presented as keyword/value pairs. The keywords are delimited by underscores (_) and blanks to make them easier to locate. For details on how to interpret the mmpmon command results, see Monitoring GPFS I/O performance with the mmpmon command in General Parallel File System: Advanced Administration Guide.
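For instance, an io_s response in the machine-readable format is a single record of such keyword/value pairs, along these lines (an illustrative sketch based on the human-readable io_s example later in this entry; the underscore keywords are assumed and should be verified against the Advanced Administration Guide):

_io_s_ _n_ 192.168.1.8 _nn_ node1 _rc_ 0 _t_ 1093351951 _tu_ 587570 _br_ 139460608 _bw_ 139460608 _oc_ 10 _cc_ 7 _rdc_ 12885 _wc_ 133 _dir_ 0 _iu_ 14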
Parameters
-i CommandFile The input file contains mmpmon command requests, one per line. Use of the -i flag implies use of the -s flag. For interactive use, just omit the -i flag. In this case, the input is then read from stdin, allowing mmpmon to take keyboard input or output piped from a user script or application program. Leading blanks in the input file are ignored. A line beginning with a pound sign (#) is treated as a comment. Leading blanks in a line whose first non-blank character is a pound sign (#) are ignored.
Input requests to the mmpmon command are:
fs_io_s Displays I/O statistics per mounted file system
io_s Displays I/O statistics for the entire node
nlist add name[ name...] Adds node names to a list of nodes for mmpmon processing
nlist del Deletes a node list
nlist new name[ name...] Creates a new node list
nlist s Shows the contents of the current node list.
nlist sub name[ name...] Deletes node names from a list of nodes for mmpmon processing.
once request Indicates that the request is to be performed only once.
reset Resets statistics to zero.
rhist nr Changes the request histogram facility request size and latency ranges.
rhist off Disables the request histogram facility. This is the default.
rhist on Enables the request histogram facility.
rhist p Displays the request histogram facility pattern.
rhist reset Resets the request histogram facility data to zero.
rhist s Displays the request histogram facility statistics values.
ver Displays mmpmon version.
Options
-d IntegerDelayValue Specifies a number of milliseconds to sleep after one invocation of all the requests in the input file. The default value is 1000. This value must be an integer greater than or equal to 500 and less than or equal to 8000000. The input file is processed as follows: The first request is processed, it is sent to the GPFS daemon, the responses for this request are received and processed, the results for this request are displayed, and then the next request is processed and so forth. When all requests from the input file have been processed once, the mmpmon command sleeps for the specified number of milliseconds. When this time elapses, mmpmon wakes up and processes the input file again, depending on the value of the -r flag. -p Indicates to generate output that can be parsed by a script or program. If this option is not specified, human readable output is produced.
-r IntegerRepeatValue Specifies the number of times to run all the requests in the input file.
The default value is one. Specify an integer between zero and 8000000. Zero means to run forever, in which case processing continues until it is interrupted. This feature is used, for example, by a driving script or application program that repeatedly reads the result from a pipe.
The once prefix directive can be used to override the -r flag. See the description of once in Monitoring GPFS I/O performance with the mmpmon command in General Parallel File System: Advanced Administration Guide.
-s Indicates to suppress the prompt on input. Use of the -i flag implies use of the -s flag. For use in a pipe or with redirected input (<), the -s flag is preferred. If not suppressed, the prompts go to standard error (stderr).
-t IntegerTimeoutValue
   Specifies a number of seconds to wait for responses from the GPFS daemon before considering the connection to have failed. The default value is 60. This value must be an integer greater than or equal to 1 and less than or equal to 8000000.
Exit status
0 Successful completion.
1 Various errors (insufficient memory, input file not found, incorrect option, and so forth).
3 Either no commands were entered interactively, or there were no mmpmon commands in the input file. The input file was empty, or consisted of all blanks or comments.
4 mmpmon terminated due to a request that was not valid.
5 An internal error has occurred.
111 An internal error has occurred. A message will follow.
Restrictions
1. Up to five instances of mmpmon may be run on a given node concurrently. However, concurrent users may interfere with each other. See Monitoring GPFS I/O performance with the mmpmon command in General Parallel File System: Advanced Administration Guide.
2. Do not alter the input file while mmpmon is running.
3. The input file must contain valid input requests, one per line. When an incorrect request is detected by mmpmon, it issues an error message and terminates. Input requests that appear in the input file before the first incorrect request are processed by mmpmon.
Security
The mmpmon command must be run by a user with root authority, on the node for which statistics are desired.
Examples
1. Assume that infile contains these requests:
ver
io_s
fs_io_s
rhist off
Then issue this command:
mmpmon -i infile -r 10 -d 5000
The output is similar to this:
mmpmon node 192.168.1.8 name node1
mmpmon node 192.168.1.8 name node1
timestamp: 1083350358/935524
bytes read: 0
bytes written: 0
opens: 0
closes: 0
reads: 0
writes: 0
readdir: 0
inode updates: 0
mmpmon node 192.168.1.8 name node1 no file systems mounted
mmpmon node 192.168.1.8 name node1
The requests in the input file are run 10 times, with a delay of 5000 milliseconds (5 seconds) between invocations.
2. Here is the previous example with the -p flag:
mmpmon -i infile -p -r 10 -d 5000
This output consists of two strings.
5. This is an example of io_s with a mounted file system:
mmpmon node 198.168.1.8 name node1 io_s OK
timestamp: 1093351951/587570
bytes read: 139460608
bytes written: 139460608
opens: 10
closes: 7
reads: 12885
writes: 133
readdir: 0
inode updates: 14
This output consists of one string. For several more examples, see Monitoring GPFS I/O performance with the mmpmon command in General Parallel File System: Advanced Administration Guide.
Location
/usr/lpp/mmfs/bin
mmputacl Command
Name
mmputacl Sets the GPFS access control list for the specified file or directory.
Synopsis
mmputacl [-d] [-i InFilename] Filename
Description
Use the mmputacl command to set the ACL of a file or directory. If the -i option is not used, the command expects the input to be supplied through standard input, and waits for your response to the prompt. For information about NFS V4 ACLs, see Chapter 6, Managing GPFS access control lists and NFS export, on page 47. Any output from the mmgetacl command can be used as input to mmputacl. The command is extended to support NFS V4 ACLs. In the case of NFS V4 ACLs, there is no concept of a default ACL. Instead, there is a single ACL and the individual access control entries can be flagged as being inherited (either by files, directories, both, or neither). Consequently, specifying the -d flag for an NFS V4 ACL is an error. By its nature, storing an NFS V4 ACL implies changing the inheritable entries (the GPFS default ACL) as well. Table 7 describes how mmputacl works.
Table 7. The mmputacl command for POSIX and NFS V4 ACLs

Command       POSIX ACL                               NFS V4 ACL
mmputacl      Access ACL (Error if default ACL is     Stores the ACL (implies default as well)
              NFS V4 [1])
mmputacl -d   Default ACL (Error if access ACL is     Error: NFS V4 ACL (has no default ACL)
              NFS V4 [1])
[1] The default and access ACLs are not permitted to be mixed types because NFS V4 ACLs include inherited entries, which are the equivalent of a default ACL. An mmdelacl of the NFS V4 ACL is required before an ACL is converted back to POSIX.
Depending on the file system's -k setting (posix, nfs4, or all), mmputacl may be restricted. The mmputacl command is not allowed to store an NFS V4 ACL if -k posix is in effect. The mmputacl command is not allowed to store a POSIX ACL if -k nfs4 is in effect. For more information, see the description of the -k flag for the mmchfs, mmcrfs, and mmlsfs commands.
Note that the test to see if the given ACL is acceptable based on the file system's -k setting cannot be done until after the ACL is provided. For example, if mmputacl file1 is issued (no -i flag specified) the user then has to input the ACL before the command can verify that it is an appropriate ACL given the file system settings. Likewise, the command mmputacl -d dir1 (again the ACL was not given with the -i flag) requires that the ACL be entered before file system ACL settings can be tested. In this situation, the -i flag may be preferable to manually entering a long ACL, only to find out it is not allowed by the file system.
Parameters
Filename The path name of the file or directory for which the ACL is to be set. If the -d option is specified, Filename must be the name of a directory.
245
Options
-d Specifies that the default ACL of a directory is to be set. This flag cannot be used on an NFS V4 ACL.
-i InFilename The path name of a source file from which the ACL is to be read.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You may issue the mmputacl command only from a node in the GPFS cluster where the file system is mounted. You must be the file or directory owner, the root user, or someone with control permission in the ACL, to run the mmputacl command.
Examples
To use the entries in a file named standard.acl to set the ACL for a file named project2.history, issue this command:
mmputacl -i standard.acl project2.history
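The input file uses the same format that the mmgetacl command produces. A hypothetical standard.acl for a traditional (POSIX-style) GPFS ACL might contain entries such as the following (the user and group names are illustrative):

#owner:paul
#group:system
user::rwxc
group::r-x-
other::--x-
mask::r-xc
user:alpha:r-xc
group:audit:r-x-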
See also
mmeditacl Command on page 177
mmdelacl Command on page 157
mmgetacl Command on page 194
Location
/usr/lpp/mmfs/bin
mmquotaoff Command
Name
mmquotaoff Deactivates quota limit checking.
Synopsis
mmquotaoff [-u] [-g] [-j] [-v] {Device [Device ...] | -a}
Description
The mmquotaoff command disables quota limit checking by GPFS. If none of: -u, -j or -g is specified, the mmquotaoff command deactivates quota limit checking for users, groups, and filesets. If the -a option is not specified, Device must be the last parameter entered.
Parameters
Device[ Device ... ] The device name of the file system to have quota limit checking deactivated. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a Deactivates quota limit checking for all GPFS file systems in the cluster. When used in combination with the -g option, only group quota limit checking is deactivated. When used in combination with the -u or -j options, only user or fileset quota limit checking, respectively, is deactivated.
-g Specifies that only group quota limit checking is to be deactivated.
-j Specifies that only quota checking for filesets is to be deactivated.
-u Specifies that only user quota limit checking is to be deactivated.
-v Prints a message for each file system in which quotas are deactivated.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmquotaoff command. GPFS must be running on the node from which the mmquotaoff command is issued.
Examples
1. To deactivate user quota limit checking on file system fs0, issue this command:
mmquotaoff -u fs0
To confirm the change, issue this command:
mmlsfs fs0 -Q
2. To deactivate group quota limit checking on all file systems, issue this command:
mmquotaoff -g -a
To confirm the change, individually for each file system, issue this command:
mmlsfs fs2 -Q
3. To deactivate all quota limit checking on file system fs0, issue this command:
mmquotaoff fs0
See also
mmcheckquota Command on page 101
mmdefedquota Command on page 147
mmdefquotaoff Command on page 149
mmdefquotaon Command on page 151
mmedquota Command on page 180
mmlsquota Command on page 231
mmquotaon Command on page 250
mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmquotaon Command
Name
mmquotaon Activates quota limit checking.
Synopsis
mmquotaon [-u] [-g] [-j] [-v] {Device [Device...] | -a}
Description
The mmquotaon command enables quota limit checking by GPFS. If none of: -u, -j or -g is specified, the mmquotaon command activates quota limit checking for users, groups, and filesets. If the -a option is not used, Device must be the last parameter specified. After quota limit checking has been activated by issuing the mmquotaon command, issue the mmcheckquota command to count inode and space usage.
Parameters
Device[ Device ... ] The device name of the file system to have quota limit checking activated. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a Activates quota limit checking for all of the GPFS file systems in the cluster. When used in combination with the -g option, only group quota limit checking is activated. When used in combination with the -u or -j option, only user or fileset quota limit checking, respectively, is activated.
-g Specifies that only group quota limit checking is to be activated.
-j Specifies that only fileset quota checking is to be activated.
-u Specifies that only user quota limit checking is to be activated.
-v Prints a message for each file system in which quota limit checking is activated.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmquotaon command. GPFS must be running on the node from which the mmquotaon command is issued.
Examples
1. To activate user quotas on file system fs0, issue this command:
mmquotaon -u fs0
2. To activate group quota limit checking on all file systems, issue this command:
mmquotaon -g -a
To confirm the change, individually for each file system, issue this command:
mmlsfs fs1 -Q
3. To activate user, group, and fileset quota limit checking on file system fs2, issue this command:
mmquotaon fs2
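4. As the Description notes, the mmcheckquota command should be issued after quota limit checking has been activated, to count inode and space usage; an illustrative follow-up for the same file system:
mmcheckquota fs2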
See also
mmcheckquota Command on page 101 mmdefedquota Command on page 147 mmdefquotaoff Command on page 149 mmdefquotaon Command on page 151 mmedquota Command on page 180 mmlsquota Command on page 231 mmquotaoff Command on page 248 mmrepquota Command on page 258
Location
/usr/lpp/mmfs/bin
mmremotecluster Command
Name
mmremotecluster Manages the information about other GPFS clusters that this cluster can access when mounting remote GPFS file systems.
Synopsis
mmremotecluster add RemoteClusterName [-n ContactNodes] [-k KeyFile]
Or,
mmremotecluster update RemoteClusterName [-C NewClusterName] [-n ContactNodes] [-k KeyFile]
Or,
mmremotecluster delete {RemoteClusterName | all}
Or,
mmremotecluster show [RemoteClusterName | all]
Description
The mmremotecluster command is used to make remote GPFS clusters known to the local cluster, and to maintain the attributes associated with those remote clusters. The keyword appearing after mmremotecluster determines which action is performed:
add Adds a remote GPFS cluster to the set of remote clusters known to the local cluster.
delete Deletes the information for a remote GPFS cluster.
show Displays information about a remote GPFS cluster.
update Updates the attributes of a remote GPFS cluster.
To be able to mount file systems that belong to some other GPFS cluster, you must first make the nodes in this cluster aware of the GPFS cluster that owns those file systems. This is accomplished with the mmremotecluster add command. The information that the command requires must be provided to you by the administrator of the remote GPFS cluster. You will need this information:
v The name of the remote cluster.
v The names or IP addresses of a few nodes that belong to the remote GPFS cluster.
v The public key file generated by the administrator of the remote cluster by running the mmauth genkey command for the remote cluster.
Since each cluster is managed independently, there is no automatic coordination and propagation of changes between clusters like there is between the nodes within a cluster. This means that once a remote cluster is defined with the mmremotecluster command, the information about that cluster is automatically propagated across all nodes that belong to this cluster. But if the administrator of the remote cluster decides to rename it, deletes some or all of the contact nodes, or changes the public key file, the information in this cluster becomes obsolete. It is the responsibility of the administrator of the remote GPFS cluster to notify you of such changes so that you can update your information using the appropriate options of the mmremotecluster update command.
Parameters
RemoteClusterName Specifies the cluster name associated with the remote cluster that owns the remote GPFS file system. The value all indicates all remote clusters defined to this cluster, when using the mmremotecluster delete or mmremotecluster show commands.
-C NewClusterName Specifies the new cluster name to be associated with the remote cluster.
-k KeyFile Specifies the name of the public key file provided to you by the administrator of the remote GPFS cluster.
-n ContactNodes A comma-separated list of nodes that belong to the remote GPFS cluster, in this format:
[tcpPort=NNNN,]node1[,node2 ...]
where:
tcpPort=NNNN Specifies the TCP port number to be used by the local GPFS daemon when contacting the remote cluster. If not specified, GPFS will use the default TCP port number 1191.
node1[,node2...] Specifies a list of nodes that belong to the remote cluster. The nodes can be identified through their host names or IP addresses.
Options
None.
Exit status
0 Successful completion. After successful completion of the mmremotecluster command, the new configuration information is propagated to all nodes in the cluster.
nonzero A failure has occurred.
Security
You must have root authority to run the mmremotecluster command. You may issue the mmremotecluster command from any node in the GPFS cluster.
Examples
1. This command adds remote cluster k164.kgn.ibm.com to the set of remote clusters known to the local cluster, specifying k164n02 and k164n03 as remote contact nodes. File k164.id_rsa.pub is the name of the public key file provided to you by the administrator of the remote cluster.
mmremotecluster add k164.kgn.ibm.com -n k164n02,k164n03 -k k164.id_rsa.pub
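2. This command displays the information stored for the remote cluster just added; the output, not reproduced in this excerpt, includes the contact nodes and the SHA digest of the remote cluster's public key:
mmremotecluster show k164.kgn.ibm.com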
For more information on the SHA digest, see General Parallel File System: Problem Determination Guide and search on SHA digest.
3. This command updates information for the remote cluster k164.kgn.ibm.com, changing the remote contact nodes to k164n02 and k164n01. The TCP port to be used when contacting cluster k164.kgn.ibm.com is defined to be 6667.
mmremotecluster update k164.kgn.ibm.com -n tcpPort=6667,k164n02,k164n01
The mmremotecluster show command can then be used to see the changes.
mmremotecluster show k164.kgn.ibm.com
For more information on the SHA digest, see General Parallel File System: Problem Determination Guide and search on SHA digest.
4. This command deletes information for remote cluster k164.kgn.ibm.com from the local cluster.
mmremotecluster delete k164.kgn.ibm.com
See also
mmauth Command on page 76 mmremotefs Command on page 255 Accessing GPFS file systems from other GPFS clusters in General Parallel File System: Advanced Administration Guide.
Location
/usr/lpp/mmfs/bin
mmremotefs Command
Name
mmremotefs Manages the information about GPFS file systems from other clusters that this cluster can mount.
Synopsis
mmremotefs add Device -f RemoteDevice -C RemoteClusterName [-T MountPoint] [-A {yes | no | automount}] [-o MountOptions]
Or,
mmremotefs delete {Device | all | -C RemoteClusterName}
Or,
mmremotefs show [Device | all | -C RemoteClusterName]
Or,
mmremotefs update Device [-f RemoteDevice] [-C RemoteClusterName] [-T MountPoint] [-A {yes | no | automount}] [-o MountOptions]
Description
The mmremotefs command is used to make GPFS file systems that belong to other GPFS clusters known to the nodes in this cluster, and to maintain the attributes associated with these file systems. The keyword appearing after mmremotefs determines which action is performed:
add Define a new remote GPFS file system.
delete Delete the information for a remote GPFS file system.
show Display the information associated with a remote GPFS file system.
update Update the information associated with a remote GPFS file system.
Use the mmremotefs command to make the nodes in this cluster aware of file systems that belong to other GPFS clusters. The cluster that owns the given file system must have already been defined with the mmremotecluster command. The mmremotefs command is used to assign a local name under which the remote file system will be known in this cluster, the mount point where the file system is to be mounted in this cluster, and any local mount options that you may want.
Once a remote file system has been successfully defined and a local device name associated with it, you can issue normal commands using that local name, the same way you would issue them for file systems that are owned by this cluster.
Parameters
Device Specifies the name by which the remote GPFS file system will be known in the cluster.
-C RemoteClusterName Specifies the name of the GPFS cluster that owns the remote GPFS file system.
-f RemoteDevice Specifies the actual name of the remote GPFS file system. This is the device name of the file system as known to the remote cluster that owns the file system.
Options
-A {yes | no | automount} Indicates when the file system is to be mounted:
yes When the GPFS daemon starts.
no Manual mount. This is the default.
automount When the file system is first accessed.
-o MountOptions Specifies the mount options to pass to the mount command when mounting the file system. For a detailed description of the available mount options, see GPFS-specific mount options on page 15.
-T MountPoint The local mount point directory of the remote GPFS file system. If it is not specified, the mount point will be set to DefaultMountDir/Device. The default value for DefaultMountDir is /gpfs, but it can be changed with the mmchconfig command.
Exit status
0 Successful completion. After successful completion of the mmremotefs command, the new configuration information is propagated to all nodes in the cluster.
nonzero A failure has occurred.
Security
You must have root authority to run the mmremotefs command. You may issue the mmremotefs command from any node in the GPFS cluster.
Examples
This command adds remote file system gpfsn, owned by remote cluster k164.kgn.ibm.com, to the local cluster, assigning rgpfsn as the local name for the file system, and /gpfs/rgpfsn as the local mount point.
mmremotefs add rgpfsn -f gpfsn -C k164.kgn.ibm.com -T /gpfs/rgpfsn
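Once defined, the remote file system can be administered and mounted under its local name like a locally owned file system; for example, as an illustrative follow-up, it could be mounted with:
mmmount rgpfsn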
See also
mmauth Command on page 76 mmremotecluster Command on page 252 Accessing GPFS file systems from other GPFS clusters in General Parallel File System: Advanced Administration Guide.
Location
/usr/lpp/mmfs/bin
mmrepquota Command
Name
mmrepquota Reports file system user, group, and fileset quotas.
Synopsis
mmrepquota [-e] [-g] [-q] [-u] [-n] [-v] [-j] {Device [Device...] | -a}
Description
The mmrepquota command reports file system usage and quota information for a user, group, or fileset. If none of -g, -j, or -u is specified, then user, group, and fileset quotas are listed. If the -a option is not specified, Device must be the last parameter entered.
For each file system in the cluster, the mmrepquota command displays:
1. Block limits:
v quota type (USR, GRP or FILESET)
v current usage in KB
v soft limit in KB
v hard limit in KB
v space in doubt
v grace period
2. File limits:
v current number of files
v soft limit
v hard limit
v files in doubt
v grace period
3. Entry type:
default on Default quotas are enabled for this file system.
default off Default quotas are not enabled for this file system.
e Explicit quotas; the quota limits have been explicitly set using the mmedquota command.
d Default quotas; the quota limits are the default values set using the mmdefedquota command.
i Initial quotas; default quotas were not enabled when this initial entry was established. Initial quota limits have a value of zero indicating no limit.
Because the sum of the in-doubt value and the current usage may not exceed the hard limit, the actual block space and number of files available to the user, group, or fileset may be constrained by the in-doubt value. If the in-doubt value approaches a significant percentage of the quota, run the mmcheckquota command to account for the lost space and files.
GPFS quota management takes replication into account when reporting on and determining if quota limits have been exceeded for both block and file usage. In a file system that has either type of replication set to a value of two, the values reported on by both the mmlsquota command and the mmrepquota command are double the value reported by the ls command. When issuing the mmrepquota command on a mounted file system, negative in-doubt values may be reported if the quota server processes a combination of up-to-date and back-level information. This is a transient situation and may be ignored.
Parameters
Device[ Device...] The device name of the file system to be listed. If more than one file system is listed, the names must be delimited by a space. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a Lists quotas for all file systems in the cluster. A header line is printed automatically with this option.
-e Specifies that the mmrepquota command is to collect updated quota usage data from all nodes before displaying results. If this option is not specified, there is the potential to display negative usage values as the quota server may process a combination of up-to-date and back-level information.
-g List only group quotas.
-j List only fileset quotas.
-n Displays a numerical user ID.
-q Show whether quota enforcement is active.
-u List only user quotas.
-v Print a header line.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmrepquota command. GPFS must be running on the node from which the mmrepquota command is issued.
Examples
1. To report on user quotas for file system fs2 and display a header line, issue this command:
mmrepquota -u -v fs2
The system displays information similar to the following (quota type, block usage, soft and hard limits in KB, space in doubt, and grace period, followed by file count, soft and hard limits):
USR  2016   256   512     0  6days   |  7  10  20
USR   104   256   512     0  none    |  1  10  20
USR     0   256   512     0  none    |  0  10  20
USR   368   256   512     0  23hours |  5   4  10
USR     0   256   512     0  none    |  0  10  20
USR  1024  1024  5120  4096  none    |  1   0   0
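2. As an illustrative combination of the options described above, whether quota enforcement is currently active can be checked for every file system in the cluster with:
mmrepquota -q -a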
See also
mmcheckquota Command on page 101 mmdefedquota Command on page 147 mmdefquotaoff Command on page 149 mmdefquotaon Command on page 151 mmedquota Command on page 180 mmlsquota Command on page 231 mmquotaoff Command on page 248 mmquotaon Command on page 250
Location
/usr/lpp/mmfs/bin
mmrestorefs Command
Name
mmrestorefs Restores a file system from a GPFS snapshot.
Synopsis
mmrestorefs Device Directory [-c]
Description
Use the mmrestorefs command to restore user data and attribute files to a file system using those of the specified snapshot.
Prior to issuing the mmrestorefs command, you must unmount the file system from all nodes in the cluster. The file system may not be remounted until the mmrestorefs command has successfully completed, unless you have specified the -c option to force the restore to continue even in the event errors are encountered.
Automatic quota activation upon mounting the file system is not restored by the mmrestorefs command. You must issue the mmchfs -Q yes command to restore automatic quota activation.
Snapshots are not affected by the mmrestorefs command. Consequently, a failure while restoring one snapshot may possibly be recovered by restoring a different snapshot.
When the mmsnapdir -a option (add a snapshots subdirectory to all subdirectories in the file system) is in effect, the snapshots subdirectories may no longer show the complete list of snapshots containing the parent directory, if the file system was restored from a snapshot that was not the latest. Since the root directory is contained in all snapshots, its snapshots subdirectory will always show the complete list of snapshots.
For information on how GPFS policies and snapshots interact, see Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Because snapshots are not copies of the entire file system, they should not be used as protection against media failures. For protection against media failures, see General Parallel File System: Concepts, Planning, and Installation Guide and search on recoverability considerations.
Parameters
Device The device name of the file system to be restored. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter.
Directory The snapshot with which to restore the file system.
Options
-c Continue to restore the file system in the event errors occur. Upon completion of the mmrestorefs -c command, the file system is inconsistent, but can be mounted to recover data from the snapshot. If necessary, the command may be issued again to recover as much data as possible. The mmfsck command may be run on an inconsistent file system; after the mmrestorefs -c command has been issued, use the mmfsck command to clean up the files or directories that could not be restored.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmrestorefs command. You may issue the mmrestorefs command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
We have a directory structure similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
The directory userA and its files are then deleted, and the directory userB is created using the inode originally assigned to userA. We take another snapshot:
mmcrsnapshot fs1 snap2
After the command is issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userB/file2b
/fs1/userB/file3b
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userB/file2b
/fs1/.snapshots/snap2/userB/file3b
The file system is then restored from snap1:
mmrestorefs fs1 snap1
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userB/file2b
/fs1/.snapshots/snap2/userB/file3b
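If automatic quota activation was in effect before the restore, re-enable it afterward as the Description requires; an illustrative follow-up for this example:
mmchfs fs1 -Q yes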
See also
mmcrsnapshot Command on page 139 mmdelsnapshot Command on page 172 mmlssnapshot Command on page 234 mmsnapdir Command on page 277
Location
/usr/lpp/mmfs/bin
mmrestripefile Command
Name
mmrestripefile - Performs a repair operation over the specified list of files.
Synopsis
mmrestripefile {-b | -m | -p | -r} {-F FilenameFile | Filename [Filename...]}
Description
The mmrestripefile command performs a repair operation over the specified list of files. The -F flag allows the user to specify a file containing the list of file names to be restriped, with one file name per line. The mmrestripefile command attempts to restore the metadata or data replication factor of the file. You must specify one of the four options (-b, -m, -p, or -r) to indicate how much file data to move. If you do not use replication, the -m and -r options are equivalent. Their behavior differs only on replicated files. After a successful replicate (-r option), all suspended disks are empty. A migrate operation, using the -m option, leaves data on a suspended disk as long as at least one other replica of the data remains on a disk that is not suspended. Restriping a file system includes replicating it. The -b option performs all the operations of the -m and -r options.
Parameters
-F FilenameFile Specifies a file that contains a list of names of files to be restriped, one name per line.
Filename Specifies the names of one or more files to be restriped.
Options
-b Rebalances a list of files across all disks that are not suspended, even if they are stopped. Although blocks are allocated on a stopped disk, they are not written to a stopped disk, nor are reads allowed from a stopped disk, until that disk is started and replicated data is copied onto it.
-m Migrates critical data off of any suspended disk in this file system. Critical data is all data that would be lost if currently suspended disks were removed.
-p Directs mmrestripefile to repair the file placement within the storage pool. Files assigned to one storage pool, but with data in a different pool, will have their data migrated to the correct pool. These files are called ill-placed. Utilities, such as the mmchattr command, may change a file's storage pool assignment, but not move the data. The mmrestripefile command may then be invoked to migrate all of the data at once, rather than migrating each file individually. Note that the rebalance operation, specified by the -b option, also performs data placement on all files, whereas the placement option, specified by -p, rebalances only the files that it moves.
-r Migrates all data for a list of files off of suspended disks. It also restores all specified replicated files in the file system to their designated degree of replication when a previous disk failure or removal of a disk has made some replicated data inaccessible. Use this option either immediately after a disk failure, to protect replicated data against a subsequent failure, or before taking a disk offline for maintenance, to protect replicated data against failure of another disk during the maintenance process.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmrestripefile command. You may issue the mmrestripefile command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
This example illustrates restriping a file named testfile0. This command confirms that testfile0 is ill-placed:
mmlsattr -L testfile0
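The corrective command itself is not reproduced in this excerpt; based on the -p option described above, the ill-placed file would then be repaired with a command of this form:
mmrestripefile -p testfile0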
See also
mmadddisk Command on page 64 mmapplypolicy Command on page 71 mmchattr Command on page 84 mmchdisk Command on page 97 mmdeldisk Command on page 159 mmrpldisk Command on page 271 mmrestripefs Command on page 267
Location
/usr/lpp/mmfs/bin
mmrestripefs Command
Name
mmrestripefs - Rebalances or restores the replication of all files in a file system.
Synopsis
mmrestripefs Device {-b | -m | -p | -r | -R} [-N {Node[,Node...] | NodeFile | NodeClass}] [-P PoolName]
Description
Use the mmrestripefs command to rebalance or restore the replication of all files in a file system. The mmrestripefs command moves existing file system data between different disks in the file system based on changes to the disk state made by the mmchdisk, mmadddisk, and mmdeldisk commands.
The mmrestripefs command attempts to restore the metadata or data replication of any file in the file system.
You must specify one of the five options (-b, -m, -p, -r, or -R) to indicate how much file system data to move. You can issue this command against a mounted or unmounted file system.
If you do not use replication, the -m and -r options are equivalent. Their behavior differs only on replicated files. After a successful replicate (-r option), all suspended disks are empty. A migrate operation, using the -m option, leaves data on a suspended disk as long as at least one other replica of the data remains on a disk that is not suspended. Restriping a file system includes replicating it. The -b option performs all the operations of the -m and -r options.
Consider the necessity of restriping and the current demands on the system. New data that is added to the file system is correctly striped. Restriping a large file system requires a large number of insert and delete operations and may affect system performance. Plan to perform this task when system demand is low.
Parameters
Device The device name of the file system to be restriped. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter. -N {Node[,Node...] | NodeFile | NodeClass} Specify the nodes that participate in the restripe of the file system. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the restripe of the file system). For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-b Rebalances all files across all disks that are not suspended, even if they are stopped. Although blocks are allocated on a stopped disk, they are not written to a stopped disk, nor are reads allowed from a stopped disk, until that disk is started and replicated data is copied onto it. The mmrestripefs command rebalances and restripes the file system. Use this option to rebalance the file system after adding, changing, or deleting disks in a file system.
Note: Rebalancing of files is an I/O intensive and time consuming operation, and is important only for file systems with large files that are mostly invariant. In many cases, normal file update and creation will rebalance your file system over time, without the cost of the rebalancing.
-m Migrates all critical data off of any suspended disk in this file system. Critical data is all data that would be lost if currently suspended disks were removed.
-P PoolName Directs mmrestripefs to repair only files assigned to the specified storage pool.
-p Directs mmrestripefs to repair the file placement within the storage pool. Files assigned to one storage pool, but with data in a different pool, will have their data migrated to the correct pool. Such files are referred to as ill-placed. Utilities, such as the mmchattr command, may change a file's storage pool assignment, but not move the data. The mmrestripefs command may then be invoked to migrate all of the data at once, rather than migrating each file individually. Note that the rebalance operation, specified by the -b option, also performs data placement on all files, whereas the placement option, specified by -p, rebalances only the files that it moves.
-r Migrates all data off suspended disks. It also restores all replicated files in the file system to their designated degree of replication when a previous disk failure or removal of a disk has made some replica data inaccessible. Use this parameter either immediately after a disk failure to protect replicated data against a subsequent failure, or before taking a disk offline for maintenance to protect replicated data against failure of another disk during the maintenance process.
-R Changes the replication settings of each file, directory, and system metadata object so that they match the default file system settings (see the mmchfs Command on page 106, -m and -r options) as long as the maximum (-M and -R) settings for the object allow it. Next, it replicates or unreplicates the object as needed to match the new settings. This option can be used to replicate all of the existing files that had not been previously replicated or to unreplicate the files if replication is no longer needed or wanted.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmrestripefs command. You may issue the mmrestripefs command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To move all critical data from any suspended disk in file system fs0, issue this command:
mmrestripefs fs0 -m
The system displays information similar to:
Scanning file system metadata, phase 1 ...
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
6 % complete on Fri Feb 10 15:45:07 2006
45 % complete on Fri Feb 10 15:48:03 2006
78 % complete on Fri Feb 10 15:49:28 2006
85 % complete on Fri Feb 10 15:49:53 2006
100 % complete on Fri Feb 10 15:53:21 2006
Scan completed successfully.
2. To rebalance all files in file system fs1 across all defined, accessible disks that are not stopped or suspended, issue this command:
mmrestripefs fs1 -b
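3. As an illustrative addition based on the -R option described above, after changing the file system's default replication settings with the mmchfs command, existing files can be brought in line with the new defaults by issuing:
mmrestripefs fs1 -R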
See also
mmadddisk Command on page 64 mmapplypolicy Command on page 71 mmchattr Command on page 84 mmchdisk Command on page 97 mmchfs Command on page 106 mmdeldisk Command on page 159 mmrpldisk Command on page 271 mmrestripefile Command on page 264
Location
/usr/lpp/mmfs/bin
mmrpldisk Command
Name
mmrpldisk Replaces the specified disk.
Synopsis
mmrpldisk Device DiskName {DiskDesc | -F DescFile} [-v {yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmrpldisk command to replace an existing disk in the GPFS file system with a new one. All data on the old disk is migrated to the new one.
To replace disks in a GPFS file system, you must first decide if you will:
1. Create new disks using the mmcrnsd command. Use the rewritten disk descriptor file produced by the mmcrnsd command or create a new disk descriptor. When using the rewritten file, the Disk Usage and Failure Group specifications remain the same as specified on the mmcrnsd command.
2. Select disks no longer in use in any file system. Issue the mmlsnsd -F command to display the available disks. The disk may then be used to replace a disk in the file system using the mmrpldisk command.
Note:
1. You cannot replace a disk when it is the only remaining disk in the file system.
2. Under no circumstances should you replace a stopped disk. You need to start a stopped disk before replacing it. If a disk cannot be started, you must delete it using the mmdeldisk command. See the General Parallel File System: Problem Determination Guide and search for Disk media failure.
3. The file system need not be unmounted before the mmrpldisk command can be run.
Results
Upon successful completion of the mmrpldisk command, the disk is replaced in the file system and data is copied to the new disk without restriping.
Parameters
Device The device name of the file system where the disk is to be replaced. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
DiskName The name of the disk to be replaced, which was previously passed to the mmcrfs, mmadddisk, or mmrpldisk commands. You can display the entire list of disk names by issuing the mmlsdisk command.
DiskDesc A descriptor for the replacement disk.
-F DescFile Specifies a file containing the disk descriptor for the replacement disk.
The disk descriptor must be specified in the form (second, third, and sixth fields reserved):
DiskName:::DiskUsage:FailureGroup:::
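As an illustrative instance of this format, a descriptor that names a hypothetical NSD hd16vsdn10 as a dataAndMetadata disk in failure group 4, leaving the reserved fields empty, might look like this (the fields are explained below):
hd16vsdn10:::dataAndMetadata:4:::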
DiskName You must specify the name of the NSD previously created by the mmcrnsd command. For a list of available disks, issue the mmlsnsd -F command.
DiskUsage Specify a disk usage or inherit the disk usage of the disk being replaced:
dataAndMetadata Indicates that the disk contains both data and metadata.
dataOnly Indicates that the disk contains data and does not contain metadata.
metadataOnly Indicates that the disk contains metadata and does not contain data.
descOnly Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on Synchronous mirroring utilizing GPFS replication.
FailureGroup A number identifying the failure group to which this disk belongs. You can specify any value from -1 (where -1 indicates that the disk has no point of failure in common with any other disk) to 4000. If you do not specify a failure group, the new disk inherits the failure group of the disk being replaced.
Note: While it is not absolutely necessary to specify the same disk descriptor parameters for the new disk as the old disk, it is suggested you do so. If the new disk is equivalent in size to the old disk, and if the DiskUsage and FailureGroup parameters are the same, the data and metadata can be completely migrated from the old disk to the new disk. A disk replacement in this manner allows the file system to maintain its current data and metadata balance. If the new disk has a different size, DiskUsage parameter, or FailureGroup parameter, the operation may leave the file system unbalanced and require a restripe. Additionally, a change in size or the DiskUsage parameter may cause the operation to fail since other disks in the file system may not have sufficient space to absorb more data or metadata. In this case you must first use the mmadddisk command to add the new disk, the mmdeldisk command to delete the old disk, and finally the mmrestripefs command to rebalance the file system.
-N {Node[,Node...] | NodeFile | NodeClass} Specify the nodes that participate in the migration of data from the old to the new disk. This command supports all defined node classes. The default is all (all nodes in the GPFS cluster will participate in the restripe of the file system). For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
-v {yes | no} Verify that specified disks do not belong to an existing file system. The default is -v yes. Specify -v no only when you want to reuse disks that are no longer needed for an existing file system. If the command is interrupted for any reason, you must use the -v no option on the next invocation of the command.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmrpldisk command. You may issue the mmrpldisk command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To replace disk hd27n01 in fs1 with a new disk, hd16vsdn10, allowing the disk usage and failure group parameters to default to the corresponding values of hd27n01, and to have only nodes c154n01, c154n02, and c154n09 participate in the migration of the data, issue this command:
mmrpldisk fs1 hd27n01 hd16vsdn10 -N c154n01,c154n02,c154n09
The system displays information similar to:
100 % complete on Wed May 16 16:38:32 2007
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scanning file system metadata for fs1sp1 storage pool
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
3 % complete on Wed May 16 16:38:38 2007
25 % complete on Wed May 16 16:38:47 2007
53 % complete on Wed May 16 16:38:53 2007
87 % complete on Wed May 16 16:38:59 2007
97 % complete on Wed May 16 16:39:06 2007
100 % complete on Wed May 16 16:39:07 2007
Scan completed successfully.
Done
mmrpldisk: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
See also
mmadddisk Command on page 64 mmchdisk Command on page 97 mmcrnsd Command on page 134 mmlsdisk Command on page 211 mmlsnsd Command on page 226 mmrestripefs Command on page 267
Location
/usr/lpp/mmfs/bin
mmshutdown Command
Name
mmshutdown Unmounts all GPFS file systems and stops GPFS on one or more nodes.
Synopsis
mmshutdown [-t UnmountTimeout ] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Description
Use the mmshutdown command to stop the GPFS daemons on one or more nodes. If no operand is specified, GPFS is stopped only on the node from which the command was issued. The mmshutdown command first attempts to unmount all GPFS file systems. If the unmount does not complete within the specified timeout period, the GPFS daemons shut down anyway.
Results
Upon successful completion of the mmshutdown command, these tasks are completed: v GPFS file systems are unmounted. v GPFS daemons are stopped.
Parameters
-a Stop GPFS on all nodes in a GPFS cluster.
-N {Node[,Node...] | NodeFile | NodeClass} Directs the mmshutdown command to process a set of nodes. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4. This command does not support a NodeClass of mount.
Options
-t UnmountTimeout The maximum amount of time, in seconds, that the unmount command is given to complete. The default timeout period is equal to:
60 + 3 × (number of nodes)
If the unmount does not complete within the specified amount of time, the command times out and the GPFS daemons shut down.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmshutdown command. You may issue the mmshutdown command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To stop GPFS on all nodes in the GPFS cluster, issue this command:
mmshutdown -a
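2. To stop GPFS on a subset of nodes only (the node names here are illustrative), the -N parameter described above can be used:
mmshutdown -N k164n04,k164n05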
See also
mmgetstate Command on page 197 mmlscluster Command on page 207 mmstartup Command on page 280
Location
/usr/lpp/mmfs/bin
mmsnapdir Command
Name
mmsnapdir - Creates and deletes invisible directories that connect to the snapshots of a GPFS file system, and changes the name of the snapshots subdirectory.
Synopsis
mmsnapdir Device {[-r | -a] [-s SnapDirName]}
Or,
mmsnapdir Device [-q]
Description
Use the mmsnapdir command to create or delete invisible directories that connect to the snapshots of a GPFS file system, and change the name of the snapshots subdirectory.
Snapshots appear in a subdirectory in the root directory of the file system. If you prefer to access the snapshots from each file system directory rather than traversing through the root directory, you may create an invisible directory to make the connection by issuing the mmsnapdir command with the -a flag (see Example 1 on page 278). The -a flag of the mmsnapdir command creates an invisible directory in each normal directory in the active file system (they do not appear in directories in snapshots) that contains a subdirectory for each existing snapshot of the file system. These subdirectories correspond to the copy of that directory in the snapshot with the same name.
If the mmsnapdir command is issued while another snapshot command is running, the mmsnapdir command waits for that command to complete.
For more information about GPFS snapshots, see Creating and maintaining snapshots of GPFS file systems in General Parallel File System: Advanced Administration Guide.
Parameters
Device The device name of the file system. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This must be the first parameter.
Options
-a Adds a snapshots subdirectory to all subdirectories in the file system.
-q Displays current settings, if issued without any other flags.
-r Reverses the effect of the -a option. All invisible snapshot directories are removed. The snapshot directory under the file system root directory is not affected.
-s SnapDirName Changes the name of the snapshots subdirectory to SnapDirName. This affects both the directory in the file system root as well as the invisible directory in the other file system directories if the mmsnapdir -a command has been issued.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
If you are a root user, you may issue the mmsnapdir command from any node in the GPFS cluster. You must be a root user to use the -a, -r, and -s options.
If you are a non-root user, you may only specify file systems that belong to the same cluster as the node on which the mmsnapdir command was issued.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. To rename the .snapshots directory (the default snapshots directory name) to .link for file system fs1, issue the command:
mmsnapdir fs1 -s .link
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.link/snap1/file1
/fs1/.link/snap1/userA/file2
/fs1/.link/snap1/userA/file3
2. To add a snapshots subdirectory to all of the subdirectories in the file system, issue:
mmsnapdir fs1 -a
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/userA/.link/snap1/file2
/fs1/userA/.link/snap1/file3
/fs1/.link/snap1/file1
/fs1/.link/snap1/userA/file2
/fs1/.link/snap1/userA/file3
The .link subdirectory under the root directory and under each subdirectory of the tree provides two different paths to each snapshot copy of a file. For example, /fs1/userA/.link/snap1/file2 and /fs1/.link/snap1/userA/file2 are two different paths that access the same snapshot copy of /fs1/userA/file2.
3. To reverse the effect of the -a option and remove the invisible snapshot directories, issue:
mmsnapdir fs1 -r
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
4. To display the current snapshot directory settings, issue:
mmsnapdir fs1 -q
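The output, not reproduced in this excerpt, reports the snapshot directory settings currently in effect; after the commands in Examples 1 and 3, the snapshots directory under the file system root would still be named .link.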
See also
mmcrsnapshot Command on page 139 mmdelsnapshot Command on page 172 mmlssnapshot Command on page 234 mmrestorefs Command on page 261
Location
/usr/lpp/mmfs/bin
mmstartup Command
Name
mmstartup Starts the GPFS subsystem on one or more nodes.
Synopsis
mmstartup [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [-E EnvVar=value ...]
Description
Use the mmstartup command to start the GPFS daemons on one or more nodes. If no operand is specified, GPFS is started only on the node from which the command was issued.
Parameters
-a Start GPFS on all nodes in a GPFS cluster.
-N {Node[,Node...] | NodeFile | NodeClass} Directs the mmstartup command to process a set of nodes. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4. This command does not support a NodeClass of mount.
Options
-E EnvVar=value Specifies the name and value of an environment variable to be passed to the GPFS daemon. You can specify multiple -E options.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmstartup command. You may issue the mmstartup command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
To start GPFS on all nodes in the GPFS cluster, issue this command:
mmstartup -a
The system displays output similar to:
Thu Aug 12 13:22:40 EDT 2004: 6027-1642 mmstartup: Starting GPFS ...
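To start GPFS on a single node and pass an environment variable to the GPFS daemon (an illustrative use of the -N and -E parameters; the node and variable names are hypothetical), a command of this form could be used:
mmstartup -N k164n04 -E MMFS_TRACE=1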
See also
mmgetstate Command on page 197 mmlscluster Command on page 207 mmshutdown Command on page 275
Location
/usr/lpp/mmfs/bin
mmtracectl Command
Name
mmtracectl Sets up and enables GPFS tracing.
Synopsis
mmtracectl {--start | --stop | --off | --set} [--trace={io | all | def | Class Level [Class Level ...]}] [--trace-recycle={off | local | global | globalOnShutdown}] [--aix-trace-buffer-size=BufferSize] [--tracedev-buffer-size=BufferSize] [--trace-file-size=FileSize] [--trace-dispatch={yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
Description
Attention: Use this command only under the direction of your IBM service representative.
Use the mmtracectl command to perform the following functions:
v Start or stop tracing.
v Turn tracing on (start or set trace recycle) or off on the next session. This is a persistent setting that automatically starts trace each time GPFS starts.
v Allow for predefined trace levels (io, all, and def), as well as user-specified trace levels.
v Allow for changing the size of the trace buffers for AIX, and for all other platforms using the tracedev option.
v Control trace recycling: never cycling traces (off option), recycling traces on the local node when the daemon goes down (local option), recycling traces on all nodes when GPFS ends abnormally (global option), or recycling traces on all nodes any time GPFS goes down (globalOnShutdown option).
Results
GPFS tracing can be started, stopped, or related configuration options can be set.
Parameters
{--start | --stop | --off | --set} Specifies the actions that the mmtracectl command performs, where:
--start Starts the trace.
--stop Stops the trace.
--off Clears all of the setting variables and stops the trace.
--set Sets the trace variables.
--trace={io | all | def | Class Level [Class Level ...]} Allows for predefined and user-specified trace levels, where:
io Indicates trace-level settings tailored for input and output (I/O).
all Sets trace levels to their highest setting (9).
def Indicates that the default trace settings will be used.
--trace-recycle={off | local | global | globalOnShutdown} Controls trace recycling during daemon termination. The following values are recognized:
off Does not recycle traces. This is the default.
local Recycles traces on the local node every time mmfsd goes down.
global Recycles traces on all nodes in the cluster when an abnormal daemon shutdown occurs.
globalOnShutdown Recycles traces on all nodes in the cluster for normal and abnormal daemon shutdowns.
-N {Node[,Node...] | NodeFile | NodeClass} Specifies the nodes that will participate in the tracing of the file system. This option supports all defined node classes. The default value is all. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4.
Options
--aix-trace-buffer-size=BufferSize Controls the size of the trace buffer in memory for AIX.
--tracedev-buffer-size=BufferSize Controls the size of the trace buffer in memory.
--trace-file-size=FileSize Controls the size of the trace file.
--trace-dispatch={yes | no} Enables AIX thread dispatching trace hooks.
Exit status
0 Successful completion.
nonzero A failure has occurred.
Security
You must have root authority to run the mmtracectl command. You can issue the mmtracectl command from any node in the GPFS cluster.
Examples
To set trace levels to the defined group of def and have traces start on all nodes when GPFS comes up, issue this command:
mmtracectl --set --trace=def --trace-recycle=global
The system displays output similar to:
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
To confirm the change, issue this command:
mmlsconfig | grep trace
The system displays output similar to:
trace all 4 tm 2 thread 1 mutex 1 vnode 2 ksvfs 3 klockl 2 io 3 pgalloc 1 mb 1 lock 2 fsck 3
traceRecycle global
To manually start traces on all nodes, issue this command:
mmtracectl --start
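Once the problem has been captured, tracing can be stopped, and the persistent settings cleared so that tracing no longer starts with GPFS; a short sketch using the actions defined above:
mmtracectl --stop
mmtracectl --off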
See also
mmchconfig Command on page 90 The mmtrace shell script
Location
/usr/lpp/mmfs/bin
mmumount Command
Name
mmumount Unmounts GPFS file systems on one or more nodes in the cluster.
Synopsis
mmumount {Device | MountPoint | all | all_local | all_remote} [-f] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Or,
mmumount Device -f -C {all_remote | ClusterName} [-N Node[,Node...]]
Description
Another name for the mmumount command is the mmunmount command. Either name can be used.
The mmumount command unmounts a previously mounted GPFS file system on one or more nodes in the cluster. If no nodes are specified, the file systems are unmounted only on the node from which the command was issued. The file system can be specified using its device name or the mount point where it is currently mounted.
Use the first form of the command to unmount file systems on nodes that belong to the local cluster.
Use the second form of the command with the -C option when it is necessary to force an unmount of file systems that are owned by the local cluster, but are mounted on nodes that belong to another cluster. When a file system is unmounted by force with the second form of the mmumount command, the affected nodes may still show the file system as mounted, but the data will not be accessible. It is the responsibility of the system administrator to clear the mount state by issuing the umount command.
Parameters
Device | all | all_local | all_remote
Device Is the device name of the file system to be unmounted. File system names do not need to be fully qualified. fs0 is as acceptable as /dev/fs0.
all Indicates all file systems that are known to this cluster.
all_local Indicates all file systems that are owned by this cluster.
all_remote Indicates all file systems that are owned by another cluster to which this cluster has access.
MountPoint Is the location where the GPFS file system to be unmounted is currently mounted.
Options
-a Unmounts the file system on all nodes in the GPFS cluster.
-f Forces the unmount to take place even though the file system may be still in use. Use this flag with extreme caution. Using this flag may cause outstanding write operations to be lost. Because of this, forcing an unmount can cause data integrity failures and should be used with caution.
-C {all_remote | ClusterName} Specifies the cluster on which the file system is to be unmounted by force. all_remote denotes all clusters other than the one from which the command was issued.
-N {Node[,Node...] | NodeFile | NodeClass} Specifies the nodes on which the file system is to be unmounted. For general information on how to specify node names, see Specifying nodes as input to GPFS commands on page 4. This command does not support a NodeClass of mount.
When the -N option is specified in conjunction with -C ClusterName, the specified node names are assumed to refer to nodes that belong to the specified remote cluster (as identified by the mmlsmount command). The mmumount command cannot verify the accuracy of this information. NodeClass and NodeFile are not supported in conjunction with the -C option.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmumount command. You may issue the mmumount command from any node in the GPFS cluster.
Examples
1. To unmount file system fs1 on all nodes in the cluster, issue this command:
mmumount fs1 -a
2. To force unmount file system fs2 on the local node, issue this command:
mmumount fs2 -f
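3. As an illustrative use of the second form of the command, to force unmount file system fs1 on all nodes of other clusters that have it mounted, issue this command:
mmumount fs1 -f -C all_remote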
See also
mmmount Command on page 236 mmlsmount Command on page 223
Location
/usr/lpp/mmfs/bin
mmunlinkfileset Command
Name
mmunlinkfileset Removes the junction to a GPFS fileset.
Synopsis
mmunlinkfileset Device {FilesetName | -J JunctionPath} [-f]
Description
The mmunlinkfileset command removes the junction to the fileset. The junction can be specified by path or by naming the fileset that is its target. The unlink fails if there are files open in the fileset, unless the -f flag is specified. The root fileset may not be unlinked.
Attention: If you are using the TSM Backup Archive client you must use caution when you unlink filesets that contain data backed up by TSM. TSM tracks files by pathname and does not track filesets. As a result, when you unlink a fileset, it appears to TSM that you deleted the contents of the fileset. Therefore, the TSM Backup Archive client inactivates the data on the TSM server, which may result in the loss of backup data during the expiration process.
For information on GPFS filesets, see the chapter Policy-based data management for GPFS in General Parallel File System: Advanced Administration Guide.
Parameters
Device The device name of the file system that contains the fileset. File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName Specifies the name of the fileset to be removed.
-J JunctionPath Specifies the name of the junction to be removed. A junction is a special directory entry that connects a name in a directory of one fileset to the root directory of another fileset.
Options
-f Forces the unlink to take place even though there may be open files. This option forcibly closes any open files, causing an errno of ESTALE on their next use of the file.
Exit status
0 Successful completion. nonzero A failure has occurred.
Security
You must have root authority to run the mmunlinkfileset command. You may issue the mmunlinkfileset command from any node in the GPFS cluster.
When using the rcp and rsh commands for remote communication, a properly configured .rhosts file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the mmcrcluster or the mmchcluster command, you must ensure:
1. Proper authorization is granted to all nodes in the GPFS cluster.
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
Examples
1. This command indicates the current configuration of filesets for file system gpfs1:
mmlsfileset gpfs1
2. This command unlinks junction path /gpfs1/fset1 from file system gpfs1:
mmunlinkfileset gpfs1 -J /gpfs1/fset1
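The fileset can equally be unlinked by naming it rather than its junction path; an illustrative equivalent for the same example:
mmunlinkfileset gpfs1 fset1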
See also
mmchfileset Command on page 104 mmcrfileset Command on page 125 mmdelfileset Command on page 162 mmlinkfileset Command on page 203 mmlsfileset Command on page 214
Location
/usr/lpp/mmfs/bin
gpfs_free_fssnaphandle() Subroutine on page 307
    Frees a file system snapshot handle.
gpfs_fssnap_handle_t Structure on page 308
    Contains a handle for a GPFS file system or snapshot.
gpfs_fssnap_id_t Structure on page 309
    Contains a permanent identifier for a GPFS file system or snapshot.
gpfs_fstat() Subroutine on page 310
    Returns exact file status for a GPFS file.
gpfs_get_fsname_from_fssnaphandle() Subroutine on page 312
    Obtains a file system name from its snapshot handle.
gpfs_get_fssnaphandle_by_fssnapid() Subroutine on page 313
    Obtains a file system snapshot handle using its snapshot ID.
gpfs_get_fssnaphandle_by_name() Subroutine on page 315
    Obtains a file system snapshot handle using its name.
gpfs_get_fssnaphandle_by_path() Subroutine on page 317
    Obtains a file system snapshot handle using its path name.
gpfs_get_fssnapid_from_fssnaphandle() Subroutine on page 319
    Obtains a file system snapshot ID using its snapshot handle.
gpfs_get_pathname_from_fssnaphandle() Subroutine on page 321
    Obtains a file system path name using its snapshot handle.
gpfs_get_snapdirname() Subroutine on page 323
    Obtains the name of the directory containing snapshots.
gpfs_get_snapname_from_fssnaphandle() Subroutine on page 325
    Obtains a snapshot name using its file system snapshot handle.
gpfs_getacl() Subroutine on page 327
    Retrieves the access control information for a GPFS file.
gpfs_iattr_t Structure on page 329
    Contains attributes of a GPFS inode.
gpfs_iclose() Subroutine on page 332
    Closes a file given its inode file handle.
gpfs_ifile_t Structure on page 334
    Contains a handle for a GPFS inode.
gpfs_igetattrs() Subroutine on page 335
    Obtains extended file attributes.
gpfs_igetfilesetname() Subroutine on page 337
    Returns the name of the fileset defined by a fileset ID.
gpfs_igetstoragepool() Subroutine on page 339
    Returns the name of the storage pool for the given storage pool ID.
gpfs_iopen() Subroutine on page 341
    Opens a file or directory by its inode number.
gpfs_iread() Subroutine on page 343
    Reads a file opened by gpfs_iopen().
Table 8. GPFS programming interfaces (continued)

gpfs_ireaddir() Subroutine on page 345
    Reads the next directory entry.
gpfs_ireadlink() Subroutine on page 347
    Reads a symbolic link.
gpfs_ireadx() Subroutine on page 349
    Performs block level incremental read of a file within an incremental inode scan.
gpfs_iscan_t Structure on page 351
    Contains mapping of the inode scan structure.
gpfs_next_inode() Subroutine on page 352
    Retrieves the next inode from the inode scan.
gpfs_opaque_acl_t Structure on page 354
    Contains buffer mapping for the gpfs_getacl() and gpfs_putacl() subroutines.
gpfs_open_inodescan() Subroutine on page 355
    Opens an inode scan over a file system or snapshot.
gpfs_prealloc() Subroutine on page 358
    Pre-allocates disk storage for a GPFS file.
gpfs_putacl() Subroutine on page 361
    Sets the access control information for a GPFS file.
gpfs_quotactl() Subroutine on page 363
    Manipulates disk quotas on file systems.
gpfs_quotaInfo_t Structure on page 366
    Contains buffer mapping for the gpfs_quotactl() subroutine.
gpfs_seek_inode() Subroutine on page 368
    Advances an inode scan to the specified inode number.
gpfs_stat() Subroutine on page 370
    Returns exact file status for a GPFS file.
gpfsAccessRange_t Structure on page 372
    Declares an access range within a file for an application.
gpfsCancelHints_t Structure on page 373
    Indicates to remove any hints against the open file handle.
gpfsClearFileCache_t Structure on page 374
    Indicates file access in the near future is not expected.
gpfsDataShipMap_t Structure on page 375
    Indicates which agent nodes are to be used for data shipping.
gpfsDataShipStart_t Structure on page 377
    Initiates data shipping mode.
gpfsDataShipStop_t Structure on page 380
    Takes a file out of data shipping mode.
gpfsFcntlHeader_t Structure on page 382
    Contains declaration information for the gpfs_fcntl() subroutine.
gpfsFreeRange_t Structure on page 383
    Undeclares an access range within a file for an application.
gpfsGetFilesetName_t Structure on page 384
    Obtains a file's fileset name.
gpfsGetReplication_t Structure on page 385
    Obtains a file's replication factors.
gpfsGetSnapshotName_t Structure on page 387
    Obtains a file's snapshot name.
gpfsGetStoragePool_t Structure on page 388
    Obtains a file's storage pool name.
gpfsMultipleAccessRange_t Structure on page 389
    Defines prefetching and write-behind file access for an application.
gpfsRestripeData_t Structure on page 391
    Restripes a file's data blocks.
gpfsSetReplication_t Structure on page 392
    Sets a file's replication factors.
gpfsSetStoragePool_t Structure on page 394
    Sets a file's assigned storage pool.
gpfs_acl_t Structure
Name
gpfs_acl_t - Contains buffer mapping for the gpfs_getacl() and gpfs_putacl() subroutines.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
/* The GPFS ACL */
typedef struct gpfs_acl
{
  gpfs_aclLen_t     acl_len;     /* Total length of this ACL in bytes */
  gpfs_aclLevel_t   acl_level;   /* Reserved (must be zero) */
  gpfs_aclVersion_t acl_version; /* POSIX or NFS4 ACL */
  gpfs_aclType_t    acl_type;    /* Access, Default, or NFS4 */
  gpfs_aclCount_t   acl_nace;    /* Number of Entries that follow */
  union
  {
    gpfs_ace_v1_t ace_v1[1];     /* when GPFS_ACL_VERSION_POSIX */
    gpfs_ace_v4_t ace_v4[1];     /* when GPFS_ACL_VERSION_NFS4 */
  };
} gpfs_acl_t;
Description
The gpfs_acl_t structure contains size, version, and ACL type information for the gpfs_getacl() and gpfs_putacl() subroutines.
Members
acl_len
  The total length (in bytes) of this gpfs_acl_t structure.
acl_level
  Reserved for future use. Currently must be zero.
acl_version
  This field contains the version of the GPFS ACL. GPFS supports two ACL versions: GPFS_ACL_VERSION_POSIX and GPFS_ACL_VERSION_NFS4. On input to the gpfs_getacl() subroutine, set this field to zero.
acl_type
  On input to the gpfs_getacl() subroutine, set this field to:
  v Either GPFS_ACL_TYPE_ACCESS or GPFS_ACL_TYPE_DEFAULT for POSIX ACLs
  v GPFS_ACL_TYPE_NFS4 for NFS ACLs.
  These constants are defined in the gpfs.h header file.
acl_nace
  The number of ACL entries that are in the array (ace_v1 or ace_v4).
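Examples

A minimal sketch of reading a file's access ACL into a gpfs_acl_t buffer with the gpfs_getacl() subroutine (described on page 327). The path /gpfs1/somefile is a hypothetical example, and real code would resize the buffer on the ENOSPC feedback described there:

#include <stdio.h>
#include <string.h>
#include <gpfs.h>

int main(void)
{
    char buf[4096];
    gpfs_acl_t *aclP = (gpfs_acl_t *)buf;

    memset(buf, 0, sizeof(buf));
    aclP->acl_len     = sizeof(buf);           /* first 4 bytes: total buffer size */
    aclP->acl_level   = 0;                     /* reserved, must be zero */
    aclP->acl_version = 0;                     /* zero on input to gpfs_getacl() */
    aclP->acl_type    = GPFS_ACL_TYPE_ACCESS;  /* POSIX access ACL */

    if (gpfs_getacl("/gpfs1/somefile", GPFS_GETACL_STRUCT, aclP) != 0) {
        perror("gpfs_getacl");
        return 1;
    }
    printf("ACL version %d with %d entries\n",
           (int)aclP->acl_version, (int)aclP->acl_nace);
    return 0;
}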
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_close_inodescan() Subroutine
Name
gpfs_close_inodescan() - Closes an inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
void gpfs_close_inodescan(gpfs_iscan_t *iscan);
Description
The gpfs_close_inodescan() subroutine closes the scan of the inodes in a file system or snapshot that was opened with the gpfs_open_inodescan() subroutine. The gpfs_close_inodescan() subroutine frees all storage used for the inode scan and invalidates the iscan handle.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
iscan Pointer to the inode scan handle.
Exit status
The gpfs_close_inodescan() subroutine returns void.
Exceptions
None.
Error status
None.
Examples
For an example using gpfs_close_inodescan(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
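In outline, the matching open and close calls pair up as follows. This is a sketch only: the file system name fs1 is a placeholder, and the gpfs_open_inodescan() arguments follow the interface described on page 355:

#include <stdio.h>
#include <gpfs.h>

int main(void)
{
    gpfs_fssnap_handle_t *fsh;
    gpfs_iscan_t *iscan;

    fsh = gpfs_get_fssnaphandle_by_name("fs1", NULL); /* active file system */
    if (fsh == NULL) { perror("fssnaphandle"); return 1; }

    /* NULL previous snapshot ID requests a full, not incremental, scan */
    iscan = gpfs_open_inodescan(fsh, NULL, NULL);
    if (iscan == NULL) {
        perror("gpfs_open_inodescan");
        gpfs_free_fssnaphandle(fsh);
        return 1;
    }

    /* ... iterate with gpfs_next_inode() ... */

    gpfs_close_inodescan(iscan);   /* frees all scan storage, invalidates iscan */
    gpfs_free_fssnaphandle(fsh);
    return 0;
}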
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_cmp_fssnapid() Subroutine
Name
gpfs_cmp_fssnapid() - Compares two snapshot IDs for the same file system to determine the order in which the two snapshots were taken.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_cmp_fssnapid(const gpfs_fssnap_id_t *fssnapId1,
                      const gpfs_fssnap_id_t *fssnapId2,
                      int *result);
Description
The gpfs_cmp_fssnapid() subroutine compares two snapshot IDs for the same file system to determine the order in which the two snapshots were taken. The result parameter is set as follows:
v result less than zero indicates that snapshot 1 was taken before snapshot 2.
v result equal to zero indicates that snapshot 1 and 2 are the same.
v result greater than zero indicates that snapshot 1 was taken after snapshot 2.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fssnapId1
  File system snapshot ID of the first snapshot.
fssnapId2
  File system snapshot ID of the second snapshot.
result
  Pointer to an integer indicating the outcome of the comparison.
Exit status
If the gpfs_cmp_fssnapid() subroutine is successful, it returns a value of 0 and the result parameter is set as described above. If the gpfs_cmp_fssnapid() subroutine is unsuccessful, it returns a value of -1 and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
EDOM
  The two snapshots cannot be compared because they were taken from two different file systems.
ENOSYS
  The gpfs_cmp_fssnapid() subroutine is not available.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
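Examples

A small sketch; the two snapshot IDs are assumed to have been saved from earlier backups of the same file system:

#include <stdio.h>
#include <gpfs.h>

/* Returns -1, 0, or 1 for the ordering of two saved snapshot IDs,
   or 2 if the comparison itself fails (for example, EDOM). */
int order_snapshots(const gpfs_fssnap_id_t *id1, const gpfs_fssnap_id_t *id2)
{
    int result;
    if (gpfs_cmp_fssnapid(id1, id2, &result) != 0) {
        perror("gpfs_cmp_fssnapid");
        return 2;
    }
    return (result < 0) ? -1 : (result > 0 ? 1 : 0);
}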
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_direntx_t Structure
Name
gpfs_direntx_t - Contains attributes of a GPFS directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_direntx
{
  int d_version;            /* this struct's version */
  unsigned short d_reclen;  /* actual size of this struct including
                               null-terminated variable-length d_name */
  unsigned short d_type;    /* types are defined below */
  gpfs_ino_t d_ino;         /* file inode number */
  gpfs_gen_t d_gen;         /* generation number for the inode */
  char d_name[256];         /* null-terminated variable-length name */
} gpfs_direntx_t;

/* File types for d_type field in gpfs_direntx_t */
#define GPFS_DE_OTHER  0
#define GPFS_DE_DIR    4
#define GPFS_DE_REG    8
#define GPFS_DE_LNK   10
Description
The gpfs_direntx_t structure contains the attributes of a GPFS directory entry.
Members
d_version
  The version number of this structure.
d_reclen
  The actual size of this structure including the null-terminated variable-length d_name field. To allow some degree of forward compatibility, careful callers should use the d_reclen field for the size of the structure rather than the sizeof() function.
d_type
  The type of the directory entry (one of the GPFS_DE_* values defined with the structure).
d_ino
  The inode number of the entry.
d_gen
  The generation number for the inode.
d_name
  Null-terminated variable-length name of the entry.
Examples
For an example using gpfs_direntx_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fcntl() Subroutine
Name
gpfs_fcntl() - Performs operations on an open file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fcntl(int fileDesc, void *fcntlArgP)
Description
The gpfs_fcntl() subroutine is used to pass file access pattern information and to control certain file attributes on behalf of an open file. More than one operation can be requested with a single invocation of gpfs_fcntl(). The type and number of operations is determined by the second operand, fcntlArgP, which is a pointer to a data structure built in memory by the application. This data structure consists of:
v A fixed length header, mapped by gpfsFcntlHeader_t.
v A variable list of individual file access hints, directives or other control structures:
  File access hints:
  1. gpfsAccessRange_t
  2. gpfsFreeRange_t
  3. gpfsMultipleAccessRange_t
  4. gpfsClearFileCache_t
  File access directives:
  1. gpfsCancelHints_t
  2. gpfsDataShipMap_t
  3. gpfsDataShipStart_t
  4. gpfsDataShipStop_t
  Other file attribute operations:
  1. gpfsGetFilesetName_t
  2. gpfsGetReplication_t
  3. gpfsGetSnapshotName_t
  4. gpfsGetStoragePool_t
  5. gpfsRestripeData_t
  6. gpfsSetReplication_t
  7. gpfsSetStoragePool_t

The above hints, directives and other operations may be mixed within a single gpfs_fcntl() subroutine, and are performed in the order that they appear. A subsequent hint or directive may cancel out a preceding one. See Chapter 7, Communicating file access patterns to GPFS, on page 59.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fileDesc
  The file descriptor identifying the file to which GPFS applies the hints and directives.
fcntlArgP
  A pointer to the list of operations to be passed to GPFS.
Exit status
If the gpfs_fcntl() subroutine is successful, it returns a value of 0. If the gpfs_fcntl() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EBADF
  The file descriptor is not valid.
EINVAL
  The file descriptor does not refer to a GPFS file or a regular file. The system call is not valid.
ENOSYS
  The gpfs_fcntl() subroutine is not supported under the current file system format.
E2BIG
  An argument is longer than GPFS_MAX_FCNTL_LENGTH.
Examples
1. This programming segment releases all cache data held by the file handle and tells GPFS that the subroutine will write the portion of the file with file offsets between 2 GB and 3 GB minus one:
struct
{
  gpfsFcntlHeader_t hdr;
  gpfsClearFileCache_t rel;
  gpfsAccessRange_t acc;
} arg;

arg.hdr.totalLength = sizeof(arg);
arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
arg.hdr.fcntlReserved = 0;
arg.rel.structLen = sizeof(arg.rel);
arg.rel.structType = GPFS_CLEAR_FILE_CACHE;
arg.acc.structLen = sizeof(arg.acc);
arg.acc.structType = GPFS_ACCESS_RANGE;
arg.acc.start = 2LL * 1024LL * 1024LL * 1024LL;
arg.acc.length = 1024 * 1024 * 1024;
arg.acc.isWrite = 1;
rc = gpfs_fcntl(handle, &arg);
2. This programming segment gets the storage pool name and fileset name of a file from GPFS.
struct
{
  gpfsFcntlHeader_t hdr;
  gpfsGetStoragePool_t pool;
  gpfsGetFilesetName_t fileset;
} fcntlArg;

fcntlArg.hdr.totalLength = sizeof(fcntlArg.hdr) + sizeof(fcntlArg.pool)
                         + sizeof(fcntlArg.fileset);
fcntlArg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
fcntlArg.hdr.fcntlReserved = 0;
fcntlArg.pool.structLen = sizeof(fcntlArg.pool);
fcntlArg.pool.structType = GPFS_FCNTL_GET_STORAGEPOOL;
fcntlArg.fileset.structLen = sizeof(fcntlArg.fileset);
fcntlArg.fileset.structType = GPFS_FCNTL_GET_FILESETNAME;
rc = gpfs_fcntl(fd, &fcntlArg);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fgetattrs() Subroutine
Name
gpfs_fgetattrs() - Retrieves all extended file attributes in opaque format.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fgetattrs(int fileDesc, int flags, void *bufferP,
                   int bufferSize, int *attrSizeP)
Description
The gpfs_fgetattrs() subroutine, together with gpfs_fputattrs(), is intended for use by a backup program to save (gpfs_fgetattrs()) and restore (gpfs_fputattrs()) extended file attributes such as ACLs, DMAPI attributes, and other information for the file. If the file has no extended attributes, the gpfs_fgetattrs() subroutine returns a value of 0, but sets attrSizeP to 0 and leaves the contents of the buffer unchanged.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fileDesc
  The file descriptor identifying the file whose extended attributes are being retrieved.
flags
  Must have one of these three values:
  GPFS_ATTRFLAG_DEFAULT
    v Saves the attributes for file placement
    v Saves the currently assigned storage pool
  GPFS_ATTRFLAG_NO_PLACEMENT
    v Does not save attributes for file placement
    v Does not save the currently assigned storage pool
  GPFS_ATTRFLAG_IGNORE_POOL
    v Saves attributes for file placement
    v Does not save the currently assigned storage pool
bufferP
  Pointer to a buffer to store the extended attribute information.
bufferSize
  The size of the buffer that was passed in.
attrSizeP
  If successful, returns the actual size of the attribute information that was stored in the buffer. If the bufferSize was too small, returns the minimum buffer size.
Exit status
If the gpfs_fgetattrs() subroutine is successful, it returns a value of 0.
If the gpfs_fgetattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSPC
  bufferSize is too small to return all of the attributes. On return, *attrSizeP is set to the required size.
ENOSYS
  The gpfs_fgetattrs() subroutine is not supported under the current file system format.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fputattrs() Subroutine
Name
gpfs_fputattrs() - Sets all the extended file attributes for a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fputattrs(int fileDesc, int flags, void *bufferP)
Description
The gpfs_fputattrs() subroutine, together with gpfs_fgetattrs(), is intended for use by a backup program to save (gpfs_fgetattrs()) and restore (gpfs_fputattrs()) all of the extended attributes of a file. This subroutine also sets the storage pool for the file and sets data replication to the values that are saved in the extended attributes.

If the saved storage pool is not valid or if the IGNORE_POOL flag is set, GPFS will select the storage pool by matching a PLACEMENT rule using the saved file attributes. If GPFS fails to match a placement rule or if there are no placement rules installed, GPFS assigns the file to the system storage pool.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fileDesc
  The file descriptor identifying the file whose extended attributes are being set.
flags
  Non-placement attributes such as ACLs are always restored, regardless of the value of the flag. Flags must have one of these three values:
  GPFS_ATTRFLAG_DEFAULT
    Restores the previously assigned storage pool and previously assigned data replication.
  GPFS_ATTRFLAG_NO_PLACEMENT
    Does not change storage pool and data replication.
  GPFS_ATTRFLAG_IGNORE_POOL
    Selects storage pool and data replication by matching the saved attributes to a placement rule instead of restoring the saved storage pool.
bufferP
  A pointer to the buffer containing the extended attributes for the file. If you specify a value of NULL, all extended ACLs for the file are deleted.
Exit status
If the gpfs_fputattrs() subroutine is successful, it returns a value of 0. If the gpfs_fputattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EINVAL
  The buffer pointed to by bufferP does not contain valid attribute data.
ENOSYS
  The gpfs_fputattrs() subroutine is not supported under the current file system format.
Examples
To copy extended file attributes from file f1 to file f2:

char buf[4096];
int f1, f2, attrSize, rc;

rc = gpfs_fgetattrs(f1, GPFS_ATTRFLAG_DEFAULT, buf, sizeof(buf), &attrSize);
if (rc != 0)
  ... // error handling
if (attrSize != 0)
  rc = gpfs_fputattrs(f2, 0, buf);   // copy attributes from f1 to f2
else
  rc = gpfs_fputattrs(f2, 0, NULL);  // f1 has no attributes
                                     // delete attributes on f2
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fputattrswithpathname() Subroutine
Name

gpfs_fputattrswithpathname() - Sets all of the extended file attributes for a file and invokes the policy engine for RESTORE rules.

Library

GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)

Synopsis

#include <gpfs.h>
int gpfs_fputattrswithpathname(int fileDesc, int flags, void *bufferP,
                               const char *pathname)

Description

The gpfs_fputattrswithpathname() subroutine sets all of the extended attributes of a file. In addition, gpfs_fputattrswithpathname() invokes the policy engine using the saved attributes to match a RESTORE rule to set the storage pool and the data replication for the file. The caller should include the full path to the file (including the file name) to allow rule selection based on file name or path. If the file fails to match a RESTORE rule or if there are no RESTORE rules installed, GPFS selects the storage pool and data replication as it does when calling gpfs_fputattrs().

Note: Compile any program that uses this subroutine with the -lgpfs flag from one of the following libraries:
v libgpfs.a for AIX
v libgpfs.so for Linux

Parameters

fileDesc
  Is the file descriptor that identifies the file whose extended attributes are to be set.
flags
  Non-placement attributes such as ACLs are always restored, regardless of the value of the flag. Flags must have one of these three values:
  GPFS_ATTRFLAG_DEFAULT
    Uses the saved attributes to match a RESTORE rule to set the storage pool and the data replication for the file.
  GPFS_ATTRFLAG_NO_PLACEMENT
    Does not change storage pool and data replication.
  GPFS_ATTRFLAG_IGNORE_POOL
    Checks the file to see if it matches a RESTORE rule. If the file fails to match a RESTORE rule, GPFS ignores the saved storage pool and selects a pool by matching the saved attributes to a PLACEMENT rule.
bufferP
  A pointer to the buffer containing the extended attributes for the file. If you specify a value of NULL, all extended ACLs for the file are deleted.
pathname
  The full path to the file (including the file name), used for rule selection based on file name or path.

Exit status

If the gpfs_fputattrswithpathname() subroutine is successful, it returns a value of 0.

If the gpfs_fputattrswithpathname() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions

None.

Error status

EINVAL
  The buffer to which bufferP points does not contain valid attribute data.
ENOSYS
  The gpfs_fputattrswithpathname() subroutine is not supported under the current file system format.

Examples

Refer to gpfs_fputattrs() Subroutine on page 303 for examples.

Location

/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_free_fssnaphandle() Subroutine
Name
gpfs_free_fssnaphandle() - Frees a GPFS file system snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
void gpfs_free_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_free_fssnaphandle() subroutine frees the snapshot handle that is passed. The return value is always void.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle.
Exit status
The gpfs_free_fssnaphandle() subroutine always returns void.
Exceptions
None.
Error status
None.
Examples
For an example using gpfs_free_fssnaphandle(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fssnap_handle_t Structure
Name
gpfs_fssnap_handle_t - Contains a handle for a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_fssnap_handle gpfs_fssnap_handle_t;
Description
A file system or snapshot is uniquely identified by an fssnapId of type gpfs_fssnap_id_t. While the fssnapId is permanent and global, a shorter fssnapHandle is used by the backup application programming interface to identify the file system and snapshot being accessed. The fssnapHandle, like a POSIX file descriptor, is volatile and may be used only by the program that created it.

There are three ways to create a file system snapshot handle:
1. By using the name of the file system and snapshot
2. By specifying the path through the mount point
3. By providing an existing file system snapshot ID

Additional subroutines are provided to obtain the permanent, global fssnapId from the fssnapHandle, or to obtain the path or the names for the file system and snapshot, if they are still available in the file system. The file system must be mounted in order to use the backup application programming interface.

If the fssnapHandle is created by the path name, the path may be relative and may specify any file or directory in the file system. Operations on a particular snapshot are indicated with a path to a file or directory within that snapshot.

If the fssnapHandle is created by name, the file system's unique name may be specified (for example, fs1) or its device name may be provided (for example, /dev/fs1). To specify an operation on the active file system, the pointer to the snapshot's name should be set to NULL or a zero-length string provided.

The name of the directory under which all snapshots appear may be obtained by the gpfs_get_snapdirname() subroutine. By default this is .snapshots, but it can be changed using the mmsnapdir command. The gpfs_get_snapdirname() subroutine returns the currently set value, which is the one that was last set by the mmsnapdir command, or the default, if it was never changed.
Members
gpfs_fssnap_handle File system snapshot handle
Examples
For an example using gpfs_fssnap_handle_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fssnap_id_t Structure
Name
gpfs_fssnap_id_t - Contains a permanent, globally unique identifier for a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_fssnap_id
{
  char opaque[48];
} gpfs_fssnap_id_t;
Description
A file system or snapshot is uniquely identified by an fssnapId of type gpfs_fssnap_id_t. The fssnapId is a permanent and global identifier that uniquely identifies an active file system or a read-only snapshot of a file system. Every snapshot of a file system has a unique identifier that is also different from the identifier of the active file system itself.

The fssnapId is obtained from an open fssnapHandle. Once obtained, the fssnapId should be stored along with the file system's data for each backup. The fssnapId is required to generate an incremental backup. The fssnapId identifies the previously backed up file system or snapshot and allows the inode scan to return only the files and data that have changed since that previous scan.
Members
opaque A 48 byte area for containing the snapshot identifier.
Examples
For an example using gpfs_fssnap_id_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
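A short sketch of the typical pattern: capture the permanent fssnapId from an open handle and store it with the backup's own metadata. The handle fsh and the metadata file metaFile are assumed to be set up by the caller:

#include <stdio.h>
#include <gpfs.h>

/* Save the 48 opaque bytes of the snapshot ID so that a later
   incremental backup can reference this backup. */
void save_snapid(gpfs_fssnap_handle_t *fsh, FILE *metaFile)
{
    gpfs_fssnap_id_t snapId;

    if (gpfs_get_fssnapid_from_fssnaphandle(fsh, &snapId) == 0)
        fwrite(&snapId, sizeof(snapId), 1, metaFile);
    else
        perror("gpfs_get_fssnapid_from_fssnaphandle");
}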
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fstat() Subroutine
Name
gpfs_fstat() - Returns exact file status for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fstat(int fileDesc, struct stat64 *Buffer)
Description
The gpfs_fstat() subroutine is used to obtain exact information about the file associated with the fileDesc parameter. This subroutine is provided as an alternative to the stat() subroutine, which may not provide exact mtime and atime values. See Exceptions to Open Group technical standards on page 401.

Read, write, or execute permission for the named file is not required, but all directories listed in the path leading to the file must be searchable. The file information is written to the area specified by the Buffer parameter.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fileDesc
  The file descriptor identifying the file for which exact status information is requested.
Buffer
  A pointer to the stat64 structure in which the information is returned. The stat64 structure is described in the sys/stat.h file.
Exit status
If the gpfs_fstat() subroutine is successful, it returns a value of 0. If the gpfs_fstat() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EBADF
  The file descriptor is not valid.
EINVAL
  The file descriptor does not refer to a GPFS file or a regular file.
ENOSYS
  The gpfs_fstat() subroutine is not supported under the current file system format.
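Examples

A minimal sketch; the path /gpfs1/somefile is a placeholder, and the _LARGEFILE64_SOURCE definition is what Linux typically needs for struct stat64:

#define _LARGEFILE64_SOURCE  /* for struct stat64 on Linux */
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <gpfs.h>

int main(void)
{
    struct stat64 sb;
    int fd = open("/gpfs1/somefile", O_RDONLY);

    if (fd < 0) { perror("open"); return 1; }
    if (gpfs_fstat(fd, &sb) != 0) { perror("gpfs_fstat"); close(fd); return 1; }
    printf("size=%lld bytes, exact mtime=%ld\n",
           (long long)sb.st_size, (long)sb.st_mtime);
    close(fd);
    return 0;
}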
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fsname_from_fssnaphandle() Subroutine
Name
gpfs_get_fsname_from_fssnaphandle() - Obtains the file system's name from a GPFS file system snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_fsname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_fsname_from_fssnaphandle() subroutine returns a pointer to the name of the file system that is uniquely identified by the file system snapshot handle.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle.
Exit status
If the gpfs_get_fsname_from_fssnaphandle() subroutine is successful, it returns a pointer to the name of the file system identified by the file system snapshot handle. If the gpfs_get_fsname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_get_fsname_from_fssnaphandle() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_fssnapid() Subroutine
Name
gpfs_get_fssnaphandle_by_fssnapid() - Obtains a GPFS file system snapshot handle given its permanent, unique snapshot ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_fssnapid(const gpfs_fssnap_id_t *fssnapId);
Description
The gpfs_get_fssnaphandle_by_fssnapid() subroutine creates a handle for the file system or snapshot that is uniquely identified by the permanent, unique snapshot ID.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fssnapId File system snapshot ID
Exit status
If the gpfs_get_fssnaphandle_by_fssnapid() subroutine is successful, it returns a pointer to the file system snapshot handle. If the gpfs_get_fssnaphandle_by_fssnapid() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOMEM
  Space could not be allocated for the file system snapshot handle.
ENOSYS
  The gpfs_get_fssnaphandle_by_fssnapid() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPID
  The file system snapshot ID is not valid.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_name() Subroutine
Name
gpfs_get_fssnaphandle_by_name() - Obtains a GPFS file system snapshot handle given the file system and snapshot names.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_name(const char *fsName,
                                                    const char *snapName);
Description
The gpfs_get_fssnaphandle_by_name() subroutine creates a handle for the file system or snapshot that is uniquely identified by the file system's name and the name of the snapshot.

Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
fsName
  A pointer to the name of the file system whose snapshot handle is desired.
snapName
  A pointer to the name of the snapshot whose snapshot handle is desired, or NULL to access the active file system rather than a snapshot within the file system.
Exit status
If the gpfs_get_fssnaphandle_by_name() subroutine is successful, it returns a pointer to the file system snapshot handle. If the gpfs_get_fssnaphandle_by_name() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOENT
  The file system name is not valid.
ENOMEM
  Space could not be allocated for the file system snapshot handle.
ENOSYS
  The gpfs_get_fssnaphandle_by_name() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
GPFS_E_INVAL_SNAPNAME
  The snapshot name is not valid.
Examples
For an example using gpfs_get_fssnaphandle_by_name(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
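In outline (a sketch; fs1 is a placeholder file system name):

gpfs_fssnap_handle_t *fsh;

/* A NULL snapshot name selects the active file system */
fsh = gpfs_get_fssnaphandle_by_name("fs1", NULL);
if (fsh == NULL)
    perror("gpfs_get_fssnaphandle_by_name");
else
    gpfs_free_fssnaphandle(fsh);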
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_path() Subroutine
Name
gpfs_get_fssnaphandle_by_path() - Obtains a GPFS file system snapshot handle given a path to the file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_path(const char *pathName);
Description
The gpfs_get_fssnaphandle_by_path() subroutine creates a handle for the file system or snapshot that is uniquely identified by a path through the file system's mount point to a file or directory within the file system or snapshot.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
pathName A pointer to the path name to a file or directory within the desired file system or snapshot.
Exit status
If the gpfs_get_fssnaphandle_by_path() subroutine is successful, it returns a pointer to the file system snapshot handle. If the gpfs_get_fssnaphandle_by_path() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOENT
  The path name is not valid.
ENOMEM
  Space could not be allocated for the file system snapshot handle.
ENOSYS
  The gpfs_get_fssnaphandle_by_path() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
Examples
For an example using gpfs_get_fssnaphandle_by_path(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnapid_from_fssnaphandle() Subroutine
Name
gpfs_get_fssnapid_from_fssnaphandle() - Obtains the permanent, unique GPFS file system snapshot ID given its handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_get_fssnapid_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle,
                                        gpfs_fssnap_id_t *fssnapId);
Description
The gpfs_get_fssnapid_from_fssnaphandle() subroutine obtains the permanent, globally unique file system snapshot ID of the file system or snapshot identified by the open file system snapshot handle.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fssnapHandle
  File system snapshot handle.
fssnapId
  File system snapshot ID.
Exit status
If the gpfs_get_fssnapid_from_fssnaphandle() subroutine is successful, it returns a value of 0 and the file system snapshot ID is returned in the fssnapId parameter.

If the gpfs_get_fssnapid_from_fssnaphandle() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EFAULT
  Size mismatch for fssnapId.
EINVAL
  NULL pointer given for returned fssnapId.
ENOSYS
  The gpfs_get_fssnapid_from_fssnaphandle() subroutine is not available.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
Examples
For an example using gpfs_get_fssnapid_from_fssnaphandle(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_pathname_from_fssnaphandle() Subroutine
Name
gpfs_get_pathname_from_fssnaphandle() - Obtains the path name of a GPFS file system snapshot given its handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_pathname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_pathname_from_fssnaphandle() subroutine obtains the path name of the file system or snapshot identified by the open file system snapshot handle.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle.
Exit status
If the gpfs_get_pathname_from_fssnaphandle() subroutine is successful, it returns a pointer to the path name of the file system or snapshot. If the gpfs_get_pathname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_get_pathname_from_fssnaphandle() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
Examples
For an example using gpfs_get_pathname_from_fssnaphandle(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_snapdirname() Subroutine
Name
gpfs_get_snapdirname() - Obtains the name of the directory containing snapshots.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_get_snapdirname(gpfs_fssnap_handle_t *fssnapHandle,
                         char *snapdirName, int bufLen);
Description
The gpfs_get_snapdirname() subroutine obtains the name of the directory that is used to contain snapshots.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fssnapHandle
  File system snapshot handle.
snapdirName
  Buffer into which the name of the snapshot directory will be copied.
bufLen
  The size of the provided buffer.
Exit status
If the gpfs_get_snapdirname() subroutine is successful, it returns a value of 0 and the snapdirName and bufLen parameters are set as described above. If the gpfs_get_snapdirname() subroutine is unsuccessful, it returns a value of -1 and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
ENOMEM
  Unable to allocate memory for this request.
ENOSYS
  The gpfs_get_snapdirname() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ERANGE
  The buffer is too small to return the snapshot directory name.
ESTALE
  The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
Examples
For an example using gpfs_get_snapdirname(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
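In outline (a sketch; fssnapHandle is assumed to come from one of the gpfs_get_fssnaphandle_* subroutines):

char snapdirName[1024];

if (gpfs_get_snapdirname(fssnapHandle, snapdirName, sizeof(snapdirName)) == 0)
    printf("snapshot directory: %s\n", snapdirName);  /* default: .snapshots */
else
    perror("gpfs_get_snapdirname");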
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_snapname_from_fssnaphandle() Subroutine
Name
gpfs_get_snapname_from_fssnaphandle() - Obtains the name of the snapshot identified by the GPFS file system snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_snapname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_snapname_from_fssnaphandle() subroutine obtains a pointer to the name of a GPFS snapshot given its file system snapshot handle. If the fssnapHandle identifies an active file system, as opposed to a snapshot of a file system, gpfs_get_snapname_from_fssnaphandle() returns a pointer to a zero-length snapshot name and a successful return code.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle.
Exit status
If the gpfs_get_snapname_from_fssnaphandle() subroutine is successful, it returns a pointer to the name of the snapshot. If the gpfs_get_snapname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_get_snapname_from_fssnaphandle() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
  The file system snapshot handle is not valid.
GPFS_E_INVAL_SNAPNAME
  The snapshot has been deleted.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_getacl() Subroutine
Name
gpfs_getacl() - Retrieves the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_getacl(char *pathname, int flags, void *aclP);
Description
The gpfs_getacl() subroutine, together with the gpfs_putacl() subroutine, is intended for use by a backup program to save (gpfs_getacl()) and restore (gpfs_putacl()) the ACL information for the file.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
pathname
  The path identifying the file for which the ACLs are being obtained.
flags
  Consists of one of these values:
  0
    Indicates that the aclP parameter is to be mapped with the gpfs_opaque_acl_t struct. The gpfs_opaque_acl_t struct should be used by backup and restore programs.
  GPFS_GETACL_STRUCT
    Indicates that the aclP parameter is to be mapped with the gpfs_acl_t struct. The gpfs_acl_t struct is provided for applications that need to interpret the ACL.
aclP
  Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the value of flags. The first four bytes of the buffer must contain its total size.
Exit status
If the gpfs_getacl() subroutine is successful, it returns a value of 0. If the gpfs_getacl() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EINVAL
  The path name does not refer to a GPFS file or a regular file.
ENOENT
  The file does not exist.
ENOSPC
  The buffer is too small to return the entire ACL. The required buffer size is returned in the first four bytes of the buffer pointed to by aclP.
ENOSYS
  The gpfs_getacl() subroutine is not supported under the current file system format.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iattr_t Structure
Name
gpfs_iattr_t - Contains attributes of a GPFS inode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_iattr
{
  int ia_version;             /* this struct's version */
  int ia_reclen;              /* sizeof this structure */
  int ia_checksum;            /* validity check on iattr struct */
  gpfs_mode_t ia_mode;        /* access mode */
  gpfs_uid_t ia_uid;          /* owner uid */
  gpfs_gid_t ia_gid;          /* owner gid */
  gpfs_ino_t ia_inode;        /* file inode number */
  gpfs_gen_t ia_gen;          /* inode generation number */
  short ia_nlink;             /* number of links */
  short ia_flags;             /* flags (defined below) */
  int ia_blocksize;           /* preferred block size for io */
  gpfs_mask_t ia_mask;        /* initial attribute mask - not used */
  gpfs_off64_t ia_size;       /* file size in bytes */
  gpfs_off64_t ia_blocks;     /* 512 byte blocks of disk held by file */
  gpfs_timestruc_t ia_atime;  /* time of last access */
  gpfs_timestruc_t ia_mtime;  /* time of last data modification */
  gpfs_timestruc_t ia_ctime;  /* time of last status change */
  gpfs_dev_t ia_rdev;         /* ID of device */
  int ia_xperm;               /* non-zero if file has extended acl */
  int ia_modsnapid;           /* internal snapshot ID indicating the
                                 last time that the file was modified */
  unsigned int ia_filesetid;  /* fileset ID */
  unsigned int ia_datapoolid; /* storage pool ID for data */
} gpfs_iattr_t;

/* Define flags for inode attributes */
#define GPFS_IAFLAG_SNAPDIR         0x0001 /* (obsolete) */
#define GPFS_IAFLAG_USRQUOTA        0x0002 /* inode is a user quota file */
#define GPFS_IAFLAG_GRPQUOTA        0x0004 /* inode is a group quota file */
#define GPFS_IAFLAG_ERROR           0x0008 /* error reading inode */

/* Define flags for inode replication attributes */
#define GPFS_IAFLAG_REPLMETA        0x0200 /* metadata replication set */
#define GPFS_IAFLAG_REPLDATA        0x0400 /* data replication set */
#define GPFS_IAFLAG_EXPOSED         0x0800 /* may have data on suspended disks */
#define GPFS_IAFLAG_ILLREPLICATED   0x1000 /* maybe not properly replicated */
#define GPFS_IAFLAG_UNBALANCED      0x2000 /* maybe not properly balanced */
#define GPFS_IAFLAG_DATAUPDATEMISS  0x4000 /* has stale data blocks on
                                              unavailable disk */
#define GPFS_IAFLAG_METAUPDATEMISS  0x8000 /* has stale metadata on
                                              unavailable disk */
#define GPFS_IAFLAG_ILLPLACED       0x0100 /* may not be properly placed */
#define GPFS_IAFLAG_FILESET_ROOT    0x0010 /* root dir of a fileset */
#define GPFS_IAFLAG_NO_SNAP_RESTORE 0x0020 /* don't restore from snapshots */
#define GPFS_IAFLAG_FILESETQUOTA    0x0040 /* inode is a fileset quota file */
/* Define flags for extended attributes */
#define GPFS_IAXPERM_ACL     0x0001 /* file acls */
#define GPFS_IAXPERM_XATTR   0x0002 /* file extended attributes */
#define GPFS_IAXPERM_DMATTR  0x0004 /* file dm attributes */
#define GPFS_IAXPERM_DOSATTR 0x0008 /* file non-default dos attrs */
#define GPFS_IAXPERM_RPATTR  0x0010 /* file restore policy attrs */
Description
The gpfs_iattr_t structure contains the various attributes of a GPFS inode.
Members
ia_version
  The version number of this structure.
ia_reclen
  The size of this structure.
ia_checksum
  The checksum for this gpfs_iattr structure.
ia_mode
  The access mode for this inode.
ia_uid
  The owner user ID for this inode.
ia_gid
  The owner group ID for this inode.
ia_inode
  The file inode number.
ia_gen
  The inode generation number.
ia_nlink
  The number of links for this inode.
ia_flags
  The flags (defined above) for this inode.
ia_blocksize
  The preferred block size for I/O.
ia_mask
  The initial attribute mask (not used).
ia_size
  The file size in bytes.
ia_blocks
  The number of 512 byte blocks of disk held by the file.
ia_atime
  The time of last access.
ia_mtime
  The time of last data modification.
ia_ctime
  The time of last status change.
ia_rdev
  The ID of the device.
ia_xperm
  Indicator - nonzero if the file has an extended ACL.
ia_modsnapid
  Internal snapshot ID indicating the last time that the file was modified. Internal snapshot IDs for the current snapshots are displayed by the mmlssnapshot command.
ia_filesetid
  The fileset ID for the inode.
ia_datapoolid
  The storage pool ID for data for the inode.
Examples
For an example using gpfs_iattr_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
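A short sketch that walks an inode scan and reports files flagged as possibly ill-replicated; iscan and the maxIno scan limit are assumed to come from gpfs_open_inodescan() (page 355), and gpfs_next_inode() is described on page 352:

const gpfs_iattr_t *iattrP;

while (gpfs_next_inode(iscan, maxIno, &iattrP) == 0 && iattrP != NULL) {
    if (iattrP->ia_flags & GPFS_IAFLAG_ILLREPLICATED)
        printf("inode %d may not be properly replicated\n",
               (int)iattrP->ia_inode);
}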
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iclose() Subroutine
Name
gpfs_iclose() - Closes a file given its inode file handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_iclose(gpfs_ifile_t *ifile);
Description
The gpfs_iclose() subroutine closes an open file descriptor created by gpfs_iopen(). For an overview of using gpfs_iclose() in a backup application, see Using APIs to develop backup applications on page 27.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
ifile Pointer to gpfs_ifile_t from gpfs_iopen().
Exit status
If the gpfs_iclose() subroutine is successful, it returns a value of 0. If the gpfs_iclose() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_iclose() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ESTALE
  Cached file system information was not valid.
Examples
For an example using gpfs_iclose(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ifile_t Structure
Name
gpfs_ifile_t - Contains a handle for a GPFS inode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_ifile gpfs_ifile_t;
Description
The gpfs_ifile_t structure contains a handle for the file of a GPFS inode.
Members
gpfs_ifile The handle for the file of a GPFS inode.
Examples
For an example using gpfs_ifile_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetattrs() Subroutine
Name
gpfs_igetattrs() - Retrieves all extended file attributes in opaque format.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetattrs(gpfs_ifile_t *ifile, void *buffer, int bufferSize,
                   int *attrSize);
Description
The gpfs_igetattrs() subroutine retrieves all extended file attributes in opaque format. This subroutine is intended for use by a backup program to save all extended file attributes (ACLs, DMAPI attributes, and so forth) in one invocation. If the file does not have any extended attributes, the subroutine sets attrSize to zero.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
ifile
  Pointer to gpfs_ifile_t from gpfs_iopen().
buffer
  Pointer to buffer for returned attributes.
bufferSize
  Size of the buffer.
attrSize
  Pointer to returned size of attributes.
Exit status
If the gpfs_igetattrs() subroutine is successful, it returns a value of 0. If the gpfs_igetattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_igetattrs() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ERANGE
  The buffer is too small to return all attributes. Field *attrSizeP will be set to the size necessary.
ESTALE
  Cached file system information was not valid.
GPFS_E_INVAL_IFILE
  Incorrect ifile parameters.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetfilesetname() Subroutine
Name
gpfs_igetfilesetname() - Returns the name of the fileset defined by a fileset ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetfilesetname(gpfs_iscan_t *iscan, unsigned int filesetId,
                         void *buffer, int bufferSize);
Description
The gpfs_igetfilesetname() subroutine is part of the backup by inode interface. The caller provides a pointer to the scan descriptor used to obtain the fileset ID. This library routine will return the name of the fileset defined by the fileset ID. The name is the null-terminated string provided by the administrator when the fileset was defined. The maximum string length is defined by GPFS_MAXNAMLEN.

Note:
1. This routine is not thread safe. Only one thread at a time is allowed to invoke this routine for the given scan descriptor.
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
iscan
  Pointer to gpfs_iscan_t used to obtain the fileset ID.
filesetId
  The fileset ID.
buffer
  Pointer to buffer for returned attributes.
bufferSize
  Size of the buffer.
Exit status
If the gpfs_igetfilesetname() subroutine is successful, it returns a value of 0. If the gpfs_igetfilesetname() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_igetfilesetname() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ERANGE
  The buffer is too small to return all attributes. Field *attrSizeP will be set to the size necessary.
Examples
This programming segment gets the fileset name based on the given fileset ID. The returned fileset name is stored in FileSetNameBuffer, which has a length of FileSetNameSize.
gpfs_iscan_t *fsInodeScanP;   /* set earlier by gpfs_open_inodescan() */
gpfs_igetfilesetname(fsInodeScanP, FileSetId,
                     FileSetNameBuffer, FileSetNameSize);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetstoragepool() Subroutine
Name
gpfs_igetstoragepool() - Returns the name of the storage pool for the given storage pool ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetstoragepool(gpfs_iscan_t *iscan, unsigned int dataPoolId,
                         void *buffer, int bufferSize);
Description
The gpfs_igetstoragepool() subroutine is part of the backup by inode interface. The caller provides a pointer to the scan descriptor used to obtain the storage pool ID. This routine returns the name of the storage pool for the given storage pool ID. The name is the null-terminated string provided by the administrator when the storage pool was defined. The maximum string length is defined by GPFS_MAXNAMLEN.

Note:
1. This routine is not thread safe. Only one thread at a time is allowed to invoke this routine for the given scan descriptor.
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
   v libgpfs.a for AIX
   v libgpfs.so for Linux
Parameters
iscan
  Pointer to gpfs_iscan_t used to obtain the storage pool ID.
dataPoolId
  The storage pool ID.
buffer
  Pointer to buffer for returned attributes.
bufferSize
  Size of the buffer.
Exit status
If the gpfs_igetstoragepool() subroutine is successful, it returns a value of 0. If the gpfs_igetstoragepool() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS
  The gpfs_igetstoragepool() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ERANGE
  The buffer is too small to return all attributes. Field *attrSizeP will be set to the size necessary.
Examples
This programming segment gets the storage pool name based on the given storage pool ID. The returned storage pool name is stored in StgpoolNameBuffer, which has a length of StgpoolNameSize.

gpfs_iscan_t *fsInodeScanP;   /* set earlier by gpfs_open_inodescan() */
gpfs_igetstoragepool(fsInodeScanP, StgpoolId,
                     StgpoolNameBuffer, StgpoolNameSize);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iopen() Subroutine
Name
gpfs_iopen() - Opens a file or directory by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_ifile_t *gpfs_iopen(gpfs_fssnap_handle_t *fssnapHandle,
                         gpfs_ino_t ino, int open_flags,
                         const gpfs_iattr_t *statxbuf,
                         const char *symlink);
Description
The gpfs_iopen() subroutine opens a user file or directory for backup. The file is identified by its inode number ino within the file system or snapshot identified by the fssnapHandle. The fssnapHandle parameter must be the same one that was used to create the inode scan that returned the inode number ino.

To read the file or directory, the open_flags must be set to GPFS_O_BACKUP. The statxbuf and symlink parameters are reserved for future use and must be set to NULL.

For an overview of using gpfs_iopen() in a backup application, see Using APIs to develop backup applications on page 27.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
fssnapHandle
  File system snapshot handle.
ino
  Inode number.
open_flags
  GPFS_O_BACKUP
    Read files for backup.
  O_RDONLY
    For gpfs_iread().
statxbuf
  This parameter is reserved for future use and should always be set to NULL.
symlink
  This parameter is reserved for future use and should always be set to NULL.
Exit status
If the gpfs_iopen() subroutine is successful, it returns a pointer to the inode's file handle.
If the gpfs_iopen() subroutine is unsuccessful, it returns NULL and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
EINVAL
  Missing or incorrect parameter.
ENOMEM
  Unable to allocate memory for request.
ENOSYS
  The gpfs_iopen() subroutine is not available.
EPERM
  The caller does not have superuser privileges.
ESTALE
  Cached file system information was not valid.
Examples
For an example using gpfs_iopen(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
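In outline (a sketch; fssnapHandle and ino are assumed to come from an inode scan over the same handle):

gpfs_ifile_t *ifileP;

ifileP = gpfs_iopen(fssnapHandle, ino, GPFS_O_BACKUP, NULL, NULL);
if (ifileP == NULL)
    perror("gpfs_iopen");
else {
    /* ... read with gpfs_iread() or gpfs_ireaddir() ... */
    gpfs_iclose(ifileP);
}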
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iread() Subroutine
Name
gpfs_iread() - Reads a file opened by gpfs_iopen().
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_iread(gpfs_ifile_t *ifile, void *buffer, int bufferSize,
               gpfs_off64_t *offset);
Description
The gpfs_iread() subroutine reads data from the file indicated by the ifile parameter returned from gpfs_iopen(). This subroutine reads data beginning at parameter offset and continuing for bufferSize bytes into the buffer specified by buffer. If successful, the subroutine returns a value that is the length of the data read, and sets parameter offset to the offset of the next byte to be read. A return value of 0 indicates end-of-file.

For an overview of using gpfs_iread() in a backup application, see Using APIs to develop backup applications on page 27.

Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
v libgpfs.a for AIX
v libgpfs.so for Linux
Parameters
ifile
  Pointer to gpfs_ifile_t from gpfs_iopen().
buffer
  Buffer for the data to be read.
bufferSize
  Size of the buffer (that is, the amount of data to be read).
offset
  Offset of where within the file to read. If gpfs_iread() is successful, offset is updated to the next byte after the last one that was read.
Exit status
If the gpfs_iread() subroutine is successful, it returns the number of bytes read. If the gpfs_iread() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EISDIR The specified file is a directory. ENOSYS The gpfs_iread() subroutine is not available. EPERM The caller does not have superuser privileges. ESTALE Cached file system information was not valid. GPFS_E_INVAL_IFILE Incorrect ifile parameter.
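Examples

The following is a minimal sketch of the read loop, assuming the file was opened with gpfs_iopen() and that the data is simply copied to a stdio stream:

#include <stdio.h>
#include <gpfs.h>

/* Copy all data from a file opened by gpfs_iopen() to a backup
   stream. Returns 0 on success, -1 on error (errno is set). */
int backupFileData(gpfs_ifile_t *ifileP, FILE *outP)
{
  char buffer[64 * 1024];
  gpfs_off64_t offset = 0;    /* advanced by gpfs_iread() */
  int nRead;

  while ((nRead = gpfs_iread(ifileP, buffer, sizeof(buffer), &offset)) > 0)
  {
    if (fwrite(buffer, 1, nRead, outP) != (size_t)nRead)
      return -1;
  }
  return (nRead < 0) ? -1 : 0;   /* 0 means end-of-file was reached */
}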
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireaddir() Subroutine
Name
gpfs_ireaddir() - Reads the next directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_ireaddir(gpfs_ifile_t *idir, const gpfs_direntx_t **dirent);
Description
The gpfs_ireaddir() subroutine returns the next directory entry in a file system. For an overview of using gpfs_ireaddir() in a backup application, see Using APIs to develop backup applications on page 27. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
idir Pointer to gpfs_ifile_t from gpfs_iopen(). dirent Pointer to returned pointer to directory entry.
Exit status
If the gpfs_ireaddir() subroutine is successful, it returns a value of 0 and sets the dirent parameter to point to the returned directory entry. If there are no more GPFS directory entries, gpfs_ireaddir() returns a value of 0 and sets the dirent parameter to NULL. If the gpfs_ireaddir() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOMEM Unable to allocate memory for request. ENOSYS The gpfs_ireaddir() subroutine is not available. ENOTDIR File is not a directory. EPERM The caller does not have superuser privileges. ESTALE The cached file system information was not valid.
Examples
For an example using gpfs_ireaddir(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
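The following minimal sketch iterates over a directory opened by gpfs_iopen(); the gpfs_direntx_t member names d_ino and d_name are assumptions to be checked against gpfs.h:

#include <stdio.h>
#include <gpfs.h>

/* Print every entry of a directory opened by gpfs_iopen(). */
int listDirectory(gpfs_ifile_t *idirP)
{
  const gpfs_direntx_t *direntP;

  for (;;)
  {
    if (gpfs_ireaddir(idirP, &direntP) != 0)
      return -1;              /* errno describes the failure */
    if (direntP == NULL)
      return 0;               /* no more directory entries */
    printf("inode %d name %s\n", (int)direntP->d_ino, direntP->d_name);
  }
}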
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireadlink() Subroutine
Name
gpfs_ireadlink() - Reads a symbolic link by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_ireadlink(gpfs_fssnap_handle_t *fssnapHandle, gpfs_ino_t ino, char *buffer, int bufferSize);
Description
The gpfs_ireadlink() subroutine reads a symbolic link by inode number. As with gpfs_iopen(), use the same fssnapHandle parameter that was used to create the inode scan. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle. ino inode number of the link file to read.
buffer Pointer to buffer for the returned link data. bufferSize Size of the buffer.
Exit status
If the gpfs_ireadlink() subroutine is successful, it returns the number of bytes read. If the gpfs_ireadlink() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS The gpfs_ireadlink() subroutine is not available. EPERM The caller does not have superuser privileges. ERANGE The buffer is too small to return the symbolic link.
ESTALE Cached file system information was not valid. GPFS_E_INVAL_FSSNAPHANDLE The file system snapshot handle is not valid.
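Examples

A minimal sketch, assuming fssnapHandle and ino come from the enclosing inode scan as described above:

char linkBuf[4096];
int linkLen;

linkLen = gpfs_ireadlink(fssnapHandle, ino, linkBuf, sizeof(linkBuf));
if (linkLen < 0)
  perror("gpfs_ireadlink");   /* ERANGE means linkBuf is too small */
else
  printf("link target: %.*s\n", linkLen, linkBuf);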
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireadx() Subroutine
Name
gpfs_ireadx() - Performs block level incremental read of a file within an incremental inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> gpfs_off64_t gpfs_ireadx(gpfs_ifile_t *ifile, gpfs_iscan_t *iscan, void *buffer, int bufferSize, gpfs_off64_t *offset, gpfs_off64_t termOffset, int *hole);
Description
The gpfs_ireadx() subroutine performs a block level incremental read on a file opened by gpfs_iopen() within a given incremental scan opened using gpfs_open_inodescan(). For an overview of using gpfs_ireadx() in a backup application, see Using APIs to develop backup applications on page 27. The gpfs_ireadx() subroutine returns the data that has changed since the prev_fssnapId specified for the inode scan. The file is scanned starting at offset and terminating at termOffset, looking for changed data. Once changed data is located, the offset parameter is set to its location, the new data is returned in the buffer provided, and the amount of data returned is the subroutine's return value. If the change to the data is that it has been deleted (that is, the file has been truncated), no data is returned, but the hole parameter is returned with a value of 1, and the size of the hole is returned as the subroutine's return value. The returned size of the hole may exceed the bufferSize provided. If no changed data was found before reaching the termOffset or the end-of-file, then the gpfs_ireadx() subroutine return value is 0. Block level incremental backups are available only if the previous snapshot was not deleted. If it was deleted, gpfs_ireadx() may still be used, but it returns all of the file's data, operating like the standard gpfs_iread() subroutine. However, the gpfs_ireadx() subroutine will still identify sparse files and explicitly return information on holes in the files, rather than returning the zero-filled data they contain. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
ifile Pointer to gpfs_ifile_t returned from gpfs_iopen().
iscan Pointer to gpfs_iscan_t from gpfs_open_inodescan().
buffer Pointer to buffer for returned data, or NULL to query the next increment to be read. bufferSize Size of buffer for returned data.
offset On input, the offset to start the scan for changes. On output, the offset of the changed data, if any was detected. termOffset Read terminates before reading this offset. The caller may specify ia_size from the file's gpfs_iattr_t or 0 to scan the entire file. hole Pointer to a flag returned to indicate a hole in the file. A value of 0 indicates that the gpfs_ireadx() subroutine returned data in the buffer. A value of 1 indicates that gpfs_ireadx() encountered a hole at the returned offset.
Exit status
If the gpfs_ireadx() subroutine is successful, it returns the number of bytes read into the buffer, or the size of the hole encountered in the file. If the gpfs_ireadx() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EDOM The file system snapshot ID from the iscanId does not match the ifile's. EINVAL Missing or incorrect parameter. EISDIR The specified file is a directory. ENOMEM Unable to allocate memory for request. ENOSYS The gpfs_ireadx() subroutine is not available. EPERM The caller does not have superuser privileges. ERANGE The file system snapshot ID from the iscanId is more recent than the ifile's. ESTALE Cached file system information was not valid. GPFS_E_INVAL_IFILE Incorrect ifile parameter. GPFS_E_INVAL_ISCAN Incorrect iscan parameter.
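Examples

A minimal sketch of the incremental read loop described above; the helpers recordHole() and writeChangedData() are hypothetical, and the convention of advancing offset past each returned extent should be verified against the tsbackup.C sample:

#include <gpfs.h>

/* Back up only the data that changed since the previous snapshot. */
int backupChangedData(gpfs_ifile_t *ifileP, gpfs_iscan_t *iscanP,
                      void *bufP, int bufSize, gpfs_off64_t fileSize)
{
  gpfs_off64_t offset = 0;
  gpfs_off64_t nReturned;
  int hole;

  while ((nReturned = gpfs_ireadx(ifileP, iscanP, bufP, bufSize,
                                  &offset, fileSize, &hole)) > 0)
  {
    if (hole)
      recordHole(offset, nReturned);              /* hypothetical */
    else
      writeChangedData(offset, bufP, nReturned);  /* hypothetical */
    offset += nReturned;    /* continue the scan past this extent */
  }
  return (nReturned < 0) ? -1 : 0;
}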
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iscan_t Structure
Name
gpfs_iscan_t - Contains a handle for an inode scan of a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_iscan gpfs_iscan_t;
Description
The gpfs_iscan_t structure contains a handle for an inode scan of a GPFS file system or snapshot.
Members
gpfs_iscan The handle for an inode scan for a GPFS file system or snapshot.
Examples
For an example using gpfs_iscan_t, see /usr/lpp/mmfs/samples/util/tsbackup.C.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_inode() Subroutine
Name
gpfs_next_inode() - Retrieves the next inode from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_next_inode(gpfs_iscan_t *iscan, gpfs_ino_t termIno, const gpfs_iattr_t **iattr);
Description
The gpfs_next_inode() subroutine obtains the next inode from the specified inode scan and sets the iattr pointer to the inode's attributes. The termIno parameter can be used to terminate the inode scan before the last inode in the file system or snapshot being scanned. A value of 0 may be provided to indicate the last inode in the file system or snapshot. If there are no more inodes to be returned before the termination inode, the gpfs_next_inode() subroutine returns a value of 0 and the inode's attribute pointer is set to NULL. For an overview of using gpfs_next_inode() in a backup application, see Using APIs to develop backup applications on page 27. To generate a full backup, invoke gpfs_open_inodescan() with NULL for the prev_fssnapId parameter. Repeated invocations of gpfs_next_inode() then return inode information about all existing user files, directories and links, in inode number order. To generate an incremental backup, invoke gpfs_open_inodescan() with the fssnapId that was obtained from a fssnapHandle at the time the previous backup was created. The snapshot that was used for the previous backup does not need to exist at the time the incremental backup is generated. That is, the backup application needs to remember only the fssnapId of the previous backup; the snapshot itself can be deleted as soon as the backup is completed. For an incremental backup, only inodes of files that have changed since the specified previous snapshot will be returned. Any operation that changes the file's mtime or ctime is considered a change and will cause the file to be included. Files with no changes to the file's data or file attributes, other than a change to atime, are omitted from the scan. Incremental backups return deleted files, but full backups do not. A deleted file is indicated by the field ia_nlinks having a value of 0. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
iscan Pointer to the inode scan handle. termIno The inode scan terminates before this inode number. The caller may specify maxIno from gpfs_open_inodescan() or zero to scan the entire inode file.
iattr Pointer to the returned pointer to the inode's attributes.
Exit status
If the gpfs_next_inode() subroutine is successful, it returns a value of 0 and sets the iattr pointer. The pointer is set to NULL if there are no more inodes; otherwise, it points to the returned inode's attributes. If the gpfs_next_inode() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOMEM Unable to allocate memory for request. ENOSYS The gpfs_next_inode() subroutine is not available. EPERM The caller does not have superuser privileges. ESTALE Cached file system information was not valid. GPFS_E_INVAL_ISCAN Incorrect parameters.
Examples
For an example using gpfs_next_inode(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
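The following minimal sketch shows a full scan (prev_fssnapId of NULL); the gpfs_iattr_t member names ia_inode and ia_nlink are assumptions to be checked against gpfs.h:

#include <stdio.h>
#include <gpfs.h>

/* Walk every user inode in the file system or snapshot. */
int scanAllInodes(gpfs_fssnap_handle_t *fssnapHandle)
{
  gpfs_iscan_t *iscanP;
  const gpfs_iattr_t *iattrP;

  iscanP = gpfs_open_inodescan(fssnapHandle, NULL, NULL);
  if (iscanP == NULL)
    return -1;

  /* A termIno of 0 scans to the last inode */
  while (gpfs_next_inode(iscanP, 0, &iattrP) == 0 && iattrP != NULL)
    printf("inode %d links %d\n",
           (int)iattrP->ia_inode, (int)iattrP->ia_nlink);

  gpfs_close_inodescan(iscanP);
  return 0;
}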
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_opaque_acl_t Structure
Name
gpfs_opaque_acl_t Contains buffer mapping for the gpfs_getacl() and gpfs_putacl() subroutines.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int acl_buffer_len; unsigned short acl_version; unsigned char acl_type; char acl_var_data[1]; } gpfs_opaque_acl_t;
Description
The gpfs_opaque_acl_t structure contains size, version, and ACL type information for the gpfs_getacl() and gpfs_putacl() subroutines.
Members
acl_buffer_len On input, this field must be set to the total length, in bytes, of the data structure being passed to GPFS. On output, this field contains the actual size of the requested information. If the initial size of the buffer is not large enough to contain all of the information, the gpfs_getacl() invocation must be repeated with a larger buffer. acl_version This field contains the current version of the GPFS internal representation of the ACL. On input to the gpfs_getacl() subroutine, set this field to zero. acl_type On input to the gpfs_getacl() subroutine, set this field to either GPFS_ACL_TYPE_ACCESS or GPFS_ACL_TYPE_DEFAULT, depending on which ACL is requested. These constants are defined in the gpfs.h header file. acl_var_data This field signifies the beginning of the remainder of the ACL information.
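Examples

A minimal sketch of the save half of the save/restore sequence these fields support; the initial buffer size is illustrative, and the exact failure indication for a too-small buffer should be checked against gpfs_getacl():

#include <stdlib.h>
#include <gpfs.h>

/* Fetch the access ACL of pathP in opaque form, retrying once with
   the size reported in acl_buffer_len; caller frees the result. */
gpfs_opaque_acl_t *saveAcl(char *pathP)
{
  int len = 512;    /* illustrative first guess */
  int tries;
  gpfs_opaque_acl_t *aclP;

  for (tries = 0; tries < 2; tries++)
  {
    aclP = calloc(1, len);
    if (aclP == NULL)
      return NULL;
    aclP->acl_buffer_len = len;
    aclP->acl_version = 0;
    aclP->acl_type = GPFS_ACL_TYPE_ACCESS;

    if (gpfs_getacl(pathP, 0, aclP) == 0)
      return aclP;    /* restore later with gpfs_putacl(pathP, 0, aclP) */

    len = aclP->acl_buffer_len;   /* size actually required */
    free(aclP);
  }
  return NULL;
}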
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_open_inodescan() Subroutine
Name
gpfs_open_inodescan() Opens an inode scan of a file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> gpfs_iscan_t *gpfs_open_inodescan (gpfs_fssnap_handle_t *fssnapHandle, const gpfs_fssnap_id_t *prev_fssnapId, gpfs_ino_t *maxIno);
Description
The gpfs_open_inodescan() subroutine opens a scan of the inodes in the file system or snapshot identified by the fssnapHandle parameter. The scan traverses all user files, directories and links in the file system or snapshot. The scan begins with the user file with the lowest inode number and returns the files in increasing order. The gpfs_seek_inode() subroutine may be used to set the scan position to an arbitrary inode. System files, such as the block allocation maps, are omitted from the scan. The file system must be mounted to open an inode scan. For an overview of using gpfs_open_inodescan() in a backup application, see Using APIs to develop backup applications on page 27. To generate a full backup, invoke gpfs_open_inodescan() with NULL for the prev_fssnapId parameter. Repeated invocations of gpfs_next_inode() then return inode information about all existing user files, directories and links, in inode number order. To generate an incremental backup, invoke gpfs_open_inodescan() with the fssnapId that was obtained from a fssnapHandle at the time the previous backup was created. The snapshot that was used for the previous backup does not need to exist at the time the incremental backup is generated. That is, the backup application needs to remember only the fssnapId of the previous backup; the snapshot itself can be deleted as soon as the backup is completed. For the incremental backup, any operation that changes the file's mtime or ctime causes the file to be included. Files with no changes to the file's data or file attributes, other than a change to atime, are omitted from the scan. A full inode scan (prev_fssnapId set to NULL) does not return any inodes of nonexistent or deleted files, but an incremental inode scan (prev_fssnapId not NULL) does return inodes for files that have been deleted since the previous snapshot. The inodes of deleted files have a link count of zero. If the snapshot indicated by prev_fssnapId is available, the caller may benefit from the extended read subroutine, gpfs_ireadx(), which returns only the changed blocks within the files. Without the previous snapshot, all blocks within the changed files are returned. Once a full or incremental backup completes, the new_fssnapId must be saved in order to reuse it on a subsequent incremental backup. This fssnapId must be provided to the gpfs_open_inodescan() subroutine, as the prev_fssnapId input parameter. Note:
1. Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
fssnapHandle File system snapshot handle. prev_fssnapId Pointer to file system snapshot ID or NULL. If prev_fssnapId is provided, the inode scan returns only the files that have changed since the previous backup. If the pointer is NULL, the inode scan returns all user files. maxIno Pointer to inode number or NULL. If provided, gpfs_open_inodescan() returns the maximum inode number in the file system or snapshot being scanned.
Exit status
If the gpfs_open_inodescan() subroutine is successful, it returns a pointer to an inode scan handle. If the gpfs_open_inodescan() subroutine is unsuccessful, it returns a NULL pointer and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
EDOM The file system snapshot ID passed for prev_fssnapId is from a different file system. EINVAL Incorrect parameters. ENOMEM Unable to allocate memory for request. ENOSYS The gpfs_open_inodescan() subroutine is not available. EPERM The caller does not have superuser privileges. ERANGE The prev_fssnapId parameter is the same as or more recent than snapId being scanned. ESTALE Cached file system information was not valid. GPFS_E_INVAL_FSSNAPHANDLE The file system snapshot handle is not valid. GPFS_E_INVAL_FSSNAPID The file system snapshot ID passed for prev_fssnapId is not valid.
Examples
For an example using gpfs_open_inodescan(), see /usr/lpp/mmfs/samples/util/tsbackup.C.
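The following fragment sketches the incremental pattern described above, assuming prevFssnapIdP holds the fssnapId saved by the previous backup (obtained, for example, with gpfs_get_fssnapid_from_fssnaphandle()):

gpfs_ino_t maxIno;
gpfs_iscan_t *iscanP;
gpfs_fssnap_id_t newFssnapId;

/* A NULL prevFssnapIdP would request a full scan instead. */
iscanP = gpfs_open_inodescan(fssnapHandle, prevFssnapIdP, &maxIno);
if (iscanP == NULL)
  perror("gpfs_open_inodescan");

/* Save the current snapshot ID; it becomes prev_fssnapId for the
   next incremental backup. */
if (gpfs_get_fssnapid_from_fssnaphandle(fssnapHandle, &newFssnapId) != 0)
  perror("gpfs_get_fssnapid_from_fssnaphandle");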
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_prealloc() Subroutine
Name
gpfs_prealloc() Pre-allocates disk storage for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_prealloc(int fileDesc, offset_t StartOffset, offset_t BytesToPrealloc)
Description
The gpfs_prealloc() subroutine is used to preallocate disk storage for a file that has already been opened, prior to writing data to the file. The pre-allocated disk storage is started at the requested offset, StartOffset, and covers at least the number of bytes requested, BytesToPrealloc. Allocations are rounded to GPFS sub-block boundaries. Pre-allocating disk space for a file provides an efficient method for allocating storage without having to write any data. This can result in faster I/O compared to a file which gains disk space incrementally as it grows. Existing data in the file is not modified. Reading any of the pre-allocated blocks returns zeroes. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
fileDesc An integer specifying the file descriptor returned by open(). The file designated for preallocation must be opened for writing. StartOffset The byte offset into the file at which to begin pre-allocation. BytesToPrealloc The number of bytes to be pre-allocated.
Exit status
If the gpfs_prealloc() subroutine is successful, it returns a value of 0. If the gpfs_prealloc() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error. If errno is set to one of the following, some storage may have been pre-allocated: v EDQUOT v EFBIG v ENOSPC v ENOTREADY
The only way to tell how much space was actually pre-allocated is to invoke the stat() subroutine and compare the reported file size and number of blocks used with their values prior to preallocation.
Exceptions
None.
Error status
EACCES The file is not opened for writing. EBADF The file descriptor is not valid. EDQUOT A disk quota has been exceeded. EFBIG The file has become too large for the file system or has exceeded the file size as defined by the user's ulimit value. EINVAL The file descriptor does not refer to a GPFS file or a regular file; a negative value was specified for StartOffset or BytesToPrealloc. ENOTREADY The file system on which the file resides has become unavailable. ENOSPC The file system has run out of disk space. ENOSYS The gpfs_prealloc() subroutine is not supported under the current file system format.
Examples
#include <stdio.h>
#include <stdlib.h>   /* for exit() */
#include <fcntl.h>
#include <errno.h>
#include <gpfs.h>

int rc;
int fileHandle = -1;
char* fileNameP = "datafile";
offset_t startOffset = 0;
offset_t bytesToAllocate = 20*1024*1024;   /* 20 MB */

fileHandle = open(fileNameP, O_RDWR|O_CREAT, 0644);
if (fileHandle < 0)
{
  perror(fileNameP);
  exit(1);
}
rc = gpfs_prealloc(fileHandle, startOffset, bytesToAllocate);
if (rc < 0)
{
  fprintf(stderr, "Error %d preallocation at %lld for %lld in %s\n",
          errno, startOffset, bytesToAllocate, fileNameP);
  exit(1);
}
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_putacl() Subroutine
Name
gpfs_putacl() Restores the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_putacl(char *pathname, int flags, void *aclP)
Description
The gpfs_putacl() subroutine together with the gpfs_getacl() subroutine is intended for use by a backup program to save (gpfs_getacl()) and restore (gpfs_putacl()) the ACL information for the file. Note: 1. The use of gpfs_fgetattrs() and gpfs_fputattrs() is preferred. 2. You must have write access to the file. 3. Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
pathname Path name of the file for which the ACL is to be set. flags Consists of one of these values: 0 Indicates that the aclP parameter is to be mapped with the gpfs_opaque_acl_t struct. The gpfs_opaque_acl_t struct should be used by backup and restore programs. GPFS_PUTACL_STRUCT Indicates that the aclP parameter is to be mapped with the gpfs_acl_t struct. The gpfs_acl_t struct is provided for applications that need to change the ACL. aclP Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the value of flags. This is where the ACL data is stored, and should be the result of a previous invocation of gpfs_getacl().
Exit status
If the gpfs_putacl() subroutine is successful, it returns a value of 0. If the gpfs_putacl() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS The gpfs_putacl() subroutine is not supported under the current file system format.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_quotactl() Subroutine
Name
gpfs_quotactl() Manipulates disk quotas on file systems.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_quotactl(char *pathname, int cmd, int id, void *bufferP);
Description
The gpfs_quotactl() subroutine manipulates disk quotas. It enables, disables, and manipulates disk quotas for file systems on which quotas have been enabled. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
pathname Specifies the path name of any file within the mounted file system to which the quota control command is to be applied. cmd Specifies the quota control command to be applied and whether it is applied to a user, group, or fileset quota. The cmd parameter can be constructed using GPFS_QCMD(qcmd, Type) contained in gpfs.h. The qcmd parameter specifies the quota control command. The Type parameter specifies one of the following quota types: v user (GPFS_USRQUOTA) v group (GPFS_GRPQUOTA) v fileset (GPFS_FILESETQUOTA) The valid values for the qcmd parameter specified in gpfs.h are: Q_QUOTAON Enables quotas. Enables disk quotas for the file system specified by the pathname parameter and type specified in Type. The id and bufferP parameters are unused. Root user authority is required to enable quotas. Q_QUOTAOFF Disables quotas. Disables disk quotas for the file system specified by the pathname parameter and type specified in Type. The id and bufferP parameters are unused. Root user authority is required to disable quotas. Q_GETQUOTA Gets quota limits and usage information.
Retrieves quota limits and current usage for a user, group, or fileset specified by the id parameter. The bufferP parameter points to a gpfs_quotaInfo_t structure to hold the returned information. The gpfs_quotaInfo_t structure is defined in gpfs.h. Root authority is required if the id value is not the current id (user id for GPFS_USRQUOTA, group id for GPFS_GRPQUOTA) of the caller. Q_SETQUOTA Sets quota limits. Sets disk quota limits for a user, group, or fileset specified by the id parameter. The bufferP parameter points to a gpfs_quotaInfo_t structure containing the new quota limits. The gpfs_quotaInfo_t structure is defined in gpfs.h. Root user authority is required to set quota limits. Q_SETUSE Sets quota usage. Sets disk quota usage for a user, group, or fileset specified by the id parameter. The bufferP parameter points to a gpfs_quotaInfo_t structure containing the new quota usage. The gpfs_quotaInfo_t structure is defined in gpfs.h. Root user authority is required to set quota usage. Q_SYNC Synchronizes the disk copy of a file system quota. Updates the on disk copy of quota usage information for a file system. The id and bufferP parameters are unused. Root user authority is required to synchronize a file system quota. id Specifies the user, group, or fileset ID to which the quota control command applies. The id parameter is interpreted by the specified quota type. bufferP Points to the address of an optional, command-specific data structure that is copied in or out of the system.
Exit status
If the gpfs_quotactl() subroutine is successful, it returns a value of 0. If the gpfs_quotactl() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EACCES Search permission is denied for a component of a path prefix. EFAULT An invalid bufferP parameter is supplied. The associated structure could not be copied in or out of the kernel. EINVAL One of the following errors: v The file system is not mounted. v Invalid command or quota type. v Invalid input limits: negative limits or soft limits are greater than hard limits.
ENOENT No such file or directory. EPERM The quota control command is privileged and the caller did not have root user authority. E_NO_QUOTA_INST The file system does not support quotas. This is the actual errno generated by GPFS.
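Examples

A minimal sketch of a Q_GETQUOTA query for one user; the path is illustrative:

#include <stdio.h>
#include <gpfs.h>

/* Print the block quota state for user uid. */
int showUserQuota(char *pathP, int uid)
{
  gpfs_quotaInfo_t qInfo;
  int cmd = GPFS_QCMD(Q_GETQUOTA, GPFS_USRQUOTA);

  if (gpfs_quotactl(pathP, cmd, uid, &qInfo) != 0)
  {
    perror("gpfs_quotactl");
    return -1;
  }
  /* blockUsage is reported in 1 KB units */
  printf("usage %lld KB, soft %lld, hard %lld, in doubt %lld\n",
         (long long)qInfo.blockUsage,
         (long long)qInfo.blockSoftLimit,
         (long long)qInfo.blockHardLimit,
         (long long)qInfo.blockInDoubt);
  return 0;
}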
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_quotaInfo_t Structure
Name
gpfs_quotaInfo_t Contains buffer mapping for the gpfs_quotactl() subroutine.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_quotaInfo
{
  gpfs_off64_t blockUsage;      /* current block count */
  gpfs_off64_t blockHardLimit;  /* absolute limit on disk blks alloc */
  gpfs_off64_t blockSoftLimit;  /* preferred limit on disk blks */
  gpfs_off64_t blockInDoubt;    /* distributed shares + "lost" usage for blks */
  int inodeUsage;               /* current # allocated inodes */
  int inodeHardLimit;           /* absolute limit on allocated inodes */
  int inodeSoftLimit;           /* preferred inode limit */
  int inodeInDoubt;             /* distributed shares + "lost" usage for inodes */
  gpfs_uid_t quoId;             /* uid, gid or fileset id */
  int entryType;                /* entry type, not used */
  unsigned int blockGraceTime;  /* time limit for excessive disk use */
  unsigned int inodeGraceTime;  /* time limit for excessive inode use */
} gpfs_quotaInfo_t;
Description
The gpfs_quotaInfo_t structure contains detailed information for the gpfs_quotactl() subroutine.
Members
blockUsage Current block count in 1 KB units. blockHardLimit Absolute limit on disk block allocation. blockSoftLimit Preferred limit on disk block allocation. blockInDoubt Distributed shares and block usage that have not been accounted for. inodeUsage Current number of allocated inodes. inodeHardLimit Absolute limit on allocated inodes. inodeSoftLimit Preferred inode limit. inodeInDoubt Distributed inode share and inode usage that have not been accounted for. quoId User ID, group ID, or fileset ID. entryType Not used. blockGraceTime Time limit (in seconds since the Epoch) for excessive disk use.
inodeGraceTime Time limit (in seconds since the Epoch) for excessive inode use. Epoch is midnight on January 1, 1970 UTC (Coordinated Universal Time).
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_seek_inode() Subroutine
Name
gpfs_seek_inode() - Advances an inode scan to the specified inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_seek_inode(gpfs_iscan_t *iscan, gpfs_ino_t ino);
Description
The gpfs_seek_inode() subroutine advances an inode scan to the specified inode number. The gpfs_seek_inode() subroutine is used to start an inode scan at some place other than the beginning of the inode file. This is useful to restart a partially completed backup or an interrupted dump transfer to a mirror. It could also be used to do an inode scan in parallel from multiple nodes, by partitioning the inode number space into separate ranges for each participating node. The maximum inode number is returned when the scan is opened, and each invocation to obtain the next inode specifies a termination inode number so that the same inode is not returned more than once. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
iscan Pointer to the inode scan handle.
ino The next inode number to be scanned.
Exit status
If the gpfs_seek_inode() subroutine is successful, it returns a value of 0. If the gpfs_seek_inode() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
ENOSYS The gpfs_seek_inode() subroutine is not available. GPFS_E_INVAL_ISCAN Incorrect parameters.
Examples
For an example using gpfs_seek_inode(), see /usr/lpp/mmfs/samples/util/tsinode.c.
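The following minimal sketch shows the partitioned scan described above: each node seeks to the start of its assigned inode range and passes the end of the range as the termination inode; processInode() is a hypothetical per-inode handler:

#include <gpfs.h>

/* Process only the inode range [firstIno, lastIno). */
int scanRange(gpfs_iscan_t *iscanP, gpfs_ino_t firstIno,
              gpfs_ino_t lastIno)
{
  const gpfs_iattr_t *iattrP;

  if (gpfs_seek_inode(iscanP, firstIno) != 0)
    return -1;

  while (gpfs_next_inode(iscanP, lastIno, &iattrP) == 0
         && iattrP != NULL)
    processInode(iattrP);    /* hypothetical */

  return 0;
}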
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat() Subroutine
Name
gpfs_stat() Returns exact file status for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h> int gpfs_stat(char *pathName, struct stat64 *Buffer)
Description
The gpfs_stat() subroutine is used to obtain exact information about the file named by the pathName parameter. This subroutine is provided as an alternative to the stat() subroutine, which may not provide exact mtime and atime values. See Exceptions to Open Group technical standards on page 401. Read, write, or execute permission for the named file is not required, but all directories listed in the path leading to the file must be searchable. The file information is written to the area specified by the Buffer parameter. Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library: v libgpfs.a for AIX v libgpfs.so for Linux
Parameters
pathName The path identifying the file for which exact status information is requested. Buffer A pointer to the stat64 structure in which the information is returned. The stat64 structure is described in the sys/stat.h file.
Exit status
If the gpfs_stat() subroutine is successful, it returns a value of 0. If the gpfs_stat() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
EBADF The path name is not valid. EINVAL The path name does not refer to a GPFS file or a regular file. ENOSYS The gpfs_stat() subroutine is not supported under the current file system format.
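Examples

A minimal sketch; on Linux, the stat64 structure may require compiling with large-file support enabled:

#include <stdio.h>
#include <sys/stat.h>
#include <gpfs.h>

/* Print the exact mtime of a GPFS file. */
int showExactMtime(char *pathP)
{
  struct stat64 sb;

  if (gpfs_stat(pathP, &sb) != 0)
  {
    perror("gpfs_stat");
    return -1;
  }
  printf("%s: mtime %ld\n", pathP, (long)sb.st_mtime);
  return 0;
}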
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsAccessRange_t Structure
Name
gpfsAccessRange_t Declares an access range within a file for an application.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; offset_t start; offset_t length; int isWrite; char padding[4]; } gpfsAccessRange_t;
Description
The gpfsAccessRange_t structure declares an access range within a file for an application. The application accesses file offsets within the given range, and does not access offsets outside the range. Violating this hint may produce worse performance than if no hint was specified. This hint is useful in situations where a file is partitioned coarsely among several nodes. If the ranges do not overlap, each node can specify which range of the file it accesses. This provides a performance improvement in some cases, such as for sequential writing within a range. Subsequent GPFS_ACCESS_RANGE hints replace a hint passed earlier.
Members
structLen Length of the gpfsAccessRange_t structure. structType Structure identifier GPFS_ACCESS_RANGE. start The start of the access range offset, in bytes, from beginning of file.
length Length of the access range. 0 indicates to end of file. isWrite 0 indicates read access. 1 indicates write access. padding[4] Provided to make the length of the gpfsAccessRange_t structure a multiple of 8 bytes in length. There is no need to initialize this field.
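Examples

A minimal sketch of passing this hint to gpfs_fcntl(): the header and the hint are laid out contiguously in one argument area, and fd is assumed to be open on a GPFS file:

#include <gpfs.h>
#include <gpfs_fcntl.h>

/* Declare write access to [start, start+length) of the file. */
int declareAccessRange(int fd, offset_t start, offset_t length)
{
  struct
  {
    gpfsFcntlHeader_t hdr;
    gpfsAccessRange_t acc;
  } arg;

  arg.hdr.totalLength = sizeof(arg);
  arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
  arg.hdr.errorOffset = 0;
  arg.hdr.fcntlReserved = 0;
  arg.acc.structLen = sizeof(arg.acc);
  arg.acc.structType = GPFS_ACCESS_RANGE;
  arg.acc.start = start;
  arg.acc.length = length;
  arg.acc.isWrite = 1;     /* 1 - write access */

  return gpfs_fcntl(fd, &arg);
}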
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsCancelHints_t Structure
Name
gpfsCancelHints_t Indicates to remove any hints against the open file handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; } gpfsCancelHints_t;
Description
The gpfsCancelHints_t structure indicates to remove any hints against the open file handle. GPFS removes any hints that may have been issued against this open file handle: v The hint status of the file is restored to what it would have been immediately after being opened, but does not affect the contents of the GPFS file cache. Cancelling an earlier hint that resulted in data being removed from the GPFS file cache does not bring that data back into the cache. Data reenters the cache only upon access by the application or by user-driven or automatic prefetching. v Only the GPFS_MULTIPLE_ACCESS_RANGE hint has a state that might be removed by the GPFS_CANCEL_HINTS directive. Note: This directive cancels only the effect of other hints, not other directives.
Members
structLen Length of the gpfsCancelHints_t structure. structType Structure identifier GPFS_CANCEL_HINTS.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsClearFileCache_t Structure
Name
gpfsClearFileCache_t Indicates file access in the near future is not expected.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; } gpfsClearFileCache_t;
Description
The gpfsClearFileCache_t structure indicates file access in the near future is not expected. The application does not expect to make any further accesses to the file in the near future, so GPFS removes any data or metadata pertaining to the file from its cache. Multiple node applications that have finished one phase of their computation may want to use this hint before the file is accessed in a conflicting mode from another node in a later phase. The potential performance benefit is that GPFS can avoid later synchronous cache consistency operations.
Members
structLen Length of the gpfsClearFileCache_t structure. structType Structure identifier GPFS_CLEAR_FILE_CACHE.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsDataShipMap_t Structure
Name
gpfsDataShipMap_t Indicates which agent nodes are to be used for data shipping.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
#define GPFS_MAX_DS_AGENT_NODES 2048
typedef struct {
  int structLen;
  int structType;
  int partitionSize;
  int agentCount;
  int agentNodeNumber[GPFS_MAX_DS_AGENT_NODES];
} gpfsDataShipMap_t;
Description
The gpfsDataShipMap_t structure indicates which agent nodes are to be used for data shipping. GPFS recognizes which agent nodes to use for data shipping: v This directive can only appear in a gpfs_fcntl() subroutine that also gives the GPFS_DATA_SHIP_START directive. v If any of the participating threads include an explicit agent mapping with this directive, all threads must provide the same agent mapping, or else GPFS returns EINVAL in errno. If this directive is not used, the agents are exactly the nodes on which the GPFS_DATA_SHIP_START directive was given. The order of these nodes in the mapping is random. Once the order is set, when all instances have issued the GPFS_DATA_SHIP_START directive, the partitioning of the blocks is round robin among the agent nodes. v All of the nodes named in the data shipping mapping must also be data shipping clients that have issued the GPFS_DATA_SHIP_START directive. The reason for this is that GPFS, like most file systems, does not guarantee that data is written through to disk immediately after a write call from an application, or even after a close returns. Thus, cached data can be lost if a node crashes. Data loss can only occur, however, if the node that crashes is the node that wrote the data. With data shipping, this property is no longer true. Any node crash in the collective of nodes can cause loss of data. An application running with a file in data shipping mode writes data by shipping it to the GPFS cache on an agent node. That agent node may later crash before writing the data to disk. The originating node may not receive, pay attention to, or realize the severity of an error message. Presumably, a distributed application would notice a crash of one of the nodes on which it was running and would take corrective action, such as rolling back to an earlier stable checkpoint or deleting a corrupt output file. By requiring that all agent nodes also have at least one data shipping client, GPFS makes it such that at least one of the nodes of a distributed application will crash if there is the potential for data loss because of an agent node crash. If any of the data shipping client nodes suffers a node or GPFS crash, the file will be taken out of data shipping mode. The value for partitionSize must be a multiple of the number of bytes in a single file system block.
Members
structLen Length of the gpfsDataShipMap_t structure.
structType Structure identifier GPFS_DATA_SHIP_MAP. partitionSize The number of contiguous bytes per server. This value must be a multiple of the number of bytes in a single file system block. agentCount The number of entries in the agentNodeNumber array. agentNodeNumber array The data ship agent node numbers assigned by GPFS and displayed with the mmlscluster command.
Error status
EINVAL Not all participating threads have provided the same agent mapping. ENOMEM The available data space in memory is not large enough to allocate the data structures necessary to run in data shipping mode. EPERM An attempt to open a file in data shipping mode that is already open in write mode by some thread that did not issue the GPFS_DATA_SHIP_START directive. ESTALE A node in the data shipping collective has gone down.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsDataShipStart_t Structure
Name
gpfsDataShipStart_t Initiates data shipping mode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; int numInstances; int reserved; } gpfsDataShipStart_t;
Description
The gpfsDataShipStart_t structure initiates data shipping mode. Once all participating threads have issued this directive for a file, GPFS enters a mode where it logically partitions the blocks of the file among a group of agent nodes. The agents are those nodes on which one or more threads have issued the GPFS_DATA_SHIP_START directive. Each thread that has issued a GPFS_DATA_SHIP_START directive and the associated agent nodes are referred to as the data shipping collective. In data shipping mode: v All file accesses result in GPFS messages to the appropriate agents to read or write the requested data. v The GPFS_DATA_SHIP_START directive is a blocking collective operation. That is, every thread that intends to access the file through data shipping must issue the GPFS_DATA_SHIP_START directive with the same value of numInstances. These threads all block within their gpfs_fcntl() subroutines until all numInstances threads have issued the GPFS_DATA_SHIP_START directive. v The number of threads that issue the GPFS_DATA_SHIP_START directive does not have to be the same on all nodes. However, each thread must use a different file handle. The default agent mapping can be overridden using the GPFS_DATA_SHIP_MAP directive. v Applications that perform a fine-grained write, sharing across several nodes, should benefit most from data shipping. The reason for this is that the granularity of GPFS cache consistency is an entire file block, which rarely matches the record size of applications. Without using data shipping, when several nodes simultaneously write into the same block of a file, even non-overlapping parts of the block, GPFS serially grants, and then releases, permission to write into the block to each node in turn. Each permission change requires dirty cached data on the relinquishing node to be flushed to disk, yielding poor performance. Data shipping avoids this overhead by moving data to the node that already has permission to write into the block rather than migrating access permission to the node trying to write the data. However, since most data accesses are remote in data shipping mode, clients do not benefit from caching as much in data shipping mode as they would if data shipping mode were not in effect. The cost to send a message to another instance of GPFS to fetch or write data is much higher than the cost of accessing that data through the local GPFS buffer cache. Thus, whether or not a particular application benefits from data shipping is highly dependent on its access pattern and its degree of block sharing.
v Another case where data shipping can help performance is when multiple nodes must all append data to the current end of the file. If all of the participating threads open their instances with the O_APPEND flag before initiating data shipping, one of the participating nodes is chosen as the agent to which all appends are shipped. The aggregate performance of all the appending nodes is limited to the throughput of a single node in this case, but should still exceed what the performance would have been for appending small records without using data shipping. Data shipping mode imposes several restrictions on file usage: v Because an application level read or write may be split across several agents, POSIX read and write file atomicity is not enforced while in data shipping mode. v A file in data shipping mode cannot be written through any file handle that was not associated with the data shipping collective through a GPFS_DATA_SHIP_START directive. v Calls that are not allowed on a file that has data shipping enabled: chmod fchmod chown fchown link
The GPFS_DATA_SHIP_START directive exits cleanly only when cancelled by a GPFS_DATA_SHIP_STOP directive. If all threads issue a close for the file, it is taken out of data shipping mode but errors are also returned.
Members
structLen Length of the gpfsDataShipStart_t structure. structType Structure identifier GPFS_DATA_SHIP_START numInstances The number of open file instances, on all nodes, collaborating to operate on the file. reserved This field is currently not used. For compatibility with future versions of GPFS, set this field to zero.
Recovery
Since GPFS_DATA_SHIP_START directives block their invoking threads until all participants respond accordingly, there needs to be a way to recover if the application program uses the wrong value for numInstances or one of the participating nodes crashes before issuing its GPFS_DATA_SHIP_START directive. While a gpfs_fcntl() subroutine is blocked waiting for other threads, the subroutine can be interrupted by any signal. If a signal is delivered to any of the waiting subroutines, all waiting subroutines on every node are interrupted and return EINTR. GPFS does not establish data shipping if such a signal occurs. It is the responsibility of the application to mask off any signals that might normally occur while waiting for another node in the data shipping collective. Several libraries use SIGALRM; the thread that makes the gpfs_fcntl() invocation should use sigthreadmask to mask off delivery of this signal while inside the subroutine.
Error status
EINTR A signal was delivered to a blocked gpfs_fcntl() subroutine. All waiting subroutines, on every node, are interrupted. EINVAL The file mode has been changed since the file was opened to include or exclude O_APPEND. The value of numInstances is inconsistent with the value issued by other threads intending to access the file. An attempt has been made to issue a GPFS_DATA_SHIP_START directive on a file that is already in use in data shipping mode by other clients. ENOMEM The available data space in memory is not large enough to allocate the data structures necessary to establish and/or run in data shipping mode. EPERM An attempt has been made to open a file in data shipping mode that is already open in write mode by some thread that did not issue the GPFS_DATA_SHIP_START directive. GPFS does not initiate data shipping. ESTALE A node in the data shipping collective has gone down.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsDataShipStop_t Structure
Name
gpfsDataShipStop_t Takes a file out of data shipping mode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; } gpfsDataShipStop_t;
Description
The gpfsDataShipStop_t structure takes a file out of data shipping mode. GPFS takes the file out of data shipping mode: v GPFS waits for all threads that issued the GPFS_DATA_SHIP_START directive to issue this directive, then flushes all dirty file data to disk. v While a gpfs_fcntl() invocation is blocked waiting for other threads, the subroutine can be interrupted by any signal. If a signal is delivered to any of the waiting invocations, all waiting subroutines on every node are interrupted and return EINTR. GPFS does not cancel data shipping mode if such a signal occurs. It is the responsibility of the application to mask off any signals that might normally occur while waiting for another node in the data shipping collective. Several libraries use SIGALRM; the thread that issues the gpfs_fcntl() should use sigthreadmask to mask off delivery of this signal while inside the subroutine.
Members
structLen Length of the gpfsDataShipStop_t structure. structType Structure identifier GPFS_DATA_SHIP_STOP.
Error status
EIO An error occurred while flushing dirty data. EINTR A signal was delivered to a blocked gpfs_fcntl() subroutine. All waiting subroutines, on every node, are interrupted. EINVAL An attempt has been made to issue the GPFS_DATA_SHIP_STOP directive from a node or thread that is not part of this data shipping collective. An attempt has been made to issue the GPFS_DATA_SHIP_STOP directive on a file that is not in data shipping mode. ESTALE A node in the data shipping collective has gone down.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsFcntlHeader_t Structure
Name
gpfsFcntlHeader_t Contains declaration information for the gpfs_fcntl() subroutine.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int totalLength; int fcntlVersion; int errorOffset; int fcntlReserved; } gpfsFcntlHeader_t;
Description
The gpfsFcntlHeader_t structure contains size, version, and error information for the gpfs_fcntl() subroutine.
Members
totalLength This field must be set to the total length, in bytes, of the data structure being passed in this subroutine. This includes the length of the header and all hints and directives that follow the header. The total size of the data structure cannot exceed the value of GPFS_MAX_FCNTL_LENGTH, as defined in the header file gpfs_fcntl.h. The current value of GPFS_MAX_FCNTL_LENGTH is 64 KB. fcntlVersion This field must be set to the current version number of the gpfs_fcntl() subroutine, as defined by GPFS_FCNTL_CURRENT_VERSION in the header file gpfs_fcntl.h. The current version number is one. errorOffset If an error occurs processing a system call, GPFS sets this field to the offset within the parameter area where the error was detected. For example: 1. An incorrect version number in the header would cause errorOffset to be set to zero. 2. An error in the first hint following the header would set errorOffset to sizeof(header). If no errors are found, GPFS does not alter this field. fcntlReserved This field is currently unused. For compatibility with future versions of GPFS, set this field to zero.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsFreeRange_t Structure
Name
gpfsFreeRange_t Undeclares an access range within a file for an application.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; offset_t start; offset_t length; } gpfsFreeRange_t;
Description
The gpfsFreeRange_t structure undeclares an access range within a file for an application. The application no longer accesses file offsets within the given range. GPFS flushes the data at the file offsets and removes it from the cache. Multiple node applications that have finished one phase of their computation may want to use this hint before the file is accessed in a conflicting mode from another node in a later phase. The potential performance benefit is that GPFS can avoid later synchronous cache consistency operations.
Members
structLen Length of the gpfsFreeRange_t structure. structType Structure identifier GPFS_FREE_RANGE. start The start of the access range offset, in bytes, from beginning of file. length Length of the access range. A value of 0 indicates to end of file.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetFilesetName_t Structure
Name
gpfsGetFilesetName_t Obtains a file's fileset name.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; char buffer[GPFS_FCNTL_MAX_NAME_BUFFER]; } gpfsGetFilesetName_t;
Description
The gpfsGetFilesetName_t structure is used to obtain a file's fileset name.
Members
structLen Length of the gpfsGetFilesetName_t structure. structType Structure identifier GPFS_FCNTL_GET_FILESETNAME. buffer The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the call, the buffer contains a null-terminated character string for the name of the requested object.
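Examples

A minimal sketch, following the same header-plus-structure layout used for the other gpfs_fcntl() structures:

#include <stdio.h>
#include <gpfs.h>
#include <gpfs_fcntl.h>

/* Print the fileset name of the file open on fd. */
int showFilesetName(int fd)
{
  struct
  {
    gpfsFcntlHeader_t hdr;
    gpfsGetFilesetName_t fileset;
  } arg;

  arg.hdr.totalLength = sizeof(arg);
  arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
  arg.hdr.errorOffset = 0;
  arg.hdr.fcntlReserved = 0;
  arg.fileset.structLen = sizeof(arg.fileset);
  arg.fileset.structType = GPFS_FCNTL_GET_FILESETNAME;

  if (gpfs_fcntl(fd, &arg) != 0)
  {
    perror("gpfs_fcntl");
    return -1;
  }
  printf("fileset: %s\n", arg.fileset.buffer);
  return 0;
}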
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetReplication_t Structure
Name
gpfsGetReplication_t Obtains a file's replication factors.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; int metadataReplicas; int maxMetadataReplicas; int dataReplicas; int maxDataReplicas; int status; int reserved; } gpfsGetReplication_t;
Description
The gpfsGetReplication_t structure is used to obtain a file's replication factors.
Members
structLen Length of the gpfsGetReplication_t structure. structType Structure identifier GPFS_FCNTL_GET_REPLICATION. metadataReplicas Returns the current number of copies of indirect blocks for the file. maxMetadataReplicas Returns the maximum number of copies of indirect blocks for a file. dataReplicas Returns the current number of copies of the data blocks for a file. maxDataReplicas Returns the maximum number of copies of data blocks for a file. status Returns the status of the file. Status values are defined below. reserved Unused, but should be set to 0.
Error status
These values are returned in the status field: GPFS_FCNTL_STATUS_EXPOSED This file may have some data where the only replicas are on suspended disks; implies some data may be lost if suspended disks are removed. GPFS_FCNTL_STATUS_ILLREPLICATE This file may not be properly replicated; that is, some data may have fewer or more than the desired number of replicas, or some replicas may be on suspended disks.
GPFS_FCNTL_STATUS_UNBALANCED This file may not be properly balanced. GPFS_FCNTL_STATUS_DATAUPDATEMISS This file has stale data blocks on at least one of the disks that are marked as unavailable or recovering in the stripe group descriptor. GPFS_FCNTL_STATUS_METAUPDATEMISS This file has stale indirect blocks on at least one unavailable or recovering disk. GPFS_FCNTL_STATUS_ILLPLACED This file may not be properly placed; that is, some data may be stored in an incorrect storage pool.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetSnapshotName_t Structure
Name
gpfsGetSnapshotName_t Obtains a file's snapshot name.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; char buffer[GPFS_FCNTL_MAX_NAME_BUFFER]; } gpfsGetSnapshotName_t;
Description
The gpfsGetSnapshotName_t structure is used to obtain a file's snapshot name. If the file is not part of a snapshot, a zero-length snapshot name is returned.
Members
structLen Length of the gpfsGetSnapshotName_t structure. structType Structure identifier GPFS_FCNTL_GET_SNAPSHOTNAME. buffer The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the call, the buffer contains a null-terminated character string for the name of the requested object.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetStoragePool_t Structure
Name
gpfsGetStoragePool_t Obtains a file's storage pool name.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct { int structLen; int structType; char buffer[GPFS_FCNTL_MAX_NAME_BUFFER]; } gpfsGetStoragePool_t;
Description
The gpfsGetStoragePool_t structure is used to obtain a file's storage pool name.
Members
structLen Length of the gpfsGetStoragePool_t structure. structType Structure identifier GPFS_FCNTL_GET_STORAGEPOOL. buffer The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the call, the buffer contains a null-terminated character string for the name of the requested object.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsMultipleAccessRange_t Structure
Name
gpfsMultipleAccessRange_t Defines prefetching and write-behind file access for an application.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
  offset_t blockNumber;  /* data block number to access */
  int start;             /* start of range (from beginning of block) */
  int length;            /* number of bytes in range */
  int isWrite;           /* 0 - READ access  1 - WRITE access */
  char padding[4];
} gpfsRangeArray_t;
typedef struct { int structLen; int structType; int accRangeCnt; int relRangeCnt; gpfsRangeArray_t accRangeArray[GPFS_MAX_RANGE_COUNT]; gpfsRangeArray_t relRangeArray[GPFS_MAX_RANGE_COUNT]; } gpfsMultipleAccessRange_t;
Description
The gpfsMultipleAccessRange_t structure defines prefetching and write-behind access where the application will soon access the portions of the blocks specified in accRangeArray and has finished accessing the ranges listed in relRangeArray. The size of a block is returned in the st_blksize field by the stat() subroutine, so a file offset OFF falls in block OFF/st_blksize. v Up to GPFS_MAX_RANGE_COUNT blocks, as defined in the header file gpfs_fcntl.h, may be given in one multiple access range hint. The current value of GPFS_MAX_RANGE_COUNT is eight. Depending on the current load, GPFS may initiate prefetching of some or all of the blocks. v Each range named in accRangeArray that is accepted for prefetching, should eventually be released with an identical entry in relRangeArray, or else GPFS will stop prefetching blocks for this file. Note: Naming a subrange of a block in relRangeArray that does not exactly match a past entry in accRangeArray has no effect, and does not produce an error. v Applications that make random accesses or regular patterns not recognized by GPFS may benefit from using this hint. GPFS already recognizes sequential and strided file access patterns. Applications that use such patterns should not need to use this hint, as GPFS automatically recognizes the pattern and performs prefetching and write-behind accordingly. In fact, using the multiple access range hint in programs having a sequential or strided access pattern may degrade performance due to the extra overhead to process the hints. Notice that the units of prefetch and release are file blocks, not file offsets. If the application intends to make several accesses to the same block, it will generally get better performance by including the entire range to be accessed in the GPFS_MULTIPLE_ACCESS_RANGE hint before actually doing a read or write. A sample program gpfsperf, which demonstrates the use of the
389
GPFS_MULTIPLE_ACCESS_RANGE hint, is included in the GPFS product and installed in the /usr/lpp/mmfs/samples/perf directory.
Members
structLen
        Length of the gpfsMultipleAccessRange_t structure.
structType
        Structure identifier GPFS_MULTIPLE_ACCESS_RANGE.
accRangeCnt
        On input, the number of ranges in accRangeArray. On output, the number of processed ranges, which are the first n of the given ranges.
relRangeCnt
        The number of ranges in relRangeArray.
accRangeArray
        The ranges of blocks that the application will soon access.
relRangeArray
        The ranges of blocks that the application has finished accessing.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
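For illustration only (a sketch, not part of the product documentation), the following C function declares an upcoming read of one whole block of an open file and later releases the range. The caller is assumed to supply the file descriptor, the block number, and the block size obtained from st_blksize; the function name hint_block is hypothetical.

#include <string.h>
#include <gpfs_fcntl.h>

/* Sketch: hint an upcoming read of block 'blk', then release it.
   Returns 0 on success, -1 on failure (errno set by gpfs_fcntl). */
int hint_block(int fd, offset_t blk, int blockSize)
{
   struct {
      gpfsFcntlHeader_t         hdr;
      gpfsMultipleAccessRange_t mar;
   } hint;

   memset(&hint, 0, sizeof(hint));
   hint.hdr.totalLength  = sizeof(hint);
   hint.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
   hint.mar.structLen    = sizeof(hint.mar);
   hint.mar.structType   = GPFS_MULTIPLE_ACCESS_RANGE;

   /* Declare the access: the whole block 'blk', read access. */
   hint.mar.accRangeCnt = 1;
   hint.mar.accRangeArray[0].blockNumber = blk;
   hint.mar.accRangeArray[0].start       = 0;
   hint.mar.accRangeArray[0].length      = blockSize;
   hint.mar.accRangeArray[0].isWrite     = 0;
   if (gpfs_fcntl(fd, &hint) != 0)
      return -1;
   /* On return, accRangeCnt holds how many ranges GPFS accepted. */

   /* ... read the data with ordinary read() or pread() calls ... */

   /* Release the range with an identical entry in relRangeArray. */
   memset(&hint.mar.accRangeArray, 0, sizeof(hint.mar.accRangeArray));
   hint.mar.accRangeCnt = 0;
   hint.mar.relRangeCnt = 1;
   hint.mar.relRangeArray[0].blockNumber = blk;
   hint.mar.relRangeArray[0].start       = 0;
   hint.mar.relRangeArray[0].length      = blockSize;
   hint.mar.relRangeArray[0].isWrite     = 0;
   return gpfs_fcntl(fd, &hint);
}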
gpfsRestripeData_t Structure
Name
gpfsRestripeData_t Restripes a file's data blocks.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
  int structLen;
  int structType;
  int options;
  int reserved;
} gpfsRestripeData_t;
Description
The gpfsRestripeData_t structure is used to restripe a file's data blocks in order to update its replication and migrate its data. The data movement is always done immediately.
Members
structLen
        Length of the gpfsRestripeData_t structure.
structType
        Structure identifier GPFS_FCNTL_RESTRIPE_DATA.
options
        Options for the restripe operation. See the mmrestripefs command for complete definitions.
        GPFS_FCNTL_RESTRIPE_M
                Migrate critical data off of suspended disks.
        GPFS_FCNTL_RESTRIPE_R
                Replicate data against subsequent failure.
        GPFS_FCNTL_RESTRIPE_P
                Place file data in the assigned storage pool.
        GPFS_FCNTL_RESTRIPE_B
                Rebalance file data.
reserved
        Must be set to 0.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
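As an illustration (a sketch, not from the product documentation; the function name rebalance_file is hypothetical), the following C function rebalances the data blocks of an open file by passing a gpfsRestripeData_t directive, preceded by a gpfsFcntlHeader_t, to the gpfs_fcntl() subroutine:

#include <string.h>
#include <gpfs_fcntl.h>

/* Sketch: rebalance the data blocks of the file open on 'fd'. */
int rebalance_file(int fd)
{
   struct {
      gpfsFcntlHeader_t  hdr;
      gpfsRestripeData_t restripe;
   } arg;

   memset(&arg, 0, sizeof(arg));
   arg.hdr.totalLength     = sizeof(arg);
   arg.hdr.fcntlVersion    = GPFS_FCNTL_CURRENT_VERSION;
   arg.restripe.structLen  = sizeof(arg.restripe);
   arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
   arg.restripe.options    = GPFS_FCNTL_RESTRIPE_B;  /* rebalance */

   return gpfs_fcntl(fd, &arg);  /* 0 on success, -1 with errno set */
}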
gpfsSetReplication_t Structure
Name
gpfsSetReplication_t Sets a file's replication factors.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
  int structLen;
  int structType;
  int metadataReplicas;
  int maxMetadataReplicas;
  int dataReplicas;
  int maxDataReplicas;
  int errReason;
  int errValue1;
  int errValue2;
  int reserved;
} gpfsSetReplication_t;
Description
The gpfsSetReplication_t structure is used to set a file's replication factors. However, the directive does not cause the file to be restriped immediately. Instead, the caller must append a gpfsRestripeData_t directive or invoke an explicit restripe using the mmrestripefs or mmrestripefile command.
Members
structLen
        Length of the gpfsSetReplication_t structure.
structType
        Structure identifier GPFS_FCNTL_SET_REPLICATION.
metadataReplicas
        Specifies how many copies of the file system's metadata to create. Enter a value of 1 or 2, but not greater than the value of the maxMetadataReplicas attribute of the file. A value of 0 indicates not to change the current value.
maxMetadataReplicas
        The maximum number of copies of indirect blocks for a file. Space is reserved in the inode for all possible copies of pointers to indirect blocks. Valid values are 1 and 2, but cannot be less than DefaultMetadataReplicas. The default is 1. A value of 0 indicates not to change the current value.
dataReplicas
        Specifies how many copies of the file data to create. Enter a value of 1 or 2, but not greater than the value of the maxDataReplicas attribute of the file. A value of 0 indicates not to change the current value.
maxDataReplicas
        The maximum number of copies of data blocks for a file. Space is reserved in the inode and indirect blocks for all possible copies of pointers to data blocks. Valid values are 1 and 2, but cannot be less than DefaultDataReplicas. The default is 1. A value of 0 indicates not to change the current value.
errReason
        Returned reason for request failure. Defined below.
errValue1
        Returned value depending upon errReason.
errValue2
        Returned value depending upon errReason.
reserved
        Unused, but should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NONE
        Command was successful or no reason information was returned.
GPFS_FCNTL_ERR_METADATA_REPLICAS_RANGE
        Field metadataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower and upper range boundaries.
GPFS_FCNTL_ERR_MAXMETADATA_REPLICAS_RANGE
        Field maxMetadataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower and upper range boundaries.
GPFS_FCNTL_ERR_DATA_REPLICAS_RANGE
        Field dataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower and upper range boundaries.
GPFS_FCNTL_ERR_MAXDATA_REPLICAS_RANGE
        Field maxDataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower and upper range boundaries.
GPFS_FCNTL_ERR_FILE_NOT_EMPTY
        An attempt to change maxMetadataReplicas or maxDataReplicas, or both, was made on a file that is not empty.
GPFS_FCNTL_ERR_REPLICAS_EXCEED_FGMAX
        Field metadataReplicas, or dataReplicas, or both, exceed the number of failure groups. Field errValue1 contains the maximum number of metadata failure groups. Field errValue2 contains the maximum number of data failure groups.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux.
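The append pattern described above can be sketched as follows (illustrative only, not from the product documentation; the function name and the replication values are arbitrary examples). Both directives follow a single gpfsFcntlHeader_t, so the replication change and the restripe are requested in one gpfs_fcntl() call:

#include <string.h>
#include <gpfs_fcntl.h>

/* Sketch: set a file's data replication to 2 and restripe it at once
   by appending a gpfsRestripeData_t directive to the same request. */
int set_and_restripe(int fd)
{
   struct {
      gpfsFcntlHeader_t    hdr;
      gpfsSetReplication_t repl;
      gpfsRestripeData_t   restripe;
   } arg;

   memset(&arg, 0, sizeof(arg));
   arg.hdr.totalLength  = sizeof(arg);
   arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;

   arg.repl.structLen           = sizeof(arg.repl);
   arg.repl.structType          = GPFS_FCNTL_SET_REPLICATION;
   arg.repl.metadataReplicas    = 0;   /* 0 = leave unchanged */
   arg.repl.maxMetadataReplicas = 0;
   arg.repl.dataReplicas        = 2;   /* example value */
   arg.repl.maxDataReplicas     = 0;

   arg.restripe.structLen  = sizeof(arg.restripe);
   arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
   arg.restripe.options    = GPFS_FCNTL_RESTRIPE_R;  /* replicate data */

   if (gpfs_fcntl(fd, &arg) != 0) {
      /* arg.repl.errReason explains a replication failure, if any. */
      return -1;
   }
   return 0;
}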
gpfsSetStoragePool_t Structure
Name
gpfsSetStoragePool_t Sets a file's assigned storage pool.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
  int structLen;
  int structType;
  int errReason;
  int errValue1;
  int errValue2;
  int reserved;
  char buffer[GPFS_FCNTL_MAX_NAME_BUFFER];
} gpfsSetStoragePool_t;
Description
The gpfsSetStoragePool_t structure is used to set a file's assigned storage pool. However, the directive does not cause the file data to be migrated immediately. Instead, the caller must append a gpfsRestripeData_t directive or invoke an explicit restripe with the mmrestripefs or mmrestripefile command. The caller must have superuser (root) privileges to change a storage pool assignment.
Members
structLen
        Length of the gpfsSetStoragePool_t structure.
structType
        Structure identifier GPFS_FCNTL_SET_STORAGEPOOL.
buffer
        The name of the storage pool for the file's data. Only user files may be reassigned to a different storage pool. System files, including all directories, must reside in the system pool and may not be moved. The size of the buffer may vary, but must be a multiple of eight.
errReason
        Returned reason for request failure. Defined below.
errValue1
        Returned value depending upon errReason.
errValue2
        Returned value depending upon errReason.
reserved
        Unused, but should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NONE
        Command was successful or no reason information was returned.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL
        Invalid storage pool name was given.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX, /usr/lpp/mmfs/lib/libgpfs.so for Linux
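As with gpfsSetReplication_t, the pool assignment and the data migration can be combined by stacking the directives behind one header. In the following sketch (illustrative only, not from the product documentation), the function name move_to_pool and the pool name supplied by the caller are hypothetical:

#include <string.h>
#include <gpfs_fcntl.h>

/* Sketch: assign the file on 'fd' to the named pool and migrate its
   data by appending a gpfsRestripeData_t directive. Requires root. */
int move_to_pool(int fd, const char *poolName)
{
   struct {
      gpfsFcntlHeader_t    hdr;
      gpfsSetStoragePool_t pool;
      gpfsRestripeData_t   restripe;
   } arg;

   memset(&arg, 0, sizeof(arg));
   arg.hdr.totalLength  = sizeof(arg);
   arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;

   arg.pool.structLen   = sizeof(arg.pool);
   arg.pool.structType  = GPFS_FCNTL_SET_STORAGEPOOL;
   strncpy(arg.pool.buffer, poolName, sizeof(arg.pool.buffer) - 1);

   arg.restripe.structLen  = sizeof(arg.restripe);
   arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
   arg.restripe.options    = GPFS_FCNTL_RESTRIPE_P;  /* place in pool */

   return gpfs_fcntl(fd, &arg);  /* check arg.pool.errReason on failure */
}

/* Example call: move_to_pool(fd, "silver");  ("silver" is a made-up name) */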
mmsdrbackup user exit
Description
The /var/mmfs/etc/mmsdrbackup user exit, when properly installed on the primary GPFS configuration server, will be asynchronously invoked every time there is a change to the GPFS master configuration file. This user exit can be used to create a backup of the GPFS configuration data. Read the sample file /usr/lpp/mmfs/samples/mmsdrbackup.sample for a detailed description of how to code and install this user exit.
Parameters
The generation number of the most recent version of the GPFS configuration data.
Exit status
The mmsdrbackup user exit should always return a value of zero.
Location
/var/mmfs/etc
nsddevices user exit
Description
The /var/mmfs/etc/nsddevices user exit, when properly installed, is invoked synchronously by the GPFS daemon during its disk discovery processing. The purpose of this procedure is to discover and verify the physical devices on each node that correspond to the disks previously defined to GPFS with the mmcrnsd command. The nsddevices user exit can be used either to replace or to supplement the disk discovery procedure of the GPFS daemon. Read the sample file /usr/lpp/mmfs/samples/nsddevices.sample for a detailed description of how to code and install this user exit.
Parameters
None.
Exit status
The nsddevices user exit should return either zero or one. When the nsddevices user exit returns a value of zero, the GPFS disk discovery procedure is bypassed. When the nsddevices user exit returns a value of one, the GPFS disk discovery procedure is performed and the results are concatenated with the results from the nsddevices user exit.
Location
/var/mmfs/etc
syncfsconfig user exit
Description
The /var/mmfs/etc/syncfsconfig user exit, when properly installed, will be synchronously invoked after each command that may change the configuration of a file system. Examples of such commands are mmadddisk, mmdeldisk, mmchfs, and so forth. The syncfsconfig user exit can be used to keep the file system configuration data in replicated GPFS clusters automatically synchronized. Read the sample file /usr/lpp/mmfs/samples/syncfsconfig.sample for a detailed description of how to code and install this user exit.
Parameters
None.
Exit status
The syncfsconfig user exit should always return a value of zero.
Location
/var/mmfs/etc
File system format changes between versions of GPFS
Every GPFS file system has a format version number associated with it. This version number corresponds to the on-disk data structures for the file system, and is an indicator of the supported file system functionality.

The file system version number is assigned when the file system is first created, and is updated to the latest supported level after the file system is migrated using the mmchfs -V command.

The format version number for a file system can be displayed with the mmlsfs -V command. If a file system was created with an older GPFS release, new functionality that requires different on-disk data structures will not be enabled until you run the mmchfs -V command.

The mmchfs -V option requires the specification of one of two values, full or compat:
v Specifying mmchfs -V full enables all of the new functionality that requires different on-disk data structures. After this command, nodes in remote clusters running an older GPFS version will no longer be able to mount the file system.
v Specifying mmchfs -V compat enables only features that are compatible with nodes running GPFS 3.1. After this command, nodes in remote clusters running GPFS 3.1 will still be able to mount the file system, but nodes running GPFS versions older than 3.1 will not be able to mount the file system.

The current highest file system format version is 10.00. This is the version that is assigned to file systems created with GPFS 3.2. The same version number will be assigned to older file systems after you run the mmchfs -V command.

If your current file system is at format level 9.03 (GPFS 3.1), the set of enabled features depends on the value specified with the mmchfs -V option:
v After running mmchfs -V full, the file system will be able to support:
  - Fine grain directory locking
  - LIMIT clause on placement policies
v After running mmchfs -V compat, the file system will be enabled for:
  - Fine grain directory locking (provided all nodes accessing the directory are at GPFS 3.2)

If your current file system is at format level 8.00 (GPFS 2.3), after running mmchfs -V, the file system will be able to support all of the above, plus:
v Storage pools
v Filesets
v Fileset quotas

If your current file system is at format level 7.00 (GPFS 2.2), after running mmchfs -V, the file system will be able to support all of the above, plus:
v NFS V4 Access Control Lists
v New format for the internal allocation summary files

If your current file system is at format level 6.00 (GPFS 2.1), after running mmchfs -V, the file system will be able to support all of the above, plus extended access control list entries (-rwxc access mode bits).

The functionality described above is only a subset of the functional changes introduced with the different GPFS releases. Functional changes that do not require changing the on-disk data structures are not listed here. Such changes are either immediately available when the new level of code is installed, or require running the mmchconfig release=LATEST command. For a complete list, see the Summary of changes.
Accessibility features for GPFS
Accessibility features help a user who has a physical disability, such as restricted mobility or limited vision, to use information technology products successfully.
Accessibility features
The following list includes the major accessibility features in IBM GPFS. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers.
v Keys that are tactilely discernible and do not activate just by touching them.
v Industry-standard devices for ports and connectors.
v The attachment of alternative input and output devices.

The IBM Cluster information center and its related publications are accessibility-enabled. The accessibility features of the information center are described in Accessibility and keyboard shortcuts in the information center.
Keyboard navigation
See the IBM Accessibility Center at http://www.ibm.com/able for more information about the commitment that IBM has to accessibility.
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any of IBM's intellectual property rights may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
USA

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
Intellectual Property Law
Mail Station P300
2455 South Road, Poughkeepsie, NY 12601-5400 USA

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States or other countries or both:
v AIX
v eServer
v FlashCopy
v General Parallel File System
v GPFS
v IBM
v SANergy
v Tivoli

INFINIBAND, InfiniBand Trade Association and the INFINIBAND design marks are trademarks and/or service marks of the INFINIBAND Trade Association.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of the Open Group in the United States and other countries.
Other company, product, and service names may be the trademarks or service marks of others.
Glossary
This glossary defines technical terms and abbreviations used in GPFS documentation. If you do not find the term you are looking for, refer to the index of the appropriate book or view the IBM Glossary of Computing Terms, located on the Internet at: w3.ibm.com/standards/terminology.
disposition. The session to which a data management event is delivered. An individual disposition is set for each type of event from each file system. disk leasing. A method for controlling access to storage devices from multiple host systems. Any host that wants to access a storage device configured to use disk leasing registers for a lease; in the event of a perceived failure, a host system can deny access, preventing I/O operations with the storage device until the preempted system has reregistered. domain. A logical grouping of resources in a network for the purpose of common management and administration.
B
block utilization. The measurement of the percentage of used subblocks per allocated blocks.
C
cluster. A loosely-coupled collection of independent systems (nodes) organized into a network for the purpose of sharing resources and communicating with each other. See also GPFS cluster. cluster configuration data. The configuration data that is stored on the cluster configuration servers.
F
failback. Cluster recovery from failover following repair. See also failover. failover. (1) The process of transferring all control of the ESS to a single cluster in the ESS when the other cluster in the ESS fails. See also cluster. (2) The routing of all transactions to a second controller when the first controller fails. See also cluster. (3) The assumption of file system duties by another node when a node fails. failure group. A collection of disks that share common access paths or adapter connection, and could all become unavailable through a single hardware failure. fileset. A hierarchical grouping of files managed as a unit for balancing workload across a cluster. file-management policy. A set of rules defined in a policy file that GPFS uses to manage file migration and file deletion. See also policy. file-placement policy. A set of rules defined in a policy file that GPFS uses to manage the initial placement of a newly created file. See also policy. file system descriptor. A data structure containing key information about a file system. This information includes the disks assigned to the file system (stripe group), the current state of the file system, and pointers to key files such as quota files and log files. file system descriptor quorum. The number of disks needed in order to write the file system descriptor correctly. file system manager. The provider of services for all the nodes using a single file system. A file system manager processes changes to the state or description of the file system, controls the regions of disks that are allocated to each node, and controls token management and quota management.
cluster manager. The node that monitors node status using disk leases, detects failures, drives recovery, and selects file system managers. The cluster manager is the node with the lowest node number among the quorum nodes that are operating at a particular time. control data structures. Data structures needed to manage file data and metadata cached in memory. Control data structures include hash tables and link pointers for finding cached data; lock states and tokens to implement distributed locking; and various flags and sequence numbers to keep track of updates to the cached data.
D
Data Management Application Program Interface (DMAPI). The interface defined by the Open Group's XDSM standard as described in the publication System Management: Data Storage Management (XDSM) API Common Application Environment (CAE) Specification C429, The Open Group, ISBN 1-85912-190-X. deadman switch timer. A kernel timer that works on a node that has lost its disk lease and has outstanding I/O requests. This timer ensures that the node cannot complete the outstanding I/O requests (which would risk causing file system corruption), by causing a panic in the kernel. disk descriptor. A definition of the type of data that the disk contains and the failure group to which this disk belongs. See also failure group.
fragment. The space allocated for an amount of data too small to require a full block. A fragment consists of one or more subblocks.
K
kernel. The part of an operating system that contains programs for such tasks as input/output, management and control of hardware, and the scheduling of user tasks.
G
GPFS cluster. A cluster of nodes defined as being available for use by GPFS file systems. GPFS portability layer. The interface module that each installation must build for its specific hardware platform and Linux distribution. GPFS recovery log. A file that contains a record of metadata activity, and exists for each node of a cluster. In the event of a node failure, the recovery log for the failed node is replayed, restoring the file system to a consistent state and allowing other nodes to continue working.
L
logical volume. A collection of physical partitions organized into logical partitions, all contained in a single volume group. Logical volumes are expandable and can span several physical volumes in a volume group. Logical Volume Manager (LVM). A set of system commands, library routines, and other tools that allow the user to establish and control logical volume (LVOL) storage. The LVM maps data between the logical view of storage space and the physical disk drive module (DDM).
I
ill-placed file. A file assigned to one storage pool, but having some or all of its data in a different storage pool. ill-replicated file. A file with contents that are not correctly replicated according to the desired setting for that file. This situation occurs in the interval between a change in the file's replication settings or suspending one of its disks, and the restripe of the file. indirect block. A block containing pointers to other blocks. IBM Virtual Shared Disk. The subsystem that allows application programs running on different nodes to access a logical volume as if it were local to each node. The logical volume is local to only one of the nodes (the server node). inode. The internal structure that describes the individual files in the file system. There is one inode for each file.
M
metadata. Data structures that contain access information about file data. These include inodes, indirect blocks, and directories. These data structures are not accessible to user applications. metanode. The one node per open file that is responsible for maintaining file metadata integrity. In most cases, the node that has had the file open for the longest period of continuous time is the metanode. mirroring. The process of writing the same data to multiple disks at the same time. The mirroring of data protects it against data loss within the database or within the recovery log. multi-tailed. A disk connected to multiple nodes.
N
namespace. Space reserved by a file system to contain the names of its objects. Network File System (NFS). A protocol, developed by Sun Microsystems, Incorporated, that allows any host in a network to gain access to another host or netgroup and their file directories. Network Shared Disk (NSD). A component for cluster-wide disk naming and access. NSD volume ID. A unique 16 digit hex number that is used to identify and access all NSDs. node. An individual operating-system image within a cluster. Depending on the way in which the computer system is partitioned, it may contain one or more nodes.
J
journaled file system (JFS). A technology designed for high-throughput server environments, which are important for running intranet and other high-performance e-business file servers. junction. A special directory entry that connects a name in a directory of one fileset to the root directory of another fileset.
node descriptor. A definition that indicates how GPFS uses a node. Possible functions include: manager node, client node, quorum node, and nonquorum node. node number. A number that is generated and maintained by GPFS as the cluster is created, and as nodes are added to or deleted from the cluster. node quorum. The minimum number of nodes that must be running in order for the daemon to start. node quorum with tiebreaker disks. A form of quorum that allows GPFS to run with as little as one quorum node available, as long as there is access to a majority of the quorum disks. non-quorum node. A node in a cluster that is not counted for the purposes of quorum determination.
R
Redundant Array of Independent Disks (RAID). A collection of two or more physical disk drives that present to the host an image of one or more logical disk drives. In the event of a single physical device failure, the data can be read or regenerated from the other disk drives in the array due to data redundancy. recovery. The process of restoring access to file system data when a failure has occurred. Recovery can involve reconstructing data or providing alternative routing through a different server. replication. The process of maintaining a defined set of data in more than one location. Replication involves copying designated changes for one location (a source) to another (a target), and synchronizing the data in both locations. rule. A list of conditions and actions that are triggered when certain conditions are met. Conditions include attributes about an object (file name, type or extension, dates, owner, and groups), the requesting client, and the container name associated with the object.
P
policy. A list of file-placement and service-class rules that define characteristics and placement of files. Several policies can be defined within the configuration, but only one policy set is active at one time. policy rule. A programming statement within a policy that defines a specific action to be performed. pool. A group of resources with similar characteristics and attributes. portability. The ability of a programming language to compile successfully on different operating systems without requiring changes to the source code. primary GPFS cluster configuration server. In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data. private IP address. An IP address used to communicate on a private network. public IP address. An IP address used to communicate on a public network.
S
SAN-attached. Disks that are physically attached to all nodes in the cluster using Serial Storage Architecture (SSA) connections or using fibre channel switches. secondary GPFS cluster configuration server. In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data in the event that the primary GPFS cluster configuration server fails or becomes unavailable. Secure Hash Algorithm digest (SHA digest). A character string used to identify a GPFS security key. Serial Storage Architecture (SSA). An American National Standards Institute (ANSI) standard, implemented by IBM, for a high-speed serial interface that provides point-to-point connection for peripherals, such as storage arrays. session failure. The loss of all resources of a data management session due to the failure of the daemon on the session node. session node. The node on which a data management session was created. Small Computer System Interface (SCSI). An ANSI-standard electronic interface that allows personal computers to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners faster and more flexibly than previous interfaces.
Q
quorum node. A node in the cluster that is counted to determine whether a quorum exists. quota. The amount of disk space and number of inodes assigned as upper limits for a specified user, group of users, or fileset. quota management. The allocation of disk blocks to the other nodes writing to the file system, and comparison of the allocated space to quota limits at regular intervals.
snapshot. A copy of changed data in the active files and directories of a file system with the exception of the inode number, which is changed to allow application programs to distinguish between the snapshot and the active files and directories. source node. The node on which a data management event is generated. SSA. See Serial Storage Architecture. stand-alone client. The node in a one-node cluster. storage area network (SAN). A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services. storage pool. A grouping of storage space consisting of volumes, logical unit numbers (LUNs), or addresses that share a common set of administrative characteristics. stripe group. The set of disks comprising the storage assigned to a file system. striping. A storage process in which information is split into blocks (a fixed amount of data) and the blocks are written to (or read from) a series of disks in parallel. subblock. The smallest unit of data accessible in an I/O operation, equal to one thirty-second of a data block. system storage pool. A storage pool containing file system control structures, reserved files, directories, symbolic links, and special devices, as well as the metadata associated with regular files, including indirect blocks and extended attributes. The system storage pool can also contain user data.
U
user storage pool. A storage pool containing the blocks of data that make up user files.
V
virtual file system (VFS). A remote file system that has been mounted so that it is accessible to the local user. virtual shared disk. See IBM Virtual Shared Disk. virtual node (vnode). The structure that contains information about a file system object in a virtual file system (VFS).
T
token management. A system for controlling file access in which each application performing a read or write operation is granted some form of access to a specific block of file data. Token management provides data consistency and controls conflicts. Token management has two components: the token management server, and the token management function. token management function. A component of token management that requests tokens from the token management server. The token management function is located on each cluster node. token management server. A component of token management that controls tokens relating to the operation of the file system. The token management server is located at the file system manager node. twin-tailed. A disk connected to two nodes.
B
backing up a file system 26 backup applications writing 27 backup client 81 backup control information 81 backup server 81 block level incremental backups 349 block size affect on maximum mounted file system size 92, 129 choosing 129
A
access ACL 48 access control information 327 restoring 361 access control lists administering 50 allow type 51 applying 54 changing 50, 54 creating 245 DELETE 52 DELETE_CHILD 52 deleting 50, 54, 157 deny type 51 displaying 49, 54, 194 editing 177 exceptions 54 getting 197 inheritance 51 limitations 54 managing 47 NFS V4 47, 50 NFS V4 syntax 51 setting 48, 49, 53 special names 51 traditional 47 translation 53 accessibility 405 keyboard 405 shortcut keys 405 ACL information 293, 354 restoring 361 retrieving 327 activating quota limit checking 43 adding disks 30 adding nodes to a GPFS cluster 5 administering GPFS file system 1 administration tasks 1 agent node 375 atime 108, 355, 370, 401 attributes useNSDserver 37 authentication method 1
C
cache 57 changing an administration or daemon interface for a node 112 automounter attribute 91 cipherList attribute 91 cluster configuration attributes 90 cnfsMountdPort attribute 91 cnfsNFSDprocs attribute 91 cnfsSharedRoot attribute 91 cnfsVIP attribute 91 configuration attributes on the mmchconfig command 8 dataStructureDump attribute 91 defaultMountDir attribute 91 designation attribute 91 disk states 34 maxblocksize attribute 92 maxFilesToCache attribute 92 maxMBpS attribute 92 maxStatCache attribute 93 nsdServerWaitTimeForMount 93 nsdServerWaitTimeWindowOnMount 93 pagepool attribute 93 prefetchThreads attribute 93 quotas 40 replication 21 tiebreakerDisks 94 tracing attributes 282 uidDomain attribute 94 unmountOnDiskFail attribute 94 usePersistentReserve 94 verbsPorts 95 verbsRdma 95
changing (continued) worker1Threads attribute 95 checking file systems 18 quotas 41 chmod 47 cipherList 77 cipherList attribute changing 91 client node refresh NSD server 238 cluster changing configuration attributes 8, 90 changing tracing attributes 282 cluster configuration attributes changing 8 displaying 209 cluster configuration data 183, 190 cluster configuration server 121, 167, 190 clustered NFS subsystem using 58 cnfsMountdPort attribute changing 91 cnfsNFSDprocs attribute changing 91 cnfsSharedRoot attribute changing 91 cnfsVIP attribute changing 91 commands chmod 47 mmadddisk 30, 64, 400 mmaddnode 5, 68 mmapplypolicy 71 mmauth 76 mmbackup 26, 27, 81 mmchattr 21, 22, 84 mmchcluster 6, 87 mmchconfig 8, 10, 56, 57, 90 mmchdisk 19, 22, 34, 97 mmcheckquota 18, 39, 41, 101 mmchfileset 104 mmchfs 20, 37, 43, 57, 106 mmchmgr 110 mmchnode 112 mmchnsd 36, 116 mmchpolicy 119 mmcrcluster 3, 121 mmcrfileset 125 mmcrfs 15, 43, 50, 57, 127 mmcrnsd 29, 30, 31, 64, 134, 271, 399, 400 mmcrsnapshot 27, 139 mmcrvsd 142 mmcvnsd 31 mmdefedquota 147 mmdefquotaoff 149 mmdefquotaon 151 mmdefragfs 24, 25, 154 mmdelacl 50, 51, 54, 157 mmdeldisk 31, 159, 400 mmdelfileset 162
commands (continued) mmdelfs 17, 165 mmdelnode 5, 167 mmdelnsd 170 mmdelsnapshot 172 mmdf 23, 31, 174 mmeditacl 50, 51, 53, 54, 177 mmedquota 40, 180 mmexportfs 183 mmfsck 18, 19, 31, 185 mmfsctl 190 mmgetacl 49, 53, 54, 194 mmgetstate 197 mmimportfs 200 mmlinkfileset 203 mmlsattr 21, 205 mmlscluster 3, 207 mmlsconfig 209 mmlsdisk 19, 33, 211 mmlsfileset 214 mmlsfs 19, 33, 57, 217 mmlsmgr 11, 221 mmlsmount 17, 223 mmlsnsd 29, 226 mmlspolicy 229 mmlsquota 42, 231 mmlssnapshot 234, 331 mmmount 15, 37, 236 mmnsddiscover 238 mmpmon 240 mmputacl 48, 49, 51, 53, 54, 245 mmquotaoff 43, 248 mmquotaon 43, 250 mmremotecluster 252 mmremotefs 37, 255 mmrepquota 44, 258 mmrestorefs 261 mmrestripe 33 mmrestripefile 264 mmrestripefs 22, 31, 267 completion time 12 mmrpldisk 32, 271 mmshutdown 13, 275 mmsnapdir 277, 308 mmstartup 12, 280 mmtracectl 282 mmumount 16, 17, 285 mmunlinkfileset 287 rcp 66, 69, 73, 82, 88, 95, 99, 104, 109, 110, 117, 123, 125, 132, 136, 140, 145, 155, 160, 163, 165, 168, 171, 173, 175, 184, 187, 192, 198, 201, 203, 208, 209, 212, 215, 218, 222, 227, 234, 262, 265, 268, 273, 276, 278, 280, 288 rsh 66, 69, 73, 82, 88, 95, 99, 104, 109, 110, 117, 123, 125, 132, 136, 140, 145, 155, 160, 163, 165, 168, 171, 173, 175, 184, 187, 192, 198, 201, 203, 208, 209, 212, 215, 218, 222, 227, 234, 262, 265, 268, 273, 276, 278, 280, 288 concurrent virtual shared disks creating 142 configuration see also cluster 417
configuration attributes on the mmchconfig command changing 8 considerations for GPFS applications 401 contact node 167 control file permission 47 creating file systems 127 filesets 125 quota reports 44 ctime 355, 401
D
data replica 108 data replication changing 21 data shipping 375 data shipping mode 377, 380, 382, 383, 385 dataStructureDump attribute changing 91 deactivating quota limit checking 43 default ACL 48 default quotas 40 activating 151 deactivating 149 editing 147 defaultMountDir changing 91 deleting file systems 17 filesets 162 nodes from a GPFS cluster 5 deleting links snapshots 277 deny-write open lock 106 description dmapiEventTimeout 91 dmapiMountTimeout 92 dmapiSessionFailureTimeout 92 designation attribute changing 91 Direct I/O caching policy using 21 directives description 59 subroutine for passing 298 directory entry reading 345 DirInherit 51 disability 405 disabling Persistent Reserve 37 disaster recovery 190 disk access path discovery 238 disk descriptor 64, 98, 134 disk discovery 37 disk parameter changing 97 disk state changing 34, 97
disk state (continued) displaying 33 suspended 97 disk storage pre-allocating 358 disk usage 97, 128, 271 disks adding 30, 64, 271 availability 34 configuration 211 deleting 31, 159 displaying information 29 displaying state 211 ENOSPC 33 failure 22 fragmentation 24 managing 29 maximum number 29 reducing fragmentation 154 replacing 32, 271 status 34 strict replication 33 displaying access control lists 49 disk fragmentation 24 disk states 33 disks 29 filesets 214 NSD belonging to a GPFS cluster 226 quotas 42 DMAPI 108 dmapiEventTimeout description 91 dmapiMountTimeout attribute description 92 dmapiSessionFailureTimeout attribute description 92 dumps storage of information 91
E
editing default quotas 147 enabling Persistent Reserve 37 establishing quotas 40 exceptions to Open Group technical standards 401 execute file permission 47 extended ACLs retrieve 301 set 303, 305 extended file attributes 303, 305 retrieve 301 set 303, 305
F
failure group 97, 128, 271
failureDetectionTime attribute changing 92 FailureGroup 65 file access control information 323, 327, 361 access range 372 ACL information 323, 327, 361 block level incremental read 349 extended attributes 301, 303, 305 reading 343 file access application driven prefetching and write-behind canceling 373, 374 clearing 374 freeing a given range 383 gpfs_fcntl() 59 gpfsDataShipStart_t 377 gpfsDataShipStop_t 380 file attribute extended 335 querying 205 file cache 374, 375 file descriptor closing 332 opening 341 file permissions control 47 GPFS extension 47 file status information 310, 370 file system descriptor quorum 191 file system manager changing nodes 11, 110 designation 91 displaying current 221 displaying node currently assigned 11 file system name 312, 315 file system snapshot handle 296, 308 file system space querying 174 file systems access control lists 47 adding disks 64 AIX export 56 attributes changing 20 displaying 19 backing up 26, 81 block size 127 change manager node 110 changing attributes 106 changing attributes for files 84 checking 18, 185, 200 control request 190 controlled by GPFS 401 creating 127 creating snapshot 139 deleting 17, 165 deleting disks 159 disk fragmentation 24 displaying attributes 217 displaying format version 217
file systems (continued) exporting 55, 183 exporting using NFS 55 file system manager displaying 221 format version 106 formatting 128 fragmentation querying 24 GPFS control 401 handle 308 importing 200 inconsistencies 185 links to snapshots 277 Linux export 55 listing mounted 223 management 15 migrating 106 mounted file system sizes 92, 129 mounting 236 mounting on multiple nodes 15 moving to another cluster 106, 183 mtime value 106 NFS export 56 NFS V4 export 56 querying space 174 quotas 151 rebalancing 267 reducing fragmentation 25, 154 remote 76, 252, 255 repairing 18, 185 restoring with snapshots 261 restripe 65 restriping 22, 267 space, querying 23 unmounting 275, 285 unmounting on multiple nodes 17 which nodes have mounted 17 FileInherit 51 files .rhosts 66, 69, 73, 82, 88, 95, 99, 104, 109, 110, 117, 123, 125, 132, 136, 140, 145, 155, 160, 163, 165, 168, 171, 173, 175, 184, 187, 192, 198, 201, 203, 208, 209, 212, 215, 218, 222, 227, 234, 262, 265, 268, 273, 276, 278, 280, 288 dsm.sys 26 orphaned 185 rebalancing 264 restriping 264 filesets changing 104 creating 125 deleting 162 displaying 214 ID 337 linking 203 name 337 quotas 39 unlinking 287 FlashCopy image 190 full backup 81, 352, 355
G
genkey 77 GPFS stopping 275 GPFS cache 57 GPFS cluster adding nodes 5 changing the GPFS cluster configuration servers creating 3, 121 deleting nodes 5 displaying configuration information 3 managing 3 GPFS cluster configuration data 183 GPFS cluster configuration information displaying 207 GPFS cluster configuration server 167 changing 87 primary 88 secondary 88 GPFS cluster configuration servers changing 6 choosing 121 displaying 3 GPFS cluster data 134 GPFS commands 61 GPFS configuration data 398, 400 GPFS daemon starting 12, 280 stopping 13, 275 GPFS daemon status 197 GPFS directory entry 297 GPFS file system administering 1 GPFS file system snapshot handle 312, 313, 315, 317, 319, 321, 325 free 307 GPFS programming interfaces 291 GPFS subroutines 291 GPFS user exits 397 gpfs_acl_t 293 GPFS_ATTRFLAG_IGNORE_POOL gpfs_fgetattrs() 301 gpfs_fputattrs() 303 gpfs_fputattrswithpathname() 305 GPFS_ATTRFLAG_NO_PLACEMENT gpfs_fgetattrs() 301 gpfs_fputattrs() 303 gpfs_fputattrswithpathname() 305 gpfs_close_inodescan() 294 gpfs_cmp_fssnapid() 295 gpfs_direntx_t 297 gpfs_fcntl() 298 gpfs_fgetattrs() 301 GPFS_ATTRFLAG_IGNORE_POOL 301 GPFS_ATTRFLAG_NO_PLACEMENT 301 gpfs_fputattrs() 303 GPFS_ATTRFLAG_IGNORE_POOL 303 GPFS_ATTRFLAG_NO_PLACEMENT 303 gpfs_fputattrswithpathname() 305 GPFS_ATTRFLAG_IGNORE_POOL 305 GPFS_ATTRFLAG_NO_PLACEMENT 305
gpfs_free_fssnaphandle() 307 gpfs_fssnap_handle_t 308 gpfs_fssnap_id_t 309 gpfs_fstat() 310 gpfs_get_fsname_from_fssnaphandle() 312 gpfs_get_fssnaphandle_by_fssnapid() 313 gpfs_get_fssnaphandle_by_name() 315 gpfs_get_fssnaphandle_by_path() 317 gpfs_get_fssnapid_from_fssnaphandle() 319 gpfs_get_pathname_from_fssnaphandle() 321 gpfs_get_snapdirname() 323 gpfs_get_snapname_from_fssnaphandle() 325 gpfs_getacl() 327 gpfs_iattr_t 329 gpfs_iclose 27 gpfs_iclose() 332 gpfs_ifile_t 334 gpfs_iopen 27 gpfs_iopen() 341 gpfs_iread 27 gpfs_iread() 343 gpfs_ireaddir 27 gpfs_ireaddir() 345 gpfs_ireadlink() 347 gpfs_ireadx() 349 gpfs_iscan_t 351 gpfs_next_inode 27 gpfs_next_inode() 352 gpfs_opaque_acl_t 354 gpfs_open_inodescan 27 gpfs_open_inodescan() 355 gpfs_prealloc() 358 gpfs_putacl() 361 gpfs_quotactl() 39, 363 gpfs_quotaInfo_t 366 gpfs_seek_inode() 368 gpfs_stat() 370 gpfsAccessRange_t 372 gpfsCancelHints_t 373 gpfsClearFileCache_t 374 gpfsDataShipMap_t 375 gpfsDataShipStop_t 380 gpfsFcntlHeader_t 382 gpfsFreeRange_t 383 gpfsGetFilesetName_t 384 gpfsGetReplication_t 385 gpfsGetSnapshotName_t 387 gpfsGetStoragePool_t 388 gpfsMultipleAccessRange_t 389 gpfsRestripeData_t 391 gpfsSetReplication_t 392 gpfsSetStoragePool_t 394 grace period changing 180 setting 180 group quota 147, 149, 151, 181, 231, 250, 258
H
High Performance Switch (HPS) 31
I
I/O caching policy changing 84 in-doubt value 101, 231, 258 incremental backup 81, 352, 355 inheritance of ACLs 51 InheritOnly 51 inode attributes 329 inode file handle 332, 334 inode number 341, 347, 352, 355, 368 inode scan 349, 352, 368 closing 294 opening 355 inode scan handle 294, 351 iscan handle 294
K
kernel memory 84
L
license inquiries 407 linking filesets 203 links to snapshots creating 277 deleting 277 LookAt message retrieval tool xii lost+found directory 18
M
managing a GPFS cluster 3 GPFS quotas 39 maxblocksize attribute changing 92 maxFilesToCache attribute changing 92 maximum number of files changing 106 displaying 217 maxMBpS attribute changing 92 maxStatCache attribute changing 93 message retrieval tool, LookAt xii metadata 85 metadata replica 107 metadata replication changing 21 mmadddisk 30, 64, 400
mmaddnode 5, 68 mmapplypolicy 71 mmauth 76 mmbackup 26, 81 mmchattr 21, 84 mmchcluster 6, 87 mmchconfig 8, 56, 57, 90 mmchdisk 34, 97 mmcheckquota 39, 41, 101 mmchfileset 104 mmchfs 20, 43, 57, 106 mmchmgr 110 mmchnode 112 mmchnsd 116 mmchpolicy 119 mmcrcluster 3, 121 mmcrfileset 125 mmcrfs 15, 43, 50, 57, 127 mmcrnsd 30, 134, 271, 399, 400 mmcrsnapshot 27, 139 mmcrvsd 142 mmdefedquota 147 mmdefquotaoff 149 mmdefquotaon 151 mmdefragfs 24, 25, 154 mmdelacl 50, 51, 54, 157 mmdeldisk 159, 400 mmdelfileset 162 mmdelfs 17, 165 mmdelnode 5, 167 mmdelnsd 170 mmdelsnapshot 172 mmdf 23, 174 mmeditacl 50, 51, 53, 54, 177 mmedquota 40, 180 mmexportfs 183 MMFS_FSSTRUCT 185 MMFS_SYSTEM_UNMOUNT 186 mmfsck 185 mmfsctl 190 mmgetacl 49, 53, 54, 194 mmgetstate 197 mmimportfs 200 mmlinkfileset 203 mmlsattr 21, 205 mmlscluster 3, 207 mmlsconfig 209 mmlsdisk 33, 211 mmlsfileset 214 mmlsfs 19, 57, 217 mmlsmgr 11, 221 mmlsmount 17, 223 mmlsnsd 29, 226 mmlspolicy 229 mmlsquota 42, 231 mmlssnapshot 234, 331 mmmount 15, 236 mmnsddiscover 238 mmpmon 240 mmputacl 48, 49, 51, 53, 54, 245 mmquotaoff 43, 248
mmquotaon 43, 250 mmremotecluster 252 mmremotefs 255 mmrepquota 44, 258 mmrestorefs 261 mmrestripefile 264 mmrestripefs 22, 267 completion time 12 mmrpldisk 271 mmshutdown 13, 275 mmsnapdir 277, 308 mmstartup 12, 280 mmtracectl 282 mmumount 16, 17, 285 mmunlinkfileset 287 modifying file system attributes 20 mount point directory 128 mounting file systems 15 mounting a file system 106 an NFS exported file system 55 mtime 129, 355, 370, 401
NSD server node choosing 134 NSD server nodes changing 36, 116 NSD volume ID 134, 170 nsdServerWaitTimeForMount changing 93 nsdServerWaitTimeWindowOnMount changing 93
O
optimizing file access 59 options always 16 asfound 16 asneeded 16 atime 16 mount 15 mtime 16 never 16 noatime 16 nomtime 16 nosyncnfs 16 syncnfs 16 useNSDserver 16 orphaned files 18
N
Network File System (NFS) cache usage 57 exporting a GPFS file system 55 interoperability with GPFS 54 synchronous writes 57 unmounting a file system 57 Network Shared Disks (NSDs) 399 automatic creation 200 changing configuration attributes 36, 116 creating 134 deleting 170 displaying 226 NFS V4 47, 106, 128 NFS V4 ACL 107, 130, 157, 177, 178, 194, 195, 245 GPFS exceptions 402 special names 402 NFS V4 protocol GPFS exceptions 402 NIS automount 57 node descriptor 68, 122 node designation 68, 122 node quorum 10 node quorum with tiebreaker 6, 10 nodes adding to a cluster 68 adding to a GPFS cluster 5 assigned as file system manager 11 deleting from a cluster 167 which have file systems mounted 17 notices 407 NSD path 238 NSD failback 37 NSD server 12, 37, 65, 167, 200 NSD server list changing 36, 116
P
pagepool attribute changing 93 partitioning file blocks 377 patent information 407 peer recovery cluster 190 Peer-to-Peer Remote Copy (PPRC) 190 performance communicating file access patterns 59 monitoring 240 persistent reserve failureDetectionTime attribute 92 Persistent Reserve disabling 37 enabling 37 policy applying 71 PR 37 prefetchThreads attribute changing 93 primary GPFS cluster configuration server 122 problem determination information placement of 91 public/private key pair 76
Q
querying disk fragmentation 24 file system fragmentation 24 replication 21 space 23
quorum 191 quorum node 121, 167, 197 quota files replacing 44, 101 quota information 366 quotas activating 250 activating limit checking 43 changing 40, 180, 363 checking 41, 101 creating reports 258 deactivating 248 deactivating quota limit checking 43 default values 40 disabling 40 displaying 42, 231 enabling 39 establishing 40 fileset 39 group 39 reports, creating 44 setting 180 user 39
rsh 66, 69, 73, 82, 88, 95, 99, 104, 109, 110, 117, 123, 125, 132, 136, 140, 145, 155, 160, 163, 165, 168, 171, 173, 175, 184, 187, 192, 198, 201, 203, 208, 209, 212, 215, 218, 222, 227, 234, 262, 265, 268, 273, 276, 278, 280, 288
S
secondary GPFS cluster configuration server 122 server node restoring NSD path 238 server node, NSD choosing 134 setting access control lists 48 shortcut keys keyboard 405 snapshot directory 323 snapshot handle 308, 312, 313, 315, 317, 319, 321, 325 free 307 snapshot ID 308, 309, 313, 319 comparing 295 internal 331 snapshot name 315, 325 snapshots creating 139 deleting 172 directory 139 displaying 234 listing 234 maximum number 139 partial 139 restoring a file system 261 sparse file 349 specifying subnets attribute 93 standards, exceptions to 401 starting GPFS 12, 280 before starting 12 stopping GPFS 13 storage pre-allocating 358 storage pools ID 339 name 339 strict 107, 130 Stripe Group Manager see File System Manager 417 structures gpfs_acl_t 293 gpfs_direntx_t 297 gpfs_fssnap_handle_t 308 gpfs_fssnap_id_t 309 gpfs_iattr_t 329 gpfs_ifile_t 334, 335 gpfs_iscan_t 351 gpfs_opaque_acl_t 354 gpfs_quotaInfo_t 366 gpfsAccessRange_t 372 gpfsCancelHints_t 373 gpfsClearFileCache_t 374 gpfsDataShipMap_t 375
R
RAID stripe size 127 rcp 66, 69, 73, 82, 88, 95, 99, 104, 109, 110, 117, 123, 125, 132, 136, 140, 145, 155, 160, 163, 165, 168, 171, 173, 175, 184, 187, 192, 198, 201, 203, 208, 209, 212, 215, 218, 222, 227, 234, 262, 265, 268, 273, 276, 278, 280, 288 read file permission 47 rebalancing a file 264 rebalancing a file system 267 refresh NSD server mmnsddiscover 238 remote copy command changing 87 choosing 121 remote shell command changing 87 choosing 121 repairing a file system 18 replacing disks 32 replicated cluster 400 replication 107, 130 changing 21 querying 21, 205 replication attributes changing 84 replication factor 84 requirements administering GPFS 1 restoring NSD path mmnsddiscover 238 restripe see rebalance 417 restriping a file 264 restriping a file system 22, 267 root authority 1
structures (continued) gpfsDataShipStart_t 377 gpfsDataShipStop_t 380 gpfsFcntlHeader_t 382 gpfsFreeRange_t 383 gpfsGetFilesetName_t 384 gpfsGetReplication_t 385 gpfsGetSnapshotName_t 387 gpfsGetStoragePool_t 388 gpfsMultipleAccessRange_t 389 gpfsRestripeData_t 391 gpfsSetReplication_t 392 gpfsSetStoragePool_t 394 subnets attribute specifying 93 subroutines gpfs_close_inodescan() 294 gpfs_cmp_fssnapid() 295 gpfs_fcntl() 59, 298 gpfs_fgetattrs() 301 gpfs_fputattrs() 303 gpfs_fputattrswithpathname() 305 gpfs_free_fssnaphandle() 307 gpfs_fstat() 310 gpfs_get_fsname_from_fssnaphandle() 312 gpfs_get_fssnaphandle_by_fssnapid() 313 gpfs_get_fssnaphandle_by_name() 315 gpfs_get_fssnaphandle_by_path() 317 gpfs_get_fssnapid_from_fssnaphandle() 319 gpfs_get_pathname_from_fssnaphandle() 321 gpfs_get_snapdirname() 323 gpfs_get_snapname_from_fssnaphandle() 325 gpfs_getacl() 327 gpfs_iclose 27 gpfs_iclose() 332 gpfs_igetattrs() 335 gpfs_igetfilesetname() 337 gpfs_igetstoragepool() 339 gpfs_iopen 27 gpfs_iopen() 341 gpfs_iread 27 gpfs_iread() 343 gpfs_ireaddir 27 gpfs_ireaddir() 345 gpfs_ireadlink() 347 gpfs_ireadx() 349 gpfs_next_inode 27 gpfs_next_inode() 352 gpfs_open_inodescan 27 gpfs_open_inodescan() 355 gpfs_prealloc() 358 gpfs_putacl() 361 gpfs_quotactl() 39, 363 gpfs_seek_inode() 368 gpfs_stat() 370 symbolic link reading 347 syncFSconfig 190
T
tiebreaker disk 170 tiebreakerDisks changing 94 time it will take to detect that a node has failed failureDetectionTime attribute 92 timeout period 275 Tivoli Storage Manager (TSM) documentation 26 setup requirements 26 using the mmbackup command 81 version 26 traceFileSize changing 283 traceRecycle changing 282 trademarks 408 traditional ACL 107, 130, 177, 178, 194, 195
U
UID domain 123 uidDomain attribute changing 94 unlinking filesets 287 unmounting a file system 16 NFS exported 57 on multiple nodes 17 unmountOnDiskFail attribute changing 94 useNSDserver values 16 usePersistentReserve changing 94 user exits mmsdrbackup 398 nsddevices 399 syncfsconfig 400 user quota 148, 149, 151, 181, 231, 250, 258 user space buffer 84 using a clustered NFS subsystem 58
V
verbsPorts changing 95 verbsRdma changing 95 virtual shared disk server 143 virtual shared disks 134 creating 142
W
worker1Threads attribute changing 95 write file permission 47