AIX Admin PDF
SG24-7414-00
Liang Dong
Costa Lochaitis
Allen Oh
Sachinkumar Patil
Andrew Young
ibm.com/redbooks
Note: Before using this information and the product it supports, read the information in
Notices on page xv.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Chapter 1. Application development and system debug. . . . . . . . . . . . . . . 1
1.1 Editor enhancements (5300-05) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 System debugger enhancements (5300-05) . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 The $stack_details variable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 The frame subcommand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 The addcmd subcommand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.4 Deferred event support ($deferevents variable) . . . . . . . . . . . . . . . . . 7
1.2.5 Regular expression symbol search /, and ? subcommands . . . . . . . . 7
1.2.6 Thread level breakpoint and watchpoint support . . . . . . . . . . . . . . . . 8
1.2.7 A dump subcommand enhancement . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.8 Consistency checkers (5300-03). . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 The trcgen and trcgenk command (5300-05) . . . . . . . . . . . . . . . . . . . . . . 11
1.4 xmalloc debug enhancement (5300-05) . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Stack execution disable protection (5300-03) . . . . . . . . . . . . . . . . . . . . . . 12
1.6 Environment variable and library enhancements . . . . . . . . . . . . . . . . . . . 13
1.6.1 Environment variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6.2 LIBRARY variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.3 Named shared library areas (5300-03) . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.4 Modular I/O library (5300-05) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.5 POSIX prioritized I/O support (5300-03) . . . . . . . . . . . . . . . . . . . . . . 18
1.7 Vector instruction set support (5300-03) . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.7.1 What is SIMD? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7.2 Technical details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7.3 Compiler support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8 Raw socket support for non-root users (5300-05). . . . . . . . . . . . . . . . . . . 23
Tables
1-1 dbx subcommands for thread level debugging . . . . . . . . . . . . . . . . . . . . . . 8
2-1 API fcntl parameter details for file system freeze and thaw feature . . . . . 29
2-2 The rollback command parameter details . . . . . . . . . . . . . . . . . . . . . . . . . 30
2-3 Commonly used flags for mirscan command . . . . . . . . . . . . . . . . . . . . . . 32
2-4 Output columns from mirscan command . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-1 LMT memory consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4-1 Flags of ps command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4-2 Commonly used usrck command flags and their descriptions . . . . . . . . . 69
4-3 Page size support by platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4-4 The ld and ldedit command arguments for page size specification . . . . . 72
4-5 The vmstat command new flags and descriptions . . . . . . . . . . . . . . . . . . 76
4-6 The acctrpt command flags for process accounting . . . . . . . . . . . . . . . . . 78
4-7 The acctprt command fields for process output . . . . . . . . . . . . . . . . . . . . 79
4-8 The acctrpt command flags for system reporting . . . . . . . . . . . . . . . . . . . 80
4-9 The acctprt command fields for system output . . . . . . . . . . . . . . . . . . . . . 81
4-10 The acctprt command fields for transaction output . . . . . . . . . . . . . . . . . 84
5-1 System NFS calls summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5-2 Pending NFS calls summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5-3 Global region fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5-4 Partition region fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5-5 Command options and their descriptions . . . . . . . . . . . . . . . . . . . . . . . . 106
5-6 The xmwlm and topas command flags . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5-7 The topas specific command options . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5-8 Possible suffixes for iostat -D command fields . . . . . . . . . . . . . . . . . . . . 112
5-9 The hpmcount command parameters details . . . . . . . . . . . . . . . . . . . . . 116
5-10 The hpmstat command parameter details. . . . . . . . . . . . . . . . . . . . . . . 118
5-11 Statistics fields and their descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6-1 NFS commands to change the server grace period . . . . . . . . . . . . . . . . 134
6-2 Common problems and actions to troubleshoot NDAF. . . . . . . . . . . . . . 162
7-1 The geninv command parameter details. . . . . . . . . . . . . . . . . . . . . . . . . 188
7-2 The multibos command flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7-3 Supported migration paths matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Examples
1-1 Sample program used for explaining enhanced features of dbx. . . . . . . . . 2
1-2 The where subcommand default output . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-3 The where subcommand output once $stack_details is set . . . . . . . . . . . . 4
1-4 The addcmd subcommand example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1-5 A dump subcommand example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1-6 Available consistency checkers with the kdb check subcommand . . . . . . 11
2-1 Freeze a file system using the chfs command . . . . . . . . . . . . . . . . . . . . . 29
2-2 Report output from mirscan command . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4-1 Mail format for cron internal errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4-2 Mail format for cron jobs completion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4-3 A cron job completion message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4-4 The ps command new flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4-5 The usrck -l command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4-6 The usrck -l user command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4-7 The usrck -l -b command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4-8 Output of the pagesize -af command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4-9 The svmon -G command showing multiple page size information . . . . . . 75
4-10 The svmon -P command showing multiple page size information . . . . . 75
4-11 The vmstat command output using the -p and -P flags. . . . . . . . . . . . . . 76
4-12 The acctrpt command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4-13 The acctrpt command output filtered for command projctl and user root 79
4-14 The acctrpt command system output . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4-15 The acctrpt command transaction report . . . . . . . . . . . . . . . . . . . . . . . . 83
4-16 The geninstall -L command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4-17 The gencopy -L -d command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4-18 Default lsldap output with no flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4-19 Using lsldap to show user entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4-20 Using lsldap by root to show entry for user3 . . . . . . . . . . . . . . . . . . . . . . 90
4-21 Normal user using lsldap to view user3 . . . . . . . . . . . . . . . . . . . . . . . . . 91
5-1 Example output of svmon -G. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5-2 Example output of vmstat -p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5-3 Example output of svmon -P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5-4 Sample output of curt report (partial) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5-5 Sample output of netpmon (partial) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5-6 The topas command cross partition monitor panel . . . . . . . . . . . . . . . . . 102
5-7 The topas command detailed partition view without HMC data . . . . . . . 104
5-8 The topas command detailed partition view with HMC data . . . . . . . . . . 105
5-9 ASCII formatted output generated by topas out from xmwlm data file . . 108
5-10 Summary output generated by topasout from topas -R data file. . . . . . 108
5-11 Output generated by topasout from topas -R data files . . . . . . . . . . . . 109
5-12 The iostat -D command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5-13 The iostat -D command interval output . . . . . . . . . . . . . . . . . . . . . . . . . 112
5-14 The iostat -aD command sample output . . . . . . . . . . . . . . . . . . . . . . . . 113
5-15 The hpmcount command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5-16 The hpmstat command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5-17 Using the mempool subcommand to show a systems memory pools . 121
5-18 The fcstat command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6-1 The ckfilt command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6-2 The login.cfg file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6-3 The dmf command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
7-1 Example of geninv usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
7-2 The multibos -s command output to setup standby BOS . . . . . . . . . . . . 197
7-3 The multibos -m -X command output . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
7-4 Record with disk names only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
7-5 Record with physical locations only . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
7-6 Record with PVIDs only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Redbooks (logo)
pSeries
AFS
AIX 5L
AIX
BladeCenter
DFS
Enterprise Storage Server
General Parallel File System
Geographically Dispersed Parallel Sysplex
GDPS
GPFS
HACMP
IBM
Parallel Sysplex
PowerPC
POWER
POWER Hypervisor
POWER3
POWER4
POWER5
POWER5+
PTX
Redbooks
System p
System p5
Tivoli
Preface
This IBM Redbook focuses on the differences introduced in AIX 5L Version
5.3 since the initial AIX 5L Version 5.3 release. It is intended to help system
administrators, developers, and users understand these enhancements and
evaluate potential benefits in their own environments.
Since AIX 5L Version 5.3 was introduced, many new features have been added,
including JFS2, LDAP, trace and debug, installation and migration, NFSv4, and
performance tools enhancements. There are many other improvements offered
through updates for AIX 5L Version 5.3, and you can explore them in this
redbook.
For clients who are not familiar with the base enhancements of AIX 5L Version
5.3, a companion publication, AIX 5L Differences Guide Version 5.3 Edition,
SG24-7463, is available.
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. JN9B Building 905 Internal Zip 9053D004
11501 Burnet Road
Austin, TX 78758-3493
Chapter 1.
Application development and system debug
In the area of application development, this chapter covers the following major
topics:
Editor enhancements (5300-05)
System debugger enhancements (5300-05)
The trcgen and trcgenk command (5300-05)
xmalloc debug enhancement (5300-05)
Stack execution disable protection (5300-03)
Environment variable and library enhancements
Vector instruction set support (5300-03)
Raw socket support for non-root users (5300-05)
IOCP support for AIO (5300-05)
Example 1-1 Sample program used for explaining enhanced features of dbx
#include<stdio.h>
int add(int x, int y)
{
int z=0;
for(;y>=1;y--)
z=z+x;
return z;
}
int mul(int x, int y)
{
int z=0;
z=add(x,y);
return z;
}
int main(){
int result;
result=mul(100,5);
printf("Final result is %d\n", result);
}
#cc -g example1.c
#dbx a.out
Type 'help' for help.
reading symbolic information ...
(dbx) stop at 5
[1] stop at 5
(dbx) stop at 12
[2] stop at 12
(dbx) stop at 19
[3] stop at 19
(dbx) run
[3] stopped in main at line 19
19
result=mul(100,5);
(dbx) where
main(), line 19 in example1.c
(dbx) cont
[2] stopped in mul at line 12
12
z=add(x,y);
(dbx) cont
[1] stopped in add at line 5
5
for(;y>=1;y--)
(dbx) where
add(x = 100, y = 5), line 5 in example1.c
mul(x = 100, y = 5), line 12 in example1.c
main(), line 19 in example1.c
(dbx)
When set, the $stack_details variable displays the frame number and the
register set for each active function or procedure displayed by the where
subcommand.
By default, the $stack_details variable is disabled in dbx. Example 1-2 shows the
output of the where subcommand without setting the $stack_details variable.
Example 1-2 The where subcommand default output
(dbx) where
add(x = 100, y = 5), line 5 in example1.c
mul(x = 100, y = 5), line 12 in example1.c
main(), line 19 in example1.c
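You can enable it with the dbx set subcommand:

(dbx) set $stack_details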
Example 1-3 shows the output of the where subcommand once the $stack_details
variable is set.
Example 1-3 The where subcommand output once $stack_details is set
$stkp:0x2ff22c30
$r14:0x00000001
$r15:0x2ff22d00
$r16:0x2ff22d08
$r17:0x00000000
$r18:0xdeadbeef
$r19:0xdeadbeef
$r20:0xdeadbeef
$r21:0xdeadbeef
$r22:0xdeadbeef
$r23:0xdeadbeef
$r24:0xdeadbeef
$r25:0xdeadbeef
$r26:0xdeadbeef
$r27:0xdeadbeef
$r28:0xdeadbeef
$r29:0xdeadbeef
$r30:0xdeadbeef
$r31:0x200005c0
$iar:0x1000045c $link:0x100001ec
[unset $noflregs to view floating point registers]
2 main(), line 19 in "example1.c"
(dbx)
$ctr:0xdeadbeef
$xer:0x20000020
$mq:0xdeadbeef
Condition status = 0:e 1:l 2:e 3:e 4:e 5:l 6:l 7:e
[unset $noflregs to view floating point registers]
[unset $novregs to view vector registers]
in add at line 5
0x1000037c (add+0x14) 8061006c
lwz r3,0x6c(r1)
The dbx command has added some new subcommands that enable you to work
with individual attribute objects, condition variables, mutexes, and threads. They
are provided in Table 1-1.
Table 1-1 dbx subcommands for thread level debugging

dbx subcommand   Description
attribute        Displays information about attribute objects.
condition        Displays information about condition variables.
mutex            Displays information about mutexes.
thread           Displays information about threads, selects the current
                 thread, and holds and releases threads.
tstophwp         Sets a thread-level hardware watchpoint stop.
ttracehwp        Sets a thread-level hardware watchpoint trace.
tstop            Sets a source-level breakpoint stop for a thread.
tstopi           Sets an instruction-level breakpoint stop for a thread.
ttrace           Sets a source-level trace for a thread.
ttracei          Sets an instruction-level trace for a thread.
tnext            Runs the current thread up to the next source line.
tnexti           Runs the current thread up to the next machine instruction.
tstep            Runs the current thread one source line.
tstepi           Runs the current thread one machine instruction.
tskip            Skips breakpoints for the current thread.
A number of subcommands that do not work with threads directly are also
affected when used to debug a multithreaded program.
For further details of the thread-level debugging with thread-level breakpoint and
watchpoint, refer to the man page of the dbx command.
dump "s*"
To redirect names and values of variables in the current procedure to the
var.list file, Enter:
dump > var.list
Example 1-5 on page 10 shows the output of dump subcommand in dbx for a
minimal C language program:
Example 1-5 A dump subcommand example
# dbx a.out
Type 'help' for help.
reading symbolic information ...
(dbx) step
stopped in main at line 19
19
result=mul(100,5);
(dbx) dump
main(), line 19 in "example1.c"
result = 0
__func__ = "main"
(dbx) dump "z*"
example1.mul.z
example1.add.z
(dbx) dump "mu*"
mul
(dbx) dump "mai*"
main
(dbx)
Example 1-6 Available consistency checkers with the kdb check subcommand

(0)> check
Please specify a checker name:

Kernel Checkers                Description
--------------------------------------------------------------------------------
proc                           Validate proc and pvproc structures
thread                         Validate thread and pvthread structures

Kernext Checkers               Description
--------------------------------------------------------------------------------
The sedmgr command manages the system-wide stack execution disable (SED)
mode of operation as well as the executable file-based SED flags. The SED
facility is available only with the AIX 5L 64-bit kernel. The syntax is as
follows:
sedmgr [-m {off | all | select | setidfiles}] [-o {on | off}]
[-c {system | request | exempt} {file_name | file_group}]
[-d {file_name | directory_name}] [-h]
You can use the command to enable and control the level of stack execution
performed on the system. This command can also be used to set the various
flags in an executable file, controlling the stack execution disable. Any changes to
the system wide mode setting will take effect only after a system reboot.
If invoked without any parameters, the sedmgr command will display the current
setting in regards to the stack execution disable environment.
To change the system-wide SED mode flag to setidfiles and the SED control flag
to on, enter:
sedmgr -m setidfiles -o on
With this command example, the setidfiles option sets the mode of operation so
that operating system performs stack execution disable for the files with the
request SED flag set and enables SED for the executable files with the following
characteristics:
setuid files owned by root
setid files with primary group as system or security
To change the SED checking flag to exempt for the plans file, enter:
sedmgr -c exempt plans
To change the SED checking flag to request for all the executable files marked
as TCB files, use the following command:
sedmgr -c request TCB_files
To display the SED checking flag of the plans file, enter:
sedmgr -d plans
DR_MEM_PERCENT (5300-03)
Dynamic addition or removal of memory from an LPAR running multiple dynamic
LPAR-aware programs can result in conflict for resources. By default, each
program is notified equally about the resource change. For example, if 1 GB of
memory is removed from an LPAR running two dynamic-aware programs, then,
by default, each program is notified that 1 GB of memory has been removed.
Because the two programs are generally unaware of each other, both of them will
scale down their memory use by 1 GB, leading to inefficiency. A similar efficiency
problem can also occur when new memory is added.
To overcome this problem, AIX 5L now allows application scripts to be installed
with a percentage factor that indicates the percentage of the actual memory
resource change. The system then notifies the application in the event of a
dynamic memory operation. While installing the application scripts using the
drmgr command, you can specify this percentage factor using the
DR_MEM_PERCENT name=value pair. The application script will need to output
this name=value pair when it is invoked by the drmgr command with the scriptinfo
subcommand. The value must be an integer between 1 and 100. Any value
outside of this range is ignored, and the default value, which is 100, is used.
Additionally, you can also set this name=value pair as an environment variable at
the time of installation. During installation, the value from the environment
variable, if set, will override the value provided by the application script. Similarly,
in applications using the SIGRECONFIG signal handler and dr_reconfig() system
call, you can control the memory dynamic LPAR notification by setting the
DR_MEM_PERCENT name=value pair as an environment variable before the
application begins running. This value, however, cannot be changed without
restarting the application.
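The following is a minimal sketch of the signal-handler path described above.
It assumes the documented SIGRECONFIG signal and the dr_reconfig() system call
with the DR_QUERY and DR_RECONFIG_DONE flags from <sys/dr.h>; the exact
dr_info_t field layout varies by AIX level, so no fields are dereferenced here.

#include <signal.h>
#include <unistd.h>
#include <sys/dr.h>

static void reconfig_handler(int sig)
{
    dr_info_t info;

    /* Query the dynamic reconfiguration event that raised SIGRECONFIG.
       If DR_MEM_PERCENT was exported before the application started, the
       memory change reported for this process is scaled accordingly. */
    if (dr_reconfig(DR_QUERY, &info) != 0)
        return;

    /* ... scale memory use up or down to match the reported change ... */

    /* Tell the kernel that this application has finished adjusting. */
    dr_reconfig(DR_RECONFIG_DONE, &info);
}

int main(void)
{
    signal(SIGRECONFIG, reconfig_handler);
    for (;;)
        pause();        /* wait for dynamic LPAR events */
    return 0;
}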
AIXTHREAD_READ_GUARDPAGES (5300-03)
Beginning with AIX 5L Version 5.3 release 5300-03, the
AIXTHREAD_READ_GUARDPAGES environment variable is added to the
AIX 5L system. The AIXTHREAD_READ_GUARDPAGES environment variable
enables or disables read access to the guard pages that are added to the end of
the pthread stack. It can be set as follows:
#AIXTHREAD_READ_GUARDPAGES={ON|OFF};
#export AIXTHREAD_READ_GUARDPAGES
The change takes effect immediately in the shell session and will be effective for
its duration.
You can make the change permanent on a system by adding the
AIXTHREAD_READ_GUARDPAGES={ON|OFF} command to the
/etc/environment file.
When the last process attached to the segment exits, the area will be
dynamically removed. Multiple named shared library areas can be active on the
system simultaneously, provided they have a unique name. Named shared
library areas can only be used by 32-bit processes.
By default, the named shared library area works in the same way as the global
area, designating one segment for shared library data and one for text. However,
it is possible to use an alternate memory model that dedicates both segments to
shared library text. To do this, you can specify the doubletext32 option for the
named shared library area:
LDR_CNTRL=NAMEDSHLIB=shared1,doubletext32 dbstartup.sh
This is useful for process groups that need to use more than 256 MB for shared
library text; however, it does mean that library data will not be preloaded,
which may have additional performance implications. This option should
therefore be considered on a case-by-case basis.
MIO architecture
The Modular I/O library consists of five I/O modules that may be invoked at
runtime on a per-file basis. The modules currently available are:
mio module
pf module
trace module
recov module
aix module
The default modules are mio and aix; the other modules are optional.
The POSIX AIO control block (struct aiocb) contains the following fields (a
usage sketch follows the list):

aio_fildes
aio_offset
*aio_buf
aio_nbytes
aio_reqprio
aio_sigevent
aio_lio_opcode
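As a sketch of how these fields are used, the following submits one prioritized
asynchronous write using the POSIX AIO signatures. On AIX 5L, compile-time
settings select between the legacy and POSIX AIO interfaces, so treat the
compilation environment (and the priority value) as assumptions; the output
file name is illustrative.

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    static char msg[] = "hello, prioritized AIO\n";
    struct aiocb cb;
    int fd = open("/tmp/aio_demo.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;                /* target file descriptor */
    cb.aio_offset = 0;                 /* write at the start of the file */
    cb.aio_buf = msg;                  /* data source */
    cb.aio_nbytes = sizeof(msg) - 1;   /* bytes to write */
    cb.aio_reqprio = 1;                /* deprioritize the request; must lie
                                          between 0 and AIO_PRIO_DELTA_MAX */

    if (aio_write(&cb) != 0) {
        perror("aio_write");
        return 1;
    }
    while (aio_error(&cb) == EINPROGRESS)
        ;                              /* busy-wait for brevity only */
    printf("wrote %ld bytes\n", (long)aio_return(&cb));
    return 0;
}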
Note:
In order to generate vector-enabled code, you should explicitly specify
the -qenablevmx option.
In order to use the -qvecnvol option, you need bos.adt.include version
5.3.0.30 or greater to be installed on your system.
When the -qnoenablevmx compiler option is in effect, the -qnovecnvol
option is ignored.
The -qnovecnvol option performs independently from -qhot=simd | nosimd,
-qaltivec | -qnoaltivec and also the vector directive NOSIMD.
On AIX 5.3 with 5300-03, by default, 20 volatile registers (vr0-vr19) are
used, and 12 non-volatile vector registers (vr20-vr31) are not used. You
can use these registers only when -qvecnvol is in effect.
The -qvecnvol option should be enabled only when no legacy code that
saves and restores non-volatile registers is involved. Using -qvecnvol and
linking with legacy code may result in runtime failure.
option is in effect. Otherwise, the compiler will ignore -qaltivec and issue a
warning message. Similarly, if the -qnoenablevmx option is in effect, the
compiler will ignore -qaltivec and issue a warning message.
When -qaltivec is in effect, the following macros are defined:
__ALTIVEC__ is defined to 1.
__VEC__ is defined to 10205.
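A small example of the vector extensions these options enable is shown below.
The compiler invocation in the comment is an assumption pieced together from
the options discussed above; vec_splat_s32() and vec_add() are standard
AltiVec intrinsics, and the __VEC__ guard mirrors the macro described earlier.

/* vadd.c - compile sketch: xlc -qarch=ppc970 -qenablevmx -qaltivec vadd.c */
#include <stdio.h>
#if defined(__GNUC__) && !defined(__IBMC__)
#include <altivec.h>   /* GCC needs this header; XL C builds the intrinsics in */
#endif

int main(void)
{
#ifdef __VEC__
    /* Splat constants into all four lanes, then add element-wise. */
    vector signed int a = vec_splat_s32(2);     /* {2, 2, 2, 2} */
    vector signed int b = vec_splat_s32(5);     /* {5, 5, 5, 5} */
    union { vector signed int v; int e[4]; } r; /* union preserves alignment */
    int i;

    r.v = vec_add(a, b);
    for (i = 0; i < 4; i++)
        printf("lane %d: %d\n", i, r.e[i]);     /* prints 7 in every lane */
#else
    printf("compiled without vector support\n");
#endif
    return 0;
}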
The socket types are defined in <sys/socket.h> as follows:

#define SOCK_STREAM      1    /*virtual circuit*/
#define SOCK_DGRAM       2    /*datagram*/
#define SOCK_RAW         3    /*raw socket*/
#define SOCK_RDM         4    /*reliably-delivered message*/
#define SOCK_CONN_DGRAM  5    /*connection datagram*/
After AIX 5L Version 5.3 with TL 5300-05, raw sockets can be opened by
non-root users who have the CAP_NUMA_ATTACH capability. For non-root raw
socket access, the chuser command assigns the CAP_NUMA_ATTACH
capability, along with CAP_PROPAGATE.
For a user who is to be permitted raw socket use, the system administrator
should set the CAP_NUMA_ATTACH capability. When a non-root user opens a raw
socket, the system checks whether this capability is set: if it is, raw socket
access is permitted; otherwise, it is denied.
The capabilities are assigned to a user using the syntax:
# chuser "capabilities=CAP_NUMA_ATTACH,CAP_PROPAGATE" <user>
This command adds the given capabilities to the user in /etc/security/user file.
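A quick way to verify that the capability assignment worked is to attempt to
open a raw socket as that user. This sketch uses the standard BSD sockets API,
with IPPROTO_ICMP chosen purely as an illustrative protocol:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* As a non-root user, this succeeds only if the account holds the
       CAP_NUMA_ATTACH capability described above. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);

    if (s < 0) {
        perror("socket(AF_INET, SOCK_RAW, IPPROTO_ICMP)");
        return 1;
    }
    printf("raw socket opened successfully\n");
    return 0;
}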
Chapter 2.
Storage management
In this chapter, the following topics relating to storage management are
discussed:
JFS2 file system enhancements
The mirscan command (5300-03)
AIO fast path for concurrent I/O (5300-05)
FAStT boot support enhancements (5300-03)
Tivoli Access Manager pre-install (5300-05)
Geographic Logical Volume Manager (5300-03)
Table 2-1 API fcntl parameter details for file system freeze and thaw feature

Parameter         Details
FSCNTL_FREEZE     Freezes the file system designated by the file descriptor;
                  the argument is interpreted as the timeout, in seconds,
                  after which the file system thaws automatically.
FSCNTL_REFREEZE   Resets the timeout of an already frozen file system; the
                  argument is the new timeout in seconds.
FSCNTL_THAW       Thaws the file system immediately.
Note: For all applications using this interface, use FSCNTL_THAW to thaw the
file system rather than waiting for the timeout to expire. If the timeout expires,
an error log entry is generated as an advisory.
The following shows the usage of the chfs command to freeze and thaw a file
system:
chfs -a freeze=<timeout | 0 | "off"> <file system name>
Example 2-1 shows the file systems read-only behavior during its freeze timeout
period.
Example 2-1 Freeze a file system using the chfs command
# chfs -a freeze=60 /tmp; date; echo "TEST FREEZE TIME OUT" >
/tmp/sachin.txt; cat /tmp/sachin.txt;date
Mon Dec 11 16:42:00 CST 2006
TEST FREEZE TIME OUT
Mon Dec 11 16:43:00 CST 2006
Similarly, the following command can be used to refreeze a file system:
chfs -a refreeze=<timeout> <file system name>
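Applications can drive the same freeze and thaw cycle through the API in
Table 2-1. The sketch below assumes the AIX fscntl() subroutine with an
(fd, command, argument, argument-size) signature and FSCNTL_* constants in
<sys/fscntl.h>; verify the exact entry point and header on your AIX level
before relying on it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/fscntl.h>    /* assumed location of the FSCNTL_* constants */

int main(void)
{
    /* Any file or directory inside the target file system will do. */
    int fd = open("/tmp", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Freeze with a 60-second safety timeout (argument form assumed). */
    if (fscntl(fd, FSCNTL_FREEZE, (char *)60, 0) != 0)
        perror("FSCNTL_FREEZE");

    /* ... take the backup or snapshot while the file system is quiesced ... */

    /* Thaw explicitly rather than letting the timeout expire (see the note
       after Table 2-1: expiry generates an advisory error log entry). */
    if (fscntl(fd, FSCNTL_THAW, NULL, 0) != 0)
        perror("FSCNTL_THAW");
    return 0;
}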
Table 2-2 The rollback command parameter details

Parameter        Details
-v
-s
-c
snappedFS        The JFS2 file system to be rolled back.
snapshotObject   The logical volume of the snapshot to roll back to.
To back up all of the data in a DMAPI file system, use a command that reads
entire files, such as the tar command. This can cause a DMAPI-enabled
application to restore data for every file accessed by the tar command, moving
data back and forth between secondary and tertiary storage, so there can be
performance implications.
Table 2-3 Commonly used flags for mirscan command

Flag                Description
-v vgname           Specifies the volume group to be scanned.
-l lvname           Specifies the logical volume to be scanned.
-p pvname           Specifies the physical volume to be scanned.
-r reverse_pvname   Scans mirror copies of partitions that reside on devices
                    other than the named disk (see the description below).
-c lvcopy           Identifies a particular mirror copy of the logical volume
                    to be scanned.
-a                  Specifies that corrective action should be taken.
The -r reverse_pvname flag takes a disk device as its argument and checks all
partitions that do not reside on that device but which have a mirrored copy
located there. This is useful for ensuring the integrity of a logical volume
prior to removing a failing disk.
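For example, to scan every partition in a volume group and print the report
described below (rootvg is an illustrative target, and the flag follows
Table 2-3):

# mirscan -v rootvg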
Example 2-2 shows a report generated by the mirscan command; the meaning of
its output columns (OP, STATUS, PVNAME, PP, SYNC, IOFAIL, LVNAME, LP, CP,
TARGETPV, and TARGETPP) is given in Table 2-4.
Table 2-4 Output columns from mirscan command

Column     Description
OP         Indicates the operation performed on the partition.
STATUS     Indicates whether the operation succeeded or failed.
PVNAME     Identifies the name of the physical volume where the partition
           being operated on resides.
PP         Identifies the physical partition number of the partition being
           operated on.
SYNC       Indicates whether the partition is synchronized. The valid values
           for this field are yes or no.
IOFAIL     The valid values for this field are yes or no. The value indicated
           refers to the state of the partition after the operation has been
           completed.
LVNAME     Identifies the name of the logical volume where the partition
           being operated on resides.
LP         Identifies the logical partition number of the partition being
           operated on.
CP         Identifies the mirror copy number of the partition being operated
           on.
TARGETPV   Identifies the name of the physical volume that was used as the
           target for a migration operation.
TARGETPP   Identifies the physical partition number of the partition that was
           used as the target for a migration operation.
devices such as the FAStT, AIX 5L 5300-03 has made enhancements to allow a
partition to only have one path configured if the partition is required to boot off the
FAStT.
Chapter 3.
Reliability, availability, and serviceability
The component trace facility provides system trace information for specific
system components. This information allows service personnel to access
component state information through either in-memory trace buffers or through
traditional AIX 5L system trace. CT is enabled by default.
Component trace uses mechanisms similar to system trace. Existing TRCHKxx
and TRCGEN macros can be replaced with CT macros to trace into system trace
buffers or memory trace mode private buffers. Once recorded, CT events can be
retrieved using the ctctrl command. Extraction using the ctctrl command is
relevant only to in-memory tracing. CT events can also be present in a system
trace. The trcrpt command is used in both cases to process the events.
The memory trace mode stores the trace entries in a memory buffer, either
private to the component or to a per-CPU memory buffer dedicated to the
kernel's lightweight memory tracing.
The following settings may be changed:
on/off
serialization policy
level of trace
whether tracing is suspended (suspended by default)
the size of the private buffer (by default, the size is 0)
Component trace entries may be traced to a private component buffer, the
lightweight memory trace, or the system trace. The destination is governed by
flags specified to the CT_HOOK and CT_GEN macros. The MT_COMMON flag
causes the entry to be traced into the common, lightweight memory trace buffer,
and MT_RARE causes it to go to the rare, lightweight memory trace buffer. You
should not specify both MT_COMMON and MT_RARE. MT_PRIV traces the
entry into the component's private buffer. MT_SYSTEM puts the entry into the
system trace if system trace is active. Thus, an entry may be traced into the
lightweight memory trace, the component's private buffer, and the system trace,
or any combination of these destinations. Generic trace entries, traced with
the CT_GEN macro, cannot be traced into the lightweight memory trace.
In the memory trace mode, you have the choice for each component, at
initialization, to store their trace entries either in a component private buffer or in
one of the memory buffers managed by the lightweight memory trace. In the
second case, the memory type (common or rare) is chosen for each trace entry.
The component private buffer is a pinned memory buffer that can be allocated by
the framework at component registration or at a later time and only attached to
this component. Its size can be dynamically changed by the developer (through
the CT API) or the administrator (with the ctctrl command).
Private buffers and lightweight memory buffers will be used in circular mode,
meaning that once the buffer is full, the newest trace entries overwrite the
oldest ones.
Moreover, for each component, the serialization of the buffers can be managed
either by the component (for example managed by the component owner) or by
the component trace framework. This serialization policy is chosen at registration
and may not be changed during the life of the component.
The system trace mode is an additional functionality provided by component
trace. When a component is traced using system trace, each trace entry is sent
to the current system trace. In this mode, component trace will act as a front-end
filter for the existing system trace. By setting the system trace level, a component
can control which trace hooks enter the system trace buffer. Tracing into the
system trace buffer, if it is active, is on at the CT_LVL_NORMAL tracing level by
default.
To query the state of all components, enter:
ctctrl -q
To query the state of only the netinet components, enter:
ctctrl -c netinet -q -r
The use of in-memory CT buffers can be disabled persistently across reboots
by using:
ctctrl -P memtraceoff
CT can be persistently enabled by running:
ctctrl -P memtraceon
Note: The bosboot command is required to make the trace persistent on the
next boot.
Overview
LMT provides system trace information for First Failure Data Capture (FFDC). It
is a constant kernel trace mechanism that records software events occurring
during system operation. The system activates LMT at initialization, then tracing
runs continuously. Recorded events are saved into per processor memory trace
buffers. There are two memory trace buffers for each processor, one to record
common events, and one to record rare events. The memory trace buffers can be
extracted from system dumps and accessed on a live system by service
personnel. The trace records look like traditional AIX 5L system trace records.
The extracted memory trace buffers can be viewed with the trcrpt command,
with formatting as defined in the /etc/trcfmt file.
LMT differs from the traditional AIX 5L system trace in several ways. LMT is
more efficient, and it is enabled by default, having been explicitly tuned as
an FFDC mechanism; a traditional AIX 5L trace, by contrast, is not collected
until requested. Unlike traditional AIX 5L system trace, you cannot selectively
record only certain AIX 5L trace hook IDs with LMT. With LMT, you either
record all LMT-enabled hooks, or
you record none. This means that traditional AIX 5L system trace is the preferred
Second Failure Data Capture (SFDC) tool, as you can more precisely specify the
exact trace hooks of interest given knowledge gained from the initial failure. All
trace hooks can be recorded using traditional AIX 5L system trace, but it may
produce a large amount of data this way. Traditional system trace also provides
options that allow you to automatically write the trace information to a disk-based
file (such as /var/adm/ras/trcfile). LMT provides no such option to automatically
write the trace entries to disk when the memory trace buffer fills. When an LMT
memory trace buffer fills, it wraps, meaning the oldest trace record is overwritten,
similar to circular mode in traditional AIX 5L trace.
LMT allows you to view some history of what the system was doing prior to
reaching the point where a failure is detected. As previously mentioned, each
CPU has a memory trace buffer for common events, and a smaller memory trace
buffer for rare events. The intent is for the common buffer to have a 1 to 2 second
retention (in other words, have enough space to record events occurring during
the last 1 to 2 seconds without wrapping). The rare buffer is designed for around
one hour's retention. This depends on workload, on where developers place
trace hook calls in the AIX 5L kernel source, and on what parameters they trace.
AIX 5L Version 5.3 ML3 is tuned such that overly expensive, frequent, or
redundant trace hooks are not recorded using LMT. Note that all of the kernel
trace hooks are still included in traditional system trace (when it is
enabled). So a given
trace hook entry may be recorded in LMT, system trace, or both. By default, the
LMT-aware trace macros in the source code write into the LMT common buffer,
so there is currently little rare buffer content in ML3.
LMT has proven to be a very useful tool during the development of AIX 5L
Version 5.3 with the ML 5300-03.
Table 3-1 LMT memory consumption

Machine                     Number of   System   Total LMT memory:   Total LMT memory:
                            CPUs        memory   64-bit kernel       32-bit kernel
POWER3 (375 MHz CPU)                    1 GB     8 MB                8 MB
POWER3 (375 MHz CPU)                    4 GB     16 MB               16 MB
POWER5 (1656 MHz CPU,       8 logical   16 GB    120 MB              16 MB
shared processor LPAR,
60% ent cap, SMT)
POWER5 (1656 MHz CPU)       16          64 GB    512 MB              16 MB
To determine the amount of memory being used by LMT, enter the following shell
command:
echo mtrc | kdb | grep mt_total_memory
The following example output is from an IBM System p5 machine with 4 logical
CPUs, 1 GB memory and 64-bit kernel (the result may vary on your system):
# echo mtrc | kdb | grep mt_total_memory
mt_total_memory... 00000000007F8000
The preceding output shows that LMT is using 8160 KB (0x7F8000 bytes) of
memory.
The 64-bit kernel resizes the LMT trace buffers in response to dynamic
reconfiguration events (for both POWER4 and POWER5 systems). The 32-bit
kernel does not, it will continue to use the buffer sizes calculated during system
initialization. Note that for either kernel, in the rare case that there is insufficient
pinned memory to allocate an LMT buffer when a CPU is being added, the CPU
allocation will fail. This can be identified by a CPU_ALLOC_ABORTED entry in
the error log, with detailed data showing an Abort Cause of 0000 0008 (LMT) and
Abort Data of 0000 0000 0000 000C (ENOMEM).
For the 64-bit kernel, the /usr/sbin/raso command can also be used to increase
or decrease the memory trace buffer sizes. This is done by changing the
mtrc_commonbufsize and mtrc_rarebufsize tunable variables. These two
variables are dynamic parameters, which means they can be changed without
requiring a reboot. For example, to change the per CPU rare buffer size to
sixteen 4 KB pages, for this boot as well as future boots, you would enter:
raso -p -o mtrc_rarebufsize=16
For more information on the memory trace buffer size tunables, see the raso
command documentation.
Internally, LMT tracing is temporarily suspended during any 64-bit kernel buffer
resize operation.
For the 32-bit kernel, the options are limited to accepting the default
(automatically calculated) buffer sizes, or disabling LMT (to completely avoid
buffer allocation).
Using LMT
This section will describe various commands available to make use of the
information captured by LMT. LMT is designed to be used by IBM service
personnel, so these commands (or their new LMT-related parameters) may not
be documented in the external documentation in InfoCenter. Each command can
display a usage string if you enter command -?.
To use LMT, use the following steps:
1. Extract the LMT data from a system dump or a running system
2. Format the contents of the extracted files to readable files
3. Analyze the output files and find the problem
To extract Lightweight Memory Trace information from dump image vmcore.0 and
put it into the /tmp directory, enter:
trcdead -o /tmp -M vmcore.0
The new -M parameter of the trcrpt command can then be used to format the
contents of the extracted files. Presently the trcrpt command allows you to look
at the common files together, or the rare files together, but will not display a
totally merged view of both sets. All LMT trace record entries are time-stamped,
so it is straightforward to merge files when desired. Also remember that in the
initial version of AIX 5L Version 5.3 ML3, rare buffer entries are truly rare, and
most often the interesting data will be in the common buffers. Continuing the
previous example, to view the LMT files that were extracted from the dumpfile,
you could enter:
trcrpt -M common
and
trcrpt -M rare
Other trcrpt command parameters can be used in conjunction with the -M flag
to qualify the displayed contents. As one example, you could use the following
command to display only VMM trace event group hookids that occurred on
CPU 1:
trcrpt -D vmm -C 1 -M common
The trcrpt command is the most flexible way to view LMT trace records.
However, it is also possible to use the kdb dump reader and KDB debugger to
view LMT trace records. This is done via the new mtrace subcommand. Without
any parameters, the subcommand displays some global information relating to
LMT. The -c parameter is used to show LMT information for a given CPU, and
can be combined with the common or rare keyword to display the common or
rare buffer contents for a given CPU. The -d flag is the other flag supported by
the mtrace subcommand. This option takes additional subparameters that define
a memory region using its address and length. The -d option formats this
memory region as a sequence of LMT trace record entries. One potential use of
this option is to view the LMT entries described in the dmp_minimal cdt of a
system dump.
Note: Any LMT buffer displayed from kdb/KDB contains only generic
formatting, unlike the output provided by the trcrpt command. The kdb/KDB
subcommand is a more primitive debug aid. It is documented in the external
KDB documentation for those wishing additional details. As a final comment
regarding kdb and LMT, the mtrace subcommand is not fully supported when
the kdb command is used to examine a running system. In particular, buffer
contents will not be displayed when the kdb command is used in this live
kernel mode.
The following example extracts the common buffer only, for only the first two
CPUs of a system. CPU numbering starts with zero. By default, the extracted
files are placed in /var/adm/ras/mtrcdir:
mtrcsave -M common -C 0,1
ls /var/adm/ras/mtrcdir
mtrccommon
mtrccommon-0
mtrccommon-1
The snap command can be used to collect any LMT trace files created by the
mtrcsave command. This is done using the gettrc snap script, which supports
collecting LMT trace files from either the default LMT log directory or from an
explicitly named directory. The files are stored into the
/tmp/ibmsupt/gettrc/<logdirname> subdirectory. Using the snap command to
collect LMT trace files is only necessary when someone has explicitly created
LMT trace files and wants to send them to service. If the machine has crashed,
the LMT trace information is still embedded in the dump image, and all that is
needed is for snap to collect the dump file. You can see the options supported by
the gettrc snapscript by executing:
/usr/lib/ras/snapscripts/gettrc -h
As an example, to collect general system information, as well as any LMT trace
files in the default LMT log directory, you would enter:
snap -g "gettrc -m"
The preceding discussions of the trcdead, trcrpt, mtrcsave, and snap
commands mention the LMT log directory. The trcdead and mtrcsave commands
create files in the LMT log directory, the trcrpt command looks in the LMT log
directory for LMT trace files to format, and the gettrc snap script may look in the
LMT log directory for LMT trace files to collect. By default, the LMT log directory
is /var/adm/ras/mtrcdir. This can be changed to a different directory using the
trcctl command. For example, to set the LMT log directory to a directory
associated with a dump being analyzed, you might enter:
trcctl -M /mypath_to_dump/my_lmt_logdir
This sets the system-wide default LMT log directory to
/mypath_to_dump/my_lmt_logdir, and subsequent invocations of trcdead,
trcrpt, mtrcsave, and the gettrc snapscript will access the my_lmt_logdir
directory. This single system-wide log directory may cause issues on multi-user
machines where simultaneous analysis of different dumps is occurring.
LMT support introduced with AIX 5L Version 5.3 with ML 5300-03 still represents
a significant advance in AIX first failure data capture capabilities, and provides
service personnel with a powerful and valuable tool for diagnosing problems.
The following macros record events into the system trace buffer:

TRCHKL0 (hw)
TRCHKL1 (hw,D1)
TRCHKL2 (hw,D1,D2)
TRCHKL3 (hw,D1,D2,D3)
TRCHKL4 (hw,D1,D2,D3,D4)
TRCHKL5 (hw,D1,D2,D3,D4,D5)

The corresponding TRCHKLnT macros record the same hookword (hw) and data
words with a time stamp:

TRCHKL0T (hw)
TRCHKL1T (hw,D1)
TRCHKL2T (hw,D1,D2)
TRCHKL3T (hw,D1,D2,D3)
TRCHKL4T (hw,D1,D2,D3,D4)
TRCHKL5T (hw,D1,D2,D3,D4,D5)
In AIX 5L Version 5.3 with the 5300-05 Technology Level and above, a time
stamp is recorded with each event regardless of the type of macro used.
There are only two macros to record events to one of the generic channels
(channels 1-7). They are:
TRCGEN (ch,hw,d1,len,buf)
TRCGENT (ch,hw,d1,len,buf)
These macros record a hookword (hw), a data word (d1), and a length of data
(len) specified in bytes from the user's data segment at the location specified
(buf) to the event stream specified by the channel (ch). In AIX 5L Version 5.3 with
the 5300-05 Technology Level and above, the time stamp is recorded with both
macros.
The system trace can be used to trace the processor utilization resource
register (PURR) to provide more accurate event timings in a shared processor
partition environment.
In previous versions of AIX 5L and AIX, the trace buffer size for a regular user is
restricted to a maximum of 1 MB. Version 5.3 allows the system group users to
set the trace buffer size either through a new command, trcctl, or using a new
SMIT menu called Manage Trace.
The new Lightweight Memory Trace (LMT) is a highly efficient, always-on trace
aimed at First Failure Data Capture.
See 3.1.2, Lightweight memory trace (5300-03) on page 43 for additional
information.
The Component Trace (CT) facility provides system trace information for specific
system components. This information allows service personnel to access
component state information through either in-memory trace buffers or through
traditional AIX system trace. CT is enabled by default.
See 3.1.1, Component trace facility (5300-05) on page 40 for more information.
The Run-Time Error Checking (RTEC) facility provides service personnel with a
method to manipulate debug capabilities that are already built into product
binaries. RTEC provides service personnel with powerful first failure data capture
and second failure data capture error detection features.
Error Detection facilities have been improved to detect when code runs disabled
for interrupts too long.
New debug aids, such as KDB consistency checkers and enhanced socket
debugging capabilities, have also been added.
The lscore and chcore commands have been introduced to check the settings for
corefile creation and change them, respectively.
Chapter 4.
System administration
In this chapter, the following major topics are discussed:
AIX 5L release support strategy (5300-04)
Command enhancements
Multiple page size support (5300-04)
Advanced Accounting
National language support
LDAP enhancements (5300-03)
Example 4-1 Mail format for cron internal errors

Subject:
Cron Job Failure
Content:
Cron Environment:
SHELL= < Shell Name>
PATH= <PATH string>
CRONDIR = <cron directory name>
ATDIR = < atjobs directory name>
Output from cron as follows:
Brief description on the error encountered
The mail format for cron job completion is shown in Example 4-2:
Example 4-2 Mail format for cron jobs completion
Subject:
Output from <at | cron> job <jobname>, username@hostname, exit status
<Exit Code>
Content:
Cron Environment:
SHELL= < Shell Name>
PATH= <PATH string>
CRONDIR = <cron directory name>
ATDIR = < at jobs directory name>
Your <cron | at> job executed on <machine name> on <scheduled time>
[cron | at job name]
produced the following output:
<Output from the Job or any error messages reported>
Example 4-3 shows an example of mail sent by cron on cron job completion:
Example 4-3 A cron job completion message
Message 1:
From daemon Mon Dec 4 14:26:00 2006
Date: Mon, 4 Dec 2006 14:26:00 -0600
From: daemon
To: root
Subject: Output from cron job /usr/bin/sleep,
[email protected], exit status 2
Cron Environment:
SHELL =
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jr
e/bin:/usr/java14/bin:/usr/local/bin
CRONDIR=/var/spool/cron/crontabs
ATDIR=/var/spool/cron/atjobs
LOGNAME=root
HOME=/root
Your "cron" job executed on lpar01.itsc.austin.ibm.com on Mon Dec
14:26:00 CST 2006
/usr/bin/sleep
In the session shown, you can just type H (case-sensitive) to disable highlighting.
Table 4-1 Flags of ps command

Flag         Purpose
-Z           Displays the page sizes being used for the data, stack, and text
             memory regions of a running process.
-L pidlist   Generates a list of descendants of each and every PID that has
             been passed to it in the pidlist variable. The pidlist variable
             is a list of comma-separated process IDs. The list of descendants
             from all of the given PIDs is printed in the order in which they
             appear in the process table.
-T pid       Displays the process hierarchy rooted at the given PID in a tree
             format.
Example 4-4 shows the ps command flags to find all the descendants of the inetd
process:
Example 4-4 The ps command new flags
# ps -ef | grep inetd
    root  180324  188532   0 08:37:15      -  0:00 /usr/sbin/inetd
# ps -L 180324
    PID    TTY  TIME CMD
 135340      -  0:00 telnetd
 180324      -  0:00 inetd
 209020      -  0:00 telnetd
 254084  pts/2  0:00 ksh
 286890      -  0:00 bootpd
 307372  pts/0  0:00 ksh
 311474      -  0:00 telnetd
 319694  pts/0  0:00 topas
 323780      -  0:02 xmtopas
 364790  pts/2  0:00 ps
You can find all the processes including their hierarchy in an ASCII tree
format by entering:
ps -T 0
The -T option can also be used to find all the processes and sub-processes
under a specific user by providing the user's telnet IP address.
1. Find out the pts number of the user by giving the hostname or the IP address:
# who
root        pts/0       Nov 30 08:41     (kcyk04t.itsc.austin.ibm.com)
root        pts/1       Nov 30 08:44     (proxy)
root        pts/2       Nov 30 08:50     (kcbd0na.itsc.austin.ibm.com)
2. Find the process running on that terminal (pts/2 here); the partial listing
shows the login shell, whose PID is used in the next step:
    root  254084  ...   0 08:50:49  pts/2  0:00 -ksh
3. Use ps -T options:
# ps -fT 254084
The -Z option of the ps command is added to support different page sizes. For
more information about Huge Page support and Large Page support in AIX 5L
Version 5.3, refer to 4.3, Multiple page size support (5300-04) on page 71.
Table 4-2 Commonly used usrck command flags and their descriptions

Flag   Description
-b     Reports users who are not able to access the system and the reasons,
       with the reasons displayed in a bit-mask format. The -l flag must be
       specified if the -b flag is specified. Note: The bit mask does not
       report criteria 10 (user denied login to terminal), since this cannot
       be considered a complete scenario when determining if a system is
       inaccessible to a user. Likewise, the bit mask does not report criteria
       9 (user denied access by applications) if at least one but not all of
       the attribute values deny authentication; this criteria is only
       reported when all four attribute values deny authentication.
-l     Scans all users or the users specified by the user parameter and
       reports whether each user can access the system and, if not, the
       reasons why.
Command examples
The following examples (Example 4-5 and Example 4-6) would be used by an
administrator to scan all users or the users specified by the user parameter to
determine if a user can access the system and also gives a description as to why
the user account has no access.
Example 4-5 The usrck -l command
# usrck -l ALL
The system is inaccessible to daemon, due to the following:
User account is expired.
User has no password.
The system is inaccessible to bin, due to the following:
User account is expired.
User has no password.
The system is inaccessible to sys, due to the following:
User account is expired.
User has no password.
The system is inaccessible to adm, due to the following:
User has no password.
The system is inaccessible to uucp, due to the following:
User has no password.
User denied access by login, rlogin applications.
Example 4-6 The usrck -l user command
# usrck -l test
The system is inaccessible to test, due to the following:
User account is locked.
The next example uses both the -b and -l options. The output consists of two
fields, the username and a 16-digit bit mask, separated by a tab. This output
is a summary of the previous command and lists all the user accounts that
have been locked out.
Example 4-7 The usrck -l -b command
# usrck -l -b ALL
daemon          0000000000001010
bin             0000000000001010
sys             0000000000001010
adm             0000000000001000
uucp            0000000000001000
guest           0000000000001000
nobody          0000000000001010
lpd             0000000000001010
lp              0000000001001000
invscout        0000000001001000
snapp           0000000001001000
ipsec           0000000001001000
nuucp           0000000001001000
sshd            0000000001001001
test            0000000000000001
Table 4-3 Page size support by platform

Page size   Required hardware    Requires user     Restricted   Kernel
                                 configuration
4 KB        All                  No                No           32 & 64
64 KB       POWER5+ or later     No                No           64 only
16 MB       POWER4 or later      Yes               Yes          64 only
16 GB       POWER5+ or later     Yes               Yes          64 only
You can use the pagesize -af command to display all of the virtual memory
page sizes supported by AIX 5L on a system. Example 4-8 provides a sample
output.
Example 4-8 Output of the pagesize -af command
# pagesize -af
4K
64K
16M
16G
You can specify the page sizes to use for three regions of a process's address
space using an environment variable or settings in an application's XCOFF
binary with the ldedit or ld commands, as shown in Table 4-4.
Table 4-4 The ld and ldedit command arguments for page size specification

Region  ld/ldedit option  LDR_CNTRL environment variable
Data    -bdatapsize       DATAPSIZE
Stack   -bstackpsize      STACKPSIZE
Text    -btextpsize       TEXTPSIZE
For example, the following command causes a.out to use 64 KB pages for its
data, 4 KB pages for its text, and 64 KB pages for its stack on supported
hardware:
LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=4K@STACKPSIZE=64K a.out
Unless page sizes are selected using one of the previous mechanisms, a
process will continue to use 4 KB pages for all three process memory regions by
default.
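The same page size preferences can also be embedded in an existing
executable. As a sketch (the binary name a.out is illustrative, and the colon
form follows the usual AIX -b option syntax):
# ldedit -bdatapsize:64K -bstackpsize:64K -btextpsize:4K a.out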
The 4 KB and 64 KB page sizes are intended to be general-purpose, and no
configuration changes are necessary to enable a system to use these page
sizes. The 16 MB large page size and 16 GB huge page size are intended only to
be used in very high performance environments, and an administrator must
configure a system to use these page sizes. Furthermore, the support for 16 MB
large pages and 16 GB huge pages is limited. 16 MB large pages are only
supported for process data and shared memory, and 16 GB huge pages are only
supported for shared memory.
To enable 16 MB page support, the vmo command can be used as follows.
# vmo -p -o lgpg_regions=32 -o lgpg_size=16777216
This will configure 32 16 MB pages, giving 512 MB in total. The operation can be
performed dynamically provided the system is capable of performing dynamic
LPAR operations. Operations to change the number of large pages on the
system may succeed partially. If a request to increase or decrease the size of the
pool cannot fully succeed (for example, if lgpg_regions is tuned to 0 but there are
large pages in use by applications), the vmo command will add or remove pages
to get as close as possible to the requested number of pages.
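The large page pool can later be shrunk or released through the same
interface; for example, a sketch of returning all large page memory to the
4 KB pool:
# vmo -p -o lgpg_regions=0 -o lgpg_size=0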
16 GB huge pages must be configured using a system's Hardware Management
Console (HMC). Under the Managed System's Properties menu, a system
administrator can configure the number of 16 GB huge pages on a system by
selecting Show Details in the Advanced Options field of the Memory tab.
Figure 4-2 shows this option.
Figure 4-2 Configuring huge pages on a managed system using the HMC
Changing the number of 16 GB huge pages on a system requires that the entire
managed system be powered off. Once a managed system has been configured
with 16 GB huge pages, they can be assigned to partitions by changing a
partition's profile.
The vmo command can also be used to globally disable 64 KB and 16 GB pages
using the vmm_mpsize_support option:
vmo -r -o vmm_mpsize_support=0
This option takes effect after a reboot. Once it is set, only 4 KB and 16 MB
pages are available, regardless of the ld command options used.
The ps command
The ps command now has an additional flag, -Z, which displays the page sizes
being used for the data, stack, and text memory regions of a running process.
# ps -Z
   PID    TTY  TIME DPGSZ SPGSZ TPGSZ CMD
233636  pts/0   ...
262346  pts/0   ...
278670  pts/0   ...

The svmon command reports its memory statistics broken down by page size
as well, as the following svmon -G output shows:

# svmon -G
               size      inuse       free        pin    virtual
memory       262144      91197     170947      45147      77716
pg space        ...        669

               work       pers       clnt
pin           45147          0          0
in use        77716          0      13481

PageSize   PoolSize      inuse       pgsp        pin    virtual
s   4 KB          -      72109        669      36235      58628
m  64 KB          -       1193          0        557       1193
Example 4-10 The svmon -P command showing multiple page size information

# svmon -P 262346

     Pid Command          Inuse      Pin     Pgsp  Virtual  64-bit  Mthrd  16MB
  262346 sleep            15963     7556        0    15962       N      N

     PageSize         Inuse       Pin      Pgsp   Virtual
     s    4 KB        11307      7540         0     11306
     m   64 KB          291         1         0       291

    Vsid   Esid  Type  Description   PSize  Inuse   Pin  Pgsp  Virtual
       0    ...  work  ...               s  11306  7540     0    11306
     ...      d  work  ...               m    278     0     0      278
     ...      f  work  ...               m      8     0     0        8
     ...      2  work  ...               m      5     1     0        5
     ...      1  clnt  ...               s      1     0     -        -
The vmstat command adds two flags for page size reporting:

Flag  Description
-p    Appends per page size statistics to the regular vmstat output for the
      specified page sizes.
-P    Displays only per page size VMM statistics for the specified page sizes.
Example 4-11 The vmstat command output using the -p and -P flags

# vmstat -p ALL

System configuration: lcpu=2 mem=1024MB ent=0.10

kthr    memory              page              faults            cpu
----- ------------ ------------------------ ------------ ----------------------
 r  b   avm    fre  re pi po fr sr cy  in  sy  cs us sy id wa   pc   ec
 1  1 77757 170904   0  0  0  0  0  0  33  41 101  0  0 99  0 0.00  0.0

psz   avm    fre  re pi po fr sr cy     siz
 4K  58670  67639  0  0  0  0  0  0  139792
64K   1193   6454  0  0  0  0  0  0    7647

# vmstat -P ALL

System configuration: mem=1024MB

pgsz           memory                              page
----- -------------------------- ------------------------------------
        siz      avm      fre   re   pi   po   fr   sr   cy
 4K  139792    58671    67637    0    0    0    0    0    0
64K    7647     1193     6454    0    0    0    0    0    0
Process accounting
For process accounting, users can generate accounting reports by projects, by
groups, by users, by commands, or by a combination of these four identifiers.
The syntax for this aspect is as follows:
acctrpt [ -f filename ] [ -F ] [ -U uid ] [ -G gid ] [ -P projID ]
[ -C command ] [ -b begin_time ] [ -e end_time ] [ -p projfile ]
[ -n ]
The commonly used flags are described in Table 4-6.
Table 4-6 The acctrpt command flags for process reporting

Flag           Description
-f filename    Specifies the path name of the accounting data file to be used.
               More than one file can be specified using a comma-separated
               list. If the -f flag is not specified, the /var/aacct/aacctdata
               file is used by default.
-U uid         Displays process accounting statistics for the specified user ID.
-G gid         Displays process accounting statistics for the specified group ID.
-P projID      Displays process accounting statistics for the specified project.
-C command     Displays process accounting statistics for the specified command.
-b begin_time  Specifies the begin time of an interval for which records are
               reported.
-e end_time    Specifies the end time of an interval for which records are
               reported.
-p projfile    Specifies the project definition file used to resolve project names.
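As a sketch of combining these filters (the data file path is the default one,
and the MMDDhhmm times mirror the timestamps shown in Example 4-12), the
following would report root's processes for a one-hour window:
# acctrpt -f /var/aacct/aacctdata -U 0 -b 12011100 -e 12011200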
Example 4-12 gives a sample process report, while Example 4-13 shows a
report filtered to give information on the projctl command as run by root.
Example 4-12 The acctrpt command process report

TIMESTAMP PROJID  UID   GID     PID     CMD      STARTED   EXITED (S)
--------- ------  ----  ------  ------  -------  --------  ------ ---
12011107  System  root  system  274574  acctctl  12011107          E
              (C):  PELAPSE  1.0611  TELAPSE  1.0611  CPU   0.0045 (sec)
              (M):  VMEM         76  DMEM         82  PMEM       0 (pg)
              (F):  LFILE      0.00  DFILE      0.00               (MB)
              (S):  LSOCKET    0.00  DSOCKET    0.00               (MB)
12011107  System  root  system  209144  ksh      12011107          E
              (C):  PELAPSE  1.1301  TELAPSE  1.1301  CPU   0.0058 (sec)
              (M):  VMEM        183  DMEM        189  PMEM       0 (pg)
              (F):  LFILE      0.00  DFILE      0.00               (MB)
              (S):  LSOCKET    0.00  DSOCKET    0.00               (MB)
12011111  System  root  system  274650  smitty   12011111          N
              (C):  PELAPSE 74.4906  TELAPSE 74.4906  CPU   0.1425 (sec)
              (M):  VMEM      61281  DMEM      62249  PMEM       0 (pg)
              (F):  LFILE      0.02  DFILE      0.02               (MB)
              (S):  LSOCKET    0.00  DSOCKET    0.00               (MB)
Example 4-13 The acctrpt command output filtered for command projctl and user root

# acctrpt -U 0 -C projctl
Process Accounting Report
-------------------------
PROJID  UID   GID  CMD      CNT
------  ----  ---  -------  ---
-       root  -    projctl    7
            (C):  PELAPSE  0.0  TELAPSE  0.0  CPU   0.0 (sec)
            (M):  VMEM       0  DMEM       0  PMEM    0 (pg)
            (F):  LFILE    1.9  DFILE    0.0            (MB)
            (S):  LSOCKET  0.0  RSOCKET  0.0            (MB)
Table 4-7 The acctrpt command fields for process output

Field    Description
PROJID   Project identifier.
UID      User ID of the process owner.
GID      Group ID of the process owner.
CMD      Base name of the command that was run.
CNT      Number of accounting records aggregated into the report line.
PELAPSE  Elapsed time of the process, in seconds.
TELAPSE  Total elapsed time of the process's threads, in seconds.
CPU      Processor time consumed, in seconds.
LFILE    Local file I/O, in MB.
DFILE    Other (non-local) file I/O, in MB.
LSOCKET  Local socket I/O, in MB.
RSOCKET  Remote socket I/O, in MB.
DMEM     Data memory usage, in pages.
PMEM     Process memory usage, in pages.
VMEM     Virtual memory usage, in pages.
System accounting
For system accounting, reports can be generated that describe the system-level
use of resources, such as processors, memory, file systems, disks, and network
interfaces. The system accounting interval must be enabled to collect accounting
statistics for system resources. This function is often referred to as LPAR
accounting; however, it is not restricted to partitioned environments and can be
run on any system running AIX 5L Version 5.3. The syntax for these reports is
shown here:
acctrpt [ -f filename ] [ -F ] -L resource [ -b begin_time ]
[ -e end_time ]
The main flag not described in the process section is shown in Table 4-8.
Table 4-8 The acctrpt command flags for system reporting
Flag         Description
-L resource  Displays the system accounting report for the specified
             resource type:
             filesys   File system statistics
             netif     Network interface statistics
             disk      Disk statistics
             vtarget   Virtual SCSI target statistics
             vclient   Virtual SCSI client statistics
             ALL       All of the above resource types
Example 4-14 The acctrpt command system report

# acctrpt -L ALL

CNT  IDLE   IOWAIT  SPROC  UPROC  INTR  (sec)
---  -----  ------  -----  -----  ----
  3  320.8     1.7    0.2    4.9   0.5

     IO     PGSPIN  PGSPOUT  LGPGUTIL  PGRATE
     -----  ------  -------  --------  ------
     30157       0        0       0.0     0.0

DEVNAME       MOUNTPT  FSTYPE   RDWR   OPEN  CREATE  LOCKS  XFERS(MBs)
-------       -------  ------  -----  -----  ------  -----  ----------
specfs        specfs       16   1339   1194       0      2        26.9
pipefs        pipefs       16   3167      0       0      0         0.1
/dev/hd10opt  /opt          0      0      0       0      0         0.0
/proc         /proc         6      0      0       0      0         0.0
/dev/hd1      /home         0      0      0       0      0         0.0
/dev/hd3      /tmp          0    414    143     138      0         0.6
/dev/hd9var   /var          0  29959    109       4     62         0.4
/dev/hd2      /usr          0  12955   9672       0      0        25.9
/dev/hd4      /             0   2301    749      45     87         1.4

NETIFNAME  NUMIO  XFERS(MBs)
---------  -----  ----------
lo0           36    0.001459
en0         1344    0.178871

DISKNAME  BLKSZ  XFERS(MBs)   READ   WRITE
--------  -----  ----------  -----  ------
hdisk0:0    512        2136  47982  254392
hdisk0      512        2136  47982  254392
vscsi1        0   154815488    446    1690

SERVER#  SERVERID   UNITID               BYTESIN(MBs)  BYTESOUT
-------  ---------  -------------------  ------------  ----------
      4  805306408  9367487224930631680     23.428711  124.214844
Table 4-9 The acctrpt command fields for system output

Field      Description
CNT        Number of accounting records aggregated.
IDLE       Processor idle time, in seconds.
IOWAIT     Processor I/O wait time, in seconds.
SPROC      Processor time consumed by system (kernel) processes, in seconds.
UPROC      Processor time consumed by user processes, in seconds.
INTR       Processor time consumed by interrupts, in seconds.
IO         Number of I/Os.
PGSPIN     Number of page-ins from paging space.
PGSPOUT    Number of page-outs to paging space.
LGPGUTIL   Average utilization of the large page pool.
PGRATE     Average paging rate, in pages per second.
DEVNAME    Device name.
MOUNTPT    File system mount point.
FSTYPE     File system type.
RDWR       Number of read and write operations.
OPEN       Number of open operations.
CREATE     Number of create operations.
LOCKS      Number of lock operations.
XFERS      Amount of data transferred, in MB.
NETIFNAME  Network interface name.
NUMIO      Number of I/Os.
DISKNAME   Disk name.
BLKSZ      Disk block size, in bytes.
READ       Number of read operations.
WRITE      Number of write operations.
SERVER#    Virtual SCSI server number.
SERVERID   Virtual SCSI server identifier.
UNITID     Virtual SCSI unit identifier.
BYTESIN    Data received, in MB.
BYTESOUT   Data sent, in MB.
Transaction accounting
For transaction accounting, users can generate accounting reports describing
application transactions. Transaction reports provide scheduling and accounting
information, such as transaction resource usage requirements. These reports
use data that is produced by applications that are instrumented with the
Application Resource Management (ARM) interfaces to describe the
transactional nature of their workloads. Advanced Accounting supports ARM
interfaces by recording information that is presented through these interfaces in
the accounting data file. The acctrpt command can then process these files and
report information. The transaction report syntax is as follows:
acctrpt [ -f filename ] [ -F ] -T [ -b begin_time ] [ -e end_time ]
The -T flag specifies that a transaction report is required. Example 4-15 gives a
sample output.
Example 4-15 The acctrpt command transaction report

# /usr/bin/acctrpt -T -f /var/aacct/acctdata

PROJID  CNT  (A):  CLASS           NAME     USER  GROUP     TRANSACTION
             (T):  RESPONSE        QUEUED   CPU (sec)
------  ---  ----  --------------  -------  ----  --------  -------------------
System  144  A:    WebSphere       server1  URI   IBM_SERV  Apache/1.3.28(Unix)
             T:    0.00            0.00     0.00
         32  A:    IBM Webserving
             T:    0.00            67.01    0.00
The fields for the transaction report are shown in Table 4-10.
Table 4-10 The acctrpt command fields for transaction output

Field        Description
PROJID       Project identifier.
CNT          Number of transaction records aggregated.
CLASS        Accounting class.
GROUP        Group name.
NAME         Application name.
TRANSACTION  Transaction name.
USER         User name.
RESPONSE     Transaction response time, in seconds.
QUEUED       Time the transaction spent queued, in seconds.
CPU          Processor time consumed by the transaction, in seconds.
The geninstall command can list the installable software packages on media
with the -L (list) flag:
# geninstall -L -d /dev/cd0
Tivoli_Management_Agent.client:Tivoli_Management_Agent.client.rte:3.7.1
.0::I:C::
:::N:Management Framework Endpoint Runtime"::::
bos:bos.rte:5.3.0.50::S:C:::::N:Base Operating System Runtime::::
bos:bos.rte.ILS:5.3.0.50::S:C:::::N:International Language Support::::
bos:bos.rte.SRC:5.3.0.50::S:C:::::N:System Resource Controller::::
bos:bos.rte.aio:5.3.0.50::S:C:::::N:Asynchronous I/O Extension::::
bos:bos.rte.archive:5.3.0.50::S:C:::::b:Archive Commands::::
bos:bos.rte.bind_cmds:5.3.0.50::S:C:::::N:Binder and Loader
Commands::::
bos:bos.rte.boot:5.3.0.50::S:C:::::b:Boot Commands::::
bos:bos.rte.bosinst:5.3.0.50::S:C:::::N:Base OS Install Commands::::
bos:bos.rte.commands:5.3.0.50::S:C:::::b:Commands::::
bos:bos.rte.compare:5.3.0.50::S:C:::::N:File Compare Commands::::
bos:bos.rte.console:5.3.0.50::S:C:::::N:Console::::
bos:bos.rte.control:5.3.0.50::S:C:::::N:System Control Commands::::
bos:bos.rte.cron:5.3.0.50::S:C:::::N:Batch Operations::::
bos:bos.rte.date:5.3.0.50::S:C:::::N:Date Control Commands::::
bos:bos.rte.devices:5.3.0.50::S:C:::::b:Base Device Drivers::::
bos:bos.rte.devices_msg:5.3.0.50::S:C:::::N:Device Driver Messages::::
bos:bos.rte.diag:5.3.0.50::S:C:::::N:Diagnostics::::
The following command lists the install packages on the media.
gencopy -L -d Media [ -D ]
This listing is colon separated and contains the following information:
file_name:package_name:fileset:V.R.M.F:type:platform:Description
bos.sysmgt:bos.sysmgt:bos.sysmgt.nim.client:4.3.4.0:I:R:Network
Install Manager - Client Tools
bos.sysmgt:bos.sysmgt:bos.sysmgt.smit:4.3.4.0:I:R:System Management
Interface Tool (SMIT)
In Example 4-17, the gencopy command is used with the -L option.
Example 4-17 The gencopy -L -d command output
# gencopy -L -d /dev/cd0
Tivoli_Management_Agent.client:Tivoli_Management_Agent.client:Tivoli_Management_
Agent.client.rte:3.7.1.0:I:R:Management Framework Endpoint Runtime"
bos:bos:bos.rte:5.3.0.50:O:R:Base Operating System Runtime
bos.rte.edit_5.3.0.50.bff:bos:bos.rte.edit:5.3.0.50:S:R:Editors
bos.rte.diag_5.3.0.50.bff:bos:bos.rte.diag:5.3.0.50:S:R:Diagnostics
bos.rte.devices_msg_5.3.0.50.bff:bos:bos.rte.devices_msg:5.3.0.50:S:R:Device Dri
ver Messages
bos.rte.devices_5.3.0.50.bff:bos:bos.rte.devices:5.3.0.50:S:R:Base Device Driver
s
AIX 5L Version 5.3 extends its national language support with additional
languages, including:
Kannada
Bengali
Assamese
Punjabi
Oriya
Estonian
Latvian
Running the lsldap command without arguments lists the distinguished names
of the top-level entities stored on the LDAP server, as in the following sample
output:

# lsldap
ou=hosts,dc=example,dc=com
ou=services,dc=example,dc=com
ou=ethers,dc=example,dc=com
ou=profile,dc=example,dc=com
nismapname=auto_home,dc=example,dc=com
nismapname=auto_appl,dc=example,dc=com
nismapname=auto_master,dc=example,dc=com
nismapname=auto_apps,dc=example,dc=com
nismapname=auto_next_apps,dc=example,dc=com
nismapname=auto_opt,dc=example,dc=com
ou=group,dc=example,dc=com
ou=people,dc=example,dc=com
Example 4-19 shows how to use the lsldap command with the passwd option to
list the distinguished names of all users:
Example 4-19 Using lsldap to show user entries
# lsldap passwd
dn: uid=user1,ou=people,dc=example,dc=com
dn: uid=user2,ou=people,dc=example,dc=com
dn: uid=user3,ou=people,dc=example,dc=com
#
To retrieve all of the information for the user named user3, run the lsldap
command with the -a passwd option and the user3 name, as shown in
Example 4-20.
Example 4-20 Using lsldap by root to show entry for user3
# lsldap -a passwd user3
dn: uid=user3,ou=people,dc=example,dc=com
uidNumber: 20003
uid: user3
gidNumber: 20000
gecos: ITSO user3
homeDirectory: /home/user3
loginShell: /bin/ksh
cn: ITSO user3
shadowLastChange: 12996
shadowInactive: -1
shadowMax: -1
shadowFlag: 0
shadowWarning: -1
shadowMin: 0
objectClass: posixAccount
objectClass: shadowAccount
All users can run the lsldap command, but when a normal user runs this
command, they will only be able to see public information. For example, running
the same command as a normal user returns the output shown in Example 4-21.
Example 4-21 Normal user using lsldap to view user3
$ lsldap -a passwd user3
dn: uid=user3,ou=people,dc=example,dc=com
uidNumber: 20003
uid: user3
gidNumber: 20000
gecos: ITSO user3
homeDirectory: /home/user3
loginShell: /bin/ksh
cn: ITSO user3
objectClass: posixAccount
objectClass: shadowAccount
objectClass: account
objectClass: top
For more information, refer to the lsldap command man page in the AIX 5L
Version 5.3 Information Center and the IBM Redbook Integrating AIX into
Heterogeneous LDAP Environments, SG24-7165:
http://www.redbooks.ibm.com/abstracts/SG247165.html?Open
Chapter 5. Performance monitoring
In this chapter, the following major topics are discussed:
Performance tools enhancements (5300-05)
The gprof command enhancement (5300-03)
The topas command enhancements
The iostat command enhancement (5300-02)
PMAPI end user tools (5300-02)
Memory affinity enhancements (5300-01)
The fcstat command (5300-05)
The svmon command now provides its global statistics broken down by page
size, as the following svmon -G output shows:

# svmon -G
               size       inuse        free         pin     virtual
memory      8208384     5714226     2494158      453170     5674818
pg space     262144       20653

               work        pers        clnt
pin          453170           0           0
in use      5674818         110       39298

PageSize   PoolSize       inuse        pgsp         pin     virtual
s    4 KB         -     5379122       20653      380338     5339714
m   64 KB         -       20944           0        4552       20944
The vmstat command offers two new flags: -p, which appends per page size
statistics to the regular vmstat output, and -P, which displays only per page
size statistics. Example 5-2 shows the output from -p.
Example 5-2 Example output of vmstat -p

# vmstat -p ALL

System configuration: lcpu=2 mem=6144MB

kthr    memory              page              faults          cpu
----- ------------- ------------------------ ------------ -----------
 r  b    avm    fre re pi po fr sr cy  in   sy  cs us sy id wa
 1  1 380755 207510  0  0  0  2  8  0  12 1518 172  0  0 99  0

psz     avm    fre re pi po fr sr cy      siz
 4K  380755 207510  0  0  0  2  8  0  1528754
16M       0     10  0  0  0  0  0  0       10
Both options take a comma-separated list of specific page sizes or the keyword
all to indicate information should be displayed for all supported page sizes that
have one or more page frames. Example 5-3 displays per-page size information
for all of the page sizes with page frames on a system.
Example 5-3 Example output of vmstat -P

# vmstat -P all

System configuration: mem=1024MB

pgsz           memory                              page
----- -------------------------- ------------------------------------
        siz      avm      fre   re   pi   po   fr   sr   cy
 4K  262144   116202   116313    0    0    0    3    8    0
64K   31379      961     3048    0    0    0    0    0    0
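The page size list can also be restricted to specific sizes. A short sketch,
with an illustrative interval and count:
# vmstat -p 4K,64K 5 2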
The curt command works with both uniprocessor and multiprocessor AIX
Version 4 and AIX 5L Version 5 traces; however, it is shipped with AIX 5L
Version 5.2. In AIX 5L Version 5.3 with TL 5300-05, the command is enhanced
to support NFS V4.
The curt command report is divided into many sections; the enhanced parts
are:
System NFS Calls Summary
Pending NFS System Calls Summary
The following example shows how the command supports NFS V4:
1. Get a system trace and gensyms command output (this section does not
   cover tracing in detail; a minimal collection sketch follows the section
   list below).
2. Use the curt command on the trace to get the command report:
curt -i trace.raw -n gensyms.out -o curt.out
3. Open the curt.out file with a text editor; the report covers many
   sections:
General Information
System Summary
System Application Summary
Processor Summary
Processor Application Summary
Application Summary by TID
Application Summary by PID
Application Summary by Process Type
Kproc Summary
Application Pthread Summary by PID
System Calls Summary
Pending System Calls Summary
Hypervisor Calls Summary
Pending Hypervisor Calls Summary
System NFS Calls Summary
Pending NFS System Calls Summary
Pthread Calls Summary
Pending Pthread Calls Summary
FLIH Summary
SLIH Summary
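For step 1, a minimal trace collection could look like the following sketch
(generic trace flags; the exact setup depends on the events you need):
# trace -a -o trace.raw    start tracing asynchronously into trace.raw
  ... run the NFS workload of interest ...
# trcstop                  stop the trace
# gensyms > gensyms.out    capture the symbol information curt needs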
The System NFS Calls Summary reports information on completed NFS
operations. It includes the name of the NFS operation, the number of times the
NFS operation was executed, and the total CPU time, expressed in milliseconds
and as a percentage, with the average, minimum, and maximum times the NFS
operation call was running.
The Pending NFS System Calls Summary reports information on NFS
operations that did not complete before the end of the trace. The information
includes the sequence number for NFS V2/V3, or the opcode for NFS V4, the
thread or process which made the NFS operation, and the accumulated CPU
time that the NFS operation was running, expressed in milliseconds.
Example 5-4 shows sample output from the curt command.
Example 5-4 Sample output of curt report (partial)

...(lines omitted)...
System NFS Calls Summary
------------------------
   Count  Total Time  Avg Time  Min Time  Max Time  % Tot  % Tot  Opcode
              (msec)    (msec)    (msec)    (msec)   Time  Count
======== =========== ========= ========= ========= ====== ====== =============
       4      2.9509    0.7377    0.0331    2.4040  80.05   4.44  OPEN
      27      0.1935    0.0072    0.0036    0.0159   5.25  30.00  PUTFH
       4      0.1202    0.0300    0.0177    0.0397   3.26   4.44  READDIR
...(lines omitted)...
       4      0.0047    0.0012    0.0011    0.0013   0.13   4.44  SAVEFH
-------- ----------- --------- --------- --------- ------ ------ -------------
      90      3.6862    0.0410                                    NFS V4 SERVER
                                                                  TOTAL

      15      0.8462    0.0564    0.0286             40.33   1.96  NFS4_RFS4CALL
       2      0.2126    0.1063    0.0965             10.13   0.26  NFS4_CREATE_ATTR
......
       1      0.0005    0.0005    0.0005    0.0005    0.02   0.13  NFS4_ADD_SETATTR
-------- ----------- --------- --------- --------- ------ ------ -------------
     765      2.0984    0.0027                                    NFS V4 CLIENT
                                                                  TOTAL

Pending NFS System Calls Summary
--------------------------------
   Accumulated
   Time (msec)   ...
   ============
        0.0275
        0.0191
        0.0201
        0.0091
        0.0797
        0.1161
The System NFS Calls Summary has the fields provided in Table 5-1.
Table 5-1 System NFS calls summary

Fields       Meaning
Opcode       The name of the NFS operation.
Count        The number of times the system NFS call of this type was executed.
% Tot Time   The total CPU time that the system spent processing the
             system NFS calls of this type, expressed as a percentage
             of the total processing time.
% Tot Count  The number of system NFS calls of this type, expressed as a
             percentage of the total count.
The Pending System NFS Calls Summary has the fields provided in Table 5-2.
Table 5-2 Pending NFS calls summary

Fields           Meaning
Sequence Number  The sequence number of the pending NFS operation
                 (NFS V2/V3).
Opcode           The opcode of the pending NFS operation (NFS V4).
...(lines omitted)...
NFSv4 Client RPC Statistics (by Server):
----------------------------------------
Server                     Calls/s
----------------------------------
nim                           0.04
------------------------------------------------------------------------
Total (all servers)           0.04

======================================================================
NFSv4 Server Statistics (by Client):
------------------------------------
                        Read     Write     Other
Client                 Ops/s     Ops/s     Ops/s
------------------------------------------------------------------------
nim                     0.00      0.00      0.14
------------------------------------------------------------------------
Total (all clients)     0.00      0.00      0.14
======================================================================
...(lines omitted)...
Detailed NFSv4 Client RPC Statistics (by Server):
-------------------------------------------------
SERVER: nim
calls:                  27
call times (msec):      avg 1.569   min 0.083   max 11.697   sdev 3.322

COMBINED (All Servers)
calls:                  27
call times (msec):      avg 1.569   min 0.083   max 11.697   sdev 3.322

======================================================================
Detailed NFSv4 Server Statistics (by Client):
---------------------------------------------
CLIENT: nim
writes:                 1
write times (msec):     avg 0.062   min 0.062   max 0.062    sdev 0.000
other ops:              82
other times (msec):     avg 0.448   min 0.001   max 11.543   sdev 2.012

COMBINED (All Clients)
writes:                 1
write times (msec):     avg 0.062   min 0.062   max 0.062    sdev 0.000
other calls:            82
other times (msec):     avg 0.448   min 0.001   max 11.543   sdev 2.012
...(lines omitted)...
available to them at the same location in the effective address space as the
global shared library area (segments 0xD and 0xF). The tprof command is now
aware of named shared library areas.
For more information about the named shared library area, see 1.6.3, Named
shared library areas (5300-03).
The cross partition monitor panel can be accessed using the C subcommand
from any other panel. Example 5-6 shows the cross partition monitor panel.
Example 5-6 The topas command cross partition monitor panel

Topas CEC Monitor                                 Interval: 10
Partitions      Memory (GB)               Processors
Shr: 2          Mon: 2.2  InUse: 1.0      Shr: 1  PSz: 1    Shr_PhysB: 0.01
Ded: 1          Avl: 4.0                  Ded: 1  APP: 1.0  Ded_PhysB: 0.00

Host         OS  M  Mem  InU  Lp  Us Sy Wa Id  PhysB  Ent  %EntC  Vcsw  PhI
-------------------------------------shared-----------------------------------
lpar01       A53 U  1.0  0.4   4   0  0  0 99   0.01  0.50   1.5   412    1
VIO_Server1  A53 U  0.5  0.3   4   0  0  0 99   0.01  0.50   1.4   409    0
------------------------------------dedicated---------------------------------
lpar04       A53 S  0.8  0.3   2   0  0  0 99   0.00
The display is split into two sections, the global region and the partition region.
Global region
This region represents aggregated data from the partition set and is split into
three sections: Partitions, Memory, and Processors. The fields have the
descriptions provided in Table 5-3.
Table 5-3 Global region fields

Field       Description
Partitions
  Shr       Number of shared partitions being monitored.
  Ded       Number of dedicated partitions being monitored.
Memory
  Mon       Total memory of the monitored partitions.
  Avl       Total memory available to the partition set. This field requires the
            partition to query information from the HMC; more details on this
            functionality are provided later.
  InUse     Total memory in use by the monitored partitions.
Processors
  Shr       Number of shared processors.
  Ded       Number of dedicated processors.
  Psz       Active physical processors in the shared pool.
  App       Available physical processors in the shared pool.
  Shr_PhysB Physical processors busy across the shared partitions.
  Ded_PhysB Physical processors busy across the dedicated partitions.
Partition region
This region displays information about individual partitions that are split
depending on whether they use shared or dedicated CPUs. The fields in this
region are provided in Table 5-4.
Table 5-4 Partition region fields

Field  Description
Host   Hostname of partition.
OS     Operating system level (for example, A53 indicates AIX 5L V5.3).
M      Operating mode of the partition. Possible values are:
       Shared partitions: C = capped, U = uncapped (lowercase indicates
       SMT enabled).
       Dedicated partitions: S = SMT enabled, blank = SMT disabled.
Mem    Total memory of the partition, in GB.
InU    Memory in use, in GB.
Lp     Number of logical processors.
Us     CPU user time, in percent.
Sy     CPU system time, in percent.
Wa     CPU I/O wait time, in percent.
Id     CPU idle time, in percent.
PhysB  Physical processors busy.
Ent    Entitled processing capacity.
%EntC  Percentage of entitled capacity consumed.
Vcsw   Virtual context switches per second.
PhI    Phantom interrupts per second.
Within the cross partition view there are additional subcommands available.
Pressing s and d toggles on and off the shared and dedicated partition views,
respectively, while pressing r forces the topas command to search for HMC
configuration changes if a connection is available. This includes the discovery of
new partitions, processors, or memory allocations. Finally, the g subcommand
toggles the detail level of the global region. Example 5-7 shows the detailed view.
Example 5-7 The topas command detailed partition view without HMC data

Topas CEC Monitor                                           Interval: 10
Partition Info     Memory (GB)        Processor
Monitored  : 3     Monitored  : 2.2   Monitored  : 2.0   Shr Physcl Busy: 0.01
UnMonitored:       UnMonitored:       UnMonitored:       Ded Physcl Busy: 0.00
Shared     : 2     Available  :       Available  :
Dedicated  : 1     UnAllocated:       UnAllocated:       Hypervisor
Capped     : 1     Consumed   : 1.0   Shared     : 1     Virt. Context Switch: 565
Uncapped   : 2                        Dedicated  : 1     Phantom Interrupts  :   0
                                      Pool Size  : 1
                                      Avail Pool : 1.0

Host         OS  M  Mem  InU  Lp  Us Sy Wa Id  PhysB  Ent  %EntC  Vcsw  PhI
-------------------------------------shared------------------------------------
VIO_Server1  A53 U  0.5  0.3   4   0  0  0 99   0.01  0.50   1.3   392    0
lpar01       A53 u  1.0  0.4   2   0  0  0 99   0.00  0.50   0.9   173    0
------------------------------------dedicated----------------------------------
lpar04       A53    0.8  0.3   1   0  0  0 99   0.00
Implementation
In this example, there are various fields which are incomplete. As of AIX 5L
Version 5.3 Technology Level 5, a partition is able to query the HMC to retrieve
further information regarding the CEC as a whole. With an HMC connection
configured, the detailed view is filled in, as shown in the following screen.
                                          Interval: 10
Memory (GB)             Processor
Monitored  : 2.2        Monitored  : 2.0
UnMonitored: 1.8        UnMonitored: 0.0
Available  : 4.0        Available  : 2
UnAllocated: 1.4        UnAllocated: 0.0
Consumed   : 1.0        Shared     : 1
                        Dedicated  : 1
                        Pool Size  : 1
                        Avail Pool : 1.0

Host         OS  M  Mem  InU  Lp  Us Sy Wa Id  PhysB  Ent  %EntC  Vcsw  PhI
-------------------------------------shared------------------------------------
VIO_Server1  A53 U  0.5  0.3   4   0  0  0 99   0.01  0.50   1.2   379    0
lpar01       A53 u  1.0  0.4   2   0  0  0 99   0.00  0.50   1.0   227    0
------------------------------------dedicated----------------------------------
lpar04       A53    0.8  0.3   1   0  0  0 99   0.00
If you are unable to obtain this information dynamically from the HMC, you can
provide it as a command-line option, as described below.
Option          Description
-o availmem     Specifies the total memory available to the partition set, in GB.
-o unavailmem   Specifies installed memory that is unavailable to the
                partitions, in GB.
-o availcpu     Specifies the total processors available to the partition set.
-o unavailcpu   Specifies installed processors that are unavailable to the
                partitions.
-o partitions   Specifies the number of partitions defined on the CEC.
-o reconfig     Specifies the interval at which topas checks for HMC
                configuration changes.
-o poolsize     Specifies the shared processor pool size.
The topasout command converts topas recordings into reports; its main flags
are described below.

Flag     Description
-c       Specifies that topasout should format the output files in
         comma-separated format.
-s       Specifies that topasout should format the output files in a format
         suitable for input to spreadsheet programs.
-m type  Specifies the type of values to report: min (minimum value),
         max (maximum value), mean (mean value), stdev (standard
         deviation), set, or exp.
-R type  Processes cross-partition recordings made with topas -R; report
         types include summary, detailed, and disk.
The options in Table 5-7 apply only to the reports generated by topas -R. If the
-b and -e options are not used, the daily recording is processed from
beginning to end.
Table 5-7 The topas specific command options

Flag     Description
-i MM    Splits the recording reports into equal-size time periods. This must
         be a multiple of the recording interval, which by default is 5 minutes.
         Allowed values are 5, 10, 15, 30, and 60.
-b HHMM  Begin time in hours (HH) and minutes (MM). The range is between
         0000 and 2400.
-e HHMM  End time in hours (HH) and minutes (MM). The range is between
         0000 and 2400, and the end time must be greater than the begin time.
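As a sketch of producing a summary report from a cross-partition recording
(the recording file name is illustrative; topas -R recordings are typically
written under /etc/perf):
# topasout -R summary -i 60 -b 0900 -e 1700 /etc/perf/topas_cec.061128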
Monitor: xmtrend
Time="2006/11/29 ...
...(lines omitted)...
Example 5-10 Summary output generated by topasout from topas -R data file

Report: CEC Summary --- hostname: nim           version:1.1
Start:11/28/06 00:00:21  Stop:11/28/06 14:24:21  Int:60 Min  Range: 864 Min
Partition Mon: 2  UnM: 0  Shr: 2  Ded: 0  Cap: 1  UnC: 1
-CEC------ -Processors------------------------- -Memory (GB)-----------
Time  ShrB DedB  Mon UnM Avl UnA Shr Ded PSz APP  Mon UnM Avl UnA InU
01:00 0.01 0.00  0.7 0.0 0.0   0 0.7   0 2.0 2.0  1.0 0.0 0.0 0.0 0.6
02:01 0.01 0.00  0.7 0.0 0.0   0 0.7   0 2.0 2.0  1.0 0.0 0.0 0.0 0.6
03:01 0.01 0.00  0.7 0.0 0.0   0 0.7   0 2.0 2.0  1.0 0.0 0.0 0.0 0.6
04:01 0.01 0.00  0.7 0.0 0.0   0 0.7   0 2.0 2.0  1.0 0.0 0.0 0.0 0.6
05:01 0.01 0.00  0.7 0.0 0.0   0 0.7   0 2.0 2.0  1.0 0.0 0.0 0.0 0.6
...(lines omitted)...
It is also possible to generate a topas command screen output from topas -R
data. Example 5-11 gives a sample screen.
Example 5-11 Output generated by topasout from topas -R data files

Report: CEC Detailed --- hostname: nim           version:1.1
Start:11/30/06 08:38:33  Stop:11/30/06 09:39:33  Int:60 Min  Range: 61 Min
Time: 09:38:32
-----------------------------------------------------------------
Partition Info   Memory (GB)        Processors
Monitored  : 3   Monitored  : 2.2   Monitored  : 2.0   Shr Physcl Busy: 0.05
UnMonitored: 0   UnMonitored: 0.0   UnMonitored: 0.0   Ded Physcl Busy: 0.00
Shared     : 2   Available  : 0.0   Available  : 0.0
Dedicated  : 1   UnAllocated: 0.0   Unallocated: 0.0   Hypervisor
Capped     : 1   Consumed   : 1.4   Shared     : 1.0   Virt Cntxt Swtch: 1642
UnCapped   : 2                      Dedicated  : 1.0   Phantom Intrpt  :   34
                                    Pool Size  : 1.0
                                    Avail Pool : 1.0

Host         OS  M  Mem  InU  Lp  Us Sy Wa Id  PhysB  Ent  %EntC  Vcsw  PhI
-------------------------------------shared------------------------------------
nim          A53 U  1.0  0.7   4   0  1  0 95   0.04  0.5   7.07   926   33
VIO_Server1  A53 U  0.5  0.3   4   0  0  0 99   0.01  0.5   2.30   716    2
------------------------------------dedicated----------------------------------
appserver    A53 S  0.8  0.3   2   0  0  0 99   0.00
The iostat command has been enhanced with a -D flag that produces extended
disk service time and queue statistics, as in the following sample output:

hdisk0     xfer:  %tm_act      bps      tps     bread     bwrtn
                      0.1     2.0K      0.3     221.2      1.8K
           read:      rps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      4.7      0.2     25.2         0         0
           write:     wps  avgserv  minserv  maxserv  timeouts     fails
                      0.3      7.5      0.8     51.0         0         0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                       7.8      0.0     1.4S      0.0      0.0     31816
The output is split into four sections: xfer, read, write, and queue. These are
discussed in the following sections.
The xfer section reports the following fields:
%tm_act   Percentage of time the device was active.
bps       Amount of data transferred per second, in bytes.
tps       Number of transfers per second.
bread     Amount of data read per second, in bytes.
bwrtn     Amount of data written per second, in bytes.

The read and write sections report the following fields:
rps/wps   Number of read/write transfers per second.
avgserv   Average service time per transfer.
minserv   Minimum service time per transfer.
maxserv   Maximum service time per transfer.
timeouts  Number of transfers that timed out.
fails     Number of failed transfer requests.

The queue section reports the following fields:
avgtime   Average time a transfer request spent in the wait queue.
mintime   Minimum time a transfer request spent in the wait queue.
maxtime   Maximum time a transfer request spent in the wait queue.
avgwqsz   Average wait queue size.
avgsqsz   Average service queue size.
sqfull    Number of times the service queue became full.
For all the fields, the outputs have a default unit. However, as values change
order of magnitude, different suffixes are provided to improve clarity. The possible
values are shown in Table 5-8.
Table 5-8 Possible suffixes for iostat -D command fields

Suffix  Description
K       1000 bytes.
M       1 000 000 bytes for data metrics; minutes for time metrics.
S       Seconds.
H       Hours.

Note: The M suffix is used both for data and time metrics. Therefore a value of
1.0 M could indicate 1 MB of data or 1 minute, depending on the context.
As with the standard iostat command, two report types can be generated. If the
command is run without an interval, the system generates a summary report of
the statistics since boot. If an interval is specified, then data is collected over the
given time period. For interval data, the values for the max and min fields
reported give the respective values for the whole data collection period rather
than that specific interval. In Example 5-13, the value of 77.4 ms for the queue
maxtime occurred during the second interval and was not surpassed in the third
so it is reported again. If the -R flag is specified, the command will report the
maximum value just for that interval.
Example 5-13 The iostat -D command interval output

-------------------------------------------------------------------------------
hdisk0     xfer:  %tm_act      bps      tps     bread     bwrtn
                      0.0      0.0      0.0       0.0       0.0
           read:      rps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      0.0      0.5      0.5         0         0
           write:     wps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      0.0      0.0      0.0         0         0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                       0.0      0.0      0.0      0.0      0.0         0
-------------------------------------------------------------------------------
hdisk0     xfer:  %tm_act      bps      tps     bread     bwrtn
                     20.0   208.9K     45.0       0.0    208.9K
           read:      rps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      0.0      0.5      0.5         0         0
           write:     wps  avgserv  minserv  maxserv  timeouts     fails
                     45.0      8.2      1.7     21.5         0         0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                      20.3      0.0     77.4      1.4      0.2        23
-------------------------------------------------------------------------------
hdisk0     xfer:  %tm_act      bps      tps     bread     bwrtn
                      0.0      0.0      0.0       0.0       0.0
           read:      rps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      0.0      0.5      0.5         0         0
           write:     wps  avgserv  minserv  maxserv  timeouts     fails
                      0.0      0.0      1.7     21.5         0         0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                       0.0      0.0     77.4      0.0      0.0         0
-------------------------------------------------------------------------------
Example 5-14 The iostat -D command adapter output

-------------------------------------------------------------------------------
           xfer:     Kbps      tps   bkread   bkwrtn  partition-id
                     11.5      1.7      1.0      0.7             0
           read:      rps  avgserv  minserv  maxserv
                      0.0    32.0S      0.2    25.3S
           write:     wps  avgserv  minserv  maxserv
                  11791.1      0.0      1.2    25.3S
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                       0.0      0.0      0.0      0.0      0.0         0
Paths/Disks:
hdisk0     xfer:  %tm_act      bps      tps    bread     bwrtn
                      0.5    11.8K      1.7     8.3K      3.5K
           read:      rps  avgserv  minserv  maxserv  timeouts     fails
                      1.0      4.2      0.2     22.9         0         0
           write:     wps  avgserv  minserv  maxserv  timeouts     fails
                      0.7      7.8      1.2     35.2         0         0
           queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz    sqfull
                       5.8      0.0    191.6      0.0      0.0       605
-------------------------------------------------------------------------------
The output reports data for the adapter and then all the devices attached to it. For
non-VSCSI adapters, the adapter statistics will not report the full detail of output,
but the disk devices will. The fields in the output are split into the same sections
as for the extended disk output; however, there are some extra fields and slight
alterations to the outputs. As with the disk statistics, different suffixes are used to
represent data of different orders of magnitude.
The adapter xfer section reports the following fields:
Kbps          Kilobytes transferred per second.
tps           Transfers per second.
bkread        Blocks read per second.
bkwrtn        Blocks written per second.
partition-id  ID of the partition that serves the virtual adapter.

The read and write sections report the avgserv, minserv, and maxserv service
times, and the queue section reports the mintime, maxtime, avgwqsz, avgsqsz,
and sqfull values, with the same meanings as for the extended disk output.
Example 5-15 shows the usage of the hpmcount command to check the activity
generated when issuing the ls command.
Example 5-15 The hpmcount command example
#/usr/pmapi/tools/hpmcount -s 1 ls
Sachin_xlc   lost+found   paging     sach_test.conf  spots
dump         lpp_source   resources  sach_test.txt   tmp
home         mksysbs      root       scripts
Execution time (wall clock time): 0.003478 seconds

######## Resource Usage Statistics ########
Total amount of time in user mode             : 0.000794 seconds
Total amount of time in system mode           : 0.001816 seconds
Maximum resident set size                     : 284 Kbytes
Average shared memory use in text segment     : 0 Kbytes*sec
Average unshared memory use in data segment   : 0 Kbytes*sec
Number of page faults without I/O activity    : 78
Number of page faults with I/O activity       : 0
Number of times process was swapped out       : 0
Number of times file system performed INPUT   : 0
Number of times file system performed OUTPUT  : 0
Number of IPC messages sent                   : 0
Number of IPC messages received               : 0
Number of signals delivered                   : 0
Number of voluntary context switches          : 0
Number of involuntary context switches        : 0
####### End of Resource Statistics ########

Set: 1
Counting duration: 0.003147321 seconds
PM_FPU_1FLOP (FPU executed one flop instruction)             :      0
PM_CYC (Processor cycles)                                    : 561038
PM_MRK_FPU_FIN (Marked instruction FPU processing finished)  :      0
PM_FPU_FIN (FPU produced a result)                           :     45
PM_INST_CMPL (Instructions completed)                        : 264157
PM_RUN_CYC (Run cycles)                                      : 561038

Utilization rate                             :  9.751 %
MIPS                                         : 75.951 MIPS
Instructions per cycle                       :  0.471
HW floating point instructions per Cycle     :  0.000
HW floating point instructions / user time   :  0.133 M HWflops/s
HW floating point rate (HW Flops / WCT)      :  0.013 M HWflops/s
When specified without command-line options, the hpmstat command counts the
default 1 iteration of user, kernel, and hypervisor (for processors supporting
hypervisor mode) activity for 1 second for the default set of events. It then writes
the raw counter values and derived metrics to standard output. By default,
runlatch is disabled so that counts can be performed while executing idle cycles.
hpmstat [-d] [-H] [-k] [-o file] [-r] [-s set] [-T] [-U] [-u]
interval count
or
hpmstat [-h]
Table 5-10 provides the hpmstat command parameters.
Table 5-10 The hpmstat command parameter details
Parameter  Description
-k         Counts activity in kernel mode.
-u         Counts activity in user mode.
-H         Counts activity in hypervisor mode.
-o file    Writes the output to the specified file.
-s set     Specifies the event set to count.
interval   Specifies the counting interval, in seconds.
count      Specifies the number of counting iterations.
Example 5-16 shows the usage of the hpmstat command to check the activities
of the system.
Example 5-16 The hpmstat command example
#/usr/pmapi/tools/hpmstat -s 7
Execution time (wall clock time): 1.000307 seconds

Set: 7
...                                          :    37114
...                                          : 17995529
...                                          :   874385
...                                          :  1835177
...                                          :  8820856
...                                          : 14684180

Utilization rate                             :  1.087 %
MIPS                                         :  8.818 MIPS
Instructions per cycle                       :  0.490
Total load and store operations              :  2.710 M
Instructions per load/store                  :  3.255
Number of loads per TLB miss                 : 49.447
Number of load/store per TLB miss            : 73.006
Starting with AIX 5L 5300-01, the vmo command has been enhanced to provide
more control over user memory placement across these pools. Memory can be
allocated in one of two ways, first-touch and round robin. With the first-touch
scheduling policy, memory is allocated from the chip module that the thread was
running on when it first touched that memory segment, which is the first page
fault. With the round-robin scheduling policy, memory allocation is striped across
each of the vmpools. The following parameters have been added to the vmo
command to control how different memory types are allocated and can either
have a value of 1, signifying the first touch scheduling policy, or 2, signifying the
round-robin scheduling policy.
memplace_data
memplace_mapped_file
memplace_shm_anonymous
memplace_shm_named
memplace_stack
memplace_text
memplace_unmapped_file
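For instance, a sketch of moving data and stack allocations to the round-robin
policy (persistent across reboots because of -p; the choice of tunables here is
purely illustrative):
# vmo -p -o memplace_data=2 -o memplace_stack=2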
(0)> mempool *
memp_frs+010000
memp_frs+010280
(0)>
VMP MEMP
00 000
00 001
NB_PAGES
0001672C
000174A1
FRAMESETS
000 001
002 003
NUMFRB
0000DEE2
0000E241
The fcstat command used in Example 5-18 displays statistics for the Fibre
Channel device driver fcs0. At this time, there are no flags associated with this
command.
Example 5-18 The fcstat command
#fcstat fcs0
FIBRE CHANNEL STATISTICS REPORT: fcs0
Device Type: FC Adapter (df1000f9)
Serial Number: 1E313BB001
Option ROM Version: 02C82115
Firmware Version: B1F2.10A5
Node WWN: 20000000C9487B04
Port WWN: 10000000C9416DA4
FC4 Types
Supported:
0x0000010000000000000000000000000000000000000000000000000000000000
Active:
0x0000010000000000000000000000000000000000000000000000000000000000
Class of Service: 4
Port FC ID: 011400
Port Speed (supported): 2 GBIT
Port Speed (running): 1 GBIT
Port Type: Fabric
Seconds Since Last Reset: 345422
Transmit Statistics Receive Statistics
------------------- -----------------Frames: 1 Frames: 1
Words: 1 Words: 1
LIP Count: 1
NOS Count: 1
Error Frames: 1
Dumped Frames: 1
Link Failure Count: 1
Loss of Sync Count: 1
Loss of Signal: 1
Primitive Seq Protocol Err Count: 1
Invalid Tx Word Count: 1
Invalid CRC Count: 1
The fields in the report have the following descriptions:

Field             Description
Device Type       Description of the adapter device.
Serial Number     Serial number of the adapter.
Firmware Version  Firmware (microcode) version of the adapter.
Node WWN          Worldwide name of the node.
Port FC ID        Fibre Channel ID of the port.
Port Type         Port type (for example, fabric or private loop).
Port Speed        Supported and running speed of the port.
Port WWN          Worldwide name of the port.
Statistics                      Description
Frames                          Number of frames transmitted and received.
Words                           Number of words transmitted and received.
LIP Count                       Number of loop initialization protocol events.
NOS Count                       Number of not-operational sequence events.
Error Frames                    Number of frames received with errors.
Dumped Frames                   Number of frames that were discarded.
Loss of Signal                  Number of times a loss of signal was detected.
IP over FC Adapter Driver
Information: No Adapter
Elements Count                  Number of times no adapter elements were
                                available.
IP over FC Traffic
Statistics: Input Requests      Number of IP input requests.
IP over FC Traffic
Statistics: Output Requests     Number of IP output requests.
IP over FC Traffic
Statistics: Control Requests    Number of IP control requests.
IP over FC Traffic
Statistics: Input Bytes         Number of IP bytes received.
IP over FC Traffic
Statistics: Output Bytes        Number of IP bytes sent.
FC SCSI Traffic
Statistics: Input Requests      Number of SCSI input requests.
FC SCSI Traffic
Statistics: Output Requests     Number of SCSI output requests.
FC SCSI Traffic
Statistics: Control Requests    Number of SCSI control requests.
FC SCSI Traffic
Statistics: Input Bytes         Number of SCSI bytes received.
FC SCSI Traffic
Statistics: Output Bytes        Number of SCSI bytes sent.
Note : Some adapters might not support a specific statistic. The value of
non-supported statistic fields is always 0.
Chapter 6. Networking and security
A timer wheel has N slots. A slot represents a time unit, named si (slot
interval). A cursor in the timing wheel moves one location every time unit (just
like the seconds hand of a clock). Whenever the cursor moves to a slot, named
cs (current slot), it implies that the list of timers in that slot, if any, expire at that
instant or when the cursor reaches the same slot in subsequent cycles.
When a new timer with a timer interval, named ti (time interval), is to be added
to this wheel, the slot for the new timer, named ts (timer slot), is calculated as:
ts = (cs + (ti / si)) % N
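As a worked example with illustrative values: if si = 10 ms, N = 8192 slots, and
the cursor is at cs = 5, a new timer with ti = 1.5 seconds is placed in slot
ts = (5 + 1500/10) % 8192 = 155.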
Assume that the maximum timer interval value that any timer takes on does not
exceed an upper limit, named tmax. If N is sufficiently large to accommodate
tmax within a rotation from the current cursor position, then when the cursor
moves to a specific slot, all timers in that slot expire at the same instant and
there are no subsequent cycles. This prevents traversing the list to check which
of the timers expire now and which of them expire in subsequent cycles.
The timer wheel has the following attributes:
The maximum value for RTO is 64 seconds (tmax).
The least granularity for the timer is 10 ms.
sets the DIO option. Without the mount option, you can also enable DIO per file
by using the AIX O_DIRECT flag in the open() call. For more information about
DIO, see the -o option for the mount command.
NFSv4 replication
Replication allows the hosting of data on multiple NFSv4 servers. Multiple copies
of the same data placed on different servers are known as replicas, whereas
servers holding copies of data are known as replica servers. The unit of
replication used in NFSv4 replication is a mounted file system. The NFSv4 server
communicates a location data list to all NFSv4 clients, so the AIX 5L NFSv4
client becomes replica aware. In the event of primary data server failure, the
client switches to an alternate location. The location list from the server is
assumed to be ordered, with the first location the most preferred. The client may
override this with the nfs4cl prefer command. The default client fail-over
behavior is governed by the timeo NFS mount option. If the client cannot contact
the server in two NFS timeout periods, it will initiate fail-over processing to find
another server. Fail-over processing can be influenced with the nfso
replica_failover_time option.
A replica export can only be made if replication is enabled on the server. By
default, replication is not enabled. The chnfs command can be used to enable or
disable replication:
#chnfs -R {on|off|host[+hosts]}
Changing the replication mode can only be done if no NFSv4 exports are active.
Replicas can be exported using the replicas option with the exportfs -o
command.
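A minimal sketch, with illustrative host and path names (the replicas export
option takes an ordered dir@host list; verify the exact option spelling against
the exportfs documentation for your level):
# chnfs -R on
# exportfs -o vers=4,replicas=/export/data@nfs1:/export/data@nfs2 /export/data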
Server delegation
The AIX server supports read delegation, which is available with the 64-bit AIX
kernel. At the server, delegation can be controlled globally with the nfso
command (server_delegation), or per export with the new deleg option of the
exportfs command.
Server delegation can be disabled with the nfso -o server_delegation=0
command. Administrators can use the exportfs deleg=yes | no option to
disable or enable the granting of delegations on a per file system basis, which will
override the nfso command setting.
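For example (the export path is illustrative), delegation could be disabled
globally and then granted again for a single file system:
# nfso -o server_delegation=0
# exportfs -o vers=4,deleg=yes /export/projects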
Client delegation
The AIX 5L client supports both read and write delegation on both the 32-bit
and 64-bit kernels. Client delegation can be disabled with the nfso -o
client_delegation=0 command. Client delegation must be set before any
mounts take place on the client.
All delegation statistics can be extracted with the nfsstat -d command.
The grace period is managed with the chnfs command options shown below.

Command              Description
chnfs -g on | off    Enables or disables the grace period on the NFSv4 server.
chnfs -G
chnfs -x extend_cnt  Specifies the number of times the grace period can be
                     extended.

The NFSv4 subsystem uses runtime metrics (such as the time of the last
successful NFSv4 reclaim operation) to detect reclamation of the state in
progress, and extends the grace period for a length of time up to the duration of
the given number of iterations.
Figure: NFS proxy serving topology. A main data center NFS server holds the
local data; a remote data center and a data-less site each run an NFS server
with a proxy export (with cache) that serves local NFS v3 or v4 clients.
NFS proxy serving uses disk caching of accessed data to serve similar
subsequent requests locally with reduced network traffic to the back-end server.
Proxy serving can potentially extend NFS data access over slower or less reliable
networks with improved performance and reduced network traffic to the primary
server where the data resides. Depending on availability and content
management requirements, proxy serving can provide a solution for extending
NFS access to network edges without the need for copying data. You can
configure NFS proxy serving using the mknfsproxy command.
Proxy caching can be used with both the NFS version 3 and 4 protocols. The
protocol between the proxy and the connected clients must match the protocol
used between the proxy and the back-end server for each exported proxy
instance. Both reads and writes of data are supported in addition to byte range
advisory locks. Proxy serving requires the 64-bit kernel environment. The
Enhanced JFS file system (JFS2) is used as the cached file system with proxy
serving.
The ckfilt command checks the IP Security filter rules for syntax and
consistency; for example, to check the IP Version 4 filter rules:
%ckfilt -v4
length of this data has been increased to allow a maximum of 256 bytes.
Hexadecimal data is also allowed as a shared secret.
For more information, see RADIUS Server in AIX 5L Version 5.3 Security on the
following Web site:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/aixbman/security-kickoff.htm
PAM authentication is configured through attributes such as the following in the
usw stanza of /etc/security/login.cfg:
maxlogins = 32767
logintimeout = 60
auth_type = PAM_AUTH
NDAF domain
An NDAF domain consists of one or more administration clients (systems from
which an administrator can control the NDAF environment through the dmf
command), one or more NDAF-enabled NFS servers, and potentially, one or
more non-NDAF enabled NFS servers grouped around an NDAF administration
server.
All systems in the NDAF domain share the same user and group definitions. For
example, if NDAF is deployed using Kerberos security, all systems in the domain
are members of the same Kerberos realm. The NDAF domain and the NFSv4
domain must be the same domain.
In an NDAF domain, the NDAF administration server receives its process
information from commands run by one or more system administrators over a
command-line interface (CLI). The NDAF administration server initiates all NDAF
actions at the NFS data server systems that are part of the domain.
Data set
The basic unit of NDAF management is a data set. A data set is a directory tree.
NDAF creates data sets and manages their attributes, mounting, and contained
data.
Data sets, also called dsets, are manageable units of data that can be shared
across a network. They provide basic management for replicating network data
and inserting it into a namespace. They are linked together using data set
references to form the file namespace. NDAF supports thousands of data sets or
replicas across all managed servers. Read-only replicas of a data set can be
created, distributed, and maintained across multiple data servers. When the
master source for a collection of replicas is modified, the dmf update replica
command is used to propagate the changes to all replica locations. A given data
set can have as many as 24 replicas.
Data that is copied into data set directories is NFS-exported, but the data is not
visible until the data set is mounted with the dmf mount dset command. File
system objects in a data set include files, directories, access control lists (ACLs),
links, and more.
Unless specified when they are created, data sets are created in the directory
specified when the dms daemon is started by the -ndaf_dataset_default
parameter or, if unspecified, the -ndaf_dir parameter.
Cell
Data sets can be grouped with other data sets and organized into a single file
namespace. This grouping is called a cell.
A cell is a unit of management and namespace that is hosted by an
administration server. After a cell is defined on an administration server, more
data sets can be created on that server using that cell. Each cell in an
administration server is independent of all other cells hosted by that
administration server. A cell contains its own namespace, consisting of data sets,
and its own role-based security objects. Roles are privileges attached to a set of
users that manage the resources within a cell. As many as eight distinct roles
can be defined for each cell.
After a cell is created using the dmf create cell name command, it is
automatically placed on the administration server. You cannot use the dmf place
cell name command to place a cell on the administration server. Placing a cell
results in the transfer of the cell's root directory information from the
administration server to the targeted data server. A cell can place its data sets on
any server defined for the administration server on which the cell is hosted.
NFSv4 clients mount the root directory of the cell to access the cell's full
namespace.
All NFSv4 clients can view the objects within a cell by mounting the root of the
cell from any NDAF server on which the cell has been placed.
NDAF supports up to 64 cells for every deployed NDAF instance (domain) that
has cells residing on one or more data servers. When a cell is destroyed, all its
data sets and replicas are also destroyed.
Replicas
Read-only copies of data sets are called replicas. A replica is placed in the global
namespace in the same way as a data set.
can be placed on different servers so that if the primary server of a replica
becomes unavailable to the client, the client can automatically access the same
files from a different server. Replicas will not reflect updates that are made to the
data set unless the replica is updated using the dmf update replica command.
A given data set can have as many as 24 replicas.
Unless specified when they are created, replicas are created in the directory
specified when the dms daemon is started by the -ndaf_replica_default
parameter or, if unspecified, the -ndaf_dir parameter.
Replica clones
For replicas, the dmf place replica command creates a clone of the replica at a
specified location on the server.
If the replica is mounted in the cell, this clone location is added to the NFS replica
list that is returned to the NFS clients that are accessing the replica. For more
information, see NFS replication and global namespace. The order of the
referrals in this list depends on the network configuration. Every clone location of
a replica is updated asynchronously upon dmf update commands. The dmf place
replica command takes as parameters the server and, optionally, the local path
on the server.
A clone location of a replica can also be removed from a server.
Replication updates
The master replica is a read-only copy of the source data set, and the clones are
copies of the master replica. If the source data set is updated, the replicas are
not updated until explicitly done so using the dmf update replica command.
There are two methods of data transfer:
copy
Performs data transfer using full file tree copy. The copy
method implements the data transfer method plugin interface
and performs a data transfer operation by doing a complete
walk of the directory tree for the data set and transmitting all
objects and data to the target.
rsync
Performs data transfer by sending only the differences
between the files on the source and the target, reducing the
amount of data transmitted for updates.
Administration client
An administration client is any system in the network that has the ndaf.base.client
fileset installed on it from which the dmf command can be run.
The NDAF administration server receives its process information from
commands run by system administrators over a command-line interface. The
program name for this administration client is the dmf command.
Administration server
The NDAF administration server is a data server that runs daemon processes
and acts as the central point of control for the collection of NDAF data servers.
It receives commands from the system administrators who use the administration
client (the dmf command), the dms command, and the dmadm command.
Data server
A data server is the server that runs the daemon process (the dms daemon)
controlling an NFS file server and its associated physical file system (PFS). The
data provisioned with NDAF resides at the data server.
The dms process runs at each data server and carries out actions in underlying
file systems. It sets default directories, timeout values, level of logging, security
method used, Kerberos keytab path, Kerberos principal, and communication
ports on each data server. Only data servers within the NDAF domain can be
replicated.
Data server databases are created in a server subdirectory in the directory
specified by the -ndaf_dir parameter when the dms daemon is started. These
servers require 64-bit systems running the AIX 5L 64-bit kernel. The
administration server also serves as a data server.
Principal
A principal is an authorized NDAF user that Kerberos and other security methods
screen for during security checks.
Principals control how objects can be manipulated and by which operations.
Only the first user to run the dmf create admin command, called the
DmPrincipal, can create cells, servers, and roles. Additional NDAF principals can
be added to manage an object with the dmf add_to object DmPrincipal=login
command. All members of the DmPrincipal list are considered to be owners of
the object and can control it.
NDAF principals can also be removed using the dmf remove_from action.
Figure: NDAF domain. Administration clients (dmf) issue commands to the
administration server (dmadm), which hosts the cell; NFSv4 clients access the
cell's namespace from the domain's servers.
The dmf command takes one or more of the following optional parameter values:
-V
-U
-R
[-log_level=val]       Sets the logging level; val is one of: critical errors,
                       errors, warnings, notices, or information.
[-security=val]        Sets the security method; val is one of: auth_sys,
                       krb5, krb5i, or krb5p.
[-krb5_principal=val]  Sets the Kerberos principal to use.
The daemons accept the following optional parameters:

[-admin_port=val]      Sets the port on which the administration (dmadm)
                       daemon listens.
[-serv_port=val]       Sets the port on which the dms daemon listens.
[-serv_serv_port=val]  Sets the dms port waiting for the other dms RPC.
                       Default value is 28003.
[-admin_cb_port=val]   Sets the port used for callbacks to the administration
                       server.
[-ndaf_dir=val]        Sets the base directory used for NDAF databases and
                       data sets.
[-krb5_keytab=val]     Sets the path of the Kerberos keytab file.
[-krb5_principal=val]  Sets the Kerberos principal to use.
[-log_level=val]       Sets the logging level; val is one of: critical errors,
                       errors, warnings, notices, or information.
[-security=val]        Sets the security method; val is one of: auth_sys,
                       krb5, krb5i, or krb5p.
-q
-s
The daemons are managed with the following commands:
rmndaf   Stops the NDAF daemons and unconfigures NDAF on a system.
chndaf   Changes the NDAF daemon configuration parameters listed above.
dmf      Administers the objects in the NDAF domain.
Roles
Roles are privileges attached to a set of NDAF principals for managing the
resources within a cell. NDAF roles are a distinct function separate from AIX
administrative roles.
Note: For more information and examples on NDAF security, refer to the IBM
AIX Information Center publication on NDAF at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/topic/com.i
bm.aix.ndaf/doc/ndaf/NDAF_security.htm
Installing NDAF
To install NDAF, the system must have IBM AIX 5L Version 5.3 with the 5300-05
Technology Level or greater installed. The system must be using the 64-bit
kernel.
NDAF servers must be configured as NFSv4 servers.
The NDAF administration server and all NDAF data servers and administration
clients must be configured as Kerberos clients.
In order to communicate correctly, each server must be aware of the ports the
other servers are listening on and emitting to.
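A quick check of these prerequisites might look like the following sketch
(standard AIX commands; the fileset pattern is illustrative):
# bootinfo -K          reports 64 when the 64-bit kernel is running
# lslpp -l "ndaf.*"    lists the NDAF filesets installed on the system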
A given system can assume one of three roles in an NDAF domain. Different
pieces of the ndaf.base fileset must be installed depending on the roles. The
roles are:
Administration server   For this system, ndaf.base.admin and ndaf.base.server
                        must be installed. There is only one administration server
                        for a federation of servers.
Data servers            For these systems, ndaf.base.server must be installed.
Administration clients  For these systems, ndaf.base.client must be installed.

You can use the dmf command to create the administration server:
dmf create admin name [-r]
where name is the name of the new administration server.
Note: Entering dmf create admin my_admin also creates the my_admin server
object.
You can use the dmf command to add a data server:
dmf create server name dns_target [-e] [-r] [-a admin_server]
The flags have the following definitions:
-a admin_server  Specifies the administration server that manages the domain.
dns_target       Specifies the DNS name or IP address of the new data server.
-e
name             Specifies the name of the new data server.
-r
To add the NDAF administration server using SMIT, perform the following steps:
1. From the NDAF menu, select NDAF Management > Administration Server
Management > Create Admin Server.
Note: You can also use the ndafadmincreate fastpath.
2. Specify the DNS name or IP address of the administration server in the
Admin Server DNS name field and press Enter.
3. Specify a name for the new administration server in the Admin Server name
field and press Enter.
To add an NDAF data server using SMIT, perform the following steps:
1. From the NDAF menu, select NDAF Management > Data Server Management >
Create Data Server.
Note: You can also use the ndafdscreate fastpath.
2. Specify the DNS name or IP address of the administration server in the
Admin Server DNS name field.
3. Specify a name for the new data server in the Data Server name field and
press Enter.
4. Specify the DNS name or IP address of the new data server in the Data
Server DNS name field and press Enter.
Creating a cell
You can create a cell for use with NFSv4 servers. You must create an NDAF
administration server before you can create a cell.
You can use the dmf command to create a cell:
dmf create cell name
The flags have the following definitions:
name        Specifies the name of the cell to create.
-r
-w timeout  Specifies how long to wait for the operation to complete.
-a admin_server
pattern
-r
-c container
-r
You can use the dmf command to change the attributes for a cell:
dmf set cell key=value [-r] [-a admin_server] [-c container]
The flags have the following definitions:
-a admin_server  Specifies the administration server that manages the domain.
-c container     Specifies the container (cell) that holds the object.
key=value        Specifies the attribute to change and its new value.
-r
Perform the following steps to show and change the attributes for a specific cell
using SMIT:
1. Select NDAF > NDAF Management > Namespace (cell) Management >
Change/show cell attributes.
2. Specify the name of the administration server that manages the NDAF
domain in the Admin name field.
3. Enter the name of the cell in the Cell name field (or choose one from the list
by pressing F4). The following attributes are displayed:
Admin server DNS name (or IP address)
Specifies the DNS name or IP address of the administration server that
manages the NDAF domain.
Admin server name
Specifies the name of the administration server that manages the NDAF
domain.
Cell name
Specifies the name of the cell.
Cell UUID
Specifies the UUID for the cell.
Maximum number of reported locations
Specifies the maximum number of NFS location referrals that can be
returned to an NFS client for an object.
NDAF principals
Enter the list of users, separated by commas, directly in the input field.
Users from this list are owners of this cell and can manipulate the cell.
Perform the following steps to enable a cell to use a data server to host the cell's
data set using SMIT:
1. From the NDAF menu, select NDAF Management > Namespace (cell)
Management > Add Server to a Cell Namespace.
2. Specify the name of the administration server that manages the NDAF
domain in the Admin name field and press Enter.
3. Enter the name of the cell in the Cell Name field (or choose one from the list
by pressing F4) and press Enter.
4. Enter the name of the data server in the Data server name field (or choose
one from the list by pressing F4) and press Enter.
Perform the following steps to prevent a cell from using a data server to host the
cell's data sets using SMIT:
1. From the NDAF menu, select NDAF Management > Namespace (cell)
Management > Remove Server from a Cell Namespace.
2. Specify the name of the administration server that manages the NDAF
domain in the Admin name field and press Enter.
3. Enter the name of the cell in the Cell Name field (or choose one from the list
by pressing F4) and press Enter.
4. Enter the name of the data server in the Data server name field (or choose
one from the list by pressing F4) and press Enter.
Problem: Writes to a data set fail.
Action: The DmMode for the data set might not permit writes. To fix this, use:
dmf set dset DmMode=<required mode>
Problem: Cannot specify a directory when creating a data set.
Action: This feature is not supported.
Problem: Clients do not fail over from the source data set to a replica.
Action: Clients will fail over from one replica to another, but not from the
source data set to a replica.
NDAF checker
To help diagnose problems, NDAF provides commands to check the consistency
of the databases used to manage the NDAF objects on administration and data
servers.
You can use the following dmf command to check the validity and
consistency of the administration server database:
dmf check_adm admin [-r] [-a admin_server]
You can use the following dmf command to check the validity and consistency of
the data server database on the specified data server or on every managed data
server if none is specified:
dmf check_serv server [-r] [-a admin_server] [-c container]
You can use the following dmf command to check the consistency of the
database on the administration server with the database on the specified data
server or with the databases on every managed data server if none is specified:
dmf check_adm_serv {admin|server} [-r] [-a admin_server] [-c container]
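For example, assuming an administration server object named my_admin (as created
earlier in this chapter), the databases could be checked and cross-checked with:
dmf check_adm my_admin
dmf check_adm_serv my_admin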
If the data from this command indicates inconsistencies between the databases,
it might be necessary to recover from a backup to restore correct behavior. See
NDAF data backup and NDAF data recovery for more information.
Example 6-3 shows sample output from running the dmf command:
Example 6-3 The dmf command output
The AIX Security Expert view of security levels is derived in part from the
National Institute of Standards and Technology document Security Configuration
Checklists Program for IT Products - Guidance for CheckLists Users and
Developers:
http://csrc.nist.gov/checklists/download_sp800-70.html
However, High, Medium, and Low level security mean different things to different
people. It is important to understand the environment in which your system
operates. If you choose a security level that is too high, you could lock yourself
out of your computer. If you choose a security level that is too low, your
computer might be vulnerable to a cyber attack.
The following coarse-grain security settings are available:
High Level Security
Advanced Security
Miscellaneous group
Check password definitions for High Level Security, Medium Level Security,
and Low Level Security.
Enable X-Server access for High Level Security, Medium Level Security, and
Low Level Security.
Check user definitions for High Level Security, Medium Level Security, and
Low Level Security.
Remove dot from non-root path for High Level Security and AIX Standard
Settings.
Check group definitions for High Level Security, Medium Level Security, and
Low Level Security.
Remove guest account for High Level Security, Medium Level Security, and
Low Level Security.
TCB update for High Level Security, Medium Level Security, and Low Level
Security.
Running aixpert with only the -l flag set implements the security settings
immediately without letting the user configure them. For example, running
aixpert -l high applies all the high-level security settings to the system
automatically. Running aixpert -l with the -n -o filename options instead saves
the security settings to the file specified by the filename parameter. The user
can then use the -v flag to view the settings in the file. The -f flag then
applies the new configuration.
Note: It is recommended that aixpert be rerun after any major systems
changes, such as the installation or updates of software. If a particular security
configuration item is deselected when aixpert is rerun, that configuration item
is skipped.
The following are the useful flags and their definitions:
-a  The settings with the associated level security options are written
    in abbreviated file format to the file specified by the -o flag.
-c  Checks the security settings that have been applied to the system.
-e  The settings with the associated level security options are written
    in expanded file format to the file specified by the -o option.
-f  Applies the security settings from the file specified by the filename
    parameter.
-l  Sets the system security level (for example, high, medium, or low)
    and applies the corresponding settings immediately.
-o  Writes the security settings to the file specified by the filename
    parameter.
-u  Undoes the security settings that have been applied.
-v  Displays the security settings in the file specified by the filename
    parameter.
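As a sketch of the save-then-apply workflow described above (the file path is
only an example location):
aixpert -l medium -n -o /etc/security/aixpert/custom/mySettings.xml
aixpert -v /etc/security/aixpert/custom/mySettings.xml
aixpert -f /etc/security/aixpert/custom/mySettings.xml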
To start the graphical user interface to step through the security settings in wizard
fashion, type:
aixpert
To write all of the high level security options to an output file, type:
aixpert -l high -o /etc/security/aixpert/plugin/myPreferredSettings.xml
Note: After completing this command, the output file can be edited, and
specific security roles can be commented out by enclosing them in the
standard XML comment strings (<!-- begins the comment and --> closes the
comment).
To apply the security settings from a configuration file, type:
aixpert -f /etc/security/aixpert/plugin/myPreferredSettings.xml
To view the security settings that have been applied to the system, type:
aixpert -v /etc/security/aixpert/core/AppliedAixpert.xml
AIX Security Expert can be run from SMIT with the fastpath smit aixpert.
Figure 6-4 shows the AIX Security Expert SMIT menu that is displayed.
The Web-based System Manager GUI for AIX Security Expert is recommended
because it is especially user-friendly for working with the tool, as shown in
Figure 6-5.
The AIX Security Expert tool is installed with the bos.aixpert.cmds fileset, which
is part of the 5300-05 Technology Level. It is also available as a fix download at
the following Web site:
http://www7b.boulder.ibm.com/aix/fixes/byCompID/5765G0300/bos.aixpert/bos.aixpert.cmds.5.3.0.0.bff
For more information on AIX Security Expert, please refer to the IBM AIX
Information Center at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.security/doc/security/aix_sec_expert_pwd_policy_settings.htm
Chapter 7.
2. Type the hostname or IP address of the machine that you want to be a thin
server. This machine will be added to the NIM environment and to the NIM
database.
3. Select the Common OS Image Name for the thin server.
4. Provide NIM the network information and press Enter to proceed.
It will take a few minutes for NIM to add the thin server to the database and
to prepare the other data.
AIX 5L Version 5.3 with TL 5300-05 provides a useful function that allows a thin
server to switch to a different common image. The administrator has the option of
allowing the thin server to switch over to a different common OS image now, at a
specific time, or at the client's own convenience. You can use the following SMIT
fastpath to achieve this:
smit swts
Type the time, in the at command format, to switch the thin server over to the
specified common OS image in the Specify Time to Switch Thin Server field. At
the specified time, the thin server will mount the common OS image as its /usr file
system. If Specify Time to Switch Thin Server and Allow Thin Server to Switch
Itself are not specified, the switch occurs immediately.
8. The initiate installation on LPARs phase reboots each LPAR via the control host
(HMC partition) and initiates the installation.
Note: This phase ends when the installation begins. The actual progress of
the installation is not monitored.
9. Post-migration software assessment
Assesses the overall success of the migration after each installation, and reports
on any software migration issues. It may be necessary to manually correct the
errors reported for filesets that fail to migrate.
10. Post-installation customization
Performs a NIM customization operation on each client with the values provided
if an alternate lpp_source, fileset list, or customization script was provided to the
nim_move_up application. This allows for the optional installation of additional
software applications or for any additional customization that may be needed.
You can use other panels to interact with the nim_move_up application, in addition
to the Configure nim_move_up Input Values panel and the Execute
nim_move_up Phases panel. They are as follows:
Unconfigure nim_move_up
This SMIT panel provides an interface for unconfiguring the nim_move_up
environment. Unconfiguring the environment removes all state information,
including what phase to execute next, saved data files generated as a result of
the execution of some phases, and all saved input values. Optionally, all NIM
resources created through nim_move_up can also be removed. Using this panel is
the equivalent of using the -r command-line option.
System Firmware
Adapter/Component Microcode that can be upgraded
Table 7-1 gives details on the geninv command usage, using the following
syntax:
geninv { -c | -l } [-D] [-P <protocol> | -H <host>]
Table 7-1 The geninv command parameter details
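For example, a hypothetical run to list the installed software and microcode
inventory of a remote host named lpar01 might look like:
geninv -l -H lpar01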
Selecting the highlighted text brings you to the panel shown in Figure 7-5.
Selecting the highlighted item brings you to the panel shown in Figure 7-6.
The multibos command provides operations to set up, access, maintain, update,
and customize this new instance of the BOS.
The setup operation creates a standby BOS that boots from a distinct Boot
Logical Volume (BLV). This creates two bootable instances of the BOS on a given
rootvg. You can boot from either instance of a BOS by specifying the respective
BLV as an argument to the bootlist command, or using system firmware boot
operations.
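For example, assuming the standby BLV is named bos_hd5 on hdisk0 (the default
naming shown in the sample output later in this section), the standby instance
could be selected for the next boot with:
bootlist -m normal hdisk0 blv=bos_hd5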
You can simultaneously maintain two bootable instances of a BOS. The instance
of a BOS associated with the booted BLV is the active BOS. The instance of a
BOS associated with the BLV that has not been booted is the standby BOS. Only
two instances of BOS are supported per rootvg.
The multibos command allows you to access, install, maintain, update, and
customize the standby BOS either during setup or during any subsequent
customization operations. Installing maintenance or technology level updates to
the standby BOS does not change system files on the active BOS. This allows for
concurrent update of the standby BOS, while the active BOS remains in
production.
The multibos utility has the ability to copy or share logical volumes and file
systems. By default, the multibos utility copies the BOS file systems (currently
the /, /usr, /var, /opt, and /home directories), associated log devices, and the boot
logical volume. The administrator has the ability to share or keep private all other
data in the rootvg.
Note: As a general rule, shared data should be limited to file systems and
logical volumes containing data not affected by an upgrade or modification of
private data.
Copies of additional BOS objects (see the -L flag) can also be made. All other
file systems and logical volumes are shared between instances of the BOS.
Separate log device logical volumes (those not contained within the file system)
are not supported for copy and will be shared.
Note: Automatic file system expansion
Run all multibos operations with the -X auto-expansion flag. This flag
allows for automatic file system expansion on any multibos command
execution, if additional space is necessary to perform multibos-related tasks.
Flag Definition
-a   Specifies the update_all install option during a customization
     operation.
-B   Build boot image operation. The standby boot image is created and
     written to the standby BLV using the bosboot command.
-b   File lists the filesets to be installed during the setup or
     customization operation.
-c   Customization operation. Updates the standby BOS with the specified
     filesets, fixes, or updates.
-e   File lists active BOS files to be excluded during the setup operation
     in regular expression syntax.
-f   File lists fixes (such as APARs) that are to be installed during the
     setup or customization operation. The list's syntax follows instfix
     conventions.
-i   File specifies an alternate image.data file to be used during the
     setup operation.
-L   File lists additional logical volumes to be included in the standby
     BOS.
-l   Specifies the device or directory that contains the installation
     images used during a customization operation.
-m   Mount operation. Mounts all standby BOS file systems.
-N   Does not perform boot image processing; the standby boot image is not
     created or written to the standby BLV.
-n   Does not perform cleanup upon failure. This option is useful to retain
     multibos data after a failed operation.
-p   Preview operation. Shows what the requested operation would do without
     making changes.
-R   Remove operation. Removes all standby BOS objects.
-S   Shell operation. Starts an interactive shell with chroot access to the
     standby BOS file systems.
-s   Setup operation. Creates the standby BOS.
-t   Skips bootlist processing; the boot order is not changed.
-u   Unmount operation. Unmounts all standby BOS file systems.
-X   Allows for automatic file system expansion if additional space is
     necessary to perform multibos-related tasks.
The following output shows a standby BOS setup operation with automatic
file system expansion enabled:
# multibos -s -X
Initializing multibos methods ...
Initializing log /etc/multibos/logs/op.alog ...
Gathering system information ...
+---------------------------------------------------------------------+
Setup Operation
+---------------------------------------------------------------------+
Verifying operation parameters ...
Creating image.data file ...
+---------------------------------------------------------------------+
Logical Volumes
+---------------------------------------------------------------------+
Creating standby BOS logical volume bos_hd5
Creating standby BOS logical volume bos_hd4
Creating standby BOS logical volume bos_hd2
Creating standby BOS logical volume bos_hd9var
Creating standby BOS logical volume bos_hd10opt
+---------------------------------------------------------------------+
File Systems
+---------------------------------------------------------------------+
Creating all standby BOS file systems ...
Creating standby BOS file system /bos_inst on logical volume bos_hd4
Creating standby BOS file system /bos_inst/usr on logical volume
bos_hd2
Creating standby BOS file system /bos_inst/var on logical volume
bos_hd9var
Creating standby BOS file system /bos_inst/opt on logical volume
bos_hd10opt
+---------------------------------------------------------------------+
Mount Processing
+---------------------------------------------------------------------+
Mounting all standby BOS file systems ...
Mounting /bos_inst
197
7414ch07.fm
Mounting /bos_inst/usr
Mounting /bos_inst/var
Mounting /bos_inst/opt
+---------------------------------------------------------------------+
BOS Files
+---------------------------------------------------------------------+
Including files for file system /
Including files for file system /usr
Including files for file system /var
Including files for file system /opt
Copying files
Percentage of files copied: ...
...
Percentage of files copied: ...
+---------------------------------------------------------------------+
Boot Partition Processing
+---------------------------------------------------------------------+
Active boot logical volume is hd5.
Standby boot logical volume is bos_hd5.
Creating standby BOS boot image on boot logical volume bos_hd5
bosboot: Boot image is 30420 512 byte blocks.
+---------------------------------------------------------------------+
Mount Processing
+---------------------------------------------------------------------+
Unmounting all standby BOS file systems ...
Unmounting /bos_inst/opt
Unmounting /bos_inst/var
Unmounting /bos_inst/usr
Unmounting /bos_inst
+---------------------------------------------------------------------+
Bootlist Processing
+---------------------------------------------------------------------+
Verifying operation parameters ...
Setting bootlist to logical volume bos_hd5 on hdisk0.
ATTENTION: firmware recovery string for standby BLV (bos_hd5):
boot /pci@800000020000003/pci@2,4/pci1069,b166@1/scsi@0/sd@3:4
ATTENTION: firmware recovery string for active BLV (hd5):
boot /pci@800000020000003/pci@2,4/pci1069,b166@1/scsi@0/sd@3:2
9. The standby boot image is created and written to the standby BLV using the
bosboot command. You can block this step with the -N flag. Only use the -N
flag if you are an experienced administrator and have a good understanding
of the AIX boot process.
10.The standby BLV is set as the first boot device, and the active BLV is set as
the second boot device. You can skip this step using the -t flag.
The following output shows the standby BOS mount operation:
# multibos -m -X
Initializing multibos methods ...
Initializing log /etc/multibos/logs/op.alog ...
Gathering system information ...
+---------------------------------------------------------------------+
BOS Mount Operation
+---------------------------------------------------------------------+
Verifying operation parameters ...
+---------------------------------------------------------------------+
Mount Processing
+---------------------------------------------------------------------+
Mounting all standby BOS file systems ...
Mounting /bos_inst
Mounting /bos_inst/usr
Mounting /bos_inst/var
Mounting /bos_inst/opt
Log file is /etc/multibos/logs/op.alog
Return Status = SUCCESS
The set of BOS objects, such as the BLV, logical volumes, file systems, and so on
that are currently booted are considered the active BOS, regardless of logical
volume names. The previously active BOS becomes the standby BOS in the
existing boot environment.
scriptlog.<timestamp>.txt
A log of commands being run during the current
shell operation.
scriptlog.<timestamp>.txt.Z
A compressed log of commands run during a
previous shell operation.
In addition, the boot log contains redundant logging of all multibos operations
that occur during boot (for example, the verify operation that attempts
synchronization from inittab).
Multibos private data is stored in the /etc/multibos/data directory, the logical
volume control block (LVCB) of logical volumes that were the source or target of a
copy, and the /etc/filesystems entries that were the source or target of a copy.
The following are examples of files found in the /etc/multibos/data directory:
acttag
sbyfslist
sbylvlist
sbytag
The following may be seen in the fs field of the logical volumes that were the
source or target of a copy:
mb=<TAG>:mbverify=<integer>
The following may be seen in /etc/filesystems as an attribute of file systems
that were the source or target of a copy:
mb = <TAG>
The user should not modify multibos private data.
To prevent multibos operations from working simultaneously, the directory
/etc/multibos/locks contains lock files. The following is an example of a file that
may be found in this directory:
global_lock : The process ID (PID) of the currently running multibos
operation.
If a multibos operation exited unexpectedly and was not able to clean up, it may
be necessary to remove this file. The user should check that the PID is not
running before removing this file.
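As a sketch (assuming the lock file contains only the PID), you could check
whether the recorded process still exists; if ps lists no process, the lock is
stale and can be removed:
ps -fp "$(cat /etc/multibos/locks/global_lock)"
rm /etc/multibos/locks/global_lock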
This inittab entry (if runnable) is removed upon removal of the standby BOS
using multibos -R.
Note: For more detailed information, refer to the latest
/usr/lpp/bos/README.multibos file and to the documentation for the multibos
command in the AIX Information Center.
The following matrix shows the supported migration paths:
From \ To            To 5.1   To 5.2   To 5.3
From 4.3 (4330-10)   Yes      Yes      Yes
From 5.1             No       Yes      Yes
From 5.2             No       No       Yes
From 5.3             No       No       No
A mksysb migration attempts to recover the sys0 attributes for the source system
as specified in the mksysb ODM, but no other device-specific data is recovered
from the source system.
Any user-supplied values for these variables are ignored.
The file should list the disks to be installed in the TARGET_DISK_DATA stanza to
ensure that only those disks are used. A mksysb migration is a combination of an
overwrite installation and a migration installation. The overwrite portion destroys
all of the data on the target disks. The TARGET_DISK_DATA stanza must have
enough information to clearly single out a disk. If you supply an empty
TARGET_DISK_DATA stanza, the default disk for the platform is used, if
available. The following examples show possible values for the
TARGET_DISK_DATA stanza:
Example 7-4 shows a record with disk names only (two disks):
Example 7-4 Record with disk names only
target_disk_data:
PVID =
PHYSICAL_LOCATION =
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME = hdisk0
target_disk_data:
PVID =
PHYSICAL_LOCATION =
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME = hdisk1
Example 7-5 shows a record with the physical location specified (one disk):
Example 7-5 Record with physical locations only
target_disk_data:
PVID =
PHYSICAL_LOCATION = U0.1-P2/Z1-A8
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME =
Example 7-6 shows a record by physical volume ID (PVID) (two disks):
Example 7-6 Record with PVIDs only
target_disk_data:
PVID = 0007245fc49bfe3e
PHYSICAL_LOCATION =
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME =
target_disk_data:
PVID = 00000000a472476f
PHYSICAL_LOCATION =
CONNECTION =
LOCATION =
SIZE_MB =
HDISKNAME =
Migration overview
The following is a list of the major tasks performed during a migration:
1. The target system boots from the 5300-03 media.
2. MKSYSB_MIGRATION_DEVICE is found in the customized bosinst.data file.
3. The device is checked to verify that it exists.
4. The mksysb migration banner is shown on the screen after the console is
configured.
5. The image.data is checked for correctness and the target disks are inspected
for availability and size.
6. The required mksysb migration files, namely image.data and /etc/filesystems,
are restored from the mksysb. The user is prompted to swap the CD/DVD if
required for CD-boot installs with the mksysb on CD/DVD. After restoration,
the user is asked to reinsert the product media.
7. The target logical volumes and file systems are created according to the
image.data file and mounted according to the /etc/filesystems file.
8. The mksysb data is restored. The user is prompted to swap CD/DVD if
required for CD Boot installs with mksysb on CD/DVD.
9. The system now looks like an imported rootvg. The installation continues as
a migration from here on.
Prerequisites
The following are the major prerequisites:
All requisite hardware, including any external devices (such as tape, CD, or
DVD-ROM drives), must be physically connected. For more information about
connecting external devices, see the hardware documentation that
accompanied your system.
Before you begin the installation, other users who have access to your system
must be logged off.
Verify that your applications run on AIX 5L Version 5.3. Also, verify that your
applications are binary compatible with AIX 5L Version 5.3. If your system is
an application server, verify that there are no licensing issues. Refer to your
application documentation or provider to verify on which levels of AIX your
applications are supported and licensed. You can also check the AIX
application availability guide at the following Web address:
http://www-1.ibm.com/servers/aix/products/ibmsw/list/
Verify that your hardware microcode is up-to-date.
There must be adequate disk space and memory available. AIX 5L Version
5.3 requires 256 to 512 MB of memory and 2.2 GB of physical disk space. For
additional release information, see the AIX 5.3 Release Notes at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.resources/53relnotes.htm
Make a backup copy of your system software and data. For instructions on
how to create a system backup, refer to Creating system backups at
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.install/doc/insgdrf/create_sys_backup.htm
in the IBM AIX Information Center. This backup is used during the mksysb
migration installation to restore your system files prior to migration.
If the source system is available, run the pre-migration script on it. Ignore any
messages that pertain to the hardware configuration of the source system
because the migration takes place on the target system. Correct any other
problems as recommended by the script.
Information about other software that is being installed is displayed. After the
BOS installation is complete, the system automatically reboots.
After the system has restarted, you are prompted to configure your installation of
the BOS.
Note: If there is not enough space to migrate all of the usually migrated
software, a collection of software called a migration bundle is available when
you install additional software later. You must create additional disk space on
the machine where you want to install the migration bundle, and then you can
run smit update_all to complete the installation where the migration bundle
is installed.
If you are not doing the installation from a graphics console, a Graphics_Startup
bundle is created. If the pre-migration script ran on the source system, run the
post-migration script and verify the output files.
If the source system is available, run the pre-migration script on it. Ignore any
messages that pertain to the hardware configuration of the source system
because the migration takes place on the target system. Correct any other
problems as recommended by the script.
Example scenarios
The following sections describe common scenarios that you may find useful.
mksysb to Client
A defined mksysb resource in the NIM environment is restored onto alternate file
systems on the NIM master. The data is then migrated and written out to
alternate file systems on the client's alternate disk. Free disk(s) are required
on the client.
The syntax for this operation is:
nimadm -s <spot> -l <lpp_source> -c <client> -d <disk(s)> -j <cache vg>
-T <mksysb NIM resource>
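For example, with hypothetical resource and disk names, migrating the mksysb
resource mksysb_52 onto spare disk hdisk1 of client lpar55, using rootvg as the
cache volume group, might look like:
nimadm -s spot_5305 -l lpp_5305 -c lpar55 -d hdisk1 -j rootvg -T mksysb_52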
Client to mksysb
Copies of the client's file systems are made on the NIM master. The data is then
migrated and backed up to produce a mksysb resource in the NIM environment.
The syntax for this operation is:
nimadm -s <spot> -l <lpp_source> -c <client> -O <mksysb> -j <cache vg>
mksysb to mksysb
A NIM mksysb resource is restored to alternate file systems on the master, the
data is migrated and then backed up to produce a NIM mksysb resource at the
new level.
The syntax for this operation is:
nimadm -s <spot> -l <lpp_source> -j <cache vg> -T <mksysb NIM resource> -O <mksysb NIM resource>
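Again with hypothetical resource names, producing a migrated mksysb_53 resource
from an existing mksysb_52 might look like:
nimadm -s spot_5305 -l lpp_5305 -j rootvg -T mksysb_52 -O mksysb_53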
8. The client's hardware and software should support the AIX 5L level that is
being migrated to and meet all other conventional migration requirements.
If you cannot meet requirements 1-7, you will need to perform a conventional
migration. If you cannot meet requirement 8, then migration is not possible.
When planning to use nimadm, ensure that you are aware of the following
considerations:
If the NIM client rootvg has TCB turned on, you will need to either
permanently disable it or perform a conventional migration. This situation
exists because TCB needs to access file metadata that is not visible over
NFS.
All NIM resources used by nimadm must be local to the NIM master.
Although there is almost no interference with the client's active rootvg during
the migration, the NIM client may experience a minor performance decrease
due to increased disk input/output, NFS biod activity, and some CPU usage
associated with alt_disk_install cloning.
A reliable network and NFS tuning will assist in optimizing nimadm network
performance.
Abbreviations and acronyms
AC      Alternating Current
ACL     Access Control List
AFPA    Adaptive Fast Path Architecture
AIO     Asynchronous I/O
AIX     Advanced Interactive Executive
APAR    Authorized Program Analysis Report
API     Application Programming Interface
ARP     Address Resolution Protocol
ASMI    Advanced System Management Interface
BFF     Backup File Format
BIND    Berkeley Internet Name Domain
BIST    Built-In Self-Test
BLV     Boot Logical Volume
BOOTP   Boot Protocol
BOS     Base Operating System
BSD     Berkeley Software Distribution
CA      Certificate Authority
CATE    Certified Advanced Technical Expert
CD      Compact Disk
CD-R    CD Recordable
CD-ROM  Compact Disk Read-Only Memory
CDE     Common Desktop Environment
CEC     Central Electronics Complex
CHRP    Common Hardware Reference Platform
CLI     Command-Line Interface
CLVM    Concurrent LVM
CPU     Central Processing Unit
CRC     Cyclic Redundancy Check
CSM     Cluster Systems Management
CUoD    Capacity Upgrade on Demand
DCM     Dual Chip Module
DES     Data Encryption Standard
DGD     Dead Gateway Detection
DHCP    Dynamic Host Configuration Protocol
DLPAR   Dynamic LPAR
DMA     Direct Memory Access
DNS     Domain Name System
DR      Dynamic Reconfiguration
DRM     Dynamic Reconfiguration Manager
DVD     Digital Versatile Disk
EC      EtherChannel
ECC     Error Checking and Correcting
EOF     End of File
EPOW    Environmental and Power Warning
ERRM    Event Response Resource Manager
ESS     Enterprise Storage Server
F/C     Feature Code
FC      Fibre Channel
FCAL    Fibre Channel Arbitrated Loop
FDX     Full Duplex
FLOP    Floating Point Operation
FRU     Field Replaceable Unit
FTP     File Transfer Protocol
GDPS    Geographically Dispersed Parallel Sysplex
GID     Group ID
GPFS    General Parallel File System
GUI     Graphical User Interface
HACMP   High Availability Cluster Multi-Processing
HBA     Host Bus Adapter
HMC     Hardware Management Console
HTML    Hypertext Markup Language
HTTP    Hypertext Transfer Protocol
Hz      Hertz
I/O     Input/Output
IBM     International Business Machines
ID      Identification
IDE     Integrated Device Electronics
IEEE    Institute of Electrical and Electronics Engineers
IP      Internetwork Protocol
IPAT    IP Address Takeover
IPL     Initial Program Load
IPMP    IP Multipathing
ISV     Independent Software Vendor
ITSO    International Technical Support Organization
IVM     Integrated Virtualization Manager
JFS     Journaled File System
L1      Level 1
L2      Level 2
L3      Level 3
LA      Link Aggregation
LACP    Link Aggregation Control Protocol
LAN     Local Area Network
LDAP    Lightweight Directory Access Protocol
LED     Light Emitting Diode
LMB     Logical Memory Block
LPAR    Logical Partition
LPP     Licensed Program Product
LUN     Logical Unit Number
LV      Logical Volume
LVCB    Logical Volume Control Block
LVM     Logical Volume Manager
MAC     Media Access Control
Mbps    Megabits Per Second
MBps    Megabytes Per Second
MCM     Multichip Module
ML      Maintenance Level
MP      Multiprocessor
MPIO    Multipath I/O
MTU     Maximum Transmission Unit
NFS     Network File System
NIB     Network Interface Backup
NIM     Network Installation Management
NIMOL   NIM on Linux
NVRAM   Non-Volatile Random Access Memory
ODM     Object Data Manager
OSPF    Open Shortest Path First
PCI     Peripheral Component Interconnect
PIC     Pool Idle Count
PID     Process ID
PKI     Public Key Infrastructure
PLM     Partition Load Manager
POST    Power-On Self-test
POWER   Performance Optimization with Enhanced RISC (Architecture)
PPC     Physical Processor Consumption
PPFC    Physical Processor Fraction Consumed
PTF     Program Temporary Fix
PTX     Performance Toolbox
PURR    Processor Utilization Resource Register
PV      Physical Volume
PVID    Physical Volume Identifier
QoS     Quality of Service
RAID    Redundant Array of Independent Disks
RAM     Random Access Memory
RAS     Reliability, Availability, and Serviceability
RCP     Remote Copy
RDAC    Redundant Disk Array Controller
RIO     Remote I/O
RIP     Routing Information Protocol
RISC    Reduced Instruction-Set Computer
RMC     Resource Monitoring and Control
RPC     Remote Procedure Call
RPL     Remote Program Loader
RPM     Red Hat Package Manager
RSA     Rivest-Shamir-Adleman
RSCT    Reliable Scalable Cluster Technology
RSH     Remote Shell
SAN     Storage Area Network
SCSI    Small Computer System Interface
SDD     Subsystem Device Driver
SMIT    System Management Interface Tool
SMP     Symmetric Multiprocessor
SMS     System Management Services
SMT     Simultaneous Multithreading
SP      Service Processor
SPOT    Shared Product Object Tree
SRC     System Resource Controller
SRN     Service Request Number
SSA     Serial Storage Architecture
SSH     Secure Shell
SSL     Secure Sockets Layer
SUID    Set User ID
SVC     SAN Virtualization Controller
TCP/IP  Transmission Control Protocol/Internet Protocol
TSA     Tivoli System Automation
UDF     Universal Disk Format
UDID    Unique Device Identifier
VG      Volume Group
VGDA    Volume Group Descriptor Area
VGSA    Volume Group Status Area
VIPA    Virtual IP Address
VLAN    Virtual Local Area Network
VP      Virtual Processor
VPD     Vital Product Data
VPN     Virtual Private Network
VRRP    Virtual Router Redundancy Protocol
VSD     Virtual Shared Disk
WLM     Workload Manager
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information about ordering these publications, see How to get IBM
Redbooks on page 226. Note that some of the documents referenced here may
be available in softcopy only.
AIX 5L Differences Guide Version 5.3 Edition, SG24-7463
Advanced POWER Virtualization on IBM eServer p5 Servers: Architecture
and Performance Considerations, SG24-5768
AIX 5L Practical Performance Tools and Tuning Guide, SG24-6478
Effective System Management Using the IBM Hardware Management
Console for pSeries, SG24-7038
IBM System p Advanced POWER Virtualization Best Practices, REDP-4194
Implementing High Availability Cluster Multi-Processing (HACMP) Cookbook,
SG24-6769
Introduction to pSeries Provisioning, SG24-6389
Linux Applications on pSeries, SG24-6033
Managing AIX Server Farms, SG24-6606
NIM from A to Z in AIX 5L, SG24-7296
Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
A Practical Guide for Resource Monitoring and Control (RMC), SG24-6615
Integrated Virtualization Manager on IBM System p5, REDP-4061
Other publications
These publications are also relevant as further information sources:
The following types of documentation are located through the Internet at the
following URL:
http://www.ibm.com/servers/eserver/pseries/library
User guides
System management guides
Application programmer guides
All commands reference volumes
Files reference
Technical reference volumes used by application programmers
Detailed documentation about the Advanced POWER Virtualization feature
and the Virtual I/O Server
https://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
AIX 5L V5.3 Partition Load Manager Guide and Reference, SC23-4883
Linux for pSeries installation and administration (SLES 9), found at:
http://www-128.ibm.com/developerworks/linux/library/l-pow-pinstall/
Linux virtualization on POWER5: A hands-on setup guide, found at:
http://www-128.ibm.com/developerworks/edu/l-dw-linux-pow-virutal.html
POWER5 Virtualization: How to set up the SUSE Linux Virtual I/O Server,
found at:
http://www-128.ibm.com/developerworks/eserver/library/es-susevio/
Online resources
These Web sites and URLs are also relevant as further information sources:
AIX 5L and Linux on POWER community
http://www-03.ibm.com/systems/p/community/
Capacity on Demand
http://www.ibm.com/systems/p/cod/
Index
Symbols
$deferevents variable 2, 7
$stack_details variable 2–3
.BZ format 59
.Z format 59
/, and ? subcommand 7
/etc/environment 15
/etc/filesystems file 208
/etc/inetd.conf, file
AIX Security Expert 167
/etc/inittab, file
AIX Security Expert 167
/etc/perf/ directory 107
/etc/perf/daily directory 106
/etc/rc.tcpip, file
AIX Security Expert 167
/etc/security/user 24
/etc/trcfmt 43
/usr/include/sys/trcmacros.h 50
/usr/lpp/perfagent/config_aixwle.sh 106
/usr/lpp/perfagent/config_topas.sh 106
/var/adm/ras/errlog 52
/var/adm/ras/mtrcdir 47, 49
/var/adm/ras/trcfile 44
Numerics
16 GB pages 71
16 MB pages 71
4 KB pages 71
64 KB pages 71
9116-561 53
9117 53
9117-570 53
9119-590 53
9119-595 53
9406 53
A
acctrpt command 77
process accounting 77
system accounting 80
transaction accounting 83
ACL
conversion 70
active dead gateway detection 130
Active Directory 89
adapter statistics 121
addcmd subcommand 2, 5
Administrator Service Processor Failover 55
Advanced Accounting 77
ITUAM integration 84
reporting 77
advanced accounting
LDAP integration 88
AIO 24
IOCP support 24
AIO fast path for concurrent I/O 34
Enhanced Journaled File System 34
I/O optimization 34
Journaled File System 34
kprocs 34
Logical Volume Manager 34
aio_nwait routine 24
aiocb 18
AIX 5L Release Support strategy
Concluding Service Pack 62
Interim Fix 63
Service Pack 62
Technology Level 62
aix MIO module 17
AIX Security Expert
Check Security 168
files 169
groups 166
security configuration copy 169
security level settings 165
smit 173
Undo Security 167
WebSM 165, 173
AIX Security Expert (aixpert) 165
aixpert, command 170
commands
aixpert 165
AIXTHREAD_READ_GUARDPAGES 14
AltiVec 18–19
AltiVec instruction 18
B
backbyinode 30
Base Operating System (BOS) 193
Bengali UTF-8 codeset (BN_IN) 88
BLV 194
Boot Logical Volume (BLV) 194
BOS 193
bos.help.msg.en_US.smit 140
bos.msg.LANG.rte 140
bosboot command 40, 43–44
bosinst.data 192
EXISTING_SYSTEM_OVERWRITE 193
PHYSICAL_LOCATION 192
SAN_DISKID 192
bosinst.data file 205, 210
CONTROL_FLOW 206
C
cell 143
cfgmgr, command 58
chcore, command 58
chfs 2829
chndaf, command 152
chnfs 132134
chuser command 24
CIO NFS 131
O_CIO flags 132
ckfilt command 137
Client Delegation 133
cmmands
cron 64
Commands
usrck 69
commands 86, 121
acctrpt 77
at 180
backbyinode 30
bosboot 40, 43
cfgmgr 58
chcore 58
chfs 28–29
chndaf 152
chnfs 132–134
chuser 24
ckfilt 137
compare_report 187
compress 59
cp 70–71
ctctrl 41–42
curt 94–95
dbx 9
dmadm 148, 151
dmf 143, 145, 147, 152
dmpfmt 57
dmpuncompress 59
dms 149
dms_enable_fs 150–151
drmgr 14
drmgr scriptinfo 14
errctrl 11
exportfs 132133
fcstat 121
filemon 99
gencopy 86
genfilt 137
geninstall 8586
geninv 187
gprof 101
hpmcount 116
ifconfig 129130
iostat 109
kdb 10, 64
ld 72
ldedit 72
lscore 58
lsldap 89
mirscan 31
mkfilt 137
mkndaf 151–152
mknfsproxy 136
mkprojldap 88
mksysb 183, 217
more 66
mount 134
mtrcsave 48
multibos 193
multibos 196
mv 70–71
netpmon 94, 99
nfs4cl 133–134
nfso 133
D
dbx 2
$deferevents variable 2, 7
$stack_details variable 2–3
/, and ? subcommand 7
addcmd subcommand 2, 5
attribute subcommand 8
condition subcommand 8
Deferred event 7
frame subcommand 2, 5
mutex subcommand 8
registers subcommand 5
Thread level breakpoint and watchpoint 2, 8
thread subcommand 8
tnext subcommand 9
tnexti subcommand 9
tskip subcommand 9
tstep subcommand 9
tstepi subcommand 9
tstop subcommand 8
tstophwp subcommand 8
tstopi subcommand 9
ttrace subcommand 9
ttracehwp subcommand 8
ttracei subcommand 9
where subcommand 4
dbx command 9
dbx dump command 9
dead interface detection 130
debugger 2
dbx 2
debugging 57
error detection 58
KDB 58
RTEC 58
debugging xmalloc 11
default route 130
Deferred event 2, 7
Delegation 133
Client Delegation 133
exportfs 133
nfso 133
Server Delegation 133
devices 58
mksysb enhancements 217
DIO NFS 131
O_DIRECT flag 132
disaster recovery
GLVM 36
dlopen 16
dmadm, command 148, 151
DMAPI 28, 30
DMAPI managed JFS2 28, 30
backbyinode 30
tar 30
dmf, command 143, 145, 147, 152
dmpfmt, command 57
dmpuncompress command 59
dms, command 149
dms_enable_fs, command 150–151
doubletext32 17
DR_MEM_PERCENT 14
dr_reconfig 14
drmgr 14
drmgr scriptinfo 14
dump minidump 51
dump parallel dump 52
dump subcommand 9
dump sysdumpdev 52, 59
dump trcdead 47
dump, command 57
DVD install media 217
error detection 58
error log 56
/tmp/errlog.save 56
/usr/include/sys/err_rec.h 56
Estonian ISO8859-4 codeset (et_EE) 88
ex 2
exportfs 132–133
Extended disk service time metrics 109
Extended vscsi statistics 113
external storage 34
E
ed 2
editors
ed 2
ex 2
vi 2
environment file 15
environment variable
AIXTHREAD_READ_GUARDPAGES 14
DR_MEM_PERCENT 14
LD_LIBRARY_PATH 15
LDR_PRELOAD 15
LDR_PRELOAD64 15
LIBPATH 15
errctrl 11
F
FAStT boot support enhancements 34
fcstat command 121
ffdc 40, 43, 52
Fibre Channel device driver 121
file
etc/security/login.cfg 141
radiusd.conf 139
File System Freeze and Thaw 28
chfs 28–29
fscntl 28
File system Rollback 28, 30
rollback 30
filemon command 99
Firmware levels 53
Firmware Version 122
first failure data capture 40, 43, 52
frame subcommand 2, 5
fscntl 28
functions
dlopen 16
fscntl 28
load 16
G
gencopy 86
gencopy flags 87
genfilt command 137
geninstall 85–86
geninstall flags 86
geninv 187
gensyms command 96
Geographic Logical Volume Manager (GLVM) 36
Global Name Space
chnfs 132
Global Namespace
exportfs 132
H
HACMP 37
GLVM 37
Hardware Management Console 73
HMC 73
configuring 16GB huge pages 73
providing CEC level data to topas 104
hopcount 130
hpmcount command 116
Hypervisor 52
I
I/O pacing 134
IBM Tivoli Usage and Accounting Manager 84
ifconfig command 129–130
ifconfig monitor flag 130
ifconfig option tcp_low_rto 129
image.data 195
image.data file 208
Inode Creation 28, 31
Installation
1TB disks 178
installation
DVD install media support 217
mksysb migration support 204
targeting install disk 192
Interim Fix 63
IO completion ports 24
IOCP 24
iostat command 109
Extended disk service time metrics 109
Extended vscsi statistics 113
IPFilters 141
IPsec filters
AIX Security Expert group 167
ITUAM 84
J
JFS2 28
DMAPI 28, 30
backbyinode 30
tar 30
File System Freeze and Thaw 28
chfs 28–29
fscntl 28
File System Rollback 28, 30
rollback 30
inode creation 28, 31
JS20 19
K
Kannada UTF-8 codeset (KN_IN) 88
KDB 58
kdb
mtrace 48
kdb commad 64
kdb command 10, 121
check subcommand 10
mempool subcommand 121
kerberos 142, 146, 152
kernel 40, 43
L
Latvian ISO8859-4 codeset (lv_LV) 88
ld command 72
LD_LIBRARY_PATH 15
LDAP 89
/etc/security/ldap/ldap.cfg 89
advanced accounting 88
client support for Active Directory 89
ldap.cfg password encryption 89
list LDAP records 89
lsldap 89
mkprojldap 88
ldedit command 72
LDR_CNTRL 73
LDR_CTRL 16
LDR_PRELOAD 15
LDR_PRELOAD64 15
lgpg_regions parameter 73
lgpg_size paramete 73
libaacct.a library 77
LIBPATH 15
library
dlopen 16
LD_LIBRARY_PATH 15
LDR_PRELOAD 15
LDR_PRELOAD64 15
LIBPATH 15
load 16
library variables 15
lightweight memory trace 43
Lightweight Memory Trace (LMT) 58
lightweight memory tracing 41
LMT 43, 58
lmt 41
load 16
Logical Volume Manager 31
mirscan command 31
Logical Volume Manager (LVM) 199
LPAR accounting 80
lpp_source resource 178
lrud daemon 119
lscore, command 58
lsldap
default output 89
lsldap, command 89
LVM 199
M
macros 41
mail command 64
Malayalam UTF-8 codeset (ML_IN) 88
Managed System Properties 54
memory
xmalloc 11
Memory Affinity 119
Memory Management
DR_MEM_PERCENT 14
drmgr 14
drmgr scriptinfo 14
memory pools 119
memory trace mode 41
memory_affinity parameter 121
memplace_data parameter 120
memplace_mapped_file parameter 120
memplace_shm_anonymous parameter 120
memplace_shm_named parameter 120
memplace_stack parameter 120
memplace_text parameter 120
memplace_unmapped_file parameter 121
Micro-partition 125
Micro-partition performance enhancements 125
migration
mksysb
debugging 216
mksysb migration support 204
N
named shared libraries
doubletext32 17
NAMEDSHLIB 16
Named shared library areas 16
named shared library areas 100
NAMEDSHLIB 16
NAT 141
National language support 88
NDAF 142
adding server to cell namespace 159
adding servers 154
administration client 145, 153
administration server 145, 153
creating and managing cells 155
data backup 164
data recovery 164
data server 146
data servers 153
data set 143
domain 142, 147
installation 153
principal 146
removing a cell namespace 158
removing server from cell namespace 160
replicas 144
roles 160
security 152
NFS 152
roles 152
showing and changing cell attributes 157
troubleshooting 161
checker 163
netpmon command 94, 99
network
AIX Security Expert 165, 167
network address translation (NAT) 141
Network Data Administration Facility (NDAF) 142
Network File System 131
Network intrusion detection 137
NFS 131
CIO 131
DIO 131
I/O pacing 134
physical filesystem 142
Release-behind-on-read 134
NFS server grace period 134
nfs4cl 133–134
nfso 133
nfsstat 133
NFSv4 132, 161
chnfs 132–133
Delegation 133
Client Delegation 133
exportfs 133
nfso 133
Server Delegation 133
exportfs 132
Global Name Space 132
chnfs 132
I/O pacing 134
mknfsproxy 136
mount 134
nfs4cl 133–134
nfso 133
nfsstat 133
Referral 132
Replica 132
Replica servers 132
Replication 132
chnfs 133
nfs4cl 133
nfso 133
replica_failover_time 133
server grace period 134
chnfs 134
Server Proxy Serving 135
mknfsproxy 136
NIM
mksysb migration with NIM 211
nim_inventory 191
nim_move_up command 216
nimadm 213
niminv 189
SUMA integration 186
nim 178, 192
nim_inventory 191
nim_move_up command 182, 216
nimadm command 213
planning 216
prerequisite 215
TCB 216
nimadmin command 183
niminv 189
nimsh command 178
NLS 85, 88
Assamese UTF-8 codeset (AS_IN) 88
Bengali UTF-8 codeset (BN_IN) 88
Estonian ISO8859-4 codeset (et_EE) 88
hpmstat command 88
Kannada UTF-8 codeset (KN_IN) 88
Latvian ISO8859-4 codeset (lv_LV) 88
Oriya UTF-8 codeset (OR_IN) 88
Punjabi UTF-8 codeset (PA_IN) 88
no command 129
no command option timer_wheel_tick 129
no option tcp_low_rto 129
STD_AUTH 141
PMAPI 116
hpmcount command 116
POSIX prioritized I/O 18
prioritized I/O 18
process accounting 77
processor utilization register (PURR) 58
ps command 67
Punjabi UTF-8 codeset (PA_IN) 88
PURR 58
O
ODM database 121
Oriya UTF-8 codeset (OR_IN) 88
P
pagesize command 72
parallel dump 52
Performance Management API 116
performance statistics recording 106
performance tool curt 94–95
performance tool filemon 99
performance tool netpmon 94, 99
performance tool svmon 94
performance tool tprof 94, 99–100
performance tool vmstat 94
pf 17
PFS 142
Physical filesystem (PFS) 142
PHYSICAL_LOCATION 192
Pluggable authentication module 141
/etc/security/login.cfg 141
auth_type 141
Q
qaltivec, flags 21
qenablevmx, flags 21
qhot, flags 22
qvecnvol, flag 20
R
RADIUS Enhanced shared Secret 140
RADIUS IP address Pooling 140
Radius Server Support 138
Accounting 138
Authentication 138
Authorization 138
radiusd.conf 139
radius.base 140
rare events 43
RAS 56
core file 58
debugging 57
devices 58
errctrl 11
error log hardening 56
snap command enhancements 57
system dump enhancements 56
trace 57
trcgen 11
trcgenk 11
xmalloc 11
raso command 44
raw socket 23
recov 17
Redbooks Web site 226
Contact us xix
Redundant Flexible Service Processors 52
Referral 132
registers subcommand 5
Release-behind-on-read 134
S
San boot procedures 35
second failure data capture 40, 44
security
aixpert 165
SED 12
sedmgr 12
sedmgr 12
Server Delegation 133
server grace period
chnfs 134
Server Proxy Serving 135
mknfsproxy 136
Service Pack 62
sfdc 40, 44
shared porduct object tree 178
show locked users 68
Signal
SIGRECONFIG 14
SIGRECONFIG 14
simd 18
single instruction, multiple data 18
snap 57
snap command 49
snap, command 57
snap, commands 57
sockets 23
spot 178
Stack execution disable 12
sedmgr 12
Stateful filters 137
structures
aiocb 18
subroutine
dr_config 14
subroutines
trcgen 11
trcgenk 11
SUID
disabling in AIX Security Expert 167
SUMA 186
NIM integration 186
suma command 186–187
svmon command 94
sysdumpdev command 52, 59
sysdumpdev, command 56–57
system accounting 80
system dump 56
dmpfmt 57
dump 57
sysdumpdev 56
system trace mode 41
T
tar 30
target install disk
PHYSICAL_LOCATION 192
targeting install disk 192
tcp_low_rto, flags 129
tcp_slowtimo() 128
Technology Level 62
thin server 178
Thread level breakpoint and watchpoint 2, 8
thread subcommand 8
threads
AIXTHREAD_READ_GUARDPAGES 14
time stamp 51
timer_wheel_tick, flags 129
timer-wheel algorithm 128
Tivoli Access Manager 36
Tivoli Access Manager for system p pre-installed 36
tnext subcommand 9
tnexti subcommand 9
topas command 101, 106
cross partition monitoring 101
performance statistics recording 106
topasout command 107
tprof command 94, 99–100
trace 17, 57
CT 58
LMT 58
trrctl 58
trace common events 43
trace ctctrl 41–42
trace lmt 41, 43
trace macros 41
trace mode 41
trace rare events 43
trace raso 44
trace trcdead 47
trace trcrpt 41, 47
transaction accounting 83
trcctl, command 58
trcdead command 47
trcgen 11
trcgenk 11
trcrpt command 41, 43, 47
tskip subcommand 9
tstep subcommand 9
tstepi subcommand 9
tstop subcommand 8
tstophwp subcommand 8
tstopi subcommand 9
ttrace subcommand 9
ttracehwp subcommand 8
ttracei subcommand 9
lgpg_size parameter 73
memory_affinity parameter 121
memplace_data parameter 120
memplace_mapped_file parameter 120
memplace_shm_anonymous parameter 120
memplace_shm_named parameter 120
memplace_stack parameter 120
memplace_text parameter 120
memplace_unmapped_file parameter 121
vmm_mpsize_support parameter 74
vmstat command 94
W
WebSM now uses PAM 141
where subcommand 4
X
XL C/C++ 20
Xmalloc 11
xmwlm command 106
U
uncompress command 59
usrck 69
usrck flags 69
V
vector arithmetic logic unit 19
Velocity Engine 18
vi 2
vmm_mpsize_support parameter 74
vmo command 73, 120
lgpg_regions parameter 73
Back cover
For clients who are not familiar with the base enhancements of AIX 5L
Version 5.3, a companion publication, AIX 5L Differences Guide Version 5.3
Edition, SG24-7463, is available.