
Install and configure a GPFS cluster on AIX
Objectives
• Verify the system environment
• Create a GPFS cluster
• Define NSDs
• Create a GPFS file system

You will need


Requirements for this lab (not necessarily GPFS minimum requirements):

• Two AIX 6.1 or 7.1 operating systems (LPARs)

  o Very similar to a Linux installation: AIX LPP packages replace the Linux RPMs, and some of the administrative commands are different.

• At least 4 hdisks

Step 1: Verify Environment


1. Verify nodes properly installed
1. Check that the operating system level is supported

On the system run oslevel
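For example (the level shown here is only illustrative; your output will differ):

# oslevel -s
7100-03-03-1415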

Check the GPFS FAQ: http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html
2. Is the installed OS level supported by GPFS? Yes No
3. Is there a specific GPFS patch level required for the installed OS? Yes No
4. If so what patch level is required? ___________
2. Verify nodes configured properly on the network(s)
1. Write the name of Node1: ____________
2. Write the name of Node2: ____________
3. From node1, ping node2
4. From node2, ping node1

If the pings fail, resolve the issue before continuing.
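For example, from node1 (hostnames are those used in this lab; limiting the count with -c keeps AIX ping from running until interrupted):

# ping -c 3 node2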


3. Verify node-to-node ssh communications (For this lab you will use ssh and scp for secure
remote commands/copy)
1. On each node create an ssh key. To do this use the command ssh-keygen; if you don't specify a blank passphrase with -N, press Enter at each prompt to create a key with no passphrase until you are returned to the shell prompt. The result should look something like this:

# ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/.ssh'.
Your identification has been saved in /.ssh/id_rsa.
Your public key has been saved in /.ssh/id_rsa.pub.
The key fingerprint is:
7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61 root@node1

2. On node1 copy the $HOME/.ssh/id_rsa.pub file to $HOME/.ssh/authorized_keys

# cp $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys

3. From node1 copy the $HOME/.ssh/id_rsa.pub file from node2 to /tmp/id_rsa.pub

# scp node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub

4. Add the public key from node2 to the authorized_keys file on node1

# cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys

5. Copy the authorized_keys file from node1 to node2

# scp $HOME/.ssh/authorized_keys node2:/.ssh/authorized_keys

6. To test your ssh configuration, ssh as root from node1 to node1 and from node1 to node2 (and likewise from node2) until you are no longer prompted for a password or for addition to the known_hosts file.

node1# ssh node1 date
node1# ssh node2 date
node2# ssh node1 date
node2# ssh node2 date
7. Suppress ssh banners by creating a .hushlogin file in the root home directory

# touch $HOME/.hushlogin

4. Verify the disks are available to the system

For this lab you should have 4 disks available for use (hdiskw-hdiskz).
1. Use lspv to verify the disks exist

2. Ensure you see 4 unused disks besides the existing rootvg disks and/or other volume groups, as in the example below.
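A minimal sketch of what lspv might show (hdisk numbers and PVIDs will differ on your system; unused disks report None for the volume group):

# lspv
hdisk0   00c8b12ce3c7d496   rootvg   active
hdisk1   none               None
hdisk2   none               None
hdisk3   none               None
hdisk4   none               None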
Step 2: Install the GPFS software

On node1
1. Locate the GPFS software in /yourdir/gpfs/base/

# cd /yourdir/gpfs/base/

2. Run the inutoc command to create the table of contents, if not done already

# inutoc .

3. Install the base GPFS code using the installp command

# installp -aXY -d/yourdir/gpfs/base all

4. Locate the latest GPFS updates in /yourdir/gpfs/fixes/

# cd /yourdir/gpfs/fixes/

5. Run the inutoc command to create the table of contents, if not done already

# inutoc .

6. Install the GPFS PTF updates using the installp command

# installp -aXY -d/yourdir/gpfs/fixes all

7. Repeat Steps 1-6 on node2. Then, on node1 and node2, confirm GPFS is installed using the lslpp command

# lslpp -L gpfs.\*
The output should look similar to this:

Fileset          Level    State  Type  Description (Uninstaller)
------------------------------------------------------------------------
gpfs.base        4.1.0.3  A      F     GPFS File Manager
gpfs.docs.data   4.1.0.1  A      F     GPFS Server Manpages and Documentation
gpfs.gskit       4.1.0.3  A      F     GPFS GSKit Cryptography Runtime
gpfs.msg.en_US   4.1.0.3  A      F     GPFS Server Messages U.S. English

Note 1: The above example is from GPFS V4.1 Express Edition. The important part is that the base, docs, and msg filesets are present.

If you have GPFS Standard Edition, you should also have the following:

gpfs.ext         4.1.0.3  A      F     GPFS Extended Features

If you have GPFS Advanced Edition, in addition to gpfs.ext, you should also have the following entry:

gpfs.crypto      4.1.0.3  A      F     GPFS Cryptographic Subsystem

Note 2: The gpfs.gnr fileset is used by the Power 775 HPC cluster only; there is no need to install this fileset on any other AIX cluster. This fileset does not ship on the V4.1 media.

8. Confirm the GPFS binaries are in your $PATH using the mmlscluster command
# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.
Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin
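If mmlscluster is not found, one option (a sketch; adjust to your shell and profile conventions) is to append the GPFS bin directory to root's PATH:

# export PATH=$PATH:/usr/lpp/mmfs/bin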

Step 3: Create the GPFS cluster


For this exercise the cluster is initially created with a single node. When creating the cluster
make node1 the primary configuration server and give node1 the designations quorum and
manager. Use ssh and scp as the remote shell and remote file copy commands.

*Primary Configuration server (node1): __________


*Verify fully qualified path to ssh and scp:
ssh path__________
scp path_____________

1. Use the mmcrcluster command to create the cluster


# mmcrcluster -N node1:manager-quorum -p node1 -r /usr/bin/ssh -R /usr/bin/scp
Thu Mar  1 09:04:33 CST 2012: mmcrcluster: Processing node node1
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
  Use the mmchlicense command to designate licenses as needed.

2. Run the mmlscluster command again to see that the cluster was created

# mmlscluster

===============================================================================
| Warning:                                                                    |
|   This cluster contains nodes that do not have a proper GPFS license       |
|   designation. This violates the terms of the GPFS licensing agreement.    |
|   Use the mmchlicense command and assign the appropriate GPFS licenses     |
|   to each of the nodes in the cluster. For more information about GPFS     |
|   license designation, see the Concepts, Planning, and Installation Guide. |
===============================================================================

GPFS cluster information
========================
  GPFS cluster name:         node1.ibm.com
  GPFS cluster id:           13882390374179224464
  GPFS UID domain:           node1.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node1.ibm.com
  Secondary server:  (none)

 Node  Daemon node name    IP address  Admin node name  Designation
---------------------------------------------------------------------
   1   node1.lab.ibm.com   10.0.0.1    node1.ibm.com    quorum-manager

3. Set the license mode for the node using the mmchlicense command. Use a server license for this node.

# mmchlicense server --accept -N node1

The following nodes will be designated as possessing GPFS server licenses:
        node1.ibm.com
mmchlicense: Command successfully completed
Step 4: Start GPFS and verify the status of all nodes
1. Start GPFS on all the nodes in the GPFS cluster using the mmstartup command

# mmstartup -a

2. Check the status of the cluster using the mmgetstate command


# mmgetstate -a

 Node number  Node name  GPFS state
------------------------------------------
      1       node1      active

Step 5: Add the second node to the cluster


1. On node1, use the mmaddnode command to add node2 to the cluster

# mmaddnode -N node2

2. Confirm the node was added to the cluster using the mmlscluster command

# mmlscluster

3. Use the mmchcluster command to set node2 as the secondary configuration server

# mmchcluster -s node2

4. Set the license mode for the node using the mmchlicense command. Use a server
license for this node.

# mmchlicense server --accept -N node2

5. Start node2 using the mmstartup command

# mmstartup -N node2

6. Use the mmgetstate command to verify that both nodes are in the active state

# mmgetstate -a
Step 6: Collect information about the cluster
Now we will take a moment to check a few things about the cluster. Examine the cluster
configuration using the mmlscluster command

1. What is the cluster name? ______________________


2. What is the IP address of node2? _____________________

3. What date was this version of GPFS "Built"? ________________

Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
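One way to check (the exact wording of the log line varies by release, but the daemon startup entry includes a Built date):

# grep -i built /var/adm/ras/mmfs.log.latest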

Step 7: Create NSDs


You will use the 4 hdisks.

• Each disk will store both data and metadata

• The NSD server field (ServerList) can be left blank if both nodes have direct access to the shared LUNs.

1. On node1 create the directory /yourdir/data
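For example (substitute your own lab directory for /yourdir):

# mkdir -p /yourdir/data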


2. Create a disk stanza file /yourdir/data/diskdesc.txt using your favorite text editor.

The format for the file is:


%nsd: device=DiskName
  nsd=NsdName
  servers=ServerList
  usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
  failureGroup=FailureGroup
  pool=StoragePool

You only need to populate the fields required to create the NSDs; in this example all NSDs use the default failure group and pool definitions.
%nsd:
  device=/dev/hdisk1
  nsd=mynsd1
  usage=dataAndMetadata

%nsd:
  device=/dev/hdisk2
  nsd=mynsd2
  usage=dataAndMetadata
Note: hdisk numbers will vary per system.

3. Create the NSDs using the mmcrnsd command

# mmcrnsd -F /yourdir/data/diskdesc.txt

Step 8: Collect information about the NSDs


Now collect some information about the NSDs you have created.

1. Examine the NSD configuration using the mmlsnsd command

1. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?)
associated with an NSD? _______

Step 9: Create a file system


Now that there is a GPFS cluster and some NSDs available, you can create a file system.

• Set the file system block size to 64 KB

• Mount the file system at /gpfs

1. Create the file system using the mmcrfs command

# mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k

2. Verify the file system was created correctly using the mmlsfs command

# mmlsfs fs1
Will the file system be automatically mounted when GPFS starts? _________________
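Hint: the automatic-mount setting appears as the -A attribute in the mmlsfs output; assuming your GPFS release supports querying a single attribute, you can also run:

# mmlsfs fs1 -A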

3. Mount the file system using the mmmount command

# mmmount all -a

4. Verify the file system is mounted using the df command


# df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4            65536      6508   91%     3375    64% /
/dev/hd2          1769472    465416   74%    35508    24% /usr
/dev/hd9var        131072     75660   43%      620     4% /var
/dev/hd3           196608    192864    2%       37     1% /tmp
/dev/hd1            65536     65144    1%       13     1% /home
/proc                   -         -    -         -     -  /proc
/dev/hd10opt       327680     47572   86%     7766    41% /opt
/dev/fs1        398929107 398929000    1%        1     1% /gpfs

5. Use the mmdf command to get information on the file system.

# mmdf fs1
