Enable Serial Console Support for IBM System x Compute Nodes
IBM Platform Cluster Manager Standard Edition supports the following System x® machines: IBM System x3550 M4, x3650 M4, x3750 M4, and dx360 M4, and IBM Flex System x220, x240, and x440 compute nodes.
After you install IBM Platform Cluster Manager Standard Edition on a System x management
node, you can complete the following steps to enable the serial console:
1. Configure an image profile to run a post-boot script that enables serial console redirection in
the UEFI of a compute node
2. Add and configure the Flex System Chassis that is used by the System x nodes
3. Add System x compute nodes
4. Provision System x compute nodes
After the System x nodes are provisioned, you can connect to the serial console using the
command line or the Web Portal.
Procedure
1. Log in to the IBM Platform Cluster Manager Standard Edition management node as the
root user.
2. Download and install the tool that is used to update the Unified Extensible Firmware Interface (UEFI) settings: the IBM Advanced Settings Utility (ASU) v9.30 (Linux 64-bit) RPM package, available from the IBM Support Site at http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-ASU
3. Copy the ASU RPM package to the /install/contrib directory, as follows:
o If you are provisioning a Red Hat Enterprise Linux (RHEL) 6.3 x64 node, then
run the following commands:
  # cp ibm_utl_asu_asut79n-9.30_linux_x86-64.rpm /install/contrib/rhels6.3/x86_64
  # cd /install/contrib/rhels6.3/x86_64
  # mv ibm_utl_asu_asut79n-9.30_linux_x86-64.rpm ibm_utl_asu-9.30-asut79N.x86_64.rpm
o If you are provisioning a Red Hat Enterprise Linux (RHEL) 5.8 x64 node, then
run the following commands:
  # cp ibm_utl_asu_asut79n-9.30_linux_x86-64.rpm /install/contrib/rhels5.8/x86_64
  # cd /install/contrib/rhels5.8/x86_64
  # mv ibm_utl_asu_asut79n-9.30_linux_x86-64.rpm ibm_utl_asu-9.30-asut79N.x86_64.rpm
4. Create a UEFI settings file in /install/postscripts/uefi.settings. The UEFI settings file can
be used with all System x servers that IBM Platform Cluster Manager Standard Edition
supports.
# vi /install/postscripts/uefi.settings
DevicesandIOPorts.COMPort1=Enable
DevicesandIOPorts.COMPort2=Enable
DevicesandIOPorts.RemoteConsole=Enable
DevicesandIOPorts.SerialPortSharing=Enable
DevicesandIOPorts.SerialPortAccessMode=Dedicated
DevicesandIOPorts.SPRedirection=Enable
DevicesandIOPorts.LegacyOptionROMDisplay=COM Port 1
DevicesandIOPorts.Com1BaudRate=115200
DevicesandIOPorts.Com1DataBits=8
DevicesandIOPorts.Com1Parity=None
DevicesandIOPorts.Com1StopBits=1
DevicesandIOPorts.Com1TerminalEmulation=ANSI
DevicesandIOPorts.Com1ActiveAfterBoot=Enable
DevicesandIOPorts.Com1FlowControl=Hardware
DevicesandIOPorts.Com2BaudRate=115200
DevicesandIOPorts.Com2DataBits=8
DevicesandIOPorts.Com2Parity=None
DevicesandIOPorts.Com2StopBits=1
DevicesandIOPorts.Com2TerminalEmulation=ANSI
DevicesandIOPorts.Com2ActiveAfterBoot=Enable
DevicesandIOPorts.Com2FlowControl=Hardware
5. Create a post-boot script that runs the ASU utility to import the UEFI settings. (A sketch for verifying the applied settings follows this procedure.)
# vi /install/postscripts/uefi_update.sh
#!/bin/sh
# Copy the UEFI settings file (distributed with the post scripts) to a temporary location
cp -f ./uefi.settings /tmp
# Run the ASU utility from its installation directory to import the settings
cd /opt/ibm/toolscenter/asu
./asu64 restore /tmp/uefi.settings
# Remove the temporary copy
rm -f /tmp/uefi.settings
6. Register the IBM Platform Cluster Manager Standard Edition post-boot script in the IBM
Platform Cluster Manager Standard Edition cluster.
# chmod +x /install/postscripts/uefi_update.sh
# plcclient.sh -d pcmimageprofileloader
7. In the Web Portal, you must modify an image profile to run the post-boot script that you
created.
8. Log in to the Web Portal as an administrator.
9. Select the Resources tab, and go to Node Provisioning > Provisioning Templates > Image
Profiles.
10. Select the image profile that you want to modify and click Modify.
11. In the bottom pane, select the Packages tab.
12. Under the Custom Packages section, select the ibm_utl_asu.x86_64 package.
13. Click the Post Scripts tab and select the Use a script on server option. Enter the following
script: /install/postscripts/uefi_update.sh
14. To save your changes, click OK. Wait until changes are saved, then click Close.
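After a node is provisioned with this image profile, you can optionally confirm that the post-boot script applied the settings by querying one of them with ASU on the compute node. This is a minimal sketch; it assumes the ASU package is installed in /opt/ibm/toolscenter/asu, as in the post-boot script, and it uses one setting name from the uefi.settings file as an example:
# cd /opt/ibm/toolscenter/asu
# ./asu64 show DevicesandIOPorts.Com1BaudRate
The command should report the value that was imported from uefi.settings (115200 in this example).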
Note: A CMM is not required for IBM System x3550 M4, x3650 M4, x3750 M4, or dx360 M4
nodes.
In the CMM, you must set up the user name and password, and include the static IP
address. The chassis type must be specified as IBM_Flex_chassis. For example:
...
#chassis object
cmm01:
objtype=chassis
chassistype=IBM_Flex_chassis
ip=172.17.8.12
username=USERID
password=PASSW0RD
...
You must also enable SSH and SNMP.
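The exact steps depend on your chassis firmware, but on an xCAT-managed chassis SSH and SNMP can typically be enabled from the management node with the rspconfig command. This is a sketch only, using the cmm01 chassis name from the example above; verify the options against your xCAT level:
# rspconfig cmm01 sshcfg=enable
# rspconfig cmm01 snmpcfg=enable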
You can specify the MAC address for each node in the node information file. The node
information file must include the slotid attribute, for example:
blade01:
mac=11:11:11:11:11:11
slotid=1
Your nodes must be configured to PXE boot. To configure a node to PXE boot, change the
boot order of the node in the BIOS so that network boot comes first.
Procedure
1. Select the Resources tab, go to Devices > Nodes.
2. Click New.
3. Select a provisioning template. From the Provisioning Template menu, you must select a
provisioning template that includes the following profiles:
o An image profile that uses a post-boot script to enable serial console redirection.
o A BMC network profile.
o If you are adding IBM System x3550 M4, x3650 M4, x3750 M4, or dx360 M4
nodes, you must use the IBM_System_x_M4 hardware profile.
o If you are adding IBM Flex System x220, x240, or x440 nodes, you must use the
IBM_Flex_System_x hardware profile.
4. Optionally, you can select a node group.
5. Specify a node discovery method. Choose to automatically discover nodes by PXE boot
by selecting the Auto discovery by PXE boot option. By using the auto discovery
method, the management node is configured to respond to PXE requests on one or more
provisioning networks.
6. Specify the location of the nodes. You must specify the IBM Flex System Chassis that
contains the node that you are discovering.
7. Click Start Listening for the node discovery to start. The Web Portal automatically
detects and adds new compute nodes that are physically connected to the same
provisioning network.
You can click Stop Listening at any time. If you stop the node discovery, the node status
also stops updating in this window but continues in the background.
o From the management node command line, run the nodestat node-range
command to view the installation status (see the sketch after this procedure).
o From the physical console, press ALT-F2 to enter the virtual console to view the
installer messages.
8. Click Close after all the nodes are added.
Note: It can take several minutes for the nodes to display in the Web Portal.
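For example, to check the installation status of a node from the management node command line (a sketch that assumes a node named blade01; the status strings vary by installation stage):
# nodestat blade01
blade01: installing prep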
To provision nodes for the first time, you must power on the machines manually, or remotely
through a power distribution unit (PDU). You cannot use the Web Portal to power on the nodes
because remote hardware access is not yet configured on the compute node's IMM module.
During provisioning, xCAT automatically enables the serial console at the operating system
level. When the operating system starts, the OS outputs all kernel and system messages to the
serial console (for example, /dev/ttyS0), instead of the main console (for example, /dev/tty0).
The conserver daemon must be running on the management node. To check whether the
conserver daemon is running, run the following command:
# service conserver status
conserver (pid XXXXX) is running...
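If the conserver daemon is not running, or a newly added node is missing from its configuration, the configuration can usually be regenerated and the service restarted. This is a sketch; makeconservercf is the standard xCAT command for rebuilding the conserver configuration, but verify it against your xCAT level:
# makeconservercf
# service conserver restart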
The compute node must be powered on; its state must be reported as on. To check the node's
power state, run the following command:
# rpower node-name state
node-name: on
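If the node reports off and remote hardware access is already configured on its IMM, you can typically power it on with the rpower command (a sketch; replace node-name with the name of your compute node):
# rpower node-name on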
Procedure
1. Log in to the IBM Platform Cluster Manager Standard Edition management node.
2. Connect to the serial console of the compute node by running the rcons command:
# rcons node-name
If you see no output, press the ENTER key.
3. The rcons command displays different serial console output, depending on the state of
the compute node:
o If the node is booting up and the operating system is not loaded, the output is
related to server firmware information, such as diagnostic tests and device
information.
o If the node is provisioning, the output is related to installation information.
  # rcons node-name
  [Enter `^Ec?' for help]

  Preparing transaction from installation source
  In progress

  Installing libgcc-4.4.6-4.el6.x86_64 (114 KB)
  GCC version 4.4 shared support library
  Packages completed: 1 of 411
  Installing setup-2.8.14-16.el6.noarch (650 KB)
  A set of system configuration and setup files
  Packages completed: 2 of 411
  Installing filesystem-2.4.30-3.el6.x86_64 (0 Bytes)
  The basic directory layout for a Linux system
  Packages completed: 3 of 411
o If the operating system is booting up, the output is related to kernel boot
information. For example:
  # rcons node-name
  [Enter `^Ec?' for help]

  Initializing cgroup subsys cpuset
  Initializing cgroup subsys cpu
  Linux version 2.6.32-279.el6.x86_64 (mockbuild@x86-008.build.bos.redhat.com)
  (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Wed Jun 13 18:24:36 EDT 2012
  Command line: ro root=UUID=c074ceca-9739-418b-8e48-a39798f2ca5d rd_NO_LUKS
  rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto
  rd_NO_DM KEYBOARDTYPE=pc KEYTABLE=us console=ttyS0,19200n8r
  KERNEL supported cpus:
    Intel GenuineIntel
    AMD AuthenticAMD
    Centaur CentaurHauls
  BIOS-provided physical RAM map:
   BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
   BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
   BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
   BIOS-e820: 0000000000100000 - 000000003fff0000 (usable)
   BIOS-e820: 000000003fff0000 - 0000000040000000 (ACPI data)
   BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
  DMI 2.5 present.
  SMBIOS version 2.5 @ 0xFFF30
  last_pfn = 0x3fff0 max_arch_pfn = 0x400000000
  x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
  CPU MTRRs all blank - virtualized system.
  ...
o If the operating system has finished booting, the output is the operating system
login prompt. For example:
  # rcons node-name
  [Enter `^Ec?' for help]

  Red Hat Enterprise Linux Server release 6.3 (Santiago)
  Kernel 2.6.32-279.el6.x86_64 on an x86_64

  node login: root
  Password:
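To disconnect from an rcons session without affecting the node, use the conserver escape sequence shown in the session banner. With the default conserver escape settings, press Ctrl+E, then c, then a period:
^Ec.
Entering ^Ec? instead lists the other available escape commands.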
Procedure
1. Log in to the Web Portal as an administrator.
2. Select the Resources tab and go to Devices > Nodes.
3. Select the node and click the Summary tab in the bottom pane.
4. In the Summary tab, select the Console menu and click the Serial Console option to open
a new serial console.
5. A serial console is loaded in a pop-up window. The applet completes the following
procedure:
o Creates an SSH connection to the IBM Platform Cluster Manager Standard
Edition management node.
o Runs the rcons node-name command, where node-name is the name of the
selected node.
6. When finished with the serial console, close the applet. If you open the serial console
again for the same node, the previous console is displayed.