Pigeon Point OpenHPI Release Notes
Release 3.2.0.0
October 31, 2012
Introduction
This release is based on OpenHPI release 3.2.0 and the corresponding SNMP subagent release
2.3.4; it incorporates release 3.2.0.0 of the Pigeon Point enhancements.
OpenHPI and SNMP subagent binaries can be added straightforwardly to an existing Pigeon Point
Shelf Manager 3.0.0 build environment, with no additional host system requirements.
To build OpenHPI and subagent source code, the host systems should have the following
packages installed:
gcc-3.2.0 or later
glib2 and glib2-devel version 2.14 or later
autoconf-2.61 or later
automake-1.10 or later
libtool (no special requirements)
pkgconfig (no special requirements)
e2fsprogs and e2fsprogs-devel or libuuid and libuuid-devel (no
special requirements)
net-snmp-5.1.2 or above (for building the OpenHPI SNMP subagent)
On Linux systems like Fedora Core 4, the necessary packages are usually installed as part of the
base system, but if an older Linux distribution is used, automake and autoconf sources can
be downloaded from http://ftp.gnu.org/gnu/autoconf and http://ftp.gnu.org/gnu/automake or it may
be possible to find the corresponding RPM packages using http://www.rpmfind.net/.
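As a quick sanity check before building, a small helper can compare installed tool versions against the minimums listed above. The `version_ge` function below is a sketch, not part of the release; it assumes GNU coreutils (`sort -V`) is available:

```shell
#!/bin/sh
# version_ge A B: succeed if dotted version string A is >= B.
# A sketch, assuming GNU coreutils' sort -V; not part of the release.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: verify autoconf meets the 2.61 minimum from the list above.
ac_ver=$(autoconf --version 2>/dev/null | head -n1 | awk '{print $NF}')
if version_ge "${ac_ver:-0}" 2.61; then
    echo "autoconf $ac_ver is new enough"
else
    echo "autoconf missing or older than 2.61"
fi
```

The same check can be repeated for automake 1.10, glib2 2.14, and net-snmp 5.1.2.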
OpenHPI and its subagent can be built in either of two modes: on-ShMM and native (off-ShMM).
In the on-ShMM mode, the OpenHPI and subagent binaries are cross-compiled for the ShMM
architecture, uploaded to the ShMM as part of an RFS and execute on the ShMM in parallel with
the Shelf Manager.
In the native (off-ShMM) mode, the OpenHPI and subagent binaries are compiled for the local
architecture and run on the local system. Interactions between OpenHPI and subagent binaries
and the Shelf Manager occur over Ethernet using the RMCP protocol.
Only the large memory model ShMM-500s are supported for OpenHPI and its subagent in the on-ShMM mode, since the RFS image that includes this functionality doesn't fit into Flash on the
smaller models. The specific model numbers of the supported ShMM-500s are:
ShMM-500R-333M64F128R
ShMM-500R-333M64F128R-R2
ShMM-500-333M64F128R
All available models of ShMM-1500 and ShMM-700 have enough Flash capacity for OpenHPI and
its SNMP subagent.
Release Contents
On-ShMM mode:
A build_shmm500.sh script is provided for the Shelf Manager 3.0.0 build environment that
builds OpenHPI and the subagent in the on-ShMM mode. This script downloads OpenHPI
packages from the web site, sets up the build environment, patches OpenHPI and subagent
binaries into the build tree, and builds the RFS image.
Off-ShMM mode:
There are currently two variants of the off-ShMM package:
1. Full source, with corresponding build script.
2. Pre-built binary RPMs.
Variant #1 consists of the full source package, including source code for the Pigeon Point plug-in.
This package and its associated build script, full_source_x86.sh, can be used to build
and install OpenHPI.
Variant #2 consists of pre-built RPM packages. These RPM packages support Red Hat Enterprise
Linux 3 and 4; they provide the simplest installation logistics for users of those platforms.
Both of the build scripts described above must be run by a user with super-user privileges.
Variant #1 has been tested on the following x86 Linux platforms:
Red Hat Enterprise Linux 5
Fedora Core 13
Debian 4.0
Support for other x86/x86_64 Linux platforms and Windows platforms can be discussed with
Pigeon Point on a case-by-case basis.
Building OpenHPI in on-ShMM mode:
Download the build script:
http://www.montereylinux.com/partners/shmm/common/openhpi/3.2.0.0/3.0.0/build_shmm700.sh
These build scripts take as arguments the username and password for your Pigeon Point partner
web page. They download over HTTP all of the packages needed to build the RFS and kernel
images. It is expected that an integration engineer can modify these scripts as appropriate for
inclusion in an existing integration framework.
Building OpenHPI in native mode:
Download the build script:
http://www.montereylinux.com/partners/shmm/common/openhpi/3.2.0.0/x86/full_source_x86.sh
or
http://www.montereylinux.com/partners/shmm/common/openhpi/3.2.0.0/x86/full_source_x86_subagent.sh
These build scripts take as arguments the username and password for your Pigeon Point partner
web page. They download over HTTP all of the packages needed to build OpenHPI and its
subagent in native mode. It is expected that an integration engineer can modify these scripts as
appropriate for inclusion in an existing integration framework. These scripts must be run by a
user with super-user privileges.
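As an illustration, the download steps above might be scripted as follows. The URL is the one given in the text; USER and PASS are placeholders for your partner web page credentials:

```shell
#!/bin/sh
# Sketch: fetch and run the native-mode build script. USER and PASS are
# placeholders for your Pigeon Point partner web page credentials.
BASE=http://www.montereylinux.com/partners/shmm/common/openhpi/3.2.0.0
SCRIPT=full_source_x86.sh
URL="$BASE/x86/$SCRIPT"
echo "would download: $URL"
# Not run here (requires credentials and network access):
#   wget "$URL"
#   chmod +x "$SCRIPT"
#   sudo ./"$SCRIPT" USER PASS
```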
Running Executables
For on-ShMM mode:
By default, the OpenHPI daemons do not run at system startup on the ShMM. To enable OpenHPI
support, issue the following command at the ShMM prompt:
# setenv openhpi y
Note that OpenHPI 3.2.0 normally expects the following environment variables to be set before
launching the daemon:
# export OPENHPI_UID_MAP=/tmp/uid_map
If these variables are not set, the OpenHPI daemon tries default values for the configuration file
and UID map file locations.
When running an OpenHPI application on a system other than the one where the OpenHPI
daemon runs, set the OPENHPI_DAEMON_HOST variable to the IP address of the system where
the daemon is running, for example:
# export OPENHPI_DAEMON_HOST=192.168.0.2
#
# hpi_shell
The hpi_shell utility in the OpenHPI distribution provides a command line interface to the
OpenHPI service, which is useful, for example, in testing; Pigeon Point uses it primarily for that
purpose.
For native (off-ShMM) mode:
Before running the OpenHPI binaries, check the /etc/openhpi/openhpi.conf
configuration file, which describes the way OpenHPI connects to the Shelf Manager. The default
RMCP address, 192.168.0.2, is hard-coded there, so if you use a different RMCP address, please
edit this file accordingly.
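For instance, the address substitution can be scripted. The snippet below is a sketch that operates on a local stand-in file (the real target is /etc/openhpi/openhpi.conf), and 10.0.0.15 is a hypothetical Shelf Manager address; the exact key name in the file may differ:

```shell
#!/bin/sh
# Sketch: replace the default RMCP address with the Shelf Manager's actual
# address. A local stand-in file is used here; in practice the target is
# /etc/openhpi/openhpi.conf. Key name and 10.0.0.15 are assumptions.
CONF=./openhpi.conf
printf 'addr = "192.168.0.2"\n' > "$CONF"   # stand-in for the default entry
NEW_ADDR=10.0.0.15
sed -i "s/192\\.168\\.0\\.2/$NEW_ADDR/g" "$CONF"
cat "$CONF"
```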
Before launching the OpenHPI daemon, make sure the Shelf Manager is running on the target
system that the OpenHPI daemon will connect to; the subagent also requires a running SNMP
daemon on the local system. To run the OpenHPI daemon and the OpenHPI subagent manually,
issue the following commands:
# openhpid -c /etc/openhpi/openhpi.conf -f /tmp/openhpid.pid
# daemon -f hpiSubagent
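Since the subagent needs the daemon up first, a small wrapper can wait for the daemon's PID file before starting the subagent. This is a sketch; the paths match the commands above, and the 10-second timeout is an assumption:

```shell
#!/bin/sh
# wait_for_pidfile FILE SECONDS: wait until FILE exists and is non-empty,
# polling once per second; fail if the timeout expires.
wait_for_pidfile() {
    f=$1; t=$2
    while [ "$t" -gt 0 ]; do
        [ -s "$f" ] && return 0
        sleep 1
        t=$((t - 1))
    done
    return 1
}

# Intended use (not run here):
#   openhpid -c /etc/openhpi/openhpi.conf -f /tmp/openhpid.pid
#   wait_for_pidfile /tmp/openhpid.pid 10 && daemon -f hpiSubagent
echo "helper defined"
```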
OpenHPI and subagent binaries are usually installed below the directory /usr/local, in the
directories /usr/local/bin, /usr/local/lib/openhpi, and
/usr/local/include/openhpi.
Note that OpenHPI 3.2.0 normally expects the following environment variables to be set before
launching the daemon:
# export OPENHPI_UID_MAP=/tmp/uid_map
If these variables are not set, the OpenHPI daemon tries default values for the configuration file
and UID map file locations.
When running an OpenHPI application on a system other than the one where the OpenHPI
daemon runs, set the OPENHPI_DAEMON_HOST variable to the IP address of the system where
the daemon is running.
Use the loopback address 127.0.0.1 if the daemon runs on the same system as the client, for
example
# OPENHPI_DAEMON_HOST=127.0.0.1
# export OPENHPI_DAEMON_HOST
# hpi_shell
Starting with OpenHPI 2.12.0, the OpenHPI daemon can manage only a single HPI domain. On the
other hand, the OpenHPI client library can be configured for access to multiple domains and a
separate configuration file is introduced for the OpenHPI client library. The location of this file is
hardcoded to /etc/openhpi/openhpiclient.conf. This file includes a separate
domain description stanza for each configured HPI domain. Each domain is identified by a number
called the domain identifier. The default domain has the identifier 0 and is always considered to
be present, even if there is no stanza describing it.
Here is an example of /etc/openhpi/openhpiclient.conf:
domain default {
host = "localhost"
}
domain 1 {
host = "192.168.1.1"
}
domain 2 {
host = "192.168.1.2"
port = "1234"
}
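Adding another domain is a matter of appending a stanza in the same format. The snippet below is a sketch that works on a local copy of the file (the real location is hardcoded to /etc/openhpi/openhpiclient.conf); domain 3, its host address, and port 4743 are hypothetical values:

```shell
#!/bin/sh
# Sketch: append a stanza for a hypothetical HPI domain 3 to a local copy
# of the client configuration file. All domain values here are examples.
CONF=./openhpiclient.conf
cat > "$CONF" <<'EOF'
domain default {
host = "localhost"
}
EOF
cat >> "$CONF" <<'EOF'
domain 3 {
host = "192.168.1.3"
port = "4743"
}
EOF
grep -c '^domain' "$CONF"
```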
The hpi_shell utility in the OpenHPI distribution provides a command line interface to the
OpenHPI service, which is useful, for example, in testing; Pigeon Point uses it primarily for that
purpose.
User Guide
The Pigeon Point HPI User Guide has been updated to reflect the new functionality in this release.
The User Guide also covers Pigeon Point IntegralHPI, a complementary HPI implementation that
operates as a subsystem within the Pigeon Point Shelf Manager.
Changes
Except for the Pigeon Point plug-in, the per-component change lists below are deltas to the
openhpi.org-posted 3.2.0 source code (including the 2.3.4 subagent), which is the current stable
release.
In addition to the changes listed below relative to the posted OpenHPI 3.2.0, note that
the previous release of Pigeon Point OpenHPI was based on OpenHPI 2.14.1 and there have been
quite a few changes between that release and 3.2.0. In particular, there were changes in the
command line options and commands of the hpi_shell utility. See the Pigeon Point HPI User
Guide for details.
Changes in the Pigeon Point Plug-in since the 2.14.1.0 Release
Support for mapping the IPM Controller Diagnostic Initiator facility to HPI DIMIs has been
introduced.
An HPI Diagnostics Initiator Management Instrument (DIMI) has been introduced for ShMM
runtime tests.
IPv6 support has been added for connections between the plug-in and Shelf Manager. Despite
the lack of IPv6 support in the Shelf Manager, this feature can be used if OpenHPI runs in an
IPv6 network and connects to the Shelf Manager using an IPv4-mapped IPv6 address.
Starting with this release, the x86_64 architecture is officially supported.
Some handler log code has been refined. A bug in which a single log file was used for
several handlers has been fixed. Now, separate log settings can be specified for separate
handlers.
The thread id is now always reported in log lines; the thread log flag has therefore become
redundant and has been removed.
The new rmcp log flag causes RMCP data to appear in the log output.
A Resource Event Log representing the IPMI System Event Log (SEL) has been restored. It is
now provided by the Shelf Resource.
Ignoring IPMI sensors with a 0xFF sensor number has been implemented. This value is
reserved in the IPMI specification and a corresponding SDR is often used as a placeholder for
a non-existent sensor.
Ignoring ATCA Hot Swap Events M0 -> M0 has been implemented.
An incorrect decoding of ASCII6 and BCD+-encoded data in FRU Information has been fixed.
The data had been decoded twice.
Incorrect parsing of FRU Information records with length greater than 31 bytes has been fixed.
Determination of Managed FRU slot information has been improved. Previously it was requested
from the Managing FRU only. Now the information is requested from the Address Table record in
the Shelf FRU Information if the Managing FRU is unable to provide it.
Support for IPMI Event-Only Sensor Records has been added.
Propagation of sensor enable status and sensor event enable status to the IPMI level has been
implemented. Previous releases of Pigeon Point OpenHPI didn't propagate such status to the
IPMI level and just ignored events from such sensors.
Incorrect mode settings for the IPMI Command Control and the AMC IPMI Command
Control were fixed. The controls' modes had been defined as MANUAL and READ_ONLY
but Pigeon Point OpenHPI allowed setting the modes to AUTO.
The ID String "Ipmi Cmd" has been changed to "IPMI Cmd" for the IPMI Command
Control.
The ID String "Ipmi Amc Cmd" has been changed to "IPMI AMC Cmd" for the IPMI AMC
Command Control.
An incorrect mode setting for the FRU/AMC Power On Sequence Controls has been
fixed. Previous releases of Pigeon Point OpenHPI didn't allow setting the state to OFF.
Mapping of a text field from a FRU Inventory has been improved. Trailing zero characters are
no longer mapped.
Mapping of OEM Sensor types has been improved. These sensor types are in the range [0xC0,
0xFF], but Pigeon Point OpenHPI had mapped them all to a single HPI OEM sensor type.
Incorrect generation of the Hot Swap event from the Active Shelf Manager has been fixed. This
was invalid because the Active Shelf Manager is represented as an HPI Resource without the
hot swap capability.
The Get Power Level command has been fixed. It had requested an incorrect power type.
A memory leak has been fixed. The Hot Swap Sensor had created an RDR copy and never
deallocated it.
Device SDR Repository re-reading in case of any IPMI error has been implemented. It had
been re-read only in case of a lost reservation.
The incorrect representation of new IPMI 2.0 Entity IDs as the HPI unknown entity type has
been fixed.
The plug-in now uses IPMI Software ID 87h when interacting with the Shelf Manager over
RMCP. This enables distinguishing the plug-in from other RMCP clients.
An incorrect error code has been fixed for the IPMC Reset Control in the case of an
unsupported reset mode. The code was SA_ERR_HPI_INVALID_CMD, but the HPI-to-xTCA
mapping spec defines SA_ERR_HPI_UNSUPPORTED_PARAMS, which is the code now used.
Changes in the SNMP Subagent
Added retrying of the initial connection to the OpenHPI daemon if it is not yet up and running
when the subagent starts*
Fixed a crash in the subagent in the case of an empty resource tag*
Cleared the discovery flag when the subagent works in single thread mode
Fixed a bug in which the discovery flag could be cleared incorrectly in the case of a
rediscovery
Removed the dumping of all events to the standard output
Corrected the subagent's tendency to open extraneous HPI sessions
Fixed the handling of the variable check_hpi_interval in the
hpiSubagent.conf file and added a handler for this variable, which is used in single
thread mode to set the interval for checking for new events
Added a check for successful allocation of memory
Fixed several memory leaks in the subagent that eventually caused memory exhaustion
Fixed a problem in which data items were released without removing them from the container
in the case of an OpenHPI function returning an error
Corrected the reporting of sensor readings and thresholds, which could be incorrect
Added support for limiting the number of cached events via the configuration file
Set DataType and Language correctly for the Domain tag
Fixed a bug in which a resource was sometimes not removed when it transitioned to the hot
swap state Not Present
Corrected endianness handling of several date/time values (including the variable
ExtractTimeout)
Corrected the order in which ResourceId and EntryId were placed in the oid
Fixed the discovery procedure to have correct information about presence of resources
Corrected the reporting of the Event Category and Event Enable variables
Corrected the reporting of the LowerMinor and UpperCritical variables
Implemented sending SNMP traps for HPI events
Added SNMP traps for notification about breaking/restoring the HPI session
Implemented delayed updates for some tables to accelerate event handling (so that, if the
event rate is high, some tables are updated later rather than immediately when processing the
event)
Implemented automatic reopening of a broken HPI session
Fixed debug messages; now they are shown only if the debug mode is on (previously they
were always shown irrespective of the debug mode settings)
Eliminated compiler warnings
Added a new command line parameter -n to specify the IP address or name of the OpenHPI
daemon host
Implemented asynchronous population of the event logs and the list of session events, to
accelerate the startup of the subagent.
Added SNMP variables that report HPI Version, Agent Version and SNMP Resource Id
* This fix was included in the initial 2.8.1 release of Pigeon Point OpenHPI, but was not listed
in the corresponding release notes.
Removed locking for the discovery state variable to avoid timeouts in SNMP client
applications
Added Control Mode and State assignment when the Control Table is populated
Corrected the procedure of obtaining sensor readings to avoid polling currently inaccessible
sensors
Reworked the procedure of retrieving events from the event log, so that the number of events
retrieved during one invocation is limited; this prevents the retrieval procedure from taking an
indefinitely long time if new events are placed in the log at a high rate while the existing ones
are being retrieved from the event log
Removed unnecessary log messages
Corrected the initialization of the FieldTable (an incorrect function was called in the original
subagent)
Fixed the reporting of hot swap state for resources that were in the Communication Lost state
Ensured that all RDR records are removed when a resource disappears
Fixed errors in the reporting of the following variables: Previous State, Current State, Event
Category, Power Action, Reset Action
Fixed errors in setting the following variables: Reading Min, Reading Max, Reading Nominal,
Normal Min, Normal Max, Trigger Reading Type, Trigger Threshold, Oem, Sensor Specific,
Resource Capabilities, Sensor State Type, Sensor State Value, Sensor Type, Sensor Modifier
Units
Corrected the representation of the sensor reading in text format (added the measurement
unit)
Fixed a linker error when building the OpenHPI subagent. The reason for the error was that
OpenHPI began to provide libopenhpiutils with a separate pkg-config file.
Known Limitations
FUMI Instruments provide only Logical Bank operations and do not support Explicit Bank
functions.
Contact
If you have problems with this release, please send email to [email protected].