
Red Hat Enterprise MRG 2
Management Console Installation Guide
Installing the MRG Management Console for use with MRG Messaging
Edition 2

Author Lana Brindley [email protected]
Author Alison Young [email protected]

Copyright © 2011 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available
at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this
document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity
Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.

All other trademarks are the property of their respective owners.

1801 Varsity Drive


Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701

This book contains basic overview and installation procedures for the MRG Management Console
component of the Red Hat Enterprise MRG distributed computing platform. The MRG Management
Console provides a web-based tool for management of MRG Messaging.
Preface
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. We Need Feedback!
1. Deployment Sizes
2. Installation
2.1. Installing the Broker
2.2. Installing Cumin
2.2.1. Install the Console
2.2.2. The Cumin database
2.3. Installing Sesame
2.4. Installing Grid Plug-ins
3. Configuration
3.1. Configuring Cumin
3.1.1. Creating SASL Credentials
3.1.2. Setting the Broker Address and Authentication
3.1.3. Specifying the Broker Address for Use of the Remote Configuration Feature
3.1.4. Setting the Network Interface
3.1.5. Setting the MRG Management Console Persona
3.1.6. Adding Users
3.2. Configuring Sesame
3.2.1. Setting the Broker Address
3.2.2. Configuring Authentication
3.3. Configuring Grid Plug-ins
3.3.1. Setting Broker Address and General Configuration
3.3.2. Configuring Authentication
3.3.3. Setting Job Server Configuration
4. Running
4.1. Starting Services Manually
4.2. Starting Services on System Boot
4.3. Connecting to the MRG Management Console
4.4. Logging
5. Frequently Asked Questions
6. More Information
A. Configuring the MRG Management Console for Medium Scale Deployment
A.1. Running Multiple MRG Management Console Web Servers
A.2. Limiting Objects Processed by the MRG Management Console
A.3. Increasing the Default QMF Update Interval for MRG Grid Components
A.4. Tuning the Cumin Database
B. Configuring the Messaging Broker
B.1. Changing the Update Interval
B.2. Configuring SSL
B.3. Adding Credentials to Optional Broker ACLs for MRG Services
C. Revision History
Preface
Red Hat Enterprise MRG
This book contains basic overview and installation information for the MRG Management Console
component of Red Hat Enterprise MRG. Red Hat Enterprise MRG is a high performance distributed
computing platform consisting of three components:

1. Messaging — Cross platform, high performance, reliable messaging using the Advanced Message
Queuing Protocol (AMQP) standard.

2. Realtime — Consistent low-latency and predictable response times for applications that require
microsecond latency.

3. Grid — Distributed High Throughput Computing (HTC) and High Performance Computing (HPC).

All three components of Red Hat Enterprise MRG are designed to be used as part of the platform, but
can also be used separately.

MRG Management Console


This book explains how to install and configure the MRG Management Console. The MRG
Management Console, also known as Cumin, provides a web-based graphical interface to manage
your Red Hat Enterprise MRG deployment.

MRG Messaging is built on the Qpid Management Framework (QMF). The MRG Management
Console uses QMF to access data and functionality provided by the MRG Messaging broker (qpidd),
inventory daemon (sesame) and MRG Grid components.

This book describes how to set up and configure Cumin, a MRG Messaging broker, and a distributed
inventory. The broker is necessary for communication between the distributed components and
Cumin. The inventory and MRG Grid component installations must be performed on all nodes in the
deployment.

For more information about MRG Messaging architecture, including advanced installation and
configuration of the MRG Messaging broker, see the MRG Messaging User Guide.

For more information about MRG Grid, including advanced features and configuration, see the MRG
Grid User Guide.

1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts (https://fedorahosted.org/liberation-fonts/) set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.

1.1. Typographic Conventions


Four typographic conventions are used to call attention to specific words and phrases. These
conventions, and the circumstances they apply to, are as follows.

Mono-spaced Bold

Used to highlight system input, including shell commands, file names and paths. Also used to highlight
keycaps and key combinations. For example:

To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.

The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold
and all distinguishable thanks to context.

Key combinations can be distinguished from keycaps by the hyphen connecting each part of a key
combination. For example:

Press Enter to execute the command.

Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.

The first paragraph highlights the particular keycap to press. The second highlights two key
combinations (each a set of three keycaps with each set pressed simultaneously).

If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:

File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.

Proportional Bold

This denotes words or phrases encountered on a system, including application names; dialog box text;
labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:

Choose System → Preferences → Mouse from the main menu bar to launch Mouse
Preferences. In the Buttons tab, click the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.

The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in proportional bold and
all distinguishable by context.

Mono-spaced Bold Italic or Proportional Bold Italic


Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or
variable text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:

To connect to a remote machine using ssh, type ssh [email protected] at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh [email protected].

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package
command. It will return a result as follows: package-version-release.

Note the words in bold italics above — username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you enter when issuing a command or for text
displayed by the system.

Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:

Publican is a DocBook publishing system.

1.2. Pull-quote Conventions


Terminal output and source code listings are set off visually from the surrounding text.

Output sent to a terminal is set in mono-spaced roman and presented thus:

books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs

Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:

package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[])
       throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();

      System.out.println("Created Echo");

      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}

1.3. Notes and Warnings


Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.


Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should
have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to
the current session, or services that need restarting before an update will apply. Ignoring a box
labeled 'Important' will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat
Customer Portal at http://access.redhat.com. Through the customer portal, you can:

• search or browse through a knowledgebase of technical support articles about Red Hat products.

• submit a support case to Red Hat Global Support Services (GSS).

• access other product documentation.

Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and
technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/
listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. We Need Feedback!

If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/
against the product Red Hat Enterprise MRG.

When submitting a bug report, be sure to mention the manual's identifier:


Management_Console_Installation_Guide

If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the
surrounding text so we can find it easily.

Chapter 1. Deployment Sizes
The MRG Management Console is designed to scale for deployments of MRG Messaging and MRG
Grid. The following configurations indicate typical size and load characteristics for small, medium and
large deployments.

Small
The default software configuration of the MRG Management Console is appropriate for small scale
deployments. An example small scale deployment is:

• 64 nodes (each with quad dual-core CPUs)

• 5 concurrent console users, accessing the console at 1 page view per second (peak)

• 10 job submitters, submitting 1 job per second concurrently (peak)

• 10 job completions per minute (sustained), 3 years of job history (1 million jobs)

• Ability to sustain peak rates for at least 5 minutes

Medium
To configure the MRG Management Console for use with medium scale deployments, see
Appendix A, Configuring the MRG Management Console for Medium Scale Deployment. An example
medium scale deployment is:

• 500 nodes (each with quad dual-core CPUs)

• 20 concurrent console users, accessing the console at 1 page view per second (peak)

• 20 job submitters, submitting 2 jobs per second concurrently (peak)

• 100 job completions per minute (sustained), 3 years of job history (10 million jobs)

• Ability to sustain peak rates for at least 5 minutes

Large
A large scale console is defined as a console supporting more than 5000 Execute Nodes and
100 concurrent users accessing the console at 1 page view per second during peak periods. There
are several considerations when implementing a large scale console. Red Hat recommends
that customers configure large scale MRG Management Console installations in cooperation with a
Solutions Architect through Red Hat consulting.

Chapter 2. Installation

To install the MRG Management Console you will need to have registered your system with Red Hat Network (https://rhn.redhat.com/help/about.pxt). This table lists the Red Hat Enterprise MRG channels available on Red Hat Network for the MRG Management Console.

Table 2.1. Red Hat Enterprise MRG Channels Available on Red Hat Network
Channel Name              Operating System    Architecture
Red Hat MRG Management    RHEL-5 Server       32-bit, 64-bit
Red Hat MRG Management    RHEL-6 Server       32-bit, 64-bit

Hardware Requirements
It is recommended that you have the following minimum hardware requirements before attempting to
install the MRG Management Console:

• Intel Pentium IV or AMD Athlon class machine

• 512 MB RAM

• 10 GB disk space

• A network interface card

2.1. Installing the Broker


To use the MRG Management Console, a MRG Messaging broker must first be installed. The broker
may be installed on any host that is accessible over the network from other nodes in the deployment.

A full installation of the MRG Messaging components is recommended, but only the broker is required.
Install the broker with the following yum command as root:

# yum install qpid-cpp-server

For more information on installing the MRG Messaging broker, see the MRG Messaging Installation
Guide.

2.2. Installing Cumin

2.2.1. Install the Console


Install the MRG Management Console with the following yum command as root:

# yum install cumin


Note

If you find that yum is not installing all the dependencies you require, make sure that you have
registered your system with Red Hat Network (https://rhn.redhat.com/help/about.pxt).

Before you run the MRG Management Console for the first time, you will need to install the Cumin
database.

2.2.2. The Cumin database


Install the Cumin database with the following command:

# cumin-database install

This command will produce a warning that it is about to modify any existing configuration. Enter yes to
continue with the installation.

2.3. Installing Sesame


Sesame is a management package that allows a system on which it is installed to display system
statistics in the MRG Management Console's Inventory page. It should be installed on every system
that is part of a MRG Grid deployment.

Use yum to install the Sesame package:

# yum install sesame

2.4. Installing Grid Plug-ins


The Condor QMF plug-ins allow MRG Grid nodes to connect to a MRG Messaging broker. Install MRG
Grid using the procedures described in the MRG Grid Installation Guide, if you haven't already done
so.

Install the QMF plug-ins on each node in the condor pool using the yum command:

# yum install condor-qmf

Chapter 3. Configuration
3.1. Configuring Cumin

3.1.1. Creating SASL Credentials


Authentication credentials for the MRG Management Console must be created on the host running the
MRG Messaging broker.

Important

The MRG Management Console must connect to the MRG Messaging broker authenticated as
the cumin user for full operability. The MRG Management Console Installation Guide assumes
that MRG Messaging has already been configured to support password authentication using the
Cyrus SASL library. For information on configuring security in MRG Messaging see the MRG
Messaging Installation Guide and MRG Messaging User Guide.

On the host, run the saslpasswd2 command as the qpidd user:

$ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID cumin

When prompted, create a password.

This command will create a cumin user in the SASL database. Section 3.1.2, “Setting the Broker
Address and Authentication” explains how to configure the Management Console to use these
credentials for authentication to the broker.

For more information on the saslpasswd2 command, see the MRG Messaging Installation Guide.

Note

The qpidd user should be able to read /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.
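To confirm that the user exists, you can list the entries in the SASL database. This is a hedged example using the sasldblistusers2 tool shipped with Cyrus SASL; the output should include an entry for cumin@QPID:

$ sudo -u qpidd /usr/sbin/sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb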

3.1.2. Setting the Broker Address and Authentication


The default configuration settings will connect the MRG Management Console without authentication
to a MRG Messaging broker running on the same machine. You will need to change the default
settings.

1. As the root user, open the /etc/cumin/cumin.conf file in your preferred text editor and locate
the brokers parameter.

The authentication information will be stored in plain text. However, as permissions on this file are restricted, the information will be secure provided users do not have root access.

Note

The format of a broker address containing credentials is:

[<protocol>://]<username>/<password>@<target-host>[:<tcp-port>]

The optional tcp-port parameter will default to 5672 if not specified. The optional protocol
value may be amqp (the default) or amqps for SSL. Refer to Appendix B, Configuring the
Messaging Broker and the MRG Messaging User Guide for additional information.

2. The username value in this case must be cumin, the user that was added to the SASL
configuration in Section 3.1.1, “Creating SASL Credentials”.

The password will be the password that you supplied when prompted by the saslpasswd2
command. In addition, multiple brokers may be specified in a comma-separated list. For example:

brokers: cumin/[email protected], amqps://cumin/[email protected]:5671

3. Set the sasl-mech-list parameter to explicitly restrict the console from using anonymous
authentication.

Do this by setting the value to a space separated list of appropriate mechanisms supported by the
broker, excluding anonymous. In a default broker configuration this list will include only the plain
mechanism.

sasl-mech-list: PLAIN

More information on authentication mechanisms can be found in the Cyrus SASL documentation.

3.1.3. Specifying the Broker Address for Use of the Remote Configuration Feature
Cumin uses the remote configuration feature to augment inventory data and provide tag management
facilities. The remote configuration feature (often referred to simply as 'wallaby') consists of the
Wallaby service, the wallaby command-line tool, and other tools and daemons that interact with the
Wallaby service. For further information, see the Remote Configuration chapter in the MRG Grid User
Guide.

By default, Cumin will use the first address specified in the brokers parameter as the address of the
MRG Messaging broker for remote configuration. If that address is correct, this step can be skipped.

If the remote configuration feature is set up to use a different broker, the wallaby-broker parameter
needs to be set accordingly. For example:

wallaby-broker: cumin/[email protected]


To control how often Cumin will poll wallaby, adjust the wallaby-refresh parameter. The default
value is 60 seconds.

3.1.4. Setting the Network Interface


The MRG Management Console is a web-based tool. You can use any internet browser to access the
tool whether it is running on the local host or on a remote machine.

The web console is bound to the localhost network interface by default. This setting allows only
local connections to be made. To make the MRG Management Console accessible to other machines
on the network, the IP address of another network interface on the host needs to be specified in the
configuration file.

1. Specify the IP address by opening the /etc/cumin/cumin.conf file and locating the [web]
section.

On installation, the [web] section in the configuration file will have the following lines commented
out. Remove the # symbol and edit each line to bind the web console to a different network
interface:

[web]
host: 192.168.0.20
port: 1234

2. Setting the host parameter to 0.0.0.0 will make the web console bind to all local network
interfaces.

3.1.5. Setting the MRG Management Console Persona


The default installation prepares the MRG Management Console interface for use with both MRG
Grid and MRG Messaging. It is possible to streamline the interface for use with one or the other by
selecting an alternate persona. To do this, edit the /etc/cumin/cumin.conf file and change the
persona value in the [web] section from default to either messaging or grid. For example:

[web]
persona: grid

3.1.6. Adding Users


A username and password are required to log into the web interface. Create users with the following
command:

# cumin-admin add-user user

This will add a new user named user and prompt for a password. Using this form of the command
ensures that passwords are not retained in the shell history.


3.2. Configuring Sesame

3.2.1. Setting the Broker Address


This configuration should be performed on all nodes where the sesame package is installed.

Open the /etc/sesame/sesame.conf file in your preferred text editor and locate the host
parameter. This parameter must be set to the hostname of the machine running the MRG Messaging
broker:

host=example.com

The port parameter can also be set, although the default value should be correct for most
configurations.

3.2.2. Configuring Authentication


Sesame will authenticate to the MRG Messaging broker using the anonymous mechanism by default.
If anonymous authentication is permitted by the broker, this step may be skipped. Otherwise use the
following command on the host where the broker is installed to create credentials for use by all nodes
running Sesame:

$ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID sesame

This command will create a sesame user in the SASL database. For more information about the
saslpasswd2 command, refer to the MRG Messaging Installation Guide.

On each node where the sesame package is installed, open /etc/sesame/sesame.conf in your preferred text editor and modify the following parameters. Set mech to PLAIN or another supported password mechanism (this value may be a space separated list if there are multiple supported mechanisms). Set uid to sesame and pwd to the password.

mech=PLAIN
uid=sesame
pwd=password

Note

See configuration file comments on the pwd-file parameter if you wish to place the password
in an external file.

3.3. Configuring Grid Plug-ins

3.3.1. Setting Broker Address and General Configuration


This configuration should be performed on every MRG Grid node where the condor-qmf package has
been installed.


Note

MRG Grid can also be configured remotely using the remote configuration feature. For more
information about the remote configuration feature and how to use it, see the MRG Grid User
Guide.

1. Create a new file in the /etc/condor/config.d/ directory called 40QMF.config:

# cd /etc/condor/config.d/
# touch 40QMF.config

2. To set the broker address on all nodes which are not running the MRG Messaging broker locally,
add the following line to the 40QMF.config file and specify the hostname of the machine running
the broker:

QMF_BROKER_HOST = <hostname>

3. To be able to edit fair-share in the MRG Management Console, edit the 40QMF.config file on all
nodes running the condor_negotiator to add the following line:

ENABLE_RUNTIME_CONFIG = TRUE

To enable runtime configuration of Limit values, it is vital that this line is present.

4. The sampling frequency of some graphs in the MRG Grid overview screens is related to how
frequently the condor collector sends updates. The default rate is fifteen minutes (900 seconds).
This can be changed by adjusting the COLLECTOR_UPDATE_INTERVAL parameter.

Do this by editing the new 40QMF.config file on the node running the condor_collector to
add the following line, with the desired value in seconds:

COLLECTOR_UPDATE_INTERVAL = 60

5. Restart the condor service to pick up the changes (this command will also start the condor
service if it is not already running):

# /sbin/service condor restart

3.3.2. Configuring Authentication


MRG Grid will authenticate to the MRG Messaging broker using the anonymous mechanism by
default. If anonymous authentication is permitted by the broker, this step can be skipped. Otherwise
use the following command on the host where the broker is installed to create credentials for use by all
MRG Grid nodes:


$ sudo -u qpidd /usr/sbin/saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID grid

When prompted, create a password. This command will create a grid user in the SASL database.
For more information about the saslpasswd2 command, refer to the MRG Messaging Installation
Guide.

Note

The qpidd user should be able to read /var/lib/qpidd/qpidd.sasldb. If the ownership is wrong, /var/log/messages will display a permission denied error.

The following lines should be added to the 40QMF.config file on every MRG Grid node where the
condor-qmf package has been installed. The QMF_BROKER_AUTH_MECH parameter may be set to
PLAIN or another supported mechanism:

QMF_BROKER_AUTH_MECH = PLAIN
QMF_BROKER_USERNAME = grid
QMF_BROKER_PASSWORD_FILE = <path>

The last parameter specifies the path of a file containing the password for the grid user in plain text.
The security of the password file is the responsibility of system administrators.

3.3.3. Setting Job Server Configuration


A Job Server must be configured in the MRG Grid pool for Cumin to show job submissions and details.
The Job Server can be configured in the following ways:

1. Default configuration. When the condor-qmf package is installed, the scheduler plug-ins will be set
up by default to provide a job server. This configuration will publish data for jobs in the scheduler
job queue log. No action is needed to use this configuration.

2. A feature named JobServer is predefined in the configuration store for use with the remote
configuration tools. This feature will set up a dedicated process to publish data for jobs based on
the job history files and the scheduler job queue log.

Applying the JobServer feature through remote configuration is the recommended way to
configure a dedicated job server. Generally, using the remote configuration feature removes the
need to edit configuration files and restart manually, simplifying potentially complex configuration
tasks.

3. A dedicated job server as described in item 2 above can also be configured manually. For a manual
configuration, edit the /etc/condor/config.d/40QMF.config file and add the following:

QMF_PUBLISH_SUBMISSIONS = False
DAEMON_LIST = $(DAEMON_LIST) JOB_SERVER

You can also add and modify the following:


HISTORY_INTERVAL = 60
JOB_SERVER.JOB_SERVER_DEBUG = D_FULLDEBUG

The default value for HISTORY_INTERVAL is 120 seconds, and the JOB_SERVER.JOB_SERVER_DEBUG setting will enable detailed logging.

Configuration changes will take effect the next time condor is started.

Chapter 4. Running
4.1. Starting Services Manually
The service command can be used to manually start, stop, restart, or check the status of services
on the local host.

1. Use these commands to start the following MRG services on the node(s) where they are installed:

Starting the MRG Messaging broker:

# service qpidd start
Starting Qpid AMQP daemon: [ OK ]

Starting Sesame:

# service sesame start
Starting Sesame daemon: [ OK ]

Starting MRG Grid:

# service condor start
Starting Condor daemons: [ OK ]

Starting the MRG Management Console:

# service cumin start
Starting Cumin: [ OK ]

Note

The cumin-database install command must be run before the MRG Management
Console can be started for the first time.

2. After a configuration option has been changed, use the service command to restart a running
application:

# service cumin restart
Stopping Cumin: [ OK ]
Starting Cumin: [ OK ]


4.2. Starting Services on System Boot


The MRG Messaging broker, Sesame, MRG Grid and MRG Management Console services are all
configured by default to start automatically on system boot. However, the MRG Management Console
will only start automatically if the postgresql service is configured to start on system boot; by default
it is not.

To configure postgresql to start on system boot, use the chkconfig command to set default run
levels:

# chkconfig postgresql on
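To verify the change (a quick sketch; the reported run levels will vary by system), list the run levels for which a service is enabled:

# chkconfig --list postgresql
# chkconfig --list cumin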

4.3. Connecting to the MRG Management Console


Open an internet browser and enter the web address (URL) for the MRG Management Console. The web address is the host and port where the Cumin service is running, for example http://localhost:45672/. The TCP port used by the MRG Management Console (default 45672) must be open for incoming traffic on the console host firewall to allow access from other hosts on the network.
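For example, one way to open the default port with iptables on Red Hat Enterprise Linux (a sketch; adapt it to your site's firewall policy and to any non-default port you have configured):

# iptables -I INPUT -p tcp --dport 45672 -j ACCEPT
# service iptables save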

4.4. Logging
The MRG Management Console keeps log files in the /var/log/cumin directory. This directory will
contain log files for the master script and each cumin-web or cumin-data process that is started as part
of the cumin service.

Three log files are kept for each process and have the extensions .log, .stderr and .stdout. The .log file contains log entries from the running application. The .stderr and .stdout files contain redirected terminal output. Normally the .stderr and .stdout files will be empty, but they may contain error information. The master script makes an entry in the master.log file each time it starts or restarts another cumin process. If /sbin/service reports [FAILED] when cumin is started, or if cumin does not seem to be running as expected, check these files for information.

A maximum log file size is enforced, and logs are rolled over when they reach the maximum size. The maximum log file size and the number of rolled-over log files to archive can be set in the /etc/cumin/cumin.conf file with the log-max-mb and log-max-archives parameters.
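A minimal sketch of these settings in /etc/cumin/cumin.conf (the values are illustrative; see the comments in your installed configuration file for the section these parameters belong in):

log-max-mb: 10
log-max-archives: 5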

Chapter 5. Frequently Asked Questions


Q: If I uninstall, reinstall or update the Cumin software will my database be lost?

A: No, the data in the database will persist. Even an uninstall, reinstall, or update of PostgreSQL
should not affect your data. However, you're advised to back up the database prior to any such
operations (more information on backup can be found in the PostgreSQL documentation).
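For example, a backup could be taken with the standard PostgreSQL pg_dump tool (a hedged sketch; the cumin database name and user match those used elsewhere in this guide):

$ pg_dump -U cumin -h localhost cumin > cumin-backup.sql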

Q: So what if I want to create a fresh database?

A: To discard your data, the database must be destroyed and recreated. Optionally, you may
preserve the user account data during this procedure.

To backup your user account data:

$ cumin-admin export-users my_users

Then destroy the old database and create a new one:

Warning

This command will cause you to lose all data previously stored in the database. Use only
with extreme caution.

$ cumin-database drop
$ cumin-database create

To restore your user account data:

$ cumin-admin import-users my_users

Q: Help! My database is corrupted! What do I do now?

A: If the database is completely corrupted, the easiest way to fix the problem is to destroy the old
database, and create a new one as described above.

Q: Will I ever be required to recreate my database as part of a software upgrade?

A: Occasionally, new features in Cumin may require changes to the database schema. If this is the
case, the Release Notes will inform you that the database must be recreated for use with the
new version of software. If practical, additional instructions or facilities may be included to help
with the transition. For example, instructions on preserving the user account data.

Q: If I have to recreate my database, what will I actually lose?


A: Presently Cumin stores 24 hours of sample data for calculating statistics along with user account
data and information about agents and objects it discovers through QMF. Cumin will dynamically
rediscover agents and objects while it runs, so this type of data is not really lost.

User account data will be lost but may be restored as described above, assuming it has
previously been exported with cumin-admin. Sample data from the last 24 hours will be lost,
affecting some statistics and charts displayed by Cumin.

Q: How can I make the graph labeled Grid - Overview, Host info update more frequently?

A: The data comes from the Collector, controlled by the COLLECTOR_UPDATE_INTERVAL parameter. The default value is 900 seconds (15 minutes). For more frequent updates, set it to a smaller value, such as 30, on the nodes where the condor_collector is running. This can be done in /etc/condor/config.d/40QMF.config.
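For example, adding this line to that file reduces the interval to 30 seconds (restart the condor service afterwards, as described in Section 3.3.1):

COLLECTOR_UPDATE_INTERVAL = 30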

Chapter 6. More Information
Reporting a Bug
If you have found a bug in the MRG Management Console, follow these instructions to enter a bug
report:
1. You will need a Bugzilla account. You can create one at http://bugzilla.redhat.com/.

2. Once you have a Bugzilla account, log in and click on Enter A New Bug Report.

3. When submitting a bug report, identify the product (Red Hat Enterprise MRG), the version (2.2),
and whether the bug occurs in the software (component = management) or in the documentation
(component = Management_Console_Installation_Guide).

Further Reading
Red Hat Enterprise MRG and MRG Messaging Product Information
http://www.redhat.com/mrg

Red Hat Enterprise MRG manuals
http://docs.redhat.com/docs/en-US/index.html

Red Hat Knowledgebase
https://access.redhat.com/knowledge/search

Appendix A. Configuring the MRG Management Console for Medium Scale Deployment
Configuration considerations for deployments change as scale increases. This chapter describes how
to configure the MRG Management Console installation for medium scale deployments. A medium
scale deployment is described in Chapter 1, Deployment Sizes.

A.1. Running Multiple MRG Management Console Web Servers
In medium scale environments, it may be necessary to run multiple MRG Management Console web
servers as the total number of page views per second increases. To ensure optimal performance, it
is recommended that a single web server is used by no more than 20 to 30 simultaneous users. This
section describes how to configure the MRG Management Console installation to run multiple web
servers.

1. Creating Additional Sections in /etc/cumin/cumin.conf.

To add web servers, a new configuration section must be added to /etc/cumin/cumin.conf for each additional server. These sections have the same structure and default values as the standard [web] section with the exception of the log-file parameter. By default, each new server will log to a file in /var/log/cumin/section_name.log.

Each new section must specify a unique value for port as each server binds to its own port.
Adding the following lines to /etc/cumin/cumin.conf will add 3 new web servers to the
configuration, web1, web2 and web3, using default values for each server except port. The
default port for the web section is 45672.

[web1]
port: 45674

[web2]
port: 45675

[web3]
port: 45676

The port values used above are chosen arbitrarily.

The names of the sections created above must be added to the webs parameter in the [master]
section in order for the new web servers to run.

[master]
webs: web, web1, web2, web3

2. Checking the Configuration.

After making the changes above, Cumin may be restarted. The /var/log/cumin/master.log
file should contain entries for the new web servers.


# /sbin/service cumin restart
Stopping cumin: [ OK ]
Starting cumin: [ OK ]

# tail /var/log/cumin/master.log
...
20861 2011-04-01 12:09:45,560 INFO Starting: cumin-web --section=web --daemon
20861 2011-04-01 12:09:45,588 INFO Starting: cumin-web --section=web1 --daemon
20861 2011-04-01 12:09:45,602 INFO Starting: cumin-web --section=web2 --daemon
20861 2011-04-01 12:09:45,609 INFO Starting: cumin-web --section=web3 --daemon
...

3. Accessing different servers.

To visit a particular server, navigate using the appropriate port value. For example, on the
machine where the MRG Management Console is installed, open an internet browser and
navigate to http://localhost:45675/. This visits the [web2] server as configured above.

4. Troubleshooting.

Make sure that the section names listed in the webs parameter of the [master] section are
spelled correctly. Section naming errors can be identified by searching for NoSectionError in
/var/log/cumin/*.stderr.

If Cumin is running but cannot be accessed on a particular port as expected, make sure the port
values specified in /etc/cumin/cumin.conf for each section are correct and that the ports are
not used by any other application on the system.
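One way to check that each configured port has a listener (a hedged example; netstat is part of the net-tools package on Red Hat Enterprise Linux 5 and 6):

# netstat -tln | grep -E '4567[2-6]'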

Whenever changes are made to /etc/cumin/cumin.conf the service must be restarted for the
changes to take effect.

5. A note about load balancing and proxies.

The above instructions do not cover setting up a web server proxy; users must select a port
manually. However, it may be desirable in a particular installation to set up a proxy which handles
load balancing automatically and allows users to visit a single URL rather than specific ports.

A.2. Limiting Objects Processed by the MRG Management Console
In the default configuration, the MRG Management Console will process all objects available from
the MRG Messaging broker. If the persona value for all cumin-web instances at a site has been
specialized for MRG Messaging or MRG Grid, the types of objects processed by cumin may be
limited (refer to Section 3.1.5, “Setting the MRG Management Console Persona” for specialization
of web servers). This will reduce the load on the MRG Messaging broker and on the host running
the Cumin service.

For convenience, the standard /etc/cumin/cumin.conf file already contains several alternative
settings for the datas parameter in the [master] section, with explanatory comments. Select one
of these settings based on the persona value being used.

A.3. Increasing the Default QMF Update Interval for MRG Grid Components
The default QMF update interval for MRG Grid components is 10 seconds. This interval affects how
frequently MRG Grid notifies the MRG Management Console of changes in status. Increasing this
interval for certain components can noticeably decrease load on the MRG Management Console. Edit
the /etc/condor/config.d/40QMF.config file created in Section 3.3.1, “Setting Broker Address
and General Configuration” to add the following recommended setting for a medium scale deployment:

STARTD.QMF_UPDATE_INTERVAL = 30

Important

The NEGOTIATOR.QMF_UPDATE_INTERVAL should be less than or equal to the NEGOTIATOR_INTERVAL (which defaults to 60 seconds). If either of these intervals is modified, check that this relationship still holds.
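For example, a pairing in 40QMF.config that keeps this relationship intact might look like the following (a sketch; 60 is the default NEGOTIATOR_INTERVAL):

NEGOTIATOR.QMF_UPDATE_INTERVAL = 30
NEGOTIATOR_INTERVAL = 60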

A.4. Tuning the Cumin Database

max_connections
The max_connections parameter controls the number of simultaneous database connections
allowed by the PostgreSQL server; the default value is 100. This value must be large enough to
support the cumin-web and cumin-data processes that make up the MRG Management Console.

It is a good idea to check the value of this parameter if the MRG Management Console is configured to
run multiple cumin-web instances (as described in Section A.1, “Running Multiple MRG Management
Console Web Servers”) or if other applications besides Cumin use the same PostgreSQL server.

The maximum number of concurrent connections needed by Cumin can be estimated with the
following formula:

(cumin-web instances * 36) + (cumin-data instances) + 2

For a default Cumin configuration this number will be 43, but running multiple cumin-web instances
will increase the number significantly.
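As a worked example grounded in the formula above: the default configuration runs one cumin-web instance, so (1 * 36) + 5 + 2 = 43 implies five cumin-data instances. The four-web-server configuration from Section A.1 (web, web1, web2 and web3) would then need roughly (4 * 36) + 5 + 2 = 151 connections, well above the PostgreSQL default of 100.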

If you receive the error message OperationalError: FATAL: sorry, too many clients
already in the user interface or in a cumin log file, the available database connections were
exhausted and a Cumin operation failed.

To change the allowed number of database connections, edit the /var/lib/pgsql/data/postgresql.conf file and set the max_connections parameter. The PostgreSQL server must be restarted for this change to take effect.

max_fsm_pages
The max_fsm_pages parameter in /var/lib/pgsql/data/postgresql.conf affects
PostgreSQL's ability to reclaim free space. Free space will be reclaimed when the MRG Management


Console runs the VACUUM command on the database (the vacuum interval can be set in /etc/cumin/cumin.conf). The default value for max_fsm_pages is 20,000. In medium scale deployments, it is recommended that max_fsm_pages be set to at least 64,000.

Important

The following procedure is only applicable on a Red Hat Enterprise Linux 5 operating system, in
which the PostgreSQL 8.1 database is in use. Red Hat Enterprise Linux 6 carries a later version
of PostgreSQL, in which the max_fsm_pages parameter is no longer valid.

To set the max_fsm_pages parameter, use the following procedure:

1. Start an interactive PostgreSQL shell.

$ psql -d cumin -U cumin -h localhost

2. Run the following command from the PostgreSQL prompt.

cumin=# VACUUM ANALYZE VERBOSE;

This will produce a large amount of output and may take several minutes to complete.

3. Edit the /var/lib/pgsql/data/postgresql.conf file and set the max_fsm_pages parameter to at least the indicated value from the output of the previous command.

4. Restart the PostgreSQL service and perform this process again, repeating until PostgreSQL
indicates that free space tracking is adequate:

DETAIL: A total of 25712 page slots are in use (including overhead).
25712 page slots are required to track all free space.
Current limits are: 32000 page slots, 1000 relations, using 292 KB.
VACUUM

5. After PostgreSQL is restarted, restart Cumin for the changes to take effect.

Appendix B. Configuring the Messaging Broker
B.1. Changing the Update Interval
By default, the MRG Messaging broker will send updated information to the MRG Management
Console every ten seconds. Increase the interval to receive fewer updates and reduce load on the
broker or the network. Decrease the interval to receive more updates.

To change the update interval, open the /etc/qpidd.conf file in your preferred text editor and add
the mgmt-pub-interval configuration option on the broker:

mgmt-pub-interval=30

Enter the required update interval in seconds.

B.2. Configuring SSL


The MRG Messaging broker will always run with authentication checks turned on by default.
Passwords will be sent to the MRG Messaging broker from the MRG Management Console in
plain text. For greater security, SSL encryption can be used for communication between the MRG
Management Console and the broker.

In the broker, SSL is provided through the ssl.so module. This module is installed and loaded by
default in MRG Messaging. To enable the module, you need to specify the location of the database
containing the certificate and key to use. This certificate database is created and managed by the
Mozilla Network Security Services (NSS) certutil tool.

Use the following procedure to create a certificate database in /var/lib/qpidd and enable
communication over SSL:

1. Create a file named /var/lib/qpidd/passwordfile to hold the certificate database password. This is a plain text file containing a single password. The file should be owned by the qpidd user and should not be readable by any other user. Ownership and permissions on the file can be set as follows:

# chown qpidd:qpidd /var/lib/qpidd/passwordfile
# chmod 600 /var/lib/qpidd/passwordfile
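The file itself can be created with a simple shell redirect before setting the ownership and permissions above (a sketch; replace secret with a password of your choosing):

# echo "secret" > /var/lib/qpidd/passwordfile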

2. Create the database and insert a new certificate:

# cd /var/lib/qpidd
# sudo -u qpidd certutil -N -d . -f passwordfile
# sudo -u qpidd certutil -S -d . -f passwordfile -n nickname -s "CN=nickname" -t "CT,," -x -z /usr/bin/certutil

3. Set the following options in the /etc/qpidd.conf configuration file:


ssl-cert-password-file=/var/lib/qpidd/passwordfile
ssl-cert-db=/var/lib/qpidd
ssl-cert-name=nickname

Note

The default port for SSL communication is 5671. This port may be changed by specifying the
ssl-port option in the /etc/qpidd.conf file.

4. Install the qpid-cpp-server-ssl package:

# yum install qpid-cpp-server-ssl

5. Restart the broker.

# service qpidd restart

After restarting, you can check the /var/log/messages file to quickly verify that the broker is
listening for SSL connections. The message Listening for SSL connections on TCP
port 5671 indicates that SSL communication has been successfully configured.

6. Clients may now communicate with the broker using a URL specifying the amqps protocol and the
SSL port number, for example amqps://localhost:5671.

Important

The brokers parameter in /etc/cumin/cumin.conf must be changed to specify the amqps protocol and the SSL port number, and the MRG Management Console must be restarted to use SSL. Refer to Section 3.1.2, “Setting the Broker Address and Authentication” for information on setting the brokers parameter.
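For example, a brokers line using SSL might look like this (a sketch based on the address format shown in that section; substitute your own credentials and broker host):

brokers: amqps://cumin/[email protected]:5671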

For more information on setting up SSL encryption, refer to the MRG Messaging User Guide.

B.3. Adding Credentials to Optional Broker ACLs for MRG Services
The MRG Messaging broker can be configured to use an access control list (ACL). If an ACL has
been created for the MRG Messaging broker, ensure that any SASL users that have been created for
Cumin, Sesame and MRG Grid are handled in the ACL. Note that if MRG Grid or Sesame is using
anonymous authentication, the anonymous@QPID user must also be added.

For example, these additions to an ACL file grant unrestricted access to the users cumin, grid, and
sesame:

acl allow cumin@QPID all all
acl allow grid@QPID all all
acl allow sesame@QPID all all
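If anonymous authentication is in use, a corresponding entry would also be needed (a sketch matching the user named above):

acl allow anonymous@QPID all all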

For a full discussion of ACLs, see the MRG Messaging User Guide sections on security and
authorization.

Appendix C. Revision History
Revision 2-7 Tue Feb 28 2012 Tim Hildred [email protected]
Updated configuration file for new publication tool.

Revision 2-4 Mon Jan 16 2012 Cheryn Tan [email protected]
Fixed typos, rearranged content on tuning Cumin database

Revision 2-3 Thu Jan 12 2012 Cheryn Tan [email protected]
BZ#754224 - Moved content on setting up SSL and configuring ACLs to new appendix
BZ#753867 - Reverted to original hardware requirements
BZ#754223 - Rewrote firstrun instructions

Revision 2-2 Fri Jan 6 2012 Cheryn Tan [email protected]
BZ#753867 - Edited hardware requirements
BZ#768192 - Added appendix on max_fsm_pages

Revision 2-1 Thu Jan 5 2012 Cheryn Tan [email protected]
BZ#768192 - Added PostgreSQL max_connections for Cumin users
BZ#754225 - Link between wallaby and remote configuration feature

Revision 2-0 Tue Dec 6 2011 Alison Young [email protected]
Prepared for publishing

Revision 1-18 Tue Nov 29 2011 Alison Young [email protected]
BZ#731801 - minor update

Revision 1-17 Thu Nov 24 2011 Alison Young [email protected]
BZ#752912 - comment 23

Revision 1-16 Nov 18 2011 Alison Young [email protected]
BZ#752912 - addressed comments 14 - 21

Revision 1-15 Thu Nov 17 2011 Alison Young [email protected]
BZ#752406 - change RHEL versions
BZ#752912 - addressed comments 3 - 11

Revision 1-14 Mon Nov 14 2011 Alison Young [email protected]
BZ#629912 - FAQ clarification
BZ#752912 - Tech Review of 2.1 MCIG

Revision 1-11 Mon Nov 07 2011 Alison Young [email protected]
BZ#750820 - Instructions for configuring Sesame authentication are wrong and/or missing

Revision 1-10 Mon Oct 24 2011 Alison Young [email protected]
BZ#738793 - updates from review

Revision 1-9 Fri Oct 21 2011 Alison Young [email protected]
BZ#733683 - Added information on wallaby-broker configuration parameter
BZ#738793 - updates from review

Revision 1-8 Thu Oct 20 2011 Alison Young [email protected]
BZ#731813 - authentication as cumin user is necessary for job ops
BZ#738793 - updates from review

Revision 1-7 Mon Oct 18 2011 Alison Young [email protected]
BZ#629912 - How to Adjust update rate on slot utilization graph in Cumin
BZ#706096 - Remove content on configuration file ownership
BZ#731799 - "one of two ways" to configure the job server clarification
BZ#731801 - Review config instructions for jobserver
BZ#738785 - Missing preposition in chapter 5
BZ#738793 - Restructure review updates

Revision 1-6 Mon Oct 17 2011 Alison Young [email protected]
BZ#738793 - continued restructure

Revision 1-5 Fri Oct 14 2011 Alison Young [email protected]
BZ#738793 - commenced restructure

Revision 1-4 Thu Oct 13 2011 Alison Young [email protected]
BZ#731811 - cumin-database install and cumin-admin add-user must be run from root shell
BZ#737637 - Extra quotes in section 4.1
BZ#738792 - Name of the qpidd user in Note in section 2.1 is given as 'qpid'

Revision 1-3 Wed Sep 07 2011 Alison Young [email protected]
Prepared for publishing

Revision 1-1 Wed Sep 07 2011 Alison Young [email protected]
BZ#735358 - Update for adding cumin and grid to sasldb

Revision 1-0 Thu Jun 23 2011 Alison Young [email protected]
Prepared for publishing

Revision 0.1-5 Tue May 31 2011 Alison Young [email protected]
Rebuilt as some changes missing from previous build.

Revision 0.1-4 Mon May 30 2011 Alison Young [email protected]
Technical review fixes
BZ#674834 - treatment of data on uninstall/upgrade/reinstall
BZ#705828 - Sesame installation updates
BZ#706182 - configuration parameter settings for Job Server
BZ#706446 - RHEL-6 Server channel missing from table 2.1

Revision 0.1-3 Thu Apr 07 2011 Alison Young [email protected]
BZ#692227 - setting sasl_mech_list parameter in cumin.conf
BZ#696223 - Changed section 2.1 default MRG Messaging set up has changed

Revision 0.1-2 Thu Apr 07 2011 Alison Young [email protected]
BZ#681283 - Scale Documentation (2.x)
BZ#689785 - Change default QMF update interval, special config for submissions
BZ#690453 - setting the 'persona' value for console specialization
BZ#692983 - subsection on logging to Chapter 3

Revision 0.1-1 Tue Apr 05 2011 Alison Young [email protected]
BZ#687872 - Need instructions for anonymous@QPID plugin authentication
added update from v1.3 for BZ#634932 - Runtime Grid config setting

Revision 0.1-0 Tue Feb 22 2011 Alison Young [email protected]
Fork from 1.3
