Red Hat Enterprise MRG 2
Management Console Installation Guide
Installing the MRG Management Console for use with MRG Messaging
Lana Brindley
Alison Young
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available
at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this
document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity
Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.
This book contains basic overview and installation procedures for the MRG Management Console
component of the Red Hat Enterprise MRG distributed computing platform. The MRG Management
Console provides a web-based tool for management of MRG Messaging.
Preface
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. We Need Feedback!
1. Deployment Sizes
2. Installation
2.1. Installing the Broker
2.2. Installing Cumin
2.2.1. Install the Console
2.2.2. The Cumin database
2.3. Installing Sesame
2.4. Installing Grid Plug-ins
3. Configuration
3.1. Configuring Cumin
3.1.1. Creating SASL Credentials
3.1.2. Setting the Broker Address and Authentication
3.1.3. Specifying the Broker Address for Use of the Remote Configuration Feature
3.1.4. Setting the Network Interface
3.1.5. Setting the MRG Management Console Persona
3.1.6. Adding Users
3.2. Configuring Sesame
3.2.1. Setting the Broker Address
3.2.2. Configuring Authentication
3.3. Configuring Grid Plug-ins
3.3.1. Setting Broker Address and General Configuration
3.3.2. Configuring Authentication
3.3.3. Setting Job Server Configuration
4. Running
4.1. Starting Services Manually
4.2. Starting Services on System Boot
4.3. Connecting to the MRG Management Console
4.4. Logging
5. Frequently Asked Questions
6. More Information
A. Configuring the MRG Management Console for Medium Scale Deployment
A.1. Running Multiple MRG Management Console Web Servers
A.2. Limiting Objects Processed by the MRG Management Console
A.3. Increasing the Default QMF Update Interval for MRG Grid Components
A.4. Tuning the Cumin Database
B. Configuring the Messaging Broker
B.1. Changing the Update Interval
B.2. Configuring SSL
B.3. Adding Credentials to Optional Broker ACLs for MRG Services
C. Revision History
Preface
Red Hat Enterprise MRG
This book contains basic overview and installation information for the MRG Management Console
component of Red Hat Enterprise MRG. Red Hat Enterprise MRG is a high performance distributed
computing platform consisting of three components:
1. Messaging — Cross platform, high performance, reliable messaging using the Advanced Message
Queuing Protocol (AMQP) standard.
2. Realtime — Consistent low-latency and predictable response times for applications that require
microsecond latency.
3. Grid — Distributed High Throughput Computing (HTC) and High Performance Computing (HPC).
All three components of Red Hat Enterprise MRG are designed to be used as part of the platform, but
can also be used separately.
MRG Messaging is built on the Qpid Management Framework (QMF). The MRG Management
Console uses QMF to access data and functionality provided by the MRG Messaging broker (qpidd),
inventory daemon (sesame) and MRG Grid components.
This book describes how to set up and configure Cumin, an MRG Messaging broker, and a distributed
inventory. The broker is necessary for communication between the distributed components and
Cumin. The inventory and MRG Grid component installations must be performed on all nodes in the
deployment.
For more information about MRG Messaging architecture, including advanced installation and
configuration of the MRG Messaging broker, see the MRG Messaging User Guide.
For more information about MRG Grid, including advanced features and configuration, see the MRG
Grid User Guide.
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set
(https://fedorahosted.org/liberation-fonts/). The Liberation Fonts set is also used in HTML editions if
the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note:
Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight
keycaps and key combinations. For example:
The above includes a file name, a shell command and a keycap, all presented in mono-spaced bold
and all distinguishable thanks to context.
Key combinations can be distinguished from keycaps by the hyphen connecting each part of a key
combination. For example:
The first paragraph highlights the particular keycap to press. The second highlights two key
combinations (each a set of three keycaps with each set pressed simultaneously).
If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text;
labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse
Preferences. In the Buttons tab, click the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).
The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in proportional bold and
all distinguishable by context.
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or
variable text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:
To see the version of a currently installed package, use the rpm -q package
command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you enter when issuing a command or for text
displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;
import javax.naming.InitialContext;
System.out.println("Created Echo");
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should
have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to
the current session, or services that need restarting before an update will apply. Ignoring a box
labeled 'Important' will not cause data loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
If you experience difficulty with a procedure described in this documentation, visit the Red Hat
Customer Portal at http://access.redhat.com. Through the customer portal, you can:
• search or browse through a knowledgebase of technical support articles about Red Hat products.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and
technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/
listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.
If you find a typographical error in this manual, or if you have thought of a way to make this manual
better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/
against the product Red Hat Enterprise MRG.
If you have a suggestion for improving the documentation, try to be as specific as possible when
describing it. If you have found an error, please include the section number and some of the
surrounding text so we can find it easily.
Chapter 1.
Deployment Sizes
The MRG Management Console is designed to scale for deployments of MRG Messaging and MRG
Grid. The following configurations indicate typical size and load characteristics for small, medium and
large deployments.
Small
The default software configuration of the MRG Management Console is appropriate for small scale
deployments. An example small scale deployment is:
• 5 concurrent console users, accessing the console at 1 page view per second (peak)
• 10 job completions per minute (sustained), 3 years of job history (1 million jobs)
Medium
To configure the MRG Management Console for use with medium scale deployments, see
Appendix A, Configuring the MRG Management Console for Medium Scale Deployment. An example
medium scale deployment is:
• 20 concurrent console users, accessing the console at 1 page view per second (peak)
• 100 job completions per minute (sustained), 3 years of job history (10 million jobs)
Large
A large scale console is defined as a console supporting more than 5000 Execute Nodes and
100 concurrent users accessing the console at 1 page view per second during peak periods. There
are several considerations when implementing a large scale console. Red Hat recommends
that customers configure large scale MRG Management Console installations in cooperation with a
Solutions Architect through Red Hat consulting.
Chapter 2.
Installation
To install the MRG Management Console you will need to have registered your system with Red Hat
Network (https://rhn.redhat.com/help/about.pxt). This table lists the Red Hat Enterprise MRG channels
available on Red Hat Network for the MRG Management Console.
Table 2.1. Red Hat Enterprise MRG Channels Available on Red Hat Network
Channel Name Operating System Architecture
Red Hat MRG Management RHEL-5 Server 32-bit, 64-bit
Red Hat MRG Management RHEL-6 Server 32-bit, 64-bit
Hardware Requirements
It is recommended that you have the following minimum hardware requirements before attempting to
install the MRG Management Console:
• 512 MB RAM
• 10 GB disk space
A full installation of the MRG Messaging components is recommended, but only the broker is required.
Install the broker with the following yum command as root:
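A minimal sketch of that command, assuming the broker is packaged as qpid-cpp-server (confirm the package name against the MRG Messaging Installation Guide):

```
# yum install qpid-cpp-server
```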
For more information on installing the MRG Messaging broker, see the MRG Messaging Installation
Guide.
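The console itself is installed from the Red Hat MRG Management channel. A sketch of the install command, assuming the console is packaged as cumin as in the standard MRG packaging:

```
# yum install cumin
```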
Note
If you find that yum is not installing all the dependencies you require, make sure that you have
registered your system with Red Hat Network.
Before you run the MRG Management Console for the first time, you will need to install the Cumin
database.
# cumin-database install
This command will produce a warning that it is about to modify any existing configuration. Enter yes to
continue with the installation.
Install the QMF plug-ins on each node in the condor pool using the yum command:
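A sketch of that command, assuming the plug-ins are packaged as condor-qmf as in the standard MRG Grid packaging:

```
# yum install condor-qmf
```

The inventory daemon described in Section 2.3 can be installed the same way, assuming the package name sesame:

```
# yum install sesame
```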
Chapter 3.
Configuration
3.1. Configuring Cumin
Important
The MRG Management Console must connect to the MRG Messaging broker authenticated as
the cumin user for full operability. The MRG Management Console Installation Guide assumes
that MRG Messaging has already been configured to support password authentication using the
Cyrus SASL library. For information on configuring security in MRG Messaging see the MRG
Messaging Installation Guide and MRG Messaging User Guide.
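The credentials are created with the saslpasswd2 command on the node running the broker; a sketch, assuming the default broker SASL database location and realm used by MRG Messaging:

```
# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID cumin
```

When prompted, supply a password for the cumin user.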
This command will create a cumin user in the SASL database. Section 3.1.2, “Setting the Broker
Address and Authentication” explains how to configure the Management Console to use these
credentials for authentication to the broker.
For more information on the saslpasswd2 command, see the MRG Messaging Installation Guide.
1. As the root user, open the /etc/cumin/cumin.conf file in your preferred text editor and locate
the brokers parameter.
Note
The authentication information will be stored in plain text. However, as permissions on this file
are restricted, the information will be secure provided users do not have root access.
Note
[<protocol>://]<username>/<password>@<target-host>[:<tcp-port>]
The optional tcp-port parameter will default to 5672 if not specified. The optional protocol
value may be amqp (the default) or amqps for SSL. Refer to Appendix B, Configuring the
Messaging Broker and the MRG Messaging User Guide for additional information.
2. The username value in this case must be cumin, the user that was added to the SASL
configuration in Section 3.1.1, “Creating SASL Credentials”.
The password will be the password that you supplied when prompted by the saslpasswd2
command. In addition, multiple brokers may be specified in a comma-separated list. For example:
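A hypothetical example of such a list, using placeholder hostnames:

```
brokers: cumin/[email protected]:5672, cumin/[email protected]:5672
```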
Set the sasl-mech-list parameter to explicitly restrict the console from using anonymous
authentication.
Do this by setting the value to a space-separated list of appropriate mechanisms supported by the
broker, excluding anonymous. In a default broker configuration this list will include only the plain
mechanism.
sasl-mech-list: PLAIN
More information on authentication mechanisms can be found in the Cyrus SASL documentation.
By default, Cumin will use the first address specified in the brokers parameter as the address of the
MRG Messaging broker for remote configuration. If that address is correct, this step can be skipped.
If the remote configuration feature is set up to use a different broker, the wallaby-broker parameter
needs to be set accordingly. For example:
wallaby-broker: cumin/[email protected]
To control how often Cumin will poll wallaby, adjust the wallaby-refresh parameter. The default
value is 60 seconds.
The web console is bound to the localhost network interface by default. This setting allows only
local connections to be made. To make the MRG Management Console accessible to other machines
on the network, the IP address of another network interface on the host needs to be specified in the
configuration file.
1. Specify the IP address by opening the /etc/cumin/cumin.conf file and locating the [web]
section.
On installation, the [web] section in the configuration file will have the following lines commented
out. Remove the # symbol and edit each line to bind the web console to a different network
interface:
[web]
host: 192.168.0.20
port: 1234
2. Setting the host parameter to 0.0.0.0 will make the web console bind to all local network
interfaces.
The persona parameter in the [web] section controls which parts of the console are presented.
For example:
[web]
persona: grid
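Users are added with the cumin-admin tool; a sketch, with user as a placeholder name:

```
# cumin-admin add-user user
```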
This will add a new user named user and prompt for a password. Using this form of the command
ensures that passwords are not retained in the shell history.
Open the /etc/sesame/sesame.conf file in your preferred text editor and locate the host
parameter. This parameter must be set to the hostname of the machine running the MRG Messaging
broker:
host=example.com
The port parameter can also be set, although the default value should be correct for most
configurations.
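The sesame credentials are created with saslpasswd2 on the node running the broker; a sketch, assuming the default broker SASL database location and realm:

```
# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID sesame
```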
This command will create a sesame user in the SASL database. For more information about the
saslpasswd2 command, refer to the MRG Messaging Installation Guide.
mech=PLAIN
uid=sesame
pwd=password
Note
See configuration file comments on the pwd-file parameter if you wish to place the password
in an external file.
Note
MRG Grid can also be configured remotely using the remote configuration feature. For more
information about the remote configuration feature and how to use it, see the MRG Messaging
User Guide.
# cd /etc/condor/config.d/
# touch 40QMF.config
2. To set the broker address on all nodes which are not running the MRG Messaging broker locally,
add the following line to the 40QMF.config file and specify the hostname of the machine running
the broker:
QMF_BROKER_HOST = <hostname>
3. To be able to edit fair-share in the MRG Management Console, edit the 40QMF.config file on all
nodes running the condor_negotiator to add the following line:
ENABLE_RUNTIME_CONFIG = TRUE
To enable runtime configuration of Limit values, it is vital that this line is present.
4. The sampling frequency of some graphs in the MRG Grid overview screens is related to how
frequently the condor collector sends updates. The default rate is fifteen minutes (900 seconds).
This can be changed by adjusting the COLLECTOR_UPDATE_INTERVAL parameter.
Do this by editing the new 40QMF.config file on the node running the condor_collector to
add the following line, with the desired value in seconds:
COLLECTOR_UPDATE_INTERVAL = 60
5. Restart the condor service to pick up the changes (this command will also start the condor
service if it is not already running):
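A sketch of the restart command:

```
# service condor restart
```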
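The grid credentials are created with saslpasswd2 on the node running the broker; a sketch, assuming the default broker SASL database location and realm:

```
# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID grid
```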
When prompted, create a password. This command will create a grid user in the SASL database.
For more information about the saslpasswd2 command, refer to the MRG Messaging Installation
Guide.
Note
The following lines should be added to the 40QMF.config file on every MRG Grid node where the
condor-qmf package has been installed. The QMF_BROKER_AUTH_MECH parameter may be set to
PLAIN or another supported mechanism:
QMF_BROKER_AUTH_MECH = PLAIN
QMF_BROKER_USERNAME = grid
QMF_BROKER_PASSWORD_FILE = <path>
The last parameter specifies the path of a file containing the password for the grid user in plain text.
The security of the password file is the responsibility of system administrators.
1. Default configuration. When the condor-qmf package is installed, the scheduler plug-ins will be set
up by default to provide a job server. This configuration will publish data for jobs in the scheduler
job queue log. No action is needed to use this configuration.
2. A feature named JobServer is predefined in the configuration store for use with the remote
configuration tools. This feature will set up a dedicated process to publish data for jobs based on
the job history files and the scheduler job queue log.
Applying the JobServer feature through remote configuration is the recommended way to
configure a dedicated job server. Generally, using the remote configuration feature removes the
need to edit configuration files and restart manually, simplifying potentially complex configuration
tasks.
3. A dedicated job server as described in 2. above can also be configured manually. For a manual
configuration, edit the /etc/condor/config.d/40QMF.config file and add the following:
QMF_PUBLISH_SUBMISSIONS = False
DAEMON_LIST = $(DAEMON_LIST) JOB_SERVER
HISTORY_INTERVAL = 60
JOB_SERVER.JOB_SERVER_DEBUG = D_FULLDEBUG
Configuration changes will take effect the next time condor is started.
Chapter 4.
Running
4.1. Starting Services Manually
The service command can be used to manually start, stop, restart, or check the status of services
on the local host.
1. Use these commands to start the following MRG services on the node(s) where they are installed:
Starting Sesame:
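The individual start commands were elided here; a sketch covering the services named in this guide (run each on the node where that component is installed):

```
# service sesame start
# service qpidd start
# service cumin start
# service condor start
```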
Note
The cumin-database install command must be run before the MRG Management
Console can be started for the first time.
2. After a configuration option has been changed, use the service command to restart a running
application:
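For example, to restart the console after editing /etc/cumin/cumin.conf:

```
# service cumin restart
```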
To configure postgresql to start on system boot, use the chkconfig command to set default run
levels:
# chkconfig postgresql on
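The same can be done for the other MRG services on the nodes where they are installed:

```
# chkconfig qpidd on
# chkconfig sesame on
# chkconfig cumin on
# chkconfig condor on
```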
4.4. Logging
The MRG Management Console keeps log files in the /var/log/cumin directory. This directory will
contain log files for the master script and each cumin-web or cumin-data process that is started as part
of the cumin service.
Three log files are kept for each process and have the extensions .log, .stderr and .stdout.
The .log file contains log entries from the running application. The .stderr and .stdout files
contain redirected terminal output. Normally the .stderr and .stdout files would be empty, but they may
contain error information. The master script makes an entry in the master.log file each time it starts
or restarts another cumin process. If /sbin/service reports [FAILED] when cumin is started or if
cumin does not seem to be running as expected, check these files for information.
A maximum log file size is enforced, and logs will be rolled over when they reach the maximum size.
The maximum log file size and the number of rolled-over log files to archive can be set in the /etc/
cumin/cumin.conf file with the log-max-mb and log-max-archives parameters.
Chapter 5.
Frequently Asked Questions
A: No, the data in the database will persist. Even an uninstall, reinstall, or update of PostgreSQL
should not affect your data. However, you're advised to back up the database prior to any such
operations (more information on backup can be found in the PostgreSQL documentation).
A: To discard your data, the database must be destroyed and recreated. Optionally, you may
preserve the user account data during this procedure.
Warning
This command will cause you to lose all data previously stored in the database. Use only
with extreme caution.
$ cumin-database drop
$ cumin-database create
A: If the database is completely corrupted, the easiest way to fix the problem is to destroy the old
database, and create a new one as described above.
A: Occasionally, new features in Cumin may require changes to the database schema. If this is the
case, the Release Notes will inform you that the database must be recreated for use with the
new version of software. If practical, additional instructions or facilities may be included to help
with the transition. For example, instructions on preserving the user account data.
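As a hypothetical illustration of that workflow, assuming export-users and import-users subcommands of cumin-admin (check cumin-admin --help for the exact names on your version):

```
# cumin-admin export-users users.bak
# cumin-database drop
# cumin-database create
# cumin-admin import-users users.bak
```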
A: Presently Cumin stores 24 hours of sample data for calculating statistics along with user account
data and information about agents and objects it discovers through QMF. Cumin will dynamically
rediscover agents and objects while it runs, so this type of data is not really lost.
User account data will be lost but may be restored as described above, assuming it has previously
been exported with cumin-admin. Sample data from the last 24 hours will be lost, affecting some
statistics and charts displayed by Cumin.
Q: How can I make the graph labeled Grid - Overview, Host info update more frequently?
A: The data comes from the Collector, controlled by the COLLECTOR_UPDATE_INTERVAL parameter. The
default value is 900 seconds (15 minutes). For more frequent updates, set it to a smaller value,
such as 30, on the nodes where the condor_collector is running. This can be done in /
etc/condor/config.d/40QMF.config.
Chapter 6.
More Information
Reporting a Bug
If you have found a bug in the MRG Management Console, follow these instructions to enter a bug
report:
1. You will need a Bugzilla account. You can create one at http://bugzilla.redhat.com/.
2. Once you have a Bugzilla account, log in and click on Enter A New Bug Report.
3. When submitting a bug report, identify the product (Red Hat Enterprise MRG), the version (2.2),
and whether the bug occurs in the software (component = management) or in the documentation
(component = Management_Console_Installation_Guide).
Further Reading
Red Hat Enterprise MRG and MRG Messaging Product Information
http://www.redhat.com/mrg
Appendix A. Configuring the MRG
Management Console for Medium Scale
Deployment
Configuration considerations for deployments change as scale increases. This appendix describes how
to configure the MRG Management Console installation for medium scale deployments. A medium
scale deployment is described in Chapter 1, Deployment Sizes.
Each new section must specify a unique value for port, as each server binds to its own port.
Adding the following lines to /etc/cumin/cumin.conf will add 3 new web servers to the
configuration: web1, web2, and web3, using default values for each server except port. The
default port for the web section is 45672.
[web1]
port: 45674
[web2]
port: 45675
[web3]
port: 45676
The names of the sections created above must be added to the webs parameter in the [master]
section in order for the new web servers to run.
[master]
webs: web, web1, web2, web3
After making the changes above, Cumin may be restarted. The /var/log/cumin/master.log
file should contain entries for the new web servers.
# tail /var/log/cumin/master.log
...
20861 2011-04-01 12:09:45,560 INFO Starting: cumin-web --section=web --daemon
20861 2011-04-01 12:09:45,588 INFO Starting: cumin-web --section=web1 --daemon
20861 2011-04-01 12:09:45,602 INFO Starting: cumin-web --section=web2 --daemon
20861 2011-04-01 12:09:45,609 INFO Starting: cumin-web --section=web3 --daemon
...
To visit a particular server, navigate using the appropriate port value. For example, on the
machine where the MRG Management Console is installed, open an internet browser and
navigate to http://localhost:45675/. This visits the [web2] server as configured above.
4. Troubleshooting.
Make sure that the section names listed in the webs parameter of the [master] section are
spelled correctly. Section naming errors can be identified by searching for NoSectionError in /
var/log/cumin/*.stderr.
If Cumin is running but cannot be accessed on a particular port as expected, make sure the port
values specified in /etc/cumin/cumin.conf for each section are correct and that the ports are
not used by any other application on the system.
Whenever changes are made to /etc/cumin/cumin.conf the service must be restarted for the
changes to take effect.
The above instructions do not cover setting up a web server proxy; users must select a port
manually. However, it may be desirable in a particular installation to set up a proxy which handles
load balancing automatically and allows users to visit a single URL rather than specific ports.
For convenience, the standard /etc/cumin/cumin.conf file already contains several alternative
settings for the datas in the [master] section with explanatory comments. Select one of these
settings based on the persona value being used.
Increasing the Default QMF Update Interval for MRG Grid Components
STARTD.QMF_UPDATE_INTERVAL = 30
max_connections
The max_connections parameter controls the number of simultaneous database connections
allowed by the PostgreSQL server; the default value is 100. This value must be large enough to
support the cumin-web and cumin-data processes that make up the MRG Management Console.
It is a good idea to check the value of this parameter if the MRG Management Console is configured to
run multiple cumin-web instances (as described in Section A.1, “Running Multiple MRG Management
Console Web Servers”) or if other applications besides Cumin use the same PostgreSQL server.
The maximum number of concurrent connections needed by Cumin can be estimated with the
following formula:
For a default Cumin configuration this number will be 43, but running multiple cumin-web instances
will increase the number significantly.
If you receive the error message OperationalError: FATAL: sorry, too many clients
already in the user interface, or contained in a cumin log file, the available database
connections were exhausted and a Cumin operation failed.
max_fsm_pages
The max_fsm_pages parameter in /var/lib/pgsql/data/postgresql.conf affects
PostgreSQL's ability to reclaim free space. Free space will be reclaimed when the MRG Management
Console runs the VACUUM command on the database (the vacuum interval can be set in /etc/
cumin/cumin.conf). The default value for max_fsm_pages is 20,000. In medium scale
deployments, it is recommended that max_fsm_pages be set to at least 64,000.
Important
The following procedure is only applicable on a Red Hat Enterprise Linux 5 operating system, in
which the PostgreSQL 8.1 database is in use. Red Hat Enterprise Linux 6 carries a later version
of PostgreSQL, in which the max_fsm_pages parameter is no longer valid.
This will produce a large amount of output and may take several minutes to complete.
4. Restart the PostgreSQL service and perform this process again, repeating until PostgreSQL
indicates that free space tracking is adequate:
Appendix B. Configuring the
Messaging Broker
B.1. Changing the Update Interval
By default, the MRG Messaging broker will send updated information to the MRG Management
Console every ten seconds. Increase the interval to receive fewer updates and reduce load on the
broker or the network. Decrease the interval to receive more updates.
To change the update interval, open the /etc/qpidd.conf file in your preferred text editor and add
the mgmt-pub-interval configuration option on the broker:
mgmt-pub-interval=30
In the broker, SSL is provided through the ssl.so module. This module is installed and loaded by
default in MRG Messaging. To enable the module, you need to specify the location of the database
containing the certificate and key to use. This certificate database is created and managed by the
Mozilla Network Security Services (NSS) certutil tool.
Use the following procedure to create a certificate database in /var/lib/qpidd and enable
communication over SSL:
# cd /var/lib/qpidd
# sudo -u qpidd certutil -N -d . -f passwordfile
# sudo -u qpidd certutil -S -d . -f passwordfile -n nickname -s "CN=nickname" -t "CT,," -x -z /usr/bin/certutil
ssl-cert-password-file=/var/lib/qpidd/passwordfile
ssl-cert-db=/var/lib/qpidd
ssl-cert-name=nickname
Note
The default port for SSL communication is 5671. This port may be changed by specifying the
ssl-port option in the /etc/qpidd.conf file.
After restarting, you can check the /var/log/messages file to quickly verify that the broker is
listening for SSL connections. The message Listening for SSL connections on TCP
port 5671 indicates that SSL communication has been successfully configured.
6. Clients may now communicate with the broker using a URL specifying the amqps protocol and the
SSL port number, for example amqps://localhost:5671.
Important
For more information on setting up SSL encryption, refer to the MRG Messaging User Guide.
For example, these additions to an ACL file grant unrestricted access to the users cumin, grid, and
sesame:
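A sketch of such entries, using the qpidd ACL rule syntax (the QPID realm is assumed, matching the saslpasswd2 examples in this guide):

```
acl allow cumin@QPID all all
acl allow grid@QPID all all
acl allow sesame@QPID all all
```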
For a full discussion of ACLs, see the MRG Messaging User Guide sections on security and
authorization.
Appendix C. Revision History
Revision 2-7 Tue Feb 28 2012 Tim Hildred [email protected]
Updated configuration file for new publication tool.
BZ#735358 - Update for adding cumin and grid to sasldb