Administering Informix
Contents
1. System administration
1.1. List of utilities
1.2. Administrator's Guide
1.2.1. What's new in administration for Informix, Version 12.10
1.2.2. The database server
1.2.2.1. Overview of database server configuration and administration
1.2.2.1.1. Database server concepts
1.2.2.1.2. Environment configuration
1.2.2.1.3. Database server configuration
1.2.2.1.3.1. Storage space creation and management
1.2.2.1.3.2. Automatic performance tuning
1.2.2.1.3.3. Feature configuration
1.2.2.1.3.4. Connectivity configuration
1.2.2.1.3.5. Limit session resources
1.2.2.1.3.6. Automate startup and shutdown on UNIX
1.2.2.1.3.7. Automate startup on Windows
1.2.2.1.4. Database server maintenance tasks
1.2.2.1.4.1. Database server monitoring
1.2.2.1.4.1.1. UNIX operating-system tools
1.2.2.1.4.1.2. Windows administrative utilities
1.2.2.2. Client/server communication
1.2.2.2.1. Client/server architecture
1.2.2.2.1.1. Network protocol
1.2.2.2.1.2. Network programming interface
1.2.2.2.1.3. Windows network domain
1.2.2.2.1.4. Database server connections
1.2.2.2.1.5. Supporting multiplexed connections
1.2.2.2.2. Connections that the database server supports
1.2.2.2.3. Local connections
1.2.2.2.3.1. Shared-memory connections (UNIX)
1.2.2.2.3.2. Stream-pipe connections (UNIX and Linux)
1.2.2.2.3.3. Named-pipe connections (Windows)
1.2.2.2.3.4. Local-loopback connections
1.2.2.2.4. Communication support services
1.2.2.2.5. Connectivity files
1.2.2.2.5.1. Network-configuration files
1.2.2.2.5.1.1. TCP/IP connectivity files
1.2.2.2.5.1.1.1. Client and server actions when a TCP/IP connection is opened
1.2.2.2.5.1.2. Multiple TCP/IP ports
1.2.2.2.5.2. Network security files
1.2.2.2.5.2.1. Trusted-host information
1.2.2.2.5.2.2. Trusted-user information
1.2.2.2.5.2.3. The netrc information
1.2.2.2.5.2.3.1. User impersonation
1.2.2.2.5.3. The sqlhosts file and the SQLHOSTS registry key
1.2.2.2.5.3.1. Creating the sqlhosts file with a text editor
1.2.2.2.5.3.2. Setting up the SQLHOSTS registry key with Setnet32 (Windows)
1.2.2.2.6. The sqlhosts information
1.2.2.2.6.1. IANA standard service names and port numbers in the sqlhosts.std file
1.2.2.2.6.2. sqlhosts connectivity information
1.2.2.2.6.2.1. sqlhosts file and SQLHOSTS registry key options
1.2.2.2.6.3. Group information
1.2.2.2.6.3.1. Creating a group in the sqlhosts file
1.2.2.2.6.4. Alternatives for TCP/IP connections
1.2.2.2.7. Informix support for IPv6 addresses
1.2.2.2.8. Configuration parameters related to connectivity
1.2.2.2.8.1. Connection information set in the DBSERVERNAME configuration parameter
1.2.2.2.8.2. Connection information set in the DBSERVERALIASES configuration parameter
1.2.2.2.8.3. Connection information set in the LIMITNUMSESSIONS configuration parameter
1.2.2.2.8.4. Connection information set in the NETTYPE configuration parameter
1.2.2.2.8.5. Name service maximum retention time set in the NS_CACHE configuration parameter
1.2.2.2.8.6. Connection information set in the NUMFDSERVERS configuration parameter
1.2.2.2.8.7. Connection information set in the HA_ALIAS configuration parameter
1.2.2.2.9. Environment variables for network connections
1.2.2.2.10. Automatically terminating idle connections
1.2.2.2.11. Distributed Relational Database Architecture (DRDA) communications
1.2.2.2.11.1. Overview of DRDA
1.2.2.2.11.2. Configuring connectivity between Informix and IBM Data Server clients
1.2.2.2.11.3. Allocating poll threads for an interface/protocol combination with the NETTYPE configuration parameter
1.2.2.2.11.4. Specify the size of the DRDA communication buffer with the DRDA_COMMBUFFSIZE configuration parameter
1.2.2.2.11.5. The DRDAEXEC thread and queries from clients
1.2.2.2.11.6. SQL and supported and unsupported data types
1.2.2.2.11.7. Display DRDA connection information
1.2.2.2.11.8. Display DRDA session information
1.2.2.2.12. Examples of client/server configurations
1.2.2.2.12.1. A network connection
1.2.2.2.12.2. Multiple connection types
1.2.2.2.12.3. Accessing multiple database servers
1.2.2.2.13. HCL Informix MaxConnect
1.2.2.3. Database server initialization
1.2.2.3.1. Initialization process
1.2.2.3.2. Database server operating modes
1.2.2.3.2.1. Users permitted to change modes
1.2.2.3.2.2. Changing database server operating modes
1.2.2.3.2.2.1. Specifying administration mode users
1.2.3. Disk, memory, and process management
1.2.3.1. Virtual processors and threads
1.2.3.1.1. Virtual processors
1.2.3.1.1.1. Threads
1.2.3.5.3. Blobpages
1.2.3.5.4. Sbpages
1.2.3.5.5. Extents
1.2.3.5.6. Dbspaces
1.2.3.5.6.1. Control of where simple large object data is stored
1.2.3.5.6.2. Root dbspace
1.2.3.5.6.3. Temporary dbspaces
1.2.3.5.7. Blobspaces
1.2.3.5.8. Sbspaces
1.2.3.5.8.1. Advantages of using sbspaces
1.2.3.5.8.2. Sbspaces and Enterprise Replication
1.2.3.5.8.3. Metadata, user data, and reserved area
1.2.3.5.8.4. Control of where smart large object data is stored
1.2.3.5.8.5. Storage characteristics of sbspaces
1.2.3.5.8.5.1. Extent sizes for sbspaces
1.2.3.5.8.5.2. Average smart-large-object size
1.2.3.5.8.5.3. Buffering mode
1.2.3.5.8.5.4. Last-access time
1.2.3.5.8.5.5. Lock mode
1.2.3.5.8.5.6. Logging
1.2.3.5.8.6. Levels of inheritance for sbspace characteristics
1.2.3.5.8.7. More information about sbspaces
1.2.3.5.8.8. Temporary sbspaces
1.2.3.5.8.8.1. Comparison of temporary and standard sbspaces
1.2.3.5.8.8.2. Temporary smart large objects
1.2.3.5.9. Plogspace
1.2.3.5.10. Extspaces
1.2.3.5.11. Databases
1.2.3.5.12. Tables
1.2.3.5.12.1. Damaged tables
1.2.3.5.13. Table types for Informix
1.2.3.5.13.1. Standard permanent tables
1.2.3.5.13.2. RAW tables
1.2.3.5.13.3. Temp tables
1.2.3.5.13.4. Properties of table types
1.2.3.5.13.4.1. Loading of data into a table
1.2.3.5.13.4.2. Fast recovery of table types
1.2.3.5.13.4.3. Backup and restore of RAW tables
1.2.3.5.13.5. Temporary tables
1.2.3.5.13.5.1. Temporary tables that you create
1.2.3.5.13.5.1.1. Where user-created temporary tables are stored
1.2.3.5.13.5.2. Temporary tables that the database server creates
1.2.3.5.13.5.2.1. Where database server-created temporary tables are stored
1.2.3.5.14. Tblspaces
1.2.3.5.14.1. Maximum number of tblspaces in a table
1.2.3.5.14.2. Table and index tblspaces
1.2.3.5.14.3. Extent interleaving
1.2.3.5.15. Table fragmentation and data storage
1.2.3.5.16. Amount of disk space needed to store data
1.2.3.5.16.1. Size of the root dbspace
1.2.3.5.16.2. Amount of space that databases require
1.2.3.5.17. The storage pool
1.2.3.5.18. Disk-layout guidelines
1.2.3.5.18.1. Dbspace and chunk guidelines
1.2.3.5.18.2. Table-location guidelines
1.2.3.5.19. Sample disk layouts
1.2.3.5.20. Logical-volume manager
1.2.3.6. Manage disk space
1.2.3.6.1. Allocate disk space
1.2.3.6.1.1. Specify an offset
1.2.3.6.1.1.1. Specify an offset for the initial chunk of root dbspace
1.2.3.6.1.1.2. Specify an offset for additional chunks
1.2.3.6.1.1.3. Use offsets to create multiple chunks
1.2.3.6.1.2. Allocating cooked file spaces on UNIX
1.2.3.6.1.3. Allocating raw disk space on UNIX
1.2.3.6.1.4. Create symbolic links to raw devices (UNIX)
1.2.3.6.1.5. Allocating NTFS file space on Windows
1.2.3.6.1.6. Allocating raw disk space on Windows
1.2.3.6.2. Specify names for storage spaces and chunks
1.2.3.6.2.1. Specify the maximum size of chunks
1.2.3.6.2.2. Specify the maximum number of chunks and storage spaces
1.2.3.6.2.3. Back up after you change the physical schema
1.2.3.6.3. Monitor storage spaces
1.2.3.6.4. Manage dbspaces
1.2.3.6.4.1. Creating a dbspace that uses the default page size
1.2.3.6.4.1.1. Specifying the first and next extent sizes for the tblspace tblspace
1.2.3.6.4.2. Creating a dbspace with a non-default page size
1.2.3.6.4.3. Improving the performance of cooked-file dbspaces by using direct I/O
1.2.3.6.4.4. Storing multiple named fragments in a single dbspace
1.2.3.6.4.5. Creating a temporary dbspace
1.2.3.6.4.6. What to do if you run out of disk space
1.2.3.6.4.7. Adding a chunk to a dbspace or blobspace
1.2.3.6.4.8. Rename dbspaces
1.2.3.6.4.8.1. Additional actions that may be required after you rename a dbspace
1.2.3.6.4.9. Managing automatic location and fragmentation
1.2.3.6.5. Manage blobspaces
1.2.3.6.5.1. Creating a blobspace
1.2.3.6.5.2. Prepare blobspaces to store TEXT and BYTE data
1.2.3.6.5.3. Determine blobpage size
1.2.3.6.5.3.1. Determine database server page size
1.2.3.6.5.3.2. Obtain blobspace storage statistics
1.2.3.6.6. Manage sbspaces
1.2.3.6.6.1. Creating an sbspace
Administering Informix
In addition to administering the database server, you can tune performance, replicate data, and archive data.
Portals that list reference information by functional area and include links:
onconfig portal
SQL administration API portal
onstat portal
Database server utilities
Environment variable portal
Limits in Informix®
Main resources
System administration
These topics contain concepts, procedures, and reference information for database and database server administrators to use for managing and tuning HCL
Informix database servers.
Replication
The topics in this group contain information about replicating data in HCL Informix databases by using Enterprise Replication.
Customizing Informix Dynamic Server for Your Environment (IBM® Redbooks® publication)
This IBM Redbooks publication provides an overview of some of the capabilities of version 11 of HCL Informix that enable it to be easily customized for your
particular environment. The focus of this book is on the areas of ease of administration and application development.
Recovery of a Down System (Best practice)
This presentation provides best practices you should follow to bring a down system back into production as soon as possible. It discusses planning for and
recovering from a critical situation that impacts your Informix database.
Knowledge Collection: Informix Data Compression and Storage Optimization (Support document)
This document helps you find the available resources that are related to the Informix storage optimization feature.
Using data file abstraction with external tables in Informix Dynamic Server (IBM developerWorks®)
This article describes how to use the external table feature to easily load and unload data.
Logical Logfile monitoring using SMI Tables (blog entry)
This entry provides SQL statements that query system monitoring tables to monitor logging activity, log-backup status, and event alarms for logical logs.
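In the spirit of the blog entry above, logical-log status can be checked directly against the SMI tables. The following query is a sketch: the syslogs column names reflect my reading of the sysmaster schema, so verify them against the Administrator's Reference before relying on them.

```sql
-- Sketch: list each logical-log file with its fill level and backup state.
-- Run with DB-Access while connected as an administrative user.
DATABASE sysmaster;

SELECT number,         -- logical-log file number
       uniqid,         -- unique ID of the log
       size,           -- size in pages
       used,           -- pages used
       is_current,     -- 1 if this is the current log
       is_backed_up    -- 1 if the log has been backed up
  FROM syslogs
 ORDER BY number;
```

A log that stays at is_backed_up = 0 for a long time is a common sign that log backups have stalled.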
SYSDBOPEN: A flexible way to change session behavior in Informix (IBM developerWorks)
Database administrators can use the sysdbopen() and sysdbclose() procedures to set environments to activate user tracing, handle short-lasting locks on
data records, or change the reading behavior of sessions. This article shows how to create a sysdbopen() procedure that can be dynamically changed without
re-creating the procedure to avoid downtime if session environments must be adjusted.
Understand the Informix Server V11.70 defragmenter (IBM developerWorks)
You can defragment a table, a fragment, or an index, including system catalogs. Defragmenting reduces the number of extents in the partition. In this article, you
can learn about the defragmenter through usage examples.
Main resources
Administrator's Guide
These topics provide the information required to administer HCL Informix.
Administrator's Reference
These topics include comprehensive descriptions of HCL Informix configuration parameters, the system-monitoring interface (SMI) tables in the sysmaster
database, the syntax of database server utilities such as onmode and onstat, logical-log records, disk structures, event alarms, and unnumbered error messages.
InformixHQ
Performance Guide
These topics describe how to configure and operate your HCL Informix database server to improve overall system throughput and to improve the performance of
SQL queries.
A companion volume, the HCL Informix Administrator's Reference, contains reference material for using HCL Informix database servers. If you must tune the
performance of your database server and SQL queries, see your HCL Informix Performance Guide.
This publication is written with the assumption that you have the following background:
A working knowledge of your computer, your operating system, and the utilities that your operating system provides
Some experience working with relational databases or exposure to database concepts
Some experience with computer programming
Some experience with database server administration, operating-system administration, or network administration
This publication is written with the assumption that you are using Informix Version 12.10 as your database server.
These topics are taken from the HCL Informix Administrator's Guide. With the release of Informix 12.10.xC10, the compression of binary large objects (BLOBs) has been enhanced.
Fault tolerance
Distributed data
For a complete list of what's new in this release, go to What's new in Informix®.
Table 1. What's new in the IBM® Informix Administrator's Guide for 12.10.xC8

Encrypt storage spaces (Reference: Multitenancy; Database and data requirements for clusters)
You can now encrypt storage spaces. The data in encrypted storage spaces is unintelligible without the encryption key. Encrypting storage spaces is an effective way to protect sensitive information that is stored on media.
Table 2. What's new in the IBM Informix Administrator's Guide for 12.10.xC6

Limit shared memory and connections for tenant databases (Reference: Multitenancy)
You can limit shared memory and the number of connections for tenant databases in a multitenancy environment. You can set limits for all tenants with configuration parameters, or for a single tenant by passing parameters to the tenant create or tenant update argument of the task or admin SQL administration API command:
- Limit the amount of shared memory for all sessions that are connected to the tenant database. When the limit is exceeded, the session that is using the most shared memory is terminated. Set the TENANT_LIMIT_MEMORY configuration parameter or include the tenant_limit_memory parameter.
- Limit the number of client connections to a tenant database. When the limit is reached, subsequent connection requests to the tenant database are rejected. Set the TENANT_LIMIT_CONNECTIONS configuration parameter or include the tenant_limit_connections parameter.
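As a sketch of the per-tenant approach, the limits can be applied through the SQL administration API. The tenant name and limit values below are placeholders, and the JSON-style property document follows my reading of the tenant update syntax; verify the exact form in the SQL administration API reference.

```sql
-- Sketch: limit an existing tenant database to 1 GB of shared memory
-- and 100 client connections (names and values are examples only).
EXECUTE FUNCTION task('tenant update', 'companyA',
    '{"tenant_limit_memory":"1GB",
      "tenant_limit_connections":"100"}');
```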
Table 3. What's new in the IBM Informix Administrator's Guide for 12.10.xC5

Control tenant resources (Reference: Multitenancy)
You can further control the resources that are available for each tenant database to improve performance and restrict the tenant database size. You can include new optional properties in the tenant definition when you run the admin() or task() SQL administration command with the tenant create or tenant update arguments. Tenant properties take precedence over related configuration parameters.
- You can specify the session_limit_memory property to end sessions that exceed a specified amount of shared memory, or the session_limit_tempspace property to end those that exceed temporary storage space.
- You can specify the session_limit_logspace property to roll back transactions that are too large, or the session_limit_txn_time property to end those that take too long.
- You can limit the total amount of permanent storage space for a tenant database by setting the tenant_limit_space property or the TENANT_LIMIT_SPACE configuration parameter.

You can specify to end sessions that exceed a specified amount of shared memory or temporary storage space:
- Set the SESSION_LIMIT_MEMORY configuration parameter to the maximum amount of shared memory that can be allocated for a session.
- Set the SESSION_LIMIT_TEMPSPACE configuration parameter to the maximum amount of temporary storage space that can be allocated for a session.

You can specify to roll back transactions that are too large or take too long:
- Set the SESSION_LIMIT_LOGSPACE configuration parameter to the maximum amount of log space that a transaction can fill.
- Set the SESSION_LIMIT_TXN_TIME configuration parameter to the maximum number of seconds that a transaction can run.
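The instance-wide variant of these limits is set in the onconfig file. The fragment below is illustrative only; the values are examples, and the units (kilobytes for the space limits, seconds for the time limit) are my assumption from memory — verify them in the Administrator's Reference.

```
# Illustrative onconfig settings (values are examples, not recommendations)
SESSION_LIMIT_MEMORY     2097152   # max shared memory per session (assumed KB)
SESSION_LIMIT_TEMPSPACE  1048576   # max temporary space per session (assumed KB)
SESSION_LIMIT_LOGSPACE   512000    # max log space one transaction can fill (assumed KB)
SESSION_LIMIT_TXN_TIME   600       # max transaction run time, in seconds
```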
Table 4. What's new in the IBM Informix Administrator's Guide for 12.10.xC4

Multitenancy in Informix (Reference: Multitenancy)
You can now deploy an Informix server that supports multiple tenants. A tenant is a set of users in a client organization that needs to access the same data and system resources. You create a dedicated tenant database and assign storage and processing resources for that database based on the service-level agreements with the client organization. For example, you can provide services to multiple companies that run efficiently in a single Informix instance.

You can implement password authentication through a pluggable authentication module (PAM) for Distributed Relational Database Architecture™ (DRDA) connections.
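A tenant database is defined through the same SQL administration API. The sketch below is a hypothetical example: every name and property value is a placeholder, and the property document follows my reading of the tenant create syntax.

```sql
-- Sketch: create a tenant database with dedicated storage and CPU VPs.
-- 'companyA', the space names, and the VP class are placeholders.
EXECUTE FUNCTION task('tenant create', 'companyA',
    '{"dbspace":"companyA_dbs",
      "sbspace":"companyA_sbs",
      "vpclass":"tvp_A,num=2"}');
```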
Table 5. What's new in the IBM Informix Administrator's Guide for 12.10.xC3

Automatic resource tuning for performance (References: Buffer pool portion of shared memory; Dynamically add logical logs for performance; Plogspace; Determine the number of CPU virtual processors needed; AIO virtual processors)
You can configure the database server to adjust resources to improve performance:
- Increase the size of the buffer pool: Include the extendable=1 option in the BUFFERPOOL configuration parameter value to make the buffer pool extendable. Use the new memory field to specify the size of the buffer pool in units of memory, such as MB or GB, instead of units of pages. Buffer pools are now stored in the buffer pool segment of shared memory.
- Increase the number of logical log files: Set the AUTO_LLOG configuration parameter to 1, the name of the dbspace for logical logs, and optionally the maximum size of all logical log files.
- Increase the number of CPU and AIO virtual processors: Include the autotune=1 option in the VPCLASS configuration parameter values for the CPU and AIO virtual processor settings. Optionally include a maximum number of CPU VPs.
- Increase the size of the physical log: Create a plogspace storage space to store the physical log by running the onspaces -c -P command. The plogspace is extendable by default.

If you create a server during installation, the buffer pool, logical log, and physical log are configured for automatic expansion. The number of expected users that you specify in the installation program sets the value of the AUTO_TUNE_SERVER_SIZE configuration parameter, which controls the sizes of the buffer pool, the dbspace for the logical log, the plogspace, and other automatically created storage spaces.
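Taken together, the automatic-tuning settings above might look like the following onconfig fragment. This is a sketch: the dbspace name, sizes, and exact field syntax are my assumptions, so check each parameter in the Administrator's Reference before using it.

```
# Illustrative onconfig fragment (names, sizes, and field syntax are examples)
BUFFERPOOL  default,extendable=1,memory=2GB   # extendable buffer pool sized in memory units
AUTO_LLOG   1,llog_dbs,2048000                # auto-add logical logs in dbspace llog_dbs
VPCLASS     cpu,autotune=1,max=8              # let the server add CPU VPs, up to 8
VPCLASS     aio,autotune=1                    # let the server add AIO VPs
```

The plogspace itself is created with the onspaces -c -P command, for example `onspaces -c -P plogspace1 -p /ifxdata/plog -o 0 -s 1024000` (path and size hypothetical).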
Set the AUTOLOCATE configuration parameter or session environment option to the number of initial round-robin fragments to create for new tables. By default, all dbspaces are available. More fragments are added as needed when the table grows. You can manage the list of dbspaces for table fragments by running the admin() or task() SQL administration API command with one of the autolocate database arguments.
- Use the ROUNDROBIN policy to direct client connection requests in a repeating, ordered fashion (round-robin) to a group of servers.
- Use the SECAPPLYBACKLOG policy to redirect connections away from secondary, high-availability cluster servers that have apply backlogs over a specific threshold.

If failover processing in a high-availability cluster cannot complete, the Connection Manager now automatically retries failover processing at 1-minute intervals. The RETRY attribute of the Connection Manager FOC parameter is ignored because that attribute is not supported as of this fix pack.
Easier configuration and cloning of a server for replication (Reference: Creating a clone of a primary server)
If you create a server during installation, you can easily create an Enterprise Replication domain or a high-availability cluster. Previously, you had to configure connectivity manually on each server. Run the ifxclone command with the -autoconf option to clone a server, configure connectivity, and start replication. You can now create HDR and shared-disk secondary servers with the ifxclone utility.
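A hypothetical ifxclone invocation might look like the following. All host names, ports, and server names are placeholders, and the exact flag spellings are my recollection of the utility's syntax; confirm them with `ifxclone -help` or the utilities reference.

```shell
# Sketch: clone primary 'serv_a' to a new server 'serv_b' and start replication.
# -T trusted connection; -S/-I/-P identify the source; -t/-i/-p the target.
ifxclone -T -S serv_a -I 192.168.0.1 -P 9088 \
         -t serv_b -i 192.168.0.2 -p 9089 \
         -autoconf
```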
Table 6. What's new in the IBM Informix Administrator's Guide for 12.10.xC1

Automatic space management for Enterprise Replication (Reference: Automatic space management)
If you have a storage pool, storage spaces are created automatically if needed when you define a replication server. Also, the CDR_DBSPACE and CDR_QDATA_SBSPACE configuration parameters are set automatically in the onconfig file. In earlier versions of Informix, you had to create the required spaces and set the configuration parameters before you could define a replication server.
Configuring log flow control for shared-disk secondary servers (Reference: Flow control for shared-disk secondary servers)
You can limit log activity on the primary server so that shared-disk (SD) secondary servers in the cluster can catch up. This configuration can improve performance over congested or intermittent networks. You use the SDS_FLOW_CONTROL configuration parameter to set thresholds that start and stop flow control.
Improved transactional consistency for HDR synchronization (References: Fully synchronous mode for HDR replication; Nearly synchronous mode for HDR replication; Asynchronous mode for HDR replication)
Use improved HDR synchronization options to balance system performance and data protection in your high-availability cluster. Set the new HDR_TXN_SCOPE configuration parameter or environment option to choose between fully synchronous mode, asynchronous mode, or nearly synchronous mode. The three synchronization modes control when transaction commits are returned to client applications: after being processed on the primary server, after being sent to the HDR secondary server, or after being processed on the HDR secondary server. HDR synchronization can be set at the instance or session level.
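For example, instance-wide fully synchronous commits could be requested in the onconfig file. The value names here (FULL_SYNC, NEAR_SYNC, ASYNC) are my recollection of the documented settings; verify them in the Administrator's Reference.

```
# Choose when HDR transaction commits are returned to clients
HDR_TXN_SCOPE FULL_SYNC   # or NEAR_SYNC / ASYNC
```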
You can view more information about parameters, including current values, valid ranges, and parameter descriptions, with
onstat commands.
Improve space utilization by compressing, repacking, and shrinking B-tree indexes (Reference: B-tree index compression)
You can use SQL administration API commands or CREATE INDEX statements to save disk space by compressing B-tree indexes. You can also use SQL administration API commands to consolidate free space in a B-tree index, return this free space to the dbspace, and estimate the amount of space that is saved by compressing the indexes.
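As a sketch, a single SQL administration API call can combine the compress, repack, and shrink operations on one index. The index, database, and owner names are placeholders, and the argument order follows my reading of the API syntax; confirm it against the SQL administration API reference.

```sql
-- Sketch: compress, consolidate free space in, and shrink a B-tree index.
-- 'idx_orders_custno', 'stores_demo', and 'informix' are placeholders.
EXECUTE FUNCTION sysadmin:task('index compress repack shrink',
                               'idx_orders_custno', 'stores_demo', 'informix');
```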
Save disk space by compressing simple large objects in dbspaces (Reference: Data that you can compress)
You can use SQL administration API commands to save disk space by compressing simple large objects (TEXT and BYTE data types) that are stored in the same partition in the same dbspace as the table in which they are referenced. When you run an SQL administration API compress or uncompress command, the database server compresses both the table row data and the referenced simple large objects. You can choose to compress or uncompress only the table row data or only the referenced simple large objects.

Also, when you run SQL administration API create dictionary and compress commands on existing tables and fragments, you enable the automatic compression of subsequent data loads that contain 2,000 or more rows of data. If you run an uncompress command, you disable automatic compression.

In addition to saving space, automatic compression saves time because you do not have to compress the data after you load it.
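A hypothetical example of such a command follows. The table, database, and owner names are placeholders, and the exact command keywords ('table compress rows blobs') are my assumption about how the rows-versus-blobs choice is expressed; check the documented argument list before use.

```sql
-- Sketch: compress both the row data and the in-dbspace simple large
-- objects (TEXT/BYTE) of a table; all names are placeholders.
EXECUTE FUNCTION sysadmin:task('table compress rows blobs',
                               'catalog', 'stores_demo', 'informix');
```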
Temporary table projection optimization for views and derived tables (Reference: Temporary tables that the database server creates)
Applications and analytic tools can define a query in which a derived table contains multiple views joined with base tables, potentially including hundreds of columns. The database server materializes this query in a system-generated temporary table. The parent query, however, might project only a few columns.

The database server creates internally generated temporary tables that include only the columns that are specified in the projection list, the WHERE clause, the ORDER BY clause, and other clauses of the immediate parent query. By excluding unnecessary columns from the temporary table, the database server uses storage resources efficiently and avoids I/O operations on the columns that do not contribute to the query result.
Client/server communication
These topics explain the concepts and terms that you must understand in order to configure client/server communication.
When you install HCL Informix, follow the installation instructions to ensure that all prerequisites are met (for example, the permissions of all key files and directories are
set appropriately). The installation instructions are in the HCL Informix Installation Guide.
You must have the correct permissions to administer the database server. For most administration tasks, you need the following permissions:
On UNIX, you must be logged in as user root, user informix, or the owner of the non-root installation. If role separation is enabled, you must be granted the
DBSA role.
On Windows, you must be a member of the Informix-Admin group.
You have various options to choose from when you configure the database server. Configuration includes customizing your environment and the database server. You
can control how the database server runs and what function is available.
You must configure connectivity to connect to client administration tools and applications.
You must do some initial administration tasks to finish setting up your database server system. After you configure the database server, your administration
responsibilities include a set of routine tasks.
Environment configuration
You configure your environment by setting environment variables and creating or modifying files that relate to the environment variables. You can control whether
environment variables are set at the environment level, for a specific user, or for a database session. You must set environment variables for the database server
environment and for the client environments.
Root dbspace
The root dbspace is the initial dbspace, or storage space, that the database server creates. The root dbspace contains reserved pages and internal tables that
describe and track all physical and logical units of storage. The root dbspace is the default location for logical logs, the physical log, databases, and temporary tables.
The database server cannot run without the root dbspace.
Configuration (onconfig) file
The database server requires a configuration file. Typically, the name of the configuration file is onconfig.server_name. The onconfig file contains configuration
parameters that control database server properties. The database server reads the onconfig file during startup, shutdown, and for some operations while the server
is running. Many configuration parameters can also be set dynamically while the database server is running.
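A few representative entries from an onconfig.server_name file are shown below as a sketch. The parameter names are standard, but every value here is an example for an imaginary installation, not a recommendation.

```
# Illustrative onconfig.server_name entries (values are examples)
ROOTNAME     rootdbs              # name of the root dbspace
ROOTPATH     /ifxdata/rootdbs     # device or cooked file for the initial chunk
ROOTSIZE     2048000              # size of the root dbspace, in KB
DBSERVERNAME ids_server1          # name that clients use to connect
TAPEDEV      /dev/null            # backup device used by ontape
```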
Virtual processors
A virtual processor runs multiple threads to perform queries and other tasks. The operating system schedules virtual processors as CPU processes. Multiple virtual
processors run multiple threads in parallel. Virtual processors are divided into classes where each class is dedicated to processing a specific type of thread.
Logical logs
The database server contains several logical log files that record data manipulation operations for logged databases, data definition operations for all databases, and
administrative information such as checkpoint records and additions and deletions of chunks. A logical log is similar to a transaction log in other relational database
management systems.
Physical log
The physical log stores the before-images of pages. "Before images" are images of pages that are taken before the database server records the changed pages on
disk. The unmodified pages are available in case the database server fails or a backup procedure requires the pages to provide an accurate snapshot of the database
server data.
Buffer pool
The buffer pool contains buffers that cache pages from disk in shared memory. Operations on pages that are cached run faster than operations on pages that must be
retrieved from disk.
Caches
The database server uses caches to store information in shared memory instead of performing a disk read or another operation to obtain the information. Caching
information improves performance for multiple queries that access the same tables.
Scheduler
The Scheduler is a subsystem that runs a set of tasks at predefined times or as determined internally by the server. Tasks are SQL statements that either collect information or run a specific operation. Some tasks are internal to the database server and run automatically. You can enable other tasks, if appropriate. You can also create your own tasks and schedule when they run.
System databases
The system databases contain information about the database server. The sysmaster database contains the system-monitoring interface (SMI) tables. The SMI tables
provide information about the state of the database server. The sysadmin database contains the tables that contain and organize the Scheduler tasks and sensors,
store data that is collected by sensors, and record the results of Scheduler jobs and SQL administration API functions.
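As a quick illustration of the SMI tables, the sysmaster database can be queried with ordinary SQL. The column list below is abbreviated and reflects my reading of the syssessions schema; verify it against the Administrator's Reference.

```sql
-- Sketch: list current sessions from the system-monitoring interface.
DATABASE sysmaster;

SELECT sid,         -- session ID
       username,    -- user who owns the session
       hostname,    -- client host
       connected    -- connection time
  FROM syssessions;
```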
If you choose to create a database server instance during installation, the installation program sets the mandatory environment variables. Otherwise, you must set
environment variables before you start the database server. The following environment variables are mandatory:
The INFORMIXDIR environment variable specifies the directory where you installed the database server.
The INFORMIXSERVER environment variable specifies the name of the database server.
The ONCONFIG environment variable specifies the name of the onconfig file in the INFORMIXDIR/etc directory.
The PATH environment variable must include the INFORMIXDIR/bin directory.
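On UNIX, the four mandatory variables might be set in a login script as follows. The directory and server names are placeholders for your own installation.

```shell
# Minimal UNIX environment for the Informix database server.
# /opt/informix and ids_server1 are example values (assumptions);
# substitute the paths and names of your installation.
export INFORMIXDIR=/opt/informix          # installation directory
export INFORMIXSERVER=ids_server1         # DBSERVERNAME or an alias
export ONCONFIG=onconfig.ids_server1      # file in $INFORMIXDIR/etc
export PATH="$INFORMIXDIR/bin:$PATH"      # make onstat, onspaces, etc. available
```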
To configure the database server environment, you can set other environment variables:
If you plan to create an sqlhosts file with a non-default name or location, set the INFORMIXSQLHOSTS environment variable to the name and path of your sqlhosts
file.
If you plan to use the DB-Access utility to run SQL statements, specify terminal properties with the INFORMIXTERM or a similar environment variable.
If you need Global Language Support (GLS), set GLS environment variables.
If you want to enable other functionality, set the appropriate environment variables. Some environment variables control functionality that is also controlled by
configuration parameters. Environment variables override configuration parameter settings.
To configure client environments, you can set the environment variables that are supported by your client API. For more information, see your client API manual.
You can choose from multiple methods for setting environment variables. For example, you can run the SET ENVIRONMENT statement to set environment variables for
the current session. You can add environment variable settings to log in scripts, at the command prompt, or in a configuration file.
Related information:
Environment variables
GLS-related environment variables
Environment variables for clients
Environment variable changes by version
You customize the database server properties by setting or modifying configuration parameters in the onconfig file. You can use the HCL OpenAdmin Tool (OAT) for
Informix® to monitor and update your configuration. OAT provides suggestions for configuration parameter values to optimize your database server configuration. The
current version of HCL Informix does not use some configuration parameters that are used in earlier versions of the server.
If you choose to configure a database server during installation, many configuration parameters and environment variables are set, and a set of storage spaces is created automatically. Alternatively, you can manually configure the database server.
When you start the database server for the first time, disk space is initialized and the initial chunk of the root dbspace is created. Any existing data in that disk space is
overwritten. Shared memory that the database server requires is also initialized. When you subsequently start the database server, only shared memory is initialized.
Although the root dbspace is the default location of log files and databases, you can store log files and databases in other storage spaces to prevent the root dbspace
from running out of space.
Feature configuration
You can configure the database server to support the types of optional functionality that you need.
Connectivity configuration
The connectivity information allows a client application to connect to the database server on the network. You must prepare the connectivity information even if
the client application and the database server are on the same computer or node.
Related concepts:
Database server initialization
Related information:
Configuration parameter changes by version
Modifying the onconfig file
Database server configuration during installation
Creating a database server after installation
A storage space is composed of one or more chunks. The maximum chunk size is 4 TB. You can have up to 32766 chunks in an instance.
After the database server is initialized, you can create storage spaces such as dbspaces and sbspaces. Use the HCL OpenAdmin Tool (OAT) for Informix® or the
onspaces utility to create storage spaces and chunks.
dbspace
Stores databases, tables, logical-log files, and the physical log file.
Temporary dbspaces store temporary tables.
sbspace
Stores smart large objects. Smart large objects consist of CLOB (character large object) and BLOB (binary large object) data types. User-defined data types can also
use sbspaces. Some features of Informix require sbspaces, such as Enterprise Replication, J/Foundation, spatial data types, and basic text search queries. In some
cases, sbspaces are created automatically when needed.
Temporary sbspaces store temporary smart large objects without logging metadata or user data.
plogspace
Stores the physical log. If you do not create a plogspace, the physical log is stored in a dbspace.
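As a sketch, the following onspaces commands show how a dbspace, an sbspace, and a plogspace might be created. The space names, device paths, offsets, and sizes (in KB) are illustrative assumptions, not required values:

```
# Create a 100 MB dbspace (path and size are examples)
onspaces -c -d datadbs1 -p /dev/rdisk1 -o 0 -s 100000
# Create an sbspace for smart large objects
onspaces -c -S sbspace1 -p /dev/rdisk2 -o 0 -s 100000
# Create the plogspace to hold the physical log
onspaces -c -P plogdbs -p /dev/rdisk3 -o 0 -s 60000
```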
If you create a server during installation, some storage spaces are created automatically.
You can minimize the amount of space that data needs by configuring automatic data compression and consolidation. You can compress data, consolidate and return
free space, and merge extents. You can specify how frequently each of the operations occurs.
You can automatically rotate message logs to limit the amount of space for the logs.
After you create storage spaces, you can configure the server to automatically extend each storage space as needed. You create a storage pool of entries for available
raw devices, cooked files, and directories, and you make sure that the SP_AUTOEXPAND configuration parameter is set to the default value of 1. All types of storage
spaces except external spaces (extspaces) are automatically expanded.
You can automate the process of deciding where to locate databases, tables, and indexes. You can enable the database server to choose the optimal location for
databases, tables, and indexes, and to automatically fragment tables. Instead of creating a new database in the root dbspace by default, the database server chooses the
location by favoring non-critical spaces, spaces that have the most efficient page size, and other factors. The database server fragments new tables by round-robin and
adds more fragments as necessary as the table grows.
You can override the automatic behavior by specifying a location for a database or table.
Multitenancy
You can create multiple tenant databases in a single Informix instance to segregate data, storage space, and processing resources among multiple client organizations.
Related concepts:
Managing automatic location and fragmentation
Data storage
Manage disk space
Automatic space management
Storage optimization
Multitenancy
Related information:
Manage message logs in an embedded environment
You can configure the database server to adjust resources to improve performance:
Increase the number of CPU virtual processors (VPs), up to the number of CPU processors or the number that you specify. Set the VPCLASS configuration
parameter for the cpu class to autotune=1.
Increase the number of AIO VPs. Set the VPCLASS configuration parameter for the aio class to autotune=1.
Increase the size of the buffer pool. Set the BUFFERPOOL configuration parameter to enable the automatic extension of the buffer pool.
Increase or decrease the size of private memory caches for CPU VPs. Set the VP_MEMORY_CACHE_KB configuration parameter to the initial size of the private
memory caches.
Increase the number of logical log files to improve performance. Set the AUTO_LLOG configuration parameter to 1, plus the name of the dbspace in which to add
log files, and an optional maximum number of KB for all logical log files.
Increase the size of the physical log as needed to improve performance. Create the plogspace to store the physical log.
If you created a server during installation, the buffer pool, logical log, and physical log are configured for automatic extension.
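An onconfig fragment along these lines enables several of the adjustments described above. The values and the dbspace name llogdbs are illustrative assumptions; check the exact parameter syntax for your server version:

```
VPCLASS cpu,autotune=1           # automatically add CPU VPs
VPCLASS aio,autotune=1           # automatically add AIO VPs
BUFFERPOOL size=2k,extendable=1  # allow the buffer pool to grow
VP_MEMORY_CACHE_KB 800           # initial private memory cache per CPU VP
AUTO_LLOG 1,llogdbs,204800       # add logical logs in dbspace llogdbs, up to 200 MB total
```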
The following automatic tuning options are enabled by default. You can control whether the options are enabled.
Increase the number of CPU virtual processors to half the number of CPU processors to ensure optimum performance. Control with the auto_tune_cpu_vps task
in the Scheduler.
Increase the number of AIO virtual processors and page cleaner threads to increase I/O capability. Control with the AUTO_TUNE configuration parameter.
Process read-ahead requests to reduce the time to wait for disk I/O. Control with the AUTO_TUNE configuration parameter.
Trigger checkpoints as frequently as necessary and add logical log files as needed to avoid the blocking of transactions. Control with the AUTO_TUNE and the
DYNAMIC_LOGS configuration parameters.
Tune LRU flushing to improve transaction throughput. Control with the AUTO_TUNE configuration parameter.
Reoptimize SPL routines and reprepare prepared objects after the schema of a table is changed to prevent manual processes and errors. Control with the
AUTO_TUNE configuration parameter.
Update statistics that are stale or missing at scheduled intervals to improve query performance. Control with the Auto Update Statistics tasks in the Scheduler and
the AUTO_TUNE configuration parameter.
Run light scans on compressed tables, tables with rows that are larger than a page, and tables with VARCHAR, LVARCHAR, and NVARCHAR data. Control with
the BATCHEDREAD_TABLE configuration parameter.
Fetch a set of keys from an index buffer to reduce the number of times that a buffer is read. Control with the BATCHREAD_INDEX configuration parameter.
Increase shared memory caches to improve query performance. Control with the DS_POOLSIZE, PC_POOLSIZE, PLCY_POOLSIZE, and USRC_POOLSIZE
configuration parameters.
Related reference:
Built-in tasks and sensors
Related information:
Database configuration parameters
onspaces -c -P: Create a plogspace
Related concepts:
High availability and scalability
Mirroring
Related information:
Parallel database query (PDQ)
Auditing data security
Securing data
Distributed queries
Overview of Informix Warehouse Accelerator
Storage space encryption
IBM Informix Enterprise Replication technical overview
Informix® client/server connectivity information, the sqlhosts information, includes the database server name, the type of connection that a client can use to connect to
the database server, the host name of the computer or node on which the database server runs, and the service name by which it is known. You do not need to specify
all possible network connections in the sqlhosts information before you start the database server. However, to make a new connection available you must shut down the
database server and then restart it.
The sqlhosts file contains connectivity information. You might also need to modify other connectivity and security files, depending on your needs.
When the database server is online, you can connect client applications and begin to create databases. Before you can access information in a database, the client
application must connect to the database server environment. To connect to and disconnect from a database server, you can issue SQL statements from the client
programs that are included in the HCL Informix Client Software Development Kit (Client SDK), such as OpenAdmin Tool (OAT) for Informix, DB-Access, or API drivers.
Related reference:
Connectivity files
You can set the IFX_SESSION_LIMIT_LOCKS environment option in a session to specify a lower lock limit than the SESSION_LIMIT_LOCKS configuration
parameter value.
Session limits do not apply to a user who holds administrative privileges, such as user informix or a DBSA user.
Transactions and sessions that exceed a set limit are terminated by the session_mgr thread. The session_mgr thread starts when the database server starts, and
remains inactive until a session limit is exceeded.
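For example, a session could lower its own lock limit with a statement along these lines (the value 500 is an illustrative assumption):

```
SET ENVIRONMENT IFX_SESSION_LIMIT_LOCKS '500';
```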
Related information:
SESSION_LIMIT_LOCKS configuration parameter
SESSION_LIMIT_MEMORY configuration parameter
SESSION_LIMIT_TEMPSPACE configuration parameter
SESSION_LIMIT_LOGSPACE configuration parameter
SESSION_LIMIT_TXN_TIME configuration parameter
TENANT_LIMIT_SPACE configuration parameter
onstat -g ses command: Print session-related information
Modify the UNIX startup script to start the database server automatically when your computer enters multiuser mode.
1. Add UNIX and database server utility commands to the UNIX startup script so that the script performs the following actions:
Sets the INFORMIXDIR environment variable to the full path name of the directory in which the database server is installed.
Sets the PATH environment variable to include the $INFORMIXDIR/bin directory.
Sets the INFORMIXSERVER environment variable so that the sysmaster database can be updated (or created, if necessary).
Runs the oninit command, which starts the database server and leaves it in online mode.
2. If you plan to start multiple versions of the database server (multiple residency), you must add commands in the script to set the ONCONFIG and
INFORMIXSERVER environment variables and run the oninit command for each instance of the database server.
3. If different versions of the database server are installed in different directories, you must add commands to the script to set the INFORMIXDIR environment
variable and repeat the preceding steps for each version.
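The steps above can be sketched as a minimal startup-script fragment. The installation path and server name are hypothetical assumptions:

```
#!/bin/sh
# Hypothetical startup fragment for one database server instance
INFORMIXDIR=/opt/informix
PATH=$INFORMIXDIR/bin:$PATH
INFORMIXSERVER=my_server
export INFORMIXDIR PATH INFORMIXSERVER
oninit            # start the server and leave it in online mode
```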
Modify your UNIX shutdown script to shut down the database server in a controlled manner whenever UNIX shuts down. The database server shutdown commands run
after all client applications complete transactions and exit.
1. Add UNIX and database server utility commands to the UNIX shutdown script so that the script performs the following tasks:
Sets the INFORMIXDIR environment variable to the full path name of the directory in which the database server is installed.
Sets the PATH environment variable to include the $INFORMIXDIR/bin directory.
Sets the ONCONFIG environment variable to the appropriate configuration file.
Runs the onmode -ky command, which initiates an immediate shutdown and takes the database server offline.
2. If you are running multiple versions of the database server (multiple residency), you must add commands in the script to set the ONCONFIG environment variable
and run the onmode -ky command for each instance.
3. If different versions of the database server are installed in different directories, you must add commands to the script to set the INFORMIXDIR environment
variable and repeat the preceding steps for each version.
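The shutdown steps can be sketched as follows; the installation path and configuration file name are illustrative assumptions:

```
#!/bin/sh
# Hypothetical shutdown fragment for one database server instance
INFORMIXDIR=/opt/informix
PATH=$INFORMIXDIR/bin:$PATH
ONCONFIG=onconfig.my_server
export INFORMIXDIR PATH ONCONFIG
onmode -ky        # immediate shutdown; takes the server offline
```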
Related information:
Database configuration parameters
The oninit utility
Environment variables in Informix products
1. From the Service control application window, select the HCL Informix® service and click Startup.
2. Select Automatic in the Startup Type dialog box.
3. In the Log On As dialog box, select This Account and verify that informix is in the text box.
You can use the HCL OpenAdmin Tool (OAT) for Informix® or Informix command-line utilities to perform the following tasks. Not all of the following tasks are appropriate
for every installation.
Related concepts:
Consistency checking
Logical log
Physical logging, checkpoints, and fast recovery
Shared memory
Virtual processors and threads
Related information:
Overview of backup and restore
Tasks that automatically rotate message log files
OAT is a web application for administering and analyzing the performance of HCL Informix database servers. You can administer multiple database server instances
from a single OAT installation on a web server. You can access the web server through any browser to administer all your database servers. The HCL Informix Health
Advisor Plug-in for OpenAdmin Tool (OAT) analyzes the state of the Informix database server. The Health Advisor plug-in gathers information about the database
server, the databases, and the operating system, and creates a report that contains the results and recommendations.
You can check the status and performance of one or more Informix database servers from your mobile device with the HCL Mobile OpenAdmin Tool for Informix.
Event alarms
You can enable event alarms to report situations that require your immediate attention. To use the event-alarm feature, set the ALARMPROGRAM configuration
parameter to the full path name of an executable file that performs the necessary administrative actions. You can monitor event alarms in OAT or configure how to
receive the alarms.
Database server message log
The database server message log is an operating-system file. The messages that are contained in the database server message log do not usually require immediate
action. If the database server experiences a failure, the message log serves as an audit trail for retracing the events that led to an unanticipated problem. Often the
message log provides the exact nature of the problem and the suggested corrective action.
Monitor the message log once or twice a day to ensure that processing is proceeding normally and that events are being logged as expected. You can view the
message log in OAT.
Alternatively, run the onstat -m command to obtain the name of the message log and the 20 most recent entries, or read the complete message log in a text editor.
Run an operating-system command, such as the UNIX command tail -f, to see the messages as they occur. To view additional information about a message, use
the finderr utility.
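For example, assuming the message log is at $INFORMIXDIR/tmp/online.log (the actual path is set by the MSGPATH configuration parameter and varies by installation):

```
onstat -m                            # show the message log name and the 20 most recent entries
tail -f $INFORMIXDIR/tmp/online.log  # watch messages as they occur
finderr -23101                       # look up the text of a specific error number
```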
onstat utility
The onstat utility provides commands to monitor the database server from the command line. The onstat utility reads data from shared memory and reports
statistics that are accurate for the instant during which the command runs. That is, onstat provides information that changes dynamically during processing, including
changes in buffers, locks, indexes, and users. You can run onstat commands at the command line or in OAT.
SMI tables
The system-monitoring interface (SMI) tables contain dynamic information about the state of the database server. You can use SELECT statements on SMI tables to
determine almost anything you might want to know about your database server. SMI tables are in the sysmaster database.
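For example, a query along these lines lists current sessions from the syssessions SMI table; the column choice is a minimal sketch:

```
-- Run with DB-Access against the sysmaster database
SELECT sid, username, hostname
  FROM sysmaster:syssessions;
```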
System console
The database server sends messages that are useful to the database server administrator by way of the system console. By default, the system console is set to
online.con. To change the destination path name of console messages, set the CONSOLE configuration parameter.
Windows only: A database server system administrator can log in to the console from any node to perform system management and monitoring tasks.
Related information:
Event Alarms
The finderr utility
The onstat utility
The sysmaster database
The following table shows typical UNIX operating-system resource-monitoring utilities. For information about how to monitor your operating-system resources, consult
your system administration guide.
The following Informix utilities simplify administration of the database server on Windows.
ixpasswd.exe Changes the password for services that log on as user informix. If you are logged on locally and run ixpasswd, it changes the password for services
that log on as the local informix user. If you are logged on to a domain and run ixpasswd, it changes the password for services that log on as
domain\informix.
ixsu.exe Opens a command-line window that runs as the specified user. The user is a local user unless you specify a domain
name. If you do not specify a user name, the default user is informix. You are no longer required to log off as the current
user and log on as informix to do DBA tasks that must be run as informix.
For information about setting Advanced User Rights, see your Windows documentation.
ntchname.exe Changes the registry entries for HCL Informix from the old host name to the new host name. Run ntchname after you
change the host name. This utility does not change user environment variables.
After you run ntchname, edit the %INFORMIXDIR%\%INFORMIXSERVER%.cmd file and change the INFORMIXSQLHOSTS
entry to the new host name.
The Event Viewer shows informational, warning, and error messages for the operating system, other applications, and the database server.
To show database server messages, from the Administrative Tools window, choose Event Viewer > Application. Double-click any event for a detailed message.
The Windows Performance Monitor (perfmon.exe) shows resources such as processor, memory, cache, threads, and processes. The Performance Monitor also
creates charts, alerts, and reports. You can save information to log files for later analysis.
To show the Performance Monitor, from the Administrative Tools window, choose Performance.
Client/server architecture
HCL Informix® products conform to the client/server software-design model.
Local connections
A local connection is a connection between a client and the database server on the same computer.
Connectivity files
The connectivity files contain the information that enables client/server communication and enable a database server to communicate with another database
server.
Client applications can be on the computer that houses the database server or on a different computer. Client applications issue requests for services and data from the
database server. The database server responds by providing the services and data that the clients requested.
You use a network protocol together with a network programming interface to connect and transfer data between the client and the database server.
Network protocol
A network protocol is a set of rules that govern how data is transferred between applications and, in this context, between a client and a database server. The
rules of a protocol are implemented in a network driver. A network driver contains the code that formats the data when it is sent from client to database server and
from database server to client.
Clients and database servers gain access to a network driver by way of a network programming interface. A network programming interface contains system calls or
library routines that provide access to network-communications facilities. An example of a network programming interface for UNIX is TLI (Transport Layer Interface). An
example of a network programming interface for Windows is WINSOCK (sockets programming interface).
The power of a network protocol lies in its ability to enable client/server communication even though the client and database server are on different computers with
different architectures and operating systems.
You can configure the database server to support more than one protocol, but consider this option only if some clients use TCP/IP.
Related concepts:
The sqlhosts file and the SQLHOSTS registry key
Database server connections
Related tasks:
Connections that the database server supports
Related reference:
Network-configuration files
In the context of this explanation, the client and the database server are the applications that call the routines in the TLI or sockets API. Clients and database servers
both use network programming interfaces to send and receive the data according to a communications protocol.
Both client and database server environments must be configured with the same protocol if client/server communication is to succeed. However, some network protocols
can be accessed through more than one network programming interface. For example, TCP/IP can be accessed through either TLI or sockets, depending on which
programming interface is available on the operating-system platform.
Related concepts:
Communication support services
Network security files
Related reference:
A network connection
A domain controller manages the user account information for all domain members. The domain controller facilitates network administration. By managing one account
list for all domain members, the domain controller relieves the network administrator of the requirement to synchronize the account lists on each of the domain
computers. In other words, the network administrator who creates or changes a user account must update only the account list on the domain controller rather than the
account lists on each of the computers in the domain.
To log in to a Windows database server, a user on another Windows computer must belong to either the same domain or a trusted domain. A trusted domain is one that
establishes a trust relationship with another domain. In a trust relationship, user accounts are only in the trusted domain.
A user who attempts to log in to a Windows computer that is a member of a domain can do so either by using a local login and profile or a domain login and profile.
However, if the user is listed as a trusted user or the computer from which the user attempts to log in is listed as a trusted host, the user can be granted login access
without a profile.
Important: A client application can connect to the database server only if there is an account for the user ID in the Windows domain in which the database server runs.
This rule also applies to trusted domains.
If you specify a user identifier but no domain name for a connection to a workstation that expects both a domain name and a user name (domain\user), the database
server checks only the local workstation and the primary domain for the user account. If you explicitly specify a domain name, that domain is used to search for the user
account. The attempted connection fails with error -951 if no matching domain\user account is found on the local workstation.
Use the CHECKALLDOMAINSFORUSER configuration parameter to configure how the database server searches for user names in a networked Windows environment.
Table 1. Locations Informix searches for user names specified either alone or with a domain name.

Setting                           Domain and user specified            User name only specified
CHECKALLDOMAINSFORUSER is unset   Searches the specified domain only   Searches the local host only
CHECKALLDOMAINSFORUSER=0          Searches the specified domain only   Searches the local host only
CHECKALLDOMAINSFORUSER=1          Searches the specified domain only   Searches all domains
Important: The database server's trusted client mechanism is unrelated to the trust relationship that you can establish between Windows domains. Therefore, even if a
client connects from a trusted Windows domain, the user must have an account in the domain on which the database server is running.
Related information:
CHECKALLDOMAINSFORUSER configuration parameter
An application might contain the following CONNECT statement to connect to the database server named my_server:
CONNECT TO '@my_server'
Tip: The database server's internal communications facility is called Association Services Facility (ASF). If you see an error message that includes a reference to ASF,
you have a problem with your connection.
Related reference:
Network protocol
Related information:
CONNECT statement
DATABASE statement
Client applications can establish multiple connections to a database server to access more than one database on behalf of a single user. If the connections are not
multiplexed, each database connection establishes a separate network connection to the database server. Each additional network connection uses additional computer
memory and processor time, even for connections that are not active. Multiplexed connections enable the database server to create multiple database connections
without using up the additional computer resources that are required for additional network connections.
1. Define an alias using the DBSERVERALIASES configuration parameter. For example, specify:
DBSERVERALIASES ifx_mux
2. Add an sqlhosts file entry for the alias, using onsqlmux as the nettype entry. The hostname and servicename fields must have entries, but the entries are
ignored. Dashes (-) can be used as entries.
3. Enable multiplexing for the selected connection types by specifying m=1 in the sqlhosts entry that the client uses for the database server connection.
4. On Windows platforms, you must also set the IFX_SESSION_MUX environment variable.
The following example shows both onconfig file and sqlhosts file entries.
onconfig file:
DBSERVERNAME web_tli
DBSERVERALIASES web_mux
sqlhosts file:
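As a sketch, the matching sqlhosts entries might look like the following; the host name node5 and service name svc5 are assumptions, and the dashes in the onsqlmux entry are placeholders for the ignored hostname and servicename fields:

```
web_tli  ontlitcp  node5  svc5  m=1
web_mux  onsqlmux  -      -
```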
You are not required to change the sqlhosts information that the database server uses. The client program does not require any special SQL calls to enable
connection multiplexing. Connection multiplexing is enabled automatically when the onconfig file and the sqlhosts entries are configured appropriately and the
database server starts.
If any of these conditions exist when an application attempts to establish a connection, the database server establishes a standard connection. The database server
does not return an SQL error.
The Informix® ESQL/C sqlbreak() function is not supported during a multiplexed connection.
Related concepts:
The sqlhosts file and the SQLHOSTS registry key
Related reference:
sqlhosts file and SQLHOSTS registry key options
Related information:
DBSERVERNAME configuration parameter
DBSERVERALIASES configuration parameter
Multiplexed connections and CPU utilization
Secure Sockets Layer (SSL) connections use encryption for data communication between two points over a network.
When configuring connectivity, consider setting the LISTEN_TIMEOUT and MAX_INCOMPLETE_CONNECTIONS configuration parameters. These parameters enable
you to reduce the risk of a hostile denial-of-service (DoS) attack by making it more difficult to overwhelm the listener VP that handles connections.
UNIX only: On many UNIX platforms, the database server supports multiple network programming interfaces. The machine notes show the interface/protocol
combinations that the database server supports for your operating system.
Related concepts:
The sqlhosts information
Related reference:
Network protocol
Connectivity files
Configuration parameters related to connectivity
Environment variables for network connections
sqlhosts connectivity information
Related information:
NETTYPE configuration parameter
Secure sockets layer protocol
LISTEN_TIMEOUT configuration parameter
MAX_INCOMPLETE_CONNECTIONS configuration parameter
Limiting denial-of-service flood attacks
Local-loopback connections
A network connection between a client application and a database server on the same computer is called a local-loopback connection.
Figure 1. A client application and a database server communicating through a shared-memory connection.
Shared memory provides fast access to a database server, but it poses some security risks. Errant or malicious applications might delete or view message buffers of
their own or of other local users. Shared-memory communication is also vulnerable to programming errors if the client application performs explicit memory addressing
or over-indexes data arrays. Such errors do not affect the database server if you use network communication or stream pipes.
The following figure shows a shared-memory connection on the computer named river.
Figure 2. A shared-memory connection between a client application and a database server named river_shm.
The onconfig file for this installation includes the following line:
DBSERVERNAME river_shm
The sqlhosts file for this installation includes the following lines:
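A sketch of such an entry follows; onipcshm is the nettype for shared memory, and because the hostname and servicename fields are arbitrary for this connection type, the values shown here simply reuse the host and server names:

```
river_shm  onipcshm  river  river_shm
```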
The client application connects to this database server using the following statement:
CONNECT TO '@river_shm'
For a shared-memory connection, no entries in network configuration files are required. Use arbitrary values for the hostname and servicename fields of the sqlhosts
file.
Related concepts:
Communications portion of shared memory (UNIX)
How a client attaches to the communications portion (UNIX)
Related reference:
Examples of client/server configurations
sqlhosts connectivity information
Named-pipe connections provide a high-level interface to network software by making transport-layer operations transparent. Named pipes store data in memory and
retrieve it when requested, in a way that is similar to reading from and writing to a file system.
The networking facilities used are the same as if the client application and the database server were on different computers. You can make a local-loopback connection
provided your computer is equipped to process network transactions. Local-loopback connections are not as fast as shared-memory connections, but they do not pose
the security risks of shared memory.
In a local-loopback connection, data seems to pass from the client application, out to the network, and then back in again to the database server. The internal connection
processes send the information directly between the client and the database server and do not put the information out on the network.
The following figure shows a local-loopback connection that uses sockets and TCP/IP.
Figure 1. A local-loopback connection between a client and a database server named river_soc on a computer named river.
The sqlhosts file for this installation includes the following lines:
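Based on the host name (river) and service name (riverol) used in this example, the entry might look like the following sketch:

```
river_soc  onsoctcp  river  riverol
```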
If the network connection uses TLI instead of sockets, only the nettype entry in this example changes. In that case, the nettype entry is ontlitcp instead of onsoctcp.
The onconfig file for this installation includes the following lines:
DBSERVERNAME river_soc
This example assumes that an entry for river is in the hosts file and an entry for riverol is in the services file.
Related reference:
Examples of client/server configurations
Communication support services include security-related services such as:
Message integrity, which ensures that communication messages are intact and unaltered when they arrive at their destination.
Message confidentiality, which protects messages from unauthorized viewing, usually through encryption and decryption, during transmission.
Communication support services can also include other processing such as data compression or traffic-based accounting.
The database server provides extra security-related communication support services through plug-in software modules called Communication Support Modules (CSM).
The database server uses the default authentication policy when you do not specify a communications support module.
Related concepts:
Network programming interface
Network security files
Related information:
Communication support modules for data transmission encryption
Windows: On the database server, the connectivity information is stored in the sqlhosts file; however, on clients the connectivity information is stored in the
SQLHOSTS registry.
Network-configuration files
These topics identify and explain the use of network-configuration files on TCP/IP networks.
Related tasks:
Connections that the database server supports
Related reference:
Connectivity configuration
Network protocol
The hosts file requires a single entry for each network-controller card that connects a computer running HCL Informix® client/server products to the network. Each entry
in the file contains the IP address (or ethernet card address) and host name. You can also include the host alias. Although the length of the host name is not limited in
the hosts file, the HCL Informix database server limits the host name to 256 bytes.
The services file contains an entry for each service available through TCP/IP. Each entry is a single line that contains the following information:
Service name
HCL Informix products use this name to determine the port number and protocol for making client/server connections. The service name is limited to 128 bytes.
The port number is the computer port, and the protocol for TCP/IP is tcp.
The operating system imposes restrictions on the port number. User informix must use a port number equal to or greater than 1024. Only root users are allowed
to use a port number lower than 1024.
The service name and port number are arbitrary. However, they must be unique within the context of the file and must be identical on all the computers running HCL
Informix client/server products. The following example has one entry:
#servicename port/protocol
server2 1526/tcp
This entry makes server2 known as the service name for TCP port 1526. A database server can then use this port to service connection requests.
Important: For database servers that communicate with other database servers, you must define either a TCP/IP connection or an IPCSTR (interprocess
communications stream pipe) connection for the DBSERVERNAME configuration parameter. You can also define at least one DBSERVERALIASES configuration
parameter setting with the appropriate connection protocol for connectivity between the coordinator and the subordinate servers. For cross-server transactions, each
participating server must support a TCP/IP or an IPCSTR connection with the coordinator, even if both database server instances are on the same workstation.
You typically include a separate NETTYPE parameter for each connection type that is associated with a dbserver name. You list dbserver name entries in the
DBSERVERNAME and DBSERVERALIASES configuration parameters. You associate connection types with dbserver names through entries in the sqlhosts file or
registry.
The hosts and services files must be available to each computer that runs HCL Informix client/server products.
UNIX:
The hosts and services files are in the /etc directory.
On systems that use NIS, the hosts and services files are maintained on the NIS server. The hosts and services files that are on your local computer might
not be used and might not be up to date. To view the contents of the NIS files, enter the following commands on the command line:
ypcat hosts
ypcat services
Windows:
The hosts and services files are in %WINDIR%\system32\drivers\etc\.
You might want to configure TCP/IP to use the Domain Name Service (DNS) for host name resolutions.
The Dynamic Host Configuration Protocol (DHCP) dynamically assigns IP addresses from a pool of addresses instead of using IP addresses that are explicitly assigned to each workstation. If your system uses DHCP, Windows Internet Name Service (WINS) is required. DHCP is transparent to the database server.
Related reference:
sqlhosts connectivity information
Related information:
NETTYPE configuration parameter
Each port of a single IP address must be unique. Separate ethernet cards can use unique or shared port numbers. You might want to use the same port number
on ethernet cards connecting to the same database server. (In this scenario, the service name is the same.)
Put one entry per ethernet card in the hosts file with a separate IP address, as in the following example:
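The original example is not reproduced here; the following is an illustrative sketch with hypothetical host names and private IP addresses:

```
#address        hostname
192.168.0.1     texas1
192.168.0.2     texas2
```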
In the onconfig file, set DBSERVERNAME configuration parameter for one of the ethernet cards and the DBSERVERALIASES configuration parameter for the other
ethernet card. The following lines show sample entries in the onconfig file:
DBSERVERNAME chicago1
DBSERVERALIASES chicago2
Add one sqlhosts entry for each ethernet card. That is, make an entry for the DBSERVERNAME and another entry for the DBSERVERALIASES.
After this configuration is in place, the application communicates through the ethernet card assigned to the dbserver name that the INFORMIXSERVER environment
variable provides.
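To continue the sketch with hypothetical host and service names, matching sqlhosts entries for the two aliases defined in the onconfig file might look like this:

```
#dbservername  nettype   hostname  servicename
chicago1       onsoctcp  texas1    svc_tx1
chicago2       onsoctcp  texas2    svc_tx2
```

A client whose INFORMIXSERVER environment variable is set to chicago2 would then connect through the second Ethernet card.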
Related information:
INFORMIXSERVER environment variable
For a client application to connect to a database server on a remote computer, the user of the client application must have a valid user ID on the remote computer.
Trusted-host information
Users on trusted hosts are allowed to access the local system without supplying a password. You can include an optional user name to limit the authentication to
a specific user on a specific host.
Trusted-user information
A user can list hosts from which they can connect as a trusted user in their .rhosts file.
Related concepts:
Network programming interface
Communication support services
Related information:
REMOTE_SERVER_CFG configuration parameter
REMOTE_USERS_CFG configuration parameter
S6_USE_REMOTE_SERVER_CFG configuration parameter
Use one of the following trusted-hosts files to specify remote hosts for rlogin, rsh, rcp, and rcmd remote-authentication:
hosts.equiv
The file that is specified by a database server's REMOTE_SERVER_CFG configuration parameter
Use trusted-hosts information only for client applications that do not supply a user account or password. If a client application supplies an invalid account name and
password, the database server rejects the connection even if the trusted-host information contains an entry for the client computer.
To use trusted-host information for authentication, specify the s=1 or s=3 options in sqlhosts file entries. If you do not specify an s option, s=3 is the default.
On Linux and UNIX systems, the trusted-host file is in the $INFORMIXDIR/etc/ directory.
If you are using the hosts.equiv file and you use the rlogind daemon, you can execute the following statement on the client computer to determine whether the client
is trusted:
rlogin hostname
If you log in successfully without receiving a password prompt, the client is trusted. This method of determining whether a client is trusted does not work when the file specified by the REMOTE_SERVER_CFG configuration parameter is used.
To avoid an extra DNS lookup, specify the host name both with and without the domain name. For example, if the trusted host is named host1 and it is in the domain
example.com, then add the following entries to the trusted-host file:
#trustedhost username
host1 informix
host1.example.com informix
On some networks, the host name that a remote host uses to connect to a particular computer might not be the same as the host name that the computer uses to refer to
itself. For example, the network host with the fully qualified domain name (FQDN) host2.example.com might refer to itself with the local host name viking. If this
situation occurs, specify both host-name formats:
#trustedhost
host2.example.com
viking
Using the file specified by the REMOTE_SERVER_CFG configuration parameter instead of the hosts.equiv file
In the following situations, use the REMOTE_SERVER_CFG configuration parameter and the file that the parameter specifies:
You need different trusted hosts for the database server than those listed for the OS.
The security policies at your installation do not allow the use of hosts.equiv.
You are a user of a non-root server instance and need to control which hosts are trusted.
To add entries to the file specified by the REMOTE_SERVER_CFG configuration parameter, you can enter the information manually or you can run the admin() or task() function with the cdr add trustedhost argument. If you run the admin() or task() function with the cdr add trustedhost argument on a server in a high-availability cluster, the trusted-host information is added to the trusted-host files of all database servers in the cluster. Do not run the admin() or task() function with the cdr list trustedhost argument if you have manually entered trusted-host information on any of the database servers in a high-availability cluster or Enterprise Replication domain.
Related concepts:
Redirecting clients with the connectivity information
Related tasks:
Configuring secure connections for high-availability clusters
Related reference:
sqlhosts file and SQLHOSTS registry key options
Related information:
Creating sqlhost group entries for replication servers
INFORMIXSERVER environment variable
DBPATH environment variable
REMOTE_SERVER_CFG configuration parameter
S6_USE_REMOTE_SERVER_CFG configuration parameter
The .rhosts file is located in the user's home directory on the computer that houses the database server. To enable trusted-user authentication, specify s=2 or s=3 in the options field of the sqlhosts entry. If you do not specify an s option, s=3 is the default.
There might be reasons why a user's .rhosts file cannot be used. For example, a non-root installation might not have read access to a specific user's .rhosts file. You can specify an alternative file by setting the REMOTE_USERS_CFG configuration parameter. If you set this parameter, the database server uses a single trusted-user file for all users.
Each line of the .rhosts file is a host from which the user can connect. You must specify server names both with and without domain names to avoid performing an
extra DNS lookup. For example:
#trustedusers
xxx.example.com
xxx
yyy.example.com
yyy
The file specified by the REMOTE_USERS_CFG configuration parameter must be a combination of individual .rhosts files. Each single-line entry of the file has the
following format:
hostname username
For example, suppose the following two .rhosts files existed for users John and Fred:
~john/.rhosts
#trustedhosts
xxx.example.com
xxx
yyy.example.com
yyy
~fred/.rhosts
#trustedhosts
xxx.example.com
xxx
zzz.example.com
zzz
John does not trust zzz.example.com or zzz, and Fred does not trust yyy.example.com or yyy.
The .rhosts files could be combined into a single file with the following format:
#trustedhost username
xxx.example.com john
xxx john
yyy.example.com john
yyy john
xxx.example.com fred
xxx fred
zzz.example.com fred
zzz fred
Windows: A home directory is not automatically assigned when the Windows administrator creates a user identity. The administrator can add a home directory to a
user's profile with the User Manager application.
Related concepts:
Redirecting clients with the connectivity information
Related information:
Creating sqlhost group entries for replication servers
INFORMIXSERVER environment variable
DBPATH environment variable
REMOTE_USERS_CFG configuration parameter
UNIX: The netrc information is in the .netrc file in the user's home directory. Use any standard text editor to prepare the .netrc file. The format of a netrc entry is:
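The entry format itself is not reproduced above; the standard .netrc layout, shown here with placeholder values, is:

```
machine host_name login user_name password user_password
```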
Windows: Use the Host Information tab of setnet32 to edit the netrc information.
If you do not explicitly provide the user password in an application for a remote server (that is, through the USER clause of the CONNECT statement or the user name
and password prompts in DB-Access), the client application looks for the user name and password in the netrc information. If the user explicitly specified the password
in the application, or if the database server is not remote, the netrc information is not consulted.
The database server uses the netrc information regardless of whether it uses the default authentication policy or a communications support module.
For information about the specific content of this file, see your operating system documentation.
Windows only: On Windows, a home directory is not automatically assigned when the Windows administrator creates a user identity. The administrator can add a home
directory to a user's profile with the User Manager application.
User impersonation
The database server must impersonate the client to run a process or program on behalf of the client for certain client queries or operations.
Related reference:
sqlhosts file and SQLHOSTS registry key options
In order to impersonate the client, the database server must receive a password for each client connection. Clients can provide a user ID and password through the
CONNECT statement or netrc information.
The following examples show how you can provide a password to impersonate a client.
netrc
CONNECT statement
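The examples themselves are not shown above. As an illustrative sketch with hypothetical names, a netrc entry that supplies credentials for the remote host trngpc3 might be:

```
machine trngpc3 login bruce password im4golf
```

Equivalently, an ESQL/C application could pass the password explicitly through the CONNECT statement, for example CONNECT TO '@trngpc3' USER 'bruce' USING :passwd_var, where :passwd_var is a host variable that holds the password.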
UNIX:
$INFORMIXDIR/etc/sqlhosts
Windows:
%INFORMIXDIR%\etc\sqlhosts.%INFORMIXSERVER%
If you store the information in another location, you must set the INFORMIXSQLHOSTS environment variable.
If you set up several database servers to use distributed queries, use one of the following ways to store the sqlhosts information for all the databases:
In one sqlhosts file, pointed to by the INFORMIXSQLHOSTS environment variable
In separate sqlhosts files in each database server directory
Related tasks:
Supporting multiplexed connections
Related reference:
Network protocol
The sqlhosts file is located, by default, in the $INFORMIXDIR/etc directory. As an alternative, you can set the INFORMIXSQLHOSTS environment variable to the full
path name and file name of a file that contains the sqlhosts information.
Note:
Use white space (spaces, tabs, or both) to separate the fields.
Do not include any spaces or tabs within a field.
To put comments in the sqlhosts file, start a line with the comment character (#). You can also leave lines blank for readability.
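As an illustration of these formatting rules (white-space-separated fields, # comment lines, optional blank lines), here is a minimal sketch in Python that parses sqlhosts-style text; the entry shown is hypothetical:

```python
# Minimal sketch: parse sqlhosts-style text into field dictionaries.
# Fields are separated by white space (spaces, tabs, or both);
# lines starting with '#' are comments; blank lines are allowed.

def parse_sqlhosts(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        fields = line.split()  # any run of spaces/tabs separates fields
        entries.append({
            "dbservername": fields[0],
            "nettype": fields[1],
            "hostname": fields[2],
            "servicename": fields[3],
            "options": fields[4] if len(fields) > 4 else None,
        })
    return entries

sample = """\
# dbservername nettype   hostname servicename options
menlo          onsoctcp  valley   sales       k=1
"""
print(parse_sqlhosts(sample))
```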
Parent topic: The sqlhosts file and the SQLHOSTS registry key
If you specify a shared SQLHOSTS registry key, you must set the INFORMIXSQLHOSTS environment variable on your local computer to the name of the Windows
computer that stores the registry. The database server first looks for the SQLHOSTS registry key on the INFORMIXSQLHOSTS computer. If the database server
does not find an SQLHOSTS registry key on the INFORMIXSQLHOSTS computer, or if INFORMIXSQLHOSTS is not set, the database server looks for an
SQLHOSTS registry key on the local computer.
You must comply with Windows network-access conventions and file permissions to ensure that the local computer has access to the shared SQLHOSTS registry
key. For information about network-access conventions and file permissions, see your Windows documentation.
1. Double-click Setnet32 in the folder that contains the Client SDK products.
2. Click the Server Information tab to display the Server Information page, which has the following elements:
Informix Server
Select an existing Informix database server or type the name of a new database server.
Host Name
Select the host computer with the database server that you want to use or type the name of a new host computer.
Protocol Name
Select a network protocol from a list of protocols that the installation procedure provides.
Service Name
Specify the service name that is associated with a specific database server. Type either the service name or the port number that is assigned to the
database server on the host computer. You must obtain this information from the database server administrator.
Requirement: If you enter a service name, it must be defined on the client computer in the services file in the Windows installation directory. This file is
in system32\drivers\etc\services. The service definition must match the definition on the database server host computer.
Options
Enter options specific to the database server. For more information, see the IBM® Informix Administrator's Guide.
Sets the INFORMIXSERVER environment variable to the name of the current database server to make it the default database server.
Delete Server
Deletes the definition of a database server from the Windows registry. It also deletes the host name, protocol name, and service name associated with that
database server.
Parent topic: The sqlhosts file and the SQLHOSTS registry key
Each computer that hosts a database server or a client must include connectivity information.
In the sqlhosts file, each row contains the connectivity information for one database server, or the definition for one group.
The connectivity information for each database server includes four fields of required information and one field for options.
The group definition contains information in only three of the fields.
In the registry, the database server name is assigned to a key in the SQLHOSTS registry key, and the other fields are values of that key.
The following table summarizes the fields that are used for the SQLHOSTS information.
dbservername (registry: database server name key or database server group key)
Connectivity information: database server name. Group information: database server group name.
nettype (registry: PROTOCOL)
Connectivity information: connection type. Group information: the keyword group.
hostname (registry: HOST)
Connectivity information: host computer for the database server. Group information: none; use a dash as a placeholder in this field.
servicename (registry: SERVICE)
Connectivity information: alias for the port number. Group information: none; use a dash as a placeholder in this field.
options (registry: OPTIONS)
Connectivity information: options that describe or limit the connection. Group information: group options.
UNIX: If you install HCL Informix® Enterprise Gateway with DRDA in the same directory as the database server, your sqlhosts file also contains entries for the
Gateway and non-Informix database servers. However, this section covers only the entries for the database server. For information about other entries in the sqlhosts
file, see the IBM Informix Enterprise Gateway with DRDA User Manual.
IANA standard service names and port numbers in the sqlhosts.std file
The Internet Assigned Numbers Authority (IANA) assigns service names and port numbers for HCL Informix database servers.
Group information
You define server groups in the sqlhosts file or SQLHOSTS registry key. When you create a server group, you can treat multiple related database server or
Connection Manager SLA entries as a single entity for client connections to simplify connection redirection to database servers or Connection Managers. You
must create group entries for database servers that participate in Enterprise Replication.
Related concepts:
Strategies for increasing availability with Connection Managers
Related tasks:
Defining sqlhosts information for connection management
Configuring connection management
Connections that the database server supports
Configuring connectivity between Informix database servers and IBM Data Server clients
Defining sqlhosts information for connection management of high-availability clusters
Defining sqlhosts information for connection management of high-availability clusters that use secure ports
Defining sqlhosts information for high-availability clusters that use Distributed Relational Database Architecture (DRDA)
Defining sqlhosts information for high-availability clusters that use Distributed Relational Database Architecture (DRDA) and secure ports
Defining sqlhosts information for connection management of grids and replicate sets
Defining sqlhosts information for connection management of grids and replicate sets that use secure ports
Defining sqlhosts information for connection management high-availability replication systems
Defining sqlhosts information for connection management of high-availability replication systems that use secure ports
Defining sqlhosts information for connection management of server sets
Related information:
The syncsqlhosts utility
The service names and port numbers for database servers follow the IANA assignments. These service names are created in the sqlhosts.std file of Informix. You are not required to change installed Informix systems, because they continue to work with existing port numbers and service names. (Also, there is no guarantee that some other system is not already using the service names or port numbers assigned to Informix.)
Organizations that have policies for following standards can use these service names and port numbers if they want the database server to be in compliance with the
IANA standard. If another application that is installed on the same workstation already uses one of the service names or port numbers, you can ask the publisher of the
non-compliant application to register for an IANA port number assignment to avoid the conflict. When applications are noncompliant, you can run Informix using non-
standard ports.
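For reference, the service names commonly registered with IANA for Informix database servers are believed to be sqlexec on port 9088/tcp and sqlexec-ssl on port 9089/tcp; verify them against the sqlhosts.std file in your installation. In services-file form:

```
#servicename   port/protocol
sqlexec        9088/tcp
sqlexec-ssl    9089/tcp
```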
Syntax
>>---dbservername--+-connection_type-+--hostname--servicename--+-----------------+---><
'-group-----------' | (1) |
'-| Options |-----'
Notes:
1. See sqlhosts file and SQLHOSTS registry key options
connection_type
Describes the type of connection that is made between the database server and the client application or another database server.
hostname
Specifies the computer where the database server is located. The field length is limited to 256 bytes. If the group keyword is specified, this field must be null (-).
servicename
Specifies the alias for the port number. The interpretation of the service name field depends on the type of connection in the connection-type field. The field length is limited to 128 bytes. If the group keyword is specified, this field must be null (-).
dbservername field
Each database server across all of your associated networks must have a unique database server name.
If an sqlhosts file has multiple entries with the same dbservername, only the first one is used.
Connection-type field
The connection-type field is called nettype in the sqlhosts file and PROTOCOL in the SQLHOSTS registry key.
The following table summarizes the possible connection-type values for database server connections on different operating systems.
Note: The connection-type values that begin with "on" can use "ol" in place of "on". For example, either onipcshm or olipcshm specifies a shared-memory connection in the sqlhosts information.
The host name is entered in the hostname field in the sqlhosts file, and in the HOST registry key.
If the connection type is onsqlmux, the hostname field must not be empty, but any specific value entered in it is ignored.
Following is an explanation of how client applications derive the values that are used in the host name field.
In some cases, you might use the actual TCP listen-port number in the service name field.
Windows: Named-pipe communication
For a named-pipe connection (onipcnmp), the SERVICE entry can be any short group of letters that is unique in the environment of the host computer where the
database server is located.
UNIX: Shared-memory and stream-pipe communication
For a shared-memory connection (onipcshm) or a stream-pipe connection (onipcstr), the database server uses the value in the servicename entry internally to create
a file that supports the connection. For both onipcshm and onipcstr connections, the servicename can be any short group of letters that is unique in the environment
of the host computer where the database server is located.
Multiplexed connections
For multiplexed connections (onsqlmux), the hostname field must have an entry, but the entry is ignored. Dashes (-) can be used as entries.
Related tasks:
Configuring secure connections for high-availability clusters
Changing client connectivity information
Connections that the database server supports
Related reference:
Configuration parameters related to connectivity
Group information
Stream-pipe connections (UNIX and Linux)
Specifying Network Connections
HCL Informix MaxConnect
TCP/IP connectivity files
Alternatives for TCP/IP connections
Related information:
Connectivity protocols
The following syntax fragments show the server options. The syntax fragment for group options is described in a section after the server options.
Important: Options must be separated by commas, but the first option that is listed in each sqlhosts entry must not have a comma before it.
Server options
|--+-------------+--+-------------------+----------------------->
'-a--=--token-' '-+---+--b--=--size-'
'-,-'
>--+-------------------------+--+--------------------+---------->
'-+---+--cfd--=--filepath-' '-+---+--g--=--group-'
'-,-' '-,-'
>--+--------------------+--+--------------------+--------------->
| .-1-. | | .-0-. |
'-+---+--k--=--+-0-+-' '-+---+--m--=--+-1-+-'
'-,-' '-,-'
>--+------------------------------------------------------------------+--|
+-+--------------------+--+-----------------+--+-----------------+-+
| '-| Lookup options |-' '-| PAM options |-' '-| CSM options |-' |
'-+---+--s=6-------------------------------------------------------'
'-,-'
Lookup options
|--+--------------------+--+--------------------+---------------|
'-+---+--r--=--+-0-+-' | .-3-. |
'-,-' '-1-' '-+---+--s--=--+-0-+-'
'-,-' +-1-+
'-2-'
PAM options
|--+---+--s=4,--pam_serv--=(--name--),--pamauth--=(--+-challenge-+--)--|
'-,-' '-password--'
CSM options
|--+---+--csm--=(--+-GSSCSM----+----------------+--+----------------+---+--)--|
'-,-' | | .-1-. | | .-1-. | |
| '-,--c--=--+-0-+-' '-,--i--=--+-0-+-' |
+-SPWDCSM--+-----------------+-----------------------+
| | .-1-. | |
| '-,--p--=(--+-0-+-' |
'-ENCCSM---------------------------------------------'
Table 1. Server options in the sqlhosts file and SQLHOSTS registry key.
a
Purpose: Stores the authentication token that is required for connecting to the Informix® Warehouse Accelerator. This entry is created by Informix during Informix Warehouse Accelerator connection setup.
Restrictions: Important: Do not manually change this option.
b
Purpose: Specifies, in bytes, the size of the communications buffer space for TCP/IP connections.
Restrictions: The maximum buffer size supported is 32 KB.
c
Purpose: Enables the confidentiality service for the Generic Security Services CSM, which supports single sign-on (SSO). c=1 enables the service (default); c=0 disables the service.
cfd
Purpose: Indicates the storage location for communication files that are used in shared-memory and stream-pipe connections.
Restrictions: The length of the cfd path is restricted to 70 bytes. Relative-path byte lengths include $INFORMIXDIR.
csm
Purpose: Describes the communication support module (CSM) for each database server that uses a CSM.
Restrictions: CSM entries must be specified in the concsm.cfg file.
ENCCSM
Purpose: The name of the encryption communication support module.
Restrictions: The ENCCSM must be specified in the concsm.cfg file.
g
Purpose: Specifies the name of the group to which the database server belongs.
Restrictions: The group must be defined.
p
Purpose: Enables and disables the simple password CSM, which provides password encryption. p=0: password is not required (default); p=1: password is required.
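For example, a hypothetical sqlhosts entry that combines a larger TCP/IP buffer with keep-alive checking might look like this (the names are placeholders, and 8192 is within the 32 KB buffer maximum):

```
#dbservername  nettype   hostname  servicename  options
menlo          onsoctcp  valley    sales        b=8192,k=1
```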
The following syntax fragment shows the group options in the sqlhosts file.
Group options
|--+-------------+--+---------------------+--------------------->
| .-0-. | '-+---+--e--=--server-'
'-c--=--+-1-+-' '-,-'
>--+-------------------------+----------------------------------|
'-+---+--i--=--identifier-'
'-,-'
Table 2. Group options in the sqlhosts file and SQLHOSTS registry key.
c
Purpose: Controls connection redirection. Indicates the order in which client applications choose database servers, or the aliases within a database server group.
e
Purpose: Specifies a database server name that marks the end of a database server group.
i
Purpose: Assigns an identifying number to a database server group.
Restrictions: The identifier must be a positive integer from 1 through 32767 and must be unique within your network environment. The i option is required for Enterprise Replication.
Usage
When you change option values in an sqlhosts entry, those changes affect the next connection that a client application makes. The server automatically recognizes any
changes that are made.
The database server evaluates the options entries as a series of columns. A comma or white space in an options entry represents an end of a column. Client and
database server applications check each column to determine whether the option is supported.
You can combine multiple options in each entry, and you can include them in any order. The maximum length for an options entry is 256 bytes.
The b option applies only to connections that use the TCP/IP connection protocol. Other types of connections ignore the b option.
You can adjust the buffer size to use system and network resources more efficiently; however, if the buffer size is set too high, the user receives a connection-reject error
because no memory can be allocated. For example, if you set b=16000 on a system that has 1000 users, the system might require 16 megabytes of memory for the
communications buffers. This setting might exhaust the memory resources of the computer. The default buffer size for the database server for TCP/IP is 4096 bytes.
If your network includes several different types of computers, be careful when you change the size of the communications buffer.
Tip: Use the default size for the communications buffer. If you choose to set the buffer size to a different value, set the client-side communications buffer and the
database server-side communications buffer to the same size.
The c option is valid only for servers that are assigned to a server group.
c=0 (default): Client applications connect to the first database server instance listed in the server group in the sqlhosts information. If the client cannot connect to the first instance, it attempts to connect to the second instance, and so on.
c=1: Client applications choose a random starting point from which to connect to a database server instance in a server group.
You can use the communication files directory option to store shared-memory or stream-pipe connection communication files in a new location. Specifying the
communication files directory option for non-root installations of Informix is necessary if the server and client are in different locations, and increases system performance
if the server and client are in the same location.
The cfd option can define an absolute path or a path relative to $INFORMIXDIR for storing communication files:
cfd=/location defines an absolute path
cfd=location defines a path relative to $INFORMIXDIR
The length of the cfd path is restricted to 70 bytes. Relative-path byte lengths include $INFORMIXDIR.
Non-root installations of Informix do not have permission to write to the /INFORMIXTMP directory, so shared-memory and stream-pipe connection communication files are
written to the $INFORMIXDIR/etc directory if no communication files directory is specified as an option in the sqlhosts information.
Important: This option must be defined for non-root installations of Informix, where the server and client are in different locations, or the connection fails.
The value of name must match a name entry in the concsm.cfg file.
CSM options that are defined in the sqlhosts file override options that are specified in the concsm.cfg file. CSM encryption options cannot be
specified in the sqlhosts information.
If you do not specify the csm option, the database server uses the default authentication policy for that database server.
Note: The s=7 option is deprecated and not required for the Single Sign-On (SSO) CSM.
If no e option is specified for a group, but all sqlhosts entries specify either groups or group members, the network must scan the entire file. You can use the e option to
specify the end of a server group, and improve system performance. The network layer scans the sqlhosts file until the entry specified by the e option is read.
If no end-of-group option is specified for a group, the group members are assumed to be contiguous. The end of the group is determined when an entry is reached that
does not belong to the group, or at the end of file, whichever comes first.
In the following example, the e option specifies entry lx3, so entry lx4 is not scanned by the network layer.
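The example referred to is not reproduced above; a hypothetical version consistent with the lx3 and lx4 entries it mentions might look like this, where the e=lx3 group option marks lx3 as the last member of the group (the group name, hosts, and service names are placeholders):

```
#dbservername  nettype   hostname  servicename  options
g_lx           group     -         -            i=10,e=lx3
lx1            onsoctcp  hostA     svc1         g=g_lx
lx2            onsoctcp  hostB     svc2         g=g_lx
lx3            onsoctcp  hostC     svc3         g=g_lx
lx4            onsoctcp  hostD     svc4
```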
The k (keep-alive) option enables the network service to check periodically whether the connection between the client and server is still active. If the receiving end of the connection does not respond within the time that is specified by the parameters of your operating system, the network service immediately detects the broken connection and frees resources.
The m (multiplexing) option enables the database server to create multiple database connections to client applications without consuming the additional computer resources that are required for more network connections. You must restart the server after you enable this service.
The database server provides an interface to use pluggable authentication modules (PAM) for session authentication. To configure this interface, supply the PAM service
name and the authentication method. Authentication can be the connection password or a user challenge that requires the user to answer a question. Distributed
Relational Database Architecture (DRDA) connections for IBM® Data Server clients can use password authentication, but not challenge authentication.
pamauth: the method of authentication that is used by the PAM service.
pamauth=password uses the connection request password for authentication.
pamauth=challenge requires a correct user reply to a question or challenge prompt. An application must be designed to respond to the challenge prompt correctly before connecting to the database server.
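A sketch of a sqlhosts entry that enables PAM challenge authentication might look like the following; the server, host, service, and PAM service (pam_chal) names are placeholders:

```
ifx_srv  onsoctcp  myhost  myservice  s=4,pam_serv=(pam_chal),pamauth=(challenge)
```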
With these security options, you can specifically enable or disable the use of either or both files.
s=0 Disables trusted-hosts lookup in hosts.equiv or the file specified by the REMOTE_SERVER_CFG configuration parameter.
Disables trusted-user lookup in rhosts files or the file specified by the REMOTE_USERS_CFG configuration parameter.
Only incoming connections with passwords are accepted. This setting cannot be used for distributed database operations.
s=1 Enables trusted-hosts lookup in hosts.equiv or the file specified by the REMOTE_SERVER_CFG configuration parameter.
Disables trusted-user lookup in rhosts files or the file specified by the REMOTE_USERS_CFG configuration parameter.
s=2 Disables trusted-hosts lookup in hosts.equiv or the file specified by the REMOTE_SERVER_CFG configuration parameter.
Enables trusted-user lookup in rhosts files or the file specified by the REMOTE_USERS_CFG configuration parameter.
s=3 (default) Enables trusted-hosts lookup in hosts.equiv or the file specified by the REMOTE_SERVER_CFG configuration parameter.
Enables trusted-user lookup in rhosts files or the file specified by the REMOTE_USERS_CFG configuration parameter.
The s=6 option in the sqlhosts information ensures that the connections between cluster servers are trusted. Secure ports that are listed in the sqlhosts information can
be used only for cluster communication. Client applications cannot connect to secure ports.
Related concepts:
Trusted-host information
Trusted-user information
Related tasks:
Supporting multiplexed connections
Configuring secure connections for high-availability clusters
Related reference:
Group information
The netrc information
Related information:
Pluggable authentication modules (UNIX or Linux)
Communication support modules for data transmission encryption
Configuring the IBMInformix instance for SSO
Simple password encryption
Configuring secure ports for connections between replication servers
You can use the name of a group instead of the database server name in the following environment variables, or in the SQL CONNECT command:
The value of the INFORMIXSERVER environment variable for a client application can be the name of a group. However, you cannot use a group name as the
value of the INFORMIXSERVER environment variable for a database server or database server utility.
The value of the DBPATH environment variable can contain the names of groups.
Use a dash (-) character (ASCII 45) for the hostname and service/port values when you specify connection information for a group.
c=1 is optional, and specifies that a random starting point in the list of group members is used for connection attempts. e=member_name is optional, and specifies the final
entry for group members, so that the entire file is not scanned. The g=group_name option is required for group members, and specifies the group that the member
belongs to.
All database servers that participate in replication must be a member of a database server group. Each database server in the enterprise must have a unique identifier.
Enterprise Replication node groups have the following sqlhosts format:
group_name_1 group - - i=identifier_1,e=member_name_1
member_name_1 protocol host_name_1 service_or_port_1 g=group_name_1
group_name_2 group - - i=identifier_2,e=member_name_2
member_name_2 protocol host_name_2 service_or_port_2 g=group_name_2
...
group_name_n group - - i=identifier_n,e=member_name_n
member_name_n protocol host_name_n service_or_port_n g=group_name_n
The i=identifier is required for Enterprise Replication. e=member_name is optional, and specifies the final entry for group members, so that the entire file is not
scanned. The g=group_name option is required for group members, and specifies the group that the member belongs to.
c=1 is optional, and specifies that a random starting point in the list of group members is used for connection attempts.
Related reference:
sqlhosts connectivity information
sqlhosts file and SQLHOSTS registry key options
Example
The following example shows the definition of a group named g_asia. The group contains four members.
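The original example is not reproduced here. A sketch of the g_asia group, with placeholder member, host, and service names, might look like this:

```
g_asia  group     -      -      e=a_srv4
a_srv1  onsoctcp  host1  svc1   g=g_asia
a_srv2  onsoctcp  host2  svc2   g=g_asia
a_srv3  onsoctcp  host3  svc3   g=g_asia
a_srv4  onsoctcp  host4  svc4   g=g_asia
```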
For TCP/IP connections (both TLI and sockets), you can use the actual IP address in the hostname field instead of the host name or alias found in the hosts file. The
following example shows sample IP addresses and hosts from a hosts file.
Using the IP address for knight from the table, the following two sqlhosts entries are equivalent:
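The original hosts table and entries are not reproduced here. A sketch with placeholder values (192.0.2.x is a documentation-reserved address range): the hosts file maps the knight host to an address, and the two equivalent sqlhosts entries use the host name and the IP address interchangeably:

```
# hosts file excerpt
192.0.2.45   knight

# equivalent sqlhosts entries
knight_srvr  onsoctcp  knight      sales_svc
knight_srvr  onsoctcp  192.0.2.45  sales_svc
```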
Using an IP address might speed up connection time in some circumstances. However, because computers are usually known by their host name, using IP addresses in
the host name field makes it less convenient to identify the computer with which an entry is associated.
UNIX: You can find the IP address in the net address field of the hosts file, or you can use the UNIX arp or ypmatch command.
Windows: You can configure Windows to use either of the following mechanisms to resolve a domain to an IP address:
Windows Internet Name Service
Domain Name Server
You can use wildcard addressing in the hostname field of the hosts file when both of the following conditions are met:
You are using TCP/IP connections.
The computer where the database server is located uses multiple network-interface cards (NICs).
If the preceding conditions are met, you can use an asterisk (*) as a wildcard in the hostname field that the database server uses. When you enter a wildcard in the
hostname field, the database server can accept connections at any valid IP address on its host computer.
Each IP address is associated with a unique host name. When a computer uses multiple NICs, as in the following table, the hosts file must have an entry for each
interface card. For example, the hosts file for the texas computer with two NICs might include these entries.
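Based on the addresses cited later in this section, the hosts file entries for the two cards might be:

```
123.45.67.81  texas1
123.45.67.82  texas2
```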
If the client application and database server share the sqlhosts information, you can specify both the wildcard and a host name or IP address in the hostname field (for
example, *texas1 or *123.45.67.81). The client application ignores the wildcard and uses the host name (or IP address) to make the connection, and the database
server uses the wildcard to accept a connection from any IP address.
The wildcard format allows the listen thread of the database server to wait for a client connection using the same service port number on each of the valid network-
interface cards. However, waiting for connections at multiple IP addresses might require more processor time than waiting for connections with a specific host name or IP
address.
The following figure shows a database server on a computer named texas that has two network-interface cards. The two client sites use different network cards to
communicate with the database server.
The following examples show potential sqlhosts connectivity information for the texas_srvr database server.
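The original listing is not reproduced here. A sketch, using the host names and addresses from this section and a placeholder service name (pd_svc), might look like any of the following lines:

```
texas_srvr  onsoctcp  *texas1        pd_svc
texas_srvr  onsoctcp  *123.45.67.81  pd_svc
texas_srvr  onsoctcp  *texas2        pd_svc
texas_srvr  onsoctcp  *123.45.67.82  pd_svc
texas_srvr  onsoctcp  *              pd_svc
```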
If the connectivity information corresponds to any of the preceding lines, the texas_srvr database server can accept client connections from either of the network cards.
The database server finds the wildcard in the hostname field and ignores the explicit host name.
Tip: For clarity and ease of maintenance, include a host name when you use the wildcard in the host name field (that is, use *host instead of *).
The connectivity information used by a client application must contain an explicit host name or IP address. The client applications on iowa can use any of the following
host names: texas1, *texas1, 123.45.67.81, or *123.45.67.81. If there is a wildcard (*) in the hostname field, the client application ignores it.
The client application on kansas can use any of the following host names: texas2, *texas2, 123.45.67.82, or *123.45.67.82.
For the TCP/IP network protocol, you can use the actual TCP listen port number in the service name field.
For example, if the port number for the sales database server in the services file is 1543, you can write an entry in the sqlhosts file as follows:
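The original entry is not reproduced here. A sketch, with a placeholder host name, that uses the literal port number instead of the service name:

```
sales  onsoctcp  sales_host  1543
```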
Using the actual port number might save time when you make a connection in some circumstances. However, as with the IP address in the hostname field, using the
actual port number might make administration of the connectivity information less convenient.
Related reference:
sqlhosts connectivity information
Beginning with Informix 10.00.xC4 and Client SDK 2.90.xC4, the database server checks, on startup, whether IPv6 is supported in the underlying operating system. If
IPv6 is supported it is used. If the underlying operating system does not support IPv6, the IPv4 address is used. Informix and Client SDK retrieve the IP address from the
name service.
You can treat Informix that runs on a host with both IPv4 and IPv6 addresses the same way you treat a server running on a multi-homed host. You can configure Informix
on a host with both IPv4 and IPv6 addresses in either of the following ways:
Create aliases (using the DBSERVERALIASES configuration parameter) and assign an IPv6 address to one of them and an IPv4 address to the other.
Instruct Informix to listen on all the IP addresses configured on the host by using a wild-carded hostname in the sqlhosts file.
For example:
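The original example is not reproduced here. A sketch with placeholder server, host, and service names, where the wildcard makes the server listen on every configured address (IPv4 and IPv6):

```
my_srvr  onsoctcp  *my_host  my_service
```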
Starting with Informix Version 10.0, the host name entry in the SQLHOSTS file maps to an IPv6 address if the host has a configured IPv6 address. If the host
does not have a configured IPv6 address, the hostname entry maps to an IPv4 address.
Informix also provides a way to disable IPv6 support when working in IPv4 environments.
To disable IPv6 support for all database instances and client applications:
Create an empty file $INFORMIXDIR/etc/IFX_DISABLE_IPV6.
The file must have read permission for user informix. The file is not read from or written to, and is not required to contain any data.
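The step above can be sketched in shell as follows. The installation path used here is a scratch directory purely for illustration; substitute your actual $INFORMIXDIR.

```shell
# Sketch: disable IPv6 for all database instances and client applications
# by creating the empty flag file.
INFORMIXDIR=/tmp/informix_demo     # placeholder; use your real install path
mkdir -p "$INFORMIXDIR/etc"
touch "$INFORMIXDIR/etc/IFX_DISABLE_IPV6"      # contents are never read
chmod 644 "$INFORMIXDIR/etc/IFX_DISABLE_IPV6"  # readable by user informix
```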
To disable IPv6 support for a single database instance or for a single client application:
On the database server instance, or on the workstation on which applications are run, create an environment variable named IFX_DISABLE_IPV6 and set its
value to yes, as in:
IFX_DISABLE_IPV6=yes
When you restart the database server, the restart procedure uses the values that you set in these configuration parameters.
UNIX: When you configure connectivity, also consider setting the LISTEN_TIMEOUT and MAX_INCOMPLETE_CONNECTIONS configuration parameters. These
parameters can reduce the risk of a hostile denial-of-service (DOS) attack by making it more difficult to overwhelm the Listener VP that handles connections. For more
information, see the IBM® Informix Security Guide.
Name service maximum retention time set in the NS_CACHE configuration parameter
The NS_CACHE configuration parameter defines the maximum retention time for an individual entry in the host name/IP address cache, the service cache, the
user cache, and the group cache. If you specify maximum retention times, the database server gets host, service, user, and group database server information
from the cache.
Related tasks:
Connections that the database server supports
Related reference:
sqlhosts connectivity information
For example, to assign the name nyc_research to a database server, set the DBSERVERNAME value in the onconfig file or Windows registry key:
DBSERVERNAME nyc_research
Client applications specify the name of the database server in one of the following places:
In the INFORMIXSERVER environment variable
In SQL statements such as CONNECT, DATABASE, CREATE TABLE, and ALTER TABLE, which specify a database environment
In the DBPATH environment variable
The DBSERVERNAME must specify either the database server name or one of the database server aliases. The name must begin with a lowercase letter and can
contain other lowercase letters, digits, and underscores. The name must not include uppercase characters, a field delimiter (space or tab), or a new line character. Other
characters from the basic ASCII code set are not necessarily reliable. For example, a hyphen or minus sign can create problems and a colon might not work reliably. The
@ character is reserved to separate the database from the server (as in dbase@server).
For onimcsoc or onsoctcp protocols, you can update the DBSERVERNAME configuration parameter to include the number of multiple listen threads for the database
server aliases in your sqlhosts information, as follows:
DBSERVERNAME name-number_of_multiple_listen_threads
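For example, to request four listen threads for the nyc_research name (the thread count here is illustrative):

```
DBSERVERNAME nyc_research-4
```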
You can configure DBSERVERALIASES connections as SSL connections, and you can have a mix of SSL and non-SSL connections.
Related information:
DBSERVERNAME configuration parameter
The maximum number of aliases is 32. The following example shows entries in an onconfig configuration file that assign three dbserver names to the same database
server instance.
DBSERVERNAME sockets_srvr
DBSERVERALIASES ipx_srvr,shm_srvr
Because each dbserver name has a corresponding sqlhosts entry, you can associate multiple connection types with one database server.
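The sqlhosts file for these three names is not reproduced here. A sketch with placeholder host and service names, pairing each dbserver name with its connection type:

```
sockets_srvr  onsoctcp  myhost  sockets_svc
ipx_srvr      ontlispx  myhost  ipx_svc
shm_srvr      onipcshm  myhost  shm_svc
```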
Using the sqlhosts file shown in the previous example, a client application uses the following statement to connect to the database server using shared-memory
communication:
CONNECT TO '@shm_srvr'
A client application can initiate a TCP/IP sockets connection to the same database server using the following statement:
CONNECT TO '@sockets_srvr'
DBSERVERALIASES must begin with a lowercase letter and can contain other lowercase letters, digits, and underscores. DBSERVERALIASES must not include
uppercase characters, a field delimiter (space or tab), or a new line character. Other characters from the basic ASCII code set are not necessarily reliable. For example,
a hyphen or minus sign can create problems and a colon might not work reliably. The @ character is reserved to separate the database from the server (as in
dbase@server).
In the previous examples, the @shm_srvr statement connects to an unidentified database at that server; alternatively, you can connect to dbase1@shm_srvr.
For onimcsoc or onsoctcp protocols, you can update the DBSERVERALIASES configuration parameter to include the number of multiple listen threads for the
database server aliases in your sqlhosts information, as follows:
DBSERVERALIASES name-number,name-number
You can configure DBSERVERALIASES connections as SSL connections, and you can have a mix of SSL and non-SSL connections.
Related information:
DBSERVERALIASES configuration parameter
You might be required to dynamically increase or temporarily turn off the LIMITNUMSESSIONS configuration parameter to allow administrative utilities to run if the
database server is reaching the limit. Use onmode -wf or onmode -wm to dynamically increase or turn off LIMITNUMSESSIONS.
If the LIMITNUMSESSIONS configuration parameter is enabled and sessions are restricted because of this limit, both regular user threads and DBSA user threads
connecting to any database count against the limit. However, a DBSA user is allowed to connect to the server even after the limit is reached.
The LIMITNUMSESSIONS configuration parameter is not intended to be used as a means to adhere to license agreements.
Example
The following example specifies that you want a maximum of 100 sessions to connect to the database server and you want to print a warning message when the number
of connected sessions approaches 100: LIMITNUMSESSIONS 100,1
Recommendation: Although the NETTYPE parameter is not a required parameter, you must set NETTYPE if you use two or more connection types. After the database
server is running for some time, you can use the NETTYPE configuration parameter to tune the database server for better performance.
For more information about NETTYPE, see Network virtual processors. For information about the NETTYPE configuration parameter, see the HCL Informix®
Administrator's Reference.
Each cache entry expires either after the time configured for the specific cache or when the time is reconfigured.
Usually the network name service provider (for example, DNS) is on a remote computer. To avoid spending the time required to return information from the network
name service provider, you can use the NS_CACHE configuration parameter to specify the maximum retention times for obtaining information from one of the internal
caches. Then Informix® looks for information in the cache. If the information is not there, the database server queries the operating system for the information.
You can avoid many of these operating system lookups by using the Informix name service caching mechanism, which can keep and reuse each retrieved piece of
information for a configurable amount of time.
The server can get information from the cache faster than it does when querying the operating system. However, if you disable one or more of these caches by setting
the retention time to 0, the database server queries the operating system for the host, service, user, or group information.
As a DBA, you might want to modify the NS_CACHE configuration parameter settings if the network name service provider runs on a remote computer or the MSC VP is
running with a large amount of processor usage.
For example, you can run the onstat -g glo command to check the msc VP usage in the Individual virtual processors portion of the output. In the following
output sample, the msc processor usage, shown in the usercpu and syscpu columns, is high. If you suspect the usage is high because the DNS call takes too much time,
you can confirm the high usage with an operating system command and then modify the NS_CACHE configuration parameter settings.
You might also want to specify NS_CACHE information, if your operating system does not have a name service (NS) cache or if you disabled the operating system NS
cache.
Example
To define the maximum retention time for your host and service connections as 600 seconds, and to disable the maximum retention limit for your user and group
database server connections, specify:
NS_CACHE host=600,service=600,user=0,group=0
Specifying NUMFDSERVERS information is useful if Informix has a high rate of new connect and disconnect requests or if you find a high amount of contention between
network shared file (NSF) locks.
Related information:
NUMFDSERVERS configuration parameter
Set the HA_ALIAS configuration parameter for each server in a high-availability cluster. The HA_ALIAS configuration parameter is required for high-availability cluster
servers that use shared-memory connections.
The HA_ALIAS configuration parameter value must match a DBSERVERNAME or DBSERVERALIASES configuration parameter value that is associated with a TCP
sqlhosts file entry.
Related information:
HA_ALIAS configuration parameter
onmode -d: Set data-replication types
DBSERVERALIASES configuration parameter
DBSERVERNAME configuration parameter
If the client application explicitly attaches to shared-memory segments, you might be required to set INFORMIXSHMBASE (shared-memory base).
You can use the INFORMIXSERVER environment variable to specify a default dbserver name to which your clients connect.
Related concepts:
How a client attaches to the communications portion (UNIX)
Related tasks:
Connections that the database server supports
Related information:
INFORMIXCONTIME environment variable
INFORMIXCONRETRY environment variable
INFORMIXSHMBASE environment variable (UNIX)
INFORMIXSERVER environment variable
You must be connected to the sysadmin database as user informix or another authorized user.
UPDATE ph_task
SET tk_enable = 't'
WHERE tk_name = 'idle_user_timeout';
By default, the idle_user_timeout task terminates user sessions that are idle for longer than 60 minutes. Sessions owned by user informix are not terminated. The
idle_user_timeout task starts checking for idle sessions after two hours, which is the default frequency for the task.
Tip: When the system time changes on the database server computer, the amount of time user sessions have been idle is no longer accurate. For example, if a user
session last did work at 3:14 PM and at 3:15 PM the system clock is moved forward by one hour, then to the database server, the user session has been idle for over an
hour.
To change the idle timeout period, update the frequency of running the task and the value of the threshold. The shortest idle timeout period allowed is 5 minutes. For
example, to change the timeout period to 5 minutes, run the following statements:
UPDATE ph_task
SET tk_frequency = INTERVAL (5) MINUTE TO MINUTE
WHERE tk_name = 'idle_user_timeout';
UPDATE ph_threshold
SET value = '5'
WHERE task_name = 'idle_user_timeout';
This section contains information about how to configure HCL Informix® to use the Distributed Relational Database Architecture™ (DRDA).
Overview of DRDA
Distributed Relational Database Architecture (DRDA) is a set of protocols that enable communication between applications and database systems on disparate
platforms, and enables relational data to be distributed among multiple platforms.
Configuring connectivity between Informix database servers and IBM Data Server clients
To connect to Informix with an IBM® Data Server client, you must follow certain configuration steps.
Allocating poll threads for an interface/protocol combination with the NETTYPE configuration parameter
The NETTYPE configuration parameter configures poll threads for each connection type that your instance of the database server supports. You can use this
configuration parameter to allocate more than one poll thread for an interface/protocol combination.
Specify the size of the DRDA communication buffer with the DRDA_COMMBUFFSIZE configuration parameter
Use the DRDA_COMMBUFFSIZE configuration parameter to specify the size of the DRDA communications buffer. The minimum size is 4 KB, the maximum size
is 2 megabytes, and the default value is 32 KB.
Any combination of relational database management products that use DRDA can be connected to form a distributed relational database management system. DRDA
coordinates communication between systems by defining what is exchanged and the exchange method.
You can configure the database server to use DRDA to respond to requests from a common API, such as the IBM® Data Server JDBC Driver or the IBM Data Server
.NET Provider.
Connection Managers support DRDA, so you can use connection management to redirect client connection requests to appropriate database servers. Connection
Managers can also provide automatic failover for high-availability clusters using DRDA.
Enterprise Replication, data replication, and Informix® utilities, such as DB-Access, require standard Informix connections. Enterprise Replication utilities do not operate
over DRDA connections. However, Enterprise Replication connections can coexist with DRDA connections.
When you use DRDA with ANSI-compliant databases, unbuffered logging and implicit transactions are enforced. If you migrate an application that is based on a non-
ANSI-compliant database to a DRDA environment, the connection must handle application logic for statements that need transactions. For example, a BEGIN WORK
statement is required before a concatenation operator in a stored procedure.
You can secure DRDA connections between a common client API and Informix in the following ways:
Encrypted password security or an encrypted user ID and encrypted password security
Secure Sockets Layer (SSL) protocol to encrypt data end-to-end
Password authentication through a pluggable authentication module
Related concepts:
Connection management through the Connection Manager
Related information:
Transactions
BEGIN WORK statement
Secure sockets layer protocol
Configuring a connection to use PAM
1. On each Connection Manager and database server host, add sqlhosts file entries for each server. For example:
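The original entries are not reproduced here. A sketch with placeholder server, host, and port names:

```
server_1  onsoctcp  host_1  port_1
server_2  onsoctcp  host_2  port_2
server_3  onsoctcp  host_3  port_3
```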
2. In each database server's onconfig file, set the DBSERVERALIASES parameter to specify an alias for the server.
DBSERVERALIASES drda_1
DBSERVERALIASES drda_2
DBSERVERALIASES drda_3
3. On each Connection Manager's host, add sqlhosts file entries for DRDA aliases. Specify a drtlitcp or drsoctcp protocol and specify a port for DRDA
communication. For example:
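The original entries are not reproduced here. A sketch with placeholder host and port names, using the drsoctcp protocol for the DRDA aliases:

```
drda_1  drsoctcp  host_1  drda_port_1
drda_2  drsoctcp  host_2  drda_port_2
drda_3  drsoctcp  host_3  drda_port_3
```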
4. On the host of each Connection Manager, add a group entry for the group of database servers and a group entry for the group of DRDA aliases. Add group
options to the database server and DRDA alias entries. Use the c=1 group-entry option so that the connection-attempt starting point in the list of group members is
random. Use the e=last_member group-entry option so that the entire sqlhosts file is not scanned for group members. For example:
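The original entries are not reproduced here. A sketch with placeholder group, host, and port names, combining the server and DRDA alias entries with their group entries:

```
my_servers  group     -       -            c=1,e=server_3
server_1    onsoctcp  host_1  port_1       g=my_servers
server_2    onsoctcp  host_2  port_2       g=my_servers
server_3    onsoctcp  host_3  port_3       g=my_servers
my_drda     group     -       -            c=1,e=drda_3
drda_1      drsoctcp  host_1  drda_port_1  g=my_drda
drda_2      drsoctcp  host_2  drda_port_2  g=my_drda
drda_3      drsoctcp  host_3  drda_port_3  g=my_drda
```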
5. Add the DRDA service-level agreements to your Connection Managers' configuration files. For example:
The first Connection Manager's configuration file has the following entries:
NAME connection_manager_1
CLUSTER my_cluster
{
INFORMIXSERVER my_servers
SLA sla_primary_1 DBSERVERS=PRI
SLA sla_primary_drda_1 DBSERVERS=PRI
SLA sla_secondaries_1 DBSERVERS=SDS,HDR
SLA sla_secondaries_drda_1 DBSERVERS=SDS,HDR
FOC ORDER=ENABLED \
PRIORITY=1
}
The second Connection Manager's configuration file has the following entries:
NAME connection_manager_2
CLUSTER my_cluster
{
INFORMIXSERVER my_servers
SLA sla_primary_2 DBSERVERS=PRI
SLA sla_primary_drda_2 DBSERVERS=PRI
SLA sla_secondaries_2 DBSERVERS=SDS,HDR
SLA sla_secondaries_drda_2 DBSERVERS=SDS,HDR
FOC ORDER=ENABLED \
PRIORITY=2
}
6. On the host of each IBM Data Server client, create sqlhosts file entries for each service-level agreement (SLA) in each Connection Manager configuration file.
Create group entries for each group of SLA entries, and add group options to the SLA entries.
For example:
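The original entries are not reproduced here. A sketch for the client hosts, with placeholder Connection Manager host and port names, grouping the SLA entries from the two configuration files:

```
g_primary           group     -          -               c=1,e=sla_primary_2
sla_primary_1       onsoctcp  cm_host_1  cm_port_1       g=g_primary
sla_primary_2       onsoctcp  cm_host_2  cm_port_2       g=g_primary
g_primary_drda      group     -          -               c=1,e=sla_primary_drda_2
sla_primary_drda_1  drsoctcp  cm_host_1  cm_drda_port_1  g=g_primary_drda
sla_primary_drda_2  drsoctcp  cm_host_2  cm_drda_port_2  g=g_primary_drda
```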
In the previous example, IBM Data Server client connection requests to @g_primary_drda are sent by drsoctcp protocol to one of the Connection Managers. The
Connection Manager that receives the request uses an SLA to provide the client application with connection information for the primary server.
If you receive error -23104 when accessing the server through the DRDA protocol, the client application might be trying to bind a value that has an encoding different
from the code page or code set of the database locale. Set the GL_USEGLU environment variable to 1 before you start the Informix instance. This setting enables the
server to initialize the appropriate Unicode converters that are required to handle the code set conversions.
Related concepts:
The sqlhosts information
Related tasks:
Defining sqlhosts information for high-availability clusters that use Distributed Relational Database Architecture (DRDA)
Defining sqlhosts information for high-availability clusters that use Distributed Relational Database Architecture (DRDA) and secure ports
Related information:
GL_USEGLU environment variable
NETTYPE drtlitcp,3,2,CPU
A NETTYPE entry can handle multiple database server aliases on the same protocol type. Thus, when DRDA is in use, the network listener thread (NETTYPE drtlitcp
or drsoctcp) typically has at least two sockets open and listening for connections. One socket is open for SQLI connections and another is open for DRDA connections.
Additional sockets might be open if many separate server aliases are configured.
Related information:
NETTYPE configuration parameter
You can specify a one megabyte buffer as 1M, 1m, 1024K, 1024k, or 1024. HCL Informix® automatically resets values that are less than 4 KB to 32 KB.
When a DRDA session is established, the session allocates a communication buffer of the current buffer size.
You can use the isgetdrdacommbuffsize() function to return the current value of DRDA_COMMBUFFSIZE.
You cannot use the onmode -wm command to change the setting while the database server is running.
Queries issued from a DRDA client run in parallel if PDQPRIORITY is set and the query can run in parallel. Queries run from DRDAEXEC threads can also run in
parallel.
The following data types are supported over the common API:
BIGINT
BIGSERIAL
BLOB
BOOLEAN
BSON
BYTE
CHAR(32k)
CLOB
DATE
DATETIME
DECIMAL
FLOAT
INT
INT8
INTERVAL
JSON
LVARCHAR(32k)
MONEY
NCHAR(32k)
NVARCHAR(255)
SERIAL
SERIAL8
SMALLFLOAT
SMALLINT
TEXT
VARCHAR(255)
The following data types are supported for use with database server host variables:
CHAR
DATE
INT
SMALLINT
VCHAR
The following commands display information about the thread name and session:
onstat -g ses
onstat -g sql
onstat -g ath
onstat -g stk
onstat -u
onstat -x
onstat -G
onstat -g ddr
onstat -g env
onstat -g stm
onstat -g ssc
onmode -D
onmode -Z
For example, the onstat output might show "drdaexec" as the threadname.
The table shows the client session ID, session application name, and a session value in the sesapp_sid, sesapp_name, and sesapp_value columns.
You can also display client session information using the onstat -g ses command.
Related information:
The sysmaster Database
syssesappinfo
onstat -g ses command: Print session-related information
You can assume that the network-configuration files hosts and services have been correctly prepared even if they are not explicitly mentioned. The following
examples are included:
Using a network connection
Using multiple connection types
Accessing multiple database servers
Examples of shared-memory and local-loopback connections appear with the explanations of those connection types.
A network connection
The following figure shows a network-connection configuration for two database servers.
Related reference:
Shared-memory connections (UNIX)
Local-loopback connections
The client application is on host river and the database server is on host valley.
An sqlhosts entry for the valley_ds database server is defined on both computers.
Both computers are on the same TCP/IP network, but the host river uses sockets for its network programming interface, while the host valley uses TLI for its network
programming interface. The nettype field must reflect the type of network programming interface used by the computer on which sqlhosts is located. In this example,
the nettype field for the valley_ds database server on host river is onsoctcp, and the nettype field for the valley_ds database server on host valley is ontlitcp.
Related concepts:
Network programming interface
The following figure illustrates a configuration with more than one type of connection. The database server is on host river. Client A connects to the database server with
a shared-memory connection because shared memory is fast. Client B must use a network connection because the client and server are on different computers.
When you want the database server to accept more than one type of connection, you must take the following actions:
Add DBSERVERNAME and DBSERVERALIASES entries in the onconfig file.
Add an sqlhosts entry for each database server/connection type pair.
For the configuration in the following figure, the database server has two dbserver names: river_net and river_shm. The onconfig file includes the following entries:
DBSERVERNAME river_net
DBSERVERALIASES river_shm
Figure 1. An example of a UNIX client/server configuration that uses multiple connection types
The dbserver name used by a client application determines the type of connection that is used. Client A uses the following statement to connect to the database server:
CONNECT TO '@river_shm'
In the sqlhosts file, the nettype associated with the name river_shm specifies a shared-memory connection, so this connection is a shared-memory connection.
Client B uses the following statement to connect to the database server:
CONNECT TO '@river_net'
In the sqlhosts file, the nettype value associated with river_net specifies a network (TCP/IP) connection, so Client B uses a network connection.
The following figure shows a configuration with two database servers on host river.
For this configuration, you must prepare an onconfig file for database server A and another one for database server B. The sqlhosts file includes
the connectivity information for both database servers.
The onconfig file for database server A includes the following line:
DBSERVERNAME riverA_shm
The onconfig file for database server B includes the following line:
DBSERVERNAME riverB_soc
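A single sqlhosts file that covers both multiresident database servers could look like this sketch. Each server keeps its own onconfig file, but they share the connectivity file; the service name riverB_svc and the file path are assumptions.

```shell
# Sample sqlhosts covering two multiresident servers on host river:
# one shared-memory server (A) and one sockets TCP/IP server (B).
cat > /tmp/sqlhosts.multi <<'EOF'
riverA_shm  onipcshm  river  riverA_shm
riverB_soc  onsoctcp  river  riverB_svc
EOF
cat /tmp/sqlhosts.multi
```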
Related information:
Creating multiresident database servers (Windows)
Creating multiresident database servers (UNIX, Linux, Mac OS X)
Informix MaxConnect multiplexes connections so that the ratio of client connections to database connections can be 200:1 or higher. Informix MaxConnect increases
system scalability to many thousands of connections and saves system resources, reducing response times and processor requirements. Informix MaxConnect is best
for OLTP data transfers and should not be used for large multimedia data transfers.
Install Informix MaxConnect separately from your HCL Informix database server and client applications. For maximum performance benefit, install Informix MaxConnect
either on a separate computer to which HCL Informix clients connect or on the client application server. You can install Informix MaxConnect in the following
configurations:
On a dedicated server to which HCL Informix clients connect
On the client application server
On the database server computer
Two protocols for multiplexing connections, ontliimc and onsocimc, are available for Informix MaxConnect users. You can use the ontliimc and onsocimc protocols in
the following two configurations:
To connect Informix MaxConnect to the database server.
In this configuration, the client connections are multiplexed and use packet aggregation.
To connect the client applications directly to the database server without going through Informix MaxConnect.
In this configuration, the client does not get the benefits of connection multiplexing or packet aggregation. Choose this configuration when the client application is
transferring simple- or smart-large-object data, because a direct connection to the database server is best.
For more information about how to configure Informix MaxConnect and monitor it with the onstat -g imc and imcadmin commands, see the IBM Informix
MaxConnect User's Guide.
Important: Informix MaxConnect and the IBM Informix MaxConnect User's Guide ship separately from the HCL Informix database server.
Related reference:
sqlhosts connectivity information
Shared-memory initialization
Shared-memory initialization, or starting the server, establishes the contents of database server shared memory as follows: internal tables, buffers, and the shared-
memory communication area. Shared memory is initialized every time the database server starts. You use the oninit utility from the command line to initialize
database server shared memory and bring the database server online.
Disk-space initialization
Disk-space initialization uses the values that are stored in the configuration file to create the initial chunk of the root dbspace on disk. You use the oninit -i
command from the command line to initialize disk space. When you initialize disk space, the database server automatically initializes shared memory as part of the
process. Disk space is initialized the first time the database server starts.
Warning: When you initialize disk space, you overwrite whatever is on that disk space. If you reinitialize disk space for an existing database server, all the data in the
earlier database server is deleted.
You can prevent accidental disk initialization by setting the FULL_DISK_INIT configuration parameter to 0. When this configuration parameter is set to 0, the oninit
-i command fails if the root dbspace exists.
The key difference between shared-memory initialization and disk-space initialization is that shared-memory initialization has no effect on disk-space allocation or layout.
No data is deleted.
Initialization process
When you start the database server or initialize disk space, the database server performs a set of steps. You can see the results of each step in the message log.
Related concepts:
Database server configuration
Related information:
The oninit utility
onmode -k, -m, -s, -u, -j: Change database server mode
FULL_DISK_INIT configuration parameter
Disk-space initialization always includes the initialization of shared memory. However, some activities that normally take place during shared-memory initialization, such
as recording configuration changes, are not required during disk initialization because those activities are not relevant with a newly initialized disk.
The following main tasks are completed during the two types of initialization:
The database server uses configuration parameters to allocate shared-memory segments during initialization and restart. If you modify a shared-memory configuration
parameter, you must shut down and restart the database server for the change to take effect.
The ONCONFIG environment variable, which specifies the onconfig file that contains your configuration parameters, must be set before you initialize or restart the
database server.
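Setting up the environment before starting the server might look like the following sketch. The installation directory is an assumption for a typical install; the ONCONFIG value is a file name, resolved relative to $INFORMIXDIR/etc.

```shell
# Point the server at a specific configuration file before oninit runs.
export INFORMIXDIR=/opt/informix        # assumed install location
export ONCONFIG=onconfig.demo           # file name only, resolved under etc/
echo "configuration read from: $INFORMIXDIR/etc/$ONCONFIG"
```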
During initialization, the database server looks for configuration values in the following files:
If the ONCONFIG environment variable is set, the database server reads values from the onconfig file.
If the ONCONFIG environment variable is set, but the database server cannot access the specified onconfig file, the server returns an error message.
If the ONCONFIG environment variable is not set, the database server reads the values from the default onconfig file, $INFORMIXDIR/etc/onconfig.
If you omit any configuration parameters in your onconfig file, the database server uses the default values that are built in the server.
The restart process compares the values in the current configuration file with the previous values, if any, that are stored in the root dbspace reserved page,
PAGE_CONFIG. When differences exist, the database server uses the values from the current onconfig configuration file when the database server is restarted.
The database server uses the configuration values to calculate the required size of the database server resident shared memory. In addition, the database server
computes additional configuration requirements from internal values. Space requirements are calculated and stored.
To create shared memory, the database server acquires the shared-memory space from the operating system for three different types of memory:
Resident portion, which is used for data buffers and internal tables
Virtual portion, which is used for most system and user-session memory requirements
IPC communication portion, which is used for IPC communication
The database server allocates this portion of shared memory only if you configure an IPC shared-memory connection.
Next, the database server attaches shared-memory segments to its virtual address space and initializes shared-memory structures. For more information about shared-
memory structures, see Virtual portion of shared memory.
After initialization is complete and the database server is running, it can create additional shared-memory segments as necessary. The database server creates
segments in increments of the page size.
After the database server attaches to shared memory, it clears the shared-memory space of uninitialized data. Next the database server lays out the shared-memory
header information and initializes data in the shared-memory structures. The database server lays out the space that is required for the logical-log buffer, initializes the
structures, and links together the three individual buffers that form the logical-log buffer.
After the database server remaps the shared-memory space, it registers the new starting addresses and sizes of each structure in the new shared-memory header.
During shared-memory initialization, disk structures and disk layout are not affected. The database server reads essential address information, such as the locations of
the logical and physical logs, from disk and uses this information to update pointers in shared memory.
Disk space is initialized only when you start the server for the first time or when you run the oninit -i command. Disk space is not initialized when the database
server is restarted. After shared-memory structures are initialized, the database server begins initializing the disk. The database server initializes all the reserved pages
that it maintains in the root dbspace on disk and writes page zero control information to the disk.
The FULL_DISK_INIT configuration parameter specifies whether oninit -i can run on your instance when a page zero exists at the root path location (at the first
page of the first chunk location). Use this configuration parameter to prevent an accidental disk reinitialization of an existing server instance.
The default setting of the FULL_DISK_INIT configuration parameter is 0. When the configuration parameter is set to 0, the oninit -i command runs only if there is not
a page zero at the root path location.
If a page zero exists at the root path location, initialization occurs only if the FULL_DISK_INIT configuration parameter is set to 1. The database server automatically
resets the FULL_DISK_INIT configuration parameter to 0 after the initialization.
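The guard can be pictured with this small sketch. It is an illustration only, not the server's actual code, and the root chunk path is a placeholder.

```shell
# Simulate the FULL_DISK_INIT check: refuse reinitialization when a
# page zero (an existing root chunk) is found and the parameter is 0.
FULL_DISK_INIT=0
ROOTPATH=/tmp/demo_rootchunk
touch "$ROOTPATH"                       # pretend page zero already exists
if [ -e "$ROOTPATH" ] && [ "$FULL_DISK_INIT" -eq 0 ]; then
  echo "oninit -i refused: root chunk already initialized"
else
  echo "oninit -i would proceed"
fi
```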
The database server starts all the virtual processors that it requires.
The parameters in the onconfig file influence what processors are started. For example, the NETTYPE parameter can influence the number and type of processors that
are started for making connections. For more information about virtual processors, see Virtual processors.
The database server checks its internal files. If the files are from an earlier version, it updates these files to the current format.
The database server checks if fast recovery is required and, if so, starts it. Fast recovery is not performed during disk-space initialization because there is not yet
anything to recover.
After fast recovery completes, the database server runs a checkpoint to verify that all recovered transactions are flushed to disk so the transactions are not repeated.
As part of the checkpoint procedure, the database server writes a checkpoint-complete message in the message log. For more information about checkpoints, see
Checkpoints.
The database server compares the current values that are stored in the configuration file with the values previously stored in the root dbspace reserved page
PAGE_CONFIG. When differences exist, the database server notes both values (old and new) in a message to the message log.
The database server creates the oncfg_servername.servernum file and updates it every time that you add or delete a dbspace, blobspace, logical-log file, or chunk.
You are not required to manipulate this file in any way, but you can see it listed in your $INFORMIXDIR/etc directory on UNIX or in your %INFORMIXDIR%\etc directory
on Windows. The database server uses the oncfg_servername.servernum file during a full-system restore for salvaging the logical log.
The database server now moves to quiescent mode or online mode, depending on how you started the initialization or database-server restart process.
Temporary tblspaces, if any, are tblspaces left by user processes that died prematurely and were unable to perform appropriate cleanup. The database server deletes
any temporary tblspaces and reclaims the disk space. For more information about temporary tblspaces, see Temporary tables.
This task is performed when the database server is restarted; it is not performed during disk-space initialization.
If you use the -p option of oninit to initialize the database server, the database server skips this step.
If the value of the RESIDENT configuration parameter is -1 or a number greater than 0, the database server tries to enforce residency of shared memory.
If the host computer system does not support forced residency, the initialization procedure continues. Residency is not enforced, and the database server sends an error
message to the message log.
Control returns to the user. If disk initialization, not a database-server restart, occurred, the database server writes the HCL Informix® Dynamic Server
initialized - complete disk initialized message in the message log and dynamically allocates a virtual shared-memory segment.
The server returns control to the user. Any error messages that are generated by the initialization procedure are displayed in the following locations:
The command line
The database server message log file, which is specified by the MSGPATH configuration parameter
You can use the oninit -w utility to force the server to return to a command prompt within a configurable timeout. The oninit -w utility is useful for troubleshooting
initialization failures.
The database server creates the system-monitoring interface (SMI) tables and other system databases if they do not exist.
If the SMI tables are not current, the database server updates the tables. If the SMI tables are not present, as is the case when the disk is initialized, the database server
creates the tables. After the database server builds the SMI tables, it puts the message sysmaster database built successfully in the message log. The database
server also re-creates the sysmaster database during conversion and reversion. For more information about SMI tables, see the chapter on the sysmaster database in
the HCL Informix Administrator's Reference.
If you shut down the database server before it finishes building the SMI tables, the process of building the tables stops. This condition does not damage the database
server. The database server builds the SMI tables the next time that you bring the database server online. However, if you do not allow the SMI tables to finish building,
you cannot run any queries against those tables, and you cannot use ON-Bar for backups.
The database server drops and re-creates the sysutils database during disk initialization, conversion, or reversion. ON-Bar stores backup and restore information in the
sysutils database. Wait until the message sysutils database built successfully displays in the message log.
The database server creates the sysuser database during initialization. The sysuser database is used for Pluggable Authentication Module (PAM) authentication in HCL
Informix server to server communication.
The database server creates the sysadmin database during initialization. The sysadmin database provides remote administration and scheduler API features in HCL
Informix.
After the SMI tables and system databases are created, the database server is ready for use. The database server runs until you stop it or, possibly, until a malfunction occurs.
Recommendation: Do not try to stop the database server by stopping a virtual processor or ending another database server process. For more information, see Start
and stop virtual processors.
The database server prints the maximum number of user connections in the message log at each checkpoint in the following format: maximum server connections
number. You can monitor the number of users who connected to the database server since the last restart or disk initialization.
The displayed number is reset when you reinitialize the database server.
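One way to check the most recent value is to search the message log, as in this sketch. The log path and sample lines are placeholders; the MSGPATH configuration parameter gives the real location of the log file.

```shell
# Extract the latest 'maximum server connections' entry from a message log.
LOG=/tmp/online.log                     # placeholder for the MSGPATH file
printf '%s\n' \
  '12:00:01  Checkpoint Completed' \
  '12:00:01  maximum server connections 42' > "$LOG"   # sample log lines
grep 'maximum server connections' "$LOG" | tail -n 1
```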
Related information:
DISK_ENCRYPTION configuration parameter
The table shows the principal modes of operation of the database server.
Quiescent mode: Administrators use this mode to perform maintenance functions that do not require the execution of SQL and DDL statements. Other users can view
database-server status information, but they cannot access the database server.
Online mode: This is the normal operating mode of the database server. Any authorized user can connect with the database server and perform all database activities.
In addition, the database server can also be in one of the following modes:
Read-only mode is used by the secondary database server in a data replication environment. An application can query a secondary database server that is in
read-only mode, but the application cannot write to a read-only database.
Recovery mode is transitory. It occurs when the database server performs fast recovery or recovers from a system archive or system restore. Recovery occurs
during the change from offline to quiescent mode.
Shutdown mode is transitory. It occurs when the database server is moving from online to quiescent mode or from online (or quiescent) to offline mode. While the
database server is in shutdown mode, current users can access the system, but no new users are allowed access.
Users who are logged in as root or informix and members of the DBSA group can change the operating mode of the database server.
If you want users with the DBSA group to connect to the database server in administration mode, set the ADMIN_USER_MODE_WITH_DBSA configuration parameter to 1.
If this parameter is set to 0, access is restricted to user informix only. If the parameter is missing from the onconfig file, it is treated as 0.
User informix or a DBSA can dynamically give one or more specific users the ability to connect to the database server in administration mode, using the onmode utility,
the oninit utility, or the ADMIN_MODE_USERS configuration parameter.
Note: For a member of the DBSA group, the permissions on $INFORMIXDIR/bin/oninit must be changed to allow public execute permission -
root:informix:6755 in a standard HCL Informix® installation.
Windows only
Table 1 shows which users can change the operating mode of the database server in Windows. The Apache server runs as user informix, a member of the
Informix-Admin group.
Windows only: In Windows, the database server runs as a service. Windows provides a service control application (also called the Services tool) to start, stop, and
pause a service. The service control application is located in the Control Panel program group. The service name for the database server includes the database server
name (the value of DBSERVERNAME in the onconfig file). For example, the HCL Informix® service for the newyork database server is:
To change mode with the Services tool, start the tool and select the database server service. Then choose the appropriate option in the Services window. The tables
shown later in these topics explain which option you select for each mode.
To start and stop the database server, you can use other Windows tools, such as the NET command and the Server Manager tool. For more information about these
methods, consult your Windows operating-system documentation.
Tip: After you change the mode of your database server, run the onstat command to verify the current server status.
User informix or a DBSA can use the onmode -j -U or the oninit -U command to grant individual users access to the database server in administration mode for a
session.
For example, run the following command to enable three individual users to connect to the database server and have database server access until the database server
mode changes to offline, quiescent or online mode:
onmode -j -U mark,ajay,carol
After connecting, these individual users can run any SQL or DDL commands. When the server is changed to administration mode, all sessions for users not identified in
the onmode -j -U command lose their database server connection.
After initially running the onmode -j -U command, you can remove individuals by running onmode -j -U and removing individual user names from the new list of
names in the command, for example, by running:
onmode -j -U mark,carol
Run the oninit -U command with a blank space instead of a name to remove all users in the list, as shown in this example:
Unlike the oninit and onmode commands that enable you to specify administration mode users until the server changes to offline, quiescent, or online mode, the
ADMIN_MODE_USERS configuration parameter preserves a list of administration mode users indefinitely.
To create a list of administration mode users that is preserved in the onconfig file, specify a comma-separated list of users as ADMIN_MODE_USERS configuration
parameter values, for example, mark,ajay,carol.
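In the onconfig file, the entry for this example would look like the following fragment (the user names are the placeholders used above):

```
ADMIN_MODE_USERS mark,ajay,carol
```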
To override ADMIN_MODE_USERS during a session, use the onmode -wf command, as shown in this example:
The effect of the ADMIN_MODE_USERS configuration parameter is to add to the list of people permitted to access the server in administration mode. Those people
listed in the onmode command line override those listed in the onconfig file.
Related information:
The oninit utility
onmode -k, -m, -s, -u, -j: Change database server mode
ADMIN_MODE_USERS configuration parameter
Shared memory
These topics describe the content of database server shared memory, the factors that determine the sizes of shared-memory areas, and how data moves into and
out of shared memory.
Data storage
The database server uses physical units of storage to allocate disk space. It stores data in logical units. Unlike the logical units of storage whose size fluctuates,
each of the physical units has a fixed or assigned size that is determined by the disk architecture.
Virtual processors
A virtual processor is a process that the operating system schedules for processing.
Related reference:
Database server maintenance tasks
Database server processes are called virtual processors because the way they function is similar to the way that a CPU functions in a computer. Just as a CPU runs
multiple operating-system processes to service multiple users, a database server virtual processor runs multiple threads to service multiple SQL client applications.
The following figure illustrates the relationship of client applications to virtual processors. A small number of virtual processors serve a much larger number of client
applications or queries.
Threads
A thread is a task for a virtual processor in the same way that the virtual processor is a task for the CPU.
The virtual processor is a task that the operating system schedules for execution on the CPU; a database server thread is a task that the virtual processor schedules
internally for processing. Threads are sometimes called lightweight processes because they are like processes, but they make fewer demands on the operating system.
Database server virtual processors are multithreaded because they run multiple concurrent threads.
UNIX: A thread is a task that the virtual processor schedules internally for processing.
Windows: A thread is a task that the virtual processor schedules internally for processing. Because the virtual processor is implemented as a Windows thread,
database server threads run within Windows threads.
Important: Throughout these topics, all references to thread refer to the threads created, scheduled, and deleted by the database server. All references to “Windows
threads” refer to the threads created, scheduled, and deleted by Windows.
A virtual processor runs threads on behalf of SQL client applications (session threads) and also to satisfy internal requirements (internal threads). In most cases, for each
connection by a client application, the database server runs one session thread. The database server runs internal threads to accomplish, among other things, database
I/O, logging I/O, page cleaning, and administrative tasks. For cases in which the database server runs multiple session threads for a single client, see Parallel
processing.
A user thread is a database server thread that services requests from client applications. User threads include session threads, called sqlexec threads, which are the
primary threads that the database server runs to service client applications.
User threads also include a thread to service requests from the onmode utility, threads for recovery, B-tree scanner threads, and page-cleaner threads.
To display active user threads, use onstat -u. For more information about monitoring sessions and threads, see HCL Informix® Performance Guide.
Compared to a database server process that services a single client application, the dynamic, multithreaded nature of a database server virtual processor provides the
following advantages:
Virtual processors can share processing.
Virtual processors save memory and resources.
Virtual processors can perform parallel processing.
You can start additional virtual processors and terminate active CPU virtual processors while the database server is running.
You can bind virtual processors to CPUs.
Shared processing
Virtual processors in the same class have identical code and share access to both data and processing queues in memory. Any virtual processor in a class can
run any thread that belongs to that class.
Parallel processing
When a client initiates index building, sorting, or logical recovery, the database server creates multiple threads to work on the task in parallel, using as much of the
computer resources as possible. While one thread is waiting for I/O, another can be working.
Generally, the database server tries to keep a thread running on the same virtual processor because moving it to a different virtual processor can require some data from
the memory of the processor to be transferred on the bus. When a thread is waiting to run, however, the database server can migrate the thread to another virtual
processor because the benefit of balancing the processing load outweighs the amount of overhead incurred in transferring the data.
Shared processing within a class of virtual processors occurs automatically and is transparent to the database user.
Multithreading permits more efficient use of the operating-system resources because threads share the resources allocated to the virtual processor. All threads that a
virtual processor runs have the same access to the virtual-processor memory, communication ports, and files. The virtual processor coordinates access to resources by
the threads. Individual processes, though, each have a distinct set of resources, and when multiple processes require access to the same resources, the operating
system must coordinate the access.
Generally, a virtual processor can switch from one thread to another faster than the operating system can switch from one process to another. When the operating
system switches between processes, it must stop one process from running on the processor, save its current processing state (or context), and start another process.
Both processes must enter and exit the operating-system kernel, and the contents of portions of physical memory might require replacement. Threads, though, share the
same virtual memory and file descriptors. When a virtual processor switches from one thread to another, the switch is from one path of execution to another. The virtual
processor, which is a process, continues to run on the CPU without interruption. For a description of how a virtual processor switches from one thread to another, see
Context switching.
In the following cases, virtual processors of the CPU class can run multiple session threads, working in parallel, for a single client:
Index building
Sorting
Recovery
Scanning
Joining
Aggregation
Grouping
User-defined-routine (UDR) execution
For more information about parallel UDR execution, see IBM® Informix® User-Defined Routines and Data Types Developer's Guide.
If the virtual processors of a class become compute bound or I/O bound (meaning that CPU work or I/O requests are accumulating faster than the current number of
virtual processors can process them), you can start additional virtual processors for that class to distribute the processing load further.
While the database server is running, you can drop virtual processors of the CPU or a user-defined class. For more information, see Set virtual-processor configuration
parameters.
Windows only: In Windows, you cannot drop a virtual processor of any class.
On multiprocessor computers for which the database server supports processor affinity, you can bind CPU virtual processors to specific CPUs in the computer. When
you bind a CPU virtual processor to a CPU, the virtual processor runs exclusively on that CPU. This operation improves the performance of the virtual processor
because it reduces the amount of switching between processes that the operating system must do. Binding CPU virtual processors to specific CPUs also enables you to
isolate database work on specific processors on the computer, leaving the remaining processors free for other work. Only CPU virtual processors can be bound to CPUs.
For information about how to assign CPU virtual processors to hardware processors, see Processor affinity.
At a given time, a virtual processor can run only one thread. A virtual processor runs a thread until it yields. When a thread yields, the virtual processor switches to the
next thread that is ready to run. The virtual processor continues this process, eventually returning to the original thread when that thread is ready to continue. Some
threads complete their work, and the virtual processor starts new threads to process new work. Because a virtual processor continually switches between threads, it can
keep the CPU processing continually. The speed at which processing occurs produces the appearance that the virtual processor processes multiple tasks simultaneously
and, in effect, it does.
Running multiple concurrent threads requires scheduling and synchronization to prevent one thread from interfering with the work of another. Virtual processors use the
following structures and methods to coordinate concurrent processing by multiple threads:
Control structures
Context switching
Stacks
Queues
Mutexes
These topics describe how virtual processors use these structures and methods.
Control structures
When a client connects to the database server, the database server creates a session structure, which is called a session control block, to hold information about
the connection and the user.
Context switching
A virtual processor switches from running one thread to running another one by context switching.
Stacks
The database server allocates an area in the virtual portion of shared memory to store nonshared data for the functions that a thread executes. This area is called
the stack.
Queues
The database server uses three types of queues to schedule the processing of multiple, concurrently running threads.
Mutexes
A mutex (mutual exclusion lock), also called a latch, is a latching mechanism that the database server uses to synchronize access by multiple threads to shared
resources.
A session begins when a client connects to the database server, and it ends when the connection terminates.
Next, the database server creates a thread structure, which is called a thread-control block (TCB) for the session, and initiates a primary thread (sqlexec) to process the
client request. When a thread yields—that is, when it pauses and allows another thread to run—the virtual processor saves information about the state of the thread in
the thread-control block. This information includes the content of the process system registers, the program counter (address of the next instruction to execute), and the
stack pointer. This information constitutes the context of the thread.
In most cases, the database server runs one primary thread per session. In cases where it performs parallel processing, however, it creates multiple session threads for
a single client, and, likewise, multiple corresponding thread-control blocks.
The database server does not preempt a running thread, as the operating system does to a process, when a fixed amount of time (time-slice) expires. Instead, a thread
yields at one of the following points:
A predetermined point in the code
When the thread can no longer execute until some condition is met
When the amount of processing required to complete a task would cause other threads to wait for an undue length of time, a thread yields at a predetermined point. The
code for such long-running tasks includes calls to the yield function at strategic points in the processing. When a thread performs one of these tasks, it yields when it
encounters a yield function call. Other threads in the ready queue then get a chance to run. When the original thread next gets a turn, it resumes executing code at the
point immediately after the call to the yield function. Predetermined calls to the yield function allow the database server to interrupt threads at points that are most
advantageous for performance.
A thread also yields when it can no longer continue its task until some condition occurs. For example, a thread yields when it is waiting for disk I/O to complete, when it is
waiting for data from the client, or when it is waiting for a lock or other resource.
When a thread yields, the virtual processor saves its context in the thread-control block. Then the virtual processor selects a new thread to run from a queue of ready
threads, loads the context of the new thread from its thread-control block, and begins executing at the new address in the program counter. The following figure
illustrates how a virtual processor accomplishes a context switch.
Figure 1. Context switch: how a virtual processor switches from one thread to another
For information about how to set the size of the stack, see Stacks.
The stack enables a virtual processor to protect the nonshared data of a thread from being overwritten by other threads that concurrently execute the same code. For
example, if several client applications concurrently perform SELECT statements, the session threads for each client execute many of the same functions in the code. If a
thread did not have a private stack, one thread might overwrite local data that belongs to another thread within a function.
When a virtual processor switches to a new thread, it loads a stack pointer for that thread from a field in the thread-control block. The stack pointer stores the beginning
address of the stack. The virtual processor can then specify offsets to the beginning address to access data within the stack. The figure illustrates how a virtual processor
uses the stack to segregate nonshared data for session threads.
Virtual processors of the same class share queues. This fact, in part, enables a thread to migrate from one virtual processor in a class to another when necessary.
Ready queues
Ready queues hold threads that are ready to run when the current (running) thread yields.
Sleep queues
Sleep queues hold the contexts of threads that have no work to do at a particular time. A thread is put to sleep either for a specified period of time or forever.
Wait queues
Wait queues hold threads that must wait for a particular event before they can continue to run.
When a thread yields, the virtual processor picks the next thread with the appropriate priority from the ready queue. Within the queue, the virtual processor processes
threads that have the same priority on a first-in-first-out (FIFO) basis.
On a multiprocessor computer, if you notice that threads are accumulating in the ready queue for a class of virtual processors (indicating that work is accumulating faster
than the virtual processor can process it), you can start additional virtual processors of that class to distribute the processing load. For information about how to monitor
the ready queues, see Monitor virtual processors. For information about how to add virtual processors while the database server is in online mode, see Add virtual
processors in online mode.
The administration class (ADM) of virtual processors runs the system timer and special utility threads. Virtual processors in this class are created and run automatically.
No configuration parameters affect this class of virtual processors.
The ADM virtual processor wakes up threads that have slept for the specified time. A thread that runs in the ADM virtual processor checks on sleeping threads at one-
second intervals. If a sleeping thread has slept for its specified time, the ADM virtual processor moves it into the appropriate ready queue. A thread that is sleeping for a
specified time can also be explicitly awakened by another thread.
A thread that is sleeping forever is awakened when it has more work to do. For example, when a thread that is running on a CPU virtual processor must access a disk, it
issues an I/O request, places itself in a sleep queue for the CPU virtual processor, and yields. When the I/O thread notifies the CPU virtual processor that the I/O is
complete, the CPU virtual processor schedules the original thread to continue processing by moving it from the sleep queue to a ready queue. The following figure
illustrates how the database server threads are queued to perform database I/O.
Figure 1. How database server threads are queued to perform database I/O
Wait queues coordinate access to shared data by threads. When a user thread tries to acquire the logical-log latch but finds that the latch is held by another user, the
thread that was denied access puts itself in the logical-log wait queue. When the thread that owns the latch is ready to release it, it checks for waiting threads,
and, if threads are waiting, it wakes up the next thread in the wait queue.
Mutexes are similar to semaphores, which some operating systems use to regulate access to shared data by multiple processes. However, mutexes permit a greater
degree of parallelism than semaphores.
A mutex is a variable that is associated with a shared resource such as a buffer. A thread must acquire the mutex for a resource before it can access the resource. Other
threads are excluded from accessing the resource until the owner releases the mutex. When a mutex becomes available, a thread acquires it by setting it to an in-use
state. The synchronization that mutexes provide ensures that only one thread at a time writes to an area of shared memory.
For information about monitoring mutexes, see Monitor the shared-memory profile and latches.
Related concepts:
Buffer pool portion of shared memory
The following table shows the classes of virtual processors and the types of processing that they do.
The number of virtual processors of each class that you configure depends on the availability of physical processors (CPUs), hardware memory, and the database
applications in use.
On Windows systems, the number of encrypt virtual processors is always set to 1, regardless of
the value that is set in the onconfig file.
The following figure illustrates the major components and the extensibility of the database server.
Related concepts:
Start and stop virtual processors
Related information:
VPCLASS configuration parameter
The CPU virtual processor runs all session threads (the threads that process requests from SQL client applications) and some internal threads.
Internal threads perform services that are internal to the database server. For example, a thread that listens for connection requests from client applications is an internal
thread.
Each CPU virtual processor can have a private memory cache associated with it. Each private memory cache block consists of 1 to 32 memory pages, where each
memory page is 4096 bytes. The database server uses the private memory cache to improve access time to memory blocks. Use the VP_MEMORY_CACHE_KB
configuration parameter to enable a private memory cache and specify information about the memory cache. For more information, see the HCL Informix®
Administrator's Reference and the HCL Informix Performance Guide.
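For example, a setting like the following in the onconfig file enables a private memory cache for each CPU virtual processor (the 4096 KB size is illustrative; choose a value appropriate for your workload):

```
VP_MEMORY_CACHE_KB 4096
```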
Processor affinity
The database server supports automatic binding of CPU virtual processors to processors on multiprocessor computers that support processor affinity.
When the database server starts, the number of CPU virtual processors is automatically increased to half the number of CPU processors on the database server
computer, unless the SINGLE_CPU_VP configuration parameter is enabled. However, you can adjust the number of CPU VPs based on your system.
You can configure the database server to automatically add CPU VPs when needed, up to the number of CPU processors.
To evaluate the performance of the CPU virtual processors while the database server is running, repeat the following command at regular intervals over a set period:
onstat -g glo
If the accumulated usercpu and syscpu times, taken together, approach 100 percent of the actual elapsed time for the period of the test, add another CPU virtual
processor if you have a CPU available to run it.
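A simple way to repeat the command at regular intervals is a shell loop such as the following (a sketch; the 60-second interval and 30 samples are arbitrary, and a running Informix instance with onstat in the PATH is assumed):

```shell
# Capture global multithreading statistics every 60 seconds, 30 times
for i in $(seq 1 30); do
    date
    onstat -g glo
    sleep 60
done > glo_samples.log
```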
Use the VPCLASS configuration parameter to specify the following information about CPU virtual processors:
The number of virtual processors to start initially for a class
The maximum number of virtual processors to run for the class
Processor affinity for CPU class virtual processors
Disabling of priority aging, if applicable
Whether the database server automatically adds CPU virtual processors as needed
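For example, an onconfig entry along the following lines (the values are illustrative) starts four CPU virtual processors, allows up to eight, binds them to CPUs 0-3, and disables priority aging:

```
VPCLASS cpu,num=4,max=8,aff=(0-3),noage
```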
In addition to considering the number of CPUs in the computer and the number of users who connect to the database server, also consider that user-defined routines
and DataBlade modules, which are collections of user-defined routines, run on either CPU virtual processors or user-defined virtual processors.
Note: Use the VPCLASS configuration parameter instead of the following discontinued configuration parameters: AFF_SPROC, AFFNPROCS, NOAGE, NUMCPUVPS,
and NUMAIOVPS.
Related reference:
Run poll threads on CPU or network virtual processors
Assign a UDR to a user-defined virtual-processor class
Related information:
VPCLASS configuration parameter
onstat -g glo command: Print global multithreading information
When you set MULTIPROCESSOR to 1, the database server performs locking in a manner that is appropriate for a multiprocessor computer. For information about
setting multiprocessor mode, see the chapter on configuration parameters in the HCL Informix® Administrator's Reference.
Setting MULTIPROCESSOR to 0 enables the database server to bypass the locking that is required for multiple processes on a multiprocessor computer. For information
about the MULTIPROCESSOR configuration parameter, see the HCL Informix® Administrator's Reference.
Setting SINGLE_CPU_VP to 1 allows the database server to bypass some of the mutex calls that it normally makes when it runs multiple CPU virtual processors. For
information about setting the SINGLE_CPU_VP parameter, see the HCL Informix Administrator's Reference.
Important: Setting VPCLASS num to 1 and SINGLE_CPU_VP to 0 does not reduce the number of mutex calls, even though the database server starts only one CPU
virtual processor. You must set SINGLE_CPU_VP to 1 to reduce the amount of latching that is performed when you run a single CPU virtual processor.
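For example, to run a single CPU virtual processor with the reduced latching, set both parameters together in the onconfig file:

```
VPCLASS cpu,num=1
SINGLE_CPU_VP 1
```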
Setting the SINGLE_CPU_VP parameter to 1 imposes the following restrictions on the database server:
Only one CPU virtual processor is allowed.
You cannot add CPU virtual processors while the database server is in online mode.
No user-defined classes are allowed. (However, users can still define routines that run directly on the CPU VP.)
For instructions on how to add or drop CPU class virtual processors, see Add virtual processors in online mode and Drop CPU and user-defined virtual processors.
Priority aging can cause the performance of database server processes to decline over time. In some cases, however, you can use the operating system to disable this
feature and keep long-running processes running at a high priority.
To determine if priority aging is available on your computer, check the machine notes file that comes with your installation and is described in the Introduction to this
guide.
If you can disable priority aging through the operating system, you can disable it by specifying noage for the priority entry in the VPCLASS configuration parameter. For
more information, see the HCL Informix® Administrator's Reference.
Your database server distribution includes a machine notes file that contains information about whether your database server version supports this feature. When you
assign a CPU virtual processor to a specific CPU, the virtual processor runs only on that CPU, but other processes also can run on that CPU.
Use the VPCLASS configuration parameter with the aff option to implement processor affinity on multiprocessor computers that support it.
UNIX only: To see if processor affinity is supported on your UNIX platform, see the machine notes file.
When specifying a range of processors, you can also specify an incremental value with the range that indicates which CPUs in the range are assigned to the virtual
processors. For example, you can specify that the virtual processors are assigned to every other CPU in the range 0-6, starting with CPU 0.
VPCLASS CPU,num=4,aff=(0-6/2)
If you specify VPCLASS CPU,num=4,aff=(1-10/3), the virtual processors are assigned to every third CPU in the range 1-10, starting with CPU 1. The virtual processors
are assigned to CPUs 1, 4, 7, 10.
When you specify more than one value or range, the values and ranges are not required to be incremental or in any particular order. For example you can specify
aff=(8,12,7-9,0-6/2).
The database server assigns CPU virtual processors to CPUs in a circular pattern, starting with the first processor number that you specify in the aff option. If you
specify a larger number of CPU virtual processors than physical CPUs, the database server continues to assign CPU virtual processors starting with the first CPU. For
example, suppose you specify the following VPCLASS settings:
VPCLASS cpu,num=8,aff=(4-7)
In this case, the eight CPU virtual processors are assigned to CPUs in the order 4, 5, 6, 7, 4, 5, 6, 7.
For more information, see the VPCLASS configuration parameter in the HCL Informix® Administrator's Reference.
User-defined routines are typically written to support user-defined data types. If you do not want a user-defined routine to run in the CPU class, which is the default, you
can assign it to a user-defined class of virtual processors (VPs). User-defined classes of virtual processors are also called extension virtual processors.
These topics provide the following information about user-defined virtual processors:
When to run a C-language UDR in a user-defined VP instead of in the CPU VP
How to assign a C-language UDR to a particular user-defined VP class
How to add and drop user-defined VPs when the database server is in online mode
If you run many UDRs or parallel PDQ queries with UDRs, you must configure more user-defined virtual processors.
A well-behaved C-language UDR has none of these characteristics. Run only well-behaved C-language UDRs in a CPU VP.
Warning: Execution of an ill-behaved routine in a CPU VP can cause serious interference with the operation of the database server, possibly causing it to fail or behave
erratically. In addition, the routine itself might not produce correct results.
To ensure safe execution, assign any ill-behaved user-defined routines to a user-defined class of virtual processors. User-defined VPs remove the following
programming restrictions on the CPU VP class:
The requirement to yield the processor regularly
The requirement to eliminate blocking I/O calls
Functions that run in a user-defined virtual-processor class are not required to yield the processor, and they might issue direct file-system calls that block further
processing by the virtual processor until the I/O is complete.
The normal processing of user queries is not affected by ill-behaved traits of a C-language UDR because these UDRs do not execute in CPU virtual processors. For a
more detailed explanation of ill-behaved routines, see the IBM® Informix® DataBlade API Programmer's Guide.
For more information, see Set virtual-processor configuration parameters and the topics about configuration parameters in the HCL Informix® Administrator's Reference.
The following CREATE FUNCTION statement registers the user-defined routine, GreaterThanEqual(), and specifies that calls to this routine are executed by the
user-defined VP class named UDR:
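A statement of the following general form registers the routine (the argument type and the external shared-object path are placeholders; substitute the ones for your own routine):

```
CREATE FUNCTION GreaterThanEqual(MyType, MyType)
   RETURNS BOOLEAN
   WITH (CLASS = UDR)
   EXTERNAL NAME '/usr/lib/objects/udrs.so'
   LANGUAGE C;
```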
To execute this function, the onconfig file must include a VPCLASS parameter that defines the UDR class. If not, calls to the GreaterThanEqual function fail.
Tip: The CLASS routine modifier can specify any name for the VP class. This class name is not required to exist when you register the UDR. However, when you try to
run a UDR that specifies a user-defined VP class for its execution, this class must exist and have virtual processors assigned to it.
To configure the UDR class, include a line similar to the following one in the onconfig file. This line configures the UDR class with two virtual processors and with no
priority aging.
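For example:

```
VPCLASS UDR,num=2,noage
```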
The preceding line defines the UDR VP class as a yielding VP class; that is, this VP class allows the C-language UDR to yield to other threads that need access to the
UDR VP class. For more information about how to use the VPCLASS configuration parameter, see the HCL Informix® Administrator's Reference.
For more information about the CREATE FUNCTION statement, see the IBM Informix Guide to SQL: Syntax.
Related reference:
Determine the number of CPU virtual processors needed
For instructions on how to do this, see Add virtual processors in online mode and Drop CPU and user-defined virtual processors.
You can create a tenant virtual processor class by defining the class and the number of virtual processors when you create a tenant database. You can assign a tenant
virtual processor class to multiple tenant databases. Set the VP_MEMORY_CACHE_KB configuration parameter to create a private memory cache for each CPU virtual
processor and tenant virtual processor.
A tenant virtual processor class is automatically dropped when all tenant databases that include the virtual processor class in their definitions are dropped.
Related concepts:
Multitenancy
Related information:
tenant create argument: Create a tenant database (SQL Administration API)
tenant update argument: Modify tenant database properties (SQL Administration API)
VP_MEMORY_CACHE_KB configuration parameter
onstat -g vpcache command: Print CPU virtual processor and tenant virtual processor private memory cache statistics
A JVP embeds a Java virtual machine (JVM) in its code. A JVP has the same capabilities as a CPU VP in that it can process complete SQL queries.
You can specify as many JVPs as your operating system allows. If you run many Java UDRs or parallel PDQ queries with Java UDRs, you must configure more JVPs.
For more information about UDRs written in Java, see IBM® J/Foundation Developer's Guide.
Use the VPCLASS configuration parameter with the jvp keyword to configure JVPs. For more information, see the configuration parameters chapter in the HCL
Informix® Administrator's Reference.
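For example, an onconfig line of the following form (the number of JVPs shown is illustrative) starts two Java virtual processors:

```
VPCLASS jvp,num=2
```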
The PIO class performs all I/O to the physical-log file, and the LIO class performs all I/O to the logical-log files, unless those files are in raw disk space and the database
server has implemented KAIO.
On operating systems that do not support KAIO, the database server uses the AIO class of virtual processors to perform database I/O that is not related to physical or
logical logging.
The database server uses the CPU class to perform KAIO when it is available on a platform. If the database server implements KAIO, a KAIO thread performs all I/O to
raw disk space, including I/O to the physical and logical logs.
UNIX only: To find out if your UNIX platform supports KAIO, see the machine notes file.
I/O priorities
The database server prioritizes disk I/O by assigning different types of I/O to different classes of virtual processors and by assigning priorities to the nonlogging
I/O queues.
Logical-log I/O
The LIO class of virtual processors performs I/O to the logical-log files.
Physical-log I/O
The PIO class of virtual processors performs I/O to the physical-log file.
Asynchronous I/O
The database server performs database I/O asynchronously, meaning that I/O is queued and performed independently of the thread that requests the I/O.
Performing I/O asynchronously allows the thread that makes the request to continue working while the I/O is being performed.
Prioritizing ensures that a high-priority log I/O, for example, is never queued behind a write to a temporary file, which has a low priority. The database server prioritizes
the different types of disk I/O that it performs, as the table shows.
Only when KAIO is implemented and the logical-log files are in raw disk space does the database server use a KAIO thread in the CPU virtual processor to perform I/O
to the logical log.
The logical-log files store the data that enables the database server to roll back transactions and recover from system failures. I/O to the logical-log files is the highest
priority disk I/O that the database server performs.
If the logical-log files are in a dbspace that is not mirrored, the database server runs only one LIO virtual processor. If the logical-log files are in a dbspace that is
mirrored, the database server runs two LIO virtual processors. This class of virtual processors has no parameters associated with it.
Only when KAIO is implemented and the physical-log file is in raw disk space does the database server use a KAIO thread in the CPU virtual processor to perform I/O to
the physical log. The physical-log file stores before-images of dbspace pages that have changed since the last checkpoint. (For more information about checkpoints, see
Checkpoints.) At the start of recovery, before processing transactions from the logical log, the database server uses the physical-log file to restore before-images to
dbspace pages that have changed since the last checkpoint. I/O to the physical-log file is the second-highest priority I/O after I/O to the logical-log files.
If the physical-log file is in a dbspace that is not mirrored, the database server runs only one PIO virtual processor. If the physical-log file is in a dbspace that is mirrored,
the database server runs two PIO virtual processors. This class of virtual processors has no parameters associated with it.
The database server performs all database I/O asynchronously, using one of the following facilities:
AIO virtual processors
KAIO on platforms that support it
Database I/O includes I/O for SQL statements, read-ahead, page cleaning, and checkpoints.
Kernel-asynchronous I/O
The database server implements KAIO by running a KAIO thread on the CPU virtual processor. The KAIO thread performs I/O by making system calls to the
operating system, which performs the I/O independently of the virtual processor.
The database server uses KAIO when the following conditions exist:
The computer and operating system support it.
A performance gain is realized.
The I/O is to raw disk space.
The KAIO thread can produce better performance for disk I/O than the AIO virtual processor can, because it does not require a switch between the CPU and AIO virtual
processors.
UNIX only: HCL Informix® implements KAIO when Informix is ported to a platform that supports this feature. The database server administrator does not configure KAIO.
To see if KAIO is supported on your platform, see the machine notes file.
Linux only: Kernel asynchronous I/O (KAIO) is enabled by default. You can disable it by setting KAIOOFF=1 in the environment of the process that starts the
server.
On Linux, there is a system-wide limit on the maximum number of parallel KAIO requests. The /proc/sys/fs/aio-max-nr file contains this value. The Linux system
administrator can increase the value, for example, by using this command:
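For example (the limit shown is illustrative; the command must be run as root):

```shell
echo 1048576 > /proc/sys/fs/aio-max-nr
```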
The current number of allocated requests of all operating system processes is visible in the /proc/sys/fs/aio-nr file.
By default, the database server allocates half of the maximum number of requests and assigns them equally to the number of configured CPU virtual processors. You can
use the environment variable KAIOON to control the number of requests allocated per CPU virtual processor. Do this by setting KAIOON to the required value before
starting Informix.
The minimum value for KAIOON is 100. If Linux is about to run out of KAIO resources, for example when dynamically adding many CPU virtual processors, warnings are
printed in the online.log file. If this happens, the Linux system administrator must add KAIO resources as described previously.
The database server assigns each disk chunk a queue, sometimes known as a gfd queue, which is based on the file name of the chunk. The database server orders I/O
requests within a queue according to an algorithm that minimizes disk-head movement. The AIO virtual processors service queues that have pending work in round-
robin fashion. All other non-chunk I/O is queued in the AIO queue.
Use the VPCLASS parameter with the aio keyword to specify the number of AIO virtual processors that the database server starts initially. You can start additional AIO
virtual processors while the database server is in online mode. You cannot drop AIO virtual processors while the database server is in online mode.
You can enable the database server to add AIO virtual processors and flusher threads when the server detects that AIO VPs are not keeping up with the I/O workload.
Include the autotune=1 keyword in the VPCLASS configuration parameter setting.
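For example (the initial number of AIO virtual processors shown is illustrative):

```
VPCLASS aio,num=2,autotune=1
```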
The goal in allocating AIO virtual processors is to allocate enough of them so that the lengths of the I/O request queues are kept short; that is, the queues have as few
I/O requests in them as possible. When the gfd queues are consistently short, it indicates that I/O to the disk devices is being processed as fast as the requests occur.
The onstat -g ioq command shows the length and other statistics about I/O queues. You can use this command to monitor the length of the gfd queues for the AIO
virtual processors.
If the database server implements KAIO but some chunks use buffered files, or if the operating system does not support KAIO for chunks, allocate two AIO virtual
processors per active dbspace that is composed of buffered file chunks. If KAIO is not implemented on your platform, allocate two AIO virtual processors for each disk
that the database server accesses frequently.
If you use cooked files and if you enable direct I/O using the DIRECT_IO configuration parameter, you might be able to reduce the number of AIO virtual processors.
If the database server implements KAIO and you enabled direct I/O using the DIRECT_IO configuration parameter, HCL Informix® attempts to use KAIO, so you
probably do not require more than one AIO virtual processor. However, even when direct I/O is enabled, if the file system does not support either direct I/O or KAIO, you
still must allocate two AIO virtual processors for every active dbspace that is composed of buffered file chunks or does not use KAIO.
Temporary dbspaces do not use direct I/O. If you have temporary dbspaces, you probably require more than one AIO virtual processor.
Allocate enough AIO virtual processors to accommodate the peak number of I/O requests. Generally, it is not detrimental to allocate too many AIO virtual processors.
Related information:
VPCLASS configuration parameter
The network connection can be made by a client on a remote computer or by a client on the local computer mimicking a connection from a remote computer (called a
local-loopback connection).
Fast polling
You can use the FASTPOLL configuration parameter to enable or disable fast polling of your network, if your operating-system platform supports fast polling.
The NETTYPE configuration parameter provides optional configuration information for an interface/protocol combination. You can use it to allocate more than one poll
thread for an interface/protocol combination and also designate the virtual-processor class (CPU or NET) on which the poll threads run.
For a complete description of the NETTYPE configuration parameter, see the HCL Informix® Administrator's Reference.
Related reference:
sqlhosts connectivity information
In general, and particularly on a single-processor computer, poll threads run more efficiently on CPU virtual processors. This might not be true, however, on a
multiprocessor computer with many remote clients.
The NETTYPE parameter has an optional entry, called vp class, which you can use to specify either CPU or NET, for CPU or network virtual-processor classes,
respectively.
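For example, an entry of the following form (the thread and connection counts are illustrative) runs three poll threads for the onsoctcp protocol on NET-class virtual processors:

```
NETTYPE soctcp,3,50,NET
```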
If you do not specify a virtual processor class for the interface/protocol combination (poll threads) associated with the DBSERVERNAME variable, the class defaults to
CPU. The database server assumes that the interface/protocol combination associated with DBSERVERNAME is the primary interface/protocol combination and that it is
the most efficient.
While the database server is in online mode, you cannot drop a CPU virtual processor that is running a poll or a listen thread.
Important: You must carefully distinguish between poll threads for network connections and poll threads for shared-memory connections, which run one per CPU virtual
processor. TCP connections must be in network virtual processors only, and you must have only the minimum number required to maintain responsiveness. Shared-memory
connections must be in CPU virtual processors only and run in every CPU virtual processor.
Related reference:
Determine the number of CPU virtual processors needed
If you specify CPU for the vp class, you must allocate a sufficient number of CPU virtual processors to run the poll threads. If the database server does not have a CPU
virtual processor to run a CPU poll thread, it starts a network virtual processor of the specified class to run it.
For most systems, one poll thread and consequently one virtual processor per network interface/protocol combination is sufficient. For systems with 200 or more network
users, running additional network virtual processors might improve throughput. In this case, you must experiment to determine the optimal number of virtual processors
for each interface/protocol combination.
To specify a listen port for each of these dbservername entries, assign it a unique combination of hostname and service name entries in sqlhosts. For example, the
sqlhosts file or registry entry shown in the following table causes the database server soc_ol1 to start a listen thread for port1 on the host, or network address,
myhost.
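Such an entry might look like the following (the nettype field is assumed to be onsoctcp, consistent with the soc_ naming used here):

```
soc_ol1   onsoctcp   myhost   port1
```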
The listen thread opens the port and requests one of the poll threads for the specified interface/protocol combination to monitor the port for client requests. The poll
thread runs either in the CPU virtual processor or in the network virtual processor for the connection that is being used. For information about the number of poll threads,
see Specify the number of networking virtual processors.
For information about how to specify whether the poll threads for an interface/protocol combination run in CPU or network virtual processors, see Run poll threads on
CPU or network virtual processors and to the NETTYPE configuration parameter in the IBM® Informix® Administrator's Reference.
When a poll thread receives a connection request from a client, it passes the request to the listen thread for the port. The listen thread authenticates the user, establishes
the connection to the database server, and starts an sqlexec thread, the session thread that performs the primary processing for the client. The following figure
illustrates the roles of the listen and poll threads in establishing a connection with a client application.
Figure 1. The roles of the poll and the listen threads in connecting to a client
A poll thread waits for requests from the client and places them in shared memory to be processed by the sqlexec thread. For network connections, the poll thread
places the message in a queue in the shared-memory global pool. The poll thread then wakes up the sqlexec thread of the client to process the request. Whenever
possible, the sqlexec thread writes directly back to the client without the help of the poll thread. In general, the poll thread reads data from the client, and the sqlexec
thread sends data to the client.
UNIX only: For a shared-memory connection, the poll thread places the message in the communications portion of shared memory.
The following figure illustrates the basic tasks that the poll thread and the sqlexec thread perform in communicating with a client application.
Figure 2. The roles of the poll and sqlexec threads in communicating with the client application
Fast polling is beneficial if you have many connections. For example, if you have more than 300 concurrent connections with the database server, you can enable the
FASTPOLL configuration parameter for better performance. You can enable fast polling by setting the FASTPOLL configuration parameter to 1.
If your operating system does not support fast polling, HCL Informix® ignores the FASTPOLL configuration parameter.
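For example, to enable fast polling, set the parameter in the onconfig file:

```
FASTPOLL 1
```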
If the database server cannot service connection requests satisfactorily for a given interface/protocol combination with a single port and corresponding listen thread, you
can improve service for connection requests in the following ways:
By adding listen threads for additional ports
By adding listen threads to the same port, if you use the onimcsoc or onsoctcp protocol
By adding another network-interface card
By dynamically starting, stopping, or restarting listen threads for a SOCTCP or TLITCP network protocol, using SQL administration API or onmode -P commands
If you have multiple listen threads for one port for the onsoctcp protocol, the database server can accept new connections if a CPU VP is busy.
You can also set up multiple listen threads for one service (port) for the onimcsoc or onsoctcp protocol.
To add listen threads for additional ports, you must first use the DBSERVERALIASES parameter to specify dbservernames for each of the ports. For example, the
DBSERVERALIASES parameter in the following figure defines two additional dbservernames, soc_ol2 and soc_ol3, for the database server instance identified as
soc_ol1.
DBSERVERNAME soc_ol1
DBSERVERALIASES soc_ol2,soc_ol3
After you define additional dbservernames for the database server, you must specify an interface/protocol combination and port for each of them in the sqlhosts file or
registry. Each port is identified by a unique combination of hostname and servicename entries. For example, the sqlhosts entries shown in the following table cause
the database server to start three listen threads for the onsoctcp interface/protocol combination, one for each of the ports defined.
Table 1. The sqlhosts entries to listen to multiple ports for a single interface/protocol combination
dbservername nettype hostname service name
soc_ol1 onsoctcp myhost port1
soc_ol2 onsoctcp myhost port2
soc_ol3 onsoctcp myhost port3
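In the sqlhosts file itself, the entries from the table are whitespace-separated fields, one entry per line (the host and service names here are the illustrative values from the table):

```
soc_ol1  onsoctcp  myhost  port1
soc_ol2  onsoctcp  myhost  port2
soc_ol3  onsoctcp  myhost  port3
```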
If you include a NETTYPE parameter for an interface/protocol combination, it applies to all the connections for that interface/protocol combination. In other words, if a
NETTYPE parameter exists for onsoctcp in the previous table, it applies to all of the connections shown. In this example, the database server runs one poll thread for
the onsoctcp interface/protocol combination unless the NETTYPE parameter specifies more. For more information about entries in the sqlhosts file or registry, see
Connectivity files.
Setting up multiple listen threads for one port for the onimcsoc or onsoctcp protocol
To set up multiple listen threads for one service (port) for the onimcsoc or onsoctcp protocol, specify DBSERVERNAME and DBSERVERALIASES information as
follows:
DBSERVERNAME <name>-<n>
DBSERVERALIASES <name1>-<n>,<name2>-<n>
For example:
To bring up two listen threads for the server with the DBSERVERNAME of ifx, specify:
DBSERVERNAME ifx-2
To bring up two listen threads for DBSERVERALIASES ifx_a and ifx_b, specify:
DBSERVERALIASES ifx_a-2,ifx_b-2
You might want to improve performance if the network-interface card for the host computer cannot service connection requests satisfactorily.
To support multiple network-interface cards, you must assign each card a unique hostname (network address) in sqlhosts.
For example, using the same dbservernames shown in Add listen threads, the sqlhosts file or registry entries shown in the following table cause the database server to
start three listen threads for the same interface/protocol combination (as did the entries in Add listen threads). In this case, however, two of the threads are listening to
ports on one interface card (myhost1), and the third thread is listening to a port on the second interface card (myhost2).
Table 1. Example of sqlhosts entries to support two network-interface cards for the onsoctcp interface/protocol combination
dbservername nettype hostname service name
soc_ol1 onsoctcp myhost1 port1
soc_ol2 onsoctcp myhost1 port2
soc_ol3 onsoctcp myhost2 port1
The listen thread must be defined in the sqlhosts file for the server. If necessary, you can revise the sqlhosts entry before you start, stop, or restart a listen thread.
An admin() or task() command with the stop listen argument
An admin() or task() command with the restart listen argument
For example, either of the following commands starts a new listen thread for a server named ifx_serv2:
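As a sketch (syntax assumed from the onmode conventions described above; verify against your version's Administrator's Reference), starting a listen thread for ifx_serv2 with the onmode utility:

```
onmode -P start ifx_serv2
```

The equivalent SQL administration API call would take the form EXECUTE FUNCTION task('start listen', 'ifx_serv2'), run from the sysadmin database.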
The database server starts the same number of CSM virtual processors as the number of CPU virtual processors that it starts, unless the communications support
module is set to GSSCSM to support single sign-on. When the communications support module is GSSCSM, the database server starts only one CSM virtual processor.
For more information about the communications support service, see Client/server communication.
If the encrypt option of the VPCLASS parameter is not defined in the onconfig configuration file, the database server starts one ENCRYPT VP the first time that any
encryption or decryption functions defined for column-level encryption are called. You can define multiple ENCRYPT VPs if necessary to decrease the time required to
start the database server.
To add five ENCRYPT VPs, add information in the onconfig file as follows:
VPCLASS encrypt,num=5
You can modify the same information using the onmode utility, as follows:
onmode -p 5 encrypt
For more information, see the configuration parameters and the onmode utility topics in the HCL Informix® Administrator's Reference. For more information about
column-level encryption, see the IBM Informix Security Guide.
For more information about database server auditing, see the IBM® Informix® Security Guide.
Only one thread runs on this virtual processor; it executes with a stack of 128 KB.
A basic text search virtual processor is added automatically when you create a basic text search index.
A basic text search virtual processor runs without yielding; it processes one index operation at a time. To run multiple basic text search index operations and queries
simultaneously, create additional basic text search virtual processors.
Use the VPCLASS configuration parameter with the BTS keyword to configure basic text search virtual processors. For example, to add five BTS virtual processors, add
the following line to the onconfig and restart the database server:
VPCLASS bts,num=5
You can dynamically add BTS virtual processors by using the onmode -p command, for example:
onmode -p 5 bts
Related information:
VPCLASS configuration parameter
onmode -p: Add or drop virtual processors
Basic Text Search
An MQ virtual processor runs without yielding; it processes one operation at a time. To perform multiple MQ messaging transactions simultaneously, create additional
MQ virtual processors.
Use the VPCLASS configuration parameter with the MQ keyword to configure MQ virtual processors. For example, to add five MQ virtual processors, add the following
line to the onconfig and restart the database server:
VPCLASS mq,noyield,num=5
For more information about the VPCLASS configuration parameter, see the IBM® Informix® Administrator's Reference. For more information about MQ messaging, see
the IBM Informix Database Extensions User's Guide.
When you run a WFS routine, a WFS virtual processor is created automatically.
A WFS virtual processor runs without yielding; it processes one operation at a time. To run multiple WFS routines simultaneously, create additional WFS virtual
processors.
Use the VPCLASS configuration parameter with the WFSVP keyword to configure WFS virtual processors. For example, to add five WFS virtual processors, add the
following line to the onconfig and restart the database server:
VPCLASS wfsvp,noyield,num=5
For more information about the VPCLASS configuration parameter, see the IBM® Informix® Administrator's Reference. For more information about WFS, see the IBM
Informix Database Extensions User's Guide.
When you run an XML function, an XML virtual processor is created automatically.
An XML virtual processor runs one XML function at a time. To run multiple XML functions simultaneously, create additional XML virtual processors.
Use the VPCLASS configuration parameter with the IDSXMLVP keyword to configure XML virtual processors. For example, to add five XML virtual processors, add the
following line to the onconfig and restart the database server:
VPCLASS idsxmlvp,num=5
You can dynamically add XML virtual processors by using the onmode -p command, for example:
onmode -p 5 idsxmlvp
For more information about the VPCLASS configuration parameter and the onmode utility, see the IBM® Informix® Administrator's Reference. For more information
about XML publishing, see the IBM Informix Database Extensions User's Guide.
For descriptions of the virtual-processor classes and for advice on how many virtual processors you must specify for each class, see Virtual processors and threads.
The table lists the configuration parameters that are used to configure virtual processors.
Related information:
VPCLASS configuration parameter
You configure virtual processors primarily through configuration parameters and, for network virtual processors, through parameters in the sqlhosts information.
The database server can start a maximum of 1000 virtual processors.
After the database server is in online mode, you can start more virtual processors to improve performance, if necessary.
While the database server is in online mode, you can drop virtual processors of the CPU and user-defined classes.
To shut down the database server and stop all virtual processors, use the onmode -k command.
Related concepts:
Virtual processor classes
Related information:
onmode -k, -m, -s, -u, -j: Change database server mode
You can start these additional virtual processors with the -p option of the onmode utility. You can add network virtual processors with the NETTYPE configuration
parameter.
You can also start additional virtual processors for user-defined classes to run user-defined routines. For more information about user-defined virtual processors, see
Assign a UDR to a user-defined virtual-processor class.
Related information:
onmode -p: Add or drop virtual processors
NETTYPE configuration parameter
Specify the number of virtual processors that you want to add with a positive number. As an option, you can precede the number of virtual processors with a plus sign
(+). Following the number, specify the virtual processor class in lowercase letters. For example, either of the following commands starts four additional virtual processors
in the AIO class:
onmode -p 4 aio
onmode -p +4 aio
You can add virtual processors to only one class at a time. To add virtual processors for another class, you must run onmode again.
In the following example, the poll threads handle a total of 240 connections:
For ipcshm, the number of poll threads corresponds to the number of memory segments. For example, if NETTYPE is set to ipcshm,3,100 and you want one poll thread, set it to ipcshm,1,300.
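As an illustrative onconfig sketch (the values are assumed, not from the original), a NETTYPE entry of the form protocol,poll_threads,connections_per_thread yields poll_threads x connections_per_thread total connections:

```
# 3 poll threads x 80 connections each = 240 total connections
NETTYPE ipcshm,3,80,CPU
```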
While the database server is in online mode, you can use the -p option of the onmode utility to drop, or terminate, virtual processors of the CPU and user-defined
classes.
Following the onmode command, specify a negative number that is the number of virtual processors that you want to drop, and then specify the CPU class in lowercase
letters. For example, the following command drops two CPU virtual processors:
% onmode -p -2 cpu
If you attempt to drop a CPU virtual processor that is running a poll thread, you receive the following message:
onmode: failed when trying to change the number of cpu virtual processor by -number.
For more information, see Run poll threads on CPU or network virtual processors.
Following the onmode command, specify a negative number that is the number of virtual processors that you want to drop, and then specify the user-defined class in
lowercase letters. For example, the following command drops two virtual processors of the class usr:
onmode -p -2 usr
Windows only: In Windows, you can have only one user-defined virtual processor class at a time. Omit the number parameter in the onmode -p vpclass command.
For information about how to create a user-defined class of virtual processors and assign user-defined routines to it, see User-defined classes of virtual processors.
For more information about these onstat -g options, see the topics on the effect of configuration on CPU utilization in the HCL Informix® Performance Guide.
For examples of output for the onstat -g commands, see information about the onstat utility in the IBM Informix Administrator's Reference.
For an example of onstat -g glo output, see information about the onstat utility in the IBM® Informix® Administrator's Reference.
If the length of the I/O queue is growing, I/O requests are accumulating faster than the AIO virtual processors can process them. If the length of the I/O queue continues
to show that I/O requests are accumulating, consider adding AIO virtual processors.
For an example of onstat -g ioq output, see information in the IBM® Informix® Administrator's Reference.
If the number of threads in the ready queue is growing for a class of virtual processors (for example, the CPU class), you might be required to add more virtual
processors to your configuration.
For an example of onstat -g rea output, see information in the IBM® Informix® Administrator's Reference.
Column Description
vpid Virtual-processor ID number
class Virtual-processor class
usercpu Minutes of user CPU used
syscpu Minutes of system CPU used
For information about how to change the database server configuration parameters that determine shared memory allocations, see Manage shared memory.
Shared memory
Shared memory is an operating-system feature that allows the database server threads and processes to share data by sharing access to pools of memory.
Shared-memory use
The database server uses shared memory to enable virtual processors and utilities to share data and to provide a fast communications channel for local client
applications that use IPC communication.
Concurrency control
The database server threads that run on the same virtual processor and on separate virtual processors share access to resources in shared memory.
Related reference:
Database server maintenance tasks
Shared memory is an operating-system feature that allows the database server threads and processes to share data by sharing access to pools of memory.
The database server uses shared memory for the following purposes:
To reduce memory usage and disk I/O
To perform high-speed communication between processes
Shared memory enables the database server to reduce overall memory usage because the participating processes, in this case virtual processors, do not need to maintain private copies of the data that is in shared memory.
Shared memory reduces disk I/O, because buffers, which are managed as a common pool, are flushed on a database server-wide basis instead of a per-process basis.
Furthermore, a virtual processor can often avoid reading data from disk because the data is already in shared memory as a result of an earlier read operation. The
reduction in disk I/O reduces execution time.
Shared memory provides the fastest method of interprocess communication, because it processes read and write messages at the speed of memory transfers.
Shared-memory allocation
The database server creates portions in shared memory to handle different processes.
Shared-memory size
Each portion of the database server shared memory consists of one or more operating-system segments of memory, each one divided into a series of blocks that
are 4 KB in size and managed by a bitmap.
If the sqlhosts file specifies shared-memory communications, the database server allocates memory for the communications portion.
The database server adds operating-system segments, as required, to the virtual and virtual-extension portions of shared memory.
For more information about shared-memory settings for your platform, see the machine notes. The following figure shows the contents of each portion of shared memory.
All database server virtual processors have access to the same shared-memory segments. Each virtual processor manages its work by maintaining its own set of
pointers to shared-memory resources such as buffers, locks, and latches. Virtual processors attach to shared memory when you take the database server from offline
mode to quiescent, administration, or online. The database server uses locks and latches to manage concurrent access to shared-memory resources by multiple
threads.
The header line of onstat utility output contains the size of the database server shared memory, expressed in KB. You can also use onstat -g seg to monitor
how much memory the database server allocates for each portion of shared memory. For information about how to use onstat, see the HCL Informix® Administrator's
Reference.
You can set the SHMTOTAL parameter in the onconfig file to limit the amount of memory overhead that the database server can place on your computer or node. The
SHMTOTAL parameter specifies the total amount of shared memory that the database server can use for all memory allocations. However, certain operations might fail if
the database server requires more memory than the amount set in SHMTOTAL. If this condition occurs, the database server displays the following message in the
message log:
In addition, the database server returns an error message to the application that initiated the offending operation. For example, if the database server requires more
memory than you specify in SHMTOTAL while it tries to perform an operation such as an index build or a hash join, it returns an error message to the application that is
similar to one of the following messages:
After the database server sends these messages, it rolls back any partial results performed by the offending query.
Internal operations, such as page-cleaner or checkpoint activity, can also cause the database server to exceed the SHMTOTAL ceiling. When this situation occurs, the
database server sends a message to the message log. For example, suppose that the database server attempts and fails to allocate additional memory for page-cleaner
activity. As a consequence, the database server sends information to the message log that is similar to the following messages:
After the database server informs you about the failure to allocate additional memory, it rolls back the transactions that caused it to exceed the SHMTOTAL limit.
Immediately after the rollback, operations no longer fail from lack of memory, and the database server continues to process transactions as usual.
Only the operation that caused the database server to run out of memory temporarily fails. Other operations continue to be processed in a normal fashion.
If messages indicate on a regular basis that the database server requires more memory than SHMTOTAL allows, you have not configured the database server correctly.
Lowering DS_TOTAL_MEMORY or the buffers value in the BUFFERPOOL configuration parameter is one possible solution; increasing the value of SHMTOTAL is
another.
The following topics describe how each type of process attaches to the database server shared memory.
For information about specifying a shared-memory connection, see Client/server communication, and Network virtual processors.
If the INFORMIXSHMBASE environment variable is not set, the client application attaches to the communications portion at an address that is platform-specific. If the
client application attaches to other shared-memory segments (not database server shared memory), the user can set the INFORMIXSHMBASE environment variable to
specify the address at which to attach the database server shared-memory communications segments. When you specify the address at which to attach the shared-memory communications segments, you can prevent the database server from colliding with the other shared-memory segments that your application uses. For
information about how to set the INFORMIXSHMBASE environment variable, see the HCL Informix® Guide to SQL: Reference.
Related reference:
Shared-memory connections (UNIX)
Environment variables for network connections
The onstat, onmode, and ontape utilities attach to shared memory through one of the following files.
The variable servername is the value of the DBSERVERNAME parameter in the onconfig file. The utilities obtain the servername portion of the file name from the
INFORMIXSERVER environment variable.
The oninit process reads the onconfig file and creates the file .infos.servername when it starts the database server. The file is removed when the database server
terminates.
During this process, the database server must satisfy the following two requirements:
Ensure that all virtual processors can locate and access the same shared-memory segments
Ensure that the shared-memory segments are located in physical memory locations that are different than the shared-memory segments assigned to other
instances of the database server, if any, on the same computer
The database server uses two configuration parameters, SERVERNUM and SHMBASE, to meet these requirements.
When a virtual processor attaches to shared memory, it performs the following major steps:
Accesses the SERVERNUM parameter from the onconfig file
Uses SERVERNUM to calculate a shared-memory key value
Requests a shared-memory segment using the shared-memory key value
The operating system returns the shared-memory identifier for the first shared-memory segment.
Directs the operating system to attach the first shared-memory segment to its process space at SHMBASE
Attaches additional shared-memory segments, if required, to be contiguous with the first segment
The following topics describe how the database server uses the values of the SERVERNUM and SHMBASE configuration parameters in the process of attaching
shared-memory segments.
To see the key values for shared-memory segments, run the onstat -g seg command. For more information, see the sections on SHMADD and the buffer pool in
your HCL Informix® Performance Guide.
When a virtual processor requests that the operating system attach the first shared-memory segment, it supplies the unique key value to identify the segment. In return,
the operating system passes back a shared-memory segment identifier associated with the key value. Using this identifier, the virtual processor requests that the
operating system attach the segment of shared memory to the virtual-processor address space.
Each virtual processor attaches to the first shared-memory segment at the same virtual address. This situation enables all virtual processors within the same database
server instance to reference the same locations in shared memory without calculating shared-memory addresses. All shared-memory addresses for an instance of the
database server are relative to SHMBASE.
For information about SHMBASE, see your HCL Informix® machine notes.
Each virtual processor must attach to the total amount of shared memory that the database server has acquired. After a virtual processor attaches each shared-memory
segment, it calculates how much shared memory it has attached and how much remains. The database server facilitates this process by writing a shared-memory
header to the first shared-memory segment. Sixteen bytes into the header, a virtual processor can obtain the following data:
The total size of shared memory for this database server
The size of each shared-memory segment
For the additional segments, however, the virtual processor adds 1 to the previous value of shmkey. The virtual processor directs the operating system to attach the
segment at the address that results from the following calculation:
The virtual processor repeats this process until it has acquired the total amount of shared memory.
Given the initial key value of (SERVERNUM * 65536) + shmkey, the database server can request up to 65,536 shared-memory segments before it requests a shared-memory key value that is used by another database server instance on the same computer.
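The key arithmetic described above can be sketched as follows. The base shmkey value is platform-specific; the constant below is an assumed illustrative value, not taken from the original:

```python
BASE_KEY = 0x52564801  # assumed platform base key; the actual value is platform-specific


def shm_segment_key(servernum: int, segment_index: int) -> int:
    """Key for a server instance's Nth shared-memory segment.

    The first segment (index 0) uses (SERVERNUM * 65536) + base key;
    each additional segment adds 1 to the previous key value.
    """
    return BASE_KEY + servernum * 65536 + segment_index


# Two instances on the same computer cannot collide until one of them
# requests more than 65,536 segments:
assert shm_segment_key(1, 0) == shm_segment_key(0, 65535) + 1
```

This is why each instance effectively owns a block of 65,536 consecutive key values, keyed off its unique SERVERNUM.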
The following figure illustrates the problem. If the lower-boundary address is less than the ending address of the previous segment plus the size of the current segment,
the operating system attaches the current segment at a point beyond the end of the previous segment. This action creates a gap between the two segments. Because
shared memory must be attached to a virtual processor so that it looks like contiguous memory, this gap creates problems. The database server receives errors when
this situation occurs.
To correct the problem, check the operating-system kernel parameter that specifies the lower-boundary address or reconfigure the kernel to allow larger shared-memory
segments.
The database server requests that the operating system keep the virtual portions in physical memory when the following two conditions exist:
The operating system supports shared-memory residency.
The RESIDENT parameter in the onconfig file is set to -1 or a value that is greater than 0.
Warning: You must consider the use of shared memory by all applications when you consider whether to set the RESIDENT parameter to -1. Locking all shared memory
for the use of the HCL Informix® database server can adversely affect the performance of other applications, if any, on the same computer.
The resident portion of the database server shared memory stores the following data structures that do not change in size while the database server is running:
Shared-memory header
Logical-log buffer
Physical-log buffer
Lock table
Shared-memory header
The shared-memory header contains a description of all other structures in shared memory, including internal tables and the buffer pool, and pointers to the
locations of these structures.
Logical-log buffer
The database server uses the logical log to store a record of changes to the database server data since the last dbspace backup. The logical log stores records
that represent logical units of work for the database server.
Physical-log buffer
The database server uses the physical-log buffer to hold before-images of some of the modified dbspace pages.
Lock table
A lock is created when a user thread writes an entry in the lock table. A single transaction can own multiple locks. The lock table is the pool of available locks.
Related information:
RESIDENT configuration parameter
When a virtual processor first attaches to shared memory, it reads address information in the shared-memory header for directions to all other structures.
The size of the shared-memory header is about 200 KB, but the size varies depending on the computer platform. You cannot tune the size of the header.
The logical log contains the following five types of log records, in addition to many others:
The database server uses only one of the logical-log buffers at a time. This buffer is the current logical-log buffer. Before the database server flushes the current logical-
log buffer to disk, it makes the second logical-log buffer the current one so that it can continue writing while the first buffer is flushed. If the second logical-log buffer fills
before the first one finishes flushing, the third logical-log buffer becomes the current one. This process is illustrated in the following figure.
Figure 1. The logical-log buffer and its relation to the logical-log files on disk
For a description of how the database server flushes the logical-log buffer, see Flush the logical-log buffer.
The LOGBUFF configuration parameter specifies the size of the logical-log buffers. Small buffers can create problems if you store records larger than the size of the
buffers (for example, TEXT or BYTE data in dbspaces). The recommended value for the size of a logical log buffer is 64 KB. Whenever the setting is less than the
recommended value, the database server suggests a value during server startup. For the possible values that you can assign to this configuration parameter, see the
HCL Informix® Administrator's Reference.
For information about the effect of TEXT and BYTE data on shared memory buffers, see Buffer large-object data.
The before-images in the physical log and the logical-log records enable the database server to restore consistency to its databases after a system failure.
The physical-log buffer is actually two buffers. Double buffering permits the database server processes to write to the active physical-log buffer while the other buffer is
being flushed to the physical log on disk. For a description of how the database server flushes the physical-log buffer, see Flush the physical-log buffer. For information
about monitoring the physical-log file, see Monitor physical and logical-logging activity.
The PHYSBUFF parameter in the onconfig file specifies the size of the physical-log buffers. A write to the physical-log buffer writes exactly one page. If the specified
size of the physical-log buffer is not evenly divisible by the page size, the database server rounds the size down to the nearest value that is evenly divisible by the page
size. Although some operations require the buffer to be flushed sooner, in general the database server flushes the buffer to the physical-log file on disk when the buffer
fills. Thus, the size of the buffer determines how frequently the database server must flush it to disk.
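The rounding rule described above can be sketched as follows (the function name and sizes are illustrative, not from the original):

```python
def effective_physbuff_kb(requested_kb: int, page_kb: int) -> int:
    """Round the requested physical-log buffer size down to the nearest
    value that is evenly divisible by the page size, as described above."""
    return (requested_kb // page_kb) * page_kb


print(effective_physbuff_kb(510, 4))  # 508: rounded down to a multiple of 4
print(effective_physbuff_kb(512, 2))  # 512: already evenly divisible
```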
The default value for the physical log buffer size is 512 KB. If you decide to use a smaller value, the database server displays a message indicating that optimal
performance might not be attained. Using a physical log buffer smaller than 512 KB affects performance only, not transaction integrity.
For more information about this configuration parameter, see the HCL Informix® Administrator's Reference.
If you implement data replication for your database server, the primary database server holds logical-log records in the data replication buffers before it sends them to the
secondary database server. A data replication buffer is always the same size as the logical-log buffer. For information about the size of the logical-log buffer, see the
preceding topic, Logical-log buffer. For more information about how the data replication buffer is used, see How data replication works.
For an explanation of locking and the SQL statements associated with locking, see the IBM® Informix® Guide to SQL: Tutorial.
The following information, which is stored in the lock table, describes the lock:
The address of the transaction that owns the lock
The type of lock (exclusive, update, shared, byte, or intent)
The page or rowid that is locked
The table space where the lock is placed
Information about the bytes locked (byte-range locks for smart large objects):
Smart-large-object ID
Offset into the smart large object where the locked bytes begin
The number of bytes locked, starting at the offset
To specify the initial size of the lock table, set the LOCKS configuration parameter. For information about using the LOCKS configuration parameter to specify the
number of locks for a session, see the topics about configuration parameters in the HCL Informix Administrator's Reference and the topics about configuration effects on
memory utilization in your HCL Informix Performance Guide.
If the number of locks allocated by sessions exceeds the value specified in the LOCKS configuration parameter, the database server doubles the size of the lock table,
up to 15 times. On each increase, the database server attempts to double the lock table; however, the amount added during each increase is limited to a maximum value.
For 32-bit platforms, a maximum of 100,000 locks can be added during each increase. Therefore, the total maximum number of locks allowed for 32-bit platforms is
8,000,000 (maximum number of starting locks) + 99 (maximum number of dynamic lock table extensions) x 100,000 (maximum number of locks added per lock table
extension), or 17,900,000 locks. For 64-bit platforms, a maximum of 1,000,000 locks can be added during each increase. Therefore, the total maximum number of locks
allowed is 500,000,000 (maximum number of starting locks) + 99 (maximum number of dynamic lock table extensions) x 1,000,000 (maximum number of locks added
per lock table extension), or 599,000,000 locks.
Use the DEF_TABLE_LOCKMODE configuration parameter to set the lock mode to page or row for new tables.
Locks can prevent sessions from reading data until after a concurrent transaction is committed or rolled back. For databases created with transaction logging, you can
use the USELASTCOMMITTED configuration parameter in the onconfig file to specify whether the database server uses the last committed version of the data. The
last committed version of the data is the version of the data that existed before any updates occurred. The value you set with the USELASTCOMMITTED configuration
parameter overrides the isolation level that is specified in the SET ISOLATION TO COMMITTED READ statement of SQL. For more information about using the
USELASTCOMMITTED configuration parameter, see the topics about configuration parameters in the HCL Informix Administrator's Reference.
For more information about using and monitoring locks, see the topics about locking in your HCL Informix Performance Guide and the IBM Informix Guide to SQL:
Tutorial.
The following figure illustrates the shared-memory header and the buffer pool.
You use the BUFFERPOOL configuration parameter to specify information about a buffer pool, including the number of buffers in the buffer pool or the overall size of the
buffer pool. Each buffer is the size of one database server page. Too few buffers can severely affect performance. You can set the BUFFERPOOL configuration
parameter to allow the database server to automatically increase the number of buffers as needed to improve performance. Otherwise, you must monitor the database
server and tune the number of buffers to determine an acceptable value.
A buffer pool manages one size of pages. You need a different buffer pool for each page size that is used by storage spaces in the database server. The database server
automatically creates the required buffer pools. For example, if you create the first dbspace that has a page size of 6 KB, the database server creates a buffer pool to
cache the default number of 6 KB pages in memory. You can control the properties of buffer pools with the BUFFERPOOL configuration parameter.
If the database server is in online, quiescent, or administration mode, you can also use the onparams -b command to add a buffer pool of a different size. When you
use the onparams -b command, the information that you specify is transferred automatically to the onconfig file as a new entry of the BUFFERPOOL configuration
parameter.
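As an illustration, a hedged sketch of adding a 6 KB-page buffer pool while the server is online; all flag values here are illustrative, so verify the exact options against your Administrator's Reference:

```shell
onparams -b -g 6 -n 10000 -r 8 -x 60.0 -m 50.0
# -g page size (KB), -n number of buffers, -r number of LRU queues,
# -x lru_max_dirty, -m lru_min_dirty
```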
In general, the database server performs I/O in full-page units, the size of a buffer. The exceptions are I/O performed from big buffers, from blobspace buffers, or from
lightweight I/O buffers.
Automatic LRU (least recently used) tuning affects all buffer pools and adjusts the lru_min_dirty and lru_max_dirty values that can be explicitly set by the
BUFFERPOOL configuration parameter.
The status of the buffers is tracked through the buffer table. Within shared memory, buffers are organized into FIFO/LRU buffer queues. Buffer acquisition is managed by
mutexes and lock-access information.
Related concepts:
Thread data
Mutexes
Related reference:
Creation of blobpage buffers
Related information:
BUFFERPOOL configuration parameter
onstat -b command: Print buffer information for buffers in use
The BUFFERPOOL configuration parameter and memory utilization
onparams -b: Add a buffer pool
As the database server executes, it automatically attaches additional operating-system segments, as necessary, to the virtual portion.
Data-distribution cache
The database server uses distribution statistics generated by the UPDATE STATISTICS statement in the MEDIUM or HIGH mode to determine the query plan
with the lowest cost.
Keeping related memory allocations in a pool helps to reduce memory fragmentation. It also enables the database server to free a large allocation of memory at one
time, as opposed to freeing each piece that makes up the pool.
All sessions have one or more memory pools. When the database server requires memory, it looks first in the specified pool. If insufficient memory is available in a pool
to satisfy a request, the database server adds memory from the system pool. If the database server cannot find enough memory in the system pool, it dynamically
allocates more segments to the virtual portion.
The database server allocates virtual shared memory for each of its subsystems (session pools, stacks, heaps, control blocks, system catalog, SPL routine caches, SQL
statement cache, sort pools, and message buffers) from pools that track free space through a linked list. When the database server allocates a portion of memory, it first
searches the pool free-list for a fragment of sufficient size. If it finds none, it brings new blocks into the pool from the virtual portion. When memory is freed, it goes back
to the pool as a free fragment and remains there until the pool is deleted. When the database server starts a session for a client application, for example, it allocates
memory for the session pool. When the session terminates, the database server returns the allocated memory as free fragments.
To specify the initial size of the virtual shared-memory portion, set the SHMVIRTSIZE configuration parameter. To specify the size of segments that are added later to the
virtual shared memory, set the SHMADD and EXTSHMADD configuration parameters.
To specify the amount of memory available for PDQ queries, set the DS_TOTAL_MEMORY parameter.
If you want to increase the amount of memory that is available for a query that is not a PDQ query and the PDQ priority is set to 0 (zero), you can change the amount in
any of the following ways:
Set the DS_NONPDQ_QUERY_MEM configuration parameter
Run the onmode -wm or the onmode -wf command
For example, if you use the onmode utility, specify a value as shown in the following example:
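A hedged sketch of that command; the value is in kilobytes, and 500 here is purely illustrative:

```shell
onmode -wm DS_NONPDQ_QUERY_MEM=500   # in-memory change only; use -wf to also update the onconfig file
```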
The minimum value for DS_NONPDQ_QUERY_MEM is 128 KB. The maximum supported value is 25 percent of the value of DS_TOTAL_MEMORY.
Parent topic: Management of the virtual portion of shared memory
Related reference:
Add a segment to the virtual portion of shared memory
Related information:
DS_TOTAL_MEMORY configuration parameter
DS_NONPDQ_QUERY_MEM configuration parameter
SHMVIRTSIZE configuration parameter
SHMADD configuration parameter
EXTSHMADD configuration parameter
Big buffers
A big buffer is a single buffer that is made up of several pages. The actual number of pages is platform-dependent.
Session data
When a client application requests a connection to the database server, the database server begins a session with the client and creates a data structure for the
session in shared memory. The created data structure is called the session-control block.
Thread data
When a client connects to the database server, in addition to starting a session, the database server starts a primary session thread and creates a thread-control
block for it in shared memory.
The database server shared memory contains seven internal tables that track shared-memory resources.
Buffer table
The buffer table tracks the addresses and status of the individual buffers in the shared-memory pool.
Chunk table
The chunk table tracks all chunks in the database server.
Dbspace table
The dbspace table tracks storage spaces in the database server.
Page-cleaner table
The page-cleaner table tracks the state and location of each of the page-cleaner threads.
Tblspace table
The tblspace table tracks all active tblspaces in a database server instance.
Transaction table
The transaction table tracks all transactions in the database server.
User table
The user table tracks all user threads and system threads.
When a buffer is used, it contains an image of a data or index page from disk. For more information about the purpose and content of a disk page, see Pages.
Each buffer in the buffer table contains the following control information, which is required for buffer management:
Buffer status
Buffer status is described as empty, unmodified, or modified. An unmodified buffer contains data, but the data can be overwritten. A modified (dirty) buffer contains
data that must be written to disk before it can be overwritten.
Current lock-access level
Buffers receive lock-access levels depending on the type of operation that the user thread is executing. The database server supports two buffer lock-access levels:
shared and exclusive.
Threads waiting for the buffer
Each buffer header maintains a list of the threads that are waiting for the buffer and the lock-access level that each waiting thread requires.
Each database server buffer has one entry in the buffer table.
For information about the database server buffers, see Resident portion of shared memory. For information about how to monitor the buffers, see Monitor buffers.
The database server determines the number of entries in the buffer-table hash table, based on the number of allocated buffers. The maximum number of hash values is
the largest power of 2 that is less than the value of buffers, which is specified in one of the BUFFERPOOL configuration parameter fields.
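That rule reduces to simple arithmetic; in the sketch below, 50,000 buffers is an assumed example value, not a recommendation:

```shell
# Largest power of 2 strictly less than the configured number of buffers.
buffers=50000
hash=1
while [ $((hash * 2)) -lt $buffers ]; do
    hash=$((hash * 2))
done
echo "$hash hash values"   # 32768 hash values
```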
If mirroring has been enabled, a corresponding mirror chunk table is also created when shared memory is set up. The mirror chunk table tracks all mirror chunks.
The chunk table in shared memory contains information that enables the database server to locate chunks on disk. This information includes the number of the initial
chunk and the number of the next chunk in the dbspace. Flags also describe chunk status: mirror or primary; offline, online, or recovery mode; and whether this chunk is
part of a blobspace. For information about monitoring chunks, see Monitor chunks.
The maximum number of entries in the chunk table might be limited by the maximum number of file descriptors that your operating system allows per process. You can
usually specify the number of file descriptors per process with an operating-system kernel-configuration parameter. For details, consult your operating-system manuals.
The dbspace-table information includes the following information about each dbspace:
Dbspace number
Dbspace name and owner
Dbspace mirror status (mirrored or not)
Date and time that the dbspace was created
If the storage space is a blobspace, flags indicate the media where the blobspace is located: magnetic or removable. If the storage space is an sbspace, it contains
internal tables that track metadata for smart large objects and large contiguous blocks of pages containing user data.
The number of page-cleaner threads is specified by the CLEANERS configuration parameter in the onconfig file. For advice on how many page-cleaner threads to
specify, see the chapter on configuration parameters in the HCL Informix® Administrator's Reference.
The page-cleaner table always contains 128 entries, regardless of the number of page-cleaner threads specified by the CLEANERS parameter in the onconfig file.
For information about monitoring the activity of page-cleaner threads, see information about the onstat -F option in the HCL Informix Administrator's Reference.
An active tblspace is one that is currently in use by a database session. Each active table accounts for one entry in the tblspace table. Active tblspaces include
database tables, temporary tables, and internal control tables, such as system catalog tables. Each tblspace table entry includes header information about the tblspace,
the tblspace name, and pointers to the tblspace tblspace in dbspaces on disk. (The shared-memory active tblspace table is different from the tblspace tblspace.) For
information about monitoring tblspaces, see Monitor tblspaces and extents.
The database server manages one tblspace table for each dbspace.
Tracking information derived from the transaction table is shown in the onstat -x display. For an example of the output that onstat -x displays, see monitoring
transactions in your HCL Informix® Performance Guide.
The database server automatically increases the number of entries in the transaction table, up to a maximum of 32,767, based on the number of current transactions.
For more information about transactions and the SQL statements that you use with transactions, see the IBM Informix Guide to SQL: Tutorial, the HCL Informix Guide to
SQL: Reference, and the IBM Informix Guide to SQL: Syntax.
UNIX only: The transaction table also specifically supports the X/Open environment. Support for the X/Open environment requires TP/XA.
Each client session has one primary thread and zero-to-many secondary threads, depending on the level of parallelism specified. System threads include one to monitor
and control checkpoints, one to process onmode commands, the B-tree scanner threads, and page-cleaner threads.
The database server increases the number of entries in the user table as necessary. You can monitor user threads with the onstat -u command.
The database server allocates big buffers to improve performance on large reads and writes. The database server uses a big buffer whenever it writes to disk multiple
pages that are physically contiguous. For example, the database server tries to use a big buffer to perform a series of sequential reads (light scans) or to read into
shared memory simple large objects that are stored in a dbspace.
Users do not have control over the big buffers. If the database server uses light scans, it allocates big buffers from shared memory.
For information about monitoring big buffers with the onstat command, see the topics about configuration effects on I/O activity in your HCL Informix® Performance
Guide.
The session-control block stores the session ID, the user ID, the process ID of the client, the name of the host computer, and various status flags.
You can impose restrictions on the memory allocated for sessions by setting the SESSION_LIMIT_MEMORY configuration parameter to specify the maximum amount of
memory that a session can allocate. The limits do not apply to a user who holds administrative privileges, such as user informix or a DBSA user.
Related information:
SESSION_LIMIT_MEMORY configuration parameter
onstat -g ses command: Print session-related information
When a client connects to the database server, in addition to starting a session, the database server starts a primary session thread and creates a thread-control block
for it in shared memory.
The database server also starts internal threads on its own behalf and creates thread-control blocks for them. When the database server switches from running one
thread to running another one (a context switch), it saves information about the thread (such as the register contents, the program counter, which is the address of the
next instruction, and global pointers) in the thread-control block. For more information about the thread-control block and how it is used, see Context switching.
Stacks
Each thread in the database server has its own stack area in the virtual portion of shared memory.
Heaps
Each thread has a heap to hold data structures that it creates while it is running.
Related concepts:
Buffer pool portion of shared memory
For a description of how threads use stacks, see Stacks. For information about how to monitor the size of the stack for a session, see the monitoring sessions and
threads section in your HCL Informix® Performance Guide.
The size of the stack space for user threads is specified by the STACKSIZE parameter in the onconfig file. You can change the size of the stack for all user threads, if
necessary, by changing the value of STACKSIZE.
You can use the INFORMIXSTACKSIZE environment variable to override the STACKSIZE value in the server configuration file. Set INFORMIXSTACKSIZE in the
environment and recycle the instance.
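For example, a sketch of overriding the stack size in the environment before restarting the instance; the value is in kilobytes, and 128 is purely illustrative:

```shell
export INFORMIXSTACKSIZE=128   # KB per user-thread stack; overrides the STACKSIZE setting
```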
Related information:
STACKSIZE configuration parameter
A heap is dynamically allocated when the thread is created. The size of the thread heap is not configurable.
When the database server accesses the distribution statistics for a specific column the first time, it reads the distribution statistics from the sysdistrib system catalog
table on disk and stores the statistics in the data-distribution cache. These statistics can then be read for the optimization of subsequent queries that access the column.
Performance improves if these statistics are efficiently stored and accessed from the data-distribution cache. You can configure the size of the data-distribution cache
with the DS_HASHSIZE and DS_POOLSIZE configuration parameters. For information about changing the default size of the data-distribution cache, see the topics
about queries and the query optimizer in your HCL Informix® Performance Guide.
Dictionary cache
When a session executes an SQL statement that requires access to a system catalog table, the database server reads data from the system catalog tables.
Sort memory
The amount of virtual shared memory that the database server allocates for a sort depends on the number of rows to be sorted and the size of the row, along with
other factors.
Global pool
The global pool stores structures that are global to the database server.
The database server stores the catalog data for each queried table in structures that it can access more efficiently during subsequent queries on that table. These
structures are created in the virtual portion of shared memory for use by all sessions. These structures constitute the dictionary cache.
You can configure the size of the dictionary cache with the DD_HASHSIZE and DD_HASHMAX configuration parameters. For more information about these parameters,
see the chapter on configuration effects on memory in your HCL Informix® Performance Guide.
The database server uses the SQL statement cache to store optimized SQL statements that a user runs. When users run a statement that is stored in the SQL statement
cache, the database server does not optimize the statement again, so performance improves.
For more information, see Set SQL statement cache parameters. For details on how these parameters affect the performance of the SQL statement cache, see the HCL
Informix® Performance Guide.
The following database operations can use large amounts of the virtual portion of shared memory to sort data:
Decision-support queries that involve joins, groups, aggregates and sort operations
Index builds
UPDATE STATISTICS statement in SQL
For information about parallel sorts, see your HCL Informix® Performance Guide.
When a session is required to access an SPL routine or other user-defined routine for the first time, the database server reads the definition from the system catalog
tables and stores the definition in the UDR cache.
You can configure the size of the UDR cache with the PC_HASHSIZE and PC_POOLSIZE configuration parameters. For information about changing the default size of
the UDR cache, see the chapter on queries and the query optimizer in your HCL Informix® Performance Guide.
The global pool contains the message queues where poll threads for network communications deposit messages from clients. The sqlexec threads pick up the
messages from the global pool and process them.
For more information, see the sections on network buffer pools and virtual portion of shared memory in your HCL Informix® Performance Guide.
The communications portion contains the message buffers for local client applications that use shared memory to communicate with the database server.
The size of the communications portion of shared memory equals approximately 12 KB multiplied by the expected number of connections required for shared-memory
communications (nettype ipcshm). If nettype ipcshm is not present, the expected number of connections defaults to 50. For information about how a client attaches to
the communications portion of shared memory, see How a client attaches to the communications portion (UNIX).
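That estimate is simple arithmetic; the sketch below uses the default of 50 expected connections:

```shell
# Approximate communications-portion size: 12 KB per expected ipcshm connection.
connections=50
size_kb=$((12 * connections))
echo "approximately ${size_kb} KB"   # approximately 600 KB
```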
Related reference:
Shared-memory connections (UNIX)
Virtual-extension segments contain thread heaps for DataBlade modules and user-defined routines that run in user-defined virtual processors.
The EXTSHMADD configuration parameter sets the size of virtual-extension segments. The SHMADD and SHMTOTAL configuration parameters apply to the
virtual-extension portion of shared memory, just as they do to the other portions of shared memory.
When a thread writes to shared memory, it uses mechanisms called mutexes and locks to prevent other threads from simultaneously writing to the same area. A mutex
gives a thread the right to access a shared-memory resource. A lock prevents other threads from writing to a buffer until the thread that placed the lock is finished with
the buffer and releases the lock.
Shared-memory mutexes
The database server uses mutexes to coordinate threads as they attempt to modify data in shared memory. Every modifiable shared-memory resource is
associated with a mutex.
Before a thread can modify a shared-memory resource, it must first acquire the mutex associated with that resource. After the thread acquires the mutex, it can modify
the resource. When the modification is complete, the thread releases the mutex.
If a thread tries to obtain a mutex and finds that it is held by another thread, the incoming thread must wait for the mutex to be released.
For example, two threads can attempt to access the same slot in the chunk table, but only one can acquire the mutex associated with the chunk table. Only the thread
that holds the mutex can write its entry in the chunk table. The second thread must wait for the mutex to be released and then acquire it.
For information about monitoring mutexes (which are also called latches), see Monitor the shared-memory profile and latches.
Each of these lock types enforces the required level of thread isolation during execution.
Share lock
A buffer is in share mode, or has a share lock, if multiple threads have access to the buffer to read the data but none intends to modify the data.
Exclusive lock
A buffer is in exclusive mode, or has an exclusive lock, if a thread demands exclusive access to the buffer.
All other thread requests that access the buffer are placed in the wait queue. When the executing thread is ready to release the exclusive lock, it wakes the next thread in
the wait queue.
FIFO/LRU queues
A buffer holds data for the purpose of caching. The database server uses the least-recently used (LRU) queues to replace the cached data. HCL Informix® also
has a first-in first-out (FIFO) queue. When you set the number of LRU queues, you are actually setting the number of FIFO/LRU queues.
Read-ahead operations
The database server automatically reads several pages ahead of the current pages that are being processed for a query, unless you disable automatic read
ahead operations. Reading ahead enables applications to run faster because they spend less time waiting for disk I/O.
Use the BUFFERPOOL configuration parameter to specify information about the buffer pool, including information about the number of LRU queues to create when
database server shared memory is set up and values for lru_min_dirty and lru_max_dirty, which control how frequently the shared-memory buffers are flushed to disk.
To improve transaction throughput, increase the lru_min_dirty and lru_max_dirty values. However, do not change the gap between the lru_min_dirty and
lru_max_dirty values. If the AUTO_LRU_TUNING configuration parameter is enabled, the values of the lru_max_dirty and lru_min_dirty fields are reset automatically
as needed to improve performance.
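As an illustration, a BUFFERPOOL entry in the onconfig file might look like the following sketch; all field values here are assumptions for the example, not recommendations:

```shell
BUFFERPOOL size=2k,buffers=50000,lrus=8,lru_min_dirty=50.00,lru_max_dirty=60.00
```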
Related information:
AUTO_LRU_TUNING configuration parameter
BUFFERPOOL configuration parameter
The free or unmodified page list is called the FLRU queue of the queue pair, and the modified page list is called the MLRU queue. The two separate lists eliminate the
task of searching a queue for a free or unmodified page. The following figure illustrates the structure of the LRU queues.
Rather than select a page randomly, the database server assumes that recently referenced pages are more likely to be referenced in the future than pages that it has not
referenced for some time. Thus, rather than replacing a recently accessed page, the database server replaces a least-recently accessed page. By maintaining pages in
least-recently to most-recently used order, the database server can easily locate the least-recently used pages in memory.
The buffers are evenly distributed among the FLRU queues. To calculate the number of buffers in each queue, divide the total number of buffers by the number of LRU
queues. The number of buffers and LRU queues are specified in the BUFFERPOOL configuration parameter.
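For example, with an assumed 50,000 buffers and 8 LRU queues:

```shell
# Buffers are divided evenly among the FLRU queues.
buffers=50000
lrus=8
echo "$((buffers / lrus)) buffers per queue"   # 6250 buffers per queue
```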
When a user thread is required to acquire a buffer, the database server randomly selects one of the FLRU queues and uses the oldest or least-recently used entry in the
list. If the least-recently used page can be latched, that page is removed from the queue.
If the FLRU queue is locked, and the end page cannot be latched, the database server randomly selects another FLRU queue.
If a user thread is searching for a specific page in shared memory, it obtains the LRU-queue location of the page from the control information stored in the buffer table.
After an executing thread finishes its work, it releases the buffer. If the page has been modified, the buffer is placed at the most-recently used end of an MLRU queue. If
the page was read but not modified, the buffer is returned to the FLRU queue at its most-recently used end. For information about how to monitor LRU queues, see
Monitor buffers.
You specify the number of LRU queues by setting the lrus value in the BUFFERPOOL configuration parameter. The default number of LRU queues depends on the
number of CPUs on your computer:
If you have a uniprocessor computer, the default value of the lrus field is 8.
If you have a multiprocessor computer and the MULTIPROCESSOR configuration parameter is enabled, the default value of the lrus field is the greater of 8 or
the number of CPU virtual processors.
After you provide an initial value for the lrus field in the BUFFERPOOL configuration parameter, monitor your LRU queues with the onstat -R command. If you find
that the percentage of dirty LRU queues consistently exceeds the value of the lru_max_dirty field in the BUFFERPOOL configuration parameter, increase the value
of the lrus field to add more LRU queues.
For example, if the value of the lru_max_dirty field is 70 and your LRU queues are consistently 75 percent dirty, you can increase the value of the lrus field. If you
increase the number of LRU queues, you shorten the length of the queues, which reduces the work of the page cleaners. However, you must allocate enough page
cleaners with the CLEANERS configuration parameter.
Related information:
LRU tuning
BUFFERPOOL configuration parameter
Another factor that influences whether page cleaners keep up with the number of pages that require cleaning is whether you have enough page-cleaner threads
allocated. The percent of dirty pages might exceed the BUFFERPOOL value specified for lru_max_dirty in some queues because no page cleaners are available to
clean the queues. After a while, the page cleaners might be too far behind to catch up, and the buffer pool becomes dirtier than the percent that you specified in
lru_max_dirty.
For example, suppose that the CLEANERS parameter is set to 8, and you increase the number of LRU queues from 8 to 12. You can expect little in the way of a
performance gain because the 8 cleaners must now share the work of cleaning an additional 4 queues. If you increase the number of CLEANERS to 12, each of the
now-shortened queues can be more efficiently cleaned by a single cleaner.
Setting CLEANERS too low can cause performance to suffer whenever a checkpoint occurs because page cleaners must flush all modified pages to disk during
checkpoints. If you do not configure a sufficient number of page cleaners, checkpoints take longer, causing overall performance to suffer.
The initial setting of lru_max_dirty is 60.00, so page cleaning begins when 60 percent of the buffers managed by a queue are modified.
In practice, page cleaning begins under several conditions, only one of which is when an MLRU queue reaches the value of lru_max_dirty. For more information about
how the database server performs buffer-pool flushing, see Flush data to disk.
The following example shows how the value of lru_max_dirty is applied to an LRU queue to specify when page cleaning begins and thereby limit the number of buffers
in an MLRU queue.
The lru_min_dirty value in the BUFFERPOOL configuration parameter specifies the acceptable percentage of buffers in an MLRU queue. For example, if lru_min_dirty
is set to 50.00, page cleaning is no longer required once the proportion of modified buffers in an LRU queue falls to 50 percent. In practice, page cleaning can continue
beyond this point, as directed by the page-cleaner threads.
The following example shows how the value of lru_min_dirty is applied to the LRU queue to specify the acceptable percent of buffers in an MLRU queue and the point
at which page cleaning ends.
You can use decimals for the lru_max_dirty and the lru_min_dirty values. For example, if you set lru_max_dirty to 1.0333 and lru_min_dirty to 1.0, this triggers the
LRU to write at 3,100 dirty buffers and to stop at 3,000 dirty buffers.
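The 3,100 and 3,000 figures imply a pool of 300,000 buffers (an inference from the text, not a stated value); the thresholds can be recomputed as follows:

```shell
# lru_max_dirty and lru_min_dirty are percentages of the buffer pool.
buffers=300000
start=$(awk -v b="$buffers" 'BEGIN { printf "%.0f", b * 1.0333 / 100 }')
stop=$(awk -v b="$buffers" 'BEGIN { printf "%.0f", b * 1.0 / 100 }')
echo "cleaning starts at $start dirty buffers, ends at $stop"
# cleaning starts at 3100 dirty buffers, ends at 3000
```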
For more information about how the database server flushes the buffer pool, see Flush data to disk.
Automatic read-ahead requests bring pages into the buffer pool cache during sequential scans of data records, which improves the performance of queries, including
OLTP queries and index scans, when the server detects that a query is encountering I/O.
By default, the database server automatically determines when to issue read-ahead requests and when to stop, based on whether the query is encountering I/O from disk:
If queries encounter I/O, the server issues read-ahead requests to improve the performance of the query. This performance improvement occurs because
read-ahead requests can greatly increase the speed of database processing by compensating for the slowness of I/O processing relative to the speed of CPU
processing.
If queries are mostly cached, the server detects that no I/O is occurring and does not read ahead.
Use the AUTO_READAHEAD configuration parameter to change the automatic read-ahead mode or to disable automatic read ahead for a query. You can:
Dynamically change the value of the AUTO_READAHEAD configuration parameter by running an onmode -wm or onmode -wf command.
Run a SET ENVIRONMENT AUTO_READAHEAD statement to change the mode or enable or disable automatic read-ahead for a session.
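For example, sketches of both methods; the value 0 disables automatic read-ahead, and the exact quoting of the SET ENVIRONMENT value should be checked against the SQL syntax guide:

```shell
onmode -wf AUTO_READAHEAD=0        # disable automatic read-ahead server-wide, persisted to the onconfig file
# Within a session (SQL):
#   SET ENVIRONMENT AUTO_READAHEAD '0';
```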
You can use the onstat -p command to view database server reads and writes and to monitor the number of times that a thread was required to wait for a shared-memory
latch. The RA-pgsused output field shows the number of read-ahead pages that the database server actually used, which lets you monitor the server's use of read-ahead.
Use the onstat -g rah command to display statistics about read-ahead requests.
Related information:
AUTO_READAHEAD configuration parameter
onstat -p command: Print profile counts
onstat -g rah command: Print read-ahead request statistics
The database server uses two types of buffer locks to provide this concurrency without a loss in thread isolation. The two types of lock access are share and exclusive.
(For more information, see Types of buffer locks.)
When a user thread modifies data in a buffer, it marks the buffer as dirty. When the database server flushes the buffer to disk, it subsequently marks the buffer as not
dirty and allows the data in the buffer to be overwritten.
Logical-log buffer
Page-cleaner threads manage buffer flushing. The database server always runs at least one page-cleaner thread. If the database server is configured for more than one
page-cleaner thread, the LRU queues are divided among the page cleaners for more efficient flushing. For information about specifying how many page-cleaner threads
the database server runs, see the CLEANERS configuration parameter in the HCL Informix® Administrator's Reference.
Flushing the physical-log buffer, the modified shared-memory page buffers, and the logical-log buffer must be synchronized with page-cleaner activity according to
specific rules designed to maintain data consistency.
Automatic LRU tuning affects all buffer pools and adjusts the lru_min_dirty and lru_max_dirty values in the BUFFERPOOL configuration parameter.
In practice, the physical-log buffer is flushed first and then the buffers that contain modified pages. Therefore, even when a shared-memory buffer page must be flushed
because a user thread is trying to acquire a buffer but none is available (a foreground write), the buffer pages cannot be flushed until the before-image of the page has
been written to disk.
The database server temporarily stores before-images of some of the modified disk pages in the physical-log buffer. If the before-image is written to the physical-log
buffer but not to the physical log on disk, the server flushes the physical-log buffer to disk before flushing the modified page to disk.
The database server uses only one of the two physical-log buffers at a time. This buffer is the active (or current) physical-log buffer. Before the database server flushes
the active physical-log buffer to disk, it makes the other buffer the active physical-log buffer so that the server can continue writing to a buffer while the first buffer is being
flushed.
Both the physical-log buffer and the physical log help maintain the physical and logical consistency of the data. For information about physical logging, checkpoints, and
fast recovery, see Physical logging, checkpoints, and fast recovery.
To see the write counts that the database server maintains, run the onstat -F command.
If you implement mirroring for the database server, data is always written to the primary chunk first. The write is then repeated on the mirror chunk. Writes to a mirror
chunk are included in the counts.
Foreground write
Whenever an sqlexec thread writes a buffer to disk, it is termed a foreground write. A foreground write occurs when an sqlexec thread searches through the LRU
queues on behalf of a user but cannot locate an empty or unmodified buffer.
LRU write
LRU writes are performed by page cleaners rather than by sqlexec threads. The database server performs LRU writes as background writes that typically occur
when the percentage of dirty buffers exceeds the percent that is specified for lru_max_dirty in the BUFFERPOOL configuration parameter.
Chunk write
Chunk writes are commonly performed by page-cleaner threads during a checkpoint or, possibly, when every page in the shared-memory buffer pool is modified.
Chunk writes, which are performed as sorted writes, are the most efficient writes available to the database server.
Related reference:
Monitor buffers
Related information:
onstat -F command: Print counts
To make space, the sqlexec thread flushes pages, one at a time, so that buffers become available to hold the data to be read from disk. For more information, see FIFO/LRU queues.
If the sqlexec thread must perform buffer flushing just to acquire a shared-memory buffer, performance can suffer. Foreground writes must be avoided. To display a
count of the number of foreground writes, run onstat -F. If you find that foreground writes are occurring on a regular basis, tune the value of the page-cleaning
parameters. Either increase the number of page cleaners or decrease the BUFFERPOOL lru_max_dirty value.
A foreground write can trigger an LRU write. When a foreground write occurs, the sqlexec thread that performed the write alerts a page-cleaner to wake up and clean
the LRU for which it performed the foreground write.
In an appropriately tuned system, page cleaners ensure that enough unmodified buffer pages are available for storing pages to be read from disk. Thus, sqlexec threads
that perform a query are not required to flush a page to disk before they read in the disk pages required by the query. This condition can result in significant performance
gains for queries that do not make use of foreground writes.
LRU writes are preferred over foreground writes because page-cleaner threads perform buffer writes much more efficiently than sqlexec threads do. To monitor both
types of writes, use onstat -F.
During a chunk write, each page-cleaner thread is assigned to one or more chunks. Each page-cleaner thread reads through the buffer headers and creates an array of
pointers to pages that are associated with its specific chunk. (The page cleaners have access to this information because the chunk number is contained within the
physical page number address, which is part of the page header.) This sorting minimizes head movement (disk seek time) on the disk and enables the page-cleaner
threads to use the big buffers during the write, if possible.
In addition, because user threads must wait for the checkpoint to complete, the page-cleaner threads are not competing with many threads for CPU time. As a result, the
page-cleaner threads can finish their work with less context switching.
The database server uses the shared-memory logical-log buffer as temporary storage for records that describe modifications to database server pages. From the logical-
log buffer, these records of changes are written to the current logical-log file on disk and eventually to the logical-log backup media. For a description of logical logging,
see Logical log.
When a page is modified that does not require a before-image in the physical-log file
When a page is modified that does not require a before-image in the physical log, the logical-log buffer must be flushed before that page is flushed to disk.
XPREPARE
ENDTRANS
For a comparison of buffered versus unbuffered logging, see the SET LOG statement in the IBM® Informix® Guide to SQL: Syntax.
When the database server terminates sessions that use unbuffered logging or nonlogging databases, the logical-log buffer is flushed to make sure that any logging
activity is recorded.
For a detailed description of the events that occur during a checkpoint, see Checkpoints.
The database server uses different methods to access each type of storage space. The following topics describe buffering methods for each.
You can also assign simple large objects to a blobspace. The database server writes simple large objects to a blobspace differently from the way that it writes other data,
which is first stored in a shared-memory buffer and then flushed to disk. For a description of blobspaces, see the chapter on disk structure and storage in the HCL Informix®
Administrator's Reference.
The database server does not create or access blobpages by way of the shared-memory buffer pool, and it does not write blobspace blobpages to either the logical or
physical logs.
If blobspace data passed through the shared-memory pool, it might dilute the effectiveness of the pool by driving out index pages and data pages. Instead, blobpage
data is written directly to disk when it is created.
To reduce logical-log and physical-log traffic, the database server writes blobpages from magnetic media to dbspace backup tapes and logical-log backup tapes in a
different way than it writes dbspace pages. For a description of how blobspaces are logged, see Log blobspaces and simple large objects.
During an insert, for example, the simple large object is transferred before the rest of the row data. After the simple large object is stored, the data row is created with a
56-byte descriptor that points to its location. For a description of how simple large objects are stored physically, see the structure of a dbspace blobpage in the disk
storage and structure chapter of the HCL Informix® Administrator's Reference.
Simple large object data is transferred from the client-application process to the database server in 1 KB segments. The database server begins filling the blobspace
buffers with the 1 KB pieces and attempts to buffer two blobpages at a time. The database server buffers two blobpages so that it can determine when to add a
forwarding pointer from one page to the next. When it fills the first buffer and discovers that more data remains to transfer, it adds a forward pointer to the next page
before it writes the page to disk. When no more data remains to transfer, the database server writes the last page to disk without a forward pointer.
When the thread begins writing the first blobspace buffer to disk, it attempts to perform the I/O based on the user-defined blobpage size. For example, if the blobpage
size is 32 KB, the database server attempts to read or write the data in 32,768-byte increments. If the underlying hardware (such as the disk controller) cannot transfer
this amount of data in a single operation, the operating-system kernel loops internally (in kernel mode) until the transfer is complete.
The blobspace buffers remain until the thread that created them is finished. When the simple large object is written to disk, the database server deallocates the pair of
blobspace buffers. The following figure illustrates the process of writing a simple large object to a blobspace.
Blobspace blobpages are allocated and tracked with the free-map page. Links that connect the blobpages and pointers to the next blobpage segments are created as
necessary.
A record of the operation (insert, update, or delete) is written to the logical-log buffer.
Related concepts:
Buffer pool portion of shared memory
A smart large object is stored in an sbspace. You cannot store simple large objects in an sbspace, and you cannot store smart large objects in a blobspace. An sbspace
consists of a user-data area and a metadata area. The user-data area contains the smart-large-object data. The metadata area contains information about the content of
the sbspace. For more information about sbspaces, see Sbspaces.
Because smart large objects pass through the shared-memory buffer pool and can be logged, you must consider them when you allocate buffers. Use the
BUFFERPOOL configuration parameter to allocate shared-memory buffers. As a general rule, try to have enough buffers to hold two smart-large-object pages for each
concurrently open smart large object. (The additional page is available for read-ahead purposes.) For more information about tuning buffers for smart large objects, see
your HCL Informix® Performance Guide.
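As a sketch of that sizing rule, with a hypothetical workload (the object count below is illustrative, not a recommendation):

```shell
open_slobs=500      # concurrently open smart large objects (hypothetical)
pages_each=2        # one current page plus one page for read-ahead
echo $((open_slobs * pages_each))   # suggested minimum buffers for smart-large-object pages
```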
Use the LOGBUFF configuration parameter to specify the size of the logical-log buffer. For information about setting each of the following configuration parameters, see
the HCL Informix Administrator's Reference:
BUFFERPOOL
LOGBUFF
The user-data area of smart large objects that are logged does not pass through the physical log, so changing the PHYSBUFF parameter is not required for smart large
objects.
For more information about the structure of an sbspace, see sbspace structure in the disk structures and storage chapter of the HCL Informix Administrator's Reference.
For information about creating an sbspace, see information about the onspaces utility in the HCL Informix Administrator's Reference.
Because 64-bit platforms allow for larger memory-address space, the maximum values for the following memory-related configuration parameters are larger on 64-bit
platforms:
BUFFERPOOL
CLEANERS
DS_MAX_QUERIES
DS_TOTAL_MEMORY
LOCKS
SHMADD
SHMVIRTSIZE
The machine notes for each 64-bit platform list the maximum values for these configuration parameters and platform-specific parameters such as SHMMAX. For more
information about the configuration parameters, see the HCL Informix® Administrator's Reference and the chapter on shared memory in the HCL Informix Performance
Guide.
These topics do not cover the DS_TOTAL_MEMORY configuration parameter. This parameter places a ceiling on the allocation of memory for decision-support queries.
For information about this parameter, see your HCL Informix® Performance Guide.
Parameter names are not provided because names vary among platforms, and not all parameters exist on all platforms. The following list describes these parameters by
function:
Maximum operating-system shared-memory segment size, expressed in bytes or KB
Minimum shared-memory segment size, expressed in bytes
Maximum number of shared-memory identifiers
Lower-boundary address for shared memory
Maximum number of attached shared-memory segments per process
Maximum amount of systemwide shared memory
UNIX only:
Maximum number of semaphore identifiers
Maximum number of semaphores
Maximum number of semaphores per identifier
On UNIX, the machine notes file contains recommended values that you use to configure operating-system resources. Use these recommended values when you
configure the operating system. For information about how to set these operating-system parameters, consult your operating-system manuals.
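On Linux, for example, these kernel parameters are typically set through sysctl. The names below are the standard Linux ones, and the values are placeholders only; always use the values recommended in your platform's machine notes:

```
# /etc/sysctl.conf -- illustrative Linux parameter names; values are placeholders
kernel.shmmax = 4398046511104    # maximum shared-memory segment size, in bytes
kernel.shmmni = 4096             # maximum number of shared-memory identifiers
kernel.sem = 250 32000 100 128   # semmsl semmns semopm semmni
```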
For specific information about your operating-system environment, see the machine notes file that is provided with the database server.
Semaphores (UNIX)
The database server operation requires one UNIX semaphore for each virtual processor, one for each user who connects to the database server through shared
memory (ipcshm protocol), six for database server utilities, and sixteen for other purposes.
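That requirement can be sketched as simple arithmetic; the virtual-processor and user counts below are hypothetical:

```shell
vps=8          # virtual processors (hypothetical)
shm_users=50   # users connecting through the ipcshm protocol (hypothetical)
utilities=6    # semaphores for database server utilities
other=16       # semaphores for other purposes
echo $((vps + shm_users + utilities + other))   # minimum semaphores required
```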
Related information:
UNIX configuration parameters that affect CPU utilization
Windows configuration parameters that affect CPU utilization
The first segment size that the database server tries to acquire is the size of the portion that it is allocating (resident, virtual, or communications), rounded up to the
nearest multiple of 8 KB.
The database server receives an error from the operating system if the requested segment size exceeds the maximum size allowed. If the database server receives an
error, it divides the requested size by two and tries again. Attempts at acquisition continue until the largest segment size that is a multiple of 8 KB can be created. Then
the database server creates as many additional segments as it requires.
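The retry logic can be sketched as a halving loop; the sizes below are hypothetical, and the real server also rounds requests to a multiple of 8 KB:

```shell
request=$((600 * 1024 * 1024))   # requested segment size in bytes (hypothetical)
limit=$((128 * 1024 * 1024))     # operating-system maximum segment size (hypothetical)
seg=$request
while [ "$seg" -gt "$limit" ]; do
  seg=$((seg / 2))               # request failed: halve the size and try again
done
echo "$seg"                      # largest attempt that fits under the limit
```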
For Windows version 2003 and earlier, you must enable this feature with an entry in the Windows boot file. Enabling larger shared-memory segments is referred to by
Microsoft as 4-gigabyte tuning (4GT).
To add the entry, edit the boot.ini file (in the top level, or root directory). You can either add a boot option or use the currently existing boot option. To enable support
for more than two gigabytes, add the following text to the end of the boot line:
/3GB
The following example has support for more than two gigabytes enabled:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows NT Workstation Version 4.00" /3GB
The maximum size of the shared-memory segment depends on the operating system, but it is approximately 3 gigabytes for Windows without additional drivers.
For most operating systems, virtual processors receive identifiers on a first-come, first-served basis, up to the limit that is defined for the operating system as a whole.
For more information about shared-memory identifiers, see How virtual processors attach to shared memory.
You might be able to calculate the maximum amount of shared memory that the operating system can allocate by multiplying the number of shared-memory identifiers by
the maximum shared-memory segment size.
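That estimate is a simple product; the operating-system limits below are hypothetical:

```shell
shmmni=4096                     # maximum number of shared-memory identifiers (hypothetical)
shmmax=$((32 * 1024 * 1024))    # maximum segment size in bytes (hypothetical)
echo $((shmmni * shmmax))       # approximate ceiling on total allocatable shared memory
```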
The BUFFERPOOL configuration parameter in the onconfig file specifies information about a buffer pool. Each page size that is used by the database server requires a
buffer pool, which is represented in the onconfig file by a BUFFERPOOL configuration parameter entry.
The following list contains parameters in the onconfig file that specify the configuration of the buffer pool and the internal tables in the resident portion of shared
memory. Before any changes that you make to the configuration parameters take effect, you must shut down and restart the database server.
LOCKS
Specifies the initial number of locks for database objects; for example, rows, key values, pages, and tables.
LOGBUFF
Specifies the size of the logical-log buffers.
PHYSBUFF
Specifies the size of the physical-log buffers.
RESIDENT
Specifies residency for the resident portion of the database server shared memory.
SERVERNUM
Specifies a unique identification number for the database server on the local host computer.
SHMTOTAL
Specifies the total amount of memory to be used by the database server.
The following list contains the configuration parameters that you use to configure the virtual portion of shared memory:
DS_HASHSIZE
Number of hash buckets for lists in the data-distribution cache.
DS_POOLSIZE
Maximum number of entries in the data-distribution cache.
PC_HASHSIZE
Specifies the number of hash buckets for the UDR cache and other caches that the database server uses.
PC_POOLSIZE
Specifies the number of UDRs (SPL routines and external routines) that can be stored in the UDR cache. In addition, this parameter specifies the size of other
database server caches, such as the typename cache and the opclass cache.
SHMADD
Specifies the size of dynamically added shared-memory segments.
SHMNOACCES
Specifies a list of virtual memory address ranges that are not used to attach shared memory. Use this parameter to avoid conflicts with other processes.
EXTSHMADD
Specifies the size of a virtual-extension segment added when a user-defined routine or a DataBlade routine runs in a user-defined virtual processor.
SHMTOTAL
Specifies the total amount of memory to be used by the database server.
SHMVIRTSIZE
Specifies the initial size of the virtual portion of shared memory.
STACKSIZE
Specifies the stack size for the database server user threads.
AUTO_READAHEAD
Specifies the automatic read-ahead mode or disables automatic read-ahead operations for a query. Automatic read-ahead operations help improve query performance
by issuing asynchronous page requests when the database server detects that the query is encountering I/O. Asynchronous page requests can improve query
performance by overlapping query processing with the processing necessary to retrieve data from disk and put it in the buffer pool.
CKPTINTVL
Specifies the maximum number of seconds that can elapse before the database server checks if a checkpoint is required and the RTO_SERVER_RESTART
configuration parameter is not set to turn on automatic checkpoint tuning.
CLEANERS
Specifies the number of page-cleaner threads that the database server is to run.
Related information:
Configuration parameters that affect memory utilization
Database configuration parameters
Shared memory
Modifying the onconfig file
The following table shows ways to configure the SQL statement cache.
Use the following onstat options to monitor the SQL statement cache:
onstat -g ssc
onstat -g ssc all
onstat -g ssc pool
For more information about these configuration parameters, onstat -g options, and onmode commands, see the HCL Informix® Administrator's Reference.
For more information about using the SQL statement cache, monitoring it with the onstat -g options, and tuning the configuration parameters, see improving query
performance in the HCL Informix Performance Guide. For details on qualifying and identical statements, see the IBM Informix Guide to SQL: Syntax.
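The onconfig entries involved look approximately like the following sketch; the values are illustrative, not recommendations, so verify the parameters against your release:

```
STMT_CACHE         1     # 0 = off, 1 = enabled for sessions that request it, 2 = on for all sessions
STMT_CACHE_SIZE    512   # size of the SQL statement cache, in KB
STMT_CACHE_HITS    1     # times a statement must be seen before it is fully cached
STMT_CACHE_NOLIMIT 0     # 0 = stop inserting statements when the cache is full
```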
For information about how to take the database server from online mode to offline, see Changing database server operating modes.
For a description of the resident portion of shared memory, see Resident portion of shared memory.
To turn on residency immediately for the resident portion of shared memory, run the following command:
% onmode -r
To turn off residency immediately for the resident portion of shared memory, run the following command:
% onmode -n
These commands do not change the value of the RESIDENT parameter in the onconfig file. That is, this change is not permanent, and residency reverts to the state
specified by the RESIDENT parameter the next time that you set up shared memory. On UNIX, you must be root or user informix to turn residency on or off. On
Windows, you must be a user in the Informix® Admin group to turn residency on or off.
To change the current state of residency, use a text editor to locate the RESIDENT parameter. Set RESIDENT to 1 to turn residency on or to 0 to turn residency off, and
rewrite the file to disk. Before the changes take effect, you must shut down and restart the database server.
You are not normally required to add segments to virtual shared memory because the database server automatically adds segments as necessary.
The option to add a segment with the onmode utility is useful if the number of operating-system segments is limited, and the initial segment size is so low, relative to the
amount that is required, that the operating-system limit of shared-memory segments is nearly exceeded.
Related concepts:
Size of the virtual portion of shared memory
If you enable the LOW_MEMORY_RESERVE configuration parameter by setting it to a specified value in kilobytes, critical activities, such as rollback activities, can
complete even when a user is receiving out-of-memory errors. If the value of LOW_MEMORY_RESERVE is 0, the low-memory reserve functionality is turned off.
For example, 512 kilobytes is a reasonable amount of reserved memory. To reserve 512 kilobytes, specify:
LOW_MEMORY_RESERVE 512
You can also use the onmode -wm or onmode -wf command to dynamically adjust the value of the LOW_MEMORY_RESERVE configuration parameter.
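For example, either of the following commands would set the reserve to 512 KB; the difference is whether the change is also recorded in the onconfig file:

```
onmode -wf LOW_MEMORY_RESERVE=512   # change the value and update the onconfig file
onmode -wm LOW_MEMORY_RESERVE=512   # change the value in memory only
```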
Use the onstat -g seg command to monitor the LOW_MEMORY_RESERVE value. Look for the last two lines of output, which contain the phrase "low memory
reserve." The first of these output lines shows the size of the reserved memory in bytes. The second of these lines shows the number of times that the database server has
used this memory and the maximum memory required. Both of these values are reset when the server is restarted.
Related information:
LOW_MEMORY_RESERVE configuration parameter
onstat -g seg command: Print shared memory segment statistics
onmode -wf, -wm: Dynamically change certain configuration parameters
To disable automatic low-memory management, run an SQL administration API command with the scheduler lmm disable argument.
Related information:
LOW_MEMORY_MGR configuration parameter
scheduler lmm enable argument: Specify automatic low memory management settings (SQL administration API)
scheduler lmm disable argument: Stop automatic low memory management (SQL administration API)
onstat -g lmm command: Print low memory management information
Suppose you want to specify that when the database server has 10 MB or less of free memory, it starts running the low memory management processes that can stop
applications and free memory. Suppose you also want to specify that the server stops running the low memory management processes when the server has 20 MB or
more of free memory:
1. Set the LOW_MEMORY_MGR configuration parameter to 1 and restart the server, or run an onmode -wf command to change the value of the
LOW_MEMORY_MGR configuration parameter.
2. Run an SQL administration API command with the scheduler lmm enable argument and low memory parameters, as follows:
3. Run the onstat -g lmm command to display information about automatic low memory management settings, including the amount of memory that the server is
attempting to maintain, the amount of memory currently used by the server, the low memory start and stop thresholds, and other memory-related statistics.
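For the thresholds described above (start at 10 MB free, stop at 20 MB free), the SQL administration API call in step 2 looks approximately like the following sketch; the argument names are assumed from the scheduler lmm enable documentation, so verify them against your release:

```
EXECUTE FUNCTION task("scheduler lmm enable",
                      "LMM START THRESHOLD", "10MB",
                      "LMM STOP THRESHOLD", "20MB");
```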
You can also view low memory management information in the online.log file.
Parent topic: Configure the server response when memory is critically low
Related information:
LOW_MEMORY_MGR configuration parameter
scheduler lmm enable argument: Specify automatic low memory management settings (SQL administration API)
scheduler lmm disable argument: Stop automatic low memory management (SQL administration API)
onstat -g lmm command: Print low memory management information
You can use the onstat -o utility to capture a static snapshot of database server shared memory for later analysis and comparison.
Monitor buffers
You can obtain both statistics on buffer use and information about specific buffers.
The database server allocates shared-memory segments dynamically, so these numbers can change. If the database server is allocating too many shared-memory
segments, you can increase the SHMVIRTSIZE configuration parameter. For more information, see the topics about configuration parameters in the HCL Informix®
Administrator's Reference.
The onstat -g seg command lists information for each shared-memory segment, including the address and size of the segment, and the amount of memory that is
free or in use. For an example of onstat -g seg output, see information about the onstat utility in the HCL Informix Administrator's Reference.
You can obtain statistics on latch use and information about specific latches. These statistics provide a measure of the system activity.
To reset these statistics to zero, use the onstat -z option. For a description of all the fields that onstat displays, see information about the onstat utility in the HCL
Informix® Administrator's Reference.
SMI tables
Query the sysprofile table to obtain shared-memory statistics.
You can use the following command-line utilities to monitor shared memory and latches:
onstat -s
Use onstat -s command to obtain latch information.
onstat -p
Run onstat -p to display statistics on database server activity and waiting latches (in the lchwaits field). For an example of onstat -p output, see information
about the onstat utility in the HCL Informix® Administrator's Reference.
The sysprofile table contains all of the statistics available in onstat -p output except the ovbuff, usercpu, and syscpu statistics.
The statistical information includes the percentage of data writes that are cached to buffers and the number of times that threads were required to wait to obtain a buffer.
The percentage of writes that are cached is an important measure of performance. The number of waits for buffers gives a measure of system concurrency.
Information about specific buffers includes a listing of all the buffers in shared memory that are held by a thread. You can use this information to track the status of a
particular buffer. For example, you can determine whether another thread is waiting for the buffer.
You can obtain statistics that relate to buffer availability and information about the buffers in each LRU queue. The statistical information includes the number of times
that the database server attempted to exceed the maximum number of buffers and the number of writes to disk (categorized by the event that caused the buffers to
flush). These statistics help you determine if the number of buffers is appropriate. Information about the buffers in each LRU queue consists of the length of the queue
and the percentage of the buffers in the queue that were modified.
You can obtain information about buffer pool activity from the onstat utility, the sysprofile SMI table, or the HCL OpenAdmin Tool (OAT) for Informix®.
onstat -g buf
Run the onstat -g buf command to obtain statistics about how active and efficient each buffer is. The following types of statistics are shown:
Page reads and writes
Caching percentages
Waits for buffers
Flushes
Extensions of the buffer pool
Buffer pool segments
Fast cache
onstat -B
Run the onstat -B command to obtain information about all of the buffers that are not on the free-list, including:
The shared memory address of the buffer
The address of the thread that currently holds the buffer
The address of the first thread that is waiting for each buffer
Information about buffer pools
onstat -b
Run the onstat -b command to obtain the following information about each buffer:
Address of each buffer that is currently held by a thread
Page numbers for the page that is held in the buffer
Type of page that is held in the buffer (for example, data page, tblspace page, and so on)
You can compare the addresses of the user threads to the addresses that are shown in the onstat -u output to obtain the session ID number.
onstat -X
Run the onstat -X command to obtain the same information as for onstat -b, along with the complete list of all threads that are waiting for buffers, not just the
first waiting thread.
onstat -R
Run the onstat -R command to show information about buffer pools, the number of buffers in each LRU queue, and the number and percentage of the buffers that
are modified or free.
onstat -F
Run the onstat -F command to obtain a count by write type of the writes that are performed and the following information about the page cleaners:
Page-cleaner number
Page-cleaner shared-memory address
Current state of the page cleaner
LRU queue to which the page cleaner was assigned
Query the sysprofile table to obtain statistics on cached reads and writes, write types, and total buffer waits. The following rows are relevant.
bufreads
Number of reads from buffers
bufwrites
Number of writes to buffers
buffwts
Number of times that any thread was required to wait for a buffer
chunkwrites
Number of chunk writes
dskreads
Number of reads from disk
dskwrites
Number of writes to disk
fgwrites
Number of foreground writes
lruwrites
Number of LRU writes
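These rows can be read with a query against the sysmaster database; the following is a sketch that assumes the name and value columns of the sysprofile table:

```
DATABASE sysmaster;
SELECT name, value
  FROM sysprofile
 WHERE name IN ('bufreads', 'bufwrites', 'buffwts',
                'fgwrites', 'lruwrites', 'chunkwrites');
```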
Related concepts:
Types of writes during flushing
Related information:
onstat -g buf command: Print buffer pool profile information
onstat -b command: Print buffer information for buffers in use
onstat -B command: Prints information about used buffers
onstat -X command: Print thread information
onstat -R command: Print LRU, FLRU, and MLRU queue information
onstat -F command: Print counts
sysprofile
Important: This procedure must be performed by a DBA with experience using HCL Informix®. Consult technical support for assistance. This procedure is for UNIX
systems only.
In the event of a failure of a database server instance, follow this procedure to delete shared memory segments:
MT global info:
sessions threads vps lngspins
0 49 14 1
sched calls thread switches yield 0 yield n yield forever
total: 900100 898846 1238 27763 423778
per sec: 327 325 2 12 151
Virtual processor summary:
class vps usercpu syscpu total
cpu 4 0.92 0.10 1.02
aio 4 0.02 0.02 0.04
c. Use the kill command to terminate (in order) process IDs 2599, 2603, 2604, and 2605.
4. If the shared segments have not been removed, then follow these steps:
a. Determine the server number. The server number can be found by examining the onconfig file of the Informix instance.
b. Add the server number to 21078. For example, if the server number is 1, then add 1 to 21078, giving 21079.
c. Convert the sum from the previous step to hexadecimal. In the previous example, 21079 is 5257 hexadecimal.
d. Concatenate 48 to the hex value from the previous step. For example, 525748.
e. Run the ipcs utility as root to display the shared memory segments, if any, left open by the server. Search the key column for the number from 4.d.
f. Remove each shared memory ID associated with the number from 4.d.
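The arithmetic in steps b through d can be sketched in the shell; the server number below is hypothetical:

```shell
servernum=1                    # SERVERNUM from the onconfig file (hypothetical)
sum=$((21078 + servernum))     # step b: 21078 + 1 = 21079
hex=$(printf '%x' "$sum")      # step c: 21079 in hexadecimal is 5257
key="${hex}48"                 # step d: concatenate 48, giving 525748
echo "$key"                    # value to search for in the ipcs key column
```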
For more information about the onclean utility, see the HCL Informix Administrator's Reference.
Consult your operating system documentation for the correct ipcrm syntax for your system.
The following topics define terms and explain concepts that you must understand to manage disk space. These topics cover the following areas:
Definitions of the physical and logical units that the database server uses to store data on disk
Instructions on how to calculate the amount of disk space that you require to store your data
Guidelines on how to lay out your disk space and where to place your databases and tables
Instructions on using external tables
The database server uses the following physical units to manage disk space:
Chunks
Pages
Blobpages
Sbpages
Extents
The database server maintains the following storage structures to ensure physical and logical consistency of data:
Logical log
Physical log
Reserved pages
Disk-layout guidelines
The following goals are typical for efficient disk layout:
Logical-volume manager
You can use the logical-volume manager (LVM) utility to manage your disk space through user-defined logical volumes.
Related concepts:
Storage space creation and management
Manage disk space
Related information:
Limits in Informix
Reserved Pages
Chunks provide administrators with a significantly large unit for allocating disk space. The maximum size of an individual chunk is 4 TB. The number of allowable chunks
is 32,766. If you have upgraded from a version before version 10.00, you must run the onmode -BC 2 command to enable the maximum chunk size and the maximum
number of chunks; otherwise, the maximum chunk size is 2 GB.
When you create a chunk, you specify its path, size, and the associated storage space name.
The database server also uses chunks for mirroring. When you mirror a chunk, the database server maintains two copies of the data on the chunk. Every write operation
to a primary chunk is automatically followed by an identical write operation to the mirror chunk. Read operations are evenly divided between the two chunks. If either the
primary chunk or the mirror chunk fails, the chunk that failed is marked as down, and the other chunk performs all operations without interrupting the user access to data.
When you create tables, indexes, and other database objects, chunk space is allocated, or assigned, to those objects. Space that is allocated is not necessarily used.
For example, when you create a table, you allocate space for it, but that space is not used until you add data to the table. When all the chunks in a dbspace report 0 free
pages, you cannot create new database objects in that dbspace. However, you can continue to add data to existing database objects as long as they have unused
space. You can monitor chunks by using the onstat -d command or the OpenAdmin Tool (OAT) for Informix®.
Extendable chunks
Extendable chunks are chunks that Informix can automatically extend or you can manually extend when additional storage space is required for an application. If
you have extendable chunks, you are not required to add new chunks or spend time trying to determine which storage space will run out of space and when it will
run out of space.
Related concepts:
Sbspaces
Blobspaces
Dbspaces
Mirroring
Related reference:
Specify names for storage spaces and chunks
Related information:
onstat -d command: Print chunk information
onmode -BC: Allow large chunk mode
A storage space can be on an NFS-mounted file system using regular operating-system files.
On Windows, using raw disk space provides slight performance advantages. Because NTFS files are a more standard method of storing data, use NTFS files instead of
raw disks. Consider using raw disks if your database server requires a large amount of disk access.
On Windows, raw disk space can be either a physical drive without a drive letter or a logical disk partition that has been assigned a drive letter using the Disk
Administrator. The space can either be formatted or unformatted. If it contains data, the data is overwritten after the space has been allocated to the database server.
For more information, see Allocating raw disk space on Windows.
NTFS files
You must use NTFS files, not FAT files, for disk space on Windows. For more information, see Allocating NTFS file space on Windows.
Files that are buffered through the operating system are often called cooked files.
When dbspaces are located on raw disk devices (also called character-special devices), the database server uses unbuffered disk access.
To create a raw device, configure a block device (hard disk) with a raw interface. The storage space that the device provides is called raw disk space. A chunk of raw
disk space is physically contiguous.
The name of the chunk is the name of the character-special file in the /dev directory. In many operating systems, you can distinguish the character-special file from the
block-special file by the first letter in the file name (typically r). For example, /dev/rsd0f is the character-special device that corresponds to the /dev/sd0f block-
special device.
A cooked file is a regular file that the operating system manages. Cooked file chunks and raw disk chunks are equally reliable. Unlike raw disk space, the logically
contiguous blocks of a cooked file might not be physically contiguous.
You can more easily allocate cooked files than raw disk space. To allocate a cooked file, you must create the file on any existing partition. The name of the chunk is the
complete path name of the file. These steps are described in Allocating cooked file spaces on UNIX.
In a learning environment, where performance is not critical, or for static data, cooked files can be convenient. If you must use cooked UNIX files, store the least
frequently accessed data in those files. Store the files in a file system with minimal activity.
For cooked file chunks, the operating system processes all chunk I/O from its own buffer pool and ensures that all writes to chunks are physically written to the disk.
Important: While you must generally use raw disk devices on UNIX to achieve better performance, if you enable the DIRECT_IO configuration parameter, the
performance for cooked files can approach the performance of raw devices used for dbspace chunks. This occurs because direct I/O bypasses the file system
buffers. If you have an AIX® operating system, you can also enable concurrent I/O for HCL Informix® to use with direct I/O when reading and writing to chunks that use
cooked files. For more information about using direct I/O or concurrent I/O, see the HCL Informix Performance Guide.
To determine the best device for performance, perform benchmark testing on the system with both types of devices for the dbspace and table layout.
When using raw disks, you are not required to take any special action to create chunks and files that are larger than two gigabytes. If you want to create large chunks in
cooked files, or if you want to use the various database export and import utilities with large files, you must ensure that the file systems that hold the large files are
appropriately configured.
Configuring Informix to automatically add more storage space prevents the error that can occur if a partition requires additional storage space and cannot find that space
in one of the chunks in the space in which the partition is located.
You use an SQL administration API command with the modify space sp_sizes argument to modify the extend size and the create size for the space in which your
extendable chunk is located.
Related concepts:
Automatic space management
The storage pool
Related tasks:
Marking a chunk as extendable or not extendable
Manually expanding a space or extending an extendable chunk
Tip: With a 4-terabyte limit to the size of a chunk, you can avoid partitioning a disk by assigning a single chunk per disk drive.
You can use an offset to indicate the location of a chunk on the disk partition, file, or device. For example, suppose that you create a 1000 KB chunk that you want to
divide into two chunks of 500 KB each. You can use an offset of 0 KB to mark the beginning of the first chunk and an offset of 500 KB to mark the beginning of the
second chunk.
You can specify an offset whenever you create, add, or drop a chunk from a dbspace, blobspace, or sbspace.
You might also be required to specify an offset to prevent the database server from overwriting partition information.
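The offset arithmetic in the example above can be sketched as follows (hypothetical sizes; this only illustrates how consecutive offsets are derived, it is not a supported utility):

```python
def chunk_offsets(device_kb: int, chunk_kb: int) -> list:
    """Offsets (in KB) for consecutive equal-sized chunks on one device.

    Mirrors the example above: a 1000 KB partition split into two
    500 KB chunks gets offsets 0 and 500.
    """
    return list(range(0, device_kb, chunk_kb))

print(chunk_offsets(1000, 500))  # [0, 500]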
Related concepts:
Disk-layout guidelines
Related tasks:
Allocating raw disk space on UNIX
The following figure illustrates the concept of a page, represented by a darkened sector of a disk platter.
On most UNIX platforms, the page size is 2 KB. On Windows, the page size is 4 KB. Because your hardware determines the size of your page, you cannot alter this
value.
A chunk contains a certain number of pages, as the following figure illustrates. A page is always entirely contained within a chunk; that is, a page cannot cross chunk
boundaries.
For information about how the database server structures data within a page, see the chapter on disk structures and storage in the HCL Informix Administrator's
Reference
You specify blobpage size as a multiple of the database server page size. Because the database server allocates blobpages as contiguous spaces, it is more efficient to
store simple large objects in blobpages that are as close to the size of the data as possible. The following figure illustrates the concept of a blobpage, represented as a
multiple (three) of a data page.
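As a back-of-the-envelope illustration of why a blobpage size close to the typical object size is more efficient (hypothetical sizes; see Determine blobpage size for the actual sizing guidance):

```python
import math

def blobpage_waste(object_kb: int, blobpage_kb: int) -> int:
    """KB wasted when one simple large object occupies whole blobpages.

    Blobpages are allocated whole, so any unused tail of the last
    blobpage is wasted space.
    """
    pages = math.ceil(object_kb / blobpage_kb)
    return pages * blobpage_kb - object_kb

# A 6 KB object on a 6 KB blobpage wastes nothing;
# the same object on an 8 KB blobpage wastes 2 KB.
print(blobpage_waste(6, 6), blobpage_waste(6, 8))
```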
For information about how HCL Informix® structures data stored in a blobpage, see structure of a blobspace blobpage in the disk structures and storage topics of the
HCL Informix Administrator's Reference.
Just as with pages in a chunk, a certain number of blobpages compose a chunk in a blobspace, as the following figure illustrates. A blobpage is always entirely
contained in a chunk and cannot cross chunk boundaries.
Instead of storing simple-large-object data in a blobspace, you can choose to store it in a dbspace. However, for a simple large object larger than two pages,
performance improves when you store it in a blobspace. Simple large objects stored in a dbspace can share a page, but simple large objects stored in a blobspace do
not share pages.
For information about how to determine the size of a blobpage, see Determine blobpage size. For a description of blobspaces, see Blobspaces.
The unit of allocation in an sbspace is an extent, whereas the unit of allocation in a blobspace is a blobpage. Just as with pages in a chunk, a certain number of smart
large object extents compose a chunk in an sbspace, as the following figure illustrates. An extent is always entirely contained in a chunk and cannot cross chunk
boundaries.
Smart large objects cannot be stored in a dbspace or blobspace. For more information, see Sbspaces, and sbspace structure in the disk structures and storage chapter
of the HCL Informix® Administrator's Reference.
The database server calculates the extent size for a smart large object from a set of heuristics, such as the number of bytes in a write operation. For more information,
see Extent sizes for sbspaces.
When you create a table, the database server allocates a fixed amount of space to contain the data to be stored in that table. (See Tables.) When this space fills, the
database server must allocate space for additional storage. The physical unit of storage that the database server uses to allocate both the initial and subsequent storage
space is called an extent.
Figure 1. An extent that consists of six contiguous pages on a raw disk device
Every permanent database table has two extent sizes associated with it. The initial-extent size is the number of KB allocated to the table when it is first created. The
next-extent size is the number of KB allocated to the table when the initial extent (and any subsequent extents) becomes full. For permanent tables and user-defined
temporary tables, the next-extent size begins to double after each extent. For system-created temporary tables, the next-extent size begins to double after 4 extents
have been added.
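The doubling rule above can be pictured with a simplified model (this is a sketch of the stated rule only, not the server's exact allocation algorithm; the sizes are hypothetical):

```python
def next_extent_sizes(first_kb, next_kb, n, start_doubling_after=1):
    """Sizes (KB) of the first n extents under a simplified doubling model.

    start_doubling_after=1 approximates permanent and user-defined
    temporary tables (the next-extent size doubles with each extent);
    start_doubling_after=4 approximates system-created temporary tables
    (doubling begins after 4 extents have been added).
    """
    sizes = [first_kb]
    size = next_kb
    for i in range(1, n):
        sizes.append(size)
        if i >= start_doubling_after:
            size *= 2
    return sizes

print(next_extent_sizes(16, 16, 5))        # [16, 16, 32, 64, 128]
print(next_extent_sizes(16, 16, 7, 4))     # [16, 16, 16, 16, 16, 32, 64]
```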
When you create a table, you can specify the size of the initial extent, and the size of the extents to be added as the table grows. You can also modify the size of an
extent in a table in a dbspace, and you can modify the size of new subsequent extents. To specify the initial-extent size and next-extent size, use the CREATE TABLE
and ALTER TABLE statements. For more information, see the IBM® Informix® Guide to SQL: Syntax and disk structures in the HCL Informix Administrator's Reference.
When you create a table with a column for CLOB or BLOB data types, you also define extents for an sbspace. For more information, see Storage characteristics of
sbspaces.
The following figure shows how the database server allocates six pages for an extent:
An extent is always entirely contained in a chunk; an extent cannot cross chunk boundaries.
If the database server cannot find the contiguous disk space that is specified for the next-extent size, it searches the next chunk in the dbspace for contiguous
space.
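The search behavior above can be pictured with a small sketch (a simplified model in which each chunk is summarized by its largest contiguous free run; the server's real allocation logic considers more state):

```python
def place_extent(free_runs, extent_pages):
    """Index of the first chunk whose largest contiguous free run can
    hold the extent, or None if no chunk in the dbspace can.

    If the current chunk cannot satisfy the next-extent size, the
    search moves to the next chunk in the dbspace.
    """
    for i, run in enumerate(free_runs):
        if run >= extent_pages:
            return i
    return None

# Chunk 0 has only 4 contiguous free pages, so a 6-page extent
# is placed in chunk 1.
print(place_extent([4, 10, 8], 6))  # 1
```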
Related concepts:
Tables
Related information:
Extent size doubling
When you create a standard or temporary dbspace, you can specify the page size for the dbspace. You cannot specify a page size for blobspaces, sbspaces, or external
spaces. If you do not specify a page size, the page size of the root dbspace is used as the default.
When you create a standard dbspace, you can specify the first and next extent sizes for the tblspace in the dbspace. Specifying the extent sizes reduces the number of
tblspace extents and reduces the frequency of situations when you must place the tblspace extents in non-primary chunks.
You can mirror every chunk in a mirrored dbspace. As soon as the database server allocates a mirror chunk, it flags all space in that mirror chunk as full.
Root dbspace
The root dbspace is the initial dbspace that the database server creates.
Temporary dbspaces
A temporary dbspace is a dbspace reserved exclusively for the storage of temporary tables. It behaves differently from a standard dbspace in many ways.
Related concepts:
Chunks
By storing high-use access tables or critical dbspaces (root dbspace, physical log, and logical log) on your fastest disk drive, you can improve performance. By storing
critical data on separate physical devices, you ensure that when one of the disks that holds noncritical data fails, the failure affects only the availability of data on that
disk.
As the following figure shows, to control the placement of databases or tables, you can use the IN dbspace option of the CREATE DATABASE or CREATE TABLE
statements.
Before you create a database or table in a dbspace, you must first create the dbspace.
A dbspace includes one or more chunks, as the following figure shows. You can add more chunks at any time. A database server administrator must monitor dbspace
chunks for fullness and anticipate the need to allocate more chunks to a dbspace. When a dbspace contains more than one chunk, you cannot specify the chunk
in which the data is located.
Related concepts:
Tables
Manage dbspaces
Related reference:
Monitor disk usage
The root dbspace is special because it contains reserved pages and internal tables that describe and track all physical and logical units of storage. (For more information
about these topics, see Tables and the disk structures and storage chapter in the HCL Informix® Administrator's Reference.) The initial chunk of the root dbspace and its
mirror are the only chunks created during disk-space setup. You can add other chunks to the root dbspace after disk-space setup.
The following disk-configuration parameters in the onconfig configuration file refer to the first (initial) chunk of the root dbspace:
ROOTPATH
ROOTOFFSET
ROOTNAME
MIRRORPATH
MIRROROFFSET
TBLTBLFIRST
TBLTBLNEXT
The root dbspace is also the default dbspace location for any database created with the CREATE DATABASE statement.
The root dbspace is the default location for all temporary tables created by the database server to perform requested data management.
See Size of the root dbspace for information about how much space to allocate for the root dbspace. You can also add extra chunks to the root dbspace after you set up
database server disk space.
A temporary dbspace is temporary only in the sense that the database server does not preserve any of its contents when the database server restarts. The database
server never drops a temporary dbspace unless it is explicitly directed to do so.
Whenever you start the database server, all chunks in temporary dbspaces are recreated from scratch. These chunks can therefore be located on RAM drives if desired.
The database server does not perform logical or physical logging for temporary dbspaces. Because temporary dbspaces are not physically logged, fewer checkpoints
and I/O operations occur, which improves performance.
For a temporary table in a standard dbspace, at minimum the server logs table creation, the allocation of extents, and the dropping of the table. In contrast, the database
server does not log any operations on tables stored in temporary dbspaces. Logical-log suppression in temporary dbspaces reduces the number of log records to roll
forward during logical recovery as well, thus improving the performance during critical downtime.
Temporary dbspaces are never archived by the database server, reducing the size of your storage-space backup.
In addition to temporary tables, the database server uses temporary dbspaces to store the before images of data that is overwritten while backups are occurring and
overflow from query processing that occurs in memory. Make sure that you have correctly set the DBSPACETEMP environment variable or parameter to specify
dbspaces with enough space for your needs. If there is not enough room in the specified dbspaces, the root dbspace is used. If the root dbspace fills, the backup may
fail.
If you have more than one temporary dbspace and execute a SELECT statement into a temporary table, the results of the query are inserted across the temporary dbspaces in round-robin order.
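A rough picture of the round-robin distribution (a simplified model with hypothetical dbspace names, not the server's implementation):

```python
from itertools import cycle

def round_robin_insert(rows, temp_dbspaces):
    """Distribute result rows across temporary dbspaces in
    round-robin order."""
    placement = {name: [] for name in temp_dbspaces}
    for row, name in zip(rows, cycle(temp_dbspaces)):
        placement[name].append(row)
    return placement

print(round_robin_insert([1, 2, 3, 4, 5], ["tmp1", "tmp2"]))
# {'tmp1': [1, 3, 5], 'tmp2': [2, 4]}
```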
For detailed instructions on how to create a temporary dbspace, see Creating a temporary dbspace.
A blobspace is a logical storage unit composed of one or more chunks that store only TEXT and BYTE data.
A blobspace stores TEXT and BYTE data in the most efficient way possible. You can store TEXT and BYTE columns associated with distinct tables (see Tables) in the
same blobspace.
The database server writes data stored in a blobspace directly to disk. This data does not pass through resident shared memory. If it did, the volume of data might
occupy so many of the buffer-pool pages that other data and index pages would be forced out. For the same reason, the database server does not write TEXT or BYTE
objects that are assigned to a blobspace to either the logical or physical log. The database server logs blobspace objects by writing them directly from disk to the logical-
log backup tapes when you back up the logical logs. Blobspace objects never pass through the logical-log files.
When you create a blobspace, you assign to it one or more chunks. You can add more chunks at any time. One of the tasks of a database server administrator is to
monitor the chunks for fullness and anticipate the necessity to allocate more chunks to a blobspace. For instructions on how to monitor chunks for fullness, see Monitor
simple large objects in a blobspace. For instructions on how to create a blobspace, add chunks to a blobspace, or drop a chunk from a blobspace, see Manage disk
space.
For information about the structure of a blobspace, see the topics about disk structures and storage in the HCL Informix® Administrator's Reference.
Related concepts:
Chunks
Smart large objects consist of CLOB (character large object) and BLOB (binary large object) data types. User-defined data types can also use sbspaces. For more
information about data types, see the HCL Informix® Guide to SQL: Reference.
Temporary sbspaces
Use a temporary sbspace to store temporary smart large objects without metadata logging and user-data logging.
Related concepts:
Chunks
Programmers can use functions similar to UNIX and Windows functions to read, write, and seek smart large objects. HCL Informix® provides this smart-large-
object interface in the DataBlade API and the Informix ESQL/C programming interface.
You can log all write operations on data stored in sbspaces. You can commit or roll back changes if a failure occurs during a transaction.
You can lock smart large objects at different levels of granularity, and the lock durations obey the rules for transaction isolation levels. For more information about
locking and concurrency, see your HCL Informix Performance Guide.
Smart large objects within table rows are not required to be retrieved in one statement.
An application can store or retrieve smart large objects in pieces using either the DataBlade API or the Informix ESQL/C programming interface. For more
information about the DataBlade API functions, see the IBM Informix DataBlade API Function Reference. For more information about the Informix ESQL/C
functions, see the IBM Informix ESQL/C Programmer's Manual.
Specify the sbspace name in the CDR_QDATA_SBSPACE configuration parameter. Enterprise Replication uses the default log mode with which the sbspace was
created for spooling the row data. The CDR_QDATA_SBSPACE configuration parameter accepts multiple sbspaces, up to a maximum of 32 sbspaces. Enterprise
Replication can support a combination of logging and non-logging sbspaces for storing spooled row data. For more information, see the IBM® Informix® Enterprise
Replication Guide.
You can have Enterprise Replication automatically configure disk space from the storage pool and set the appropriate configuration parameters when defining a
replication server. If the CDR_QDATA_SBSPACE or the CDR_DBSPACE configuration parameter is not set or is set to blank, the cdr define server command
automatically creates the necessary disk space and sets the configuration parameters to appropriate values.
The following are the three areas of the first chunk of an sbspace:
Metadata area
Metadata identifies key aspects of the sbspace and each smart large object stored in the sbspace, and enables the database server to manipulate and recover smart
large objects stored within.
User-data area
User data is the smart large object data stored in the sbspace by user applications. The chunk has up to two user-data areas.
Reserved area
The database server allocates space from the reserved area to either the metadata or user-data area when more space is required. The chunk has up to two reserved
areas.
For information about correctly allocating metadata and user data for sbspaces, see Size sbspace metadata and the HCL Informix® Performance Guide.
When you add a chunk to an sbspace, you can specify whether it contains a metadata area and user-data area or whether to reserve the chunk exclusively for user data.
You can add more chunks at any time. If you are updating smart large objects, I/O to the user data is much faster on raw disks than on cooked chunk files. For instructions
on how to create an sbspace, add chunks to an sbspace, or drop a chunk from an sbspace, see Manage disk space.
Important: Sbspace metadata is always logged, regardless of the logging setting of the database.
For smart large objects, you specify CLOB, BLOB, or user-defined data types. As the following figure shows, to control the placement of smart large objects, you can use
the IN sbspace option in the PUT clause of the CREATE TABLE statement.
Before you specify an sbspace in a PUT clause, you must first create the sbspace. For more information about how to create an sbspace with the onspaces -c -S
command, see Adding a chunk to a dbspace or blobspace. For more information about how to specify smart large object characteristics in the PUT clause, see the
CREATE TABLE statement in the IBM® Informix® Guide to SQL: Syntax.
If you do not specify the PUT clause, the database server stores the smart large objects in the default sbspace that you specify in the SBSPACENAME configuration
parameter. For more information about SBSPACENAME, see the configuration parameter topics of the HCL Informix Administrator's Reference.
An sbspace includes one or more chunks, as the following figure shows. When an sbspace contains more than one chunk, you cannot specify the chunk in which the
data is located.
You can add more chunks at any time. It is a high-priority task of a database server administrator to monitor sbspace chunks for fullness and to anticipate the need
to allocate more chunks to an sbspace. For more information about monitoring sbspaces, see your HCL Informix Performance Guide.
The database server uses sbspaces to store table columns that contain smart large objects. The database server uses dbspaces to store the rest of the table columns.
You can mirror an sbspace to speed recovery in event of a media failure. For more information, see Mirroring.
For information about using onspaces to perform the following tasks, see Manage disk space.
Creating an sbspace
Adding a chunk to an sbspace
Altering storage characteristics of smart large objects
Creating a temporary sbspace
Dropping an sbspace
The administrator or programmer can override these default values for storage characteristics and attributes for individual tables.
Buffering mode
When you create an sbspace, the default buffering mode is on, which means to use the buffer pool in the resident portion of shared memory.
Last-access time
When you create an sbspace, you can specify whether the database server must keep the last time that the smart large object was read or updated with the
ACCESSTIME tag of the onspaces -c -Df option.
Lock mode
When you create an sbspace, you can specify whether the database server locks the whole smart large object or a range of bytes within a smart large object with
the LOCK_MODE tag of the onspaces -c -Df option.
Logging
When you create an sbspace, you can specify whether to turn on logging for the smart large objects.
The unit of allocation in an sbspace is an extent. The database server calculates the extent size for a smart large object from a set of heuristics, such as the number of
bytes in a write operation. For example, if an operation asks to write 30 KB, the database server tries to allocate an extent the size of 30 KB.
Important: For most applications, you must use the values that the database server calculates for the extent size.
If you know the size of the smart large object, you can use one of the following functions to set the extent size. The database server allocates the entire smart large
object as one extent (if an extent of that size is available in the chunk):
For more information about the DataBlade API functions for smart large objects, see the IBM® Informix DataBlade API Function Reference.
For more information about the Informix ESQL/C functions for smart large objects, see the IBM Informix ESQL/C Programmer's Manual.
For information about tuning extent sizes, see smart large objects in the chapter on configuration effects on I/O utilization in your HCL Informix Performance Guide.
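A minimal sketch of the extent-size heuristic described above (illustrative only; the server weighs additional heuristics beyond the write size):

```python
def sbspace_extent_kb(write_kb: int, largest_free_run_kb: int) -> int:
    """Illustrative extent-size choice for a smart large object:
    try to match the size of the write operation, falling back to
    the largest contiguous free run available in the chunk.
    """
    return min(write_kb, largest_free_run_kb)

# A 30 KB write gets a 30 KB extent when the chunk has room.
print(sbspace_extent_kb(30, 100))  # 30
```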
Smart large objects usually vary in length. You can provide an average size of your smart large objects to calculate space for an sbspace.
You specify the average size with the AVG_LO_SIZE tag of the onspaces -c -Df option.
To specify the size and location of the metadata area, specify the -Ms and -Mo flags in the onspaces command. If you do not use the -Ms flag, the database server
uses the value of AVG_LO_SIZE to estimate the amount of space to allocate for the metadata area. For more information, see Size sbspace metadata.
As the database administrator, you can specify the buffering mode with the BUFFERING tag of the onspaces -c -Df option. The default is “buffering=ON”, which
means to use the buffer pool. If you turn off buffering, the database server uses private buffers in the virtual portion of shared memory.
Important: In general, if read and write operations to the smart large objects are less than 8 KB, do not specify a buffering mode when you create the sbspace. If you are
reading or writing short blocks of data, such as 2 KB or 4 KB, leave the default of “buffering=ON” to obtain better performance.
For information about when to use private buffers, see the section on light-weight I/O operations in the topics about configuration effects on I/O utilization in your HCL
Informix® Performance Guide.
The default is “ACCESSTIME=OFF”. The database server keeps this last-access time in the metadata area.
For more information about how programmers use this last-access time, see the IBM® Informix® DataBlade API Programmer's Guide and IBM Informix ESQL/C
Programmer's Manual.
The default is “LOCK_MODE=BLOB”, which means to lock the entire smart large object. For more information, see the locking chapter in your HCL Informix®
Performance Guide.
The default is no logging. For more information, see Log sbspaces and smart large objects.
Important: When you use logging databases, turn logging on for the sbspaces. If a failure occurs that requires log recovery, you can keep the smart large objects
consistent with the rest of the database.
You specify the logging status with the LOGGING tag of the onspaces -c -Df option. The default is “LOGGING=off”. You can change the logging status with the
onspaces -c -Df option. You can override this logging status with the PUT clause in the SQL statements CREATE TABLE or ALTER TABLE. For more information
about these SQL statements, see the IBM® Informix Guide to SQL: Syntax.
The programmer can override this logging status with functions that the DataBlade API and Informix® ESQL/C provide. For more information about the DataBlade API
functions for smart large objects, see the IBM Informix DataBlade API Function Reference. For more information about the Informix ESQL/C functions for smart large
objects, see the IBM Informix ESQL/C Programmer's Manual.
When you turn on logging for an sbspace, the smart large objects pass through the resident portion of shared memory. Although applications can retrieve pieces of a
smart large object, you still must consider the larger size of data that might pass through the buffer pool and logical-log buffers. For more information, see Access smart
large objects.
The following figure shows the storage-characteristics hierarchy for a smart large object.
The figure shows that you can override the system default in the following ways:
Use the -Df tags of the onspaces -c -S command to override the system default for a specific sbspace.
You can later change these sbspace attributes for the sbspace with the onspaces -ch option. For more information about valid ranges for the -Df tags, see the
onspaces topics in the HCL Informix Administrator's Reference.
You override the system default for a specific column when you specify these attributes in the PUT clause of the CREATE TABLE or ALTER TABLE statements.
For more information about these SQL statements, see the IBM Informix Guide to SQL: Syntax.
The programmer can override the default values for sbspace attributes for specific smart large objects with functions that the DataBlade API and Informix®
ESQL/C programming interface provide.
The following table lists sources of information for tasks related to using and managing sbspaces.
Setting up logging for an sbspace Log sbspaces and smart large objects
Backing up an sbspace Back up sbspaces
Checking consistency of an sbspace Validate metadata
Understanding an sbspace structure Topics about disk structures in the IBM Informix Administrator's Reference
Using onspaces for sbspaces Topics about utilities in the IBM Informix Administrator's Reference
Creating a table with CLOB or BLOB data types IBM Informix Guide to SQL: Syntax
Accessing smart large objects in an application IBM Informix DataBlade API Programmer's GuideIBM Informix ESQL/C
Programmer's Manual
Calculating the metadata area size Topics about table performance in HCL Informix Performance Guide
Understanding smart-large-object locking Topics about locking in HCL Informix Performance Guide
Configuring sbspaces for temporary smart large objects Topics about configuration effects on I/O activity in HCL Informix Performance
Guide
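The PUT clause mentioned above might look like the following sketch; the table, column, and sbspace names are illustrative:

```sql
-- Override sbspace storage characteristics for one CLOB column
CREATE TABLE catalog (
    cat_id    SERIAL,
    cat_descr CLOB
) PUT cat_descr IN (sbsp1) (EXTENT SIZE 64, LOG);
```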
If you store temporary smart large objects in a standard sbspace, the metadata is logged. Temporary sbspaces are similar to temporary dbspaces. To create a temporary
sbspace, use the onspaces -c -S command with the -t option. For more information, see Creating a temporary sbspace.
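As a sketch, a temporary sbspace might be created and registered like this (the device path, size, and name are illustrative):

```shell
# Create a temporary sbspace with the -t option
onspaces -c -S tmpsbsp -t -p /dev/informix/tmpsbsp -o 0 -s 50000

# Then list it in the SBSPACETEMP configuration parameter, for example:
#   SBSPACETEMP tmpsbsp
```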
You can store temporary large objects in a standard sbspace or temporary sbspace.
If you specify a temporary sbspace in the SBSPACETEMP parameter, you can store temporary smart large objects there.
If you specify a standard sbspace in the SBSPACENAME parameter, you can store temporary and permanent smart large objects there.
If you specify a temporary sbspace name in the CREATE TEMP TABLE statement, you can store temporary smart large objects there.
If you specify a permanent sbspace name in the CREATE TABLE statement, you can store temporary smart large objects there.
If you omit the SBSPACETEMP and SBSPACENAME parameters and create a smart large object, error message -12053 might display.
If you specify a temporary sbspace in the SBSPACENAME parameter, you cannot store a permanent smart large object in that sbspace. You can store temporary
smart large objects in that sbspace.
Characteristic: Standard sbspace; Temporary sbspace
Fast recovery: Yes; No (the sbspace is emptied when the database server restarts). To set up shared memory without cleaning up temporary smart large objects, specify oninit -p. If you keep the temporary large objects, their state is indeterminate.
Backup and restore: Yes; No
Add and drop chunks: Yes; Yes
Configuration parameter: SBSPACENAME; SBSPACETEMP
Temporary smart large objects last for the duration of the user session and are much faster to update than permanent smart large objects.
You create a temporary smart large object in the same way as a permanent smart large object, except you set the LO_CREATE_TEMP flag in the
ifx_lo_specset_flags or mi_lo_specset_flags function. Use mi_lo_copy or ifx_lo_copy to create a permanent smart large object from a temporary smart
large object. For details on creating temporary smart large objects, see the IBM® Informix® DataBlade API Programmer's Guide.
Important: Store pointers to temporary large objects in temporary tables only. If you store them in standard tables and restart the database server, an error results indicating that the large object does not exist.
The preceding table compares standard and temporary sbspaces.
If you did not create a server during installation, the physical log is created in the root dbspace. However, you can create the plogspace to move the physical log to a
different dbspace to prevent the physical log from filling the root dbspace. For optimal performance, create the plogspace on a different disk from the root dbspace or the
location of the logical logs. If you created a server during installation, the plogspace is created automatically with a default size that depends on the value of the
AUTO_TUNE_SERVER_SIZE configuration parameter.
By default, the chunk that you assign to the plogspace is extendable, therefore, the initial size of the chunk can be small. The database server automatically expands the
chunk when the physical log requires more space.
Related concepts:
Manage the plogspace
Related reference:
Size and location of the physical log
Related information:
AUTO_TUNE_SERVER_SIZE configuration parameter
For example, a database user might require access to binary files encoded in a proprietary format. First, a developer creates an access method, which is a set of
routines that access the data. These routines are responsible for all interaction between the database server and the external file. A DBA then adds an extspace that has
the file as its target to the database. After the DBA creates a table in the extspace, users can access the data in the proprietary files through SQL statements. To locate those files, the access-method routines use the extspace information.
An extspace is not required to be a file name. For example, it can be a network location. The routines that access the data can use information found in the string
associated with the extspace in any manner.
For more information about user-defined access methods, see the IBM® Informix® Virtual-Table Interface Programmer's Guide. For more information about creating
functions and primary access methods, see the IBM Informix Guide to SQL: Syntax.
A database is stored in the dbspace that is specified by the IN clause of the CREATE DATABASE statement. When you do not explicitly name a dbspace in the CREATE
DATABASE statement, the database is stored in the root dbspace, unless automatic location is enabled. You can enable automatic location by setting the AUTOLOCATE
configuration parameter or session environment variable to a positive integer. The database server chooses the dbspaces in which to create new databases and new
tables that are created without specified storage locations. Tables are automatically fragmented by round robin in the dbspaces that are chosen by the server.
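For illustration, automatic location might be enabled for a session as in this sketch; the database and table names are hypothetical:

```sql
-- Enable automatic location for this session
SET ENVIRONMENT AUTOLOCATE '3';

-- The server now chooses dbspaces for new databases and tables
CREATE DATABASE salesdb;
CREATE TABLE t1 (c1 INT);   -- fragmented round-robin by the server
```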
When you do specify a dbspace in the CREATE DATABASE statement, this dbspace is the location for the following tables:
The following figure shows the tables that are contained in the stores_demo database.
The size limits that apply to databases are related to their location in a dbspace. To be certain that all tables in a database are created on a specific physical device,
assign only one chunk to the device, and create a dbspace that contains only that chunk. Place your database in that dbspace. When you place a database in a chunk
that is assigned to a specific physical device, the database size is limited to the size of that chunk.
Related concepts:
Managing automatic location and fragmentation
Related reference:
Display databases
When you create a table, the database server allocates disk space for the table in a block of pages that is called an extent. You can specify the size of both the first and
any subsequent extents.
You can place the table in a specific dbspace by naming the dbspace when the table is created (with the IN dbspace clause of the CREATE TABLE statement). When
you do not specify the dbspace, the database server places the table in the dbspace where the database is located. You can fragment a table over more than one
dbspace or within a dbspace by specifying a fragmentation distribution scheme. However, if you set the AUTOLOCATE configuration parameter to a positive integer, the
database server automatically fragments new tables by round robin, in the dbspaces that are optimal for the table.
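A minimal sketch of placing a table in a named dbspace and sizing its extents (the table name, dbspace name, and sizes are illustrative):

```sql
CREATE TABLE orders (
    order_num  SERIAL,
    order_date DATE
) IN dbs1 EXTENT SIZE 64 NEXT SIZE 32;
```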
A table or table fragment is located completely in the dbspace in which it was created. The database server administrator can use this fact to limit the growth of a table
by placing a table in a dbspace and then refusing to add a chunk to the dbspace when it becomes full.
A table, which is composed of extents, can span multiple chunks, as the following figure shows.
Simple large objects are in blobpages in either the dbspace with the data pages of the table or in a separate blobspace.
Damaged tables
There are a number of ways you can damage a table.
Related concepts:
Extents
Table fragmentation and data storage
Disk-layout guidelines
Control of where simple large object data is stored
Damaged indexes can cause a table to seem damaged, even though it is not.
The oncheck commands cannot fix most damaged tables. If a page is damaged, oncheck can detect and try to fix the page, but cannot correct the data within the
page.
In a nonlogging database, both STANDARD tables and RAW tables are nonlogging. In a nonlogging database, the only difference between STANDARD and RAW tables
is that RAW tables do not support primary-key constraints, unique constraints, referential constraints, or rollback. However, these tables can be indexed and updated.
The following table lists the properties of the types of tables available with Informix. The flag values are the hexadecimal values for each table type in the flags column of
systables.
RAW tables
RAW tables are nonlogging permanent tables that are similar to tables in a nonlogging database.
Temp tables
Temp tables are temporary, logged tables that are dropped when the user session closes, the database server shuts down, or on reboot after a failure.
Temporary tables
The database server must provide disk space for specific types of temporary table.
STANDARD tables do not use light appends. All operations are logged, record by record, so STANDARD tables can be recovered and rolled back. You can back up and
restore STANDARD tables. Logging enables updates since the last physical backup to be applied when you perform a warm restore or point-in-time restore. Enterprise
Replication is allowed on STANDARD tables.
A STANDARD table is the default type on both logging and nonlogging databases. STANDARD tables are logged if stored in a logging database but are not logged if
stored in a nonlogging database.
Update, insert, and delete operations on rows in a RAW table are supported but are not logged. You can define indexes on RAW tables, but you cannot define unique
constraints, primary-key constraints, or referential constraints on RAW tables.
A RAW table has the same attributes, whether it is stored in a logging database or in a nonlogging database. If you update a RAW table, you cannot reliably restore the
data unless you perform a level-0 backup after the update. If the table has not been updated since that backup, you can restore the RAW table from the last physical
backup, but backing up only the logical logs is not sufficient for a RAW table to be recoverable. Fast recovery can roll back incomplete transactions on STANDARD
tables but not on RAW tables. For information about creating and altering RAW tables, see the IBM® Informix Guide to SQL: Syntax.
RAW tables are intended for the initial loading and validation of data. To load RAW tables, you can use any loading utility, including dbimport, the LOAD statement of DB-Access, or the HPL in express mode. If an error or failure occurs while loading a RAW table, the resulting data is whatever was on the disk at the time of the failure.
Restriction: Do not use RAW tables within a transaction. After you have loaded the data, use the ALTER TABLE statement to change the table to type STANDARD and
perform a level-0 backup before you use the table in a transaction.
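The load-then-convert pattern described above might look like this sketch; the staging table is hypothetical, and the backup command shown is one of several options:

```sql
-- Create a nonlogging RAW table for the initial load
CREATE RAW TABLE stage_items (
    item_num INT,
    descr    CHAR(40)
);

-- ... bulk-load the data with a loading utility ...

-- Convert to STANDARD before transactional use
ALTER TABLE stage_items TYPE (STANDARD);
-- Then perform a level-0 backup, for example: ontape -s -L 0
```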
There are some restrictions when using RAW tables in a high-availability cluster environment. Because modifications made to RAW tables are not logged, and because
secondary servers (including HDR, RSS and SDS) use log records to stay synchronized with the primary server, you are restricted from performing certain operations on
RAW tables:
On a primary server, RAW tables can be created, dropped, and accessed. Altering the table mode, however, from unlogged to logged, or from logged to
unlogged, is not allowed. Altering the logging mode of a table in a high-availability cluster environment yields error -19845.
On secondary servers (HDR, SDS, or RSS), RAW tables are not accessible for any operation. Attempting to access a RAW table from SQL yields error -19846.
Temp tables support indexes, constraints, and rollback. You cannot recover, back up, or restore temp tables. Temp tables support bulk operations such as light appends,
which add rows quickly to the end of each table fragment. For more information about light appends, see your HCL Informix Performance Guide.
There are fast recovery scenarios for the table types available with HCL Informix.
HCL Informix® creates STANDARD tables that use logging by default. Data warehousing applications can have huge tables that take a long time to load. Nonlogging
tables are faster to load than logging tables.
For more information about how to improve the performance of loading very large tables, see your HCL Informix Performance Guide. For more information about using
ALTER TABLE to change a table from logging to nonlogging, see the IBM Informix Guide to SQL: Syntax.
The following table shows fast recovery scenarios for table types.
Inserts, updates, and deletions that occurred after the last checkpoint are lost.
Important: After you load a RAW table or change a RAW table to type STANDARD, you must perform a level-0 backup.
The following table shows restore scenarios for these table types.
Make sure that your database server has configured enough temporary space for both user-created and database server-created temporary tables. Some uses of the
database server might require as much temporary storage space as permanent storage space, or more.
By default, the database server stores temporary tables in the root dbspace. If you decide not to store your temporary tables in the root dbspace, use the DBSPACETEMP
environment variable or the DBSPACETEMP configuration parameter to specify a list of dbspaces for temporary tables.
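For example (the dbspace names are illustrative; DBSPACETEMP accepts colon- or comma-separated lists):

```shell
# Session-level setting through the environment variable
export DBSPACETEMP=tmpdbs1:tmpdbs2

# Or set the configuration parameter in the onconfig file:
#   DBSPACETEMP tmpdbs1,tmpdbs2
```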
Only the session that creates a temporary table can use the table. When the session exits, the table is dropped automatically.
When you create a temporary table, the database server uses the following criteria:
If the query used to populate the TEMP table produces no rows, the database server creates an empty, unfragmented table.
If the rows that the query produces do not exceed 8 KB, the temporary table is located in only one dbspace.
If the rows exceed 8 KB, the database server creates multiple fragments and uses a round-robin fragmentation scheme to populate them unless you specify a
fragmentation method and location for the table.
If you use the CREATE TEMP and SELECT...INTO TEMP SQL statements and DBSPACETEMP has been set:
LOGGING dbspaces in the list are used to create the tables that specify or imply the WITH LOG clause.
NON-LOGGING temporary dbspaces in the list are used to create the tables that specify the WITH NO LOG clause.
When CREATE TEMP and SELECT...INTO TEMP SQL statements are used and DBSPACETEMP has not been set or does not contain the correct type of dbspace,
HCL Informix® uses the dbspace of the database to store the temporary table. See the IBM® Informix Guide to SQL: Syntax for more information.
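The logging behavior above can be sketched as follows; the table names are illustrative, and logging is implied for CREATE TEMP TABLE unless WITH NO LOG is specified:

```sql
-- Uses a logging dbspace from the DBSPACETEMP list
CREATE TEMP TABLE t_log (c1 INT);

-- Uses a nonlogging temporary dbspace from the DBSPACETEMP list
CREATE TEMP TABLE t_nolog (c1 INT) WITH NO LOG;

SELECT * FROM customer INTO TEMP t_cust WITH NO LOG;
```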
For information about creating temporary dbspaces, see the onspaces topics in the HCL Informix® Administrator's Reference.
If you do not specify the location of a temporary table, the database server stores the temporary table in one of the spaces that you specify as an argument to the
DBSPACETEMP configuration parameter or environment variable. The database server remembers the name of the last dbspace that it used for a temporary table.
When the database server receives another request for temporary storage space, it uses the next available dbspace to spread I/O evenly across the temporary storage
space.
For information about where the database stores temporary tables when you do not list any spaces as an argument to DBSPACETEMP, see the DBSPACETEMP
section in the HCL Informix Administrator's Reference.
When you use an application to create a temporary table, you can use the temporary table until the application exits or performs one of the following actions:
Closes the database in which the table was created and opens a database in a different database server
Closes the database in which the table was created
Explicitly drops the temporary table
The database server might create a temporary table in any of the following circumstances:
Statements that include a GROUP BY or ORDER BY clause
Statements that use aggregate functions with the UNIQUE or DISTINCT keywords
SELECT statements that use auto-index or hash joins
Complex CREATE VIEW statements
DECLARE statements that create a scroll cursor
Statements that contain correlated subqueries
Statements that contain subqueries that occur within an IN or ANY clause
CREATE INDEX statements
When the process that initiated the creation of the table is complete, the database server deletes the temporary tables that it creates.
If the database server shuts down without removing temporary tables, the database server removes the temporary tables the next time it is started. To start the database
server without removing temporary tables, run the oninit command with the -p option.
Applications and analytic tools can define queries in which a derived table contains multiple views joined with base tables, potentially including hundreds of columns. The
database server attempts to fold views or derived tables into the main query. Any such views or derived tables that cannot be folded are materialized into a temporary
table. The temporary table excludes all the columns that are not referenced in the main query. The temporary table is created with only the columns referenced in the
Projection clause and in other clauses of the parent query, including the WHERE, HAVING, GROUP BY, and ON clauses.
By excluding from the system-generated temporary table any columns that are not referenced in the main query, this reduced schema can improve query performance
by conserving storage resources, and by avoiding unnecessary I/O of data in the unused columns.
In a nested query, however, projected columns from views and derived tables are checked only in the parent query, but not in the levels above the immediate parent query.
Important: In addition to temporary tables, the database server uses temporary disk space to store the before images of data records that are overwritten while backups
are occurring, and for overflow from query processing that occurs in memory. Make sure that you have correctly set the DBSPACETEMP environment variable or the
DBSPACETEMP configuration parameter to specify dbspaces with enough space for your needs. If there is not enough room in the specified dbspaces, the backup fails, or the root dbspace is used and the backup fails after the root dbspace fills.
When you do not specify any temporary dbspaces in DBSPACETEMP, or the temporary dbspaces that you specify have insufficient space, the database server creates
the table in a standard dbspace according to the following rules:
If you created the temporary table with CREATE TEMP TABLE, the database server stores this table in the dbspace that contains the database to which the table
belongs.
If you created the temporary table with the INTO TEMP option of the SELECT statement, the database server stores this table in the root dbspace.
A tblspace, for example, does not correspond to any particular part of a chunk or even to any particular chunk. The indexes and data that make up a tblspace might be
scattered throughout your chunks. The tblspace, however, represents a convenient accounting entity for space across chunks devoted to a particular table. (See Tables.)
Extent interleaving
The database server allocates the pages that belong to a tblspace as extents. Although the pages within an extent are contiguous, extents might be scattered
throughout the dbspace where the table is located (even on different chunks).
The following table illustrates the tblspaces for three tables that form part of the stores_demo database. Only one table (or table fragment) exists per tblspace.
Blobpages represent TEXT or BYTE data stored in a dbspace.
The following figure depicts this situation with two noncontiguous extents that belong to the tblspace for table_1 and a third extent that belongs to the tblspace for
table_2. A table_2 extent is located between the first table_1 extent and the second table_1 extent. When this situation occurs, the extents are interleaved. Because
sequential access searches across table_1 require the disk head to seek across the table_2 extent, performance is worse than if the table_1 extents were contiguous.
For instructions on how to avoid and eliminate interleaving extents, see your HCL Informix® Performance Guide.
Figure 1. Three extents that belong to two different tblspaces in a single dbspace
Usually you fragment a table when you initially create it. The CREATE TABLE statement takes one of the following forms:
The FRAGMENT BY ROUND ROBIN and FRAGMENT BY EXPRESSION keywords refer to two different distribution schemes. Both statements associate fragments
with dbspaces.
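As a sketch, the two forms look like this (the table, column, and dbspace names are illustrative):

```sql
CREATE TABLE t1 (c1 INT)
    FRAGMENT BY ROUND ROBIN IN dbs1, dbs2, dbs3;

CREATE TABLE t2 (c1 INT)
    FRAGMENT BY EXPRESSION
        c1 < 100                 IN dbs1,
        c1 >= 100 AND c1 < 200   IN dbs2,
        REMAINDER                IN dbs3;
```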
If you set the AUTOLOCATE configuration parameter or session environment variable to a positive integer, and you do not specify a location for the table, new tables are
fragmented in round-robin order in dbspaces that are chosen by the database server.
When you fragment a table, you can also create multiple partitions of the table within the same dbspace, as shown in this example:
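A hedged sketch of such a statement, with hypothetical names, places several named partitions in one dbspace:

```sql
CREATE TABLE tb1 (a INT)
    FRAGMENT BY EXPRESSION
        PARTITION part1 (a >= 0 AND a < 5)   IN dbs1,
        PARTITION part2 (a >= 5 AND a < 10)  IN dbs1,
        PARTITION part3 REMAINDER            IN dbs1;
```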
The following figure illustrates the role of fragments in specifying the location of data.
Figure 1. Dbspaces that link logical units (including table fragments) and physical units of storage
Related concepts:
Manage disk space
Managing automatic location and fragmentation
Tables
Related information:
Table fragmentation strategies
Fragmentation guidelines
Recommendation: Set up the system with a small physical log and a few small logical logs. For example, create three 1000 KB logical log files, or 3000 KB for the
total log space. After the initial setup is complete, create a new dbspace for logical logs in an area that does not compete for I/O with other dbspaces, and set the
AUTO_LLOG configuration parameter to that dbspace. Create a set of larger logical-log files in the dbspace for logical logs, and drop the original logs from the root
dbspace. Then create a plogspace for the physical log. Make the plogspace large enough to hold your final physical log, and isolate it from other dbspaces as much as
possible. This configuration optimizes logging performance and the root dbspace for the following reasons:
The unused space that is left in the root dbspace after you move the logs is minimized.
The physical and logical logs do not contend for space and I/O on the same disk as each other or the root dbspace.
The server automatically increases the total logical log space and the size of the physical log if increasing logs measurably improves performance.
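The sequence above might be sketched as follows; the device paths, sizes, and names are illustrative, and the options should be verified against the utilities documentation:

```shell
# 1. Create a dedicated dbspace for the logical logs
onspaces -c -d logdbs -p /dev/informix/logdbs -o 0 -s 2000000

# 2. Let the server manage logical logs in that dbspace
onmode -wf AUTO_LLOG="1,logdbs"

# 3. Drop the original logs from the root dbspace (repeat per log number)
#    onparams -d -l <lognum> -y

# 4. Create the plogspace for the physical log
onspaces -c -P plogdbs -p /dev/informix/plogdbs -o 0 -s 1000000
```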
Temporary tables
Analyze user applications to estimate the amount of disk space that the database server might require for temporary tables. Try to estimate how many of these statements run concurrently. The space that is occupied by the rows and columns that are returned provides a good basis for estimating the amount of space
required. The largest temporary table that the database server creates during a warm restore is equal to the size of your logical log. You calculate the size of your
logical log by adding the sizes of all logical-log files. You must also analyze user applications to estimate the amount of disk space that the database server might
require for explicit temporary tables.
Data
Although the root dbspace is the default location for databases, do not store databases and tables in the root dbspace.
System databases (the size varies between versions)
The sysmaster, sysutils, syscdr, and sysuuid databases, and the system catalogs must be stored in the root dbspace. The sysadmin database is stored in the root dbspace by default; however, you can move the sysadmin database to a different dbspace.
Reserved pages (~24 KB)
The reserved pages contain control and tracking information that is used by the database server. Reserved pages must be stored in the root dbspace.
Tblspace tblspace (100 - 200 KB minimum)
The tblspace tblspace contains information about tblspaces. The tblspace tblspace must be stored in the root dbspace.
This estimate is the root dbspace size before you initialize the database server. The size of the root dbspace depends on whether you plan to store the physical log,
logical logs, and temporary tables in the root dbspace or in another dbspace. The root dbspace must be large enough for the minimum size configuration during disk
initialization.
Allow extra space in the root dbspace for the system databases to grow, for the extended reserved pages, and ample free space. The number of extended reserved
pages depends on the number of primary chunks, mirror chunks, logical-log files, and storage spaces in the database server.
If you need to make the root dbspace larger after the server is initialized, you can add a chunk to the root dbspace. You can enable automatic space management to
expand the root dbspace as needed.
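For example, a chunk might be added to the root dbspace with a sketch like this (the path and size are illustrative):

```shell
onspaces -a rootdbs -p /dev/informix/root_chunk2 -o 0 -s 500000
```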
Important: Mirror the root dbspace and other dbspaces that contain critical data such as the physical log and logical logs.
Related concepts:
Automatic space management
Manage dbspaces
Move logical-log files
Related tasks:
Creating a temporary dbspace
Related reference:
Change the physical-log location and size
Related information:
ROOTSIZE configuration parameter
reset sysadmin argument: Move the sysadmin database (SQL administration API)
Every application that users run has different storage requirements. The following list suggests some of the steps that you can take to calculate the amount of disk space
to allocate (beyond the root dbspace):
Decide how many databases and tables you must store. Calculate the amount of space required for each one.
Calculate a growth rate for each table and assign some amount of disk space to each table to accommodate growth.
Decide which databases and tables you want to mirror.
For instructions about calculating the size of your tables, see your HCL Informix® Performance Guide.
When the storage space falls below a threshold defined in the SP_THRESHOLD configuration parameter, the database server can automatically run a task that expands
the space, either by extending an existing chunk in the space or by adding a new chunk.
Add, delete, or modify an entry that describes one directory, cooked file, or raw device in the storage pool. The server can use the specified directory, cooked file,
or raw device when necessary to automatically add space to an existing storage space.
Control how a storage pool entry is used by modifying two different dbspace sizes that are associated with expanding a storage space, the extend size and the
create size.
Mark a chunk as extendable or not extendable.
Immediately expand the size of a space, when you do not want the database server to automatically expand the space.
Immediately extend the size of a chunk by a specified minimum amount.
Create a storage space or chunk from an entry in the storage pool
Return empty space from a dropped storage space or chunk to the storage pool
The storagepool table in the sysadmin database contains information about all of the entries in a storage pool for a database server instance.
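Several of the tasks listed above map to SQL administration API functions that are run from the sysadmin database. This is a hedged sketch; the paths, sizes, and argument values are illustrative:

```sql
-- Add a raw device to the storage pool
-- (path, begin offset, total size, chunk size, priority)
EXECUTE FUNCTION task('storagepool add',
    '/dev/informix/pool1', '0', '0', '100000', '2');

-- Mark chunk number 3 as extendable
EXECUTE FUNCTION task('modify chunk extendable', '3');

-- Immediately expand a space
EXECUTE FUNCTION task('modify space expand', 'dbs1', '1000000');
```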
Related concepts:
Extendable chunks
Automatic space management
Related tasks:
Creating and managing storage pool entries
You must make some trade-offs among these goals when you design your disk layout. For example, separating the system catalog tables, the logical log, and the
physical log can help reduce contention for these resources. However, this action can also increase the chances that you must perform a system restore. For detailed
disk-layout guidelines, see the HCL Informix® Performance Guide.
Table-location guidelines
This topic lists some strategies for optimizing the disk layout, given certain characteristics about the tables in a database.
Related concepts:
Partitions and offsets
Tables
Associate disk partitions with chunks and allocate at least one additional chunk for the root dbspace.
A disk that is already partitioned might require the use of offsets. For details, see Allocating raw disk space on UNIX.
Tip: With the 4-terabyte maximum size of a chunk, you can avoid partitioning by assigning a chunk per disk drive.
Mirror critical dbspaces: the root dbspace, the dbspaces that contain the physical log and the logical-log files. Also mirror high-use databases and tables.
You specify mirroring at the dbspace level. Mirroring is either on or off for all chunks belonging to a dbspace. Locate the primary and the mirrored dbspaces on
different disks. Ideally, different controllers handle the different disks.
To define several dbspaces for temporary tables and sort files, use onspaces -t. When you place these dbspaces on different disks and list them in the
DBSPACETEMP configuration parameter, you can spread the I/O associated with temporary tables and sort files across multiple disks. For information about
using the DBSPACETEMP configuration parameter or environment variable, see the chapter on configuration parameters in the HCL Informix® Administrator's
Reference.
Keep the physical log in the root dbspace but move the logical logs from the root dbspace. However, if you plan to store the system catalogs in the root dbspace,
move the physical log to another dbspace.
For advice on where to store your logs, see Location of logical-log files. Also see Move logical-log files and Change the physical-log location and size.
For additional performance recommendations, see the IBM Informix Backup and Restore Guide.
You can implement many of these strategies with a higher degree of control using table fragmentation:
Isolate high-use tables on a separate disk.
To isolate a high-use table on its own disk device, assign the device to a chunk, and assign the same chunk to a dbspace. Finally, place the frequently used table
in the dbspace just created using the IN dbspace option of CREATE TABLE.
To display the level of I/O operations against each chunk, run the onstat -g iof command.
If a device that contains a dbspace fails, all tables in that dbspace are inaccessible. However, tables in other dbspaces remain accessible. Although you must perform a cold restore if a dbspace that contains critical information fails, you need only perform a warm restore if a noncritical dbspace fails.
For more information, see the chapter on table performance considerations in your HCL Informix® Performance Guide. For information about onstat options, see the
IBM Informix Administrator's Reference.
High performance
High availability
Ease and frequency of backup and restore
Meeting any one of these objectives has trade-offs. For example, configuring your system for high performance usually results in taking risks regarding the availability of
data. The sections that follow present an example in which the database server administrator must make disk-layout choices given limited disk resources. These
sections describe two different disk-layout solutions. The first solution represents a performance optimization, and the second solution represents an availability-and-
restore optimization.
The setting for the sample disk layouts is a fictitious sporting goods database that uses the structure (but not the volume) of the stores_demo database. In this example,
the database server is configured to handle approximately 350 users and 3 gigabytes of data. The disk space resources are shown in the following table.
The database includes two large tables: cust_calls and items. Assume that both of these tables contain more than 1,000,000 rows. The cust_calls table represents a
record of all customer calls made to the distributor. The items table contains a line item of every order that the distributor ever shipped.
The database includes two high-use tables: items and orders. Both of these tables are subject to constant access from users around the country.
The remaining tables are low-volume tables that the database server uses to look up data such as postal code or manufacturer.
To optimize performance, use multiple storage spaces and multiple disks. The following figure shows a disk layout that is optimized for performance. This disk layout
uses the following strategies to improve performance:
Migration of the logical log and physical log files from the root dbspace
This strategy separates the logical log and the physical log and reduces contention for the root dbspace. For best performance, take advantage of automatic
performance tuning for the logical and physical logs:
Create a plogspace to enable the automatic expansion of the physical log.
Set the AUTO_LLOG configuration parameter to enable the automatic expansion of the logical log in a specified dbspace.
If you create a server during installation, the plogspace is created and the AUTO_LLOG configuration parameter is set to a non-critical dbspace.
Location of the two tables that undergo the highest use in dbspaces on separate disks
Neither of these disks stores the logical log or the physical log. Ideally you might store each of the items and orders tables on a separate high-performance disk.
However, in the present scenario, this strategy is not possible because one of the high-performance disks is required to store the large cust_calls table (the other
two disks are too small for this task).
The weakness of the previous disk layout is that if either Disk 1 or Disk 2 fails, the whole database server goes down until you restore the dbspaces on these disks from
backups. In other words, the disk layout is poor with respect to availability.
An alternative disk layout that optimizes for availability and involves mirroring is shown in the following figure. This layout mirrors all the critical data spaces (the system
catalog tables, the physical log, and the logical log) to a separate disk. Ideally you might separate the logical log and physical log (as in the previous layout) and mirror
each disk to its own mirror disk. However, in this scenario, the required number of disks does not exist; therefore, the logical log and the physical log both are located in
the root dbspace.
Many computer manufacturers ship their computers with a proprietary LVM. You can use the database server to store and retrieve data on disks that are managed by
most proprietary LVMs. Logical-volume managers provide some advantages and some disadvantages, as explained in the remainder of this section.
Most LVMs can manage multiple gigabytes of disk space. The database server chunks are limited to a size of 4 terabytes, and this size can be attained only when the
chunk being allocated has an offset of zero. Consequently, you must limit the size of any volumes to be allocated as chunks to a size of 4 terabytes.
Because you can use LVMs to partition a disk drive into multiple volumes, you can control where data is placed on a given disk. You can improve performance by
defining a volume that consists of the middle-most cylinders of a disk drive and placing high-use tables in that volume. (Technically, you do not place a table directly in a
volume. You must first allocate a chunk as a volume, then assign the chunk to a dbspace, and finally place the table in the dbspace. For more information, see Control of
where simple large object data is stored.)
Tip: If you choose to use large disk drives, you can assign a chunk to one drive and eliminate the necessity to partition the disk.
You can also improve performance by using a logical volume manager to define a volume that spreads across multiple disks and then placing a table in that volume.
Many logical volume managers also allow a degree of flexibility that standard operating-system format utilities do not. One such feature is the ability to reposition logical
volumes after you define them. Thus, getting the layout of your disk space right the first time is not as critical as it is with operating-system format utilities.
LVMs often provide operating-system-level mirroring facilities. For more information, see Alternatives to mirroring.
Your HCL Informix Performance Guide also contains information about managing disk space. In particular, it describes how to eliminate interleaved extents, how to
reclaim space in an empty extent, and how to improve disk I/O.
You can use the dbschema utility to generate a file of SQL administration API or onspaces commands that reproduce the storage spaces, chunks, and logs that exist in an instance.
Manage dbspaces
This section contains information about creating standard and temporary dbspaces with and without the default page size, specifying the first and next extent
sizes for the tblspace tblspace in a dbspace when you create the dbspace, and adding a chunk to a dbspace or blobspace.
Manage blobspaces
This section explains how to create a blobspace and determine the blobpage size.
Manage sbspaces
This section describes how to create a standard or temporary sbspace, monitor the metadata and user-data areas, add a chunk to an sbspace, and alter storage
characteristics of smart large objects.
Drop a chunk
Use the onspaces utility to drop a chunk from a dbspace.
Manage extspaces
Display databases
You can display the databases that you create with SMI tables.
Multitenancy
You can segregate data, storage space, and processing resources for multiple client organizations by creating multiple tenant databases in a single instance of
Informix®.
Storage optimization
Data compression and consolidation processes can minimize the disk space that is used by your data and indexes.
Related concepts:
Table fragmentation and data storage
Storage space creation and management
Data storage
Related information:
Storage space, chunk, and log creation
SQL Administration API Functions
Managing extents
Managing sbspaces
Before you can create a storage space or chunk, or mirror an existing storage space, you must allocate disk space for the chunk file. You can allocate either an empty
file or a portion of raw disk for database server disk space.
UNIX only: On UNIX, if you allocate raw disk space, you must use the UNIX ln command to create a link between the character-special device name and another file
name. For more information about this topic, see Create symbolic links to raw devices (UNIX).
Using a UNIX file and its inherent operating-system interface for database server disk space is called using cooked space.
Windows only: On Windows, you must use NTFS files for database server disk space. For more information about this recommendation, see Unbuffered or buffered
disk access on UNIX.
You can balance chunks over disks and controllers. Placing multiple chunks on a single disk can improve throughput.
Specify an offset
When you allocate a chunk of disk space to the database server, specify an offset.
Many computer systems and some disk-drive manufacturers keep information for a physical disk drive on the drive itself. This information is sometimes called a volume
table of contents (VTOC) or disk label. The VTOC is commonly stored on the first track of the drive. A table of alternative sectors and bad-sector mappings (also called a
revectoring table) might also be stored on the first track.
If you plan to allocate partitions at the start of a disk, you might be required to use offsets to prevent the database server from overwriting critical information required by
the operating system. For the exact offset required, see your disk-drive manuals.
Important: If you are running two or more instances of the database server, be extremely careful not to define chunks that overlap. Overlapping chunks can cause the
database server to overwrite data in one chunk with unrelated data from an overlapping chunk. This overwrite effectively deletes overlapping data.
For more information, see the topics about configuration parameters in the HCL Informix® Administrator's Reference.
For more information, see Creating a dbspace that uses the default page size.
The offset specifies the beginning location of a chunk. The database server determines the location of the last byte of the chunk by adding the size of the chunk to the
offset.
For the first chunk, assign any initial offset, if necessary, and specify the size as an amount that is less than the total size of the allocated disk space. For each additional
chunk, specify the offset to include the sizes of all previously assigned chunks, plus the initial offset, and assign a size that is less than or equal to the amount of space
remaining in the allocation.
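As a sketch of this arithmetic (the allocation size, initial offset, and chunk sizes below are illustrative values, not recommendations):

```shell
# Illustrative values: a 1,000,000 KB allocation with a 500 KB initial offset.
initial_offset=500        # KB reserved at the start of the device (for example, for a VTOC)
total_size=1000000        # KB available in the allocation
chunk1_size=400000        # KB, first chunk
chunk2_size=300000        # KB, second chunk

# The first chunk begins at the initial offset.
chunk1_offset=$initial_offset

# Each additional chunk begins after all previously assigned chunks.
chunk2_offset=$((initial_offset + chunk1_size))

# Space remaining in the allocation; any further chunk must fit within this amount.
remaining=$((total_size - initial_offset - chunk1_size - chunk2_size))

echo "chunk1 offset: $chunk1_offset KB"
echo "chunk2 offset: $chunk2_offset KB"
echo "remaining:     $remaining KB"
```

The same calculation applies whether the allocation is a cooked file or raw disk space.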
ls -l my_chunk
-rw-rw---- 1 informix informix 0 Oct 12 13:43 my_chunk
For information about how to create a storage space using the file you have allocated, see Creating a dbspace that uses the default page size, Creating a blobspace,
and Creating an sbspace.
For specific instructions on how to allocate raw disk space on UNIX, see your operating-system documentation and Unbuffered or buffered disk access on UNIX.
The file name of the character-special device usually begins with the letter r. For the procedure, see steps 4 and 5 in Allocating cooked file spaces on UNIX.
3. Verify that the operating-system permissions on the character-special devices are crw-rw----.
4. Create a symbolic link between the character-special device name and another file name with the UNIX link command, ln -s. For details, see Create symbolic
links to raw devices (UNIX).
Restriction: After you create the raw device that the database server uses for disk space, do not create file systems on the same raw device that you allocate for the
database server disk space. Also, do not use the same raw device as swap space that you allocate for the database server disk space.
Related concepts:
Partitions and offsets
To create a link between the character-special device name and another file name, use the UNIX link command (usually ln). To verify that both the devices and the links
exist, run the UNIX command ls -l (ls -lg on BSD) on your device directory. The following example shows links to raw devices. If your operating system does not
support symbolic links, hard links also work.
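A minimal sketch of such links, using an ordinary file in place of the character-special device so that the commands can be run anywhere (the path names are hypothetical; on a real system the link target would be a device such as /dev/rsd0f):

```shell
# Hypothetical layout: /tmp/ifx_demo/rsd0f stands in for a character-special device.
mkdir -p /tmp/ifx_demo
touch /tmp/ifx_demo/rsd0f

# Link a stable chunk path name to the device. If the device later fails, you can
# point the same link name at a replacement device and then restore from backup.
ln -sf /tmp/ifx_demo/rsd0f /tmp/ifx_demo/root_chunk

# Verify that both the device and the link exist.
ls -l /tmp/ifx_demo
```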
Why use symbolic links? If you create chunks on a raw device and that device fails, you cannot restore from a backup until you replace the raw device and use the same
path name. All chunks that were accessible at the time of the last backup must be accessible when you perform the restore.
Symbolic links simplify recovery from disk failure and enable you to quickly replace the disk where the chunk is located. You can replace a failed device with another
device, link the new device path name to the same file name that you previously created for the failed device, and restore the data. You are not required to wait for the
original device to be repaired.
To allocate NTFS file space for database server disk space or mirrored space, the first step is to create a null (zero bytes) file.
c:> cd \usr\data
4. If necessary, convert the partition to NTFS by running the following command: convert /fs:ntfs
5. Create a null file with the following command: c:> copy nul my_chunk
6. To verify that the file was created, run the dir command.
After you allocate the file space, you can create the dbspace or other storage space as you normally would, using onspaces. For information about how to create a
dbspace or a blobspace, see Creating a dbspace that uses the default page size and Creating a blobspace.
To find the drive letter or disk number, run the Disk Administrator. If the drives are striped (multiple physical disks combined into one logical disk), you must specify a
logical drive.
You must be a member of the Informix-Admin group when you create a storage space or add a chunk. The raw disk space can be formatted or unformatted disk space.
Important: If you allocate a formatted drive or disk partition as raw disk space and it contains data, the database server overwrites the data when it begins to use the
disk space. You must ensure that any data on raw disk space is expendable before you allocate the disk space to the database server.
This example adds a chunk of 5000 KB on the e: drive, at an offset of 5200 KB, to dbspace dpspc3.
1. If the disk partition has not been assigned a drive letter, specify the following value for ROOTDBS in the onconfig file: \\.\PhysicalDrive<number>
2. To create a storage space or add a chunk, specify the physical drive partition.
This example adds a chunk of 5000 KB on PhysicalDrive0, at an offset of 5200 KB, to dbspace dpspc3.
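A sketch of the two commands described above, assuming the standard onspaces -a syntax (the space name, sizes, and offsets come from the surrounding text; verify the exact syntax on your system):

```shell
rem Add a 5000 KB chunk at offset 5200 KB to dbspace dpspc3 on the e: drive:
onspaces -a dpspc3 -p \\.\e: -o 5200 -s 5000

rem The same chunk on an unlettered partition, specified by physical drive number:
onspaces -a dpspc3 -p \\.\PhysicalDrive0 -o 5200 -s 5000
```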
To specify a logical drive partition that has been assigned a drive letter, use the form \\.\x:, where x is the drive letter.
If you are using a file for database server disk space, the path name is the complete path and file name.
Use these naming rules when you create storage spaces or add a chunk. The file name must have the following characteristics:
Be unique and not exceed 128 bytes
Begin with a letter or underscore
Contain only letters, digits, underscores, or $ characters
The name is not case-sensitive unless you use quotation marks around it. By default, the database server converts uppercase characters in the name to lowercase. If
you want to use uppercase in names, put quotation marks around them and set the DELIMIDENT environment variable to ON.
Related concepts:
Chunks
To determine which chunk size your platform supports, see your machine notes file. If you upgraded from a version before version 10.00 and did not run the onmode
-BC 2 command, the maximum chunk size is 2 GB.
The storage spaces can be any combination of dbspaces, blobspaces, and sbspaces.
Considering all limits that can apply to the size of an instance of the database server, the maximum size of an instance is approximately 8 petabytes.
If you have upgraded from a version before version 10.00, you must run onmode -BC 2 to enable the maximum number of chunks and storage spaces.
Perform a level-0 backup to ensure that you can restore data when you:
Add or drop mirroring
Important: When you add a new logical log, you are no longer required to perform a level-0 backup of the root dbspace and the modified dbspace to use the new logical log.
However, you must perform the level-0 backup to prevent level-1 and level-2 backups from failing.
You must perform a level-0 backup of the modified storage spaces to ensure that you can restore the unlogged data before you switch to a logging table type:
When you convert a nonlogging database to a logging database
When you convert a RAW table to standard
When a storage space or partition becomes full, a message is shown in the online message log file.
You can configure alarms that are triggered when storage spaces become full with the STORAGE_FULL_ALARM configuration parameter. You can specify how often
alarms are sent and the minimum severity level of alarms to be sent. By default, the alarm interval is 600 seconds and the alarm severity level is 3. For more information
about the STORAGE_FULL_ALARM configuration parameter and event alarms, see the HCL Informix Administrator's Reference.
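For example, the defaults described above would correspond to the following onconfig line, assuming the interval,severity value format:

```
STORAGE_FULL_ALARM 600,3
```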
If the primary server in a high-availability cluster encounters an out-of-space condition, and the STORAGE_FULL_ALARM configuration parameter is enabled, the event
alarm is triggered and an error status is returned on the primary server but not on any of the secondary servers. This is expected behavior because log records are no
longer sent from the primary server to the secondary servers when the primary server encounters an out-of-space condition. In this case, the secondary servers never
exceed their storage limits and thus do not trigger an event alarm or return an error status.
You can use the HCL Informix® Scheduler to set up a task that automatically monitors the status of storage spaces. The properties of the task define the information that
the Scheduler collects and specify how frequently the task runs. For example, you might define a task to monitor storage spaces every hour, five days a week. For more
information, see The Scheduler and Creating a task.
Rename dbspaces
You can use the onspaces utility to rename a dbspace if you are user informix or have DBA privileges and the database server is in quiescent mode (no
other mode is allowed).
Related concepts:
Control of where simple large object data is stored
Size of the root dbspace
For information about creating a dbspace with a non-default page size, see Creating a dbspace with a non-default page size.
Any newly added dbspace (and its mirror, if one exists) is available immediately. If you are using mirroring, you can mirror the dbspace when you create it. Mirroring
takes effect immediately.
KB is the default unit for the -s size and -o offset options. To specify a size in megabytes, convert it to KB by multiplying the number of megabytes by 1024 (for example, 10 MB = 10 * 1024 KB).
See Creating a dbspace with a non-default page size for information about additional onspaces options if you are creating a dbspace with a non-default page
size.
5. If you do not want to specify the first and next extent sizes for the tblspace tblspace in a dbspace, go to step 6.
If you want to specify the first and next extent sizes for the tblspace tblspace in a dbspace, see additional information in Specifying the first and next extent sizes
for the tblspace tblspace.
6. After you create the dbspace, you must perform a level-0 backup of the root dbspace and the new dbspace.
The following example shows how to create a 10-megabyte mirrored dbspace, dbspce1, with an offset of 5000 KB for both the primary and mirror chunks, using raw disk
space on UNIX:
The following example shows how to create a 5-megabyte dbspace, dbspc3, with an offset of 200 KB, from raw disk space (drive e:) on Windows:
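Hedged sketches of the two commands (the device path names are hypothetical; sizes and offsets are in KB, so 10 MB is written as 10240):

```shell
# UNIX: 10 MB mirrored dbspace dbspce1, primary and mirror chunks both at offset 5000 KB
onspaces -c -d dbspce1 -p /dev/rdsk/c0t1d0s4 -o 5000 -s 10240 -m /dev/rdsk/c0t2d0s4 5000

# Windows: 5 MB dbspace dbspc3 at offset 200 KB on raw drive e:
onspaces -c -d dbspc3 -p \\.\e: -o 200 -s 5120
```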
For more information about creating a dbspace with onspaces, see Dbspaces and information about the onspaces utility in the IBM® Informix® Administrator's
Reference.
Specifying the first and next extent sizes for the tblspace tblspace
You can specify first and next extent sizes if you want to reduce the number of tblspace tblspace extents and reduce the frequency of situations when you must
place the tblspace tblspace extents in non-primary chunks. (A primary chunk is the initial chunk in a dbspace.)
You can choose to specify the first extent size, the next extent size, both the first and the next extent size, or neither extent size. If you do not specify first or next extent
sizes for the tblspace tblspace, HCL Informix® uses the existing default extent sizes.
You can use the TBLTBLFIRST and TBLTBLNEXT configuration parameters to specify the first and next extent sizes for the tblspace tblspace in the root dbspace that is
created when the server is initialized.
You can use the onspaces utility to specify the first and next extent sizes for the tblspace tblspace in non-root dbspaces.
You can specify the first and next extent sizes only when you create the dbspace; you cannot alter them after the dbspace is created. In addition, you cannot specify
extent sizes for temporary dbspaces, sbspaces, blobspaces, or external spaces.
1. Determine the total number of pages required in the tblspace tblspace. The number of pages is equal to the sum of the number of tables, detached indexes, and
table fragments likely to be located in the dbspace plus one page for the tblspace tblspace.
2. Calculate the number of KB required for the number of pages. This number depends on the number of KB to a page on the system.
3. Determine the space management requirements on your system by considering the importance of having all of the extents for the tblspace tblspace allocated
during dbspace creation and whether the extents must be allocated contiguously. The more important these issues are, the larger the first extent size must be. If
you are less concerned with having non-contiguous extents, possibly in secondary chunks, then the first and next extent sizes can be smaller.
4. Specify the extent size as follows:
If the space requirement is for the root dbspace, specify the first extent size in the TBLTBLFIRST configuration parameter and the next extent size in the
TBLTBLNEXT configuration parameter. Then initialize the database server instance.
If the space requirement is for a non-root dbspace, indicate the first and next extent sizes on the command line using the onspaces utility to create the
dbspace.
When you specify first and next extent sizes, the sizes must be in KB and must be multiples of the page size.
Use the following onspaces utility -ef and -en options to specify the first and next extent sizes for the tblspace tblspace in non-root dbspaces:
First extent size: -ef size_in_kbytes
Next extent size: -en size_in_kbytes
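For example, the following command sketch creates a dbspace whose tblspace tblspace has a 1000 KB first extent and a 500 KB next extent (the path name and sizes are illustrative):

```shell
onspaces -c -d dbspace3 -p /dev/rdsk/c0t3d0s4 -o 0 -s 200000 -ef 1000 -en 500
```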
You can use oncheck -pt and oncheck -pT to show the first and next extent sizes of a tblspace tblspace.
If data replication is being used and a dbspace is created on the primary database server, the first and next extent sizes are passed to the secondary database server
through the ADDCHK log record.
For more information about the onspaces utility, oncheck commands, and specifying the first and next extent sizes for the tblspace tblspace, see the HCL Informix
Administrator's Reference.
Parent topic: Creating a dbspace that uses the default page size
Related information:
TBLTBLFIRST configuration parameter
TBLTBLNEXT configuration parameter
The root dbspace uses the default page size. If you want to create a dbspace with a different page size, the size must be an integral multiple of the default page size,
and cannot be greater than 16 KB.
For systems with sufficient storage, the performance advantages of a larger page size include:
Reduced depth of B-tree indexes, even for smaller index keys.
Decreased checkpoint time, which typically occurs with larger page sizes.
A table can be in one dbspace and the index for that table can be in another dbspace. The page size for these partitions can be different.
1. If you upgraded from a version before version 10.00, run the onmode -BC 2 command to enable the large chunk mode. By default, when HCL Informix® is first
initialized or restarted, Informix starts with the large chunk mode enabled.
2. Optional: Create a buffer pool that corresponds to the page size of the dbspace. You can use the onparams utility or the BUFFERPOOL configuration parameter.
If you create a dbspace with a page size that does not have a corresponding buffer pool, Informix automatically creates a buffer pool using the default values for
the BUFFERPOOL configuration parameter as defined in the onconfig file.
You cannot have multiple buffer pools with the same page size.
3. Define the page size of the dbspace when you create the dbspace. You can use the onspaces utility.
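A sketch of steps 2 and 3 for an 8 KB page size, assuming the size/buffers form of the BUFFERPOOL parameter and the onspaces -k option (all values are illustrative):

```shell
# onconfig: buffer pool for 8 KB pages (step 2)
#   BUFFERPOOL size=8K,buffers=10000,lrus=8,lru_min_dirty=50.0,lru_max_dirty=60.0

# Create the dbspace with an 8 KB page size (step 3)
onspaces -c -d dbspace_8k -k 8 -p /dev/rdsk/c0t4d0s4 -o 0 -s 100000
```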
Tip: If you use non-default page sizes, you might be required to increase the size of your physical log. If you perform many updates to non-default pages you might
require a 150 - 200 percent increase of the physical log size. Some experimentation might be required to tune the physical log. You can adjust the size of the physical log
as necessary according to how frequently the filling of the physical log triggers checkpoints.
Direct I/O must be available and the file system must support direct I/O for the page size used for the dbspace chunk.
You can configure HCL Informix® to use either raw devices or cooked files for dbspace chunks. In general, cooked files are slower because of the additional overhead and
buffering provided by the file system. Direct I/O bypasses the use of the file system buffers, and therefore is more efficient for reads and writes that go to disk. You
specify direct I/O with the DIRECT_IO configuration parameter. If your file system supports direct I/O for the page size used for the dbspace chunk and you use direct
I/O, performance for cooked files can approach the performance of raw devices used for dbspace chunks.
1. Verify that you have direct I/O and the file system supports direct I/O for the page size used for the dbspace chunk.
2. Enable direct I/O by setting the DIRECT_IO configuration parameter to 1.
If you have an AIX® operating system, you can also enable concurrent I/O for Informix to use with direct I/O when reading and writing to chunks that use cooked files.
For more information about using direct I/O or concurrent I/O, see the HCL Informix Performance Guide.
Storing multiple table or index fragments in a single dbspace improves query performance over storing each fragment in a different dbspace and simplifies management
of dbspaces.
Suppose you are creating a fragmented table using an expression-based distribution scheme in which each expression specifies the data set that is placed in a
particular fragment. You might decide to separate the data in the table so that data from one month is in one dbspace and data from each of the next 11 months is in 11
other dbspaces. However, if you want to use only one dbspace for all of the yearly data, you can create named fragments so that the data for each month is stored in its own fragment within that one dbspace.
If you create a fragmented table with named fragments, each row in the sysfragments system catalog table contains a fragment name in the partition column. If you
create a fragmented table without named fragments, the name of the dbspace is in the partition column. The flags column in the sysfragments system catalog table
indicates whether the table uses named fragments.
You can create tables and indexes with named fragments, and you can create, drop, and alter named fragments using the PARTITION keyword and the fragment name.
To create a fragmented table with named fragments, use SQL syntax as shown in the following example:
If you created a table or index fragment containing named fragments, you must use syntax containing the fragment name when you use the ALTER FRAGMENT
statement, as shown in the following examples:
You can use the PARTITION BY EXPRESSION keywords in place of the FRAGMENT BY EXPRESSION keywords in the CREATE TABLE, CREATE INDEX, and ALTER
FRAGMENT ON INDEX statements, as shown in this example:
Use ALTER FRAGMENT syntax to change fragmented tables and indexes that do not have named fragments into tables and indexes that have named fragments. The
following syntax shows how you might convert a fragmented table with multiple dbspaces into a fragmented table with named fragments:
The following syntax shows how you might convert a fragmented index into an index that contains named fragments:
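As a hedged sketch of the general shape of this syntax (the table, index, fragment, and dbspace names are invented):

```sql
-- Create a table whose named fragments all share one dbspace
CREATE TABLE tb1 (a INT)
  FRAGMENT BY EXPRESSION
    PARTITION jan (a >= 1 AND a <= 31) IN dbs1,
    PARTITION feb (a >= 32 AND a <= 59) IN dbs1,
    PARTITION rest REMAINDER IN dbs1;

-- Convert an existing fragmented table to one with named fragments
ALTER FRAGMENT ON TABLE tb2 INIT FRAGMENT BY EXPRESSION
  PARTITION part1 (a <= 100) IN dbs1,
  PARTITION part2 (a > 100) IN dbs1;

-- Convert a fragmented index to one with named fragments
ALTER FRAGMENT ON INDEX idx1 INIT FRAGMENT BY EXPRESSION
  PARTITION ipart1 (a <= 100) IN dbs1,
  PARTITION ipart2 (a > 100) IN dbs1;
```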
See the HCL Informix® Performance Guide for more information about fragmentation, including fragmentation guidelines, procedures for fragmenting indexes,
procedures for creating attached and detached indexes with named fragments, and examples of SQL statements used to create attached and detached indexes
containing named fragments.
See the IBM Informix Guide to SQL: Syntax for more syntax details, including information about named fragments in the GRANT FRAGMENT and REVOKE
FRAGMENT statements, and details for using the DROP, DETACH, and MODIFY clauses of the ALTER FRAGMENT statement.
For more information, see Creating a dbspace that uses the default page size.
2. Use the DBSPACETEMP environment variable or the DBSPACETEMP configuration parameter to specify the dbspaces that the database server can use for
temporary storage.
The DBSPACETEMP configuration parameter can contain dbspaces with a non-default page size. Although you can include dbspaces with different page sizes in
the parameter list for DBSPACETEMP, the database server only uses dbspaces with the same page size as the first listed dbspace.
For further information about DBSPACETEMP, see the topics about configuration parameters in the HCL Informix® Administrator's Reference.
3. If you create more than one temporary dbspace, locate the dbspaces on separate disks to optimize I/O.
If you are creating a temporary dbspace, you must make the database server aware of the existence of the newly created temporary dbspace by setting the
DBSPACETEMP configuration parameter, the DBSPACETEMP environment variable, or both. The database server does not begin to use the temporary dbspace until you
take both of the following steps:
Set the DBSPACETEMP configuration parameter, the DBSPACETEMP environment variable, or both.
Restart the database server.
The following example shows how to create a 5-megabyte temporary dbspace named temp_space with an offset of 5000 KB:
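A hedged sketch of such a command (the device path is hypothetical; -t marks the dbspace as temporary, and sizes are in KB):

```shell
onspaces -c -t -d temp_space -p /dev/rdsk/c0t5d0s4 -o 5000 -s 5120
```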
Related concepts:
Size of the root dbspace
If the size of the chunk is greater than the available space on the disk, a message is displayed and no dbspace is created. However, the cooked file that the database
server created for the initial chunk is not removed. Its size represents the space left on your file system before you created the dbspace. Remove this file to reclaim the
space.
Important: The newly added chunk (and its associated mirror, if one exists) is available immediately. If you are adding a chunk to a mirrored storage space, you must
also add a mirror chunk.
2. Ensure that the database server is in online, administration, or quiescent mode, or the cleanup phase of fast-recovery mode.
3. Allocate disk space for the chunk, as described in Allocate disk space.
4. To add a chunk, use the -a option of onspaces.
If the storage space is mirrored, you must specify the path name of both a primary chunk and mirror chunk.
If you specify an incorrect path name, offset, or size, the database server does not create the chunk and displays an error message. Also see What to do if you
run out of disk space.
5. After you create the chunk, you must perform a level-0 backup of the root dbspace and the dbspace, blobspace, or sbspace that contains the chunk.
The following example adds a 10-megabyte mirror chunk to blobsp3. An offset of 200 KB for both the primary and mirror chunk is specified. If you are not adding a
mirror chunk, you can omit the -m option.
The next example adds a 5-megabyte chunk of raw disk space, at an offset of 5200 KB, to dbspace dbspc3.
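Hedged sketches of the two commands (the device paths are hypothetical; sizes are in KB):

```shell
# 10 MB chunk added to blobsp3 with its mirror chunk, both at offset 200 KB
onspaces -a blobsp3 -p /dev/rdsk/c0t1d0s5 -o 200 -s 10240 -m /dev/rdsk/c0t2d0s5 200

# 5 MB chunk of raw disk space at offset 5200 KB added to dbspc3
onspaces -a dbspc3 -p /dev/rdsk/c0t3d0s5 -o 5200 -s 5120
```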
You can also define information that Informix® can use to automatically extend the size of a chunk when additional storage space is required for an application. If you
have extendable chunks, you are not required to add new chunks or spend time trying to determine which storage space will run out of space and when it will run out of
space.
Related concepts:
Automatic space management
Related tasks:
Adding a chunk to an sbspace
You can rename standard dbspaces and all other spaces, including blobspaces, smart blobspaces, temporary spaces, and external spaces. However, you cannot
rename any critical dbspace, such as a root dbspace or a dbspace that contains physical logs.
You cannot rename a dbspace or an sbspace on a secondary database server, or when the secondary database server is part of an Enterprise Replication
configuration.
The rename dbspace operation only changes the dbspace name; it does not reorganize data.
The rename dbspace command updates the dbspace name in all places where that name is stored. This includes reserved pages on disk, system catalogs, the
ONCONFIG configuration file, and in-memory data structures.
Important: After renaming a dbspace, perform a level-0 archive of the renamed dbspace and the root dbspace. For information, see the IBM® Informix® Backup and
Restore Guide.
If you rename a dbspace, you must rewrite and recompile any stored procedure code that references the old dbspace name.
If you have a stored procedure that contains the ALTER FRAGMENT keywords and a reference to the dbspace name, you must rewrite and recompile that stored
procedure.
If you rename dbspaces that are specified in the DATASKIP configuration parameter, you must manually update the DATASKIP configuration parameter after renaming
the dbspace.
If you enable automatic location and fragmentation, the database server performs the following tasks:
Stores new databases for which you do not specify a location in the optimal dbspace instead of in the root dbspace. By default, all dbspaces except dbspaces that
are dedicated to tenant databases are available.
Stores new tables and indexes for which you do not specify a location in the optimal dbspace instead of in the same dbspace as the database.
Allocates an initial number of round-robin fragments for new tables. A table fragment does not have an extent until a row is inserted into the fragment, unless you
include the FIRST EXTENT clause in the CREATE TABLE statement.
Adds more table fragments as the table grows.
To enable automatic location and fragmentation, set the AUTOLOCATE configuration parameter or the AUTOLOCATE session environment variable to a positive integer.
Automatic location is not applicable to tenant databases or the tables, fragments, and indexes within tenant databases.
To view the list of available dbspaces, query the sysautolocate system catalog table.
To add a dbspace to the list of available dbspaces, run the task() or admin() SQL administration API function with the autolocate database, the autolocate
database add, or the autolocate database anywhere argument.
To remove a dbspace from the list of available dbspaces, run the task() or admin() SQL administration API function with the autolocate database remove
argument.
To disable automatic location and fragmentation for tables in a particular database, run the task() or admin() SQL administration API function with the autolocate
database off argument.
To disable automatic location and fragmentation of tables in all databases, set the AUTOLOCATE configuration parameter or the AUTOLOCATE session environment
variable to 0.
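As a sketch of the workflow described above, the following statements enable automatic location for a session, make a dbspace available to a database, and query the list of available dbspaces. The database name sales_db and dbspace name dbspace2 are hypothetical:

```sql
-- Enable automatic location and fragmentation for this session
-- (a positive integer enables the feature)
SET ENVIRONMENT AUTOLOCATE '4';

-- Make dbspace2 available for automatic location in sales_db
DATABASE sysadmin;
EXECUTE FUNCTION task("autolocate database add", "sales_db", "dbspace2");

-- View the list of available dbspaces for that database
SELECT * FROM sales_db:sysautolocate;
```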
Related concepts:
Table fragmentation and data storage
Storage space creation and management
Databases
Related information:
AUTOLOCATE configuration parameter
autolocate database argument: Specify dbspaces for automatic location and fragmentation (SQL administration API)
autolocate database add argument: Add a dbspace to the dbspace list (SQL administration API)
autolocate database remove argument: Remove a dbspace from the dbspace list (SQL administration API)
autolocate database anywhere argument: Add all dbspaces to the dbspace list (SQL administration API)
autolocate database off argument: Disable automatic fragmentation for a database (SQL administration API)
AUTOLOCATE session environment option
SYSAUTOLOCATE
The database server stores TEXT and BYTE data in dbspaces or blobspaces, but blobspaces are more efficient. For information about adding a chunk, see Adding a
chunk to a dbspace or blobspace.
Creating a blobspace
You can use onspaces to create a blobspace.
Specify a blobspace name of up to 128 bytes. The name must be unique and must begin with a letter or underscore. You can use letters, digits, underscores, and $
characters in the name.
Important: You can mirror the blobspace when you create it if mirroring is enabled for the database server. Mirroring takes effect immediately.
2. Ensure that the database server is in online, administration, or quiescent mode, or the cleanup phase of fast-recovery mode.
3. To add a blobspace, use the onspaces -c -b options.
a. Specify an explicit path name for the blobspace. If the blobspace is mirrored, you must specify the path name and size of both the primary chunk and
mirror chunk.
b. Use the -o option to specify an offset for the blobspace.
c. Use the -s option to specify the size of the blobspace chunk, in KB.
d. Use the -g option to specify the blobpage size, in terms of the number of disk pages per blobpage.
See Determine blobpage size. For example, if your database server instance has a disk-page size of 2 KB, and you want your blobpages to have a size of
10 KB, enter 5 in this field.
If you specify an incorrect path name, offset, or size, the database server does not create the blobspace and displays an error message. Also see What to
do if you run out of disk space.
4. After you create the blobspace, you must perform a level-0 backup of the root dbspace and the new blobspace.
The following example shows how to create a 10-megabyte mirrored blobspace, blobsp3, with a blobpage size of 10 KB, where the database server page size is 2 KB.
An offset of 200 KB for the primary and mirror chunks is specified. The blobspace is created from raw disk space on UNIX.
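A command of roughly the following form produces that blobspace; the raw-device path names are hypothetical:

```shell
# Create a 10 MB mirrored blobspace with 10 KB blobpages
# (-g 5 = 5 disk pages of 2 KB each per blobpage).
# Offsets and sizes are in KB; path names are hypothetical.
onspaces -c -b blobsp3 -g 5 -p /dev/rdsk/c0t1d0s4 -o 200 -s 10000 \
    -m /dev/rdsk/c0t2d0s4 200
```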
For reference information about creating a blobspace with onspaces, see information about the onspaces utility in the HCL Informix® Administrator's Reference.
This requirement is true for all blobspaces, regardless of the logging status of the database. To accommodate this requirement, switch to the next logical-log file after you
create a blobspace. (For instructions, see Back up log files to free blobpages.)
For information about calculating an optimal blobpage size, see blobpage size considerations in the topics on the effect of configuration on I/O activity in the HCL
Informix® Performance Guide.
If a table has more than one TEXT or BYTE column, and the objects are not close in size, store each column in a different blobspace, each with an appropriately sized
blobpage. See Tables.
You can use one of the following methods to determine the database server page size for your system:
Run the onstat -b utility to display the system page size, given as buffer size on the last line of the output.
Run the oncheck -pr utility to view the contents of the PAGE_PZERO reserved page.
The following database server utility commands help you determine the optimal blobpage size:
oncheck -pe
oncheck -pB
The oncheck -pe command provides background information about the objects stored in a blobspace:
Complete ownership information (displayed as database:owner.table) for each table that has data stored in the blobspace chunk
The total number of pages used by each table to store its associated TEXT and BYTE data
The total free and total overhead pages in the blobspace
The oncheck -pB command lists the following statistics for each table or database:
The number of blobpages used by the table or database in each blobspace
The average fullness of the blobpages used by each simple large object stored as part of the table or database
For more information, see Monitor blobspace usage with oncheck -pe, Determine blobpage fullness with oncheck -pB, and optimizing blobspace blobpage size in the
topics about table performance considerations in the HCL Informix® Performance Guide.
Creating an sbspace
Use the onspaces utility or IBM® Informix® Server Administrator (ISA) to create an sbspace.
2. Ensure that the database server is in online, administration, or quiescent mode, or in the cleanup phase of fast-recovery mode.
3. Use the onspaces -c -S options to create the sbspace.
a. Use the -p option to specify the path name, the -o option to specify the offset, and the -s option to specify the sbspace size.
b. If you want to mirror the sbspace, use the -m option to specify the mirror path and offset.
c. If you want to use the default storage characteristics for the sbspace, omit the -Df option.
If you want to specify different storage characteristics, use the -Df option. For more information, see Storage characteristics of sbspaces.
You can specify a metadata area for an sbspace or let the database server calculate the size of the metadata area. For more information, see Size
sbspace metadata.
4. After you create the sbspace, you must perform a level-0 backup of the root dbspace and the new sbspace.
5. To start storing smart large objects in this sbspace, specify the space name in the SBSPACENAME configuration parameter.
6. Use onstat -d, onstat -g smb s, and oncheck -cs, -cS, -ps, or -pS to display information about the sbspace.
The following example shows how to create a 20-megabyte mirrored sbspace, sbsp4. Offsets of 500 KB for the primary chunk and 500 KB for the mirror chunk are specified, as is a metadata
area of 150 KB with a 200 KB offset. The AVG_LO_SIZE -Df tag specifies an expected average smart-large-object size of 32 KB.
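Based on the onspaces options listed in the procedure above, the command would take roughly this form; the path names are hypothetical:

```shell
# Create a 20 MB mirrored sbspace with a 150 KB metadata area
# at a 200 KB offset; AVG_LO_SIZE sets the expected average
# smart-large-object size to 32 KB. Path names are hypothetical.
onspaces -c -S sbsp4 -p /dev/rdsk/c0t3d0s4 -o 500 -s 20000 \
    -m /dev/rdsk/c0t4d0s4 500 -Ms 150 -Mo 200 -Df "AVG_LO_SIZE=32"
```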
For information about creating an sbspace and default options for smart large objects, see information about the onspaces utility in the HCL Informix Administrator's
Reference. For information about creating smart large objects, see the IBM Informix DataBlade API Programmer's Guide and IBM Informix ESQL/C Programmer's
Manual.
When you add smart large objects and chunks to the sbspace, the metadata area grows. In addition, the database server reserves 40 percent of the user area to be
used in case the metadata area runs out of space.
For instructions on estimating the size of the sbspace and metadata area, see table performance considerations in the HCL Informix® Performance Guide. Also see
Monitoring the metadata and user-data areas.
You can specify a metadata area for a chunk, let the database server calculate the metadata area, or use the chunk for user data only.
1. Ensure that the database server is in online, administration, or quiescent mode, or in the cleanup phase of fast-recovery mode.
2. Use the onspaces -a option to create the sbspace chunk.
a. Use the -p option to specify the path name, the -o option to specify the offset, and the -s option to specify the chunk size.
b. If you want to mirror the chunk, use the -m option to specify the mirror path and offset.
c. To specify the size and offset of the metadata space, use the -Mo and -Ms options.
The database server allocates the specified amount of metadata area on the new chunk.
d. To allow the database server to calculate the size of the metadata for the new chunk, omit the -Mo and -Ms options.
The database server divides the estimated average size of the smart large objects by the size of the user data area.
e. To use the chunk for user data only, specify the -U option.
If you use the -U option, the database server does not allocate metadata space in this chunk. Instead, the sbspace uses the metadata area in one of the
other chunks.
3. After you add a chunk to the sbspace, the database server writes the CHRESERV and CHKADJUP log records.
4. Perform a level-0 backup of the root dbspace and the sbspace.
5. Use onstat -d and oncheck -pe to monitor the amount of free space in the sbspace chunk.
This example adds a 10-megabyte mirror chunk to sbsp4. An offset of 200 KB for both the primary and mirror chunk is specified. If you are not adding a mirror chunk,
you can omit the -m option. The -U option specifies that the new chunk contains user data exclusively.
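Sketching that example with the onspaces -a options described above (the path names are hypothetical):

```shell
# Add a 10 MB mirrored, user-data-only chunk to sbsp4.
# Offsets of 200 KB for both primary and mirror chunks;
# -U stores user data only. Path names are hypothetical.
onspaces -a sbsp4 -p /dev/rdsk/c0t5d0s4 -o 200 -s 10000 \
    -m /dev/rdsk/c0t6d0s4 200 -U
```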
You can also define information that Informix® can use to automatically expand the size of a chunk when additional storage space is required for an application. If you
have extendable chunks, you are not required to add new chunks or spend time trying to determine which storage space (dbspace, temporary dbspace, sbspace,
temporary sbspace, or blobspace) will run out of space, or when.
Related concepts:
Automatic space management
Related tasks:
Adding a chunk to a dbspace or blobspace
Related reference:
Monitor sbspaces
For more information, see Storage characteristics of sbspaces and managing sbspaces in the topics about table performance considerations in your HCL Informix®
Performance Guide.
1. Allocate space for the temporary sbspace. For details, see Allocate disk space.
For information about SBSPACETEMP, see the configuration parameters topics in the HCL Informix Administrator's Reference.
4. Set the SBSPACETEMP configuration parameter to the name of the default temporary sbspace storage area.
For information and an example of onstat -d output, see the onstat utility in the HCL Informix Administrator's Reference.
6. Specify the LO_CREATE_TEMP flag when you create a temporary smart large object.
mi_lo_specset_flags(lo_spec,LO_CREATE_TEMP);
ifx_lo_specset_flags(lo_spec,LO_CREATE_TEMP);
For information about creating smart large objects, see the IBM Informix DataBlade API Programmer's Guide and IBM Informix ESQL/C Programmer's Manual.
To create the plogspace, run the onspaces -c -P command or the admin() or task() SQL administration API function with the create plogspace argument.
If you want to change the location of the plogspace to a different chunk, create a new plogspace. The physical log is moved to the new plogspace and the old plogspace
is dropped.
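As a sketch, a plogspace created with onspaces might look like the following; the space name, path, and size are hypothetical:

```shell
# Create a plogspace in a hypothetical cooked file; -s is in KB
onspaces -c -P plogspace1 -p /ifxchunks/plog_chunk -s 60000
```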
You can modify the chunk in the plogspace in the following ways:
Mark the chunk as not extendable. Run the admin() or task() SQL administration API function with the modify chunk extendable off argument.
Change the extend size of the chunk. The default extend size is 10000 KB. Run the admin() or task() SQL administration API function with the modify
space sp_sizes argument.
Related concepts:
Plogspace
Related information:
onspaces -c -P: Create a plogspace
create plogspace: Create a plogspace (SQL administration API)
modify chunk extendable off argument: Mark a chunk as not extendable (SQL administration API)
modify space sp_sizes argument: Modify sizes of an extendable storage space (SQL administration API)
When the server expands a dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace, the server can add a chunk to the storage space. The server can
also extend a chunk in a dbspace, plogspace, or temporary dbspace that is not mirrored.
To configure automatic and manual space management, you run SQL administration API commands or use the HCL OpenAdmin Tool (OAT) for Informix® to perform
these tasks:
1. Create, modify, and delete one or more entries in the storage pool. The storage pool contains entries for available raw devices, cooked files, and directories that
Informix uses to expand a storage space.
2. Mark a chunk as extendable.
3. Modify the create and extend size of a storage space (optional).
4. Change the threshold and wait time for the automatic addition of more space (optional).
5. Configure the frequency of the monitor low storage task (optional).
If your storage pool contains entries, you can also run SQL administration API commands to:
Manually expand the storage space or extend a chunk, when you do not want to wait for the task that automatically expands the space to run.
Manually create storage spaces from storage pool entries and return space from empty storage spaces to the storage pool.
By default, the SP_AUTOEXPAND configuration parameter is set to 1 to enable automatic expansion of storage spaces. If you do not want the server to automatically
expand space, set the SP_AUTOEXPAND configuration parameter to 0 to disable the automatic creation or extension of chunks. You can also specify that a particular
chunk is not extendable.
Tip:
In some situations, the database server might not automatically expand a temporary dbspace that is listed in the DBSPACETEMP configuration parameter after you
configured the server to automatically expand an existing storage space. If operations (such as an index build or sort) that use the temporary dbspace run out of space,
you receive an out of space error. To work around this problem, you must manually add a chunk to the temporary dbspace or use a bigger temporary dbspace.
If you have a storage pool and the database server participates in Enterprise Replication, storage spaces that are necessary for replication are created automatically if
needed when you define a replication server.
Changing the threshold and wait time for the automatic addition of more space
While Informix can react to out-of-space conditions by automatically extending or adding chunks when a storage space is full, you can also configure the server to
extend or add chunks before a storage space is full.
Example of minimally configuring for and testing the automatic addition of more space
This example shows how you can minimally configure and then test the automatic addition of more space. You can do this by creating a dbspace, filling the
space, adding an entry to the Informix storage pool, and loading tables into the space. When the space fills, Informix automatically expands it.
Related concepts:
Storage space creation and management
Size of the root dbspace
Extendable chunks
The storage pool
Related tasks:
Adding a chunk to a dbspace or blobspace
Adding a chunk to an sbspace
Creating a tenant database
Related information:
SP_WAITTIME configuration parameter
Each entry in the storage pool contains information about a directory, cooked file, or raw device that a database server instance can use if necessary to automatically
expand an existing storage space.
To create a storage pool entry, run the admin() or task() function with the storagepool add argument, as follows:
The default units for storage pool sizes and offsets are KB. However, you can specify information in any of the ways shown in the following examples:
"100000"
"100000 K"
"100 MB"
"100 GB"
"100 TB"
Modifying a storage pool entry
To modify a storage pool entry, run the admin() or task() function with the storagepool modify argument, as follows:
To delete a storage pool entry, run the admin() or task() function with the storagepool delete argument, as follows:
To delete all storage pool entries, run the admin() or task() function with the storagepool purge all argument, as follows:
To delete all storage pool entries that are full, run the admin() or task() function with the storagepool purge full argument, as follows:
To delete storage pool entries that have errors, run the admin() or task() function with the storagepool purge errors argument, as follows:
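As a sketch of one purge variant (the argument format is an assumption based on the purge arguments listed above):

```sql
DATABASE sysadmin;
-- Delete all storage pool entries that are full
EXECUTE FUNCTION task("storagepool purge", "full");
```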
Examples
The following command adds a directory named /region2/dbspaces with a beginning offset of 0, a total size of 0, an initial chunk size of 20 MB, and a high priority. In
this example the offset of 0 and the total size of 0 are the only acceptable entries for a directory.
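A sketch of that command follows; the priority encoding (1 = high, 2 = medium, 3 = low) is an assumption based on the priorities named in these examples:

```sql
DATABASE sysadmin;
-- Add a directory entry: offset 0 and total size 0 (required
-- for directories), 20 MB initial chunk size, high priority (1)
EXECUTE FUNCTION task("storagepool add", "/region2/dbspaces",
    "0", "0", "20 MB", "1");
```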
The following command changes the total size, chunk size, and priority of storage pool entry 8 to 10 GB, 10 MB, and a medium priority.
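A sketch of that command, under the same assumed priority encoding (2 = medium):

```sql
DATABASE sysadmin;
-- Change entry 8: total size 10 GB, chunk size 10 MB,
-- medium priority (2)
EXECUTE FUNCTION task("storagepool modify", "8", "10 GB", "10 MB", "2");
```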
The following command deletes the storage pool entry with an entry ID of 7:
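A sketch of that command:

```sql
DATABASE sysadmin;
-- Delete storage pool entry 7
EXECUTE FUNCTION task("storagepool delete", "7");
```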
Related concepts:
The storage pool
Related information:
storagepool purge argument: Delete storage pool entries (SQL administration API)
storagepool modify argument: Modify a storage pool entry (SQL administration API)
storagepool delete argument: Delete one storage pool entry (SQL administration API)
The server cannot automatically extend the chunk when there is little or no free space in the chunk. (However, if the storage pool contains entries, the server can
expand a storage space by adding another chunk to the storage space.)
You cannot manually extend the size of the chunk.
Related concepts:
Extendable chunks
Related information:
modify chunk extendable argument: Mark a chunk as extendable (SQL administration API)
modify chunk extendable off argument: Mark a chunk as not extendable (SQL administration API)
Run the admin() or task() SQL administration API function with the modify space sp_sizes argument, as follows:
The new_create_size is the minimum size that the server can use to create a new chunk in the specified dbspace, temporary dbspace, sbspace, temporary sbspace, or
blobspace.
The new_extend_size is the minimum size that the server can use to extend a chunk in the specified unmirrored dbspace or temporary dbspace.
Specify either size as a number (for the number of KB) or as a percentage (for a percentage of the total space).
The max_size is the maximum size, in KB, to which the server can expand the storage space. A value of 0 indicates unlimited.
The following command sets the create size to 60 MB, the extend size to 10 MB, and the maximum size to 200 MB for a space that is named dbspace3:
The following command sets the create size to 20 percent and the extend size to 1.5 percent for a space that is named logdbs:
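Sketching both examples with the sp_sizes argument; the KB values are conversions of the sizes named above, and the treatment of values as percentages follows the rule stated in this topic:

```sql
DATABASE sysadmin;
-- dbspace3: 60 MB create size, 10 MB extend size,
-- 200 MB maximum size, all expressed in KB
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace3",
    "61440", "10240", "204800");

-- logdbs: values of 50 or less are treated as percentages
EXECUTE FUNCTION task("modify space sp_sizes", "logdbs", "20", "1.5");
```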
Related information:
modify space sp_sizes argument: Modify sizes of an extendable storage space (SQL administration API)
Specify a threshold for the minimum amount of free KB in a storage space to trigger a task that expands the space.
You can also use the SP_WAITTIME configuration parameter to specify the maximum number of seconds that a thread waits for a space to expand before returning an
out-of-space error.
1. Change the value of the threshold specified in the SP_THRESHOLD configuration parameter from 0 (disabled) to a non-zero value. Specify either a value from 1
to 50 for a percentage, or a value from 1000 to the maximum size of a chunk in KB.
2. Change the value of the SP_WAITTIME configuration parameter, which specifies the maximum number of seconds that a thread waits for a space to expand
before returning an out-of-space error.
Related information:
SP_THRESHOLD configuration parameter
SP_WAITTIME configuration parameter
The default frequency of the mon_low_storage task is once per hour, but you can configure the task to run more or less frequently.
Prerequisite: Specify a value in the SP_THRESHOLD configuration parameter for the minimum amount of free KB in a storage space.
Run the following SQL statements, where minutes is the number of minutes between each run:
DATABASE sysadmin;
UPDATE ph_task SET tk_frequency = INTERVAL (minutes)
MINUTE TO MINUTE WHERE tk_name = "mon_low_storage";
For example, to configure the task to run every 10 minutes, run the following SQL statements:
DATABASE sysadmin;
UPDATE ph_task SET tk_frequency = INTERVAL (10) MINUTE TO MINUTE
WHERE tk_name = "mon_low_storage";
Prerequisites:
You can extend a chunk only if it is in an unmirrored dbspace or temporary dbspace.
The chunk must be marked as extendable before it can be extended. If not, you must run the admin() or task() function with the
modify chunk extendable argument to specify that the chunk is extendable.
If a space cannot be expanded by extending a chunk, the storage pool must contain active entries that the server can use to create new chunks.
Either:
Manually expand a space by running the admin() or task() function with the modify space expand argument, as follows:
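A sketch of the expand command; the space name is hypothetical and the size is in KB:

```sql
DATABASE sysadmin;
-- Expand the (hypothetical) space dbspace3 by 1 GB (in KB)
EXECUTE FUNCTION task("modify space expand", "dbspace3", "1000000");
```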
The server expands the space either by extending a chunk in the space or adding a new chunk. The server might round the requested size up, depending on the
page size of the storage space and the configured chunk size for any storage pool entry used during the expansion.
Manually extend a chunk by running the admin() or task() function with the modify chunk extend argument, as follows:
For example, the following command extends chunk number 12 by 5000 KB:
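The example above can be sketched as:

```sql
DATABASE sysadmin;
-- Extend chunk number 12 by 5000 KB
EXECUTE FUNCTION task("modify chunk extend", "12", "5000");
```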
The server might round the requested size up, depending on the page size of the storage space.
Related concepts:
Extendable chunks
Related information:
modify space expand argument: Expand the size of a space (SQL administration API)
modify chunk extend argument: Extend the size of a chunk (SQL administration API)
To minimally configure for and test the automatic addition of more space:
1. Create a dbspace.
For example, create a dbspace named expandable_dbs and allocate an initial chunk using the first 10000 KB of a cooked file named /my_directory
/my_chunk, as follows:
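A sketch of that step, assuming the cooked file is created first and owned by user informix:

```shell
# Create and secure the cooked file, then create the dbspace
# with a 10000 KB initial chunk at offset 0
touch /my_directory/my_chunk
chmod 660 /my_directory/my_chunk
onspaces -c -d expandable_dbs -p /my_directory/my_chunk -o 0 -s 10000
```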
For example, fill the dbspace without loading a row of data. Instead, create a table and allocate a large set of contiguous free pages to the first extent, as follows:
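A sketch of such a table; the table name and extent size are hypothetical, chosen to claim most of the 10000 KB chunk:

```sql
-- Claim most of the dbspace's free pages in the first extent
-- (EXTENT SIZE and NEXT SIZE are in KB)
CREATE TABLE space_filler (col1 INTEGER)
    IN expandable_dbs EXTENT SIZE 9000 NEXT SIZE 100;
```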
You can monitor the free pages in your chunks by using the onstat -d command or the HCL OpenAdmin Tool (OAT) for Informix. If your dbspace is full, you
receive out-of-space errors when attempting to create and load data into another new table.
For example, add the $INFORMIXDIR/tmp directory to the storage pool, as follows:
DATABASE sysadmin;
EXECUTE FUNCTION task("storagepool add", "$INFORMIXDIR/tmp",
"0", "0", "10000", "2");
4. In the SP_THRESHOLD configuration parameter, set a threshold for the minimum amount of free KB that can exist in a storage space before Informix
automatically runs a task to expand the space.
5. Create and load new tables into your database.
Now, if a storage space becomes full, instead of returning an out-of-space error, Informix automatically creates a cooked file in the $INFORMIXDIR/tmp directory and
adds a chunk to the expandable_dbs dbspace using the new cooked file. As you continue to fill this chunk, the server automatically extends it. The server always
extends chunks, if possible, before adding new ones to a dbspace.
6. Reduce the free space in a storage space to test the value in the SP_THRESHOLD configuration parameter.
Allocate enough pages in a storage space to reduce the free space so it is below the threshold indicated by SP_THRESHOLD. However, do not completely fill the
space.
You should see the space automatically expand the next time that the mon_low_storage task runs.
Allocate all pages in a storage space, and then try to allocate more pages. The allocation should succeed, and you should not receive an out-of-space error.
Informix writes messages to the log whenever it extends or adds a chunk and marks new chunks as extendable.
Run the onstat -d command to display all chunks in the instance. Look for extendable chunks, which are marked with an E flag. The command output shows
that the server automatically expanded the space, either through the addition of a new chunk or by extending the size of an existing chunk.
For example, add the $INFORMIXDIR/tmp directory to the storage pool, as follows:
DATABASE sysadmin;
EXECUTE FUNCTION task("storagepool add", "$INFORMIXDIR/tmp",
"0", "0", "10000", "2");
2. Mark some chunks in unmirrored dbspaces and temporary dbspaces as extendable so that the server can extend the chunks if necessary in the future.
You can also mark an extendable chunk as not extendable. For example, specify that chunk number 10 cannot be extended:
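Both operations can be sketched with the arguments named in this step:

```sql
DATABASE sysadmin;
-- Allow chunk 10 to be extended automatically
EXECUTE FUNCTION task("modify chunk extendable", "10");

-- Later, specify that chunk 10 cannot be extended
EXECUTE FUNCTION task("modify chunk extendable off", "10");
```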
3. In the SP_THRESHOLD configuration parameter, set a threshold for the minimum amount of free KB that can exist in a storage space before Informix®
automatically runs a task to expand the space. Specify either:
A value from 1 to 50, for a percentage
A value from 1000 to the maximum size of the chunk, in KB
If an individual storage space fills beyond this threshold that you define and remains that full until the space-monitoring task (mon_low_storage) next runs, the
server attempts to expand the space by extending an extendable chunk or by using the storage pool to add a chunk.
For example, suppose the SP_THRESHOLD value is 5.5, which the server treats as 5.5 percent. If a space runs low on free pages, and the free space
percentage falls below 5.5 percent and remains below that level until the mon_low_storage task runs next, that task attempts to expand the space. If
SP_THRESHOLD is set to 50000 and a space has fewer than 50000 free KB, that space is expanded the next time mon_low_storage runs.
4. Optional: Change how often the mon_low_storage task runs. This task periodically scans the list of dbspaces to find spaces that fall below the threshold
indicated by SP_THRESHOLD configuration parameter.
For example, to configure the task to run every 10 minutes, run the following SQL statements:
DATABASE sysadmin;
UPDATE ph_task SET tk_frequency = INTERVAL (10) MINUTE TO MINUTE
WHERE tk_name = "mon_low_storage";
5. Optional: Change the value of the SP_WAITTIME configuration parameter, which specifies the maximum number of seconds that a thread waits for a space to
expand before returning an out-of-space error.
6. Optional: Change two sizes that are associated with expanding a storage space:
The extend size, which is the minimum size that is used when extending a chunk in a dbspace, temporary dbspace, or the plogspace
The create size, which is the minimum size that is used when creating a new chunk in a dbspace, temporary dbspace, sbspace, temporary sbspace, or
blobspace that is not a mirror space
For example, the following command sets the create size and extend size to 60 MB and 10 MB, respectively, for space number 3:
After you configure for the automatic expansion of a storage space, you can also manually expand the space or extend a chunk in the space, as necessary.
Before you drop a chunk, ensure that the database server is in the correct mode, using the following table as a guideline.
If any pages remain allocated to nonoverhead entities, the utility returns the following error: Chunk is not empty.
In addition, when a dbspace consists of two or more chunks and the additional chunks do not contain user data, the additional chunks cannot be deleted if the chunks
contain a tblspace tblspace.
If you receive the Chunk is not empty message, you must determine which table or other entity still occupies space in the chunk by running oncheck -pe to list
contents of the extent.
Usually, the pages can be removed when you drop the table that owns them. Then reenter the utility command.
You cannot drop the initial chunk of a dbspace with the syntax in the previous example. Instead, you must drop the dbspace. Use the fchunk column of onstat -d to
determine which is the initial chunk of a dbspace. For more information about onstat, see information about the onstat utility in the HCL Informix® Administrator's
Reference.
For information about dropping a chunk from a dbspace with onspaces, see the IBM Informix Administrator's Reference.
You cannot drop the initial chunk of an sbspace with the syntax in the previous example. Instead, you must drop the sbspace. Use the fchunk column of onstat -d to
determine which chunk is the initial chunk of an sbspace.
onspaces -d sbsp3 -f
Warning: If you force the drop of an sbspace, you might introduce consistency problems between tables and sbspaces.
Rarely, a smart large object with a reference count of 0 remains. You can use the onspaces -cl command to delete all smart large objects that have a reference count
of 0 and are not open by any application.
For information about using onspaces -cl, see information about the onspaces utility in the HCL Informix® Administrator's Reference.
On UNIX, you must be logged in as root or informix to drop a storage space. On Windows, you must be a member of the Informix-Admin group to drop a storage
space.
You can drop a storage space only when the database server is in online, administration, or quiescent mode.
Before you drop a blobspace, you must drop all tables that have a TEXT or BYTE column that references the blobspace.
Run oncheck -pe to verify that no tables or log files are located in the dbspace or blobspace.
Before you drop an sbspace, you must drop all tables that have a CLOB or BLOB column that reference objects that are stored in the sbspace. For sbspaces, you are
not required to delete columns that point to an sbspace, but these columns must be null; that is, all smart large objects must be deallocated from the sbspace.
Tip: If you drop tables on dbspaces where light appends are occurring, the light appends might be slower than you expect. The symptom of this problem is physical
logging activity. If light appends are slower than you expect, make sure that no tables are dropped in the dbspace either before or during the light appends. If you have
dropped tables, force a checkpoint with onmode -c before you perform the light append.
Important: Dropping a chunk or a dbspace triggers a blocking checkpoint, which forces all database updates to wait while all the buffer pools are flushed to disk. This
update blocking can be significantly longer during a blocking checkpoint than during a non-blocking checkpoint, especially if the buffer pool is large.
If you want to drop only a storage-space mirror, turn off mirroring. (See End mirroring.) This action drops the dbspace, blobspace, or sbspace mirrors and frees the
chunks for other uses.
onspaces -d dbspce5
onspaces -d blobsp3
Use the -d option with the -f option if you want to drop an sbspace that contains data. If you omit the -f option, you cannot drop an sbspace that contains data. This
example drops an sbspace called sbspc4 and its mirrors.
onspaces -d sbspc4 -f
Warning: If you use the -f option, the tables in the database server might have dead pointers to the deleted smart large objects.
For information about dropping a storage space with onspaces, see information about the onspaces utility in the HCL Informix® Administrator's Reference.
Important: After you drop a dbspace, blobspace, or sbspace, the newly freed chunks are available for reassignment to other dbspaces, blobspaces, or sbspaces.
However, before you reassign the newly freed chunks, you must perform a level-0 backup of the root dbspace and the modified storage space. If you do not perform this
backup, and you subsequently must perform a restore, the restore might fail because the backup reserved pages are not up-to-date.
Prerequisite: The storage pool must contain entries (a directory, cooked file, or raw device).
Run the admin() or task() function with one of the following arguments for creating a space from the storage pool. The elements you use in the command vary, depending
on the type of space that you are creating.
"space_name", "size",);
Examples
The following command creates a mirrored blobspace named blobspace1. The new blobspace has a size of 100 gigabytes and a blobpage size of 100 pages.
The following command adds a chunk to the dbspace named logdbs. The new chunk has a size of 200 megabytes.
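Sketches of both commands follow; the exact argument order, and the use of a trailing "1" to request mirroring, are assumptions based on the arguments named in these examples:

```sql
DATABASE sysadmin;
-- Create a 100 GB mirrored blobspace with a 100-page
-- blobpage size (the trailing "1" requests mirroring)
EXECUTE FUNCTION task("create blobspace from storagepool",
    "blobspace1", "100 GB", "100", "1");

-- Add a 200 MB chunk to the dbspace named logdbs
EXECUTE FUNCTION task("create chunk from storagepool", "logdbs", "200 MB");
```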
Related information:
create dbspace from storagepool argument: Create a dbspace from the storage pool (SQL administration API)
create tempdbspace from storagepool argument: Create a temporary dbspace from the storage pool (SQL administration API)
create blobspace from storagepool argument: Create a blobspace from the storage pool (SQL administration API)
create sbspace from storagepool argument: Create an sbspace from the storage pool (SQL administration API)
create tempsbspace from storagepool argument: Create a temporary sbspace from the storage pool (SQL administration API)
create chunk from storagepool argument: Create a chunk from the storage pool (SQL administration API)
To return storage space from an empty chunk, dbspace, temporary dbspace, blobspace, sbspace, or temporary sbspace to the storage pool:
Run the admin() or task() function with one of the following arguments for returning space to the storage pool. The elements you use in the command vary, depending on
the type of object that you are dropping.
Examples
The following command drops an empty blobspace named blob4 and adds all of the freed space to the storage pool.
The following command drops an empty chunk in a dbspace named health and adds all of the freed space to the storage pool.
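Sketches of the two commands follow (the chunk number "3" is a hypothetical placeholder; identify the actual chunk number from onstat -d output):

EXECUTE FUNCTION task("drop blobspace to storagepool", "blob4");
EXECUTE FUNCTION task("drop chunk to storagepool", "3");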
Related information:
drop chunk to storagepool argument: Return space from an empty chunk to the storage pool (SQL administration API)
drop dbspace to storagepool argument: Return space from an empty dbspace to the storage pool (SQL administration API)
drop tempdbspace to storagepool argument: Return space from an empty temporary dbspace to the storage pool (SQL administration API)
drop blobspace to storagepool argument: Return space from an empty blobspace to the storage pool (SQL administration API)
drop sbspace to storagepool argument: Return space from an empty sbspace to the storage pool (SQL administration API)
drop tempsbspace to storagepool argument: Return space from an empty temporary sbspace to the storage pool (SQL administration API)
Create an extspace
Drop an extspace
To create an extspace with onspaces, use the -c option as illustrated in the following example. The following example shows how to create an extspace, pass_space,
that is associated with the UNIX password file.
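A sketch of the command, assuming the standard location of the UNIX password file:

onspaces -c -x pass_space -l /etc/passwd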
Specify an extspace name of up to 128 bytes. The name must be unique and begin with a letter or underscore. You can use letters, digits, underscores, and $ characters
in the name.
Important: The preceding example assumes that you have coded a routine that provides functions for correctly accessing the file passwd and that the file itself exists.
After you have created the extspace, you must use the appropriate commands to allow access to the data in the file passwd. For more information about user-defined
access methods, see the IBM Informix Virtual-Table Interface Programmer's Guide.
For reference information about creating an extspace with onspaces, see information about the onspaces utility in the IBM Informix Administrator's Reference.
onspaces -d pass_space
For the complete syntax of this onspaces option, see information about the onspaces utility in the HCL Informix® Administrator's Reference.
When you run onstat -f, you receive a message that tells you whether the DATASKIP configuration parameter is set to on for all dbspaces, off for all dbspaces, or on
for specific dbspaces.
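For example, a SET DATASKIP statement of the following form enables skipping for two specific dbspaces:

SET DATASKIP ON dbspace1, dbspace5;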
This statement causes the database server to skip dbspace1 or dbspace5 whenever both of these conditions are met:
The application attempts to access one of the dbspaces.
The database server finds that one of the dbspaces is unavailable.
If the database server finds that both dbspace1 and dbspace5 are unavailable, it skips both dbspaces.
A database server administrator can use the DEFAULT setting for the SET DATASKIP statement to control the dataskip feature. Suppose that an application developer
includes the following statement in an application:
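The statement in question, reconstructed here, would be:

SET DATASKIP DEFAULT;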
When a query is run subsequent to this SQL statement, the database server checks the value of the configuration parameter DATASKIP. A database server administrator
can encourage users to use this setting to specify which dbspaces are to be skipped as soon as the database server administrator becomes aware that one or more
dbspaces are unavailable.
If the database server can eliminate the down fragments, the update or delete is successful, but this outcome is independent of the DATASKIP setting.
An INSERT operation for a table fragmented according to an expression-based distribution scheme where the appropriate fragment is down
Any operation that involves referential constraint checking if the constraint involves data in a down fragment
For example, if an application deletes a row that has child rows, the child rows must also be available for deletion.
Any operation that affects an index value (for example, updates to a column that is indexed) where the index in question is located in a down chunk
When you want to skip fragments, use the ON dbspace-list setting to specify a list of dbspaces with the fragments that the database server must skip.
The administrator can monitor the distribution of data over table fragments. If the goal of fragmentation is improved administration response time, it is important for data
to be distributed evenly over the fragments. To monitor fragmentation disk use, you must monitor database server tblspaces, because the unit of disk storage for a
fragment is a tblspace. (For information about how to monitor the data distribution for a fragmented table, see Monitor tblspaces and extents.)
The administrator must monitor I/O request queues for data that is contained in fragments. When I/O queues become unbalanced, the administrator must work with the
DBA to tune the fragmentation strategy. (For an explanation of how to monitor chunk use, including the I/O queues for each chunk, see Monitor chunks.)
The administrator must monitor fragments for availability and take appropriate steps when a dbspace that contains one or more fragments fails. For how to determine if a
chunk is down, see Monitor chunks.
SMI tables
Related concepts:
Databases
For background information about internal database server storage units mentioned in this section, see the chapter about disk structures and storage in the HCL
Informix® Administrator's Reference.
Monitor chunks
Monitor sbspaces
Related concepts:
Control of where simple large object data is stored
You can use this information to track the disk space used by chunks, monitor chunk I/O activity, and check for fragmentation.
SMI tables
If you issue the onstat -d command on an instance with blobspace chunks, the number of free pages shown is out of date. The tilde (~) that precedes the free value
indicates that this number is approximate. The onstat -d command does not register a blobpage as available until the logical log in which a deletion occurred is backed
up and the blobpage is freed. Therefore, if you delete 25 simple large objects and immediately run onstat -d, the newly freed space is not in the onstat output.
To obtain an accurate number of free blobpages in a blobspace chunk, issue the onstat -d update command. For details, see The onstat -d update option.
In onstat -d update output, the flags column in the chunk section provides the following information:
Whether the chunk is the primary chunk or the mirror chunk
Whether the chunk is online, is down, is being recovered, or is a new chunk
For an example of onstat -d output, see information about the onstat utility in the HCL Informix® Administrator's Reference.
Important: You must perform a level-0 backup of the root dbspace and the modified dbspace before mirroring can become active and after turning off mirroring.
If one chunk has a disproportionate amount of I/O activity against it, this chunk might be a system bottleneck.
For an example of onstat -g iof output, see information about the onstat utility in the HCL Informix® Administrator's Reference.
To list the contents of the reserve pages, run oncheck -pr. The following example shows sample output for oncheck -pr. This output is essentially the same as the
onstat -d output; however, if the chunk information has changed since the last checkpoint, these changes are not in the oncheck -pr output.
DBspace number 1
DBspace name rootdbs
Flags 0x20001 No mirror chunks
Number of chunks 2
First chunk 1
Date/Time created 07/28/2008 14:46:55
Partition table page number 14
Logical Log Unique Id 0
Logical Log Position 0
Oldest Logical Log Unique Id 0
Last Logical Log Unique Id 0
Dbspace archive status No archives have occurred
.
.
Validating PAGE_1PCHUNK & PAGE_2PCHUNK...
Using primary chunk page PAGE_2PCHUNK.
Chunk number 1
Flags 0x40 Chunk is online
Chunk path /home/server/root_chunk
Chunk offset 0 (p)
Chunk size 75000 (p)
Number of free pages 40502
DBSpace number 1
.
.
.
The tables within a chunk are listed sequentially. This output is useful for determining chunk fragmentation. If the database server is unable to allocate an extent in a
chunk despite an adequate number of free pages, the chunk might be badly fragmented.
Related reference:
Change the physical-log location and size
chknum: Number of the chunk within the dbspace
dbsnum: Number of the dbspace
chksize: Total size of the chunk, in pages
nfree: Number of pages that are free
is_offline: Whether the chunk is down
is_recovering: Whether the chunk is recovering
mis_offline: Whether the mirror chunk is down
mis_recovering: Whether the mirror chunk is being recovered
pagesread: Number of pages read from the chunk
pageswritten: Number of pages written to the chunk
Run oncheck -pt to obtain extent information. The oncheck -pT option returns all the information from the oncheck -pt option, plus additional information
about page and index usage.
SMI tables
Query the sysextents table to obtain information about each extent. The sysextents table has columns that indicate the database and the table that the extent belongs
to, and the physical address and size of the extent.
Monitor blobspaces to determine the available space and whether the blobpage size is optimal.
Run oncheck -pB with either a database name or a table name as a parameter. The following example retrieves storage information for all simple large objects stored
in the table sriram.catalog in the stores_demo database:
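A sketch of that command, using the standard database:owner.table form that oncheck accepts:

oncheck -pB stores_demo:sriram.catalog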
For detailed information about interpreting the oncheck -pB output, see optimizing blobspace blobpage size in the chapter on table performance considerations in the
HCL Informix® Performance Guide.
This command takes a database name or a table name as a parameter. For each table in the database, or for the specified table, the database server displays a general
tblspace report.
Following the general report is a detailed breakdown of page use in the extent, by page type. See the Type column for information about TEXT and BYTE data.
The database server can store more than one simple large object on the same blobpage. Therefore, you can count the number of pages that store TEXT or BYTE data
in the tblspace, but you cannot estimate the number of simple large objects in the table.
Average Average
Level Total No. Keys Free Bytes
----- -------- -------- ----------
1 1 74 1058
----- -------- -------- ----------
Total 1 74 1058
Average Average
Level Total No. Keys Free Bytes
----- -------- -------- ----------
1 1 74 984
----- -------- -------- ----------
Total 1 74 984
If you attempt to insert a new smart large object, but no metadata space is available, you receive an error. The administrator must monitor metadata space availability to
prevent this situation from occurring.
Command Description
onstat -g smb s Displays the storage attributes for all sbspaces in the system:
sbspace name, flags, owner
Logging status
Average smart-large-object size
First extent size, next extent size, and minimum extent size
Maximum I/O access time
Lock mode
onstat -g smb c Displays the following information for each sbspace chunk:
Chunk number and sbspace name
Chunk size and path name
Total user data pages and free user data pages
Location and number of pages in each user-data and metadata area
onstat -d Displays the following information about the chunks in each sbspace:
Number of free sbpages in each sbspace chunk, in the metadata area, and in the user-data areas
Total number of sbpages in each sbspace chunk, in the metadata area, and in the user-data areas
oncheck -cs Validates and displays information about the metadata areas for sbspaces.
oncheck -ps
oncheck -cS Displays information about smart-large-object extents and user-data areas for sbspaces.
oncheck -pS Displays information about smart-large-object extents, user-data areas, and metadata areas for sbspaces. For more information
about oncheck -cS and -pS, see managing sbspaces in the topics on table performance considerations in your HCL Informix®
Performance Guide.
Related tasks:
Adding a chunk to an sbspace
Monitoring the metadata and user-data areas
Related reference:
The oncheck -ce and oncheck -pe options
The onstat -d option
The oncheck -ps option
The oncheck -cs option
Related information:
onstat -g smb command: Print sbspaces information
For an example of onstat -d output, see information about the onstat utility in the IBM® Informix® Administrator's Reference.
To find out the total amount of used space, run the oncheck -pe command. For more information, see The oncheck -ce and oncheck -pe options.
The onstat -d option does not register an sbpage as available until the logical log in which a deletion occurred is backed up and the sbpage is freed. Therefore, if you
delete 25 smart large objects and immediately run onstat -d, the newly freed space is not in the onstat output.
Related reference:
Monitor sbspaces
The system adds 53 extra pages for the reserved area to the totals for the user-data area and metadata area.
Tip: The oncheck -pe option provides information about sbspace use in terms of database server pages, not sbpages.
The following example shows sample output. In this example, the sbspace s9_sbspc has a total of 214 used pages, 60 free pages in the metadata area, and 726 free
pages in the user-data area.
You can use CHECK EXTENTS as the SQL administration API command equivalent to oncheck -ce. For information about using SQL API commands, see Remote
administration with the SQL administration API and the HCL Informix® Administrator's Reference.
Related reference:
Monitor sbspaces
Use the oncheck -cs output to see how much space is left in the metadata area. If it is full, allocate another chunk with adequate space for the metadata area. To find
the number of used pages in the metadata area, total the numbers in the Used column. To find the number of free pages in the metadata area, total the numbers in the
Free column.
For example, based on the field values displayed in the following figure, the total number of used pages in the metadata area for s9_sbspc is 33 2-KB pages (or 66 KB).
The metadata area contains a total of 62 free pages (or 124 KB).
Related reference:
Monitor sbspaces
To monitor the amount of free metadata space, run the following command:
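The command referenced is oncheck -ps; for example, for the sbspace shown in the earlier examples:

oncheck -ps s9_sbspc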
The -ps output includes information about the locking granularity, partnum, number of pages allocated and used, extent size, and number of rows in the metadata area.
Use the oncheck -ps output to see how much space is left in the metadata area. If it is full, allocate another chunk with adequate space for the metadata area.
If you run oncheck -ps for the dbspace that contains the tables where the smart large objects are stored, you can find the number of rows in the table.
Related reference:
Monitor sbspaces
When all of the reserve area is used up, the database server cannot move space to the metadata area, even if the user-data area contains free space.
1. As you add smart large objects to the sbspace, use oncheck -pe or onstat -g smb c to monitor the space in the metadata area, user-data area, and
reserved area. For an example, see The oncheck -ce and oncheck -pe options.
2. Use the message log to monitor metadata stealing.
The database server prints messages about the number of pages allocated from the reserved area to the metadata area.
3. Add another chunk to the sbspace before the sbspace runs out of space in the metadata and reserved areas.
4. The database server writes the FREE_RE and CHKADJUP log records when it moves space from the reserve area to the metadata or user-data area.
Related reference:
Monitor sbspaces
For example, assume that you want to provide payroll services to small businesses. You sell the use of the payroll application as a service to small business clients.
Instead of providing a separate Informix instance to each client, you can configure a tenant database for each client in a single Informix instance.
When you configure multitenancy, you segregate the following aspects of a database server:
Data
You create a separate tenant database for each client.
Storage spaces
Each tenant database has dedicated storage spaces to store data. Tables, fragments, and indexes that are created in the tenant database must be created in the
dedicated storage spaces. Only the tenant database can use the dedicated storage spaces.
You can limit the amount of permanent storage space that is available to a tenant database to conserve system resources.
Temporary storage spaces can be dedicated to a specific tenant database or shared between databases.
You can encrypt tenant storage spaces if the DISK_ENCRYPTION configuration parameter is set. Each encrypted storage space has a separate encryption key.
Users
You can set permissions for client users to access each tenant database. You can grant certain users permission to create, modify, or drop tenant databases. By
default, only a DBA or user informix can create a tenant database.
Processing resources
You can segregate CPU resources for a tenant database by defining a tenant virtual processor class and creating virtual processors for running the session threads for
the tenant database. Otherwise, the session threads for tenant databases have access to all CPU virtual processors.
Session limits
You can set the following limits for tenant sessions:
The number of locks a tenant session can acquire.
The amount of memory that can be allocated for a session.
The amount of temporary storage space that can be allocated for a session.
The size of transactions within a session, based on the amount of log space that individual transactions would fill.
The amount of time that a transaction is allowed to run within a session.
The amount of shared memory for all sessions that are connected to the tenant database.
The number of client connections to a tenant database.
The following illustration shows a possible configuration for two clients in the Informix server instance. Each client has a database and users who are allowed to access
the tenant database. Each tenant database has its own storage spaces. Both tenant databases share the default temporary sbspace. Tenant A has a tenant virtual
processor class with two virtual processors, while Tenant B has a virtual processor class with one virtual processor.
You can replicate tenant databases with Enterprise Replication and high-availability clusters.
You can run the commands to create, modify, or delete tenant databases through an Enterprise Replication grid.
You cannot run the commands to create, modify, or delete tenant databases from an updatable secondary server in a high-availability cluster.
You can back up tenant databases as part of a database server backup or by specifying the tenant storage spaces in the backup command. You can restore a single
tenant database with ON-Bar by specifying the -T option in the onbar -r command.
If storage space encryption is enabled, you can encrypt all storage spaces that are assigned to a tenant during a restore. Whether storage space encryption is enabled
or not enabled, you can decrypt all tenant storage spaces during a restore.
Related concepts:
Storage space creation and management
Tenant virtual processor class
Related information:
Storage space encryption
DISK_ENCRYPTION configuration parameter
You must be user informix, a DBSA, or have the TENANT privilege to create a tenant database.
You cannot convert an existing database to a tenant database. You cannot convert a tenant database to a non-tenant database. You cannot run the CREATE
DATABASE statement to create a tenant database.
1. Create the storage spaces for the tenant database. All dedicated storage spaces must be empty when you create the tenant database. You can create the
following types of dedicated spaces for a tenant database:
dbspaces
You must create at least one dbspace for the tenant database. The tenant database must be stored in one or more dedicated dbspaces.
blobspaces
If the tenant database will contain simple large objects, you must create one or more blobspaces.
sbspaces
If the tenant database will contain smart large objects, you must create one or more sbspaces. Smart large objects can include BLOB or CLOB data, or data
and table statistics that are too large to fit in a row. Some Informix® features, such as Enterprise Replication, spatial data, and basic text searching, require
sbspaces.
temporary dbspaces
Optional: Create one or more temporary dbspaces to store temporary tables. Otherwise, temporary tables are stored in the temporary dbspaces that are
specified by the DBSPACETEMP configuration parameter or environment variable.
temporary sbspaces
Optional: Create one or more temporary sbspaces to store temporary smart large objects. Otherwise, temporary smart large objects are stored in the temporary
sbspaces that are specified by the SBSPACETEMP configuration parameter.
2. Optional: Set limits for the tenant database so that it cannot monopolize system resources. Tenant database limits do not apply to a user who holds administrative
privileges, such as user informix or a DBSA user. You can set the following limits for a tenant database:
Locks available to a session
Set the session_limit_locks property to specify the maximum number of locks available to a session.
Logspace available to transactions in a session
Set the session_limit_logspace property to specify the maximum amount of log space that a session can use for individual transactions.
Memory available to a session
Set the session_limit_memory property to specify the maximum amount of memory that a session can allocate.
Temporary table space available to a session
Set the session_limit_tempspace property to specify the maximum amount of temporary table space that a session can allocate.
Amount of time that a transaction can run
Set the session_limit_txn_time property to specify the maximum amount of time that a transaction can run in a session.
Total space available to a tenant database
Set the tenant_limit_space property to specify the maximum amount of storage space available to a tenant user.
3. Optional: Set up a storage pool so that storage spaces can grow automatically. You can specify maximum sizes for extendable storage spaces to limit the growth
of tenant databases.
4. Optional: Provide TENANT privileges to specific users to create, modify, and delete tenant databases. For example, the following command gives the user jsmith
TENANT privileges:
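A hedged sketch of that command, using the SQL administration API grant admin argument (the exact privilege keyword passed here is an assumption):

EXECUTE FUNCTION task('grant admin', 'jsmith', 'tenant');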
5. Create a tenant database and define its properties by running the admin() or task() SQL administration API function with the tenant create argument. For
example, the following statement creates a tenant database that is named companyA:
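A hedged sketch of such a statement follows; the space and virtual processor class names are hypothetical placeholders, and the properties are passed as a JSON-style document:

EXECUTE FUNCTION task('tenant create', 'companyA',
    '{dbspace:"companyA_dbs1,companyA_dbs2",
      sbspace:"companyA_sbs",
      vpclass:"tvp_A,num=2",
      logmode:"ansi"}');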
The tenant database has two dbspaces, an sbspace, two tenant virtual processors, and the ANSI logging mode.
When you explicitly specify storage locations during the creation or altering of tables and indexes in the tenant database, you must specify the dbspaces that are listed in
the tenant database definition. Otherwise, the statement fails. If you do not explicitly specify storage for tables or indexes, they are created in the first dbspace that is
listed in the tenant definition.
Note: Improve the security of your databases by performing the following tasks:
Run GRANT and REVOKE statements to control user access to databases.
Set the DBCREATE_PERMISSION configuration parameter to restrict the ability to create non-tenant databases.
Related concepts:
Automatic space management
Related information:
tenant create argument: Create a tenant database (SQL Administration API)
The tenant table
Use the OpenAdmin Tool (OAT) for Informix® to view information about space usage, storage spaces, tenant virtual processors, chunks, tables, and indexes for tenant
databases.
To view the tenant database definition, query the tenant table in the sysadmin database. For example, the following statement lists the tenant databases and their
properties:
SELECT hex(tenant_id),tenant_dbsname,tenant_resources::json,
tenant_create_time,tenant_last_updated
FROM tenant;
The tenant_resources column, which contains the tenant properties, is of type BSON, so you must cast the column to JSON to view the properties.
To update properties, run the admin() or task() SQL administration API function with the tenant update argument. The updates take effect for new sessions.
You can append dbspaces, blobspaces, or sbspaces to the existing lists of storage spaces for a tenant database. The storage spaces must be empty. You must have
DBA or TENANT privileges to change tenant database properties.
You cannot remove dedicated storage spaces from a tenant database unless you delete the database.
When you specify new values for the following tenant database properties, existing values are replaced.
dbspacetemp (temporary dbspaces that are assigned to the tenant)
session_limit_logspace (limit on log space for individual transactions)
session_limit_memory (limit on memory that is allocated per session)
session_limit_tempspace (limit on temporary table space per session)
session_limit_txn_time (limit on the length of time a transaction can run)
sbspacetemp (temporary sbspaces that are assigned to the tenant)
session_limit_locks (limit on the number of locks per session)
tenant_limit_space (limit on total storage space)
vpclass (virtual processor classes names and quantities)
To delete a tenant database, run the admin() or task() SQL administration API function with the tenant drop argument. You must have DBA or TENANT privileges
to delete tenant databases. You cannot delete a tenant database with the DROP DATABASE statement. All dedicated storage spaces for the tenant database are
emptied and become available. Any tenant virtual processors that are not shared with other tenant databases are dropped.
Related information:
tenant update argument: Modify tenant database properties (SQL Administration API)
tenant drop argument: Drop a tenant database (SQL Administration API)
The tenant table
You must meet the following prerequisites before you start a tenant restore:
A tenant database point-in-time restore has the following additional prerequisites if the tenant database is in a high-availability cluster:
The cluster cannot include shared-disk secondary servers or updatable secondary servers. The cluster can include only read-only HDR secondary servers and
remote stand-alone secondary servers.
All secondary servers must be online.
You can run the restore command only on the primary server.
However, during the tenant point-in-time restore process, internally generated commands are run on the secondary servers to restore the new state of the
tenant database spaces on the secondaries. A tenant point-in-time restore command includes new physical backups of the tenant spaces in their new state.
The tenant point-in-time restore command that is run on the primary server does not return until all secondary servers acknowledge the completion of the
automatically generated restores of the new tenant space backups.
Like other warm restores, the logical logs that are required for the tenant database restore are restored to the temporary spaces that are specified by the
DBSPACETEMP configuration parameter. If the DBSPACETEMP configuration parameter is not set, temporary files are created in the root dbspace.
Substitute tenant_database with the name of the tenant database. Substitute time with the time of the last transaction to be restored from the logical logs.
If you omit the -O option, all the permanent tenant spaces must be marked as down before you run the restore command. Temporary spaces are never backed up or
restored.
If the restore fails, fix the problem and run the restore again. Until the restore succeeds, the tenant database is blocked from accepting connections. During the restore,
the value of the tenant_state field is set to restoring in the tenant_resources column of the sysadmin:tenant table. When a tenant database is blocked, the value of
the tenant_state field is set to blocked. You can view the value of the tenant_state field by running the following query:
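A query of roughly this shape returns the field; the bson_value_lvarchar() function used here to extract the BSON field is an assumption:

SELECT tenant_dbsname,
       bson_value_lvarchar(tenant_resources, 'tenant_state') AS tenant_state
FROM tenant;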
Example
The following command restores a tenant database that is named tenant1 to the specified point in time:
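A sketch of that command (the timestamp is a placeholder; the -O option allows the restore even if the tenant spaces are not marked as down):

onbar -r -T tenant1 -t "2023-06-01 12:00:00" -O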
Related information:
The tenant table
onbar -r syntax: Restoring data
The following table describes the processes that you can use to reduce the amount of disk space that is used by data in rows, simple large objects in dbspaces, and
index keys. You can automate any or all of these processes or do them as needed.
Repacking: Consolidates free space in tables, fragments, and indexes. Use after you compress, or when you want to consolidate free space.
Shrinking: Returns free space to the dbspace. Use after you compress or repack, or when you want to return free space to the dbspace.
Defragmenting: Brings data rows or index keys closer together in contiguous, merged extents. Use when frequently updated tables or indexes become
scattered among multiple non-contiguous extents.
The following illustration shows uncompressed data that uses most of the space in a fragment, free space that is created when the data is compressed, free space that is
moved to the end of the fragment after a repack operation, and data that remains in the fragment after a shrink operation. The process for storage optimization of
indexes is the same.
Figure 1. Data in a fragment during the compression and storage optimization process
Partition defragmentation
You can improve performance by defragmenting partitions to merge non-contiguous extents.
Compression
You can compress and uncompress row data in tables and fragments and simple large objects in dbspaces. You can compress B-tree indexes. You can also
consolidate free space in a table or fragment and you can return this free space to the dbspace. Before you compress data, you can estimate the amount of disk
space that you can save.
Related concepts:
Storage space creation and management
You can use the COMPRESSED option in the CREATE TABLE statement to enable automatic compression of the table when the table has at least 2000 rows.
You can use the COMPRESSED option in the CREATE INDEX statement to enable automatic compression of the index if the index has 2000 or more keys.
Compression is not enabled if the index has fewer than 2000 keys.
You can use the SQL administration API task or admin function to perform any type of storage optimization on a table, fragment, or index.
You can enable the auto_crsd Scheduler task to automatically compress, repack, shrink, and defragment all tables and table fragments.
You can use OAT to optimize any table, fragment, or index, or automatically optimize all tables and table fragments.
Related concepts:
Data that you can compress
Methods for viewing compression information
Compression
Related information:
COMPRESSED option for tables
COMPRESSED option for indexes
defragment argument: Dynamically defragment partition extents (SQL administration API)
Table and fragment compress and uncompress operations (SQL administration API)
index compress repack shrink arguments: Optimize the storage of B-tree indexes (SQL administration API)
You can enable and configure the auto_crsd task on the Server Optimization Policies page in the HCL OpenAdmin Tool (OAT) for Informix® or by updating Scheduler
tables in the sysadmin database.
1. Connect to the sysadmin database as a user with DBSA privileges (by default, user informix).
2. Enable the task by running an UPDATE statement on the ph_task table:
UPDATE ph_task
SET tk_enable = 'T'
WHERE tk_name = 'auto_crsd';
3. Optional: Change how frequently the task runs by running an UPDATE statement on the ph_task table to change the value of the tk_frequency column.
The default value is 7 00:00:00, which indicates that the task runs once a week. For example, the following statement changes the frequency to once a day:
UPDATE ph_task
SET tk_frequency = '1 00:00:00'
WHERE tk_name = 'auto_crsd';
4. Optional: Disable individual operations by using an UPDATE statement on the ph_threshold table to set the value column for a threshold to F:
AUTOCOMPRESS_ENABLED: controls compression
AUTOREPACK_ENABLED: controls repacking
AUTOSHRINK_ENABLED: controls shrinking
AUTODEFRAG_ENABLED: controls defragmenting
For example, the following statement disables just the defragmentation operation of the auto_crsd task:
UPDATE ph_threshold
SET value = 'F'
WHERE name = 'AUTODEFRAG_ENABLED';
5. Optional: Change the thresholds of individual operations by using an UPDATE statement on the ph_threshold table to change the value of the value column for
a threshold:
AUTOCOMPRESS_ROWS: The threshold for compression is the number of uncompressed rows. The default threshold is 50,000 rows. A table is
compressed when the number of uncompressed rows exceeds 50,000.
AUTOREPACK_SPACE: The threshold for repacking a table is the percentage of noncontiguous space. The default is 90%. A table is repacked when
more than 90% of the space the table occupies is noncontiguous.
AUTOSHRINK_UNUSED: The threshold for shrinking a table or fragment is the percentage of unused, allocated space. The default is 50%. A table or
fragment is shrunk when more than 50% of the allocated space is unused.
AUTODEFRAG_EXTENTS: The threshold for defragmenting table or fragment extents is the number of extents. The default is 100. A table or fragment is
defragmented when the number of extents exceeds 100.
For example, the following statement changes the compression threshold to 5000 rows:
UPDATE ph_threshold
SET value = '5000'
WHERE name = 'AUTOCOMPRESS_ROWS';
When a threshold for an operation that you enabled is exceeded, the Scheduler runs the operation.
Related concepts:
Partition defragmentation
The Scheduler
Related information:
The Scheduler tables
Assume that you have a table named rock in a database named music that is owned by user mario. The rock table is not fragmented. You can run the same
operations on a table fragment as you can on a whole table, but the syntax is slightly different.
Prerequisites:
There must be at least 2,000 rows in each fragment of the table, not just a total of 2,000 rows in the whole table.
You must be able to connect to the sysadmin database (by default only user informix), and you must be a DBSA.
Logical and physical logs are large enough to handle normal processing and compression operations. Compression, repacking, and uncompressing
operations can use large amounts of log space.
1. You run the following command to check how much space you might save by compressing the table:
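The command referenced above might look like the following sketch, which uses the SQL administration API task() function with the estimate_compression argument (verify the exact syntax for your version):

```sql
-- Estimate compression savings for the rock table in the music database, owned by mario.
EXECUTE FUNCTION task("table estimate_compression", "rock", "music", "mario");
```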
You review the resulting report, which indicates you can save 75 percent of the space that is used by the rock table. You decide to compress the table.
2. Before you compress data, you want to create a compression dictionary, which contains information that HCL Informix® uses to compress data in the rock table.
You run the following command
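A sketch of that command, using the create_dictionary argument of the SQL administration API:

```sql
-- Build the compression dictionary for the rock table without compressing any data yet.
EXECUTE FUNCTION task("table create_dictionary", "rock", "music", "mario");
```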
Tip: If you do not create the compression dictionary as a separate step, Informix creates the dictionary automatically when you compress data.
3. You decide that you want to compress data in the rock table and simple large objects in dbspaces, consolidate the data, and then return the free space to the
dbspace. You run the following command:
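A sketch of that command, combining the compress, repack, and shrink arguments:

```sql
-- Compress row data and simple large objects, consolidate the data, and return free space.
EXECUTE FUNCTION task("table compress repack shrink", "rock", "music", "mario");
```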
You can perform the same operations faster by running them in parallel. You run the following command:
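The parallel form might be sketched as follows (the parallel keyword follows the SQL administration API; verify it for your version):

```sql
-- Run the same compress, repack, and shrink operations in parallel.
EXECUTE FUNCTION task("table compress repack shrink parallel", "rock", "music", "mario");
```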
You can adjust the command by specifying what you want to compress or shrink. For example:
To compress only row data and then repack and shrink the data, specify:
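A sketch of a row-only variant; the compress_rows argument is an assumption here, so check the SQL administration API reference for your version:

```sql
-- Compress only row data (not simple large objects), then repack and shrink.
EXECUTE FUNCTION task("table compress_rows repack shrink", "rock", "music", "mario");
```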
After the existing rows and simple large objects are compressed, Informix consolidates the free space that is left at the end of the table, and then removes the
free space from the table, returning that space to the dbspace.
If the simple large objects or rows are not smaller when compressed, the database server does not compress them.
4. Now suppose that you want to uncompress the data. You run the following command:
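A sketch of the uncompress command:

```sql
-- Uncompress the rock table.
EXECUTE FUNCTION task("table uncompress", "rock", "music", "mario");
```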
5. Finally, you remove compression dictionaries that are no longer needed:
a. If you do require the dictionaries for Enterprise Replication, do not remove compression dictionaries for uncompressed or dropped tables and fragments.
b. Archive the dbspace that contains the table or fragment with a compression dictionary.
c. Run this command:
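The dictionary-removal command might be sketched as follows (the purge_dictionary argument follows the SQL administration API; verify it for your version):

```sql
-- Remove compression dictionaries that are no longer in use
-- (for tables and fragments that were uncompressed or dropped).
EXECUTE FUNCTION task("compression purge_dictionary");
```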
To run compression and other storage optimization commands on table fragments, include the fragment argument instead of the table argument and the fragment
partition number instead of the table name.
Related concepts:
Compression
Related information:
table or fragment arguments: Compress data and optimize storage (SQL administration API)
A frequently updated table can become fragmented over time, which degrades performance every time the table is accessed by the server. Defragmenting a table brings
data rows closer together and avoids partition header page overflow problems. Defragmenting an index brings the entries closer together, which improves the speed at
which the table information is accessed.
Before you defragment a table, index, or partition, be sure that no conflicting operations, such as other storage optimization operations on the same object, are in progress.
To determine how many extents a table, index, or partition has, you can run the oncheck -pt command.
To defragment a table, index, or partition, run the SQL administration API task() or admin() function with the defragment argument or the defragment partnum
argument and specify the table name, index, or partition number that you want to defragment.
You cannot stop a defragment request after you run the command.
If there are problems in completing a defragment request, error messages are sent to the online log file.
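The defragment request might be sketched as follows (the second form with a partition number is hypothetical; verify the argument names for your version):

```sql
-- Defragment a table by its qualified name (database:owner.table).
EXECUTE FUNCTION task("defragment", "music:mario.rock");
-- Or defragment a single partition by its partition number (hypothetical value shown).
EXECUTE FUNCTION task("defragment partnum", "2097154");
```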
Related tasks:
Scheduling data optimization
Related information:
oncheck -pt and -pT: Display tblspaces for a Table or Fragment
defragment argument: Dynamically defragment partition extents (SQL administration API)
Compressing data, simple large objects, or indexes, consolidating data, and returning free space have several benefits, including reduced disk space use, fewer I/O operations, and faster backup and restore.
If your applications run with high buffer cache hit ratios and high performance is more important than space usage, you might not want to compress your data, because
compression might slightly decrease performance.
Because compressed data covers fewer pages and has more rows per page than uncompressed data, the query optimizer might choose different plans after
compression.
If you use Enterprise Replication, compressing data on one replication server does not affect the data on any other replication server.
If you use high-availability clusters, data that is compressed in the source table is compressed in the target table. You cannot perform compression operations on
secondary servers, because secondary servers must have the same data and physical layout as the primary server.
The main alternative to compression is to buy more physical storage. The main alternative for reducing bottlenecks in I/O-bound workloads is to buy more physical
memory to enable the expansion of the buffer pools.
Compression dictionaries
A compression dictionary is a library of frequently occurring patterns in data or index keys and the symbol numbers that replace the patterns.
Related concepts:
Storage optimization methods
Related tasks:
Example: Optimizing data storage on demand
Table or table-fragment data with frequently repeating long patterns is very compressible. Certain types of data, such as text, might be more compressible than other
types of data, such as numeric data, because data types like text might contain longer and more frequently repeating patterns.
I/O-bound tables, for example, tables that have bad cache hit ratios, are good candidates for compression. In OLTP environments, compressing I/O-bound tables can
improve performance.
HCL Informix® can compress any combination of data types, because it treats all data to be compressed as unstructured sequences of bytes. Thus, the server can
compress patterns that span columns, for example, in city, state, and zip code combinations. (The server uncompresses a sequence of bytes in the same sequence that
existed before the data was compressed.)
Related concepts:
Storage optimization methods
You cannot compress a table while an online index build is occurring on the table.
Encrypted data, data that is already compressed by another algorithm, and data without long repeating patterns compresses poorly or does not compress. Try to avoid
placing columns with data that compresses poorly between columns that have frequent patterns to prevent the potential disruption of column-spanning patterns.
HCL Informix® compresses images of the rows only if the images of the compressed rows are smaller than the uncompressed images. Even if compressed rows are
only slightly smaller than their uncompressed images, a small saving of space can enable the server to put more rows onto pages.
Very small tables are not good candidates for compression, because you might not be able to gain back enough space from compressing the rows to offset the storage
cost of the compression dictionary.
Informix cannot compress an individual row to an image smaller than four bytes, because the server must leave room in case the row image later grows beyond what the page can
hold. Therefore, do not try to compress fragments or non-fragmented tables whose rows are four bytes long or shorter.
You can compress a detached B-tree index that is on a fragmented or non-fragmented table.
The compression operation compresses only the leaves (bottom level) of the index.
You cannot uncompress a compressed index. If you no longer need the compressed index, you can drop the index and then re-create it as an uncompressed index.
You can compress a new index when you create it by including the COMPRESSED option in the CREATE INDEX statement. You compress an existing index with an
SQL administration API command.
Related information:
index compress repack shrink arguments: Optimize the storage of B-tree indexes (SQL administration API)
index estimate_compression argument: Estimate index compression (SQL administration API)
CREATE INDEX statement
Creation of Root and Leaf Nodes
The compression algorithm that HCL Informix® uses is a dictionary-based algorithm that performs operations on the patterns of the data that were found to be the most
frequent, weighted by length, in the data that was sampled at the time the dictionary was built.
If the typical data distribution skews away from the data that was sampled when the dictionary was created, compression ratios can decrease.
The maximum compression ratio is 90 percent. The maximum compression of any sequence of bytes occurs by replacing each group of 15 bytes with a single 12-bit
symbol number, yielding a compressed image that is ten percent of the size of the original image. However, the 90 percent ratio is never achieved because Informix adds
a single byte of metadata to each compressed image.
HCL Informix estimates the compression ratios by random sampling of row data and then summing up the sizes of the following items:
Uncompressed row images
Compressed row images, based on a new compression dictionary that is temporarily created by the estimate compression command
Compressed row images, based on the existing dictionary, if there is one. If there is no existing dictionary, this value is the same as the sum of the sizes of the
uncompressed row images.
The actual space saving ratios that are achieved might vary from the compression estimates due to a sampling error, the type of data, how data fits in data pages, or
whether other storage optimization operations are also run.
Compression estimates are based on raw compressibility of the rows. The server generally puts a row onto a single data page. How the rows fit on data pages can affect
how much the actual compression ratio varies from the estimated compression ratio:
When each uncompressed row nearly fills a page and the compression ratio is less than 50 percent, each compressed row fills more than half a page. The server
puts each compressed row on a separate page. In this case, although the estimated compression ratio might be 45 percent, the actual space savings is zero.
When each uncompressed row fills slightly more than half a page and the compression ratio is low, each compressed row might be small enough to fit in half a
page. The server puts two compressed rows on a page. In this case, even though the estimated compression ratio might be as low as 5 percent, the actual space
savings is 50 percent.
Informix does not store more than 255 rows on a single page. Thus, small rows or large pages can reduce the total savings that compression can achieve. For example,
if 200 rows fit onto a page before compression, no matter how small the rows are when compressed, the maximum effective compression ratio is approximately 20
percent, because only 255 rows can fit on a page after compression.
If you are using a page size that is larger than the minimum page size, one way to increase the realized compression space savings is to switch to smaller pages, so
that:
The 255-row limit is no longer reached.
If the limit is still reached, there is less unused space on the pages.
More (or less) space can be saved, compared to the estimate, if the compress operation is combined with a repack operation, shrink operation, or repack and shrink
operation. The repack operation can save extra space only if more compressed rows fit on a page than uncompressed rows. The shrink operation can save space at the
dbspace level if the repack operation frees space.
Related information:
Output of the estimate compression operation (SQL administration API)
One compression dictionary exists for each compressed fragment, each compressed non-fragmented table, each compressed simple large object in a dbspace, and
each compressed index partition.
A compression dictionary is built using data that is sampled randomly from a fragment or non-fragmented table that contains at least 2,000 rows, or an index that has at
least 2,000 keys. Typically, approximately 100 KB of space is required for storing the compression dictionary.
The compression dictionary can store a maximum of 3,840 patterns, each of which can be from two to 15 bytes in length. (Patterns that are longer than seven bytes
reduce the total number of patterns that the dictionary can hold.) Each of these patterns is represented by a 12-bit symbol number in a compressed row. To be
compressed, a sequence of bytes in the input row image must exactly match a complete pattern in the dictionary. A row that does not have enough pattern matches
against the dictionary might not be compressible because each byte of an input row that did not completely match is replaced in the compressed image by 12 bits (1.5
bytes).
Informix® attempts to capture the best compressible patterns (the frequency of the pattern that is multiplied by the length). Data is compressed by replacing occurrences
of the patterns with the corresponding symbol numbers from the dictionary, and replacing occurrences of bytes that do not match any pattern with special reserved
symbol numbers.
All dictionaries for the tables or fragments in a dbspace are stored in a hidden dictionary table in that dbspace. The syscompdicts_full table and the syscompdicts
view in the sysmaster database provide information about the compression dictionaries.
Related information:
syscompdicts_full
You cannot use the onunload and onload utilities to move compressed data from one database to another. You must uncompress data in compressed tables and
fragments before you use the onunload and onload utilities.
The dbexport utility uncompresses compressed data. Therefore, you must recompress the data after you use the dbimport utility to import the data.
Related information:
Data migration utilities
High-Performance Loader User's Guide
All TEXT and BYTE simple large objects can be compressed, whether they are stored in the partition or in a blobspace. If a table that contains TEXT or BYTE columns is created
with the COMPRESSED keyword, the BLOB data is automatically compressed along with the home row data after the number of rows reaches a certain threshold
(2000 by default). Separate compression dictionaries are built for the home rows and for each BLOB column.
By using the "compress" sysadmin task() command rather than the automatic compression feature, you can compress only the BLOB data in a table, or
compress only the home row data without compressing BLOBs.
Enterprise Replication (ER) can replicate BLOB data whether it is compressed or uncompressed.
The following table describes the different methods that you can use to view compression information.
Method                                              Description
oncheck -pT option                                  Displays compression details. For example, for row data, oncheck -pT displays the number of compressed rows in a table or table fragment and the percentage of table or table-fragment rows that are compressed.
onlog -c option                                     Uses the compression dictionary to expand compressed data and display the uncompressed contents of compressed log records.
onstat -g dsk option                                Displays information about the progress of currently running compression operations.
onstat -g ppd option                                Displays information about the active compression dictionaries that exist for currently open compressed fragments (also referred to as partitions). This option shows the same information as the syscompdicts view in the sysmaster database.
syscompdicts_full table in the sysmaster database   Displays metadata about the compression dictionary and the compression dictionary binary object.
syscompdicts view in the sysmaster database         Displays the same information as the syscompdicts_full table, except that for security reasons, it excludes the dict_dictionary column, which contains the compression dictionary binary object.
UNLOAD TO 'compression_dictionary_file' SELECT * FROM sysmaster:syscompdicts_full; SQL statement   Unloads the compression dictionary to a file so that you can view it.
Related concepts:
Storage optimization methods
Related information:
onstat -g dsk command: Print the progress of the currently running compression operation
onstat -g ppd command: Print partition compression dictionary information
oncheck -pt and -pT: Display tblspaces for a Table or Fragment
The onlog utility
syscompdicts_full
IBM OpenAdmin Tool (OAT) for Informix
Method to load data                            TEXT or BYTE data    CLOB or BLOB data    Reference
DB-Access LOAD statement                       Yes                  Yes                  LOAD statement in the IBM® Informix Guide to SQL: Syntax
dbload utility                                 Yes                  Yes                  IBM Informix Migration Guide
dbimport utility                               Yes                  Yes                  IBM Informix Migration Guide
Informix® ESQL/C programs                      Yes                  Yes                  IBM Informix ESQL/C Programmer's Manual
MERGE insert, using an EXTERNAL source table   Yes                  Yes                  IBM Informix Guide to SQL: Syntax
onload utility                                 No                   No                   IBM Informix Migration Guide
onpladm utility                                Yes, deluxe mode     Yes, deluxe mode     IBM Informix High-Performance Loader User's Guide
High-Performance Loader (HPL)                  Yes, deluxe mode     Yes, deluxe mode     IBM Informix High-Performance Loader User's Guide
Important: The database server does not contain any mechanisms for compressing TEXT and BYTE data after the data has been loaded into a database.
You issue a series of SQL statements that perform the following functions:
Transfer operational data efficiently to or from other systems
Transfer data files across platforms in HCL Informix® internal data format
Use the database server to convert data between delimited ASCII, fixed-ASCII, and HCL Informix internal (raw) representation
Use SQL INSERT and SELECT statements to specify the mapping of data to new columns in a database table
Provide parallel standard INSERT operations so that data can be loaded without dropping indexes
Use named pipes to support loading data to and unloading data from storage devices, including tape drives and direct network connections
Maintain a record of load and unload statistics during the run
Perform express (high-speed) and deluxe (data-checking) transfers
You can issue the SQL statements with DB-Access or embed them in an ESQL/C program.
External tables
An external table is a data file that is not managed by the HCL Informix database server. The definition of the external table includes data-formatting type, external
data description fields, and global parameters.
To map external data to internal data, the database server views the external data as an external table. Treating the external data as a table provides a powerful method
for moving data into or out of the database and for specifying transformations of the data.
When the database server runs a load task, it reads data from the external source and performs the conversion required to create the row and then inserts the row into
the table. The database server writes errors to a reject file.
If the data in the external table cannot be converted, you can specify that the database server write the record to a reject file, along with the reason for the failure. To do
this, you specify the REJECTFILE keyword in the CREATE EXTERNAL TABLE statement.
The database server provides a number of different conversion mechanisms, which are performed within the database server and therefore provide maximum
performance during the conversion task. The database server optimizes data conversion between ASCII and HCL Informix data representations, in both fixed and
delimited formats.
To perform customized conversions, you can create a filter program that writes converted data to a named pipe. The database server then reads its input from the
named pipe in one of the common formats.
To set up loading and unloading tasks, you issue a series of SQL statements:
CREATE EXTERNAL TABLE to describe the data file to load or unload
CREATE TABLE to define the table to load
INSERT...SELECT to load and unload
1. The CREATE EXTERNAL TABLE statement describes the location of the various external files, which can be on disk or come from a pipe (tape drive or direct
network connection), and the format of the external data. The following example is a CREATE EXTERNAL TABLE statement:
2. The CREATE TABLE statement defines the table to load. The following sample CREATE TABLE statement defines the employee table:
3. The INSERT...SELECT statement maps the movement of the external data from or to the database table. The following sample INSERT statement loads the
employee table from the external table:
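The three statements in the steps above might look like the following sketch; the table names, column definitions, and file path are illustrative:

```sql
-- 1. Describe the external data file (fixed-format ASCII on disk).
CREATE EXTERNAL TABLE emp_ext
( name     CHAR(18)    EXTERNAL CHAR(18),
  hiredate DATE        EXTERNAL CHAR(10),
  address  VARCHAR(40) EXTERNAL CHAR(40),
  empno    INTEGER     EXTERNAL CHAR(6)
)
USING ( FORMAT 'FIXED', DATAFILES ("DISK:/work/mydir/emp.fix") );

-- 2. Define the database table to load.
CREATE TABLE employee
( name     CHAR(18),
  hiredate DATE,
  address  VARCHAR(40),
  empno    INTEGER
);

-- 3. Load the database table from the external table.
INSERT INTO employee SELECT * FROM emp_ext;
```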
Important: If you specify more than one INSERT...SELECT statement to unload data, each subsequent INSERT statement overwrites the data file. Use absolute paths
for data files.
When you load data into the database, the FROM table portion of the SELECT clause is the external table that the CREATE EXTERNAL TABLE statement defined. When you
unload data to an external file, the SELECT clause controls the retrieval of the data from the database.
Unlike a TEMP table, the external table has a definition that remains in the catalog until it is dropped. When you create an external table you can save the external
description of the data for reuse. This action is particularly helpful when you unload a table into the HCL Informix® internal data representation because you can later use
the same external table description to reload that data.
On Windows systems, if you use the DB-Access utility or the dbexport utility to unload a database table into a file and then plan to use the file as an external table
datafile, you must define RECORDEND as '\012' in the CREATE EXTERNAL TABLE statement.
The external table definition contains all the information that is required to define the data in the external data file:
The description of the fields in the external data.
The DATAFILES clause. This clause specifies the location of the external data, which can be a disk file or a named pipe.
The FORMAT clause. This clause specifies the type of data formatting in the external data file. The database server converts external data from several data
formats, including delimited and fixed ASCII, and HCL Informix internal.
If you map the external table directly into the internal database table in delimited format, you can use the CREATE EXTERNAL TABLE statement to define the columns
and add the clause SAMEAS internal-table instead of enumerating the columns explicitly.
With this method, the insert columns are mapped to match the field order of the external table.
Another way to reorder columns is to use the SELECT clause to match the order of the database table.
Unlike ordinary operating-system files, named pipes do not have a 2-gigabyte size limitation. The operating system opens and checks for the end of file differently for
named pipes than for ordinary files.
Copying data from one instance to another using the PIPE option
You can use a named pipe to copy data from one Informix® instance to another without writing the data to an intermediate file.
To use a named pipe to load data from an external table, follow these steps:
1. Specify the named pipes in the DATAFILES clause of the CREATE EXTERNAL TABLE statement in SQL.
2. Create the named pipes that you specified in the DATAFILES clause. Use operating-system commands to create the named pipes.
Use the mknod UNIX command with the -p option to create a named pipe. To avoid blocking open problems for pipes on UNIX, start separate UNIX processes for
pipe-readers and pipe-writers or open the pipes with the O_NDELAY flag set.
3. Open the named pipes with a program that reads the named pipe.
4. Execute the INSERT statement in SQL.
Important: If you do not create and open the named pipes before you execute the INSERT statement, the INSERT succeeds, but no rows are loaded.
The database server uses one FIFO VP for each named pipe that you specify in the DATAFILES clause of the CREATE EXTERNAL TABLE statement. For example,
suppose you define an external table with the following SQL statement:
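Such a statement might be sketched as follows; the table name and pipe paths are illustrative:

```sql
-- An external table that reads delimited data from three named pipes.
-- With the default of 1 FIFO VP, the pipes are read one after another.
CREATE EXTERNAL TABLE ext_items
( item_num  INT,
  item_desc CHAR(20)
)
USING ( FORMAT 'DELIMITED',
        DATAFILES ("PIPE:/work/pipe1", "PIPE:/work/pipe2", "PIPE:/work/pipe3") );
```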
If you use the default value of 1 for FIFO VPs, the database server does not read from pipe2 until it finishes reading all the data from pipe1, and does not read from
pipe3 until it finishes reading all the data from pipe2.
Note: If you create multiple external tables with pipes, concurrent operations on such tables can create dependencies between the pipes. To avoid this, enable automatic
tuning for FIFO VPs. If there is no VPCLASS parameter for FIFO VPs, then the AUTO_TUNE setting controls the automatic tuning.
To use named pipes to unload data to external tables, follow these steps:
1. Specify the named pipe in the DATAFILES clause of either the CREATE EXTERNAL TABLE statement or the SELECT INTO EXTERNAL statement of SQL.
DATAFILES ("PIPE:/usr/local/TAPE")
2. Create the named pipes that you specified in the DATAFILES clause. Use operating-system commands to create the named pipes.
3. Open the named pipes with a program that writes to the named pipe.
4. Unload data to the named pipe.
Important: If you do not create and open the named pipes before you execute the SELECT or INSERT statement, the unload fails with the ENXIO error message (no
such device or address).
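The unload itself might be sketched with the SELECT ... INTO EXTERNAL form; the table and external-table names are illustrative, so verify the syntax for your version:

```sql
-- Unload the employee table to the named pipe defined in the DATAFILES clause.
SELECT * FROM employee
  INTO EXTERNAL emp_unload
  USING ( FORMAT 'DELIMITED', DATAFILES ("PIPE:/usr/local/TAPE") );
```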
You can use a named pipe to unload data from one Informix instance and load it into another instance without writing data to an intermediate file. You can also use a
named pipe to copy data from one table to another on the same Informix instance. In the following example, data is copied from a source table on one instance to a
destination table on a second instance.
Depending on the hardware platform you are using, you must first create a named pipe using one of the following commands. For this example, the named pipe is
called pipe1.
% mkfifo /work/pipe1
% mknod /work/pipe1 p
Follow these steps to copy data from a table on a source instance to a table on a destination instance on the same computer.
1. Create the source table on the source instance. In this example, the source table is called source_data_table:
2. Create the external table on the source instance. In this example, the external table is named ext_table:
3. Create the destination table on the destination instance. In this example, the destination table is called destin_data_table:
4. Create the external table on the destination instance. In this example, the external table is named ext_table:
CREATE EXTERNAL TABLE ext_table
(
empid CHAR(5),
empname VARCHAR(40),
empaddr VARCHAR(100)
)
USING
(DATAFILES
(
'PIPE:/work/pipe1_1'
)
);
5. Run the following command from a UNIX shell. The command redirects data from /work/pipe1 to /work/pipe1_1.
6. Run the following command on the destination instance to direct data from the named pipe to the destination table:
7. Run the following command on the source instance to spool data to the named pipe:
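Steps 5 through 7 might be sketched as follows; each INSERT runs against its own instance's external table named ext_table, and the paths are illustrative:

```sql
-- Step 5, from a UNIX shell: connect the two pipes (run in the background).
--   % cat /work/pipe1 > /work/pipe1_1 &
-- Step 6, on the destination instance: read from the pipe into the destination table.
INSERT INTO destin_data_table SELECT * FROM ext_table;
-- Step 7, on the source instance: write the source rows to the pipe.
INSERT INTO ext_table SELECT * FROM source_data_table;
```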
You can use more than one pipe by inserting multiple PIPE statements in the DATAFILES clause and creating a named pipe for each.
You might want to monitor the load or unload operations for the following situations:
If you expect to load and unload the same table often to build a data mart or data warehouse, monitor the progress of the job to estimate the time of similar jobs
for future use.
If you load or unload from named pipes, monitor the I/O queues to determine if you have a sufficient number of FIFO virtual processors.
To determine if a load or unload operation can use parallel execution, execute the SET EXPLAIN ON statement before the INSERT statement. The SET EXPLAIN output
shows the following counts:
Number of parallel SQL operators that the optimizer chooses for the INSERT statement
Number of rows to be processed by each SQL operator
To monitor a load operation, run onstat -g sql to obtain the session ID.
Use the onstat -g ioq option to display the length of each FIFO queue that is waiting to perform I/O requests. The following example shows sample output.
gfd 19 0 1 2 2 0 0
The q name field in the sample output in the previous example shows the type of the queue, such as fifo for a FIFO VP or aio for an AIO VP. If the q name field shows
gfd or gfdwq, it is a queue for a file whose global file descriptor matches the id field of the output. Disk files have both read and write requests in one queue. One line
per disk file displays in the onstat -g ioq output. Pipes have separate read and write queues. Two lines per pipe display in the output: gfd for read requests and
gfdwq for write requests.
The len or maxlen field has a value of up to 4 for a load or 4 * number_of_writer_threads for an unload. The xuwrite operator controls the number of writer threads.
Use the values in the totalops field rather than the len or maxlen field to monitor the number of read or write requests done on the file or pipe. The totalops field
represents 34 KB of data read from or written to the file. If totalops is not increasing, it means the read or write operation on a file or pipe is stalled (because the FIFO
VPs are busy).
To improve performance, use the onmode -p command to add more FIFO VPs. The default number of FIFO VPs is 1. (In the sample output shown earlier, the FIFO
queue does not contain any data.) For example, if you usually define more than two pipes to load or unload, increase the number of FIFO VPs with the following
sample onmode command:
onmode -p +2 FIFO
You can perform external table load and unload operations on the primary server and on secondary servers, with the following limitations:
Loading data on SDS, RSS, or HDR secondary servers is slower than loading data on the primary server.
The CREATE EXTERNAL TABLE statement and the SELECT ... INTO EXTERNAL ... statement are not supported on secondary servers.
When unloading data from a database table to an external table, data files are created on the secondary server but not on the primary server. External table data files
created on secondary servers are not automatically transferred to the primary server, nor are external table data files that are created on the primary server automatically
transferred to secondary servers.
When creating an external table on a primary server, only the schema of the external table is replicated to the secondary servers, not the data file.
To synchronize external tables between the primary server and a secondary server, you can either copy the external table file from the primary server to the secondary
servers, or use the following steps:
1. On the primary server:
a. Create a temporary table with the same schema as the external table.
b. Populate the temporary table:
HCL Informix® updates the sysexternal and sysextdfiles system catalog tables each time an external table is created. The sysextcols system catalog table is updated
when the external format type (fmttype) FIXED is specified.
See the HCL Informix Guide to SQL: Reference for more information.
A row is inserted into the systables system catalog when an external table is created; however, the nrows (number of rows) and the npused (number of data pages
used) columns might not accurately reflect the number of rows and the number of data pages used by the external table unless the NUMROWS clause was specified
when the external table was created.
When an external table is created without specifying a value for the NUMROWS clause, Informix is unable to determine the number of rows in the external table because
the data exists outside the database in data files. Informix updates the nrows column in the systables system catalog by inserting a large value (MAXINT – 1), and
computes the number of data pages used based on the nrows value. The values stored in npused and nrows are later used by the optimizer to determine the most
efficient execution plan. While the NUMROWS clause is not required to be specified precisely, the more accurately it is specified, the more accurate the values for nrows
and npused are.
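For an INT column, MAXINT is the largest 32-bit signed integer, so the placeholder that the server stores in nrows works out as follows (a quick arithmetic check, assuming 32-bit MAXINT):

```python
MAXINT = 2**31 - 1          # largest 32-bit signed integer, 2147483647
placeholder_nrows = MAXINT - 1
print(placeholder_nrows)    # the large value stored when NUMROWS is omitted
```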
The High Performance Loader utility provides best performance for loading external data into a database table with indexes.
External tables provide the best performance for loading data into a RAW table with no indexes.
Note: Locking an external table prior to loading data increases load performance.
These topics describe how to use the reject file and error messages to manage errors, and how to recover data loaded into the database.
Reject files
Rows that have conversion errors during a load are written to a reject file on the server that performs the conversion.
The REJECTFILE keyword in the CREATE EXTERNAL TABLE statement determines the name given to the reject file.
Instead of using a reject file, you can use the MAXERRORS keyword in the CREATE EXTERNAL TABLE statement to specify the number of errors that are allowed
before the database server stops loading data. (If you do not set the MAXERRORS keyword, the database server processes all data regardless of the number of errors.)
The database server removes the reject files, if any, at the beginning of a load. The reject files are recreated and written only if errors occur during the load.
Reject file entries are single lines with the following comma-separated fields:
file name
Name of the input file
record
Record number in the input file where the error was detected
reason-code
Description of the error
field-name
The external field name where the first error in the line occurred or <none> if the rejection is not specific to a particular column
bad-line
For delimited or fixed-ASCII files only, the bad line itself
The load operation writes file name, record, field-name, and reason-code in ASCII.
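Because each entry is a single comma-separated line, reject files can be scanned mechanically. A minimal sketch in Python, assuming plain comma separation and the five fields listed above (the bad-line field may itself contain commas, so everything after the fourth comma is kept together; the sample entry is invented for illustration):

```python
def parse_reject_line(line):
    """Split a reject-file entry into the five documented fields.

    The first four fields (file name, record, reason-code, field-name)
    are assumed not to contain commas, so split at most four times and
    keep the rest of the line intact as the bad-line field.
    """
    parts = line.rstrip("\n").split(",", 4)
    names = ["filename", "record", "reason_code", "field_name", "bad_line"]
    entry = dict(zip(names, parts))
    if "record" in entry:
        entry["record"] = int(entry["record"])  # record number in the input file
    return entry

# Hypothetical reject-file entry for illustration only
sample = "cust.unl,17,Invalid money format,c_balance,104,Smith,12x.50"
entry = parse_reject_line(sample)
print(entry["record"], entry["field_name"])
```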
Parent topic: Manage errors from external table load and unload operations
Additional messages are -615, -999, -23852, and -23855. In the messages, n macro and r macro refer to the values generated from the substitution character
%r(first..last). For a list of error messages, see IBM® Informix® Error Messages or use the finderr utility. For information about the violations table error messages,
see your HCL Informix Administrator's Reference.
Parent topic: Manage errors from external table load and unload operations
If the logging type of the table is RAW, the database server can use light append (or EXPRESS) mode to load data and to process check constraints. However, if
the database server crashes while inserting data rows into a RAW table in EXPRESS mode, this unlogged light append operation is not rolled back, and the table
might be left in an unknown state.
Only DELUXE mode supports data recoverability. DELUXE mode uses logged, regular inserts. To recover data after a failed express-mode load, revert to the
most recent level-0 backup. The table type must be STANDARD for this level of recoverability.
For information about restoring tables of RAW or STANDARD logging types, see the IBM® Informix® Backup and Restore Guide.
Parent topic: Manage errors from external table load and unload operations
Logging
Logical log
All the databases managed by a single database server instance store their log records in the same logical log, regardless of whether they use transaction logging. Most
database users might be concerned with whether transaction logging is buffered or whether a table uses logging.
If you want to change the database-logging status, see Settings or changes for logging status or mode.
Transaction logging
Database-logging status
The database server stores the logical-log records in a logical log. The logical log is made up of logical-log files that the database server manages on disk until they have
been safely transferred offline (backed up). The database server administrator keeps the backed up logical-log files until they are required during a data restore, or until
the administrator decides that the records are no longer required for a restore. See Logical log for more information about logical logs.
The logical-log records themselves are variable length. This arrangement increases the number of logical-log records that can be written to a page in the logical-log
buffer. However, the database server often flushes the logical-log buffer before the page is full. For more information about the format of logical-log records, see the
topics about interpreting logical-log records in the HCL Informix Administrator's Reference.
The database server uses logical-log records when it performs various functions that recover data and ensure data consistency, as follows:
Transaction rollback
If a database is using transaction logging and a transaction must be rolled back, the database server uses the logical-log records to reverse the changes made during
the transaction. For more information, see Transaction logging.
Fast recovery
If the database server shuts down in an uncontrolled manner, the database server uses the logical-log records to recover all transactions that occurred since the oldest
update not yet flushed to disk and to roll back any uncommitted transactions. (When all the data in shared memory and on disk are the same, they are physically
consistent.) The database server uses the logical-log records in fast recovery when it returns the entire database server to a state of logical consistency up to the point
of the most recent logical-log record. (For more information, see Fast recovery after a checkpoint.)
Data restoration
The database server uses the most recent storage-space and logical-log backups to recreate the database server system up to the point of the most recently backed-
up logical-log record. The logical restore applies all the log records since the last storage-space backup.
Deferred checking
If a transaction uses the SET CONSTRAINTS statement to set checking to DEFERRED, the database server does not check the constraints until the transaction is
committed. If a constraint error occurs while the transaction is being committed, the database server uses logical-log records to roll back the transaction. For more
information, see SET Database Object Mode in the IBM Informix Guide to SQL: Syntax.
Cascading deletes
Cascading deletes on referential constraints use logical-log records to ensure that a transaction can be rolled back if a parent row is deleted and the system fails
before the children rows are deleted. For information about table inheritance, see the IBM Informix Database Design and Implementation Guide. For information about
primary key and foreign key constraints, see the IBM Informix Guide to SQL: Tutorial.
Distributed transactions
Each database server involved in a distributed transaction keeps logical-log records of the transaction. This process ensures data integrity and consistency, even if a
failure occurs on one of the database servers that is performing the transaction. For more information, see Two-phase commit and logical-log records.
Data Replication
Data Replication environments that use HDR secondary, SD secondary, and RS secondary servers use logical-log records to maintain consistent data on the primary
and secondary database servers so that one of the database servers can be used quickly as a backup database server if the other fails. For more details, see How
data replication works.
Enterprise Replication
You must use database logging with Enterprise Replication because it replicates the data from the logical-log records. For more information, see the IBM Informix
Enterprise Replication Guide.
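All of these functions come down to replaying or reversing logical-log records. The following toy sketch, using an invented record format that is not Informix's actual on-disk log layout, shows the core idea: forward replay applies after-images (as in a restore), and rollback walks a transaction's records in reverse, restoring before-images:

```python
# Toy logical log: each record notes the transaction id, the row key,
# and the before/after images of the value. This format is invented
# for illustration; it is not Informix's actual log-record layout.
table = {"row1": "a", "row2": "b"}
log = [
    {"xid": 1, "key": "row1", "before": "a", "after": "A"},
    {"xid": 1, "key": "row2", "before": "b", "after": "B"},
]

# Forward replay: apply each change in order, as a logical restore does.
for rec in log:
    table[rec["key"]] = rec["after"]

# Rollback: walk transaction 1's records in reverse order and restore
# each before-image, undoing its changes.
for rec in reversed(log):
    if rec["xid"] == 1:
        table[rec["key"]] = rec["before"]

print(table)  # back to the original values
```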
The database-logging status indicates whether a database uses transaction logging. The log-buffering mode indicates whether a database uses buffered or unbuffered
logging, or ANSI-compliant logging. For more information, see Database-logging status and Manage the database-logging mode.
When you create a database, you specify whether it uses transaction logging and, if it does, what log-buffering mechanism it uses. After the database is created, you can
turn off database logging or change to buffered logging, for example. Even if you turn off transaction logging for all databases, the database server always logs some
events. For more information, see Activity that is always logged and Database logging in an X/Open DTP environment.
You can use logging or nonlogging tables within a database. The user who creates the table specifies the type of table. Even if you use nonlogging tables, the database
server always logs some events. For more information, see Table types for Informix.
The following table lists statements that generate operations that are logged even if transaction logging is turned off.
ALTER ACCESS_METHOD
ALTER FRAGMENT
ALTER FUNCTION
ALTER INDEX
ALTER PROCEDURE
ALTER ROUTINE
ALTER SECURITY LABEL COMPONENT
ALTER SEQUENCE
ALTER TABLE
ALTER TRUSTED CONTEXT
ALTER USER
CLOSE DATABASE
CREATE ACCESS_METHOD
CREATE AGGREGATE
CREATE CAST
CREATE DATABASE
CREATE DISTINCT TYPE
CREATE EXTERNAL TABLE
CREATE FUNCTION
CREATE FUNCTION FROM
CREATE INDEX
CREATE OPAQUE TYPE
CREATE OPCLASS
CREATE PROCEDURE
CREATE PROCEDURE FROM
CREATE ROLE
CREATE ROUTINE FROM
CREATE ROW TYPE
CREATE SCHEMA
CREATE SECURITY LABEL
CREATE SECURITY LABEL COMPONENT
CREATE SECURITY POLICY
CREATE SEQUENCE
CREATE SYNONYM
CREATE TABLE
CREATE TEMP TABLE
CREATE TRIGGER
CREATE TRUSTED CONTEXT
CREATE USER
CREATE VIEW
CREATE XADATASOURCE
CREATE XADATASOURCE TYPE
DROP ACCESS_METHOD
DROP AGGREGATE
DROP CAST
DROP DATABASE
DROP FUNCTION
DROP INDEX
DROP OPCLASS
DROP PROCEDURE
DROP ROLE
DROP ROUTINE
DROP ROW TYPE
DROP SECURITY
DROP SEQUENCE
DROP SYNONYM
DROP TABLE
DROP TRIGGER
DROP TRUSTED CONTEXT
DROP TYPE
DROP USER
DROP VIEW
DROP XADATASOURCE
DROP XADATASOURCE TYPE
GRANT
GRANT FRAGMENT
RENAME COLUMN
RENAME DATABASE
RENAME INDEX
RENAME SECURITY
RENAME SEQUENCE
RENAME TABLE
RENAME TRUSTED CONTEXT
RENAME USER
REVOKE
REVOKE FRAGMENT
TRUNCATE
UPDATE STATISTICS
SAVE EXTERNAL DIRECTIVES
SET CONSTRAINTS
SET Database Object Mode
SET INDEXES
SET TRIGGERS
START VIOLATIONS TABLE
STOP VIOLATIONS TABLE
INSERT
LOAD
MERGE
PUT
SELECT INTO TEMP
UNLOAD
UPDATE
The following SQL statements do not produce log records, regardless of the database logging mode.
ALLOCATE COLLECTION
ALLOCATE DESCRIPTOR
ALLOCATE ROW
CLOSE
CONNECT
DATABASE
DEALLOCATE COLLECTION
DEALLOCATE DESCRIPTOR
DEALLOCATE ROW
DECLARE
DESCRIBE
DISCONNECT
FETCH
FREE
GET DESCRIPTOR
GET DIAGNOSTICS
INFO
LOCK TABLE
OPEN
OUTPUT
PREPARE
RELEASE SAVEPOINT
SAVEPOINT
SELECT
SET AUTOFREE
SET COLLATION
SET CONNECTION
SET DATASKIP
SET DEBUG FILE
SET DEFERRED_PREPARE
SET DESCRIPTOR
SET ENCRYPTION PASSWORD
SET ISOLATION
SET LOCK MODE
SET LOG
SET OPTIMIZATION
SET PDQPRIORITY
SET ROLE
SET SESSION AUTHORIZATION
SET STATEMENT CACHE
SET TRANSACTION
SET Transaction Mode
SET USER PASSWORD
UNLOCK TABLE
WHENEVER
SET ENVIRONMENT
SET EXPLAIN
For temporary tables in temporary dbspaces, nothing is logged, not even the SQL statements that are always logged for other types of tables. If you include temporary
(nonlogging) dbspaces in the value of the DBSPACETEMP configuration parameter, the database server places nonlogging tables in these temporary dbspaces first.
Every database that the database server manages has a logging status. The logging status indicates whether the database uses transaction logging and, if so, which
log-buffering mechanism the database employs. To find out the transaction-logging status of a database, use the database server utilities, as explained in Monitor the
logging mode of a database. The database-logging status indicates any of the following types of logging:
Unbuffered transaction logging
Buffered transaction logging
ANSI-compliant transaction logging
No logging
All logical-log records pass through the logical-log buffer in shared memory before the database server writes them to the logical log on disk. However, the point at which
the database server flushes the logical-log buffer is different for buffered transaction logging and unbuffered transaction logging. For more information, see Figure 1 and
Flush the logical-log buffer.
No database logging
When the database server flushes the buffer, only the used pages are written to disk. Used pages include pages that are only partially full, however, so some space is
wasted. For this reason, the logical-log files on disk fill up faster than if all the databases on the same database server use buffered logging.
Unbuffered logging is the best choice for most databases because it guarantees that all committed transactions can be recovered. In the event of a failure, only
uncommitted transactions at the time of the failure are lost. However, with unbuffered logging, the database server flushes the logical-log buffer to disk more frequently,
and the buffer contains many more partially full pages, so it fills the logical log faster than buffered logging does.
If you use buffered logging and a failure occurs, you cannot expect the database server to recover the transactions that were in the logical-log buffer when the failure
occurred. Thus, you might lose some committed transactions. In return for this risk, performance during updates improves slightly. Buffered logging is best for
databases that are updated frequently (when the speed of updating is important), as long as you can recreate the updates in the event of failure. You can tune the size of
the logical-log buffer to find an acceptable balance for your system between performance and the risk of losing transactions to system failure.
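The trade-off can be illustrated with a toy model (not the server's actual flushing algorithm): unbuffered logging flushes the logical-log buffer at every commit, even when the current page is only partially full, while buffered logging waits until the buffer fills. Record sizes and buffer size below are arbitrary illustrative values:

```python
def count_flushes(record_sizes, buffer_size, buffered):
    """Toy model: count buffer flushes for a stream of commit records.

    Buffered mode flushes only when the next record no longer fits;
    unbuffered mode flushes after every record (each record here stands
    for a committed transaction), discarding the partial page.
    """
    flushes, used = 0, 0
    for size in record_sizes:
        if used + size > buffer_size:
            flushes += 1          # buffer full: flush before appending
            used = 0
        used += size
        if not buffered:
            flushes += 1          # unbuffered: flush at every commit
            used = 0
    if used:
        flushes += 1              # final flush of any remaining records
    return flushes

records = [100] * 50              # fifty 100-byte commit records
print(count_flushes(records, 2048, buffered=True))   # few large flushes
print(count_flushes(records, 2048, buffered=False))  # one flush per commit
```

The unbuffered run performs one flush per commit, which is why it fills the logical log faster, while the buffered run batches many commits per flush at the cost of losing buffered records in a failure.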
If you are satisfied with your recovery source, you can decide not to use transaction logging for a database to reduce the amount of database server processing. For
example, if you are loading many rows into a database from a recoverable source such as tape or an ASCII file, you might not require transaction logging, and the
loading would proceed faster without it. However, if other users are active in the database, you would not have logical-log records of their transactions until you reinitiate
logging, which must wait for a level-0 backup.
If the CREATE DATABASE statement does not specify a logging status, the database is created without logging.
Only the database server administrator can change the logging status; ordinary users cannot. Manage the database-logging mode describes this topic.
If a database does not use logging, you are not required to consider whether buffered or unbuffered logging is more appropriate. If you specify logging but do not specify
the buffering mode for a database, the default is unbuffered logging.
Users can switch from unbuffered to buffered (but not ANSI-compliant) logging and from buffered to unbuffered logging for the duration of a session. The SET LOG
statement performs this change within an application. For more information about the SET LOG statement, see the IBM Informix Guide to SQL: Syntax.
As a database server administrator, you can alter the logging mode of a database as follows:
Change transaction logging from buffered to unbuffered.
Change transaction logging from unbuffered to buffered.
Make a database ANSI compliant.
Add transaction logging (buffered or unbuffered) to a database.
End transaction logging for a database.
For information about database-logging mode, when to use transaction logging, and when to buffer transaction logging, see Logging. To find out the current logging
mode of a database, see Monitor the logging mode of a database.
For information about using SQL administration API commands instead of some ondblog and ontape commands, see Remote administration with the SQL
administration API and the HCL Informix® Administrator's Reference.
Monitor transactions
For information about ON-Bar and ontape, see the IBM® Informix® Backup and Restore Guide.
The following table shows how the database server administrator can change the database-logging mode. Certain logging mode changes take place immediately, while
other changes require a level-0 backup.
To change the buffering mode from unbuffered to buffered logging on a database called stores_demo, run the following ondblog command:
ondblog buf stores_demo
You cannot cancel the logging changes that are executed immediately.
1. To unload the data, use dbexport or any other migration utility. The dbexport utility creates the schema file.
For information about how to load and unload data, see the IBM® Informix® Migration Guide.
2. To recreate a database with buffered logging and load the data, use the dbimport -l buffered command.
To recreate a database with unbuffered logging and load the data, use the dbimport -l command.
You add logging to a database with ontape at the same time that you create a level-0 backup.
For example, to add buffered logging to a database called stores_demo with ontape, run the following command:
ontape -s -B stores_demo
To add unbuffered logging to a database called stores_demo with ontape, run the following command:
ontape -s -U stores_demo
In addition to turning on transaction logging, these commands create full-system storage-space backups. When ontape prompts you for a backup level, specify a level-0
backup.
Tip: With ontape, you must perform a level-0 backup of all storage spaces.
To end transaction logging for a database called stores_demo with ontape, run the following command:
ontape -N stores_demo
To change the buffering mode from buffered to unbuffered logging on a database called stores_demo with ontape, run the following command:
ontape -U stores_demo
To change the buffering mode from unbuffered to buffered logging on a database called stores_demo, using ontape, without creating a storage-space backup, run the
following command:
ontape -B stores_demo
To make a database called stores_demo, which already uses transaction logging, into an ANSI-compliant database with ontape, run the following command:
ontape -A stores_demo
To make a database called stores_demo, which does not already use transaction logging, into an ANSI-compliant database with ontape, run the following command:
ontape -s -A stores_demo
In addition to making a database ANSI compliant, this command also creates a storage-space backup at the same time. Specify a level-0 backup when you are
prompted for a level.
Tip: After you change the logging mode to ANSI compliant, you cannot easily change it again. To change the logging mode of ANSI-compliant databases, unload the
data, recreate the database with the new logging mode, and reload the data. For details, see Changing the logging mode of an ANSI-compliant database.