Administrator Guide
SC23-9869-05
Note
Before using this information and the product it supports, read the information in “Notices” on page 331.
Note: Information in this document might have been updated since this document was made available. For the
latest available information, see the IBM solidDB and IBM solidDB Universal Cache V6.5 Information Center at
http://pic.dhe.ibm.com/infocenter/soliddb/v6r5/index.jsp.
© Oy International Business Machines Ab Ltd. 1993, 2012
Contents
Figures  vii
Tables  ix
Summary of changes  xi
About this manual  xv
Typographic conventions  xv
Syntax notation conventions  xvi
1 solidDB fundamentals  1
solidDB data management components  1
Programming interfaces (ODBC and JDBC)  2
Network communications layer  3
SQL parser and optimizer  3
solidDB  3
System tools and utilities  4
Automated and manual administration  5
solidDB architecture  5
Data storage for disk-based tables  5
Data storage for memory-based tables  6
solidDB SQL optimizer  7
solidDB Network Services  8
Multithread processing  8
2 Administering solidDB  11
User roles for database administration  11
Starting and stopping solidDB  11
Starting solidDB  11
Closing a database  13
Shutting down solidDB  14
Creating a new database  15
Usernames, passwords, and system catalog names  15
Unicode and partial Unicode database modes  16
Setting up database environment  17
solidDB configuration file (solid.ini)  19
Setting database block size (BlockSize) and location (FileSpec)  19
Defining database objects  20
Connecting to solidDB with solidDB tools (solsql and solcon)  20
Running solidDB as a Windows service  21
Starting solidDB as a service for the first time  21
Starting and stopping solidDB services  23
Removing solidDB services  23
Running several servers on one computer  24
Using solidDB with SELinux  25
Using solidDB audit trail (AuditTrailEnabled)  26
Enabling and disabling audit trail  27
Querying audit trail data in the SYS_AUDIT_TRAIL system table  27
Performing backup and recovery  27
Making local backups  28
Making backups over network  28
Configuring and automating backups  30
What happens during backup  31
Administering network backup server  32
Monitoring and controlling backups  33
Correcting a failed backup  33
Troubleshooting backups  34
Restoring backups  34
Transaction logging  35
Creating checkpoints  36
Entering timed commands  36
Compacting database files (database reorganization)  37
Encrypting a database  38
Encrypting database and log files  38
Disabling encryption  39
Starting an encrypted database  40
Changing the encryption password  40
Decrypting a database  41
Querying database encryption level  41
Making backups of encrypted databases  41
Encrypting HotStandby servers  41
Encryption and performance  41
3 Configuring solidDB  43
Managing parameters  43
Configuration files and parameter settings  44
Viewing and setting parameters with ADMIN COMMAND  45
Setting parameters through the solid.ini configuration file  49
Format of configuration parameter names and values  52
Most important server-side parameters  53
Most important client-side parameters  58
Using solidDB command line options  60
Setting solidDB-specific environment variables  60
4 Monitoring solidDB  63
Viewing error messages and log files  63
Controlling message log output  63
Viewing error message descriptions with ADMIN COMMAND 'errorcode'  64
Using trace files  64
Tracing failed login attempts  64
Checking solidDB version  65
Checking overall database status  65
Obtaining currently connected users  66
Throwing out a connected solidDB user  66
Querying the status of the most recent backup  66
Producing status reports  66
Performance counters (perfmon)  67
ADMIN COMMAND 'perfmon'  67
ADMIN COMMAND 'perfmon diff' - producing a continuous performance monitoring report  68
Full list of perfmon counters  68
5 Managing network connections  81
Communication between client and server  81
Network listening names (Com.Listen)  81
Viewing supported protocols for the server  83
Viewing network names for the server  83
Adding and modifying a network name for the server  84
Removing network name from the server  84
Connect strings for clients  84
Default connect string (Com.Connect)  86
Logical data source names  86
Communication protocols  87
TCP/IP protocol  88
UNIX Pipes  89
Named Pipes  90
Shared Memory  90
Summary of protocols  91
6 Using solidDB data management tools  93
solidDB Remote Control (solcon)  93
Starting solidDB Remote Control (solcon)  94
Entering commands in solidDB Remote Control (solcon)  95
solidDB SQL Editor (solsql)  95
Starting solidDB SQL Editor  96
Executing SQL statements with solidDB SQL Editor  98
Executing an SQL script from a file  98
solidDB SQL Editor (solsql) commands  99
solidDB Speed Loader (solloado and solload)  100
File types  100
Starting solidDB Speed Loader (solloado and solload)  101
Tips for speeding up loading  104
Examples  104
Control file syntax  106
solidDB Export (solexp)  114
Starting solidDB Export (solexp)  114
solidDB Data Dictionary (soldd)  116
Starting solidDB Data Dictionary (soldd)  116
Entering password from a file  119
Configuration file  119
Using solidDB tools with Unicode  120
Example: Reloading a database using solidDB tools  121
Tuning network messages  131
Tuning I/O  132
Distributing I/O  132
Setting the MergeInterval parameter  132
Tuning checkpoints  133
Reducing Bonsai Tree size by committing transactions  134
Preventing excessive Bonsai Tree growth  134
Diagnosing poor performance  136
8 Troubleshooting and support  139
Troubleshooting a problem  139
Tools for troubleshooting  141
Troubleshooting solidDB Universal Cache  147
Troubleshooting SMA  149
Troubleshooting database file size (file write fails)  151
Troubleshooting MME.ImdbMemoryLimit  152
Troubleshooting solidDB Data Dictionary (soldd)  153
Searching knowledge bases  154
Getting fixes  155
IBM Software Support for solidDB and solidDB Universal Cache  155
Contacting IBM Support  155
Collecting diagnostics data  157
Subscribing to Support and other updates  162
Appendix A. Server-side configuration parameters  165
Accelerator section  165
Cluster section  166
Communication section  166
General section  168
HotStandby section  179
IndexFile section  182
Logging section  184
LogReader section  187
MME section  189
Passthrough section  193
SharedMemoryAccess section  195
Sorter section  196
SQL section  197
Srv section  201
Synchronizer section  214
Figures
1. solidDB components  2
2. solidDB components  4
Tables
1. Typographic conventions  xv
2. Syntax notation conventions  xvi
3. Starting the server  12
4. solidDB default files  18
5. Connecting to solidDB  21
6. Options for the backup command  28
7. Options for the netbackup command  29
8. Parameter correspondence to the solid.ini file for local backup  30
9. Parameter correspondence to the solid.ini file for netbackup  30
10. Backup and netbackup commands  33
11. Arguments and defaults for different timed commands  37
12. Connect string options  59
13. solidDB environment variables  61
14. Perfmon counters  68
15. Network listening name options  82
16. Com.Listen factory values  83
17. Connect string options  85
18. TCP/IP protocol in the network listening name (Com.Listen)  88
19. TCP/IP protocol in the client connect string (Com.Connect)  88
20. UNIX Pipes protocol in the network name  89
21. Named Pipes protocol in the network name  90
22. Shared Memory protocol in the network name  90
23. solidDB protocols and network names  91
24. Application protocols and network names  91
25. solcon command options  94
26. solcon specific commands  95
27. solsql command options  96
28. solidDB SQL Editor (solsql) commands  99
29. solloado and solload command options  102
30. Speed Loader reserved words  106
31. Full syntax of the control file  106
32. Data masks  108
33. solexp command options  114
34. soldd command line options  117
35. Command line options for solidDB tools for partial Unicode and Unicode databases  121
36. Determining command status  135
37. Determining which connections have committed transactions  135
38. Diagnosing poor performance  136
39. SQL Info levels  142
40. Ping facility levels  146
41. SMA default address spaces  150
42. solidsupport  157
43. solidDB Support Assistant (solidsupport) options  158
44. Accelerator parameters  165
45. Cluster parameters  166
46. Communication parameters  166
47. General parameters  168
48. HotStandby parameters  179
49. IndexFile parameters  182
50. Logging parameters  184
51. Log Reader parameters  187
52. MME parameters  189
53. SQL passthrough parameters  193
54. Shared memory access parameters  195
55. Sorter parameters  196
56. SQL parameters  197
57. Srv parameters  201
58. Synchronizer parameters  214
59. Client parameters  217
60. Communication parameters  218
61. Data Sources parameters  219
62. Shared memory access parameters (client-side)  220
63. TransparentFailover parameters  220
64. solidDB environment variables  225
65. solidDB error categories  227
66. solidDB system errors  229
67. solidDB database errors  232
68. solidDB session errors  256
69. solidDB communication errors  257
70. solidDB server errors  260
71. solidDB SA API errors  269
72. solidDB sorter errors  269
73. solidDB RPC errors and messages  270
74. solidDB synchronization errors  271
75. solidDB HotStandby errors  286
76. solidDB SSA (SQL API) errors  287
77. solidDB COM (communication) messages  288
78. solidDB SRV errors  289
79. solidDB DBE errors and messages  291
80. solidDB CP (checkpoint) messages  293
81. solidDB BCKP (backup) messages  293
82. solidDB AT (timed commands) messages  293
83. solidDB LOG (logging) messages  294
84. solidDB INI (configuration file) messages  294
85. solidDB HSB errors and messages  295
86. solidDB SNC (synchronization) messages  297
87. solidDB XS (external sorter) errors  298
88. solidDB FIL (file system) messages  298
89. solidDB TAB (table) messages  299
90. solidDB SMA (shared memory access) errors  299
91. solidDB passthrough errors  299
92. solidDB SQL errors  300
93. solidDB executable errors  306
94. solidDB Speed Loader (solloado and solload) errors  307
95. ADMIN COMMAND syntax and options  310
Summary of changes
Changes for revision 05
v Editorial corrections.
v Formatting rules for solid.ini file clarified in section Rules for formatting the
solid.ini file; the maximum line length is 79 characters.
v ADMIN COMMAND ’info’ updated in section ADMIN COMMAND: added unit
values for all size related output options.
v ADMIN COMMAND ’filespec’ and ADMIN COMMAND ’parameter’ updated
in section ADMIN COMMAND:
– Previously undocumented options -d and -a added to ADMIN COMMAND
’filespec’.
– Previously undocumented option -t added to ADMIN COMMAND
’parameter’.
v Section Troubleshooting database file size (file write fails) updated: new database
file specifications can be added dynamically with the command ADMIN
COMMAND ’filespec -a’
v Section Troubleshooting backups updated with information about backups that
can slow down the database.
v New High Availability Controller (HAC) related option added for solidDB
Support Assistant (solidsupport) in section solidDB Support Assistant: option -c
HAC_directory_path specifies the location of the HAC related message and trace
files.
v New section added: Using solidDB with SELinux
v New error codes for Fix Pack 5 added in section Error codes; see Release Notes for
Fix Pack 5 for details.
v Missing solidDB SQL Editor (solsql) startup option -tt added in section
Starting solidDB SQL Editor; option -tt prints the prepare, execute, fetch, and
execution time for each command executed in solsql.
About this manual
The solidDB components described in this guide are orthogonal, that is, they can
be used in the presence of the other components. An administrator of solidDB can
use a wide range of configuration options and tools to set up the product in the
most appropriate way.
This guide describes how to set up, monitor, manage, and optimize the basic
database server function of the product. More detailed information about
configuring specific solidDB components is included in the related manuals.
This guide assumes the reader has general database management system (DBMS)
knowledge and a familiarity with SQL.
Typographic conventions
solidDB documentation uses the following typographic conventions:
Table 1. Typographic conventions
Format Used for
NOT NULL  Uppercase letters in this font indicate SQL keywords and
macro names.
File path presentation Unless otherwise indicated, file paths are presented in the
UNIX format. The slash (/) character represents the
installation root directory.
. . .  A column of three dots indicates continuation of previous lines of code.
In addition, solidDB has synchronization features that allow updated data in one
solidDB to be sent to one or more other solidDBs.
solidDB also allows you to run a pair of solidDBs in a hot standby configuration,
and link your client application directly to the database server routines for higher
performance and tighter control over the server. These features are called
HotStandby and shared memory access (SMA) or linked library access (LLA).
The sections below describe the underlying components and processes that make
solidDB the solution for managing distributed data in today's complex distributed
system environments. They provide background information necessary to
administer and maintain solidDB in your network environment.
Figure 1. solidDB components
1. SA API is solidDB's own API for use with the shared memory access (SMA) or linked library access (LLA)
libraries. For details, see IBM solidDB Shared Memory Access and Linked Library Access User Guide.
ODBC
The solidDB ODBC Driver conforms to the Microsoft ODBC 3.51 API standard.
The functions supported by the solidDB ODBC Driver are accessed through the
solidDB ODBC API, a Call Level Interface (CLI) for solidDB databases, which is
compliant with the ANSI X3H2 SQL CLI.
For more details on the solidDB ODBC Driver, see IBM solidDB Programmer Guide.
For more details on the solidDB JDBC Driver, see IBM solidDB Programmer Guide.
Proprietary interfaces
solidDB also provides two proprietary interfaces, solidDB Application
Programming Interface (SA API) and solidDB Server Control API (SSC API). These
allow, for example, C programs to directly call functions inside the database server.
These proprietary interfaces are provided with the solidDB shared memory access
(SMA) and linked library access (LLA) libraries.
For more details on the solidDB proprietary programming interfaces, see IBM
solidDB Shared Memory Access and Linked Library Access User Guide and IBM solidDB
Programmer Guide.
solidDB
The solidDB server processes the data requests submitted via solidDB SQL. The
solidDB server shown in Figure 2 on page 4 stores data and retrieves it from the database.
Figure 2. solidDB components
solidDB provides the following tools for exporting and loading data:
v solidDB Speed Loader (solloado or solload) loads data from external files into a
solidDB database.
v solidDB Export (solexp) exports data from a solidDB database into files that can
be loaded with solidDB Speed Loader.
solidDB provides the following tools and methods for performing manual
administration:
v ADMIN COMMANDs
To perform administration tasks, you can issue solidDB SQL's own ADMIN
COMMANDs in solidDB SQL Editor (solsql). For a comprehensive list of
commands, refer to Appendix F, “solidDB ADMIN COMMAND syntax,” on
page 309.
v solidDB Server Control API (SSC API)
If you are using solidDB with linked library access, the solidDB Server Control
API (SSC API) gives a user application programmatic control over task
execution. The SSC API functions are available for assigning priorities for such
tasks as database backup, database checkpoint, and merge of the Bonsai Tree.
The priority assignment determines the order in which such tasks are run. For
details, see IBM solidDB Shared Memory Access and Linked Library
Access User Guide.
v solidDB Remote Control (solcon)
solidDB Remote Control (solcon) lets you enter administrative commands
without using the ADMIN COMMAND syntax. See “solidDB Remote Control
(solcon)” on page 93 for details.
solidDB architecture
This section provides conceptual information that helps you understand how to
configure solidDB to meet the needs of your own applications and platforms.
It looks at the following:
v Data Storage
– Main Storage Tree
– Bonsai Tree Multiversioning and Concurrency Control
v Dynamic SQL Optimization
v Network Services
v Multithread processing
The internal part of the server taking care of storing D-tables is called the
disk-based engine (DBE).
All the indexes are stored in a single tree, which is the main storage tree. Within
that tree, indexes are separated from each other by a system-defined index
identification inserted in front of every key value. This mechanism divides the
index tree into several logical index subtrees, where the key values of one index
are clustered close to each other. For details on data clustering and primary key
indexes, read the discussion of Primary Key Indexes in IBM solidDB SQL Guide.
Old versions of rows (and the newer version(s) of those same rows) are kept in the
Bonsai Tree for as long as there are transactions that need to see those old versions.
After the completion of all transactions that reference the old versions, the "old"
versions of the data are discarded from the Bonsai tree, and new committed data is
moved from the Bonsai Tree to the main storage tree. The presorted key values are
merged as a background operation concurrently with normal database operations.
This offers significant I/O optimization and load balancing. During the merge, the
deleted key values are physically removed.
Index compression
Two methods are used to store key values in the Bonsai Tree and the storage tree.
First, only the information that differentiates the key value from the previous key
value is saved. The key values are said to be prefix-compressed. Second, in the
higher levels of the index tree, the key value borders are truncated from the end;
that is, they are suffix-compressed.
The internal part of the server taking care of storing M-tables is called the
Main-Memory Engine (MME).
solidDB maintains the statistical information about the actual data automatically,
ensuring optimal performance. Even when the amount and content of data
changes, the optimizer can still determine the most effective route to the data.
Query processing
Query processing is performed in small steps to ensure that one time-consuming
operation does not block another application's request. A query is processed in a
sequence containing the following phases:
v Syntax analysis
v Creating the execution graph
v Processing the execution graph
Syntax analysis
An SQL query is analyzed and the server produces either a parse tree for the
syntax or a syntax error. When a statement is parsed, the information necessary for
its execution is loaded into the statement cache. A statement can be executed
repeatedly without re-optimization, as long as its execution information remains in
the statement cache.
For details on each operation or unit in the execution plan, read the discussion of
the EXPLAIN PLAN FOR statement in the IBM solidDB SQL Guide.
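For example, the following statement (the table and column names are
illustrative) displays the execution plan that the server selects for a query:
EXPLAIN PLAN FOR SELECT * FROM customers WHERE name = 'SMITH';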
Processing the execution graph
Processing of the execution graph is performed in three consecutive phases:
v Type-evaluation phase
The column data types of the result set are derived from the underlying table
and view definitions.
v Estimate-evaluation phase
The cost of retrieving first rows and also entire result sets is evaluated, and an
appropriate search strategy is dynamically selected based on the parameter
values that are bound to the statement.
The SQL Optimizer bases cost estimates on automatically maintained
information about key value distribution, table sizes, and other dynamic
statistical data. No manual updates to the index histograms or any other
estimation information is required.
v Row-retrieval phase
The result rows of the query are retrieved and returned to the client application.
Optimizer hints
Optimizer hints (which are an extension of SQL) are directives specified through
embedded pseudo comments within query statements. The optimizer detects these
directives, or hints, and adjusts its query execution plan accordingly. Optimizer hints
allow applications to be optimized for varying conditions of the data, the query
type, and the database. They not only provide solutions to performance problems
occasionally encountered with queries, but also shift control of response times from
the system to the user.
For more details on optimizer hints, read IBM solidDB SQL Guide.
Multithread processing
solidDB's multithread architecture provides an efficient way of sharing the
processor within an application. A thread is a dispatchable piece of code.
Threads are loaded into memory as part of the calling program; no disk access is
therefore necessary when a thread is invoked. Threads can communicate using
global variables, events, and semaphores.
Types of threads
The solidDB threading system consists of general purpose threads and dedicated
threads.
General purpose threads execute tasks from the server's tasking system. They
execute such tasks as serving user requests, making backups, executing timed
commands, merging indexes, and making checkpoints (storing consistent data to
disk).
General purpose threads take a task from the tasking system, execute the task step
to completion and switch to another task from the tasking system. The tasking
system works in a round-robin fashion, distributing the client operations evenly
between different threads.
The number of general purpose threads can be set in the solid.ini configuration
file.
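For example, a setting such as the following could be used (the parameter name,
Srv.Threads, and the value shown here are an illustration only; see the Srv section
in Appendix A, “Server-side configuration parameters,” on page 165 for the exact
parameters available in your version):
[Srv]
Threads=8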
Dedicated threads
2 Administering solidDB
This section describes how to maintain your solidDB installation. The
administration tasks covered in this section are:
v Performing basic solidDB operations, such as starting and stopping the server
v Backing up the server
v Encrypting a database
Important: If you are using solidDB with shared memory access (SMA) or linked
library access (LLA), there are some differences in administration from standard
solidDB. Wherever necessary, this chapter refers you to the IBM solidDB Shared
Memory Access and Linked Library Access User Guide for SMA or LLA specific
information.
User roles for database administration
You define user roles using the GRANT ROLE statement. For details, see Managing
user privileges and roles in IBM solidDB SQL Guide.
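For example, assuming that a user named SOLIDUSER already exists, the following
statement grants that user full administrator rights through the SYS_ADMIN_ROLE
role:
GRANT SYS_ADMIN_ROLE TO SOLIDUSER;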
Starting solidDB
You can start solidDB by issuing the command solid [options] at the command
prompt, or in Windows environments, using the Start > Programs > IBM solidDB
menu path.
For options, see Appendix C, “solidDB command line options,” on page 221.
To start solidDB, a valid license file must be located in your working directory or
in the location specified with the SOLIDDIR environment variable.
Table 3. Starting the server
Linux and UNIX In the working directory, enter the command solid [options] at the command prompt.
When you start the server for the first time, use the command line option -f to force the
server to run in the foreground:
solid -f
Windows v Click the icon labeled Start IBM solidDB server through the Start > Programs > IBM
solidDB menu path.
v In the working directory, enter the command solid [options] at the command prompt.
v To start the server to run in the background, enter the command start solid.
When solidDB is started, it checks if a database already exists. The server first
looks for a solid.ini configuration file and reads the value of the FileSpec parameter.
Then the server checks if there is a database file with the names and paths
specified in the FileSpec parameter. If a database file is found, solidDB will
automatically open that database. If no database is found, the server creates a new
database.
Note:
This section applies to standard solidDB only. If you are using solidDB with shared
memory access (SMA) or linked library access (LLA), see the IBM solidDB Shared
Memory Access and Linked Library Access User Guide for instructions on how to start
solidDB SMA or LLA server.
Related reference:
Appendix C, “solidDB command line options,” on page 221
Related information:
“FileSpec_[1...n] parameter” on page 53
Login
solidDB database requires users to login to the database with their username and
password.
If you try to log in four times with an incorrect username and/or password, the
system will block your IP address for a maximum of 60 seconds. This feature
cannot be configured or switched off.
By modifying the Properties of the Start IBM solidDB server shortcut you can
specify the working directory, login data and system catalog name, and additional
command line options used when starting solidDB.
1. Right-click on the Start IBM solidDB server icon.
2. Select Properties and then the Shortcut tab.
3. To change the login data and catalog name (or other startup options), modify
the command line options given in the Target field:
v -C — system catalog name
v -P — password
v -U — username
For example:
"C:\Program Files\IBM\solidDB\solidDB6.5\bin\solid.exe" -C mycatalog -P
mypassword -U myname
See section Appendix C, “solidDB command line options,” on page 221 for a
list of available solidDB startup options.
4. To change the working directory, modify the directory path in the Start in field.
For example:
"C:\Program Files\IBM\solidDB\solidDB6.5\eval_kit\mytest\"
By modifying the Properties of the solsql SQL Editor shortcut you can specify the
connection information and the login data for the solidDB server to which solidDB
SQL Editor (solsql) connects.
1. Right-click on the solsql SQL Editor icon.
2. Select Properties and then the Shortcut tab.
3. To change the connection information and login data, modify the server name,
username, and password given in the Target field.
For example:
"C:\Program Files\IBM\solidDB\solidDB6.5\bin\solsql.exe" "tcp 2315"
myname mypassword
You can also specify startup options in the Target field. See section “Starting
solidDB SQL Editor” on page 96 for a list of available solsql startup options.
Closing a database
You can close the database, which means no new connections to the database are
allowed. To do this, issue the following command in solidDB SQL Editor (solsql):
ADMIN COMMAND 'close'
You use the close command when you want to prevent users from connecting to
the database. For example, when you are shutting down solidDB, you must
prevent new users from connecting to the database. As part of the shut down
procedure you use the close command. Read “Shutting down solidDB” on page 14
for procedures to shut down a database.
After closing the database, only connections from solidDB Remote Control (solcon)
are accepted. Closing the database does not affect existing user connections. When
the database is closed, no new connections are accepted (clients will get solidDB
Error Message 14506).
Shutting down solidDB
This section applies to standard solidDB only. If you are using solidDB with shared
memory access (SMA) or linked library access (LLA), see the IBM solidDB Shared
Memory Access and Linked Library Access User Guide for instructions on how to stop
the solidDB SMA or LLA server.
To shut down solidDB:
1. Close the database with the close command to prevent new connections.
2. Disconnect the remaining user connections with the throwout all command.
3. Stop the server with the shutdown command.
Note: When using solidDB SQL Editor (solsql) for the steps 1-3 above, enter the full
SQL syntax:
ADMIN COMMAND '<command_name>'
For example:
ADMIN COMMAND 'close'
You can also shut down solidDB in the following ways:
v Using the command ADMIN COMMAND 'shutdown force', which performs all of the
above steps.
v Right-clicking the server icon and selecting Close from the menu that appears in
the Microsoft Windows environment.
v Remotely, using the command 'net stop' through the Windows system services.
Note that you may also start up solidDB remotely, using the 'net start'
command.
Each of these shutdown mechanisms will start the same routine, which writes all
buffered data to the database file, frees cache memory, and finally terminates the
server program. Shutting down a server may take a while since the server must
write all buffered data from main memory to the disk.
Creating a new database
By default, the database is created as one file (solid.db) in the solidDB working
directory.
An empty database that contains only the system tables and views uses
approximately four megabytes of disk space. The time it takes to create the
database depends on the hardware platform you are using. If you have a very
small database (less than four megabytes) and want to keep the disk space less
than four megabytes, set the value of the IndexFile.ExtendIncrement parameter in
the solid.ini configuration file to less than 500 (default). This parameter and other
parameters are discussed in Appendix A, “Server-side configuration parameters,”
on page 165.
After the database has been created, solidDB starts listening to the network for
client connection requests. In Windows environments, a solidDB icon appears, but
in most environments solidDB runs invisibly in the background as a daemon
process.
Important:
v You must remember your username and password to be able to connect to
solidDB. There are no default usernames; the administrator username you enter
when creating the database is the only username available for connecting to the
new database for the first time.
v Lowercase characters in usernames, passwords, and system catalog names are
converted to uppercase.
Username
v Minimum length: 2 characters.
v Maximum length: 80 characters
v The username must begin with a letter or an underscore. Use lower case letters
from a to z, upper case letters from A to Z, the underscore character "_", and
numbers from 0 to 9.
The database system administrator's username cannot be changed with the ALTER
USER command. See Changing administrator's username and password in the IBM
solidDB SQL Guide.
Password
v Minimum length: 3 characters.
v Maximum length: 80 characters
v The password can begin with any letter, underscore, or number. Use lower case
letters from a to z, upper case letters from A to Z, the underscore character "_",
and numbers from 0 to 9.
v You cannot use the double quotation mark (") character in the password. The
use of apostrophe ('), semicolon (;), or space (' ') is strongly discouraged, because
some tools may not accept these characters in the password.
v If you plan to use solidDB Remote Control (solcon), do not create passwords
with non-ASCII characters, because solcon does not perform UTF-8 translation
for any input.
v You can also enter the password from a file. For more information, see “Entering
password from a file” on page 119.
System catalog
v Minimum length: 1 character.
v Maximum length: 39 characters
v The system catalog name must not contain spaces.
When creating objects in solidDB, if you do not specify the catalog and schema
name, the server uses the system catalog and the username of the object creator to
determine which object to use.
For details on solidDB catalogs and schemas, see section Managing database objects
in IBM solidDB SQL Guide.
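For example, a table can be referenced by its fully qualified name in the form
catalog_name.schema_name.table_name; the names below are illustrative:
SELECT * FROM mycatalog.myschema.mytable;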
Important: The database mode must be defined when the database is created and
it cannot be changed later.
If the database already exists in either mode and the database mode contradicts the
value of the parameter, the server startup fails with the following error message in
the solerr.out:
Parameter General.InternalCharEncoding contradicts the existing database mode
A working directory is the directory that contains the files related to running a
particular solidDB instance.
The following table shows the most common solidDB files, their factory value
locations, and how to modify the locations.
Table 4. solidDB default files
File: license file (solid.lic, soliduc.lic, or solideval.lic)
Factory value location: working directory
How to modify: Define the path in the SOLIDDIR environment variable
File: solid.ini configuration file
Factory value location: working directory
How to modify: Define the path in the SOLIDDIR environment variable
File: database files (solid.db)
Factory value location: working directory
How to modify: Define with the IndexFile.FileSpec parameter
File: transaction log files (sol#####.log)
Factory value location: working directory
How to modify: Define the location with the Logging.LogDir parameter
The solid.ini file specifies parameters that help customize and optimize the
solidDB database server. For example, the FileSpec parameter in the solid.ini file
specifies the directory and file names of the data files in which the server stores
the user data. Another parameter specifies the block size for the database. The
block size affects performance and also limits the maximum record size. The
FileSpec and BlockSize parameters are described in the next section.
You can find a complete description of all parameters, details about the proper
format of the solid.ini file, and instructions for specifying solid.ini
configuration parameters in Appendix A, “Server-side configuration parameters,”
on page 165. For more details about setting parameters, read 3, “Configuring
solidDB,” on page 43.
The block size is set with the parameter Indexfile.BlockSize. If you want solidDB
to create a database with a different block size, you have to set the
Indexfile.BlockSize value before creating a new database. If you have an existing
database, remember to move the old database (.db) and log files (.log) to another
directory; the next time you start solidDB, a new database is created.
To modify the block size for the new database, add the following lines to the
solid.ini file, providing the size in bytes:
[Indexfile]
BlockSize=size_in_bytes
The unit of size is 1 byte (as in all size-related parameters). The unit symbols of K
and M (for KB and MB, respectively) can also be used.
After you save the file and start solidDB, it creates a new database with the new
block size given in the solid.ini file.
Similarly, you can also modify the Indexfile.FileSpec parameter to define the
following:
v name and location of the database files – the default file name is solid.db and
the default location is the solidDB directory
v maximum size (in bytes) the database file can reach – the default value is
2147483647, which equals 2 G-1 bytes. The maximum file size is (4
G-1)*blocksize. With the default 16 KB block size, this makes 64 TB - 1.
You can also use the Indexfile.FileSpec parameter to divide the database file into
multiple files and onto multiple disks. This may be required if you want to create a
large physical database.
For details on configuring the database file locations and sizes with the
Indexfile.FileSpec parameter, read “Managing database files and caching
(IndexFile section)” on page 53.
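For example, the following solid.ini excerpt (the file names and locations are
illustrative) divides the database into two files on different disks, each with a
maximum size of 2 G-1 bytes:
[IndexFile]
FileSpec_1=C:\soliddb\solid1.db 2147483647
FileSpec_2=D:\soliddb\solid2.db 2147483647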
For efficiency, solidDB can store BLOB data outside the table. When BLOBs (Binary
Large Objects), such as objects, images, video, graphics, or digitized sound are
larger than a particular size, solidDB automatically detects this and stores the
objects to a special file area that has optimized block sizes for large files. No
administrative action is required. For more information, see section BLOBs and
CLOBs in the Appendix: Data Types in the IBM solidDB SQL Guide.
Note: This section applies to standard solidDB only. If you are using solidDB with
shared memory access (SMA) or linked library access (LLA), see the IBM solidDB
Shared Memory Access and Linked Library Access User Guide for instructions on how
to connect to a SMA or LLA server.
To connect to solidDB with solcon or solsql, give the network name of the server
and, optionally, the database administrator's user name and password on the
command line.
If you did not specify the database administrator's user name and password on the
command line, solcon will prompt you to enter them.
Important: You must have administrator rights (SYS_ADMIN_ROLE) to use solcon.
If you did not specify the database administrator's user name and password on the
command line, solsql will prompt you to enter them.
After a while you will see a message indicating that a connection to the server has
been established.
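For example, the following command (the port number, user name, and password
are illustrative) connects solidDB SQL Editor to a server listening to TCP/IP port
1315 on the local machine; solidDB Remote Control (solcon) accepts its connection
arguments in the same way:
solsql "tcp 1315" dba dba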
Related information:
5, “Managing network connections,” on page 81
6, “Using solidDB data management tools,” on page 93
solidDB data management tools are a set of utilities for performing various
database tasks.
v The solidDB that you intend to run as a service cannot be located on a network
drive.
Procedure
1. Allow (install) Windows to run solidDB as a service.
In the command prompt, issue the following command:
solid -s"install,<service_name>,<fullexepath> -c<working directory>[,autostart] [<option>]"
where
<service_name> is the name of the service
<fullexepath> is the full path for solid.exe
<working directory> is the full path for solidDB working directory (where
your solid.ini configuration file and license file are located)
autostart is an optional parameter that sets the Startup Type of the service to
Automatic, that is, solidDB runs automatically as a service when Windows is
started.
Note:
Example 2
The following command installs a service named SOLID (with Startup Type
Automatic) when solidDB is installed into the directory C:\soliddb and the
working directory is C:\soliddb. The next time Windows is started, solidDB
runs automatically as a service.
solid -s"install,SOLID,C:\soliddb\bin\solid.exe -cC:\soliddb,autostart"
Example 3
The following command installs a service named SOLID (with Startup Type
Manual) when solidDB is installed into the directory C:\soliddb and the
working directory is C:\soliddb. The solidDB database is encrypted; the
encryption password is abcd.
solid -s"install,SOLID,C:\soliddb\bin\solid.exe -Sabcd -cC:\soliddb"
Tip:
Alternatively, you can create the service using the Windows command-line
utility sc.exe. In that case, to start solidDB in a services mode, you must
include the solidDB -sstart command-line option in the command. For
example:
sc create SOLID binPath= "c:\soliddb\bin\solid.exe -cC:\soliddb -sstart"
Results
When running as a Windows service, solidDB will log warning and error
messages to the Windows event log. These messages can be viewed from Windows
by using the Event Viewer, available through Control Panel: Control Panel >
Administrative Tools > Event Viewer. Messages are also logged to the solmsg.out
file.
Procedure
v You can access the Services dialog through Control Panel: Control Panel >
Administrative Tools > Services.
v In the command prompt,
– issue the following command to start the service:
sc start <service_name>
or
net start <service_name>
– issue the following command to stop the service:
sc stop <service_name>
or
net stop <service_name>
where <service_name> is the name of the service you want to start or stop.
Procedure
1. Stop the service in the Windows Services dialog or command prompt.
v You can access the Windows Services dialog through Control Panel: Control
Panel > Administrative Tools > Services.
v In the command prompt, issue the following command:
sc stop <service_name>
or
net stop <service_name>
where <service_name> is the name of service you want to stop.
2. Remove the solidDB service.
In the command prompt, issue the following command:
solid -s"remove,<name>"
Example
The following command removes a service named SOLID.
solid -s"remove,SOLID"
If you want to run several servers concurrently on one computer, you have to set
up separate working directories for each solidDB instance.
To avoid network conflicts, use different network listen names for each server in
the solid.ini configuration files.
Example: Give each server its own working directory and a unique Com.Listen
value in its solid.ini file, for instance as follows (the directory names and port
numbers are illustrative):
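Server 1 (working directory C:\soliddb\server1):
[Com]
Listen=tcp 1315

Server 2 (working directory C:\soliddb\server2):
[Com]
Listen=tcp 1316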
The instructions in this section assume that you are familiar with SELinux for
RHEL 6. For information on SELinux on RHEL 6, see the Red Hat Enterprise Linux
6 Security-Enhanced Linux User Guide.
You also need to have the following SELinux policy tools installed on your system:
v selinux-policy-version (for example, selinux-policy-3.7.19-54.el6.noarch)
v policycoreutils-python-version (for example, policycoreutils-python-2.0.83-
19.1.el6.x86_64)
With default installation, all solidDB processes run in an unconfined domain, that
is, unconfined users can run solidDB processes without any further action.
The following procedure uses the sepolgen utility to create and install SELinux
policy modules for solidDB so that confined system-level users (system_u) can also
start solidDB processes.
Tip: You need to run the sepolgen utility separately for each solidDB process.
Procedure
1. In the selinux/devel directory, create the policy modules by issuing the
following command:
sepolgen <solidDB_installation_directory>/bin/<solidDB_executable>
The sepolgen utility creates the policy modules; the file names use the
<solidDB_executable>.xx naming pattern, for example,
<solidDB_executable>.te.
2. Install and apply the security policy permanently by issuing the following
command:
sh <solidDB_executable>.sh
Results
The sepolgen utility creates the source and binary files for the policy module. If
you want to enforce a more strict policy, for example, for specific users, you need
to modify, recompile, and reinstall the policy modules - for details, see the Red Hat
Enterprise Linux 6 Security-Enhanced Linux User Guide.
Examples
Creating and applying the system's default SELinux policy on the solidDB server
(solid) executable.
# cd /usr/share/selinux/devel
# sepolgen <solidDB_installation_directory>/bin/solid
# sh solid.sh
Creating and applying the system's default SELinux policy on the SMA server
(solidsma) executable.
# cd /usr/share/selinux/devel
# sepolgen <solidDB_installation_directory>/bin/solidsma
# sh solidsma.sh
Creating and applying the system's default SELinux policy on the solidDB High
Availability Controller (solidhac) executable.
# cd /usr/share/selinux/devel
# sepolgen <solidDB_installation_directory>/bin/solidhac
# sh solidhac.sh
When audit trail is enabled, the system records the following database activities:
v Changes in user and login information
v Changes in schemas and catalogs
v Status of audit trail (enabled/disabled/deletes)
The status of audit trail is written at each server startup. This status message can
be used to check when the audit trail data has been collected, and when the server
has been started up with the audit trail disabled. If auditing is disabled later on, at
the next startup, the system will write a status message to indicate that audit trail
is disabled.
User access
In a High Availability setup, only the primary server can write the audit trail.
However, audit trail must be enabled in both servers. This is because each server
records database activities according to the configuration settings in its own
solid.ini file. In case of a switchover (old primary had
SQL.AuditTrailEnabled=yes), the new primary will continue recording the changes
only if the Sql.AuditTrailEnabled parameter for it was set to 'yes' at the last
startup. The state of the new primary is stored as a status message in the system
table (AUDIT TRAIL ENABLED (HSB) or AUDIT TRAIL DISABLED (HSB)).
Procedure
v Enabling audit trail
1. Set the Sql.AuditTrailEnabled parameter to yes in the solid.ini
configuration file.
[SQL]
AuditTrailEnabled=yes
2. Restart solidDB.
Result: At startup, the system writes a status message to the
SYS_AUDIT_TRAIL system table to indicate that audit trail is enabled. Changes
in database activities are recorded in the SYS_AUDIT_TRAIL system table until
audit trail is disabled.
v Disabling audit trail
1. Set the Sql.AuditTrailEnabled parameter to no in the solid.ini
configuration file.
2. Restart solidDB.
Result: At startup, the system writes a status message to the
SYS_AUDIT_TRAIL system table to indicate that audit trail is disabled. Changes
in database activities are not recorded in the SYS_AUDIT_TRAIL system table
until audit trail is enabled again.
Procedure
v Example: Viewing the SYS_AUDIT_TRAIL system table
SELECT CREATIME, LOGIN_USER, SQLSTR FROM sys_audit_trail
solidDB main memory engine supports both local backups and backups made over
the network, that is, network backups.
v Local backup produces a copy — one database file — of the current logical
database, which possibly consists of multiple files.
v Network backup does the same local backup except that the backup database is
sent over the network to Network Backup Server.
The sections below describe how to back up your solidDB in-memory databases
and recover from system failure. Furthermore, means of configuring,
administering, and monitoring backup operations are presented.
For guidelines for backing up and restoring the master and replica databases, see
the IBM solidDB Advanced Replication User Guide.
Option Description
The backup directory can be set beforehand in the configuration file by setting the
parameter BackupDirectory in the [General] section of the configuration file. For
the full list of available configuration parameters see Appendix A, “Server-side
configuration parameters,” on page 165.
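For example, the following command makes a local backup to the directory given
in the command, and the solid.ini excerpt below sets the same directory as the
default backup directory (the path is illustrative):
ADMIN COMMAND 'backup /data/solid/backup'
[General]
BackupDirectory=/data/solid/backup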
CAUTION:
If two databases are copied to the same directory, the earlier one will be
overwritten by the later one. The backup directory must therefore be different
for each database. Moreover, although database files may be stored in different
directories and partitions at the source server, they are all copied to the same
backup directory. Therefore, identically named database files will conflict in the
backup directory. As a consequence, only the last backed-up file among the
identically named ones has a backup copy in the backup directory.
where
v options can be:
Table 7. Options for the netbackup command
Option Description
-s Synchronized execution.
v DELETE_LOGS | KEEP_LOGS defines whether backup logs are deleted or kept in the
source server. Default is DELETE_LOGS.
Note:
– DELETE_LOGS is referred to as Full backup
– KEEP_LOGS is referred to as Copy backup. Using KEEP_LOGS corresponds to
setting the General.NetbackupDeleteLog parameter to "no".
v connect connect str specifies the connection to the NetBackup Server. If the
connect str is omitted, it must be specified in the solid.ini configuration file. For
the full connect string syntax see “Format of the connect string” on page 58.
v dir backup dir defines the backup directory in the NetBackup Server. The path
can be either absolute or relative to the netbackup root directory.
Important: If two databases are copied to the same directory, the earlier one will be
overwritten by the later one. The backup dir should never point, for instance, to the
root directory of the NetBackup Server.
Note:
v The command ADMIN COMMAND ’netbackup’ is not supported within the Srv.At
configuration parameter.
v The ADMIN COMMAND ’status netbackup’ is synonymous to ADMIN COMMAND
’status backup’; it reports on both local and network backups.
v The ADMIN COMMAND ’netbackuplist' is synonymous to ADMIN COMMAND
’backuplist’; it reports on both local and network backups.
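For example, the following command makes a full network backup to a NetBackup
Server that listens to TCP/IP port 1964 on the host backuphost, storing the backup
in the directory backups/db1 under the netbackup root (the host name, port, and
directory are illustrative):
ADMIN COMMAND 'netbackup connect tcp backuphost 1964 dir backups/db1'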
It is, however, possible to explicitly specify the directories, the names, and the sizes
of the backup files stored into the file system of the NetBackup Server. This is done
by creating a backup.ini netbackup configuration file in the netbackup directory.
The netbackup configuration file follows the syntax of the [IndexFile] section in the
solidDB configuration file. Therefore, in addition to the section name, it may
include multiple specifications for file names and sizes. Formally the syntax is as
follows:
[IndexFile]
FileSpec_[1...N]=[path/]file name [maximum file size]
A NetBackup Server having such a backup.ini file receives the incoming database
as a whole, splits it into N separate parts and stores the parts as files in accordance
with the specifications in the backup.ini file.
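For example, a backup.ini file such as the following (the paths and sizes are
illustrative) makes the NetBackup Server split each incoming backup database into
two files on different disks:
[IndexFile]
FileSpec_1=C:\backups\db1\solid1.db 2147483647
FileSpec_2=D:\backups\db1\solid2.db 2147483647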
Tip:
An easy way to retain the directory structure of the source server is to copy and
rename the source server's solid.ini to backup.ini and move it to the backup
directory at the NetBackup Server. The NetBackup Server reads only the
FileSpec_[1...N] specifications from the [IndexFile] section, creates similar directory
structure and stores backup files with their original properties to the NetBackup
Server.
Every backup makes a checkpoint as its first action. This guarantees that the
possible restore starts with as fresh backup as possible. This way, the slower
roll-forward portion of the restore is minimized. The following files are then
copied by default to the specified backup directory:
v the database files containing the checkpointed database itself,
v the log files including changes made by those transactions that are active when
the backup takes place,
v the solmsg.out database message file (this is for convenience in diagnosing
problems — the message file is not required during a restore), and
v the solid.ini configuration file is also copied by default because after a disk
crash the original might be destroyed (the configuration file is not required
during a restore).
Note: The name of the database files and their maximum size are specified in the
FileSpec_[1...N] parameters in the [IndexFile] section of the solid.ini
configuration file. The name and location of log files is specified in the [Logging]
section of the configuration file.
The log files are typically deleted from the source server after they have been
copied to the backup directory since they have become useless. This is the default
backup procedure and it is referred to as Full backup.
It is, however, possible to retain all the log files produced over time by the update
transactions in the database server directory. Keeping all the log files is
space-consuming but allows, for instance, bringing the database up-to-date by
re-executing all the updates by using the log files only. This backup type is called
Copy backup.
Note: If you want to use Copy backups, that is, retain the full log file history, you
also must ensure that the log files are not deleted at the end of checkpoint. This
can be done by ensuring that you do not have the line CheckpointDeleteLog=yes in
section [General] of the solid.ini configuration file.
Local backup
In local backup, the database and the log files are copied from the database
directory to a user-specified backup directory that is accessible from within the
same machine.
If the backup directory already includes files with same names, they will be
overwritten. If the specified backup directory does not exist, the backup fails and
the call returns an error.
CAUTION:
Ensure that the backup directory and the database directory are on different
physical devices and in different file systems. If one disk drive is damaged, you
will lose either your database files or your backup files, but not both. Similarly,
if one file system fails, either the backup or the database files will survive.
Network backup
Netbackup is a facility for storing the whole database at some remote location.
This is done by way of a solidDB Netbackup Server whose function is to receive
backups over the network. One Netbackup Server can serve multiple simultaneous
backup source servers.
Similarly to local backup, the files are written into a user-specified directory in the
Netbackup Server. If the target netbackup directory includes files with the same
names, they will be overwritten. Unlike the local backup, if the specified remote
directory does not exist, it is created automatically.
The solidDB Netbackup Server requires administrator privileges from the caller of
netbackup. Less privileged users can perform netbackups by using stored
procedures that are created by an administrator. In that case the user must be
granted the right to execute the procedure.
Netbackup can be performed between different server versions provided that they
are netbackup compatible. As a rule, a newer version of the Netbackup Server will
serve older versions of source servers. In other cases, the protocol version is
checked and an incompatibility error is returned when the netbackup is requested.
The path is relative to the working directory and the default is the working
directory.
You can shut down a Netbackup Server by following the normal shutdown
sequence, using the normal close and shutdown commands:
1. ADMIN COMMAND 'close'
2. ADMIN COMMAND 'shutdown'
To display the status of the most recent local or network backup, use the command
ADMIN COMMAND 'status backup' or its synonym ADMIN COMMAND 'status netbackup'.
To query the list of all completed backups and their success status, issue the
following command:
ADMIN COMMAND 'backuplist'
While a backup is in progress, the status command returns the value "ACTIVE".
The default option is backup. Once the backup is completed, the command returns
either "OK" or "FAILED".
If the backup failed, you can find the error message that describes the reason for
the failure in the solmsg.out file in the database directory. Correct the cause of the
error and try again.
Troubleshooting backups
Backup media is out of disk space
Making a backup requires the same amount of disk space as the database being
backed up. Ensure that you have enough disk space in the backup storage device.
The backup directory must be a valid path name in the server operating system.
For example, if the server runs on a UNIX operating system, path separators must
be slashes, not backslashes.
If you specify a non-existent backup directory, the server prints an error message
and the backup fails. If you perform backups as timed operations, you can verify
the success of the backups from the solmsg.out file.
Because the backup copies database files with their original names to the target
directory, using the same source and target directories leads to a file sharing
conflict.
If you try to start a network backup without setting up the solidDB network backup
server properly, the netbackup will fail.
Backup can slow down the database if the backup uses the same storage resources
as the database. This can happen, for example, in the following cases:
v The backup write uses the same device controller as the database.
v The backup write uses the same physical storage device as the database.
v The operating system buffers large amounts of the backup data into memory.
Restoring backups
You can restore the database to the state it was in when the backup was created by
following the instructions below. Furthermore, you can bring a restored backup
database up to the current state by using the log files generated after the backup
was made. Those log files include information about the data inserted or updated
since the latest backup.
If you restore the backup without log files, no recovery is performed because no log files exist.
If the log files created after the backup are available, solidDB will automatically use them to perform a roll-forward recovery.
Transaction logging
Transaction logging guarantees that no committed operations are lost in the case of
a system failure. When an operation is executed in the server, the operation is also
saved to a transaction log file. The log file is used for recovery in case the server is
shut down abnormally.
solidDB allows you to decide whether you want to use logging or not. If logging is
used, abnormally shut down databases can be restored to the state they were in at the
moment the failure took place. If logging is disabled, databases can be restored
to the backup state only. Transaction logging is enabled by default. If the full
transaction recovery is not needed, logging can be disabled. To do this, set the
[Logging] parameter LogEnabled to "no".
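For illustration, the corresponding solid.ini entry looks like the following (use this setting only if full transaction recovery is not needed):
[Logging]
LogEnabled=no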
Creating checkpoints
A checkpoint updates the database files on disk. Specifically, a checkpoint copies
pages from the database server's memory cache to the database file on the disk
drive. The server does the copy in a transactionally-consistent way; that is, only
results of committed transactions are copied. The result is that all of the data in the
database file is committed data from complete transactions. If the server fails
between checkpoints, the disk drive will have a consistent and valid (although not
necessarily up-to-date) snapshot of the data.
Checkpoints can be seen as the main write operations to the database files on disk.
The server does not write the results of each individual INSERT/UPDATE/
DELETE statement (or even the result of each transaction) to the disk as it
happens. Instead, the server accumulates committed transactions in the form of
updated pages in memory and writes them to the disk only during checkpoints.
The server can also use part of the database file as swap space if the server's cache
overflows. In this situation, the server also writes to the database file.
Checkpoints apply also to persistent in-memory tables, not just disk-based tables.
Note: There can only be one checkpoint in the database at a time. When a new
checkpoint is created successfully, the older checkpoint is automatically erased. If
the server process is terminated in the middle of checkpoint creation, the previous
checkpoint is used for recovery.
A checkpoint can require a substantial amount of I/O, and can affect the server's
responsiveness while the checkpoint is occurring.
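A checkpoint can also be requested manually with an ADMIN COMMAND; for illustration (the makecp command name is the same one used in the timed-command example below):
ADMIN COMMAND ’makecp’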
You can also create a checkpoint using a timed command. See “Entering timed
commands” for more details.
To enter a timed command, edit the At parameter in the [Srv] section of the
solid.ini file. The syntax is:
Example:
[Srv]
At = 20:30 makecp, 21:00 backup, sun 23:00 shutdown
The following table contains a list of valid commands and their arguments.
Table 11. Arguments and defaults for different timed commands
Command   Argument                                                               Default
backup    backup directory                                                       the default backup directory that is set in the configuration file
system    an operating system command (for example, cp solmsg.out solmsg2.out)   no default
When databases grow, solidDB server allocates new disk pages. However, it does
not free the space allocated previously in the database files even if it is not needed
any more. Instead, it maintains a list of unused pages for later use.
The solidDB database file compaction feature works in offline mode at the page
level. Offline means that a database file being compacted cannot be actively used
by the server. Page level means that only empty pages are discovered and removed
from the file. No intra-page compaction is performed; data is not moved among
pages.
Procedure
1. View information about the database file size by starting solidDB with the
following command:
solid -x infodbfreefactor
The -x infodbfreefactor option outputs a report of how many free pages there
are in the database, how much space in kilobytes is free, and a percentage
value of free space. After printing the report to ssdebug.log and console, the
solidDB process returns with a success return value.
Example output:
------------------------------------------------------------
2010-10-26 16:45:05
IBM solidDB - Version 6.5.0.3 Build 2010-10-04 (Linux 2.6.18 AMD64 64bit MT)
Infodbfreefactor option is activated.
------------------------------------------------------------
Database file size = 152064 Kbytes
Free blocks = 82128 Kbytes
Log file size = 0 Kbytes
Free space = 54.01%
Block size = 16384 bytes
2. Start database reorganization by starting solidDB with the following command:
solid -x reorganize
Encrypting a database
By default, solidDB always encrypts passwords using the DES algorithm. If you
also want to encrypt the database files and log files, you need to create an
encrypted database using solidDB command line options. You can also disable the
encryption of passwords.
The solidDB DES algorithm is a weak DES algorithm that is not recommended for
applications that require strong security.
Procedure
v Creating a new encrypted database
To create an encrypted database, include the -E and -x keypwdfile:<filename>
options in the solidDB startup command.
For example:
solid -C mycatalog -U admin -P admin -E -x keypwdfile:pwd.txt
v Encrypting an existing database
To encrypt an existing database, include the -E and -x keypwdfile:<filename>
options in the solidDB startup command.
For example:
solid -U admin -P admin -E -x keypwdfile:pwd.txt
Disabling encryption
The default encryption of passwords can be disabled with server-side or client-side
parameters, or at connection time using ODBC Connect Info settings or
non-standard JDBC connection properties.
If you want to create a database without any encryption, disable the encryption
of passwords using the parameter settings or connection properties described
below.
Disabling the encryption of passwords disables also the encryption of database and
log files, if used.
Client-side parameter setting
To disable the encryption for a specific ODBC client connection, set the client-side
parameter Client.UseEncryption to No.
[Client]
UseEncryption=No
The option must be given before the server connect string, for example:
USE_ENCRYPTION=NO tcp 1964
Procedure
To start an encrypted database, provide the encryption password at startup with the
-x keypwdfile:<filename> option. For example:
solid -x keypwdfile:pwd.txt
Alternatively, you can provide the password using the -S command line option:
solid -S <password>
Procedure
Alternatively, you can specify the old and the new password on the command line
using the -S option:
solid -E -S <old_password> -S <new_password>
Decrypting a database
You can decrypt a database with the option -x decrypt. You also need to provide
the encryption password.
Procedure
Decrypting a database
To decrypt a database, start solidDB with the following command syntax:
solid -x decrypt -x keypwdfile:<filename>
For example:
solid -x decrypt -x keypwdfile:pwd.txt
Procedure
To encrypt HotStandby servers, encrypt the Primary database first and then copy or netcopy it to the Secondary.
The performance impact of encryption depends on the type of operation:
1. On read type operations, performance impact is mostly determined by the
cache hit rate and is not significant when the cache hit rate is high.
2. On insert and update operations, the server encrypts and decrypts the log files
(if they are used) and in this case performance penalty can be more significant.
Generally the factory values offer good performance and operability but in some
cases modifying some parameter values can improve performance. You might also
need to set configuration parameters to enable or disable certain functionality.
You can set the configuration parameter values by editing the solid.ini
configuration file manually or, in most cases, using ADMIN COMMANDs.
Some parameter settings can also be overridden per session or per transaction by
using the SQL commands SET or SET TRANSACTION, or, by defining the settings
per connection with the ODBC connection attributes or JDBC connection
properties. The precedence hierarchy is the following (from high precedence to
low):
v SET TRANSACTION: transaction-level settings
v SET: session-level settings
v ODBC connection attributes and JDBC connection properties
v Parameter settings specified by the value in the solid.ini configuration file
v Factory value for the parameter
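For illustration, a session-level and a transaction-level override might look like the following (the exact set of available SET options is described in the IBM solidDB SQL Guide; the durability and isolation settings shown here are illustrative assumptions):
SET DURABILITY RELAXED;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;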
Additionally, you can control some solidDB server operations with the following
options:
v solidDB command line options at solidDB startup
v environment variables
v ODBC client connect string arguments
Related reference:
Appendix C, “solidDB command line options,” on page 221
Related information:
Appendix A, “Server-side configuration parameters,” on page 165
Appendix B, “Client-side configuration parameters,” on page 217
The client-side configuration parameters are stored in the solid.ini configuration
file and are read when the client starts.
Managing parameters
You can view and modify server-side configuration parameters using ADMIN
COMMANDs or by editing the solid.ini configuration file. Client-side
configuration parameters can only be viewed and modified using the solid.ini
file.
Configuration files and parameter settings
There are two different solid.ini configuration files, one for the server and one
for the ODBC client. Neither configuration file is obligatory. If there is no
configuration file, the factory values are used.
v The server-side solid.ini is used as the main configuration file for the server.
v The client-side solid.ini file is used with the solidDB ODBC client (driver). The
client-side solid.ini file can also be used with the solidDB data management
tools, for example, to define logical data source names.
When solidDB (or the ODBC client) starts, it attempts to open solid.ini first from
the directory set by the SOLIDDIR environment variable. If the file is not found
from the path specified by this variable or if the variable is not set, the server or
client attempts to open the file from the current working directory. The current
working directory is normally the same as the directory from which you started
the solidDB server, or a client application. You may also specify a different
working directory by using the -c command line option at solidDB startup.
If a value for a specific parameter is not set in the solid.ini file, solidDB will use
a factory value for the parameter. The factory values may depend on the operating
system you are using.
The configuration parameters are defined as parameter name – value pairs. The
parameters are grouped according to section categories. Each section category
starts with a section name inside square brackets, for example:
[Com]
The [Com] section lists communication information. The section names are case
insensitive. The section names [COM], [Com], and [com] are equivalent.
Example
The samples directory in the solidDB installation directory contains samples for
different use cases. Each sample contains a solid.ini file with relevant settings for
each use case; you can use the sample solid.ini files as a reference when
configuring your environment.
Tip: If the solidDB server and the client are run on the same machine and use the
same working directory, a single solid.ini configuration file can be both the
server-side and the client-side configuration file. For example, the solid.ini
configuration file in the solidDB_installation_directory\eval_kit\standalone
directory contains both the server-side Com.Listen and the client-side Com.Data
Sources parameter settings.
Viewing parameters
You can view the parameter settings for all parameters, for all parameters in a
section, or for a single parameter at a time. The command has the following general form:
ADMIN COMMAND ’parameter [-r] [section_name[.parameter_name]]’;
where:
v -r specifies that only the current value is shown
v section_name is the category name where the parameter is located in solid.ini
Procedure
v To view all parameters, use the following command:
ADMIN COMMAND ’parameter’;
RC TEXT
-- ----
0 Accelerator ImplicitStart Yes Yes Yes
0 Accelerator ReturnListenErrors No No No
0 Com Listen tcpip 2315, tcpip 2315, tcpip 1964
0 Com MaxPhysMsgLen 8192 8192 8192
0 Com RConnectLifetime 60 60 60
0 Com RConnectPoolSize 10 10 10
0 Com RConnectRPCTimeout 0 0 0
0 Com ReadBufSize 2048 2048 2048
0 Com SocketLinger Yes Yes Yes
0 Com SocketLingerTime 0 0 0
.
.
.
192 rows fetched.
v To view a single parameter, include the section name and parameter name in the
command. For example:
admin command ’parameter logging.durabilitylevel’;
RC TEXT
-- ----
0 Logging DurabilityLevel 3 2 2
1 rows fetched.
v To view all parameters in a section, include the section name in the command.
For example:
admin command ’parameter logging’;
RC TEXT
-- ----
0 Logging BlockSize 16384 16384 16384
0 Logging DigitTemplateChar # # #
0 Logging DurabilityLevel 1 1 1
0 Logging FileFlush Yes Yes Yes
0 Logging FileNameTemplate sol#####.log sol#####.log sol#####.log
0 Logging LogDir logs logs
0 Logging LogEnabled Yes Yes Yes
0 Logging LogWriteMode 2 2 2
0 Logging MinSplitSize 10485760 10485760 10485760
0 Logging RelaxedMaxDelay 5000 5000 5000
0 Logging SyncWrite No No No
11 rows fetched.
Results
To show only the current value, use the -r option. For example:
admin command ’parameter -r logging’;
RC TEXT
-- ----
0 Logging BlockSize 16384
0 Logging DigitTemplateChar #
0 Logging DurabilityLevel 1
0 Logging FileFlush Yes
0 Logging FileNameTemplate sol#####.log
0 Logging LogDir logs
0 Logging LogEnabled Yes
0 Logging LogWriteMode 2
0 Logging MinSplitSize 10485760
0 Logging RelaxedMaxDelay 5000
0 Logging SyncWrite No
11 rows fetched.
– In some cases, changing a parameter may take effect immediately and be
written to the solid.ini file so that it also applies the next time that the
server starts. This depends on the access mode of the parameter.
The commands return the new value as the resultset. If the parameter's access
mode is RO (read-only) or the value entered is invalid, the ADMIN COMMAND
statement returns an error.
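For illustration, a parameter value can be changed with a command of the following form (the DurabilityLevel value shown is illustrative):
ADMIN COMMAND ’parameter logging.durabilitylevel=2’;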
Note: Parameter management operations are not part of a transaction and cannot
be rolled back.
Related information:
“Access mode and persistence of parameter modifications”
The access mode of a parameter defines whether the parameter can be changed
dynamically via an ADMIN COMMAND, and when the change takes effect.
All the changes made to parameters having the access mode RW* are stored in the
solid.ini file at the next checkpoint. This does not apply to values set with the
temporary option.
Saving parameters
If you want to use a different value for a constant parameter, you have to create a
new database. Before creating the new database, set the new constant parameter
value by editing the solid.ini file.
The following example sets a new block size for the index file by adding the
following lines to the solid.ini file (the value shown is illustrative):
[IndexFile]
BlockSize=4096
After editing and saving the solid.ini file, move or delete the old database and
log files, and start solidDB.
Tip: The log block size can be changed between startups of the server.
By default, the server looks for the solid.ini file in the current working directory,
which is normally the directory from which you started the server.
You can specify a different directory to be used as the current working directory in
the following ways:
v Use the -c solidDB command line option.
v Set the SOLIDDIR environment variable to specify the location of the solid.ini
file.
When searching for the solid.ini file, solidDB uses the following precedence
(from high to low):
v location specified by the SOLIDDIR environment variable (if set)
v current working directory
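For example, the server can be started with an explicitly specified working directory using the -c option (the path is illustrative):
solid -c /home/solid/db1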
Related reference:
Appendix C, “solidDB command line options,” on page 221
The entries in the solid.ini file have the following general format:
[section_name1]
param_name1=param_value
param_name2=param_value
[section_name2]
param_name3=param_value
;This is a comment line (less than 79 characters)
Section names
The solid.ini configuration file is divided into sections. Each section contains a
group of one or more loosely-related parameters.
Each section has a unique name. The name is delimited with square brackets. For
example:
[SQL]
Every parameter must be under a section header. If you put a parameter before
any section header, you get an error message indicating that there is an
unrecognized entry in the section named "<no section>".
Section names can be repeated. For example:
[Index]
BlockSize=2048
[Com]
...
[Index]
CacheSize=8m
However, repeating section names makes it more difficult for users to keep the
file up-to-date and consistent.
Parameter values are set in the format param_name=param_value. For example:
Listen=tcp 127.123.45.156 1313
DurabilityLevel=2
Blank spaces around the equals sign are allowed but not required. The following
are equivalent:
DurabilityLevel=2
DurabilityLevel = 2
If you omit the parameter value, the server will use the factory value. For example:
; Use the factory value
DurabilityLevel=
If you omit the parameter value and the equals sign, you get an error message.
Parameter names can be repeated (you will not get a warning message), but this is
very strongly discouraged. The last occurrence of the parameter in the file takes
precedence.
There are a few cases where two or more sections have parameters with the same
name. Therefore, you must be careful to place each parameter in the correct
section.
Most sections and parameters are optional. You do not need to specify a value for
every parameter in every section, and in fact you can omit entire sections. If you
omit a parameter, the server will use the factory value.
Comments
The configuration file can contain comments; comments must begin with a
semicolon (;). The comments can be put on separate lines or on the same line as a
parameter.
; This is a valid comment.
DurabilityLevel=2 ; This is also a valid comment.
The maximum length of a line is 79 characters. If you create comments longer than
79 characters, the server splits the comments on separate lines using a backslash (\)
at the end of the line but without adding a comment marker (;) on the new line.
The server can handle the lines that have been split in this way; however,
applications such as watchdogs might see the file as corrupted and thus fail. If you
write long comments manually, split them into separate lines of less than 79
characters and begin each line with a semicolon.
Validation of entries
Important:
v The server does not necessarily display an error message if you use an invalid
value for a parameter. The server may simply use the factory value without
issuing an error message.
v The solid.ini parameter file is checked only when the server starts. If you edit
it after the server starts, the server will not see the changes until the next time
that the server starts.
v If you make changes to the solid.ini file AND you make changes to
parameters in the server by using an ADMIN COMMAND, the behavior is
unpredictable. While the server is running, you can safely change the solid.ini
file OR make changes to server values using the ADMIN COMMAND, but you
should not do both during the same "run" of the server.
Example
The following example shows a simple solid.ini file entry that contains a section
heading, a parameter, and a comment:
[Logging]
; Use "relaxed logging", which improves performance but may
; risk losing the last few transactions during a failure.
DurabilityLevel=1
[Com]
...
Listen: The network name is defined with the Listen parameter in the [Com] section, for
example:
[Com]
Listen = tcpip localhost 1313
FileSpec: The FileSpec parameter in the [IndexFile] section defines the name and the
maximum size of the database file, and, optionally, a device number. The default value
for this parameter is solid.db 2147483647 (2 GB-1 expressed in bytes).
The size unit is 1 byte. You can use K and M unit symbols to denote kilobytes and
megabytes, respectively. The maximum file size is (4G-1)*blocksize. With the
default 16 KB block size, this makes 64 TB - 1.
The FileSpec parameter is also used to divide the database into multiple files and
onto multiple disks. To divide the database into multiple files, specify another
FileSpec parameter identified by the number 2. The index file will be written to
the second file if it grows over the maximum value of the first FileSpec parameter.
In the following example, the parameters divide the database file onto the disks C:,
D:, and E:; the file is split to the next disk after it grows larger than about 1 GB (1000 MB).
This example does not use the optional device number.
[IndexFile]
FileSpec_1=C:\soldb\solid.1 1000M
FileSpec_2=D:\soldb\solid.2 1000M
FileSpec_3=E:\soldb\solid.3 1000M
Note:
The index file locations entered must be valid path names in the server's operating
system. For example, if the server runs on a UNIX operating system, path
separators must be slashes instead of backslashes.
Although the database files reside in different directories, the file names must be
unique. In the above example, the different device numbers indicate that C:, D: and
E: partitions reside on separate disks.
There is no practical limit to the number of database files you may use.
Splitting the database file on multiple disks will increase the performance of the
server because multiple disk heads will provide parallel access to the data in your
database.
You may need to have multiple files on a single disk if your physical disk is
partitioned into multiple logical disks and no single logical disk can accommodate
the size of the database file you expect to create.
If the database file is split into multiple physical disks, the multithreaded solidDB
is capable of assigning a separate disk I/O thread for each device. This way the
server can perform database file I/O in a parallel manner. Read section Dedicated
threads in “Types of threads” on page 9 for more details.
The optional device number that you may specify for each data file helps the server
optimize its performance. The actual device number serves only as a means for
you to designate a distinct number for each physical device; the device number
serves no other purpose, such as indicating the brand, model, or other
characteristics of your storage device.
If you have different files on the same physical device, use the same device
number for each of those files. For example, on a Windows system that has two
physical disk drives, the first physical disk drive is typically C:. A second physical
disk drive could be partitioned into two logical disk drives, D: and E:. If one data
file is put on C:, one on D:, and one on E:, then the solid.ini file might look like
the following:
[IndexFile]
FileSpec_1=C:\soldb\solid.1 1000M 1
FileSpec_2=D:\soldb\solid.2 1000M 2
FileSpec_3=E:\soldb\solid.3 1000M 2
If your database has reached the maximum size specified by the FileSpec
parameter, you need to increase the maximum file size limit or divide the database
into multiple files.
CAUTION:
Do not attempt to use the FileSpec parameter to decrease the size of a database;
you risk losing existing data and corrupting the database.
Related concepts:
“Troubleshooting database file size (file write fails)” on page 151
If your database has reached the maximum size specified by the
IndexFile.FileSpec parameter, you need to increase the maximum file size limit or
divide the database into multiple files.
CacheSize: The CacheSize parameter defines the amount of main memory used to
maintain the shared buffer pool of the disk database. This buffer pool is called the
database cache. The factory value depends on the server operating system. For the
pure in-memory database operation, the cache size is mostly irrelevant provided
that it is at least 8 MB. The absolute minimum size is 512 kilobytes. For example:
[IndexFile]
CacheSize=524288
The size unit is bytes. You may also specify the amount of space in units of
megabytes, for example, "10M" for 10 megabytes. Although solidDB is able to run
with a small cache size, a larger cache size generally speeds up the server. The
cache size needed depends on the size of the database, the number of connected
users, and the nature of the operations executed against the server.
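For instance, a larger cache can be configured using the megabyte unit symbol (the value is illustrative; choose it according to your database size and available memory):
[IndexFile]
CacheSize=256M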
You can override the value set with General.DefaultStoreIsMemory by using the
STORE clause in the CREATE TABLE or ALTER TABLE statement.
For example:
CREATE TABLE employees (name CHAR(20)) STORE MEMORY;
CREATE TABLE ... STORE DISK;
ALTER TABLE network_addresses SET STORE MEMORY;
The name and location for your backup directory is defined with the
BackupDirectory parameter in the [General] section.
The default location is a directory relative to your solidDB working directory.
For example:
[General]
BackupDirectory=backup
With the above value 'backup', the backup will be written to a directory that is a
subdirectory of the solidDB directory.
The backup directory entered must be a valid path name in the server's operating
system. For example, if the server runs on a UNIX operating system, path
separators must be slashes instead of backslashes.
The NetBackupDirectory parameter in the source server sets the remote directory for
the use of Network Backup. The netbackup directory is either absolute or relative to
the root directory of the NetBackup Server.
The netbackup root directory parameter in the NetBackup Server sets the root directory
for all netbackup operations that use relative path expressions in their NetBackupDirectory
specifications. The netbackup root directory is either absolute or relative to the working directory.
Important:
By default, NetBackup copies a logical database consisting of multiple files into one
flat file in the NetBackupDirectory. Instead of flattening the structure into one file,
you can define multiple files to which the source database files are mapped in
netbackup. Mapping the source database file(s) to multiple backup database files is
done by using the backup.ini file.
Note:
Placing log files on a physical disk separate from database files improves
performance.
The Threads parameter in the [Srv] section defines the number of general purpose
worker threads used by solidDB. For example:
[Srv]
Threads=9
The optimum number of threads depends on the number of processors the system
has installed. Usually it is most efficient to have between two and eight threads
per processor.
You must experiment to find the value that provides the best performance on your
hardware and operating system. A good formula to start with is:
threads = (2 x number of processors) + 1
The SQL Info facility is turned on by setting the Info parameter in the [SQL]
section of the configuration file to a non-zero value. The output is written to a file
named soltrace.out in the solidDB directory.
Use this parameter for troubleshooting purposes only as it slows down the server
performance significantly. This parameter is typically used for analyzing
performance for a specific single query or specific queries. Standard solidDB
monitoring is a better choice for generic application SQL database tracing.
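For illustration, the facility could be enabled in the configuration file as follows (the level shown is an illustrative non-zero value):
[SQL]
Info=4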
Trace: If you change the Trace parameter default setting from No to Yes, solidDB
starts logging trace information about network messages for all the established
network connections to the default trace file or to the file specified in the
TraceFile parameter.
TraceFile: If the Trace parameter is set to Yes, then trace information about
network messages is written to a file specified by the TraceFile parameter. If no
file name is specified, the server uses the default value soltrace.out, which is
written to the current working directory of the server or client, depending on
which end the tracing is started at.
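For illustration, network message tracing could be configured in the solid.ini file as follows (the trace file name is illustrative):
[Com]
Trace=Yes
TraceFile=nettrace.out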
Connect parameter: The Com.Connect parameter defines the default connect string
for a client to connect to when it communicates with a server. Since the client
should talk to the same network name as the server is listening to, the value of the
Com.Connect parameter on the client should match the value of the Com.Listen
parameter on the server.
The following connect line tells the client to communicate with the server by using
the TCP/IP protocol to talk to a computer named 'spiff' using server port number
'1313'.
[Com]
connect = tcpip spiff 1313
When an application program is using a solidDB ODBC Driver, the ODBC Data
Source Name can be used instead of the Com.Connect parameter.
Format of the connect string: A default connect string can be defined with the
client-side Com.Connect configuration parameter. The connect string can also be
supplied, for example, at connection time or when configuring data sources with
an ODBC driver manager.
The same format of the connect string applies to the Com.Connect parameter as
well as to the connect string used by solidDB tools or ODBC applications.
The connect string has the form:
connect_string = protocol_name [options] server_name
where
v options can be any combination of the following (both are supported on all platforms):
-plevel
Pings the server at the given level (0-5). Clients can always use the solidDB Ping
facility at level 1 (0 is no operation/default). Levels 2, 3, 4, or 5 may only be used if
the server is set to use the Ping facility at least at the same level. See Ping facility in
the IBM solidDB Administrator Guide for details.
-t
Turns on the Network trace facility. See Network trace facility in the IBM solidDB
Administrator Guide for details.
Note:
v The protocol_name and the server_name must match the ones that the server is
using in its network listening name.
v If given at the connection time, the connect string must be enclosed in double
quotation marks.
v All components of the connect string are case insensitive.
Examples
[Com]
Connect=tcp -z -c1000 1315
[Com]
Connect=nmpipe host22 SOLID
solsql "tcp localhost 1315"
solsql "tcp 192.168.255.1 1315"
rc = SQLConnect(hdbc, "upipe SOLID", (SWORD)SQL_NTS, "dba", 3, "dba", 3);
rc = SQLDriverConnect(hdbc,
(SQLHWND)NULL,
(SQLCHAR*)"DSN=tcp localhost 1964;UID=dba;PWD=dba",
38,
out_string,
255,
&out_length,
SQL_DRIVER_NOPROMPT);
When Com.Trace is set to Yes, solidDB writes the trace log to the default trace file
(soltrace.out) in the current working directory or to the file specified with the
Com.TraceFile parameter.
Procedure
Example
solid -Udba -Pdba -x listen:"tcp 2315" -E -Sadmin
The above command starts a solidDB server and encrypts an existing database
where:
v -U = user name: dba
v -P = password: dba
v -x listen = network listening name: tcp 2315
v -E = encrypts the database
v -S = encryption password: admin
Procedure
v In Linux and UNIX environments, use the following command:
export <environment_variable>=<value>
v In Windows environments, use the following command:
set <environment_variable>=<value>
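For example, in a Linux or UNIX environment, the SOLIDDIR environment variable described earlier in this chapter could be set as follows (the path is illustrative):
export SOLIDDIR=/opt/solid/config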
4 Monitoring solidDB
The following sections describe the methods used for querying the status of a
solidDB database.
You can view the message log files with a text editor.
The message log file size is controlled with the Srv.MessageLogSize parameter.
When the maximum size of the message log file is reached, the current solxxx.out
file is renamed to solxxx.bak, and a new solxxx.out file is started. To avoid
overwriting the contents of the backup solxxx.bak message log the next time the
maximum size of the message log file is reached, use the Srv.KeepAllOutFiles
parameter to enable the log files to be named incrementally.
Each error and status message is identified with an 8-character unique code. If the
message files are processed programmatically, it is easier to parse them if the
message codes are included. To enable the message code output, set the
Srv.PrintMsgCode to yes (default is no).
To disable the generation of the solmsg.out and the solerror.out log files, set the
Srv.DisableOutput parameter to yes (default is no).
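For illustration, message code printing and incremental message log naming could be enabled together in the solid.ini file (yes values assumed for both parameters):
[Srv]
PrintMsgCode=yes
KeepAllOutFiles=yes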
Viewing error message descriptions with ADMIN COMMAND
'errorcode'
Each error and status message is identified with a unique number that you can use
with ADMIN COMMAND ’errorcode’ to view the error description.
For example:
ADMIN COMMAND ’errorcode 14706’;
RC TEXT
-- ----
0 Code: SRV_ERR_HSBINVALIDREADTHREADMODE (14706)
0 Class: Server
0 Type: Error
0 Text: Invalid read thread mode for HotStandby, only mode 2 is supported.
4 rows fetched.
The command ADMIN COMMAND ’errorcode all’ displays the descriptions of all
error messages in Comma-Separated Value (CSV) format.
The error codes and their descriptions are also available in Appendix E, “Error
codes,” on page 227.
Monitoring the trace files is not necessary for everyday operation of the server. For
more details about the trace files and how to use them, see “Network trace
facility” on page 144.
Messages about failed login attempts include, for instance, the IP address and the username used in the attempt. The
syntax of the message is as follows:
timestamp [message code] User username tried to
connect from {hostname | unnamed host} with an
illegal username or password. [SOLAPPINFO is solappinfo value.]
Example:
Thu May 12 17:55:17 2005
12.05 17:55:17 User ’FOO’ tried to connect
from localhost.localdomain (127.0.0.1)
with an illegal username or password.
Note: The message code is only included if message code printing is enabled
(Srv.PrintMsgCode=yes) in solid.ini.
A large value indicates that there is a long-running transaction active in the engine.
Note that an excessively large Bonsai Tree causes performance degradation. For
details on reducing Bonsai tree size, read “Reducing Bonsai Tree size by
committing transactions” on page 134.
v User count statistics shows the current and the maximum number of concurrent
users.
Note that the ADMIN COMMAND ’throwout’ command throws out user connections; it does
not break the connection between a HotStandby Primary and a HotStandby Secondary server.
To obtain the status of the most recently made network backup, enter the command:
ADMIN COMMAND ’status netbackup’
If the latest backup has failed, then the RC column returns an error code. Return
code 14003 with text "ACTIVE" means that the backup is currently running.
In general, IBM Software Support and development teams use the reports for
troubleshooting. IBM Support may ask you to produce the report for
troubleshooting purposes. You can also generate the report to gain information
about a problem that you are investigating, but its use might be limited without
knowledge of the solidDB source code.
There are two commands for viewing and collecting performance information:
v ADMIN COMMAND ’perfmon’ returns performance information for the past few
minutes at approximately one minute intervals.
v ADMIN COMMAND ’perfmon diff’ collects performance information at given
intervals and outputs it into a file in a comma-separated value format.
Example output:
ADMIN COMMAND ’perfmon’;
RC TEXT
-- ----
0 Performance statistics:
0 Time (sec) 30 42 44 30 34 32 32 33 Total
0 File open : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 File read : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 File write : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 File append : 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0
0 File flush : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 File lock : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache find : 0.0 0.0 0.5 0.2 0.2 6.1 0.9 0.0 0.4
0 Cache read : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache write : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache prefetch : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache prefetch wait : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache preflush : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0 Cache LRU write : 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
...
Most values are shown as the average number of events per second. Counters that
cannot be expressed as events per second (for example, database size) are
expressed in absolute values.
The ADMIN COMMAND ’perfmon’ command syntax also has options that allow you to
specify output options. For example, you can restrict the output by providing a list
of prefixes of counter names ADMIN COMMAND ’perfmon name_prefix_list’.
Example:
ADMIN COMMAND ’perfmon db’ returns all pmon counters starting with 'db':
ADMIN COMMAND ’perfmon db’;
RC TEXT
-- ----
0 Performance statistics:
0 Time (sec) 19 Total
0 DBE insert : 0.0 0.0
0 DBE delete : 0.0 0.0
0 DBE update : 0.0 0.0
0 DBE fetch : 0.0 41.2
0 DBE dd operation : 0 0
0 Db size : 8064 8064
0 Db free size : 7440 7440
0 DB actiongate lock time, latest: 0 0
0 DB actiongate lock time, sum : 0 0
0 DB actiongate lock count : 0 0
12 rows fetched.
For more information about the ADMIN COMMAND ’perfmon’ options, see “ADMIN
COMMAND” on page 309.
The syntax for starting the collection is:
ADMIN COMMAND ’pmon diff start [filename [interval]]’
where
v filename is the name of the output file. The performance data is output in
comma-separated value format; the first row contains the counter names, and
each subsequent row contains the performance data per each sampling time.
The default file name is pmondiff.out.
v interval is the interval in milliseconds at which performance data is collected.
The default interval is 1000 milliseconds.
Example
To start logging performance counters into the counter_log.csv file with 2 second
interval, issue the following command:
ADMIN COMMAND ’pmon diff start counter_log.csv 2000’
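The collection runs until it is stopped; assuming the corresponding stop subcommand, it can be ended with:
ADMIN COMMAND ’pmon diff stop’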
Cache LRU write A write from cache is done when performing an LRU
replacement. This indicates that the client thread must write one
block to disk before reading a new block from the disk because
there has not been a free disk block available. A very high value
can indicate just high I/O load, or it can indicate that I/O
preflusher values are not optimal.
Cache slot wait This counter indicates that there is concurrent access to the same
block and one thread must wait for the other. Depending on the
cache configuration, it can also indicate that the mutex count for
the cache is not optimal and there are false conflicts. The default
mutex count does not cause false conflicts here.
Cache slot replace Database cache slot is replaced and old slot is removed.
Cache write storage leaf Database cache has written a storage tree leaf page to disk.
Cache write storage index Database cache has written a storage tree index page to disk.
Cache write bonsai leaf Database cache has written a Bonsai-tree leaf page to disk.
Cache write bonsai index Database cache has written a Bonsai-tree index page to disk.
Cache preflush bytes Number of bytes written by preflusher before log file is flushed.
The counter is reset at each flush.
Cache preflush flush Number of preflush calls/sec before log file is flushed.
Trans read level This counter indicates the current transaction read level. This
counter value increases all the time. Because the counter value is
32-bit variable, it can have a negative value, but still logically
the value is increasing. If the value stays the same for a long
time with concurrent write transactions, it indicates that a long
transaction is blocking the read level and can cause merge
blocking and an increase in the Bonsai tree size.
Merge cleanup Transaction buffer cleanup calls/sec (if split purge enabled)
Merge nomrg write Current number of index entries waiting for merge
Merge level Current merge level (read level of the oldest active transaction)
Log wait flush Current number of user threads waiting for log operation
Log writeq full rec Log writes while log write queue full (in number of records)
Log writeq full byt (byte size) Log writes while log write queue full (in bytes)
Log writeq records Number of records in current log writer queue.
Log writeq bytes Number of bytes in log writer queue.
Log writeq pending bytes Number of bytes for the next log writer queue flush.
Log availq items Number of records added to available items queue
Log writeq add Number of records added to log writer queue.
Log writeq write Number of records written from log writer queue to log file.
Log writeq item count Number of write queue items in the system
HSB cached bytes Primary/Secondary: current size memory based log buffer, in
bytes
HSB flusher bytes Number of bytes of the HSB log in the send queue to the
Secondary
HSB notsent bytes Number of bytes in the HSB log that has been accumulated (for
example, during a catchup) and not sent to the Secondary yet
HSB grouped acks Secondary: current number of ack groups (physical acks)
HSB wait cpmes Yes/no (1/0) Primary: waiting for checkpoint ack from the
Secondary
HSB log freespc HSB: number of log operations there is space for in the protocol
window
HSB catchup freespc HSB: number of log operations there is space for in the protocol
window, for catchup
HSB alone freespc Primary: in Primary alone, bytes there is room for in the
transaction log
Tabcur create Number of internal table cursor calls
Tabcur reset full Number of full constraint reset calls in table cursor
Tabcur reset smpl Number of simple constraint reset calls in table cursor
Tabcur estimate Number of cost estimate calls in table cursor
Tabcur table scan Number of table scans executed in SQL statements.
A high number of table scans can mean that SQL statements are
not executed optimally or some index definitions are missing
from tables.
Tabcur index access Number of index accesses executed in SQL statements
MME max num of locks Peak number of MME locks (since startup)
MME cur num of lock chains Current no. MME hash buckets
MME max num of lock chains Peak no. MME hash buckets (since startup)
MME longest lock chain path MME: longest hash overflow path
MME mem used by page structs MME memory allocated to the shadow structures in kilobytes
MME page splits Number of MME page splits
MME page joins Number of MME page joins
MME unnec mutexed searches Number of MME rows fetched while unnecessarily in exclusive
mode
MME nonmatched (RO) Number of MME rows that did not match search criteria fetched
in shared mode
MME nonmatched (EXCL) Number of MME rows that did not match search criteria fetched
in exclusive mode
MME inserts with x gate Number of inserts done in exclusive mode. Insert switches from
shared mode to exclusive mode for example, when the insert
causes index node split.
MME deletes with x gate Number of MME deletes performed in exclusive mode
MME hotspot protection Number of times an MME search enters exclusive mode to
access a hotspot
MME index key inserts Number of keys inserted to MME indexes, includes keys inserted during a database recovery (not accurate1)
MME index key deletes Number of keys deleted from MME indexes. (not accurate1)
MME bnode resizes Number of times a MME bnode has been resized
MME vtrie mutex collisions Number of times optimistic mutexing in vtrie has collided (not accurate1, congestion2)
MME vtrie version colls Number of times a version check in vtrie has collided (not accurate1, congestion2)
MME vtrie new branches This branching factor is only for the vtrie part of the index; the bnode leaf level branching factor cannot be estimated.
MME vtrie vertical joins Number of times a key delete from vtrie has caused a node on the search path to be deleted (not accurate1)
MME vtrie branch deletes Number of times a key delete from vtrie has caused a branch to be removed from a vtrie node (not accurate1)
MME vtrie insert retries Number of times a vtrie insert has been retried because of a collision (not accurate1, congestion2)
MME vtrie delete retries Number of times a vtrie delete has been retried because of a collision (not accurate1, congestion2)
MME bnode mutex collisions Number of times bnode accesses have caused a mutex collision (not accurate1, congestion2)
MME bnode version colls Number of times bnode accesses have failed because of a version collision (not accurate1, congestion2)
Posted events queue Number of posted events that have not been consumed by the subscribers
Index search both Search is done from both the Bonsai tree and the storage tree
Index search storage Index search is done from storage tree only
B-tree node search mismatch A search was done by using the mismatch index search
structure within a B-tree node. Mismatch index is a search
structure where an array of mismatch index positions is built
within a B-tree node. This mismatch index is a compact and
linear data structure that is used to perform a fast scan over
compressed key information to find a key position within the
B-tree node. It attempts to optimize the search by using fast
access in the processor cache row by packing relevant search
information in one to three processor cache pages.
B-tree node build mismatch A new mismatch index search structure is built within a B-tree
node. Mismatch index is a search structure where an array of
mismatch index positions is built within a B-tree node. This
mismatch index is a compact and linear data structure that is
used to perform a fast scan over compressed key information to
find a key position within the B-tree node. It attempts to
optimize the search by using fast access in the processor cache
row by packing relevant search information in one to three
processor cache pages.
B-tree node relocate A B-tree node is relocated. This happens when a block that
belongs to a previous checkpoint is changed for the first time.
Typically, this value is highest immediately after a checkpoint.
B-tree node exclusive Exclusive access to the B-tree is used. This can happen, for
example, in a node split case such as when the tree root is split.
B-tree key read Normal key value is read from the B-tree.
B-tree key read delete Delete mark is read from the B-tree.
B-tree key read oldversion Old row version is read from the B-tree.
B-tree key read abort A row from an aborted transaction is read from the B-tree. This
includes all transactions that were not successfully completed.
B-tree storage leaf len Average length for storage tree leaf node.
B-tree storage index len Average length for storage tree index node.
B-tree bonsai leaf len Average length for Bonsai-tree leaf node.
B-tree bonsai index len Average length for Bonsai-tree index node.
Bonsai-tree height Current Bonsai tree height in levels.
B-tree lock node Number of B-tree node lock calls.
B-tree lock tree Number of whole B-tree lock calls.
B-tree lock full path Number of B-tree full node path lock calls.
B-tree lock partial path Number of B-tree partial node path lock calls.
B-tree get no lock Number of B-tree no lock calls.
B-tree get shared lock Number of B-tree shared lock calls.
Pessimistic gate wait Number of waits for pessimistic disk based table gate.
Merge gate wait Number of waits for merge gate.
Storage gate wait Number of waits for storage tree gate.
Bonsai Gate wait Number of waits for Bonsai-tree gate.
Action gate wait Number of action gate waits
MME pages gate wait Number of gate waits when accessing pages in MME storage
MME index gate wait Number of gate waits when accessing MME index
Each log reader cursor has its own free space counter; if there
are multiple open log reader cursors, the value is the minimum
value of free space of all open cursors. If the free space of any
log reader cursor is zero, the value of this counter is zero and
transaction throttling (slowdown) is enacted.
Logreader logdata queue len Logreader: number of log record blocks waiting for processing.
Logreader record queue len Logreader: number of log records waiting for propagation.
Logreader stmt queue len Logreader: number of statements waiting for statement
commit/rollback.
Logreader records sent Logreader: number of log records sent for propagation/sec.
Logreader catchup queue len Logreader: number of log records in catchup queue.
Logreader catchup queue size Logreader: size of the catchup queue, in bytes.
Logreader pending queue len Logreader: number of pending log records in the in-memory log
buffer.
Logreader memcache queue len Logreader: length of the in-memory buffer queue, in operations.
Logreader batch queue len Logreader: current number of operations queued for the next
batch.
Logreader flush batch full Logreader: a full transaction batch was flushed from logreader.
Logreader flush batch force Logreader: a non-full transaction batch was flushed from logreader.
TS applied transactions Number of transactions applied into solidDB by an InfoSphere® CDC instance when solidDB is a target datastore.
Passthrough open connections Number of SQL passthrough connections to back-end
Passthrough open statements Number of prepared statements to back-end
Passthrough reads Number of executed read-type statements that return rows (for
example, SELECT statements)
Passthrough non reads Number of executed write-type statements that do not return rows (for example, INSERT statements)
Passthrough commits Number of committed statements
Passthrough rollbacks Number of rollback statements
1. Counters marked as "not accurate" are not accurate because they are not mutex protected for performance reasons.
2. In counters marked as "congestion", large increases imply that there is congestion in parallel access when several threads are updating the same parts of the database at the same time.
5 Managing network connections
solidDB provides simultaneous support for multiple network protocols and
connection types. Both the database server and the client applications can connect
simultaneously to multiple sites using multiple different network protocols.
Note: Some operating systems may limit the number of concurrent users to a
single solidDB server process.
At the server side, the network name is defined as a network listening name that
identifies the server in the network. When a database server process is started, it
publishes at least one network listening name. The server starts to listen to the
network using the given network listening name. The network listening name is
defined with the Com.Listen configuration parameter.
At the client side, the network name is defined as a connect string that the client
process uses to specify which server it will connect to. To establish a connection
from a client to a server, the client has to know the network listening name of the
server and in some cases, also the location of the server in the network. A default
connect string can be defined with the client-side Com.Connect configuration
parameter. The connect string can also be supplied, for example, at connection time
or when configuring data sources with an ODBC driver manager.
Tip:
v Because the network listening name and the connect string must match, the
generic term network name is used for referring to either one as it is the string
that defines the connection between the server and the client.
v In connection with the ODBC API, the network name can also be called
servername (following the ServerName argument in the SQLConnect() function).
The syntax of the Com.Listen parameter and the network listening name is the
following:
81
[Com]
Listen = network_listening_name, network_listening_name, ...
where
network_listening_name = protocol_name [options] server_name | none
v [options] can be any combination of the following:
Table 15. Network listening name options
Example:
[Com]
Listen = tcp -i127.0.0.1 1313
A server with the above setting accepts connection requests only from inside
the same machine, either referred to by the IP address 127.0.0.1 or by the name
'localhost', if the DNS is correctly configured.
For example, if the server side is set to -p3, client applications can run the
Ping facility at levels 1, 2, and 3, but not at 4 and 5.
Note:
v A server may use an unlimited number of network names.
v All components of network names are case insensitive.
v When a database server process is started, it publishes the network names that it
starts to listen to. This information is also written to the solmsg.out file.
For example:
[Com]
Listen = tcpip 1313, nmpipe soliddb
The example contains two network names, which are separated by a comma. The
first one uses the TCP/IP protocol and the service port 1313; the other one uses the
Named Pipes protocol with the name 'soliddb'. The 'tcpip' and 'nmpipe' are
communication protocols, while '1313' and 'soliddb' are server names.
If the Listen parameter is not set in the solid.ini file or if the value is empty,
solidDB listens to the following network names by default:
Table 16. Com.Listen factory values
Platform          Com.Listen factory values
Windows           NmPipe SOLID, ShMem SOLID, TCP/IP 1964
Linux and UNIX    UPipe SOLID, TCP/IP 1964
To view supported protocols for your server, use the following command:
ADMIN COMMAND ’protocols’
Note: The ADMIN COMMAND ’par com.listen=value’ command does not replace
existing network listening names; it appends new listening names to the existing
list.
v Modify the Com.Listen setting in the solid.ini file.
Use a comma (,) to separate network names.
Example:
[Com]
Listen = tcpip 1313, nmpipe soliddb
You must restart the solidDB server to activate the changes.
v To enable a network name temporarily, use the option -x listen:<connect-
string> at solidDB startup, enclosing the network name in double quotation
marks.
Example:
solid -x listen:"tcp 2313"
The same format of the connect string applies to the Com.Connect parameter as
well as to the connect string used by solidDB tools or ODBC applications.
The connect string options include the following (both are supported on all platforms):
-plevel
Pings the server at the given level (0-5). Clients can always use the solidDB Ping
facility at level 1 (0 is no operation/default). Levels 2, 3, 4, or 5 may only be used if
the server is set to use the Ping facility at least at the same level. See Ping facility in
the IBM solidDB Administrator Guide for details.
-t
Turns on the Network trace facility. See Network trace facility in the IBM solidDB
Administrator Guide for details.
Note:
v The protocol_name and the server_name must match the ones that the server is
using in its network listening name.
v If given at the connection time, the connect string must be enclosed in double
quotation marks.
v All components of the connect string are case insensitive.
Examples
[Com]
Connect=tcp -z -c1000 1315
[Com]
Connect=nmpipe host22 SOLID
solsql "tcp localhost 1315"
solsql "tcp 192.168.255.1 1315"
rc = SQLConnect(hdbc, "upipe SOLID", (SWORD)SQL_NTS, "dba", 3, "dba", 3);
The value of the Com.Connect parameter is read by all solidDB tools (solsql and so
on) and client libraries when no network name is specified for the connection. The
client libraries do not need this value if a valid connect string is supplied at run
time, or when an ODBC driver manager is used.
If the Com.Connect parameter is not found in the solid.ini configuration file, the
client uses the default value tcp localhost 1964 (Windows ) or upipe (Linux and
UNIX) instead. The server-side Com.Listen and client-side Com.Connect factory
values are set so that if the parameter settings are not available in the solid.ini
file, the application (client) will always connect to a local solidDB server that is
listening with the default network name. Thus, local communication (inside one
machine) does not necessarily need a solid.ini configuration file for establishing a
connection.
Example
The logical data source name can be mapped to a data source as a 'logical name'
and 'connect string' (network name) pair in the following ways:
v Using the [Data Sources] section in the client-side solid.ini file
The syntax of the parameters is the following:
[Data Sources]
logical_name = connect_string; Description
where Description can be used for comments on the purpose of the logical
name
Example:
To map a logical name My_application to a database that you want to connect
using TCP/IP, include the following lines in the solid.ini file:
[Data Sources]
My_application = tcpip irix 1313; Sample data source
Tip: The solidDB data management tools use the solidDB ODBC API. If you
have defined an ODBC Data Source, you can use the logical data source name
also when connecting to the solidDB server with the solidDB tools.
For example, if you have created a data source named 'solid_1' with ServerName
'tcp 2525', you can connect to solidDB with solidDB SQL Editor (solsql) with
the following command:
solsql solid_1
When connecting to the solidDB server, if the network name is not a valid connect
string, the solidDB tools and clients assume it is a logical data source name. To
find a mapping between the logical data source name and a valid connect string,
the solidDB tools and clients check the client-side solid.ini file.
In Windows environments, if the solid.ini file is not found or the logical data
source name is not defined in the [Data Sources] section, the data source settings
made with the Windows registry settings are checked in the following order.
1. Look for the Data Source Name from the following registry path:
HKEY_CURRENT_USER\software\odbc\odbc.ini\DSN
2. Look for the Data Source Name from the following registry path
HKEY_LOCAL_MACHINE\software\odbc\odbc.ini\DSN
The check for the logical data source mappings might impact performance:
v If the file system is particularly slow, for example, because the working directory
is mapped to a network drive, checking the existence of the solid.ini file can
have a measurable performance impact.
v In Windows environments, all logical data source mappings in the ODBC
registry are checked. The time consumed for this operation is proportional to the
amount of defined data sources.
– With only a few (1 to 5) data sources, the connection time will be
approximately 5 ms.
– With 1000 data sources, the connection time will be approximately 200 ms.
However, if the solid.ini file contains the logical data source name mapping,
the tools and clients do not try to access the ODBC registry for the mapping.
Communication protocols
The client process and the solidDB server communicate with each other by using
computer networks and network protocols. Supported communication protocols
depend on the type of computer and network you are using.
TCP/IP protocol
solidDB supports both TCP/IPv4 and TCP/IPv6 protocols. To use the TCP/IP
protocol, you need to specify tcp as the protocol, specify the host computer
(optional), and use a non-reserved port number.
There are differences in the use of the TCP/IPv4 and TCP/IPv6 protocols,
depending on the platform.
v In Linux and UNIX environments, solidDB can listen to both the TCP/IPv4 and
TCP/IPv6 protocols automatically, based on the format of the IP address in the
network name. If the network name does not specify an IP address, solidDB
tries to start listening on IPv6 (::0) first, if it is not possible, it retries on IPv4
(0.0.0.0).
If you want solidDB to listen to only one protocol type, you can specify the
protocol explicitly with the -4 (IPv4) and -6 (IPv6) option in the network name.
v In Windows environments, solidDB listens to the IPv4 protocol by default.
To use the IPv6, you need to specify the IPv6 protocol using the option -6 in the
network name.
Table 18. TCP/IP protocol in the network listening name (Com.Listen)
Linux and UNIX:
  IPv4 syntax: Listen = tcp [-4] [-ihost_computer] port_number
    Examples:
    Listen = tcp 1315
    Listen = tcp -i9.11.22.314 1315
  IPv6 syntax: Listen = tcp [-6] [-ihost_computer] port_number
    Examples:
    Listen = tcp 1315
    Listen = tcp -ife80::9:1122::0314 1315
Windows:
  IPv4 syntax: Listen = tcp [-4] [-ihost_computer] port_number
    Examples:
    Listen = tcp 1315
    Listen = tcp -i9.11.22.314 1315
  IPv6 syntax: Listen = tcp -6 [-ihost_computer] port_number
    Examples:
    Listen = tcp -6 1315
    Listen = tcp -6 -ife80::9:1122::0314 1315
where
host_computer = ip_address|host_name
port_number must be an unreserved port; reserved port numbers are listed in the
/etc/services file of your system. Select a free number greater than 1024 – smaller
numbers are usually reserved for the operating system.
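For example, if the server listens on TCP/IP port 1315, the server-side and client-side solid.ini files could contain the following settings (the host name server_host is a placeholder for the actual host name or IP address of the server computer):
Server-side solid.ini:
[Com]
Listen = tcp 1315
Client-side solid.ini:
[Com]
Connect = tcp server_host 1315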
UNIX Pipes
The UNIX domain sockets (UNIX Pipes) are typically used when communicating
between two processes running in the same UNIX machine. UNIX Pipes usually
have a very good throughput. They are also more secure than TCP/IP, since Pipes
can only be accessed from applications that run on the computer where the server
executes.
When using the UNIX Pipes protocol, you must reserve a unique listening name
(server name) within the node for the server, for instance, 'soliddb'. Because UNIX
Pipes handle the UNIX domain sockets as standard file system entries, there is
always a corresponding file created for every listened pipe. In solidDB's case, the
entries are created under the path /tmp.
For example, the server name 'soliddb' creates the directory /tmp/solunp_SOLIDDB
and shared files in that directory. The /tmp/solunp_ is a constant prefix for all
created objects while the latter part ('SOLIDDB' in this case) is the server name in
upper case format.
To use the UNIX Pipes protocol, select upipe or unp as the protocol and enter a
server name.
Table 20. UNIX Pipes protocol in the network name
Note:
v To use the UNIX Pipes protocol, the server and client processes must run in the
same machine.
v The server process must have "write" permission to the directory /tmp.
v The client that is accessing UNIX Pipes must have "execute" permission on the
directory /tmp.
v The directory /tmp must exist.
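For example, with the server name 'soliddb' used earlier in this section, the server-side and client-side solid.ini files could contain the following UNIX Pipes settings:
Server-side solid.ini:
[Com]
Listen = upipe soliddb
Client-side solid.ini:
[Com]
Connect = upipe soliddb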
Named Pipes
To use the Named Pipes protocol, select nmpipe as the protocol and enter an optional host computer name and the server name.
Note:
v server_name must be a character string at most 50 characters long.
v If the server is running in the same computer with the client program, the
host_computer_name must not be specified.
v If host_computer_name is used, the host_computer_name must be listed in
the/etc/hosts file or it must be recognized by the DNS (Domain Name Server).
v To connect to the solidDB server with the Named Pipes protocol, the user must
have at least the same rights as the user who started the server.
For example, if an administrator starts the server, only users with administrator
rights are able to connect to the server through Named Pipes. Similarly, if a user
with normal user rights starts the server, all users with equal or greater rights
are able to connect the server through Named Pipes.
If a user does not have proper rights, the solidDB Communication Error 21306
message is given.
v Do not use the Named Pipes protocol with solidDB Remote Control (solcon); the
asynchronous nature of communication between solcon and the solidDB server
may cause problems with the Named Pipes protocol (solidDB server can output
messages to solcon command prompt even though solcon does not query for
such messages explicitly).
Shared Memory
In some cases, the Shared Memory protocol can be the fastest way two processes
can exchange information. The Shared Memory protocol can be used only when
solidDB and application processes are both running in the same computer. The
Shared Memory protocol uses a shared memory location for moving data from one
process to another.
To use the Shared Memory protocol in solidDB, select shmem as the protocol and
enter the server name.
Table 22. Shared Memory protocol in the network name
Note:
v server_name must be a character string less than 128 characters long.
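For example, with an illustrative server name 'solid1', the server-side and client-side solid.ini files could contain the following Shared Memory settings:
Server-side solid.ini:
[Com]
Listen = shmem solid1
Client-side solid.ini:
[Com]
Connect = shmem solid1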
Summary of protocols
The following tables summarize the possible operating systems and required forms
for network names for the various communication protocols.
Table 23. solidDB protocols and network names
solidDB provides the following tools for exporting and loading data:
v solidDB Speed Loader (solloado or solload) loads data from external files into a
solidDB database.
v solidDB Export (solexp) exports data from a solidDB database to files. It also
creates control files used by solidDB Speed Loader (solloado or solload) to
perform data load operations.
v solidDB Data Dictionary (soldd) exports the data dictionary of a database. It
produces an SQL script that contains data definition statements that describe the
structure of the database.
Note: solidDB data management tools do not support the Transparent Failover
(TF) feature used in High Availability configurations. Transparent Failover hides
the server change from the user. For more information, refer to IBM solidDB High
Availability User Guide.
Because solcon can be used to issue only ADMIN COMMANDs, it can be useful to
deploy only solcon on a production node; with solcon, administrators cannot
accidentally access or change data in the database by issuing SQL statements.
With solcon, the ADMIN COMMAND syntax differs from the syntax used with
solidDB SQL Editor (solsql). In solcon, the command includes the ADMIN
COMMAND option only, without the single quotation marks. The semicolon used
with solsql is not used with solcon either.
For example:
solcon:
backup
solsql:
ADMIN COMMAND ’backup’;
where
v options can be:
Table 25. solcon command options
v network_name is the network name of a solidDB server that you are connected
to.
The given network name must be enclosed in double quotation marks.
Note: Logical data source names can also be used with tools; refer to 5,
“Managing network connections,” on page 81 for further information.
v username is required to identify the user and to determine the user's
authorization. Without appropriate rights, command execution is denied.
v password is the user's password for accessing the database.
solcon connects to the first server specified in the Com.Connect parameter in the
solid.ini file. If you specify no arguments, you are prompted for the database
administrator's user name and password. You can give connection information at
the command line to override the connect definition in solid.ini.
Access rights
Error messages
When there is an error in the command line, solcon gives you a list of the possible
syntax options as a result. Check the command line you entered.
Start up solcon with the server name "tcp localhost 1313" and the administrator
username 'admin' and password 'iohi4y':
solcon "tcp localhost 1313" admin iohi4y
You can execute all commands at the command line with the -e option or in a text
file with the -f option.
When there is an error in the command line, solidDB Remote Control gives you a
list of the possible options as a result. Check the command line you entered.
Table 26. solcon specific commands
For a formal definition of SQL statements, see section solidDB SQL syntax in the
IBM solidDB SQL Guide.
For a list of ADMIN COMMANDs and the ADMIN COMMAND syntax, see
“ADMIN COMMAND” on page 309.
where
v options can be:
Table 27. solsql command options
-e sql-string    Execute the SQL string; if this option is used, a commit can be done only with the -a option.
You can switch between the two connections with the command switch.
-x stoponerror   This command-line switch is used to force stop and exit solsql immediately when an error is detected.
v network_name is the network name of a solidDB server that you are connected
to.
The given network name must be enclosed in double quotation marks. Refer to
5, “Managing network connections,” on page 81 for further information.
Note:
v If the username and password are specified at the command line, the
network_name must also be specified.
v If the name of the SQL script file is specified at the command line (except with
the -f option), the network_name, username, and password must also be specified.
Remember to commit work at the end of the SQL script or before exiting solsql.
The solidDB tools connect to the first server specified in the Com.Connect parameter
in the solid.ini file. If you specify no arguments, you are prompted for the
database administrator's user name and password.
Error messages
When there is an error in the command line, solsql gives you a list of the possible
syntax options as a result. Check the command line you entered.
Exiting solsql
Example:
create table testtable (value integer, name varchar);
commit work;
For example:
---Execute the SQL script named "insert_rows.sql" in the
-- root ("\") directory of the C: drive.
@\c:\insert_rows.sql;
Both absolute and relative path names are supported. If you specify a relative path,
it should be relative to the solsql working directory.
To execute an SQL script from a file at solsql startup, the name of the script file
must be given as a command line parameter:
solsql network_name username password filename
All statements in the script must be terminated by a semicolon. solsql exits after all
statements in the script file have been executed.
Example:
solsql "tcp localhost 1313" admin iohe4y tables.sql
Note:
Remember to commit work at the end of the SQL script or before exiting solsql. If
an SQL string is executed with the option -e, commit can only be done using the
-a option.
As of solidDB 6.5 Fix Pack 2, there are two variants of the solidDB Speed Loader:
v solloado provides support for Unicode and partial Unicode databases. It also
enables loading of data with multiple threads. solloado is based on the solidDB
ODBC API; the client-side configuration parameters can be used to control the
behavior of solloado.
v solload provides support for partial Unicode databases only. solload is based
on the solidDB SA API.
The solidDB Speed Loader can load data in a variety of formats and produce
detailed information of the loading process into a log file. The format of the import
file, that is, the file containing the external data, is specified in a control file.
The data is loaded into the database through the solidDB program. This enables
online operation of the database during the loading. The data to be loaded does
not have to reside in the server computer.
v The table must exist in the database in order to perform data loading.
v Catalogs are supported with the following syntax:
catalog_name.schema_name.table_name
v The following constraints are checked:
– referential
– NOT NULL
– unique
v solidDB Speed Loader does not support check constraints, that is, constraints that
are defined using the CREATE TABLE or ALTER TABLE statements and that
specify data value restrictions in columns.
However, solidDB Speed Loader always checks for unique or foreign key
constraints that are defined using the CREATE TABLE statement. For more
details on constraints, see the CREATE TABLE syntax in the Appendix: solidDB
SQL Syntax in the IBM solidDB SQL Guide.
File types
Control file
The control file provides information about the structure of the import file. It gives
the following information:
v name of the import file
v format of the import file
v table and columns to be loaded
Note: Each import file requires a separate control file. solidDB Speed Loader
loads data into one table at a time.
For more details about the control file format, read “Control file syntax” on page
106.
The import file may contain the data either in a fixed or a delimited format:
v In fixed-length format data records have a fixed length, and the data fields
inside the records have a fixed position and length.
v In delimited format data records can be of variable length. Each data field and
data record is separated from the next with a delimiting character such as a
comma (this is what solidDB Export produces). Fields containing no data are
automatically set to NULL.
Data fields within a record may be in any order specified by the control file.
v Data in the import file must be of a suitable type. For example, numbers that are
presented in a float format cannot be loaded into a field of INTEGER or
SMALLINT type.
v Data of VARBINARY and LONG VARBINARY type must be hexadecimal
encoded in the import file.
v When using any fixed-width field, regardless of the data type, solloado or
solload expects the import file to have the specified width, even when NULL is
used.
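For illustration, the table name, file names, and values below are hypothetical. A delimited import file and a minimal control file that loads it could look like the following:
Control file (customers.ctr):
LOAD DATA
INFILE 'customers.dat'
INTO TABLE CUSTOMERS
FIELDS TERMINATED BY ','
(
ID,
NAME,
CITY
)
Import file (customers.dat):
1,John Smith,Helsinki
2,,Espoo
In the second record, the empty NAME field is automatically set to NULL.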
If the log file cannot be created, the loading process is terminated. By default the
name of the log file is generated from the name of the import file by substituting
the file extension of the import file with the file extension .log. For example,
my_table.ctr creates the log file my_table.log. To specify another file name, use
the option -l.
If you start solidDB Speed Loader with no arguments, you will see a summary of
the arguments with a brief description of their usage.
v The syntax for starting solloado is:
solloado [options] [network_name] username [password] control_file
v The syntax for starting solload is:
solload [options] [network_name] username [password] control_file
Default is 4.
-h                   Help = Usage (solloado and solload)
-x emptytable        Load data only if there are no rows in the table (solloado and solload)
-x errors:count      Maximum error count (solloado and solload)
-x nointegrity       No integrity checks during load (solloado and solload)
-x pwdfile:filename  Read password from the file (solloado and solload)
-x skip:records      Number of records to skip (solloado and solload)
-x utf8              WCHAR data is in UTF-8 format (solloado only)
v network_name is the network name of a solidDB server that you are connected
to.
The given network name must be enclosed in double quotation marks. Refer to
5, “Managing network connections,” on page 81 for further information.
Tip: Logical data source names can also be used with the solidDB tools.
v username is required to identify the user and to determine the user's
authorization. Without appropriate rights, command execution is denied.
v password is the user's password for accessing the database. The password is
– mandatory, if the password is not read from a file (defined with option -x
pwdfile: filename)
– optional, if the password is read from a file
For details on the control_file, see section “Control file syntax” on page 106.
Examples
The following solloado example loads data from a file specified by a control file
named DBA_TBL.ctr, reading data as UTF-8 characters and using 8 threads to insert
data with 30 records in one statement:
solloado -w 8 -B 30 -u "tcpip 1964" dba dba DBA_TBL.ctr
The following solload example loads data from a file specified by a control file
named delim.ctr:
solload "tcpip 1964" dba dba delim.ctr
Error messages
When there is an error in the command line, solload gives you a list of the
possible syntax options as a result. Check your command line entry.
Tip: After the loading has been completed, remember to enable logging again
(Logging.LogEnabled=yes). Running the server in production use with logging
disabled is strongly discouraged. If logs are not written, no recovery can be
made if an error occurs due to, for example, power failure or disk error.
Examples
Example: Loading fixed-format records
In fixed-length format import files, data records have a fixed length and the data
fields inside the records have a fixed position and length.
LOAD
INFILE ’test1.dat’
INTO TABLE SLTEST
(
"NAME" POSITION(1-5),
ADDRESS POSITION(6:10),
ID POSITION(11-15)
)
solidDB Speed Loader reserved words must be enclosed in quotes if they are used
as data dictionary objects, that is, table or column names. The following list
contains all reserved words for the solidDB Speed Loader control file:
Table 30. Speed Loader reserved words
The control file begins with the statement LOAD [DATA] followed by several
statements that describe the data to be loaded. Only comments or the OPTIONS
statement may optionally precede the LOAD [DATA] statement.
Table 31. Full syntax of the control file
control_file ::= [option_part]
                 load_data_part
                 into_table_part
                 fields
                 column_list
hex_literal ::= X’hex_byte_string’
Note:
1. Masks that are used as part of the load-data-part element must be in the
following order: DATE, TIME, and TIMESTAMP. Each is optional.
2. Data must be of the same type in the import-file, the mask, and the column in
the table into which the data is loaded.
Table 32. Data masks
DATE YYYY/YY-MM/M/B-DD/D
TIME HH/H:NN/N:SS/S
v Mask parts:
– Year masks: YYYY and YY
– Month masks: MM, M, and B (B refers to a three-letter abbreviation (case
insensitive) of the month in English)
– Day masks: DD and D
– Hour masks: HH and H
– Minute masks: NN and N
– Second masks: SS and S
v Masks within a DATE mask may be in any order; for example, the DATE mask
could be 'MM-DD-YYYY' (12-18-2010) or 'DD-B-YYYY' (18-DEC-2010).
v If the date data of the import file is formatted as 1995-01-31 13:45:00, use the
mask YYYY-MM-DD HH:NN:SS.
v The masks must be separated from each other with a separator character (for example '-', ':', '/', or a space), as in the examples above.
The following example uses the POSITION keyword. For details on this keyword,
read “POSITION” on page 113.
OPTIONS(SKIP=1)
LOAD DATA
RECLEN 12
INTO TABLE SLTEST2
(
ID POSITION(1:2) NULLIF BLANKS,
DT POSITION(3:12) DATE ’DD.MM.YYYY’ NULLIF ((4:6) = ’ ’)
)
The following example uses the FIELDS TERMINATED BY keyword. For details
on this keyword, read “FIELDS TERMINATED BY” on page 112.
LOAD
DATE ’MM/DD/YY’
TIME ’HH-NN-SS’
TIMESTAMP ’HH.NN.SS YY/MM/DD’
INTO TABLE SLTEST3
FIELDS TERMINATED BY ’,’
(
ID,
DT,
TM,
TS
)
PRESERVE BLANKS
The PRESERVE BLANKS keyword is used to preserve all blanks in text fields.
INTO_TABLE_PART
The into_table_part element is used to define the name of the table and columns
that the data is inserted into.
FIELDS ENCLOSED BY
The FIELDS ENCLOSED BY clause is used to define delimiting characters around
each field. The delimiter may be one character or two separate characters that
precede and follow each data field in the input file. You might use one character
(such as the double quote character) or a pair of characters (such as left and right
parentheses) to delimit your fields. If you use the double quote mark as the
delimiter and the comma as the terminator/separator, then your input might look
like the following:
"field1", "field2"
If you use left and right parentheses, then your input might look like the
following:
(field1),(field2)
Note that if the keyword OPTIONALLY is used, then the delimiters are optional
and do not need to appear around every single piece of data.
If you specify a character value, it must be enclosed in single or double quotes. For
example, the following examples have the same effect:
ENCLOSED BY ’(’ AND ’)’
ENCLOSED BY "(" AND ")"
Note that if you are using single quotes as the enclosing characters and the data
itself contains apostrophes, the apostrophes in the data must be doubled. For
example, to produce the following value in the database:
Didn’t I warn you?
the input field would be written with the apostrophe doubled: ’Didn’’t I warn you?’
Almost any printable characters may be used as the "enclosing" characters. The
enclosing characters may also be specified using the hexadecimal format:
X’hex_byte_string’
For example, ENCLOSED BY X’28’ AND X’29’ specifies the left and right
parentheses as the enclosing characters.
The opening and closing characters in an enclosing pair can be identical. For
example, the following is valid inside the control file:
ENCLOSED BY ’"’ AND ’"’
If both the opening and closing characters are the same, then the ENCLOSED BY
clause only needs to show the character once. For example, the following should
have the same effect:
ENCLOSED BY ’"’
ENCLOSED BY ’"’ AND ’"’
When the preceding is defined in the control file, here are some examples of input
and the corresponding values actually stored in the table:
Input: "Hello."
Stored value: Hello.
Input: """Ouch!"", he cried."
Stored value: "Ouch!", he cried.
Note that there may be enclosing characters used in the column data itself
(embedded field separators). If this is the case, then you can use the
TERMINATED BY clause together with the OPTIONALLY ENCLOSED BY clause
to be sure the column data is enclosed correctly as described in “FIELDS
TERMINATED BY” on page 112.
This section contains basic rules and examples when using enclosing characters.
Each example, unless stated otherwise, contains the following control file lines:
FIELDS TERMINATED BY X’3a’
OPTIONALLY ENCLOSED BY "(" AND ")"
This means that the enclosing characters are parentheses and the separator
(terminator) character is the colon — hexadecimal 3A specifies the colon (":").
v The data is to be loaded into a table with two columns, the first of which is of
type VARCHAR and the second of which is type INTEGER.
The ENCLOSED BY characters themselves may occur within the data. However,
when occurring within the data, each of the enclosing characters should occur
twice in the input for each time that it should occur once in the database.
This works for deeply nested parentheses as well. If the input file contains:
(You((can((safely((try))this))at))home.):2
the value stored in the first (VARCHAR) column is You(can(safely(try)this)at)home.
and the value stored in the second (INTEGER) column is 2.
The final enclosing character must occur an odd number of times at the end of the
input. For example, in an input line such as:
(at home ((really))):2
of the last three closing parentheses, the first two are treated as a single instance of
the character (that is, one ')' in the data), while the last one is treated as the
enclosing character.
When enclosing characters are used, newline characters (carriage return and/or
line feed) can be embedded within a string. For example:
(This is a long line that can be split across two or more input
lines ((and keep the end-of-line characters)) if the enclosing
characters are used):1
If the field separator (the colon in the above example) is not used in the data and if
there is no need to preserve newlines in the input data, then only the field
separator (not the enclosing characters) is required in the input data.
FIELDS TERMINATED BY
The FIELDS TERMINATED BY clause is used to define the separator character that
distinguishes where fields end in the input file. The character must be specified in
one of the following three ways:
v Surrounded by double quotes, for example, ":"
v Surrounded by single quotes, for example, ':'
v In hexadecimal format, for example, X'3A'
When using hexadecimal format, the quotation marks must be single quotes, not
double quotes.
Note that the FIELDS TERMINATED BY clause specifies a separator, not a true
terminator; the specified character is not required after the last field. For example,
if the colon is the separator, the following two data file formats are equivalent and
valid:
1:2:3:
or
1:2:3
Note that the trailing colon is accepted, but not required, after the final field.
The single quote is defined as the character that encloses embedded field
separators (commas) in the data file. Note that the OPTIONALLY ENCLOSED BY
clause may use either single or double quotes to delimit the enclosing characters.
The following example:
OPTIONALLY ENCLOSED BY ’(’ AND ")"
illustrates the use of both single and double quotes for enclose_char in the syntax:
ENCLOSED BY enclose_char [AND enclose_char]
The example is unusual, but its potential for confusion makes it worth noting.
The following example summarizes the use of separators and enclosing characters.
In this example, the ":" (colon) is defined as the separator (FIELDS TERMINATED
BY) and the parentheses are used to enclose the ":" (colon), which is embedded in
the field and should not be interpreted as a separator. The example also contains
two fields, the first of which is VARCHAR and the second of which is INTEGER.
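For instance (the data values below are hypothetical), with the control file lines
FIELDS TERMINATED BY ':'
OPTIONALLY ENCLOSED BY '(' AND ')'
an input line such as
(Lunch break at 12:30):2
loads the text Lunch break at 12:30 into the VARCHAR column and the value 2 into the INTEGER column; the colon inside the parentheses is part of the data, while the unenclosed colon acts as the field separator.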
POSITION
The POSITION keyword is used to define a field's position in the logical record.
Both the start and the end position must be defined.
NULLIF
The NULLIF keyword is used to give a column a NULL value if the appropriate
field has a specified value. An additional keyword specifies the value the field
must have:
v The keyword BLANKS sets a NULL value if the field is empty.
v The keyword NULL sets a NULL value if the field is the string 'NULL'.
v The definition 'string' sets a NULL value if the field matches the string 'string'.
v The definition '((start : end) = 'string')' sets a NULL value if a specified part of the field matches the string 'string'.
The following example shows the use of the NULLIF keyword with the keyword
BLANKS to set a NULL value if the field is empty. It also shows the use of the
keyword NULL to set a NULL value if the field is the string 'NULL'.
LOAD
INFILE ’test7.dat’
INTO TABLE SLTEST
FIELDS TERMINATED BY ’,’
(
NAME VARCHAR NULLIF BLANKS,
ADDRESS VARCHAR NULLIF NULL,
ID INTEGER NULLIF BLANKS
)
The following example uses the definition '((start : end) = 'string')' for the third
field in the input file. This syntax only works with fixed-width fields because the
exact position of the 'string' must be specified.
LOAD
INFILE ’7b.dat’
INTO TABLE t7
(
NAME CHAR(10) POSITION(1:10) NULLIF BLANKS,
ADDRESS CHAR(10) POSITION(11:20) NULLIF NULL,
ADDR2 CHAR(10) POSITION(21:30) NULLIF((21:30)=’MAKEMENULL’)
)
Note that in this example, the string is case sensitive. 'MAKEMENULL' and
'makemenull' are not equivalent.
The default file name is the same as the exported table name.
solidDB Speed Loader can use the data and control files to load data into a
solidDB database.
Note:
The user name used for performing the export operation must have SELECT rights
on the table exported. Otherwise no data is exported.
If you start solidDB Export without any arguments, a summary of the arguments
with a brief description is displayed.
where
v options can be:
Table 33. solexp command options
-C catalog_name Set the default catalog from which data is read or to which it is written
This option can be used only when exporting the data of a single
table.
The default data and control file name is the same as the exported
table name (<tablename>.dat and <tablename>.ctr).
v network_name is the network name of a solidDB server that you are connected
to.
The given network name must be enclosed in double quotation marks. Refer to
5, “Managing network connections,” on page 81 for further information.
Tip: Logical data source names can also be used with the solidDB tools.
v username is required to identify the user and to determine the user's
authorization. Without appropriate rights, command execution is denied.
v password is the user's password for accessing the database. The password is
– mandatory, if the password is not read from a file (defined with option -x
pwdfile: filename)
– optional, if the password is read from a file
Example
solexp -CMyCatalog -sMySchema -ofile.dat "tcp 1315" MyID My_pwd MyTable
Error messages
v When there is an error in the command line entry, solexp gives you a list of the
possible syntax options as a result. Check your entries on the command line.
v Username, password and table name are always expected:
For example, with the command
solexp "tcp 1315" dba dba
you may receive a SOLID Communication Error 21306. This is because there was
no server listening to the environment-dependent default. In this case, solexp
assumes:
– "tcp 1315" is the username
– dba is the password
– dba is the table name
In this case, the correct command is, for example:
solexp "tcp 1315" dba dba myTable
v If you omit the name of the schema, you may get a message saying that the
specified table could not be found. The solexp program cannot find the table if
it does not know which schema to look in.
solidDB Data Dictionary produces an SQL script that contains data definition
statements describing the structure of the database. The generated script contains
definitions for tables, views, indexes, triggers, procedures, sequences, publications,
and events.
Note:
1. User and role definitions are not listed for security reasons.
2. The user name used for performing the export operation must have select
right on the tables. Otherwise the connection is refused.
Related concepts:
“Troubleshooting solidDB Data Dictionary (soldd)” on page 153
where
v options can be:
Table 34. soldd command line options
-C catalog_name Set the default catalog from which data definitions are read or to
which they are written
v network_name is the network name of a solidDB server that you are connected
to.
The given network name must be enclosed in double quotation marks. Refer to
5, “Managing network connections,” on page 81 for further information.
Tip: Logical data source names can also be used with the solidDB tools.
v username is required to identify the user and to determine the user's
authorization. Without appropriate rights, command execution is denied.
v password is the user's password for accessing the database. The password is
– mandatory, if the password is not read from a file (defined with option -x
pwdfile: filename)
– optional, if the password is read from a file
Note:
v If no table name is given, all definitions to which the user has rights are listed.
v If the objectname parameter is provided with one of the -x options, the name is
used to print only the definition of the named object.
v The -t tablename option is still supported in order to keep old scripts valid.
When there is an error in the soldd startup command line, soldd gives you a list of
the possible syntax options as a result. Check the command line you entered.
where
v command can be any of the following:
– solcon
– soldd
– solexp
– solid
– solload
– solloado
– solsql
v filename can be either absolute or relative to the working directory
Password file
In the file where the password is stored, the first character string ending at a
newline character is read and considered as the password. Preceding space and newline
characters are ignored. If the password includes space or newline characters, it
must be enclosed in single or double quotation marks. However, using quotation
marks means that quotation mark and backslash characters that belong to the
password must be escaped by a backslash character.
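For instance, a hypothetical password my "secret" pass, which contains spaces and quotation marks, would be stored in the password file as:
"my \"secret\" pass"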
Examples
solsql -x pwdfile:userpwd "tcp solsrv 1313" dba
solid -f -c soldb -x pwdfile:solpwd -U dba
Configuration file
A configuration file is not required for solidDB Speed Loader. The configuration
values for the server parameters are included in the solidDB configuration file
solid.ini.
For example, to connect to a server using the UNIX Pipes protocol and with the
server name solidDB, the following lines are needed in the configuration file:
[Com]
Connect=upipe SOLIDDB
The following solidDB tools can be used to output and import data in the system
default locale or a specified locale in both Unicode and partial Unicode databases.
v solidDB SQL Editor (solsql)
v solidDB Data Dictionary (soldd)
v solidDB Export (solexp)
v solidDB Speed Loader (solloado)
solidDB Remote Control (solcon) does not support conversions of data to UTF-8.
For example, if an error message that is output to solcon contains Unicode
encoded data, it is not displayed correctly in the console.
The locale to be used in conversions is defined with the command line options
when starting the tool.
Important:
v The solidDB tools use the solidDB ODBC API 3.5.1; this means that if the
binding method for character data types is defined with the server-side
Srv.ODBCDefaultCharBinding or client-side Client.ODBCCharBinding parameters,
this setting also impacts the behavior of the solidDB tools.
v The Unicode and partial Unicode databases behave differently in reference to
conversions of CHAR and WCHAR data types:
– Unicode databases
Both CHAR and WCHAR data types are converted between the
UTF-8/UTF-16 format in solidDB and the locale/codepage defined with the
chosen binding method.
– partial Unicode databases
CHAR data types are not converted; instead, they are handled in the raw
(binary) format that is used to store CHAR data in partial Unicode databases.
WCHAR data types are converted between the UTF-16 format in solidDB and
the locale/codepage defined with the chosen binding method.
For example, in Linux environments, the locale name for the code
page GB18030 in Chinese/China is zh_CN.gb18030.
Note: If the server-side or client-side parameters in the solid.ini file are set to use
'Raw' binding, you should always use the -m, -M or -u option to override the
solid.ini settings.
The database reload procedure can be useful, for example, for minimizing the
database file size by removing gaps (unused space) that are created during delete
and update operations; the reload rewrites the database without gaps.
Overview:
1. Extract data definitions from the old database.
2. Extract data from the old database.
3. Replace the old database with a new one.
4. Load data definitions into a new database.
5. Load data into the new database.
In this example, the server name is solidDB and the protocol used for connections
is TCP/IP, using port 1964 (network name is "tcpip 1964"). The database has been
created with the user name "dbadmin" and the password "password".
1. Data definitions are extracted with solidDB Data Dictionary (soldd).
Use the following command to extract an SQL script containing definitions for
all tables, views, triggers, indexes, procedures, sequences, and events.
soldd "tcpip 1964" dbadmin password
The soldd command lists all data definitions into one SQL file; the default file
name is soldd.sql.
Tip: The previous two steps can be performed together by starting solidDB
with the following command:
solid -Udbadmin -Ppassword -x execute:soldd.sql
The option -x creates a new database, executes commands from a file, and
exits. The -U and -P options define the username and password.
5. Data is loaded into the new database using the solidDB Speed Loader
(solload).
Use the following command to load data into the new database:
solload "tcpip 1964" dbadmin password table_name.ctr
Tip: The following parameters help you improve database performance or balance
performance against safety. These parameters are discussed in more detail in
Appendix A, “Server-side configuration parameters,” on page 165.
v IsolationLevel
v DurabilityLevel
v DefaultStoreIsMemory
For tips on optimizing solidDB advanced replication, see the IBM solidDB Advanced
Replication User Guide.
Standards compliance
Background
When a transaction is committed, the database server writes data to two locations:
the database file, and the transaction log file. However, the data is not necessarily
written to those two locations at the same time. When a transaction is committed,
the server normally writes the data to the transaction log file immediately — that
is, as soon as the server commits the transaction. The server does not necessarily
write the data to the database file immediately. The server may wait until it is less
busy, or until it has accumulated multiple changes, before writing the data to the
database file.
If the server shuts down abnormally (due to a power failure, for example) before
all data has been written to the database file, the server can recover 100% of
committed data by reading the combination of the database file and the transaction
log file. Any changes since the last write to the database file are in the transaction
log file. The server can read those changes from the log file and then use that
information to update the database file. The process of reading changes from the
log file and updating the database file is called recovery. At the end of the recovery
process, the database file is 100% up to date.
The recovery process is automatically executed always when the server restarts
after an abnormal shutdown. The process is generally invisible to the user (except
that there may be a delay before the server is ready to respond to new requests).
To have 100% recovery, you must have 100% of the transactions written to the log
file. Normally, the database server writes data to the log file at the same time that
the server commits the data. Thus committed transactions are stored on disk and
123
will not be lost if the computer is shut down abnormally. This is called strict
durability. The data that has been committed is durable, even if the server is shut
down abnormally.
With strict durability, the user is not told that the data has been committed until
AFTER that data was successfully written to the transaction log on disk — this
ensures that the data is recoverable if the server shuts down abnormally. Strict
durability makes it almost impossible to lose committed data unless the hard disk
drive itself fails.
If durability is relaxed, the user may be told that the data has been committed even
before the data has been written to the transaction log on disk. The server may
choose to delay writing the data, for example, by waiting until there are several
transactions to write. If durability is relaxed, the server may lose a few committed
transactions if there is a power failure before the data is written to disk.
solidDB allows you to control the durability level in a variety of ways. For the
server-wide setting, the parameter Logging.DurabilityLevel can take three values:
3 (for "strict"), 1 (for "relaxed"), and 2 (for "adaptive").
Note:
v The above behavior is observed only if the value of the [HotStandby] parameter
SafenessLevel is set to 2safe (default). If this parameter is set to any other value,
the server uses relaxed durability in all cases.
v If HotStandby is not enabled, the "adaptive" setting is treated as "strict".
If you set the transaction durability level to "relaxed", you risk losing some data if
the server shuts down abnormally after it has committed some data but before it
has written that data to the transaction log. If you use relaxed durability, some
transactions may not have been written to the log file yet, even though those
transactions were committed. Therefore, you should use relaxed durability ONLY
when you can afford to lose a small amount of recent data.
If you want to set a maximum delay time before the server writes data, use the
Logging.RelaxedMaxDelay parameter.
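For example, to use relaxed durability but limit how long committed data can wait before it is written to the transaction log, the [Logging] section of solid.ini could contain settings like the following (the delay value of 5000 milliseconds is illustrative):
[Logging]
DurabilityLevel = 1
RelaxedMaxDelay = 5000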
Note: The SERIALIZABLE isolation level is available for disk-based tables only.
For example:
SET ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
Note: Immediately after solidDB startup, the reported process size in Windows
operating systems is significantly smaller than the actual allocated size. This is
because cache pages are allocated at this stage but are excluded from the process
size until they are used for the first time. In UNIX operating systems, the cache
pages are included, and thus the reported process size is bigger than in Windows
operating systems.
You can control the process size by using the ADMIN COMMAND and parameters
presented in the following sections. The violations of process limits are logged in
the solmsg.out log file.
Srv.ProcessMemoryLimit
The Srv.ProcessMemoryLimit parameter specifies the maximum amount of virtual
memory that can be allocated to the in-memory database process.
When the limit is reached, that is, when the in-memory database process uses up
100% of the memory specified by Srv.ProcessMemoryLimit, the server will accept
ADMIN COMMANDs only. You can use the Srv.ProcessMemoryWarningPercentage
and Srv.ProcessMemoryLowPercentage parameters to warn you about increasing
process memory consumption.
Note:
v The Srv.ProcessMemoryLimit and Srv.ProcessMemoryCheckInterval parameters
are interlinked; if the ProcessMemoryCheckInterval parameter is set to 0, the
ProcessMemoryLimit parameter is not effective, that is, there is no process
memory limit.
v You should not set the Srv.ProcessMemoryLimit parameter when using SMA. If
you need to limit the memory the SMA server uses, use the
SharedMemoryAccess.MaxSharedMemorySize parameter.
Before the Srv.ProcessMemoryLimit is reached, the warning limit defined with the
ProcessMemoryWarningPercentage parameter is exceeded first and a warning is issued.
When the Srv.ProcessMemoryLowPercentage limit is exceeded, a system event is given.
Srv.ProcessMemoryWarningPercentage
The Srv.ProcessMemoryWarningPercentage parameter sets the first warning limit for
the total process size. The warning limit is expressed as percentage of the
Srv.ProcessMemoryLimit parameter value.
Srv.ProcessMemoryCheckInterval
The Srv.ProcessMemoryCheckInterval parameter defines the interval for checking
the process size limits. The interval is given in milliseconds.
The factory value is 0, that is, the process size checking is disabled.
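For example, to check the process size every 10 seconds, limit the process to roughly 1 GB of virtual memory, and receive a warning at 80 percent of that limit, the [Srv] section could contain settings like the following (the values are illustrative; see Appendix A, “Server-side configuration parameters,” on page 165 for the exact value formats):
[Srv]
ProcessMemoryLimit = 1024MB
ProcessMemoryWarningPercentage = 80
ProcessMemoryCheckInterval = 10000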
Your operating system may also move information from one location to another.
Depending on your operating system, this movement is called paging or
swapping. Many operating systems page and swap to accommodate large amounts
of information that do not fit into real memory. However, this takes time. Excessive
paging or swapping can reduce the performance of your operating system and
indicates that your system's total memory may not be large enough to hold
everything that the applications running on it need.
Database cache
The information managed by solidDB is stored either in memory or on disk. Since
memory access is faster than disk access, it is desirable for data requests to be
satisfied by access to memory rather than access to disk.
As a rule of thumb, allocate for the database cache at least:
v 0.5 MB per concurrent user, or
v 2-5% of the database size.
When estimating the necessary cache size by using the values above, use the larger
value. If the database is purely an in-memory database, the factory value will
suffice. When decreasing the cache size, note that in order to facilitate efficient
checkpoint activity, the size should not be less than 8 MB.
You should increase the value of CacheSize carefully. If the value is too large, it
leads to poor performance because the server process does not fit completely in
memory and therefore swapping of the server code itself occurs. If, on the other
hand, the cache size is too small, the cache hit rate remains poor. The symptoms of
poor cache performance are database queries that seem to be slower than expected
and excessive disk activity during queries.
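For example, to set the database cache to approximately 512 MB, the [IndexFile] section could contain the following (the value is illustrative and is given in bytes):
[IndexFile]
CacheSize = 536870912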
You can verify if the server is retrieving most of the data from disk instead of from
RAM by checking the cache hit rate using the command ADMIN COMMAND ’status’
or by checking the overall cache and file ratio statistics using ADMIN COMMAND
’perfmon’. For details on these commands, read “Performance counters (perfmon)”
on page 67 and “Checking overall database status” on page 65. Note that the cache
hit rate should be better than 95%.
Note:
solidDB uses a hash table to ease access to the cache. The hash table size equals
the number of pages in the cache. This guarantees almost collision-free access.
Sorting
When the solidDB SQL Optimizer chooses an execution plan, it considers the
performance impact of sorting data. Sorting occurs if the result set is not returned
automatically in the correct order. If sorting is needed, the Optimizer chooses
whether to use the internal sorter or the external sorter. The internal sorter is used
with small result sets (hundreds of rows) while the external sorter is used with
large result sets (thousands of rows).
Sorting occurs when no index satisfies the requested ordering of fetched rows; if
the table data is accessed using the primary key or index, the result set is
automatically in the order specified by the index in use. Hence, you can improve
server performance by designing primary keys and indexes to support the
ordering requirements of frequently used, performance-critical queries.
Note: Some queries require sorting implicitly. For example, if the optimizer
chooses a JOIN operation to use the MERGE JOIN algorithm, the result sets to be
joined require sorting before the join can occur.
Internal sorter
The internal sorter performs all sorting in the main memory. The amount of
memory used for sorting is defined with the SQL.SortArraySize parameter; the
parameter value defines the size of the array (in rows) that is used for ordering the
result set of a query. For example, if you specify a value of 1000, the server will
create an array big enough to sort 1000 rows of data. If the amount of data to be
sorted does not fit into the allocated memory, increase the value of the parameter
SQL.SortArraySize.
External sorter
If the sorting task does not fit in the main memory (typically with large result
sets), the Optimizer uses the external sorter, which stores intermediate information
to disk. The external sorter is enabled by default (Sorter.SorterEnabled=yes).
The temporary files used by the external sort are created in a directory or
directories specified with the Sorter.TmpDir_N parameter. The files are deleted
automatically after sorting has finished.
An external sort requires space both on disk and in memory, not just space on the
disk. You can configure the maximum amount of memory used for sorting with
the Sorter.MaxMemPerSort and Sorter.MaxCacheUsePercent parameters.
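For example, the following settings (the values and the directory path are illustrative; TmpDir_1 is the first instance of the Sorter.TmpDir_N parameter) enlarge the in-memory sort array and direct the temporary files of the external sorter to a dedicated disk:
[SQL]
SortArraySize = 4000
[Sorter]
SorterEnabled = yes
TmpDir_1 = /fastdisk/soltmp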
You can query the Optimizer decisions for sorting using the EXPLAIN PLAN FOR
statement.
If the Optimizer is not choosing the optimal query execution plan, you can
override the Optimizer decision by using optimizer hints. For more information,
see Hints in the IBM solidDB Programmer Guide.
Additionally, the performance counters with the prefix Sorter provide information
on the external sorter tasks. To view the Sorter performance counters, issue the
following command:
ADMIN COMMAND ’pmon sorter’
For example, high values of the Sorter start sort counter indicate excessive use of
the external sorter. If you have enough memory available, you can increase the
value of the SQL.SortArraySize parameter to avoid the use of the external sorter.
For more information about these two parameters, see Appendix A, “Server-side
configuration parameters,” on page 165.
Distributing I/O
Disk contention occurs when multiple processes try to access the same disk
simultaneously. To avoid this, move files from heavily accessed disks to less active
disks until they all have roughly the same amount of I/O.
It is usually faster to scan a table if the disk file is contiguous on the disk, rather
than spread across many non-contiguous disk blocks. To reduce existing
fragmentation, you may want to run defragmentation software if one is available
on your system. If your database file is growing, you may be able to reduce future
file fragmentation by using the configuration parameter ExtendIncrement.
Increasing the size of this parameter tells the server to allocate larger amounts of
disk space when it runs out of space. (Note that this does not guarantee contiguity
because the operating system itself may allocate non-contiguous sectors to satisfy
even a single request for more space.) As a general rule, larger values of
ExtendIncrement improve performance slightly, while smaller values keep the
database size slightly smaller. See Appendix A, “Server-side configuration
parameters,” on page 165, for more details about ExtendIncrement.
As the Bonsai Tree performs concurrency control, storing delete, insert, and update
operations, as well as key values, it merges new committed data to the storage tree
as a highly-optimized batch insert. This offers significant I/O optimization and
load balancing.
You can adjust the number of index inserts after which the merge process starts by
setting the following parameter in the [General] section of the solid.ini file. For example:
[General]
MergeInterval = 1000
Normally the recommended setting is the default value, which is cache size
dependent. The default is calculated dynamically from the cache size, so that only
part of the cache is used for the Bonsai Tree. If you change the merge interval, be
sure that the cache is large enough to accommodate the Bonsai Tree. The longer the
merge interval is (i.e. the more data that is stored in memory before being moved
to the main storage tree), the larger the cache needs to be.
Note: Although the server will have higher performance if merge intervals are
less frequent (i.e. batch inserts are larger), you may also see less consistent
response times. If your highest priority is not overall throughput, but is instead to
minimize the longest response time, then you may want to make merge intervals
more frequent rather than less frequent. More frequent merges will reduce the
worst case delays that interactive users may experience.
Tuning checkpoints
Checkpoints are used to store a transactionally-consistent state of the database
quickly onto the disk.
Checkpoints affect:
v runtime performance
v recovery time performance
Checkpoints cause solidDB to perform data I/O with high priority, which
momentarily reduces the runtime performance. This overhead is usually small.
Similar to merge intervals, less frequent checkpoints may mean less frequent, but
longer delays before the system responds to interactive queries. More frequent
checkpoints tend to minimize the worst case delays that an interactive user might
experience. However, such delays may be more frequent even if they are shorter.
Frequent checkpoints can reduce the recovery time in the event of a system failure.
If the checkpoint interval is small, relatively few changes to the database are made
between checkpoints and consequently, few changes need to be made during
recovery. To speed up recoveries, create checkpoints frequently; however, the server
performance is reduced during the creation of a checkpoint. Furthermore, the
speed of checkpoint creation depends on the amount of database cache used; the
more database cache is used, the longer the checkpoint creation will take. The
database cache size is controlled with the IndexFile.CacheSize parameter.
When other connections perform many write operations, the server must use a
large amount of memory to provide a consistent image of the database. If an open
transaction remains uncommitted for a long duration of time, solidDB requires
more memory; if the amount of memory available is insufficient, then solidDB
performs excessive paging or swapping, which slows performance.
Note that even in autocommit mode, SELECT statements are not automatically
committed after the data is read. solidDB cannot immediately commit SELECTs
since the rows need to be retrieved by the client application first. Even in
autocommit mode, you must either explicitly commit work, or you must explicitly
close the cursor for the SELECT statement. Otherwise, the SELECT transaction is
left open until the connect timeout expires.
Command/File: solmsg.out
Information: Obtain the date and time when new connections are created.
Command/File: ADMIN COMMAND 'trace on sql'
Information: Obtain information when new connections are started. The results are written to the soltrace.out file.
Command/File: ADMIN COMMAND 'report filename.txt'
Information: Obtain a list of internal variables containing connection and status information.

Command/File: ADMIN COMMAND 'report filename.txt'
Information: Obtain a list of internal variables containing connection and status information. To find out connections that have not committed their transaction, look for the Readlevel for each connection. If the transaction at a particular connection is properly closed, the Readlevel should be zero (0) for that connection.
Make sure these operations succeed by checking the return code or by properly
catching the possible exception. Be aware how many database connections your
application has, when and where they are created, and when the transactions at
these connections are committed.
This means that the application program only receives return code 'SUCCESS' from
the ODBC Driver Manager, even though no transaction is committed in the database.
Make sure that you are aware of all database connections. Note that each FETCH
after COMMIT (keeping the statement handle alive) also causes a new transaction
to start.
Problem: Slow response time is experienced for all queries. An increase in the number of concurrent users deteriorates the performance more than linearly. When all users are thrown out and then reconnected, performance still does not improve.
Cause: Insufficient cache size.
Remedy: Increase the cache size. Allocate for cache at least 0.5 MB per concurrent user or 2-5% of the database size. For more details, read the section Defining database cache size in IBM solidDB Administrator Guide.

Problem: Slow response time is experienced for all queries and write operations. When all users are thrown out and are connected, performance only improves temporarily. The disk is very busy.
Cause: The Bonsai Tree is too large to fit into the cache.
Remedy: Make sure that there are no unintentionally long-running transactions. Verify that all transactions (also read-only transactions) are committed in a timely manner. For more details, read Reducing Bonsai Tree size by committing transactions in IBM solidDB Administrator Guide.

Problem: The server process footprint grows excessively and causes the operating system to swap. The disk is very busy. The ADMIN COMMAND 'report' output shows a long list of currently active statements.
Cause: SQL statements have not been closed and dropped after use.
Remedy: Make sure that the statements that are no longer in use by the client application are closed and dropped in a timely manner.
To resolve a problem on your own, you can find out how to identify the source of
a problem, how to gather diagnostic information, where to get fixes, and which
knowledge bases to search. If you need to contact IBM Software Support, you can
find out what diagnostic information the service technicians require to help you
address a problem.
Troubleshooting a problem
Troubleshooting is a systematic approach to solving a problem. The goal of
troubleshooting is to determine why something does not work as expected and
how to resolve the problem.
The first step in the troubleshooting process is to describe the problem completely.
Problem descriptions help you and your IBM Support representative know where
to start to find the cause of the problem. This step includes asking yourself basic
questions:
v What are the symptoms of the problem?
v Where does the problem occur?
v When does the problem occur?
v Under which conditions does the problem occur?
v Can the problem be reproduced?
The answers to these questions typically lead to a good description of the problem,
which can then lead you to a problem resolution.
When starting to describe a problem, the most obvious question is "What is the
problem?" This question might seem straightforward; however, you can break it
down into several more-focused questions that create a more descriptive picture of
the problem. These questions can include:
v Who or what is reporting the problem?
v What are the error codes and messages?
v How does the system fail? For example, is it a loop, hang, crash, performance
degradation, or incorrect result?
Determining where the problem originates is not always easy, but it is one of the
most important steps in resolving a problem. Many layers of technology can exist
between the reporting and failing components. Networks, disks, and drivers are
only a few of the components to consider when you are investigating problems.
The following questions help you to focus on where the problem occurs to isolate
the problem layer:
v Is the problem specific to one platform or operating system, or is it common
across multiple platforms or operating systems?
v Is the current environment and configuration supported?
v Is the application running locally on the database server or on a remote server?
If one layer reports the problem, the problem does not necessarily originate in that
layer. Part of identifying where a problem originates is understanding the
environment in which it exists. Take some time to completely describe the problem
environment, including the operating system and version, all corresponding
software and versions, and hardware information. Confirm that you are running
within an environment that is a supported configuration; many problems can be
traced back to incompatible levels of software that are not intended to run together
or have not been fully tested together.
Responding to these types of questions can give you a frame of reference in which
to investigate the problem.
Knowing which systems and applications are running at the time that a problem
occurs is an important part of troubleshooting. These questions about your
environment can help you to identify the root cause of the problem:
v Does the problem always occur when the same task is being performed?
v Does a certain sequence of events need to occur for the problem to surface?
v Do any other applications fail at the same time?
Answering these types of questions can help you explain the environment in
which the problem occurs and correlate any dependencies. Remember that just
because multiple problems might have occurred around the same time, the
problems are not necessarily related.
The ADMIN COMMAND 'trace' command controls the solidDB trace facility. The ADMIN
COMMAND 'trace on sql' enables tracing of SQL statements. The tracing
information is output by default to the soltrace.out file.
The ADMIN COMMAND 'monitor' command controls the solidDB monitoring facility.
The ADMIN COMMAND 'monitor on' enables monitoring of user activity and SQL
calls. The monitoring logs are output to the soltrace.out file.
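For example, from solidDB SQL Editor (solsql) the facilities can be switched on before
reproducing a problem and off again afterwards; the 'off' forms below are the assumed
counterparts of the 'on' forms described above:
ADMIN COMMAND 'trace on sql';
ADMIN COMMAND 'monitor on';
-- reproduce the problem, then:
ADMIN COMMAND 'monitor off';
ADMIN COMMAND 'trace off sql';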
The SQL Info facility generates information for each SQL statement processed by
solidDB.
To generate the SQL Info, you run your application with the SQL Info facility
enabled. The SQL Info facility can be enabled in the following ways:
v With the Info parameter in the [SQL] section of the solid.ini configuration file.
v With the SET SQL INFO ON statement.
The tracing level (info_level) is defined as an integer between 0 (no tracing) and 8
(solidDB info from every fetched row).
Table 39. SQL Info levels
0 no output
The trace information is output by default to the soltrace.out file in the solidDB
working directory. You can also specify the output file using the SQL.InfoFileName
parameter.
Examples
[SQL]
Info = 1
InfoFileName = solidsql_trace.txt
The following SQL statement turns on the SQL Info facility at level 1, outputting the
trace information to the my_query.txt file in the working directory. The SQL Info
facility is turned on only for the client that executes the statement.
SET SQL INFO ON LEVEL 1 FILE ’my_query.txt’
The following SQL statement turns off the SQL Info facility:
SET SQL INFO OFF
Additionally, you can generate the stack traces information for all currently
running threads by sending the server the SIGUSR1 signal.
Note: The stack traces facility is not supported on Windows operating systems.
Procedure
v To enable or disable the stack traces facility, set the Srv.StackTraceEnabled
parameter to 'yes' or 'no' (see the configuration sketch after this list).
v To output the stack trace information manually without shutting down the
server, send the server the SIGUSR1 signal.
For example, use the following command in Linux environments:
kill -SIGUSR1 <process_id>
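A minimal solid.ini sketch for the first bullet; only the section and parameter shown
above are used:
[Srv]
StackTraceEnabled=yes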
Network trace facility: Network tracing can be done on the solidDB node, on the
application node, or concurrently on both nodes. The trace information is written
to the default trace file or the file specified with the Com.TraceFile parameter.
The default name of the output file is soltrace.out. This file is created in the
current working directory of the server or client depending on which end the
tracing is started.
You can turn the network trace facility on in the following ways:
v Use the Com.Trace and Com.TraceFile parameters.
Defining the TraceFile configuration parameter automatically turns on the
Network trace facility.
v Use the environment variables SOLTRACE and SOLTRACEFILE.
The environment variable settings override the definitions in the solid.ini file.
Defining the SOLTRACEFILE environment variable automatically turns on the
Network trace facility.
v Use the option -t and/or -ofilename as a part of the network name.
– Option -t turns on the Network trace facility.
– Option -o turns on the facility and defines the name of the trace output file.
Example:
[Com]
Connect = nmp SOLIDDB
Listen = nmp SOLIDDB
Trace = Yes
or
set SOLTRACEFILE = trace.out
or
[Com]
Connect = nmp -oclient.out soliddb
Listen = nmp -oserver.out soliddb
Ping facility: The solidDB ping facility can be used to test the performance and
functionality of the network connection. The ping facility is built into all solidDB
client applications and is turned on with the network name option -p level .
Clients can always use the ping facility at level 1. Levels 2, 3, 4 or 5 may only be
used if the server is set to use the ping facility at least at the same level.
Table 40. Ping facility levels
Note:
If the solidDB client does not have an existing server connection, you can use the
SQLConnect() function with the connect string option -p1 (ping test, level 1) to
check whether solidDB is listening at a certain address. Without logging into solidDB,
SQLConnect() can then check the network layer and ensure that solidDB is listening.
When used in this manner, SQLConnect() generates error code 21507, which means
the server is alive.
Turn on the ping facility by using the following network name syntax:
protocol_name -p level server_name
For example, to run the ping facility with solidDB SQL Editor (solsql), use the
following command:
solsql "tcp -p1 -oping.out 1964"
This runs the ping facility at level 1 and outputs the results into the ping.out file.
This test checks whether the server is alive and exchanges one 100-byte message with
the server.
After the ping facility has been run, the client exits with the following message:
SOLID Communication return code xxx: Ping test successful/failed,
results are in file FFF.XX
The server-side ping level that is set with the Com.Listen parameter restricts the
available ping levels on the client side. Clients can always use the ping facility at
level 1 (0 is no operation/default). Levels 2, 3, 4 or 5 may only be used if the
server is set to use the ping facility at least at the same level.
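For example, the following server-side solid.ini sketch would allow clients to use ping
levels up to 4; the protocol and port number are illustrative:
[Com]
Listen = tcp -p4 1964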
Note: Ping clients running at a level greater than 3 may cause heavy network traffic
and may slow down any application that is using the network, including any SQL
clients connected to the same solidDB server.
The components for the solidDB Universal Cache must be installed and configured
in the order described in section Overview of installation and configuration steps.
Review the steps below and ensure that the installation and configuration steps
were followed.
To set up replication between databases, you need to define and create various
entities and components which are dependent on each other. These entities and
components must be created in the following order and modified or deleted in the
reverse order. For more details and instructions, see the InfoSphere Change Data
Capture Management Console, Administration Guide.
1. Databases
2. InfoSphere CDC instances
3. Datastores
4. Subscriptions
5. Table mappings
If you need to make changes to your replication subscriptions, you must first end
replication on your subscriptions. For more details and instructions, see section
Ending replication on a subscription in the InfoSphere Change Data Capture Management
Console, Administration Guide.
Causes
The command ADMIN COMMAND ’hsb netcopy’ does not copy any log files.
Subsequently, because InfoSphere CDC replication is asynchronous in nature,
InfoSphere CDC for solidDB might not have processed all the transactions up to
the point from which the netcopy was made. This means that the log position
InfoSphere CDC for solidDB tries to use after the switchover might not be valid –
the log entry for the last transaction on node 1 before the netcopy might not exist
on the new primary (node 2).
Workaround
To ensure that InfoSphere CDC for solidDB has access to a valid log entry in the
new primary server (node 2) after a switchover:
v Before performing netcopy, copy the log files from the primary server (node 1)
to the secondary server (node 2). This ensures that InfoSphere CDC for solidDB
has access to the log positions of the transactions that were executed before the
netcopy was made.
or
v Do not perform switchover shortly after netcopy or wait for several transactions
to be replicated to the back-end database before performing the switchover. This
ensures that log positions in the primary server (node 1) and secondary server
(node 2) are synchronized.
or
v If the switchover has already taken place (for example, due to a failure of node
1):
1. Recover the old primary server (node 1).
2. Perform a switchover to return the old primary server (node 1) back to a
primary server.
3. Restart replication on the subscription.
InfoSphere CDC for solidDB connections to the solidDB server can be idle for long
periods of time, causing connection idle timeouts. By default, the solidDB server
timeout for idle connections is set to 480 minutes (specified with the
Srv.ConnectTimeOut parameter).
Workaround:
Set the connection idle timeout for the InfoSphere CDC for solidDB connection to
infinite by using the non-standard solidDB JDBC connection property
solid_idle_timeout_min=0. The InfoSphere CDC for solidDB specific connection
settings are specified with the InfoSphere CDC configuration tool (dmconfigurets),
using the Database area > Advanced button in Windows operating systems or the
Configure advanced parameters > Modify settings option in Linux and UNIX
operating systems.
Note: The timeout setting specified for the InfoSphere CDC for solidDB instance
does not impact the server setting (Srv.ConnectTimeOut) for other connections.
Troubleshooting SMA
This section provides instructions and guidelines on how to prevent or
troubleshoot common problems while configuring or using SMA.
[solid1]~ ./solidsma -f -c .
Server could not allocate shared memory segment by id -1
Causes
The SMA server startup fails because there is no memory available. This
situation can occur if:
v An SMA application or SMA server terminates abnormally and leaves shared
memory allocated. Even if you shut down all SMA processes, the shared
memory remains reserved.
v You have allocated too little memory for SMA use.
This leads to a situation where all memory is used and you cannot start an
SMA server any more.
Resolving the problem
In Linux and UNIX environments, clear the hanging shared memory
segments with the ipcrm command.
#!/bin/sh
# Remove all shared memory segments owned by the given user.
if [ $# -ne 1 ]
then
    echo "Usage: $0 user"
    exit 1
fi
# List the shared memory segment ids (column 2) whose owner (column 3) matches the user.
for shm_id in $(ipcs -m | grep $1 | awk -v owner=$1 '{ if ( owner == $3 ) {print $2} }')
do
    ipcrm -m $shm_id
done
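For example, if the script above is saved as clear_sma_shm.sh (the file name is
arbitrary), the hanging segments owned by the operating system user solid can be
removed and the result verified as follows:
sh clear_sma_shm.sh solid
ipcs -m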
For more details on the ipcrm command, see your operating system
documentation.
1. Force the SMA server to use a different start address for its shared memory by
setting the environment variable SOLSMASTART.
v Linux and UNIX operating systems:
export SOLSMASTART=<start_address_space>
For example:
export SOLSMASTART=0x2b0000000000
Symptom
solidDB goes down with Error 11003 File write failed, configuration
exceeded (SU_ERR_FILE_WRITE_CFG_EXCEEDED).
Note:
v You can only add new database files with the ADMIN COMMAND ’filespec -a’
command; you cannot modify the size of existing database files.
v The new database file specification is stored in the solid.ini configuration
file at next shutdown.
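As an illustration only, a new database file could be added online with a command of
the following form; the file name and size are examples, and the exact argument syntax
should be verified against the ADMIN COMMAND reference:
ADMIN COMMAND 'filespec -a solid2.db 2147483647';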
or
1. Shut down solidDB.
2. Modify the IndexFile.FileSpec parameter in the solid.ini file.
v Increase the maximum limit for the database file.
or
v Divide the database into multiple files by using the FileSpec_[1..n] format.
For example:
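A minimal sketch; the file names are illustrative and the sizes are given in bytes:
[IndexFile]
FileSpec_1=solid.db 2147483647
FileSpec_2=solid2.db 2147483647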
Important: If you have not defined the FileSpec_1 parameter earlier, use the
default file size (2147483647) as shown above.
3. Restart solidDB.
Related information:
“FileSpec_[1...n] parameter” on page 53
“ADMIN COMMAND” on page 309
Troubleshooting MME.ImdbMemoryLimit
If you get an error message indicating that the limit set with MME.ImdbMemoryLimit
has been reached, you need to take action immediately.
You must address both the immediate problems and the long-term problems. The
immediate tasks are to prevent users from experiencing serious errors and to
free up some memory before shutting down the server, so that your system is not
out of memory when you restart the server. For the long term, you need to ensure that
you will not run out of memory in the future as tables expand.
If there were not enough temporary tables and transient tables to free enough
memory, do the following:
1. Drop one or more indexes on in-memory tables.
2. Shut down the server.
3. If there was absolutely nothing in memory that you could discard (for example,
you had only normal in-memory tables, none of which had indexes, and all of
which had valuable data), increase the MME.ImdbMemoryLimit slightly before
restarting the server. This may force the server to start paging virtual memory
which will greatly reduce performance, but it will allow you to continue using
the server and address the long-term problems. If you previously set the
ImdbMemoryLimit a little bit lower than the maximum, you will be able to raise
it slightly now without forcing the system to start paging virtual memory.
4. Restart the server.
5. Minimize the number of people using the system until you have had time to
address the long-term problem. Ensure that users do not create temporary
tables or transient tables until the long-term problem has been addressed.
After you have solved the immediate problem and have ensured that the server
has at least some free memory, you are ready to address the long term problems.
For the long term, reduce the amount of data stored in in-memory tables. The ways to
do this are to reduce the number or size of in-memory tables (including temporary
tables and transient tables), or to reduce the number of indexes on in-memory tables.
v If the problem was caused solely by heavy usage of temporary or transient
tables, ensure that not too many sessions create too many large temporary or
transient tables at the same time.
v If the problem was caused by using too much memory for normal in-memory
tables, and if you cannot increase the amount of memory available to the server,
move one or more tables out of main memory and onto the disk.
Note: The intermediate table does not need indices. The indices should be
re-created in the new table after the data has been successfully copied.
3. Drop the in-memory table.
4. Rename the disk-based table to have the original name of the dropped
in-memory table.
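As an illustration of the whole procedure, including creating the disk-based table and
copying the data before steps 3 and 4, the following sketch uses a hypothetical
in-memory table MYTAB; the table and column definitions, the index, and the rename
syntax are assumptions that must be adapted to your schema and verified against the SQL
reference for your solidDB version.
-- Create a disk-based copy of the hypothetical in-memory table MYTAB.
CREATE TABLE mytab_disk (id INTEGER PRIMARY KEY, val VARCHAR(100)) STORE DISK;
-- Copy the data; indexes are created on the new table only after the copy.
INSERT INTO mytab_disk SELECT id, val FROM mytab;
COMMIT WORK;
-- Step 3: drop the in-memory table.
DROP TABLE mytab;
-- Step 4: rename the disk-based table to the original name (assumed syntax).
ALTER TABLE mytab_disk SET TABLE NAME mytab;
COMMIT WORK;
-- Re-create any required indexes on the disk-based table.
CREATE INDEX mytab_val_ix ON mytab (val);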
Tip:
v You should set the MME.ImdbMemoryLimit to a slightly lower value than the
maximum you really have available. If you run out of memory and have no
unnecessary in-memory tables or indexes that you can get rid of, you can then
increase the MME.ImdbMemoryLimit slightly and restart the server with enough
free memory to address the long-term need.
v Use the MME.ImdbMemoryWarningPercentage parameter to warn you about increasing
memory consumption (see the configuration sketch after this list).
v Not all situations require you to reduce the number of in-memory tables. In
some cases, the most practical solution may be to simply install more memory in
the computer.
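For illustration, a solid.ini sketch that sets the limit slightly below the memory
actually available and enables the warning; the values and unit suffix are examples only:
[MME]
;Example values; adjust to the memory available on your system.
ImdbMemoryLimit=1500m
ImdbMemoryWarningPercentage=80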
Symptom
Running the solidDB Data Dictionary utility (soldd) for schema export fails with error 23007.
Causes
The error 23007 is a generic solidDB procedure error that is returned when you attempt
to create a stored procedure with a name that already exists in the database. soldd
creates system stored procedures to export the current value of a database
sequence object and drops them once the sequence object is exported. If soldd
is interrupted during the schema export, dropping the system stored procedures
might fail. When soldd is rerun, error 23007 is returned.
To search knowledge bases for information that you need, use one or more of the
following approaches:
Procedure
v Find the content that you need by using the IBM Support Portal.
The IBM Support Portal is a unified, centralized view of all technical support
tools and information for all IBM systems, software, and services. The IBM
Support Portal lets you access the IBM electronic support portfolio from one
place. You can tailor the pages to focus on the information and resources that
you need for problem prevention and faster problem resolution.
The following link provides a list of all solidDB product family TechNotes,
ordered by publication date.
– solidDB product family TechNotes
v Search for content about solidDB products in developerWorks®
developerWorks is an IBM resource for developers and IT professionals.
v Search for content by using the IBM masthead search. You can use the IBM
masthead search by typing your search string into the Search field at the top of
any ibm.com® page.
v Search for content by using any external search engine, such as Google, Yahoo,
or Bing. If you use an external search engine, your results are more likely to
include information that is outside the ibm.com domain.
Tip: Include "IBM" and the name of the product in your search if you are
looking for information about an IBM product.
Getting fixes
A product fix might be available to resolve your problem.
All solidDB fix packs and interim fixes are available through Fix Central
(http://www.ibm.com/support/fixcentral/).
Procedure
1. Visit the following solidDB Support page for a list of available fix packs and
download links to the installation images: Fix packs by version for solidDB and
solidDB Universal Cache
2. Determine which fix pack you need. In general, installing the most recent fix
pack is recommended, to avoid problems caused by software defects that are
already known and corrected.
3. Download the fix pack and extract the files into a directory of your choice.
4. Apply the fix. Follow the instructions in the readme.txt file provided with the
fix.
Tip: You can view and download the readme.txt file separately using the Fix
Central HTTP download option.
Before contacting IBM Software Support, your company must have an active IBM
software maintenance contract, and you must be authorized to submit problems to
IBM. For information about the types of available support, see the Support
portfolio topic in the Software Support Handbook.
Procedure
1. Define the problem, gather background information, and determine the severity
of the problem. For more information, see the Getting IBM support topic in the
Software Support Handbook.
2. Collect diagnostic information.
See “Collecting diagnostics data” on page 157 for details.
3. Submit the problem to IBM Software Support in one of the following ways:
Results
If the problem that you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround that you can implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
IBM Support Portal daily, so that other users who experience the same problem
can benefit from the same resolution.
The steps assume that you have already opened a problem management record
(PMR) with IBM Software Support.
Procedure
v To submit files (via FTP) to the Enhanced Centralized Client Data Repository
(EcuRep):
1. Package all files into ZIP or TAR format, and name the package according to
your Problem Management Record (PMR) identifier.
Your file must use the following naming convention in order to be correctly
associated with the PMR: xxxxx.bbb.ccc.yyy.yyy, where xxxxx is the PMR
number, bbb is the PMR's branch number, ccc is the PMR's territory code,
and yyy.yyy is the file name.
2. Using an FTP utility, connect to the server ftp.emea.ibm.com.
3. Log in as the userid "anonymous" and enter your e-mail address as your
password.
4. Go to the toibm directory. For example, cd toibm.
5. Go to one of the operating system-specific subdirectories. For example, the
subdirectories include: aix, linux, unix, or windows.
6. Change to binary mode. For example, enter bin at the command prompt.
7. Put your file on the server by using the put command. Use the file naming
convention described in step 1 to name your file: xxxxx.bbb.ccc.yyy.yyy, where
xxxxx is the PMR number, bbb is the PMR's branch number, ccc is the PMR's
territory code, and yyy.yyy is the description of the file type, such as tar.Z or
xyz.zip. Your PMR will be updated to list where the files are stored. You can
send files to the FTP server, but you cannot update them. Any time that you
must subsequently change the file, you must create a new file name.
8. Enter the quit command.
v To submit files using the ESR tool:
1. Sign onto ESR.
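For the FTP method described above, a typical session could look like the following
sketch; the PMR identifier 12345,678,000, the target subdirectory, and the e-mail
address are placeholders:
ftp ftp.emea.ibm.com
Name: anonymous
Password: your.email@example.com
ftp> cd toibm
ftp> cd linux
ftp> bin
ftp> put 12345.678.000.solidsupport.zip
ftp> quit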
The solidsupport utility collects diagnostic files such as solmsg, soltrace, and
ssdebug from the database instance in question and stores them in a compressed
archive file (solidsupport.zip). The utility also produces directory listings of
database, logging, and sorter directories, and collects various operating system and
environment specific information.
Important: To protect the security of your data, solidsupport does not capture any
user data from tables or logs by default. To include the database and log files and all
files from the database working directory, use the option -a.
Note:
v The solidsupport utility only collects existing files; it does not generate any
diagnostics files, such as the trace files (soltrace.out). You need to first enable
the generation of the log files, as described in the Notes column in the above table.
v The solidsupport utility does not collect any information from the client side
(ODBC/JDBC drivers). You need to collect the client-specific information
manually; for more information, see section “Collecting client and other
diagnostics data” on page 160.
Default is solidsupport.zip.
The solidsupport utility collects data from the machine where it is run. The
configuration file path is used as a working directory for solidsupport; all output
files are written to that directory.
v In a client-server environment, database-related information is collected from
the machine where the database resides and from the location specified by the
solid.ini configuration file.
v In HotStandby setups, you need to run solidsupport on both HotStandby
nodes.
Examples
Example 1
Example 2
Gathering diagnostics data on solidDB ODBC API: If the problem concerns the
performance of a specific solidDB ODBC API or SQL statement, run the SQL Info
facility at level 4.
If the problem concerns the cooperation of solidDB and any independent software
vendor (ISV) software package, include the following information:
v Full name of the software
v Version and language
v Manufacturer
v Error messages from the ISV software package
In Windows environments, you may also use the ODBC trace facility
(Administrative Tools > ODBC (Data Sources) > Tracing) to get a log of the ODBC
statements.
Procedure
v To subscribe to RSS feeds, copy the RSS feed URL to your RSS reader.
– solidDB Support RSS - http://www.ibm.com/software/support/rss/db2/
3457.xml?rss=s3457&ca=rssdb2
– solidDB Product Family forum RSS - http://www.ibm.com/developerworks/
forums/rss/rssmessages.jspa?forumID=1310
For general information about RSS, including steps for getting started and a list
of RSS-enabled IBM webpages, visit the IBM Software Support RSS feeds site.
Most parameters in most sections apply to all solidDB components. The sections
that do not apply to all components are listed below:
v The MME section applies only to in-memory databases.
v The Synchronizer section applies only to solidDB advanced replication
capability.
v The HotStandby section only applies to the High Availability component.
The descriptions of some parameters specify that those parameters (or some
specific settings of those parameters) apply only to a particular component. Each
exception is documented in the description of the parameter itself.
Accelerator section
Table 44. Accelerator parameters
The [Accelerator] parameters are listed below with their description, factory value, and access mode.
ImplicitStart: If set to yes, solidDB starts automatically as soon as the ODBC API
function SQLConnect is called in a user application. If set to no, solidDB must be
explicitly started with a call to the SSC API function SSCStartServer. Factory value:
yes. Access mode: RW/Startup.
ReturnListenErrors: If this parameter is set to yes and network listening fails, the
SSCStartServer function returns an error. Factory value: no. Access mode: RW/Startup.
Cluster section
Table 45. Cluster parameters
Communication section
Table 46. Communication parameters
The [Com] parameters are listed below with their description, factory value, and access mode.
Listen: Defines the network (listening) name for a server. The format of the network name is:
protocol_name [options] server_name[,protocol_name [options] server_name, ...]
You can define several network listening names. When the solidDB server process is
started, it publishes at least one network name that distinguishes it in the network.
The server can then start to listen to the network using the given network name.
Note: The ADMIN COMMAND ’par com.listen=value’ command does not replace existing
network listening names; it appends new listening names to the existing list.
Factory value: tcp 1964. Access mode: RW.
MaxPhysMsgLen: Defines the maximum length of a single physical network message in bytes;
longer network messages will be split into smaller messages of this size. Factory value:
OS dependent. Access mode: RW/Startup.
RConnectLifetime: A time period in seconds for how long the idle connections are kept
open in the pool. Whenever the connection is used, the timer starts from zero. Valid
values range from 0 to 3600. Unit: 1 second. Factory value: 60. Access mode: RW.
ReadBufSize: Sets the buffer size in bytes for the data read from the network. Factory
value: OS dependent. Access mode: RW/Startup.
SocketLinger: This parameter controls the TCP socket linger (SO_LINGER) behavior after a
close on the socket connection is issued. It indicates whether the system attempts to
deliver any buffered data (yes) or discards it (no) when a close() is issued. The
parameter affects all server-side connections, including advanced replication and
HotStandby. Factory value: no. Access mode: RW/Startup.
SocketLingerTime: This parameter defines the length of the time interval (in seconds)
the socket lingers after a close is issued. If the time interval expires before the
graceful shutdown sequence completes, an abortive shutdown sequence occurs (the data is
discarded). The default value zero indicates that the system default is used (typically,
1 second). Factory value: 0. Access mode: RW/Startup.
TcpKeepAlive: This parameter can only be used on Linux, HP-UX, and Solaris platforms. On
other platforms, the parameter has no effect. Factory value: no. Access mode: RW/Startup.
TcpKeepAliveProbeCount: This parameter can only be used on Linux, HP-UX, and Solaris
platforms. On other platforms, the parameter has no effect. Factory value: 9. Access
mode: RW/Startup.
Trace: If this parameter is set to yes, trace information about network messages for the
established network connection is written to a file specified with the TraceFile
parameter. The factory value for the TraceFile parameter is soltrace.out. Factory value:
no. Access mode: RW/Startup.
TraceFile: If the Trace parameter is set to yes, trace information about network
messages is written to a file specified with this parameter. Factory value: soltrace.out
(written to the current working directory of the server or client, depending on which
end the tracing is started). Access mode: RW/Startup.
WriteBufSize: Sets the buffer size in bytes for the data written into the network.
Factory value: OS dependent. Access mode: RW/Startup.
General section
Table 47. General parameters
BackupCopyLog If set to yes, backup operation will copy log yes RW/Startup
files to the backup directory
BackupDeleteLog If set to yes, old log files will be deleted yes RW/Startup
after backup operation
BackupDirectory: Defines the directory into which the backup of the database, the log
files, and the configuration file solid.ini is made, using either the factory value
'backup' or a given name. For example, BackupDirectory=abc creates the backup in the
directory 'abc'. Factory value: 'backup' directory. Access mode: RW/Startup.
LockHashSize: The server uses a hash table (array) to store lock information. If the
size of the array is significantly underestimated, performance degrades. A hash table
that is too large does not directly affect performance, although it causes memory
overhead. The LockHashSize parameter determines the number of elements in the hash
table. Factory value: 1000000. Access mode: RW/Startup.
0 (no timeout)
NetBackupCopyLog If set to yes, the log files are copied to the yes RW/Startup
remote backup directory.
NetBackupDeleteLog: If set to yes, the backed-up log files are deleted from the source
server after the network backup has completed. Factory value: yes. Access mode:
RW/Startup.
NetBackupDirectory: Sets the remote backup directory. The path expression may be
relative or absolute. Relative paths are interpreted relative to the working directory
of the NetBackup server. Factory value: no factory value. Access mode: RW/Startup.
For example:
ADMIN COMMAND ’report myfile.txt’
tablesize = setting
IndexFile section
Table 49. IndexFile parameters
BlockSize Sets the block size of the database file in bytes; use multiple of 2 16 KB RO
KB: minimum 2 KB, maximum 64 KB
Unit: 1 byte
k=KB
CacheSize Sets the size of database cache memory for the server in bytes; the 32 MB RW
minimum is 512 kilobytes. Although solidDB is able to run with a
small cache size, a larger cache size speeds up the server. The Unit: 1 byte
cache size needed depends on the size of the database file, the k=KB m=MB
number of connected users, and the nature of the operations
executed against the server.
You can change the CacheSize value dynamically with the ADMIN
COMMAND. For example:
ADMIN COMMAND ’parameter IndexFile.CacheSize=40mb’
For example:
FileSpec_1=c:\soldb\solid.db 200000000
You can also have multiple files on a single disk if your physical
disk is partitioned into multiple logical disks and no single logical
disk can accommodate the size of the database file you expect to
create.
PreFlushPercent Sets the percentage of page buffer which is kept clean by the 25 RW/Startup
preflush thread.
Note that the preflush operations prepare the cache for the
allocation of new blocks. The blocks are written onto the disk from
the tail of the cache based on a Least Recently Used (LRU)
algorithm. Therefore, when the new cache blocks are needed, they
can be taken immediately without writing the old contents onto
the disk.
ReadAhead Sets the number of prefetched index reads during long sequential 4 RW/Startup
searches.
ReferenceCacheSizeForHash solidDB uses a hash table to ease access to the cache. The hash 0 RW/Startup
table size equals the number of pages in the cache. This guarantees
almost collision-free access. If the cache size is increased
dynamically, the hash table is not automatically enlarged. This
results in a higher collision probability. To avoid this, you can use
the ReferenceCacheSizeForHash parameter to accommodate the
enlarged cache. The ReferenceCacheSizeForHash parameter value
is used for calculating the cache hash table size. You should only
use the parameter if you know, in advance, what will be the
maximum cache size during the server lifecycle. On the other
hand, if the value is not given, hash table collisions may occur
when the cache size is increased.
Note: The ReferenceCacheSizeForHash parameter value must not
be smaller than the CacheSize value. If it is, the
ReferenceCacheSizeForHash parameter value is rejected and the
default value is used. Also, a message is printed to the solmsg.out
log file.
SynchronizedWrite: On UNIX and Linux platforms, this parameter may be set to "no" to
enable asynchronous I/O. Asynchronous I/O provides, in general, more performance, but it
can cause higher variance of response latencies (lower latency determinism). Factory
value: yes. Access mode: RW/Startup.
Logging section
Table 50. Logging parameters
The specified log directory has to exist prior to starting the server.
If the directory does not exist, solidDB returns the following type
of error:
SsBOpenLocal failed, file ’log/sol00001.log’,
errno = 2, retries = 0, open files = 1
LogEnabled: Specifies whether transaction logging is enabled or not. If transaction
logging is disabled, you will get better performance but lower transaction durability
(if solidDB shuts down unexpectedly, you lose any transactions since the last
checkpoint). This parameter applies to in-memory tables and disk-based tables. Factory
value: yes. Access mode: RW/Startup.
LogWriteMode: Specifies the mode in which the log is written. The following two modes
are available: 0 (ping-pong method) and 2 (overwrite method). The choice of logging
method depends on the log file media and the level of security required. For details on
each of these methods, read “Transaction logging” on page 35. Factory value: 2
(overwrite method). Access mode: RW/Startup.
Unit: 1 KB k=KB
m=MB
RelaxedMaxDelay: Sets the maximum time in milliseconds that the server waits until the
committed transaction(s) are written to the log. This parameter applies only when the
transaction durability level is set to RELAXED (with the Logging.DurabilityLevel=1
parameter or the SET DURABILITY statement). Unit: 1 ms. Factory value: 5000 milliseconds
(5 seconds). Access mode: RW/Startup.
LogReader section
Table 51. Log Reader parameters
Unit: megabytes.
Unit: megabytes.
Silent If set to Yes, the Log Reader no RW/Startup
activities are not output to
solmsg.out.
MME section
Table 52. MME parameters
[MME] Description Factory Value Access Mode
When this parameter is set to yes, the lock level is escalated from
row-level to table-level after a specified number of rows (in the
same table) have been locked within the current transaction.
LockHashSize: The server uses a hash table (array) to store lock information. If the
size of the array is significantly underestimated, performance degrades. A hash table
that is too large does not directly affect performance, although it causes memory
overhead. The LockHashSize parameter determines the number of elements in the hash
table. Factory value: 1000000. Access mode: RW/Startup.
In general, the more locks you need, the larger this array should
be. However, it is difficult to calculate the number of locks that
you need, so you may need to experiment to find the best value
for your applications.
The value that you enter is the number of hash table entries. Each
table entry has a size of one pointer (4 bytes in 32-bit
architectures). Thus, for example, if you choose a hash table size of
1,000,000, then the amount of memory required is 4,000,000 bytes
(assuming 32-bit pointers).
MaxBytesCachedInPrivateMemoryPool: This parameter defines the maximum bytes stored into
the free list of MME's private memory pool (the private memory pool is private for each
main-memory index). If there is more free memory in the private pool, the extra memory
is merged into the global pools. Factory value: 100000. Access mode: RW/Startup.
MaxCacheUsage The value of MaxCacheUsage limits the amount of D-table cache 8MB RW/Startup
used while checkpointing M-tables. The value is expected to be
given in bytes. Regardless of the value of the MaxCacheUsage at
most half of the D-table cache (IndexFile.CacheSize) is used for
checkpointing M-tables. Value MaxCacheUsage=0 sets the value
unlimited, which means that the cache usage is
IndexFile.CacheSize/2.
MaxTransactionSize This parameter defines the maximum approximate size of a 0 RW
transaction in bytes.
When set to Table, only objects that belong to the same database
table are allocated from a single memory segment. This ensures,
for example, that dropping a whole table frees the memory
segment back to operating system. Only unused memory
segments can be returned back to system.
When set to Global, memory pools are shared between all MME
data.
NumberOfMemoryPools This parameter defines the number of global memory pools. 1 RW/Startup
Bigger values may give better performance on multicore systems
with certain load scenarios but they also increase memory slack
and hence server process size.
Possible values are between 1 and 65536. Value 1 means that the
load is executed in single thread.
If a statement has less than the estimated number of sortable rows, the
statement is not complex and it is not passed through to the back end.
Value 0 (zero) means that number of sortable rows is not used when
estimating if the statement is complex.
If a statement has less tables than specified with this parameter, the
statement is not complex and it is not passed through to the back end.
For example:
[Passthrough]
ErrorMapFileName=myfiles/db2tosoliderrors.txt
Example:
; this file maps DB2 native errors to solidDB native errors
-207 13015 ; column not found
-407 13110 ; NULL not allowed for non NULL column
; end of errormappings
When set to yes, solidDB server treats the ODBC handles as 32-bit
integers instead of the 64-bit void pointers that are native on the 64-bit
platforms.
IgnoreOnDisabled: The IgnoreOnDisabled parameter defines how the application program
perceives the fact that passthrough is disabled. If the value is yes, all the statements
related to passthrough (SET PASSTHROUGH ...) are ignored. If the value is no, an error
is returned on any attempt to execute those statements. Factory value: yes. Access mode:
R/W.
The connect string must be in the format used by the ODBC call SQLConnect() (the
ServerName argument).
SqlPassthroughRead: The SqlPassthroughRead parameter defines how read statements are
passed from the solidDB server to the back end. Factory value: none. Access mode: R/W.
SharedMemoryAccess section
Table 54. Shared memory access parameters
[SharedMemoryAccess] Description Factory value Startup
MaxSharedMemorySize This parameter sets the maximum total size of the 0 (automatic) RW
shared memory area used by solidDB.
Unit: 1 byte,
If the SMA server tries to allocate more, an "out of G=GB, M=MB,
memory" error occurs. With value "0", the maximum K=KB
value is set automatically to be the size of the
physical memory of the computer (platform
specific).
Note: The value set with the
SharedMemoryAccess.MaxSharedMemorySize parameter
takes precedence over the value set with any
corresponding kernel parameter (for example,
SHMALL in Linux environments). Because of this,
the value set with the
SharedMemoryAccess.MaxSharedMemorySize parameter
should never be higher than the value set with the
corresponding kernel parameter.
SQL section
Table 56. SQL parameters
InfoFileFlush If set to yes, flushes info file after every yes RW/Startup
write operation
InfoFileName Default info file name. The default name is soltrace.out RW/Startup
soltrace.out. Since the soltrace.out file
may contain information from several
sources, we recommend that you explicitly
set InfoFileName to another name if you
set the Info or SQLInfo parameters to a
number larger than 0.
2 (REPEATABLE READ)
1 (READ COMMITTED)
MaxBlobExpressionSize Certain string operations use only the first 1024KB (1MB) RW/Startup
N bytes of a character value, not the entire
value. For example, the LOCATE() Unit: 1 KB m=MB
operation checks only the first N bytes of
the string. If you want to tell the server to
check further into (or less far into) long
strings, you may set this parameter.
Srv section
Table 57. Srv parameters
timed_command ::=
[ day ] HH:MM argument
day ::= sun |
mon |
tue |
wed |
thu |
fri |
sat
For example:
At = 20:30 makecp,
21:00 backup,
sun 23:00 shutdown
Unit: seconds
HealthCheckTimeout This parameter sets the deadlock detection 60 RW
timeout time.
Unit: seconds.
MaxConstraintLength This parameter controls the maximum number 254 (254 bytes = RW
of bytes that the server will search through in a 254 ASCII
string, for example in WHERE clauses such as: characters, or 127
WHERE LOCATE(sought_string, Unicode
column1) > 0; characters)
CHAR(#)
VARCHAR(#)
LONG VARCHAR
MaxRPCDataLen This allows you to specify the maximum string 512K (524288) RW/Startup
length of a single SQL statement sent to the
server. This is particularly useful if you are
sending CREATE PROCEDURE commands that
are longer than 64K. The value should be
between 64K (65536) and 1024K (1048576). If
the value is less than 64K, the server will use a
minimum of 64K.
MemoryReportLimit This parameter defines the minimum size for 0 (no reporting) RW/Startup
memory allocations after which reporting to
solmsg.out is done.
NetBackupRootDir Sets the root directory for the network backups The working RW
in NetBackup Server. The path is relative to the directory
working directory.
The ProcessMemoryWarningPercentage
parameter value is automatically checked for
consistency. It must be lower than the
ProcessMemoryLowPercentage parameter value.
Synchronizer section
Table 58. Synchronizer parameters
For example,
ConnectStrForMaster=
tcp replicahost 1316
MasterStatementCache: The size of the statement cache used during one propagation in
Master. The statement cache is used to store prepared statements received by Master in
one propagation from Replica. Factory value: 10. Access mode: RO.
1. READ COMMITTED
2. REPEATABLE READ
Generally, the factory value settings offer the best performance and operability, but
in some special cases modifying a parameter will improve performance. You can
change the parameters by editing the solid.ini configuration file.
The parameter values set in the client-side configuration file take effect each
time an application issues a call to the SQLConnect ODBC function. If the values are
changed in the file during the program's run time, they affect the connections
established thereafter.
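For example, a minimal client-side solid.ini sketch; the host name hobbes, the port, and
the values are illustrative:
[Com]
Connect = tcp hobbes 1964
ConnectTimeout = 5000
Trace = yes
TraceFile = clienttrace.out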
Client section
Table 59. Client parameters
Communication section
Table 60. Communication parameters
This value is used also when the SQLConnect() call is issued with an
empty data source name.
ConnectTimeout: The client-side Com.ConnectTimeout parameter defines the login timeout
in milliseconds. The value of the parameter can be overridden with the connect string
option -c or the ODBC attribute SQL_ATTR_LOGIN_TIMEOUT. Note: This parameter applies for
the TCP protocol only. Factory value: OS-specific.
SocketLinger: This parameter controls the TCP socket linger (SO_LINGER) behavior after a
close on the socket connection is issued. It indicates whether the system attempts to
deliver any buffered data (yes) or discards it (no) when a close() is issued. Factory
value: no.
SocketLingerTime: This parameter defines the length of the time interval (in seconds)
the socket lingers after a close is issued. If the time interval expires before the
graceful shutdown sequence completes, an abortive shutdown sequence occurs (the data is
discarded). The default value zero indicates that the system default is used (typically,
1 second). Factory value: 0.
Trace If this parameter is set to yes, trace information about network messages no
for the established network connection is written to a file specified with
the TraceFile parameter.
TraceFile If the Trace parameter is set to yes, trace information about network soltrace.out
messages is written to a file specified with this TraceFile parameter. (written to the
current working
directory of the
server or client
depending on which
end the tracing is
started)
logical name = network name: These parameters can be used to give a logical name to a
solidDB server in a solid.ini file of the client application. Factory value: N/A.
The SMA driver signal handler installs itself when the first
SMA connection is established and uninstalls itself when
the last SMA connection is closed. Previously installed
signal handlers are retained.
Signals This parameter defines the signals that can break the SMA Linux and UNIX: NA
connection and should be handled by the SMA driver. SIGINT, SIGTERM
The signals are defined as integers or with the following Windows: SIGINT
mnemonics: SIGSTOP, SIGKILL, SIGINT, SIGTERM,
SIGQUIT, SIGABORT.
Note: If the SMA application loops outside of the SMA
driver (for example, does not call any functions), the
signal can fail to terminate the application. In such a case:
1. Throw out the connections at the server.
admin command ’throwout <userid>’
2. Use SIGKILL signal to force the SMA application to
exit.
kill -SIGKILL <pid>
TransparentFailover section
Table 63. TransparentFailover parameters
[TransparentFailover] Description Factory value
ReconnectTimeout This parameter specifies how long (in milliseconds) the driver 10000
should wait until it tries to reconnect to the primary in case of
TF switchover or failover. If the driver cannot find the new
primary (reconnect), an error is returned and the TF connection
becomes broken.
WaitTimeout This parameter specifies how long (in milliseconds) the driver 10000
should wait for the server to switch state. When the driver tries
to reconnect to the servers, it might connect to the server being
in an intermediate (switching or uncertain) state.
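For illustration, both timeouts could be lengthened in the client-side solid.ini as
follows; the values are examples only and are given in milliseconds:
[TransparentFailover]
ReconnectTimeout = 30000
WaitTimeout = 30000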
The options are listed below with their descriptions and examples.
-c directory
Changes the working directory.
Example:
solid -c /data/solid
-x execute: file_name
Prompts for the database administrator's user name and password, creates a new database,
executes SQL statements from a file, and exits. The options -U and -P can be used to
give the database administrator's user name and password.
Examples:
solid.exe -x execute:init.sql
solid.exe -x execute:init.sql -Udba -Pdba
-x executeandnoexit: file_name
Prompts for the database administrator's user name and password, creates a new database,
executes SQL statements from a file, but does not exit.
Examples:
solid.exe -x executeandnoexit:init.sql
solid.exe -x executeandnoexit:init.sql -Udba -Pdba
-x exit
Prompts for the database administrator's user name and password, creates a new database,
and exits. Options -U and -P can be used to give the database administrator's user name
and password.
Examples:
solid.exe -x exit
solid.exe -x exit -Udba -Pdba
-? Help = Usage
-h Help = Usage
Appendix E. Error codes
This appendix lists error and message codes that can be generated by the server.
This appendix lists the errors and messages according to the error class, following
the order the error descriptions appear in the ADMIN COMMAND ’errorcode all’
output.
Error classes
Table 65. solidDB error categories
System System errors are detected by the operating system and demand administrative actions.
For the list of errors, see “solidDB system errors” on page 229.
Database or DBE (database engine): The errors in these classes are detected by solidDB
and may demand administrative actions. Messages typically do not require administrative
actions.
For the list of errors and messages, see “solidDB database errors” on page 232 and
“solidDB DBE (database engine) errors and messages” on page 291.
Table or TAB (table) These errors and messages are caused by erroneous SQL statements detected by solidDB.
Administrative actions are not needed.
For the list of errors and messages, see “solidDB table errors” on page 241 and “solidDB TAB (table)
messages” on page 299.
Communication, COM, Session, or RPC: The communication-type errors are caused by network
problems, faulty configuration of the solidDB software, or ping facility errors. The
errors in these classes usually demand administrative actions. Messages typically do not
require administrative actions.
Server These errors are caused by erroneous administrative actions or client requests. They may demand
administrative actions.
For the list of errors, see “solidDB server errors” on page 260
Procedure These errors are encountered when defining or executing a stored procedure. Administrative actions
are not needed.
For the list of errors, see “solidDB procedure errors” on page 266.
SA API The SA API errors are return codes for the SA function SaSQLExecDirect.
For more information, see “solidDB API errors” on page 269 and SaSQLExecDirect in the IBM solidDB
Programmer Guide.
Sorter or XS: These errors are encountered when the external sorter algorithm is solving
queries that require ordering rows.
For the list of errors, see “solidDB sorter errors” on page 269 and “solidDB XS
(external sorter) errors and messages” on page 298.
Synchronization or SNC: For the list of errors, see “solidDB synchronization errors” on
page 271 and “solidDB SNC (synchronization) messages” on page 297.
HotStandby or HSB The HotStandby errors occur when using the ADMIN COMMAND ’HotStandby’ commands.
For the list of errors, see “solidDB HotStandby errors” on page 286 and “solidDB HSB (HotStandby)
errors and messages” on page 295.
SSA (solidDB SQL API) These errors are caused by erroneous use of the solidDB SQL API (SSA). solidDB ODBC and JDBC
drivers are implemented on this API.
For the list of errors, see “solidDB SSA (SQL API) errors” on page 287
CP (checkpoint) The CP messages provide information about the status or conditions of checkpoint operations.
For the list of messages, see “solidDB CP (checkpoint) messages” on page 293.
BCKP (backup) The BCKP messages provide information about the status or conditions of backup operations.
For the list of messages, see “solidDB BCKP (backup) messages” on page 293.
AT (timed commands) The AT messages provide information about the status or conditions of executing timed commands.
For the list of messages, see “solidDB AT (timed commands) messages” on page 293.
LOG (logging) The LOG messages provide information about the status or conditions of transaction logging.
For the list of messages, see “solidDB LOG (logging) messages” on page 294.
INI (configuration file) The INI messages provide information about the use of the solid.ini configuration file.
For the list of messages, see “solidDB INI (configuration file) messages” on page 294.
FILE (file system) The FILE messages provide information about file system operations, for example, for database and
log files.
For the list of messages, see “solidDB FIL (file system) messages” on page 298.
SMA (shared memory The SMA messages provide information about operations when solidDB is used with shared memory
access) access.
For the list of errors, see “solidDB SMA (shared memory access) errors” on page 299.
PT (passthrough) The PT errors provide information about operations when solidDB is used with SQL passthrough.
For the list of messages, see “solidDB PT (passthrough) errors” on page 299.
SQL errors: These errors are caused by erroneous SQL statements detected by the solidDB
SQL Parser. Administrative actions are not needed.
For the list of errors, see “solidDB SQL errors” on page 300.
Executable errors: These errors are caused by a failure of the solidDB executable or by
an error related to a command line argument. They enable implementing intelligent error
handling logic in system startup scripts.
For the list of errors, see “solidDB executable errors” on page 306.
solidDB Speed Loader (solloado or solload): These errors are encountered when running
the solidDB Speed Loader utility (solloado or solload) to load data from external files
into the solidDB database.
For the list of errors, see “solidDB Speed Loader (solloado and solload) errors” on page 307.
In addition to the errors and messages described above, you might receive an
internal error. In such a case, contact solidDB Technical Support at
http://www.ibm.com/software/data/soliddb/support/.
The server is unable to open the database file. Reason for the failure can be:
v The database file has been set to read-only.
v You do not have rights to open the database file in write mode.
v Another solidDB is using the database file.
The server is unable to write to the disk. The database files may have a read-only
attribute set or you may not have rights to write to the disk. Add rights or unset
read-only attribute and try again.
System Fatal Error
11002 File write failed, disk full.
The server failed to write to the disk, because the disk is full. Free disk space or
move the database file to another disk. You can also split the database file to several
disks using the IndexFile.FileSpec parameter.
11003 System Fatal Error
File write failed, configuration exceeded.
Writing to the database file failed because the maximum database file size set in
IndexFile.FileSpec parameter has been exceeded.
See “Troubleshooting database file size (file write fails)” on page 151 for more details.
11004 System Fatal Error
File read failure.
An error occurred reading a file. This may indicate a disk error in your system.
11005 System Fatal Error File read beyond end of file.
This error is given, if the file EOF is reached during the read operation.
11006 System Fatal Error File read failed, illegal file address.
An error occurred reading a file. This may indicate a disk error in your system, or
insufficient file read or write permissions.
With this error, the following type of error message can be written to the solmsg.out
file:
SsBOpenLocal failed, file
’/home/solid/sol00001.log’,
error = 13, retries = 0, open files = 1
The error 13 refers to an operating system error code 13 which is defined as:
#define EACCES 13 /* Permission denied */
This means that the solid process does not have operating system permissions to
read or create the file.
System Fatal Error File lock failure.
11007
The server failed to lock the database file.
System Fatal Error
11008 File unlock failure.
This error is given when reading data from disk to memory, but the memory space is
already allocated for another purpose.
System Error
11010 Too long file name.
This error is given if read or write fails when handling a binary stream object.
System Error
11024 Desktop license is only for local communication, cannot use name for listening.
System Error
11025 License file filename is not compatible with this server executable.
The server has been started with an incompatible license file. You need to update
your license file to match the server version.
System Error
11026 Backup directory contains a file which could not be removed.
Some file could not be removed from the backup directory. The backup directory
may point to a wrong location.
Parameter was not found from the specified section in the solid.ini file.
System Error
11028 No such parameter section.name.
This error is given if solidDB attempts to use a file whose given size is larger than
the size that solidDB can use.
System Error
11040 Password file cannot be opened.
This error is given if solidDB cannot find the database password file.
System Error
11041 No password found in password file.
This error is given if the database password is not in the password file.
11042 System Error Internal error: Empty diagnostic record. Contact technical support for more
information.
Internal error: a key value cannot be found from the database index.
10002 Database Error
Operation failed.
This is an internal error indicating that the index of the accessed table is in an
inconsistent state. Try to drop and create the index again to recover from the error.
You may also receive this error if you try to SET TRANSACTION READ ONLY when the
transaction already contains some write operations.
10004 Database Error
Redefinition.
This error may also occur during recovery: either an index or a view has been redefined
during recovery. The server is not able to do the recovery. Delete log files and start the
server again.
10005 Database Error
Unique constraint violation.
You have violated a unique constraint. This happens when you have tried to insert or
update a column which has a unique constraint and the value inserted or updated is not
unique.
This error message applies not only to user tables, but also to the system tables. For
example, if you try to create a table that has the same name as an existing table, you may
see this message. The same applies to other database object names, such as names of
users, roles, and triggers.
Two separate transactions have modified the same row in the database simultaneously. This
has resulted in a concurrency conflict.
The error is returned when the tables are set with optimistic concurrency control and two
or more concurrent connections attempt to obtain an exclusive lock on the same row or set
of rows at the same time (the same row in the database is being modified simultaneously).
The transaction that is committed first is allowed to make its modifications to the
database. The later transactions are rolled back and this error message is returned to the
application. To handle this update conflict, the application could, for example, try to
re-read the data and retry the update.
You can also switch to the pessimistic locking method, where row-level locking is used to
avoid update conflicts. The pessimistic locking mode is suggested for tables that are
modified frequently. To turn pessimistic locking on for a table, use the ALTER TABLE
statement, as sketched below.
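For illustration only, assuming a frequently updated user table named ORDERS (a hypothetical name), the locking mode could be switched with statements of the following form; check the ALTER TABLE syntax in the solidDB SQL Guide for your server version:
ALTER TABLE ORDERS SET PESSIMISTIC;
ALTER TABLE ORDERS SET OPTIMISTIC;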
10007 Database Error
Transaction is not serializable.
This error occurs when the server has crashed in the middle of creating a new database.
Delete the database and log files and try to create the database again.
10011 Database Fatal Error
Database headers are corrupted.
The headers in the database are corrupted. This may be caused by a disk error or other
system failure. Restore the database from the backup.
10012 Database Fatal Error
Node split failed.
This error is given if the node split of the in-memory database (B+ tree) fails.
1) Execute conflicting SET TRANSACTION statements, for example, you executed SET
TRANSACTION READ WRITE after you already SET TRANSACTION READ ONLY
within the same transaction.
3) Write inside a transaction that is set read-only. Remove the write operation or unset the
read-only mode in the transaction.
If you see this message in the first transaction that you try to execute after connecting to a
server, and if you haven't done anything to set the transaction or server to read-only
mode, then try simply executing a COMMIT WORK statement and then re-executing the
statement that caused the 10013 error.
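As an illustrative sketch of the workaround described above (the table and column names are hypothetical), first commit the current transaction and then re-execute the statement that failed:
COMMIT WORK;
UPDATE accounts SET balance = balance + 10 WHERE id = 42;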
10014 Database Error
Resource is locked.
This error occurs when you are trying to use a key value in an index which has been
concurrently dropped.
10016 Database Error
Log file is corrupted.
One of the log files of the database is corrupted. You can not use these log files. Delete
them and start the server again.
10017 Database Error
Too long key value.
The maximum length of the key value has been exceeded. The maximum value is one
third of the size of the index leaf.
If there are blobs (long varchars or long varbinaries) among the columns, the capacity
requirements for a row can be reduced by storing the blob separately in the blob storage.
However, when storing data in the blob storage, the first 254 bytes are also stored on the
actual row. Therefore, with an 8K block size, as few as 11 varchar columns with 254
characters of data each are sufficient to exceed the key value limitation and cause this
error message.
You have tried to start a backup when a backup process is already in progress.
Database Error
10020 Checkpoint creation is active.
You have tried to start a checkpoint when a checkpoint creation is already in progress.
The log file in the database directory is from another solidDB database. Copy the correct
log files to the database directory.
Database Error
10024 Illegal backup directory.
The backup directory is either an empty string or a dot indicating that the backup will be
created in the current directory.
Database Error
10026 Transaction is timed out.
An idle transaction has exceeded the maximum idle transaction time. The transaction has
been aborted.
The maximum idle time is set with the AbortTimeOut parameter in the [Srv] section. The
default value is 120 minutes.
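For example, to allow idle transactions to stay open for three hours, the value could be raised in solid.ini as follows (the value is illustrative; the unit is minutes):
[Srv]
AbortTimeOut=180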
Database Error
10027 No active search.
This error is given during the UPDATE or DELETE operation if it is found that the active
search identifying the data in the database to be updated or deleted does not exist.
Database Error
10028 Referential integrity violation, foreign key values exist.
The definition of a foreign key does not uniquely identify a row in the referenced table.
Database Error
10030 Backup directory 'directory name' does not exist.
Backup directory is not found. Check the name of the backup directory.
Database Error
10031 Transaction detected a deadlock or a lock wait timeout, transaction is rolled back.
The block size of the database file differs from the block size given in the configuration
file solid.ini.
Database Error
10033 Primary key unique constraint violation.
Choose a unique name for a sequence. The specified name is already used.
Database Error
10035 Sequence does not exist.
A create or drop operation is active for the accessed sequence. Finish the current
transaction and then try again.
Database Error
10037 Can not store sequence value, the target data type is illegal.
The valid target data types are BIGINT, INTEGER, and BINARY.
Database Error
10038 Illegal column value for descending index.
Corrupted data found in descending index. Drop the index and create it again.
Database Error
10039 INTERNAL: Assertion failure
For more information, contact solidDB Technical Support at http://www.ibm.com/
software/data/soliddb/support/.
Database Error
10040 Log file write failure, probably the disk containing the log files is full.
Shut down the server and reserve more disk space for log files.
Database Error
10041 Database is read-only.
Database Error
10042 Database index check failed, the database file is corrupted.
Database Error
10043 Database free block list corrupted, same block twice in free list.
Database Error
10044 Primary key can not contain blob attributes.
Database Error
10045 This database is a HotStandby secondary server, the database is read only.
Database Error
10046 Operation failed, data dictionary operation is active. Wait and try again.
Database Error
10047 Replicated transaction is aborted.
Database Error
10048 Replicated transaction contains schema changes, operation failed.
Database Error
10049 Slave server not available any more, transaction aborted
Database Error
10050 Replicated row contains BLOb columns that cannot be replicated.
Database Error
10051 Log file is corrupted.
Database Fatal Error
10052 Cannot convert an abnormally closed database. Use the old solidDB database version to
recover the database first.
Database Error
10053 Table is read only.
Read-only mode can be specified in three ways. To restart solidDB in normal mode, verify
that the following conditions hold (a runtime check example follows the list):
v solidDB process is not started with command-line option -x read only
v solid.ini does not contain the following parameter setting:
[General]
ReadOnly=yes
v license file does not have read-only limitation
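As a hedged runtime check, the effective value of the parameter can also be queried with the parameter command described earlier in this guide:
ADMIN COMMAND 'parameter General.ReadOnly';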
10061 Database Fatal Error
Out of database cache memory blocks.
The solidDB process cannot continue because there is too little cache memory allocated for
the solidDB process. A typical cause for this problem is a heavy load from several
concurrent users. To allocate more cache memory, set the following solid.ini parameter to a
higher value:
[IndexFile]
CacheSize=cache_size_in_bytes
NOTE: Allocated cache memory size should not exceed the amount of physical memory.
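For example, to allocate 256 MB of cache, the setting could look like the following (the value is illustrative; keep it well below the amount of physical memory):
[IndexFile]
CacheSize=268435456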
10062 Database Fatal Error
Failed to write to log filename at offset.
Verify that the disk containing the log files is not full and is functioning properly. Also,
log files should not be stored on shared disks over the network.
Probably your log file directory also contains logs from some other database. solidDB
process cannot continue until invalid log files are removed from the log file directory.
To recover:
v Remove log filename and all other log files with greater sequence numbers.
v Change the value of the Logging.FileNameTemplate parameter to point to a directory
that does not contain any solidDB transaction log files.
Database Fatal Error
10064 Illegal log file name template.
The log file name template contains too few or too many sequence number digit positions.
There should be at least 4 and at most 10 digit positions.
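As an illustration, the following template (the path is hypothetical) uses five digit positions, which is within the allowed range; in solidDB log file name templates the sequence number positions are typically marked with '#' characters:
[Logging]
FileNameTemplate=/home/solid/log/sol#####.log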
Database Fatal Error
10065 Unknown log write mode. Recheck the configuration parameter.
Database Fatal Error
10066 Cannot open log filename. Check the following log file name template in solid.ini:
[Logging]
FileNameTemplate=name
Possibly the database has been deleted without deleting the log files or there are log files
from some other database in the log files directory of the database to be created.
Database Fatal Error
10068 Roll-forward recovery cannot be performed because the configured log file block size
number does not match with block size number of existing filename.
To recover, restore the original log file block size setting and restart the solidDB
process. After successful recovery, you can change the log file block size by performing
these steps:
1. Shut down the solidDB process.
2. Remove old log files.
3. Edit new block size into solid.ini.
4. Restart solidDB.
Database Fatal Error
10069 Roll-forward recovery failed because relation id number was not found. Database has been
irrevocably corrupted. Restore the database from the last backup.
Database Fatal Error
10070 Roll-forward failed because relation id number was not found. Database has been
irrevocably corrupted. Restore the database from the latest backup.
Database Fatal Error
10071 Restore the database from the latest backup.
Database Fatal Error
10072 Database operation failed because of the file I/O problem.
solidDB process cannot use this corrupted log file to recover. In order to continue, you
have the following alternatives:
1. Revert to the last backup
2. Revert to the last checkpoint
3. Revert to the last committed transaction within the last valid log file
Database Fatal Error
10077 No base catalog given for database conversion (use -C catalogname )
A database's base catalog must be provided when converting the database to a new
format.
10078 Database Error User rolled back the transaction.
10079 Database Error Cannot remove filespec. File is already in use.
10080 Database Error HotStandby Secondary server can not execute operation received from Primary server.
Meaning: A possible cause for this error is that the database did not originate from the
Primary server using HotStandby copy or netcopy command.
10081 Database Error The database file is incomplete or corrupt.
Meaning: If the file is on a hot standby secondary server, use the hotstandby copy or
hotstandby netcopy command to send the file from the primary server again.
10082 Database Error Backup aborted.
10083 Database Error Failed to abort HSB transaction because commit is already sent to secondary.
10084 Database Error Table is not locked.
10085 Database Error Checkpointing is disabled.
Database Error
10086 Deleted row not found.
A key value being deleted cannot be found in the b-tree. This is an internal error.
10087 Database Error HotStandby not allowed for main memory tables.
10088 Database Error Specified lock timeout is too large.
10089 Database Error Operation failed, server is in HSB primary uncertain mode.
This error is returned when a transaction tries to access a table whose schema has been
altered by a later transaction. The recommended action is to retry the failing SQL
command in a new transaction.
Database Error
10091 Backup detected a log file with wrong block size, backup aborted.
Database Fatal Error
10092 HotStandby cannot operate when logging is disabled.
Database Fatal Error
10093 HotStandby migration is not possible if Hotstandby is not configured.
Database Fatal Error
10094 Only %d cache pages configured for M-table usage, at least %d needed.
Database Error
10095 Cursor is closed after isolation change.
The current cursor is closed, because its isolation level has been changed.
Database Fatal Error
10096 Only <kilobytes> kilobytes configured for M-table checkpointing, at least <kilobytes>KB
needed.
Aggregate functions SUM and AVG are not supported for character type
parameters.
Table Error
13006 SUM or AVG not supported for DATE type.
Aggregate functions SUM and AVG are not supported for date type parameters.
Table Error
13007 Function function is not defined.
You have referenced a table which does not exist or you do not have
REFERENCES privilege on the table.
Table Error
13013 Table name table conflicts with an existing entity.
Choose a unique name for a table. The specified name is already used.
Table Error
13014 Index index does not exist.
You have tried to update using a cursor, but you do not have a current row in the
cursor.
Table Error
13026 Delete through a cursor with no current row
You have tried to delete using a cursor, but you do not have a current row in the
cursor.
Table Error
13028 View view_name does not exist.
Choose a unique name for a view. The specified name is already used.
You have not specified a value for a column which is defined NOT NULL.
Table Error
13031 Data dictionary operation is active for accessed table or key.
You can not access the table or key, because a data dictionary operation is
currently active. Try again after the data dictionary operation has completed.
Table Error
13032 Illegal type type.
You have tried to create a table with a column having an illegal type.
Table Error
13033 Illegal parameter parameter for type type.
You have entered an illegal integer type constant. Check the syntax of the
statement and try again.
Table Error
13036 Illegal DECIMAL constant constant.
You have entered an illegal decimal type constant. Check the decimal number and
try again.
Table Error
13037 Illegal DOUBLE PREC constant constant.
Typically, this is a general parse error. The SQL statement may contain a syntax
error before the constant. As a last resort, the parser has attempted to parse a
DOUBLE PREC constant, but has failed.
This error also occurs if you entered an illegal double precision type constant.
(More specifically, this error occurs when a space is placed between the asterisk
and the closing parenthesis ("*)") in an optimizer hint.)
In any of these cases, be sure to check the syntax of the statement and try again.
Table Error
13038 Illegal REAL constant constant.
You have entered an illegal real type constant. Check the real number and try
again.
Table Error
13039 Illegal assignment.
You have tried to assign an illegal value for a column. For example, you may have
tried to assign a value that was too large or was of the wrong data type.
Table Error
13040 Aggregate function function is not defined.
A date constant is illegal. The correct form for date constants is: YYYY-MM-DD.
Table Error
13046 Illegal user name user.
User name entered is not legal. A legal user name is at least 2 and at most 31
characters in length. A user name may contain characters from A to Z, numbers
from 0 to 9 and underscore character '_'.
13047 Table Error
No privileges for operation.
You have no privileges for the attempted operation. To carry out this operation,
you must be granted appropriate privileges. Alternatively, the operation can be
performed by another user who already has the appropriate privileges. See the
GRANT statement for more information.
NOTE: If you are trying to drop a catalog that you previously created, and you get
this error message, then your SYS_ADMIN_ROLE (i.e. DBA) privileges have been
revoked. Only the creator of the database or users having SYS_ADMIN_ROLE (i.e.
DBA) have privileges to create or drop a catalog. Even the creator of a catalog
cannot drop that catalog if she loses SYS_ADMIN_ROLE privileges. (Creating a
catalog, unlike creating most other objects such as tables, does not make you the
owner; instead, the ownership of all catalogs belongs to the DBA/SYS_ADMIN_ROLE.)
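As a hedged illustration of granting the missing privileges with the GRANT statement mentioned above (the table, user, and role names below are hypothetical; see the solidDB SQL Guide for the full syntax):
GRANT SELECT, UPDATE ON orders TO hobbes;
GRANT SELECT ON orders TO sales_role;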
13048 Table Error
No grant option privilege for entity name.
Maximum constraint length has been exceeded. Maximum constraint length is 255
characters.
13051 Table Error
Illegal column name column.
You have tried to use an illegal comparison operator for a pseudo column. Legal
comparison operators for pseudo columns are: equality '=' and non-equality '<>'.
13053 Table Error
Illegal data type for a pseudo column.
You have tried to use an illegal data type for a pseudo column. Data type of
pseudo columns is BINARY.
You have tried to compare pseudo column data with non-pseudo column data.
Pseudo column data can only be compared with data received from a pseudo
column.
13055 Table Error
Update not allowed on pseudo column.
You have tried to create an index, but an index with the same name already exists.
Use another name for the index.
13058 Table Error
Constraint checks were not satisfied on column column.
Column has constraint checks which were not satisfied during an insert or update.
13059 Table Error
Reserved system name name.
You tried to use a name which is a reserved system name such as PUBLIC and
SYS_ADMIN_ROLE.
13060 Table Error
User name user not found.
You tried to use a role or user which already exists. User names and role names
must all be different, that is, you can not have a user named HOBBES and a role
named HOBBES.
13064 Table Error
Not a valid user name user.
You tried to create an invalid user name. A valid user name has at least 2
characters and at most 31 characters. A user name may contain characters from A
to Z, numbers from 0 to 9 and underscore character '_'.
13065 Table Error
Not a valid role name role.
You tried to create an invalid role name. A valid role name has at least 2 characters
and at most 31 characters. A role name may contain characters from A to Z,
numbers from 0 to 9 and underscore character '_'.
13066 Table Error
User user not found in role role.
You tried to revoke a role from a user and the user did not have that role.
You have entered a password that is too short. The password must be at least 3
characters long.
13068 Table Error
Shutdown is in progress.
You are unable to complete this operation, because server shutdown is in progress.
13070 Table Error
Numerical overflow.
A numerical overflow has occurred. Check the values and types of numerical
variables.
13071 Table Error
Numerical underflow.
A numerical underflow has occurred. Check the values and types of numerical
variables.
13072 Table Error
Numerical value out of range.
A numerical value is out of range. Check the values and types of numerical
variables.
13073 Table Error
Math error.
A mathematical error has occurred. Check the mathematics in the statement and
try again.
13074 Table Error
Illegal password.
You have tried to enter an illegal role name. A legal role name is at least 2 and at
most 31 characters in length. A user role may contain characters from A to Z,
numbers from 0 to 9 and underscore character '_'.
13077 Table Error
Last column can not be dropped.
You have tried to drop the final column in a table. This is not allowed; at least one
column must remain in the table.
13078 Table Error
Column already exist on table.
Check the search engine. There may be a mismatch between data types.
13080 Table Error
13080 Incompatible types, can not modify column column from type type to type type.
You have tried to modify a column to a data type that is incompatible with the
original definition, such as VARCHAR and INTEGER.
13081 Table Error
Descending keys are not supported for binary columns.
You can not use parameter star (*) with ODBC Scalar Functions.
13083 Table Error
Function function: Too few parameters.
An error was detected during the execution of the function. Check the parameters.
Table Error
13086 Function function: type mismatch in parameter parameter number.
An erroneous type of parameter was detected in the given position of the function
call. Check the function call.
Table Error
13087 Function function: illegal value in parameter parameter number.
An illegal value for a parameter was detected in the given position of the function call.
Check the function call.
Table Error
13088 No primary key for table.
Table Error
13090 Foreign key column column data type not compatible with referenced column data
type.
References specification error. Check that the column data types are compatible
between the referencing and referenced tables.
Table Error
13091 Foreign key does not match to the primary key or unique constraint of the
referenced table.
References specification error. Check that the column data types are compatible
between referencing and referenced tables and that the foreign key is unique for
the referenced table.
Table Error
13092 Event name event conflicts with an existing entity.
Choose a unique name for an event. The specified name is already used.
Table Error
13093 Event event does not exist.
This error occurs if the DBA tries to grant privileges to herself or himself (to the
DBA).
13104 Table Error Sequence name sequence conflicts with an existing entity.
Choose a unique name for a sequence. The specified name is already used.
13105 Table Error Sequence sequence does not exist.
This message occurs if the name of the specified database object (for example, a
table name) does not exist in the schema that you are currently in, but more than
one other schema contains an object with that name.
If the database object that you want is in a different schema than the schema you
are currently in, then change to the appropriate schema by using the SET
SCHEMA command, or specify the desired object by using a more fully qualified
object name, for example:
sales_catalog.jan_wong_schema.table.1
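For instance, the ambiguity could be resolved by switching to the schema used in the example above (a sketch; depending on the server version the schema name may be given as an identifier or a string literal):
SET SCHEMA 'jan_wong_schema';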
13112 Table Error
Foreign keys are not supported with main memory tables.
13113 Table Error
Illegal arithmetic between types datatype and datatype.
13114 Table Error
String operations are not allowed on values stored as BLOBs or CLOBs.
13115 Table Error
Function function_name: Too long value (stored as CLOB) in parameter parameter.
The parameter value was stored as CLOB and cannot be used with a function.
13116 Table Error
Column column_name specified more than once.
Column was specified more than once in the GRANT or REVOKE statement.
13117 Table Error
Wrong number of parameters
Column privileges are allowed only for base tables; they cannot be used, for
example, for views.
13119 Table Error
Types column_type and column_type are not union compatible.
Column types are not union compatible. When a UNION operation is performed,
two columns from two different tables are used to generate one column of output.
The operation is successful as long as the two columns are of the same type or
"compatible" types. Types are compatible if one type can reasonably be converted
into the other type. For example, you can UNION a column of FLOAT with a
column of INT because any integer value can also be represented as a
corresponding float value (for example, 2 can be converted to 2.0). However, if you
attempt a UNION operation on two incompatible types, such as FLOAT and
DATE, you will receive 13119.
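A minimal illustration, assuming two hypothetical tables T1 (F FLOAT) and T2 (I INTEGER, D DATE):
SELECT F FROM T1 UNION SELECT I FROM T2; -- compatible types, succeeds
SELECT F FROM T1 UNION SELECT D FROM T2; -- incompatible types, returns error 13119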
13120 Table Error
Too long entity name 'entity_name'.
Note that the maximum number of columns may be less if each column requires a
large number of bytes.
13122 Table Error
Operation is not supported for a table with sync history.
Operation is not supported because the table has synchronization history defined.
Internal user id was not found; the user may have been dropped.
Table Error
13125 Illegal LIKE pattern 'pattern'.
Comparison failed because at least one of the column values was stored as a BLOB
or CLOB.
Table Error
13128 LIKE predicate failed because value is too long.
Table Error
13132 Too many nested procedures.
Commit or rollback are not supported inside trigger execution. This error is also
given if a trigger calls a procedure that tries to execute commit or rollback
command.
Table Error
13145 Sync parameter not found.
Parameter name given in command SET SYNC PARAMETER name NONE is not
found.
Table Error
13146 There are schema objects for this catalog, drop failed.
The catalog contains schema objects and cannot be dropped. Schema objects such as tables
and procedures must be dropped before the catalog can be dropped.
Table Error
13147 Current catalog can not be dropped.
The catalog that you want to drop must not be the current catalog. If you get this
message, you should switch to another catalog, then re-execute the DROP
CATALOG command.
Table Error
13148 There are objects for this schema, drop failed.
Altering the name of the table would prevent the trigger from working properly.
Table Error
13161 An M-table is being updated with UPDATE ... WHERE CURRENT OF CURSOR
and CURSOR is not declared FOR UPDATE.
When you update an in-memory table (an "M-table") using the command UPDATE
... WHERE CURRENT OF CURSOR, you must have declared the cursor using the
FOR UPDATE clause. This is required when the table is an in-memory table; it is
strongly recommended, but not required, when the table is a disk-based table.
Table Error
13162 A record in an M-table is being deleted with DELETE ... WHERE CURRENT OF
CURSOR and CURSOR is not declared FOR UPDATE.
When you delete a record from an in-memory table (an "M-table") using the
command DELETE ... WHERE CURRENT OF CURSOR, you must have declared
the cursor using the FOR UPDATE clause. This is required when the table is an
in-memory table; it is strongly recommended, but not required, when the table is a
disk-based table.
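A hedged sketch of a cursor declaration that satisfies the FOR UPDATE requirement (the table, column, and cursor names are hypothetical, and the exact cursor syntax depends on the API or procedure language in use):
DECLARE cur_orders CURSOR FOR SELECT id, qty FROM m_orders FOR UPDATE;
UPDATE m_orders SET qty = qty + 1 WHERE CURRENT OF cur_orders;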
Table Error
13163 Descending keys are not supported for bigint columns.
If you try to create a DESCending index on a column of type BIGINT, you will get
this message. Use an ASCending key instead.
Table Error
13164 Transaction is active, operation failed.
Table Error
13165 Cannot fetch previous row from an M-table.
This message can occur only when fetching rows from an in-memory table
("M-table") by using solidDB's low-level SA API.
Table Error
13166 License does not allow accessing M-tables
For more details, see the discussion on persistent and transient tables under the
CREATE TABLE command in the "Solid® SQL Syntax" appendix in solidDB SQL
Guide.
Table Error
13173 A persistent table can not reference a temporary table.
For more details, see the discussion on persistent and transient tables under the
CREATE TABLE command in the "Solid SQL Syntax" appendix in solidDB SQL
Guide.
Table Error
13174 A transient table can not reference a temporary table.
For more details, see the discussion on persistent and transient tables under the
CREATE TABLE command in the "Solid SQL Syntax" appendix in solidDB SQL
Guide.
Table Error
13175 A reference between temporary and non-temporary table is not allowed.
Table Error
13176 Cannot change STORE for a table with sync history.
Table Error
13177 Cannot define UNIQUE constraint with duplicated or implied restriction.
Table Error
13178 Constraint not found.
Table Error
13179 Foreign key actions other than restrict are not supported.
Table Error
13180 Constraint name already exists.
Table Error
13181 Constraint check fails on existing data.
Table Error
13182 Added column with NOT NULL must have a non-NULL default.
Table Error
13183 Index is referenced by foreign key, it cannot be dropped.
Table Error
13184 Primary key not found for table. Cannot define foreign key.
Table Error
13185 Cannot set NOT NULL on column that already has NULL value.
Table Error
13186 Cannot drop NOT NULL on column that is used as part of unique key.
Table Error
13187 The cursor cannot continue accessing M-tables after the transaction has committed
or aborted. The statement must be re-executed.
Table Error
13188 Foreign key refers to itself.
A foreign key creates a dependency between one or more tables in such a way that
an update to one row in one table might cause multiple updates to the same row in
the same or another table. Such an update might be ambiguous, and the server does
not allow the creation of such dependencies.
This restriction does not apply to cascaded deletes (when deletion of one row
causes multiple deletions of another row), but it still applies when the deletion of
one row causes multiple updates (SET NULL or SET DEFAULT) to another row.
13194 Table Error
Can not drop a table that is part of a foreign key
13195 Table Error
Update failed, READ COMMITTED isolation requires FOR UPDATE
13196 Table Error
Delete failed, READ COMMITTED isolation requires FOR UPDATE
13197 Table Error
M-tables are not supported
13198 Table Error
Commit and rollback are not allowed inside function.
13199 Table Error
Duplicate index definition
solidDB returns error 13199 when a new index definition duplicates an existing one,
for example when the second index is a superset of a unique first index, as in the
sketch below. Such a superset index (although it is not explicitly specified as
unique) is also unique. In practice, the second index is useless: it only affects space
consumption and update performance, not lookup performance.
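A hypothetical pair of definitions that illustrates the situation (the table and index names are illustrative):
CREATE UNIQUE INDEX idx_a ON t (a);
CREATE INDEX idx_ab ON t (a, b); -- superset of the unique index above, returns error 13199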
13200 Table Error
Update failed.
Check that the SYS_SERVER table contains correct login data for the back-end.
13452 Table Error Passthrough backend database not available.
solidDB cannot connect to the back-end data server. Check your configuration
settings.
13453 Table Error Passthrough cursors are forward only cursors.
This error is returned to user if the back-end data server reports a failure but
solidDB cannot read the actual error.
This error is caused by violation of the set isolation level. To preserve consistency
of the back-end database when using SQL passthrough, the isolation level of the
front-end must be the same (or similar) or higher than in the back-end.
13456 Table Error Passthrough backend error: SQLState=<value>, NativeError=<back-end error
identifier>, MessageText=<back-end error description>.
13457 Table Error Passthrough error: resultset mismatch.
The table definitions in the front-end and back-end database do not match (for
example, the number of columns is different).
13458 Table Error Passthrough error: parameter mismatch.
The parameters used in an SQL statement do not match when executed in the
front-end and back-end database.
13459 Table Error Passthrough error: Datatype is not supported.
13460 Table Error Server <name> already exists
The back-end login data for the specified server has been created already.
Note: The default name for the back-end data server is 'default'.
13461 Table Error Server <name> not found.
The back-end login data for the specified server does not exist.
13463 Table Error Passthrough error. Distributed transaction must be read only in back-end.
13501 Table Warning String data truncation in assignment from <value> to <value>
13502 Table Warning Numeric value right truncation in assignment from <value> to <value>
This error is returned if the server or client is trying to write to an underlying communication
channel (socket, named pipe, shared memory) that is broken.
20010 Session Error
Read operation failed.
20011 Session Error
Accept operation failed.
20012 Session Error
Network not found.
20013 Session Error
Out of network resources.
20023 Session Error
Too many name resolver requests already in progress.
20024 Session Error
Timeout while resolving host name.
20025 Session Error
Timeout while connecting to a remote host.
An illegal value was given to the parameter parameter. The server will use a
default value for this parameter.
21101 Communication Warning
Invalid protocol definition protocol in configuration file.
The protocol is defined illegally in the configuration file. Check the syntax of
the definition.
21300 Communication Error
Protocol protocol is not supported.
The server was unable to load the dynamic link library or a component
needed by this library. Check the existence of necessary libraries and
components.
21302 Communication Error
Wrong version of dynamic link library library.
The version of this library is wrong. Update this library to a newer version.
The network name specified is not legal. Check the network name.
21306 Communication Error
Server network name not found, connection failed.
The server was not found. 1) Check that the server is running. 2) Check that
the network name is valid. 3) Check that the server is listening to the given
network name.
21307 Communication Error
Invalid connect info network name.
The network name given as the connect info is not legal. Check the network
name.
21308 Communication Error
Connection is broken (protocol read/write operation failed with code internal
code).
The server was not able to establish a new client connection. The protocol is
out of resources. Increase the protocol's resources in the operating system.
21310 Communication Error
Failed to accept a new client connection, listening of network name
interrupted.
The server was not able to establish a new client connection. The listening
has been interrupted.
21311 Communication Error
Failed to start a selecting thread for network name.
A network name has already been specified for this server. A server can not
use the same network name more than once.
21313 Communication Error
Already listening with the network name network name.
You have tried to add a network name to a server when it is already listening
with that network name. A server can not use the same network name more
than once.
The server can not start listening with the given network name. Another
process in this computer is using the same network name.
21315 Communication Error
Cannot start listening, invalid listening info network name.
The server can not start listening with the given listening info. The given
network name is invalid. Check the syntax of the network name.
21316 Communication Error
Cannot stop the listening of network name. There are clients connected.
You can not stop listening of this network name. There are clients connected
to this server using this network name.
21317 Communication Error
Failed to save the listen information into the configuration file.
The server failed to save this listening information to the configuration file.
Check the file access rights and format of the configuration file.
21318 Communication Error
Operation failed because of an unusual protocol return code code.
This is returned in clients if the host machine name given in connect info is
not valid.
21323 Communication Error
Protocol protocol can not be used for listening in this environment.
solidDB uses one connect socket for internal use. Creation of this socket has
failed; the local loopback may not be working correctly.
Meaning: The switch process, catchup process, copy or netcopy process is still active.
14007 Server Return Code CONNECTING
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby status connect'
Meaning: The Primary and Secondary servers are in the process of connecting.
14008 Server Return Code CATCHUP
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby status connect'
Meaning: The Primary server is connected to the Secondary server, but the transaction log is
not yet fully copied. This message is returned only from the Primary server.
14009 Server Return Code No server switch occurred before.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby status switch'
Meaning: The switch process has never happened between the servers.
14501 Server Error
Operation failed.
This error occurs when a timed command fails. Check the arguments of timed commands.
This error number is also used for certain HotStandby errors. See IBM solidDB High
Availability User Guide for details.
14502 Server Error
RPC parameter is invalid.
You have tried to declare a cursor with a cursor name which is already in use. Use another
name.
14505 Server Error
Connect failed, illegal user name or password.
You have entered either a user name or a password that is not valid.
14506 Server Error
The server is closed, no new connections allowed.
A version mismatch has occurred. The client and server are different versions. Use the same
version in the client and the server.
14510 Server Error
Communication write operation failed.
A write operation failed. This indicates a network problem. Check your network settings.
14511 Server Error
Communication read operation failed.
A read operation failed. This indicates a network problem. Check your network settings.
Server Error
14512 There are users logged to the server.
You cannot shut down the server now. There are users connected to the server.
Server Error
14513 Backup process is active.
You cannot shut down the server now. The backup process is active.
Server Error
14514 Checkpoint creation is active.
You cannot shut down the server now. The checkpoint creation is active.
Server Error
14515 Invalid user id.
You tried to drop a user, but the user id is not logged in to the server.
Server Error
14516 Invalid user name.
You tried to drop a user, but the user name is not logged in to the server.
Server Error
14517 Someone has updated the at commands at the same time, changes not saved.
You tried to update timed commands at the same time another user was doing the same.
Your changes will not be saved.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby copy'
To solve this problem, specify the directory as part of the command, for example:
ADMIN COMMAND ’hotstandby copy \Secondary\dbfiles\’
Meaning: The switch process is already active in the HotStandby server. If you only need to
complete the current switch, then wait. If you are trying to switch a second time (that is,
switch back to the original configuration), then you must wait for the first switch to
complete before you can start the second switch.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby switch primary'
v ADMIN COMMAND 'hotstandby switch secondary'
v ADMIN COMMAND 'hotstandby status switch'
14524 Server Error HotStandby databases have a different base database, database time stamps are different.
Meaning: Databases are from a different seed database. You must synchronize databases. You
may need to perform netcopy of the Primary's database to the Secondary.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby connect'
v ADMIN COMMAND 'hotstandby status switch'
14525 Server Error HotStandby databases are not properly synchronized.
Meaning: Databases are not properly synchronized. You must synchronize the databases. You
may need to start one of the database servers (the one that you intend to become the
Secondary) with the command line parameter -x backupserver and then netcopy the
Primary's database to the Secondary.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby connect'
v ADMIN COMMAND 'hotstandby status switch'
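A hedged outline of the resynchronization procedure referred to above (the executable name and commands are illustrative; see the IBM solidDB High Availability User Guide for the authoritative steps). On the server that is to become Secondary, start the process in netcopy listening mode:
solid -x backupserver
Then, on the Primary, send the database and reconnect:
ADMIN COMMAND 'hotstandby netcopy';
ADMIN COMMAND 'hotstandby connect';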
All HotStandby commands can return this error in the result set of the ADMIN
COMMAND.
Note: In the following HotStandby commands, the invalid argument error is a syntax error
when the specified Primary or Secondary server can not apply to the switch:
v ADMIN COMMAND 'hotstandby switch primary'
v ADMIN COMMAND 'hotstandby switch secondary'
14527 Server Error This is a non-HotStandby server.
Meaning: The command was executed on a server that is not configured for HotStandby.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby connect'
v ADMIN COMMAND 'hotstandby status switch'
v ADMIN COMMAND 'hotstandby switch primary'
v ADMIN COMMAND 'hotstandby switch secondary'
v ADMIN COMMAND 'hotstandby state'
14528 Server Error Both HotStandby databases are primary databases.
Meaning: Both databases are Primary. This is a fatal error because there may be conflicting
changes. Both databases are automatically dropped to Secondary state by the system. You
must decide which database is the real Primary database and then synchronize the
databases.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby connect'
v ADMIN COMMAND 'hotstandby status switch'
Server Error
14529 The operation timed out.
Server Error
14530 The connected client does not support UNICODE data types.
Connected client is an old version client that does not support UNICODE data types.
UNICODE data type columns cannot be used with old clients.
Server Error
14531 Too many open cursor, max limit is value.
There are too many open cursors for one client; maximum number of open cursors for one
connection is 1000. The value can be changed using the parameter Srv.MaxOpenCursors=n.
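For example, the limit could be raised in solid.ini as follows (the value is illustrative):
[Srv]
MaxOpenCursors=2000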
14532 Server Error Internal error: cursor synchronization between client and server failed. Contact technical
support for more information.
Server Error
14533 Operation cancelled
Operation was cancelled because the client application called the ODBC or JDBC cancel function.
You get this error when the solidDB process size has exceeded the limit set with parameter
Srv.ProcessMemoryLimit. Only ADMIN COMMANDs are allowed so that you can increase
the process size limit.
Increase the value of Srv.ProcessMemoryLimit or disable the process memory size checking
by setting Srv.ProcessMemoryCheckInterval to 0.
Tip: You can modify the Srv.ProcessMemoryLimit and Srv.ProcessMemoryCheckInterval
parameters dynamically with ADMIN COMMAND 'parameter'.
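A hedged example of adjusting these limits at runtime (the values are illustrative; see the section on the format of configuration parameter names and values for the accepted units):
ADMIN COMMAND 'parameter Srv.ProcessMemoryLimit=2GB';
ADMIN COMMAND 'parameter Srv.ProcessMemoryCheckInterval=0';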
14535 Server Error Server is already a primary server.
Meaning: The server you are trying to switch to Primary is already in one of the PRIMARY
states.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby switch primary'
14536 Server Error Server is already a secondary server.
Meaning: The server you are trying to switch to Secondary is already in one of the
SECONDARY states.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby switch secondary'
14537 Server Error HotStandby connection is broken.
Meaning: This command is returned from both the Primary and Secondary server.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby status connect'
v ADMIN COMMAND 'hotstandby connect'
One possible cause of this problem is an incorrect Connect string in the Secondary's
solid.ini file. If the netcopy operation succeeds but the connect command fails, check the
Connect string. (Netcopy does not require the Secondary to open a separate connection to
the Primary, and thus may succeed even if the Connect string on the Secondary is wrong.)
14538 Server Error Server is not HotStandby primary server.
Meaning: To issue this command, the server must be a HotStandby Primary server.
ADMIN COMMANDs that may return this status in the result set of the command:
v ADMIN COMMAND 'hotstandby copy copy_directory'
v ADMIN COMMAND 'hotstandby netcopy'
v ADMIN COMMAND 'hotstandby connect'
v ADMIN COMMAND 'hotstandby set primary alone'
v ADMIN COMMAND 'hotstandby set standalone'
This error code is given when one of the following situations occurs:
v The user issued a netcopy command to a Primary server, but the server that should be
Secondary is not actually in a Secondary state, or is not in "netcopy listening mode". (Both
the Primary and the "Secondary" server are probably in PRIMARY ALONE state.)
To solve the problem, restart the "Secondary" with the -x backupserver command-line
option, then try again to issue the netcopy command to the Primary.
Attention: If both servers were in PRIMARY ALONE state, and if both servers executed
transactions while those servers were in PRIMARY ALONE state, then they probably each
have data that the other one does not. This is a serious error, and doing a netcopy to put
them back in sync would result in writing over some transactions that have already been
committed in the "Secondary" server.
v This message can be generated when you use a callback function and the callback
function refuses to shut down or accept a backup or netcopy command.
When you use linked library access, you can provide "callback" functions by using the
SSCSetNotifier function. Your callback functions will be notified when the server has been
commanded to shut down or to do a netcopy operation. If for some reason your
application doesn't want the command to be followed, then your callback can return a
value that cancels the command. In this situation, you will see error 14539.
To solve the problem, wait until the client code finishes the operation that it does not
want to interrupt, then retry the command (for example, the shutdown or netcopy).
14540 Server Error Server is already a non-HotStandby server.
14541 Server Error HotStandby configuration in solid.ini conflicts with ADMIN COMMAND 'HSB SET STANDALONE'.
14542 Server Error Server in backupserver mode. Operation refused.
14543 Server Error Invalid command. The database is a HotStandby database, but the HotStandby section is not
found in the solid.ini configuration file.
14544 Server Error Operation failed. This command is not supported on diskless server.
14545 Server Error Primary can only be set to primary alone when its role is primary broken.
14546 Server Error Switch failed. The server or the remote server cannot switch from primary alone to
secondary server. Catchup should be done first before switch.
Meaning: This command is returned when a state switch to SECONDARY is executed from a
local or remote Primary server that is in the PRIMARY ALONE state and it is detected that
the Primary and Secondary server are not in sync. You must connect the Primary server to
the Secondary server and wait for the catchup process to complete before switching the
Secondary to the Primary.
Meaning: This command is returned when a state switch to SECONDARY is executed from a
local or remote Primary server that is in the STANDALONE state and it is detected that the
Primary and Secondary server are not in sync. You must connect the Primary server to the
Secondary server and wait for the catchup to complete before switching the Secondary to the
Primary.
Meaning: If the HotStandby connection is broken, Primary server must be set to alone mode
or switched to secondary mode before shutdown.
14550 Server Error Hotstandby connect parameter can be changed only when the primary is not connected to
secondary.
Error 14552 is returned when a client attempts to establish a connection to a solidDB server
which is in a backup server mode (also called netcopy listening mode). The backup server
mode is a special server mode where the solidDB instance has been started with the
command line option -xbackupserver. This mode indicates that the solidDB instance is a
Secondary server that is either waiting for or in the process of receiving the database file
from the Primary server due to a netcopy command issued at the Primary server.
Server Error
14553 Backup process is not active
This error is given if ADMIN COMMAND 'abort backup' is issued and no backup is active.
Server Error
14554 The server does not support the required Transparent Failover level.
Reserved for future. This error will be reported when the server does not implement the
Transparent Failover (TF) level requested by the application. Currently, there is only one
level.
Server Error
14555 Netbackup: Conflicting usage of backup directory %s.
Server Error
14556 Netbackup: No server connection string specified.
Server Error
14557 Netbackup: A server configured for HotStandby cannot act as a netbackup server.
14558 Server Error Operation not allowed when delete capture is off.
14570 Server Error XID not found.
14571 Server Error XID has not been prepared. Cannot execute two phase commit.
14572 Server Error XID has been prepared. Cannot execute one phase commit.
14600 Server Error Command is ambiguous in cluster session.
14706 Server Error Invalid read thread mode for HotStandby, only mode 2 is supported.
30135 Server Fatal Error SMA application has failed while processing the solidDB server code. Server cannot continue
and is executing emergency shutdown.
You have used a cursor that has not been defined in a procedure definition.
Procedure Error
23003 Illegal SQL operation operation.
Procedure Error
23004 Syntax error: parse error, line line number.
The trigger name conflicts with another database object. Triggers share the same name
space with, for example, tables and procedures.
Procedure Error
23026 Variable variable is of character type, line line number.
A CHAR or WCHAR variable is required for the operations like RETURN SQLERROR
variable.
Procedure Error
23027 Duplicate reference to column column_name in trigger definition.
You will see this message if the number of columns that you selected does not match
the number of variables in the INTO clause.
Procedure Error
23503 Previous SQL operation operation failed in cursor cursor.
Procedure Error
23504 Cursor cursor is not executed.
Procedure Error
23505 Cursor cursor is not a SELECT statement.
Procedure Error
23506 End of table in cursor cursor.
Procedure Error
23508 Illegal assignment, line line number.
Procedure Error
23509 In procedure line line number Stmt statement was not in error state in RETURN
SQLERROR OF ...
Procedure Error
23510 In procedure line line number Transaction cannot be set read only, because it has written
already.
Procedure Error
23511 In procedure line line number USING part is missing for dynamic parameters for
procedure.
Procedure Error
23512 In procedure line line number USING list is too short for procedure.
Procedure Error
23513 In procedure line line number Comparison between incompatible types data type and data
type.
Procedure Error
23514 In procedure line line number type data type is illegal for logical expression.
Procedure Error
23515 In procedure line line number assignment of parameter parameter in list list failed.
One possible cause of this error is trying to bind a parameter in a prepared statement
that has a clause like "...? IS NULL...". To work around this problem, we recommend
that you cast the placeholder (the question mark) to the appropriate data type. For
example, if you are binding a parameter of type TIMESTAMP, then replace
WHEN ? IS NULL
with
WHEN CAST(? AS TIMESTAMP) IS NULL
Procedure Error
23516 In CALL procedure, assignment of parameter parameter failed.
23517 Procedure Error Internal error: illegal operation code in procedure. Contact technical support for more
information.
Procedure Error
23518 User error: error_text
User generated error in a procedure or trigger. User can generate this error by using a
statement RETURN SQLERROR string or RETURN SQLERROR variable. Variable must
be of CHAR or WCHAR type.
Fetch previous row does not work for result sets returned by a procedure.
Procedure Error
23520 Invalid link name given in remote procedure call.
Procedure Error
23521 Link name not given in remote procedure call.
Procedure Error
23522 Dynamic parameters not allowed with remote procedure call.
Procedure Error
23523 Default node not defined.
Procedure Error
23524 Could not load application.
Procedure Error
23525 Function not found from the DLL.
Procedure Error
23526 In CALL <procedure_name> assignment of default value of parameter
<parameter_number> failed.
This error message occurs if you call a procedure with too few parameters and you
have not specified default values for the missing parameters.
Procedure Error
23527 In CALL <procedure_name> parameter <parameter_number> assigned twice.
This occurs if you specify the same parameter more than once.
Procedure Error
23528 Application is already running.
Procedure Error
23529 Application is not running.
Synchronization Error
25004 Dynamic parameters are not supported.
Synchronization Error
25005 Message message_name is already active.
The message is automatically deleted when the reply of the message has
been successfully executed in the replica database.
Synchronization Error
25006 Message message_name not active
You must first remove the message with MESSAGE message_name DELETE
command. Then switch autocommit off and run the script again.
Synchronization Error
25009 Replica replica_name not found
Synchronization Error
25010 Publication publication_name not found.
Synchronization Error
25011 Wrong number of parameters to publication publication_name.
Synchronization Error
25012 Message reply timed out.
A reply message has not arrived at the replica database within the given
timeout period. The reason is that the reply message is not yet ready in
the master database. The message needs to be retrieved later using the
"MESSAGE message_name GET REPLY" command.
The message with the given name does not exist. The message name is
given when the message is created with command MESSAGE
message_name BEGIN. The message name is released when the reply
message has been successfully executed in the replica database.
Note: See the CREATE PUBLICATION syntax reference for correct syntax.
Synchronization Error
25016 Message not found, replica id replica_id, message id message_id
Message not found in master during processing. This can happen if the
message is explicitly deleted in master.
Synchronization Error
25017 No unique key found for table table_name.
The primary key for the table has not been defined.
Synchronization Error
25019 Database is not a replica.
Synchronization Error
25021 Database is not master or replica database.
The execution of a transaction has been cancelled and rolled back in the
master database. Because of the failed transaction, the execution of the
message that contained the transaction has been stopped.
Synchronization Error
25023 Replica registration failed.
Synchronization Error
25024 Master not defined.
Note that this error can be produced if you use double quotes rather than
single quotes around the master_connect_string in a MESSAGE FORWARD
command. solidDB statements that return this error:
IMPORT ’filename’
MESSAGE message_name FORWARD TO ’master_connect_string’
TIMEOUT timeout_in_seconds
Note: The use of double quotes rather than single quotes around the
master_connect_string can produce this error message.
MESSAGE message_name GET REPLY ...
MESSAGE message_name APPEND REFRESH publication_name
MESSAGE message_name EXECUTE ....
Synchronization Error
25026 A user who has not been defined in the master database attempts to
perform a solidDB SQL command.
To resolve this problem, use the correct user ID if there is one. If there is
not already a correct user ID, then you have two options:
1) Map a master user to the replica userid you are using. (The master user
must already have been downloaded from the master to the replica.) To
map a master user to a replica user, execute the command:
ALTER USER replica_user SET MASTER master_name
USER user_specification
Synchronization Error
25027 Too long column or parameter value; configured maximum is <value>
Synchronization Error
25028 Message message_name can include only one system subscription.
Synchronization Error
25032 All publication SQL statements must return rows.
Synchronization Error
25033 Publication publication_name already exists.
Synchronization Error
25034 Message name message_name already exists.
Each message must have a name that is unique within the database.
Synchronization Error
25035 Message message_name is in use.
Synchronization Error
25037 Publication column count mismatch in table table_name.
Synchronization Error
25038 Table is referenced in publication publication_name; drop or alter operations
are not allowed.
Synchronization Error
25039 Table is referenced in subscription to publication publication_name; drop or
alter operations are not allowed.
Synchronization Error
25040 User id user_id is not found.
Synchronization Error
25042 Message is too long (number bytes) to forward. Maximum is set to number
bytes.
Synchronization Error
25043 Reply message is too long (number bytes). Maximum is set to number
bytes.
Synchronization Error
25044 SYNC_CONFIG system publication takes only character arguments.
Synchronization Error
25045 Master/replica node support disabled.
Synchronization Error
25046 Commit and rollback are not supported in propagated transactions.
Synchronization Error
25049 Referenced table table_name not found in subscription hierarchy.
Synchronization Error
25050 Table has no history.
Synchronization Error
25051 Unfinished messages found.
An attempt was made to switch replica mode off while there are
messages either waiting to be forwarded or being executed at the master.
Synchronization Error
25052 Failed to set node name to node_name.
A table in the master database has the SYNCHISTORY property set, but
the corresponding table in the replica does not.
Synchronization Error
25055 Connect information is allowed only when not registered.
Synchronization Error
25057 Already registered to master master_name.
Synchronization Error
25058 Missing connect information.
Synchronization Error
25059 After registration nodename cannot be changed.
Synchronization Error
25060 Column column_name does not exist on publication publication_name
resultset in table table_name.
This error occurs when a replica finds out that the master is transferring
data that does not include primary key values that the replica requires.
Synchronization Error
25061 Where condition for table table_name must refer to an outer table of the
publication.
Dropping the user mapping failed because the user is not mapped to the given
master.
Synchronization Error
25063 User user_id is already mapped to master user_id.
Synchronization Error
25064 Unfinished message message_name found for replica replica_name.
Synchronization Error
25065 Unfinished message message_name found for master master_name.
Synchronization Error
25066 Synchronization bookmark bookmark_name already exists.
Synchronization Error
25067 Synchronization bookmark bookmark_name not found.
Synchronization Error
25068 Export file file_name open failure.
Synchronization Error
25070 Statements can be saved only for one master in transaction.
Synchronization Error
25071 Not registered to publication publication_name.
Synchronization Error
25072 Already registered to publication publication_name.
Synchronization Error
25073 Export file can have data only from one master.
Synchronization Error
25074 User definition not allowed for this operation.
Synchronization Error
25075 Transaction not found.
Synchronization Error
25076 Only REGISTER REPLICA is allowed in message.
Synchronization Error
25077 Node name is not valid.
Synchronization Error
25078 Node name already exists.
Synchronization Error
25083 Commit block can not be used with HotStandby.
Synchronization Error
25084 Can not save ADMIN COMMAND.
Synchronization Error
25085 Failed to store blob from message.
Synchronization Error
25088 Catalog already in maintenance mode. You have set the mode on already.
Synchronization Error
25089 Not allowed to set maintenance mode off. Someone else has set the mode
on, so you cannot set it off.
Synchronization Error
25090 Catalog already in maintenance mode. Someone else has set the mode on,
so you cannot set it off.
Synchronization Error
25091 Catalog is not in maintenance mode. You tried to set the mode off when it
was not on.
Synchronization Error
25092 User version strings are not equal in master and replica, operation failed.
The server checks whether the master and replica sync schema version
numbers are equal. If the version numbers are not equal, then the server
gives this error. (Note: If neither the master nor the replica has set the
version number, then you will not get the error message.)
14701 HotStandby Error Rejected connection, both servers in SECONDARY role.
Meaning: Command 'hsb connect' returns this error if both nodes are in
same role.
14702 HotStandby Error Operation failed, catchup is active.
Meaning: While the servers are performing catchup, you will get this error if
you issue any of the following commands on the Primary: 'hsb switch
secondary', 'hsb set secondary alone', 'hsb set standalone', 'hsb
connect', 'hsb copy' or 'hsb netcopy'.
While the servers are performing catchup, you will get this error if you issue
any of the following commands on the Secondary: 'hsb switch primary', 'hsb
set secondary alone', 'hsb set primary alone', 'hsb set standalone', or
'hsb connect'.
14703 HotStandby Error Operation failed, copy is active.
Meaning: This error is returned if the server is in PRIMARY ACTIVE state and
the command 'hsb copy' or 'hsb netcopy' is issued.
14705 HotStandby Error Setting to STANDALONE is not allowed in this state.
Meaning: If the server is in PRIMARY ACTIVE state and you issue the
command 'hsb set standalone', then you will get this message.
14706 HotStandby Error Invalid read thread mode for HotStandby, only mode 2 is supported.
14707 HotStandby Error Operation not allowed in the STANDALONE state.
14708 HotStandby Error Catchup failed, catchup position was not found from log files.
14709 HotStandby Error Hot Standby enabled, but connection string is not defined.
14710 HotStandby Error Hot Standby admin command conflict with an incoming admin command.
14711 HotStandby Error Failed because server is shutting down.
14712 HotStandby Error Server is secondary. Use primary server for this operation.
This error applies to the ODBC driver. It is given if an attempt is made to use an inappropriate buffer type for reading values (such as reading a string into an integer value). This error is documented in more detail in the ODBC specification.
25201 SSA Error Invalid use of null pointer
This error is given if an invalid parameter (NULL) is passed as a statement handle, connection handle, or application buffer.
25202 SSA Error Function sequence error
This error is given if an attempt is made to violate the ODBC function call sequence. This can happen, for example, when trying to execute a statement that has not been prepared.
25203 SSA Error Invalid transaction operation code
This error is given if an attempt is made to use an incorrect transaction completion code with the SQLEndTran function (only SQL_COMMIT and SQL_ROLLBACK are allowed).
25204 SSA Error Invalid string or buffer length
This error is given if zero or a negative buffer size is passed to an ODBC function that requires an application buffer.
25205 SSA Error Invalid attribute/option identifier
This error is given if an invalid operation code is passed to a function such as SQLSetPos, SQLDriverConnect, or SQLFreeStmt.
25206 SSA Error Connection timeout expired
25207 SSA Error Invalid cursor state
This error is given, for example, if an attempt is made to fetch with a closed cursor.
25208 SSA Error String data, right truncated
This error is given when updating a date or time column with incorrect data.
25210 SSA Error COUNT field incorrect
This error is given, for example, when trying to pass an extra parameter to an insert statement.
25211 SSA Error Invalid descriptor index
This error is given, for example, when using zero or a negative value as the SQLBindParameter column index.
25212 SSA Error Client unable to establish a connection
This error is given, for example, when trying to reconnect an already connected connection.
25214 SSA Error Connection does not exist
This error is given, for example, when trying to use a closed or unconnected connection.
25215 SSA Error Server rejected the connection
The transport layer connection to the server has been established, but the server rejects the connection (for example, because it is shutting down).
25216 SSA Error Connection switch, some session context may be lost
This is a TF-1 specific error. A TF-1 connection has encountered a connection switch. The application must roll back the transaction to restore the connection.
25217 SSA Error Client unable to establish a primary connection
This is a TF-1 specific error. The ODBC driver has not been able to establish a connection to the primary server, for example, after an application rolled back a transaction after a failover, or if there is no primary server address in the TF-1 connection string (all the reachable servers are secondary).
25404 SSA Error COUNT field incorrect
25406 SSA Error Invalid descriptor index
25411 SSA Error String data
25416 SSA Error Datetime field overflow
25418 SSA Error Invalid cursor state
25424 SSA Error Invalid application buffer type
25427 SSA Error Invalid use of null pointer
25428 SSA Error Function sequence error
25429 SSA Error Invalid transaction operation code
25432 SSA Error Invalid string or buffer length
25434 SSA Error Invalid attribute/option identifier
25448 SSA Error Connection timeout expired
30023 COM Message Your client does not support the Unicode database mode; update the client to the same version as the server.
30150 SRV Fatal Error This error is given if the solidDB server cannot be started.
30151 SRV Message Database started.
30152 SRV Message Memory allocation size has exceeded <value>MB. Current size: <value> bytes. Number of allocations: <value>.
30153 SRV Message Memory allocation size has fallen below <value>MB. Current size: <value> bytes. Number of allocations: <value>.
30154 SRV Message Statement (id: <userid> userid: <type> type: <value>) has allocated <value> bytes of memory SQL: <value>.
30155 SRV Message Process size <virtual_size> is <above|below> the <warning_level|limit|low_level> <value>
30156 SRV Message Server health check monitoring started.
30158 SRV Message Parameter General.MultiprocessingLevel has been set automatically to <value>, the number of logical CPUs detected.
As of V6.5 Fix Pack 4, the factory value of General.MultiprocessingLevel is read from the system as the number of logical processing units. With some processor architectures, the number of logical processing units might not be the same as the number of physical cores. In such cases, the optimal value for this parameter typically varies between the number of physical cores and the number of logical processing units.
30465 INI Message Increase the value of the Srv.ProcessMemoryLimit parameter, or disable the process memory size checking by setting Srv.ProcessMemoryLimit to 0.
30787 HSB Fatal Error This error refers to a failed operation on the HSB primary server. The error returns the failed operation and its location in the log, and the log size. Operations in the replication log are skipped.
30788 HSB Fatal Error pri_hsblogcopy_write:bad type, log pos, log size
This error refers to a failed operation on the HSB primary server. The write to the replication log file fails. The error returns the failed operation and its location in the log, and the log size.
30789 HSB Fatal Error Failed to open hot standby replication log file.
30790 HSB Fatal Error Failed to allocate memory for HotStandby log. Max Log size is logsize.
This error concerns a diskless database using HotStandby. In these systems, the HotStandby log is written to memory. This error is given if allocating more memory for the log file fails.
30791 HSB Fatal Error HotStandby:solhsby:bad type type, log pos log_pos, log size log_size
30792 HSB Message Both servers are secondary.
30793 HSB Message Maximum number of secondary tasks value reached.
The queue at the secondary server for incoming log operations is growing faster than the operations can be executed and acknowledged to the primary server. The queue can be monitored with the performance counter HSB secondary queues.
32001 PT Error Passthrough: <description>
32002 PT Error Passthrough: Error:<description>
The SQL parser could not parse the SQL string. Check the syntax of the SQL statement and
try again.
You may not have privileges to access the table and its data.
Table can not be created. You may not have privileges for this operation.
A column type in your CREATE TABLE statement is illegal. Use a legal type for the
column.
Table can not be dropped. Only the owner (that is, the creator) can drop it.
The value specified for column is invalid. Check the value for the column.
The server failed to do the insertion. You may not have INSERT privilege on the table or it
may be locked.
The server failed to do the deletion. You may not have DELETE privilege on the table or
the row may be locked.
The server failed to fetch a row. You may not have SELECT privilege on the table or there
may be an exclusive lock on the row.
You cannot create this view. You may not have SELECT privilege on one or more tables in
the query-specification of your CREATE VIEW statement.
You cannot drop this view. Only the owner (i.e. the creator) of the view can drop it.
Column name is illegal. Check that the name is not a reserved name.
Function call to function failed. Check the arguments and their types.
This view is not updatable. UPDATE, INSERT and DELETE operations are not allowed.
SQL Error 18 Inserted row does not meet check option condition
You tried to insert a row, but one or more of the column values do not meet column
constraint definition.
SQL Error 19 Updated row does not meet check option condition
You tried to update a row, but one or more of the column values do not meet column
constraint definition.
A check constraint given to the table is illegal. Check the types of the check constraint of
this table.
You tried to insert a row, but the values do not meet the check option conditions.
You tried to update a row, but the values do not meet the check option conditions.
You have included a column in column list twice. Remove duplicate columns.
You need to specify at least one column definition in a CREATE TABLE statement.
Granting privileges failed. You may not have privileges for this operation.
Revoking privileges failed. You may not have privileges for this operation.
You tried to grant privileges to a role or a user. You have included multiple instances of a
privilege type in the list of privileges.
You have entered a different number of columns in the CREATE VIEW statement for the view and for the table.
You cannot use a column name in an ORDER BY clause for a UNION statement.
You have tried to execute a set operation of tables with incompatible row types. The row
types in a set operation must be compatible.
An index could not be created. You may not have privileges for this operation. You need to
be an owner of the table or have SYS_ADMIN_ROLE to have privileges to create index for
the table.
An index could not be dropped. You may not have privileges for this operation. You need
to be an owner of the table or have SYS_ADMIN_ROLE to have privileges to drop index
from the table.
You tried to use an ORDER BY column that does not exist. Refer to an existing column in
the ORDER BY specification.
You have used a subquery that returns more than one row. Only subqueries returning one
row may be used in this situation.
You tried to insert or update a table using an aggregate function (SUM, MAX, MIN or
AVG) as a value. This is not allowed.
You have referenced a column which exists in more than one table. Use syntax table.column
to indicate which table you want to use.
A function was called in the wrong order. Check the sequence and success of the function calls.
A parameter was used illegally. For example: SELECT * FROM TEST WHERE ? < ?;
A parameter has an illegal value. Check the type and value of the parameter.
SQL Error 56 Only ANDs and simple condition predicates allowed in UPDATE CHECK
Server failed to open a cursor. You may not have cursor open at this moment.
You tried to group rows using column. All columns in group_by_clause must be listed in
your select_list. A star ('*') notation is not allowed with GROUP BY.
You tried to compare values which have incompatible types. Incompatible types are for
example an integer and a date value.
SQL Error 60 Reference to the insert table not allowed in the source query
You have referenced in subquery a table where you are inserting values. This is not
allowed.
You have referenced in subquery a table where you are updating values. This is not
allowed.
You have referenced in subquery a table where you are deleting values. This is not
allowed.
You have used a subquery that returns more than one column. Only subqueries returning
one column may be used.
You tried to update a pseudo column (ROWID, ROWVER). Pseudo columns are not
updatable.
A user could not be created. You may not have privileges for this operation.
A user could not be altered. You may not have privileges for this operation.
A user could not be dropped. You may not have privileges for this operation.
A role could not be created. You may not have privileges for this operation.
A role could not be dropped. You may not have privileges for this operation.
Granting role failed. You may not have privileges for this operation.
SQL Error 72 Revoking role failed. You may not have privileges for this operation.
You have tried to compare row value constructors that have different number of
dimensions. For example you have compared (a,b,c) to (1,1).
The aggregate expression can not be used with * columns. Specify columns using their
names when used with this aggregate expression. This usually happens when GROUP BY
expression is used with the * columns.
You have tried to reference a table which is not in the FROM list. For example: SELECT
T1.* FROM T2.
You have used the syntax table.column_name ambiguously. For example: SELECT T1.*
FROM T1 A,T1 B WHERE A.F1=0;
You tried to use aggregate expression illegally. For example: SELECT ID FROM TEST
WHERE SUM(ID) = 3;
The server failed to fetch a row. You may not have SELECT privilege on the table or there
may be an exclusive lock on the row.
External sorter is out of disk space or cache memory. Modify parameters in configuration
file solid.ini.
A table name alias was used in the query, but this alias was not specified as the table name
in the optimizer hint. The alias name must be specified, not the table name.
100 Operation failed. For example, this error code is produced when performing an operation such as flushing arrays or inserting records.
This error applies to the column name used in the control file.
The data type in the data file conflicts with the table definition.
110 Concurrency conflict, two transactions updated or deleted the same row
ADMIN COMMAND
ADMIN COMMAND ’command_name’
Usage
When used with the solidDB SQL Editor (solsql), the command_name must be
given with single quotation marks. For example:
ADMIN COMMAND ’backup’
If you use double quotation marks, the command_name is not recognized and the
command fails.
When used with the solidDB Remote Control (solcon), the ADMIN COMMAND
syntax includes the command_name only, without the single quotation marks. For
example:
backup
Abbreviations
Important usage notes
v ADMIN COMMAND options are not transactional and cannot be rolled back.
v ADMIN COMMANDs and starting transactions
Although ADMIN COMMANDs are not transactional, they will start a new
transaction if one is not already open. (They do not commit or roll back any
open transaction.) This effect is usually insignificant. However, it may affect the
"start time" of a transaction, and that may occasionally have unexpected effects.
solidDB's concurrency control is based on a versioning system; you see a
database as it was at the time that your transaction started.
For example, if you issue an ADMIN COMMAND without another commit and
then leave for an hour; when you return, your next SQL command may see the
database as it was an hour ago, that is, when you first started the transaction
with the ADMIN COMMAND.
v Error codes
An ADMIN COMMAND statement returns an error only if the command syntax or the parameter values are incorrect. As long as the requested operation can be started, the statement returns SQLSUCCESS (0). The outcome of the operation itself is written into a result set. The result set has two columns: RC and TEXT. The RC (return code) column contains the return code of the operation: it is "0" for success, and a different numeric value for an error. It is therefore necessary to check both the return code of the ADMIN COMMAND statement and the return code of the operation in the result set.
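For example, a backup request that the server accepts returns SQLSUCCESS for the statement itself, while the outcome of the operation appears in the result set (the output below is illustrative; the TEXT value depends on the command and on the server state):
ADMIN COMMAND ’backup’;
RC TEXT
-- ----
0 Started
1 rows fetched.
If the backup later fails, the failure is reported by ADMIN COMMAND ’status backup’, not by the return code of the original statement.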
If the option is not entered, the command defaults to ADMIN COMMAND ’abort backup’.
ADMIN COMMAND ’cleanbgjobinfo’
Abbreviation: cleanbgi
Note: This command has been deprecated. Use ADMIN COMMAND ’backgroundjob’ instead.
ADMIN COMMAND ’close’ Closes the server to new connections; no new connections are allowed.
Abbreviation: clo
ADMIN COMMAND ’errorcode {all | SOLID_error_code}’
Abbreviation: ec
Returns a description of all error codes or a specific error code. SOLID_error_code is the code number, for example 10034.
For example:
ADMIN COMMAND ’errorcode 10034’;
RC TEXT
-- ----
0 Code: DBE_ERR_SEQEXIST (10034)
0 Class: Database
0 Type: Error
0 Text: Sequence already exists
4 rows fetched.
ADMIN COMMAND ’errorexit <number>’ Forces the server into an immediate process exit with the given process exit code.
Abbreviation: erex
ADMIN COMMAND ’errormessage <string>’ Outputs the user-defined <string> to the error message log (solerror.out).
Abbreviation: errmsg
ADMIN COMMAND ’hotstandby [option]’
Abbreviation: hsb
A HotStandby command. For a list of options, see HotStandby ADMIN COMMANDs in the IBM solidDB High Availability User Guide.
ADMIN COMMAND ’indexusage’ Displays the indexes, showing the number of times each index has been used.
Abbreviation: idxu
More than one option can be used per command. Values are returned in the same
order as requested, one row for each value.
Example:
ADMIN COMMAND ’info dbsize logsize’;
RC TEXT
-- ----
0 851968
0 573440
2 rows fetched.
ADMIN COMMAND ’logmessage <string>’ Outputs the user-defined <string> to the message log (solmsg.out).
Abbreviation: logmsg
ADMIN COMMAND ’logreader stop [all|<partition_id>]’
Abbreviation: lr
This command stops the transmission of log records on active log reader connections.
When this command is issued, the active log reader applications reach the end of the result set (SQLSTATE 02000, No data found) when fetching the next row of the SYS_LOG table.
If the form LOGREADER STOP or LOGREADER STOP ALL is used, all log record transmissions are stopped. If a <partition_id> is given, the command affects only the log reader operation on that partition.
To access the log again, the application needs to reconnect. The log reading can be resumed without any loss of information if the last read position is known. If the SYS_LOG table is accessed without specifying the log position, the reading starts from the live data.
Important: The stopping of the log transmission is effective immediately, regardless of whether there are still records in the log awaiting transmission.
If the server is running in the relaxed durability mode (default), do not execute LOGREADER STOP before all the records are written to the log, if those records are meant to be seen in the log reader. With the default logging settings, it is safe to wait for 5 seconds after the last write operation.
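For example, the following command stops the log record transmission on all active log reader connections:
ADMIN COMMAND ’logreader stop all’;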
By default, the checkpoint is asynchronous. With the option -s, the command returns
only after the checkpoint has completed.
ADMIN COMMAND ’netbackup [options] [DELETE_LOGS | KEEP_LOGS] [connect connect str] [dir backup dir]’
Abbreviation: nbak
Makes a network backup of the database. The operation can be performed in a synchronized or an asynchronous (default) manner. The synchronized operation is specified by using the -s option.
DELETE_LOGS means that backed-up log files in the source server are deleted. This is sometimes referred to as full backup. This is the default value.
KEEP_LOGS means that backed-up log files are kept in the source server. This is sometimes referred to as copy backup. Using KEEP_LOGS corresponds to setting the General.NetbackupDeleteLog parameter to no.
The default connect string and the default netbackup directory are defined with the General.NetBackupConnect and the General.NetBackupDirectory parameters. The options that are entered with this command override the values specified in the configuration file.
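For example, the following command makes a synchronized network backup, keeps the log files in the source server, and sends the backup to a netbackup server listening at ’tcp backuphost 1315’, into the directory mybackups (the connect string and the directory name are illustrative):
ADMIN COMMAND ’netbackup -s KEEP_LOGS connect tcp backuphost 1315 dir mybackups’;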
ADMIN COMMAND ’netstat’ Displays server settings and the network status.
Abbreviation: net
ADMIN COMMAND ’notify user {username | user id | ALL} message’
Abbreviation: not
This command sends an event to a given user with event identifier NOTIFY. This identifier is used to cancel an event-waiting thread when the statement timeout is not long enough for a disconnect, or to change the event registration.
The following example sends a notify message to the user with user id 5; the event then gets the value of the message parameter.
ADMIN COMMAND ’notify user 5 Canceled by admin’
ADMIN COMMAND ’open’ Opens server for new connections; new connections are allowed.
Abbreviation: ope
For example:
v ’parameter general’ displays all parameters from section [General].
v ’parameter general.readonly’ displays the parameter Readonly in the [General]
section.
v ’parameter com.trace=yes’ sets communication trace on.
v ’parameter com.trace=’ sets communication trace to its startup value.
v ’parameter com.trace=*’ sets communication trace to its factory value.
ADMIN COMMAND ’passthrough status’
Abbreviation: pt
Provides the following status information about the SQL passthrough connections:
v NO REMOTE SERVER - no remote server object defined
v NOT CONNECTED - not connected, no errors
v CONNECTED - connected
v LOGIN FAILED - failed at login
v CONNECTION BROKEN - connection broken
ADMIN COMMAND ’perfmon [-c | -r] [print_options] [name_prefix_list]’
Abbreviation: pmon
Returns server performance counters for the past few minutes at approximately one minute intervals. Most values are shown as the average number of events per second. Counters that cannot be expressed as events per second (for example, database size) are expressed in absolute values.
v -c - prints actual counter values for each snapshot.
v -r - prints counter values in raw mode, which includes only the latest counter
values without any formatting. The counter names are not printed. This option is
useful if actual monitoring is performed using some other external program that
retrieves the counter values from the server. You can retrieve the counter names
with the --xnames option.
v print_options
– -xtime - prints the time in seconds
– -xtimediff - prints the difference to the last pmon call in milliseconds
– -xnames - prints out the column names for the output
– -xdiff - indicates the difference to the last ADMIN COMMAND 'perfmon'
execution instead of the absolute value
v name_prefix_list - limits the output to specific counter types, as indicated by the first
word in the counter name. For example, to print all File related counters, the
name_prefix_list should be file. You can also specify multiple prefixes.
The following example returns all values for counters whose names start with the prefixes File and Cache:
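ADMIN COMMAND ’perfmon file cache’;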
ADMIN COMMAND ’perfmon diff [start | stop] [filename] [interval]’
Abbreviation: pmon diff
Starts a server task that prints out all perfmon counters to a file at specified intervals.
v filename is the name of the output file. The performance data is output in comma-separated value format; the first row contains the counter names, and each subsequent row contains the performance data for each sampling time. The default file name is pmondiff.out.
v interval is the interval in milliseconds at which performance data is collected. The default interval is 1000 milliseconds.
The following command starts a task that outputs performance data to the myd.csv file at a 500 millisecond interval:
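ADMIN COMMAND ’perfmon diff start myd.csv 500’;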
ADMIN COMMAND ’proctrace { on | off } user username { procedure | trigger | table } entity_name’
Abbreviation: ptrc
This turns tracing in stored procedures and triggers on or off.
username is the name of the user whose procedure calls (or triggers) you want to trace. If multiple connections are using the same username, calls from all of those connections will be traced. Furthermore, if you are using advanced replication, the tracing will be done not only for calls on the replica, but also for calls that are propagated to the master and then executed on the master.
entity_name is the name of the procedure, trigger, or table for which you want to turn tracing on or off. If you specify a procedure or trigger name, the command generates output for every statement in the specified procedure or trigger. If you specify a table name, it generates output for all triggers on that table. Trace is activated only when the specified user calls the procedure or trigger.
For more details about proctrace, see the section Tracing facilities for stored procedures and triggers in the IBM solidDB SQL Guide.
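For example, the following command turns on procedure tracing for calls that the user DBA makes to a procedure named MYPROC (the user and procedure names are illustrative):
ADMIN COMMAND ’proctrace on user DBA procedure MYPROC’;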
ADMIN COMMAND ’report filename’ Generates a report of server information to a file defined with filename.
Abbreviation: rep
ADMIN COMMAND ’save parameters [filename]’
Abbreviation: save
Saves the set of current configuration parameter values to a file. If no file name is given, the default solid.ini file is rewritten. This operation is performed implicitly at each checkpoint.
ADMIN COMMAND ’sqllist top number_of_statements’
This command prints out a list of the longest running SQL statements among the currently running statements. The list contains the selected number of statements.
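For example, the following command lists the five longest running statements:
ADMIN COMMAND ’sqllist top 5’;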
ADMIN COMMAND ’status backup | netbackup’
Abbreviation: sta backup | netbackup
Displays the status of the last started local or network backup. The status can be one of the following:
v If the last backup was successful or no backups have been requested, the output is 0 SUCCESS.
v If the backup is in progress (for example, started but not ready yet), the output is 14003 ACTIVE.
v If the backup is being finalized, the output is 14003 STOPPING.
v If the last backup failed, the output is errorcode ERROR, where the errorcode shows the reason for the failure.
Each piece of information is tagged with a user id so that operations from different users can be separated.
v passthrough - provides tracing information about the SQL passthrough connections
and the loading of the ODBC driver as follows:
– Loading of the ODBC driver: driver name and status of the load
– Status of connections to the back-end: connect/reconnect/disconnect/broken
v xa - distributed transaction information
v hac - High Availability Controller (HAC); trace information is output to
hactrace.out in the HAC working directory
Note: To start tracing on HAC, you must issue the command on a HAC
connection. For example, connect to HAC with solsql using the port defined with
the HAController.Listen parameter in the solidhac.ini configuration file.
v info <level> - SQL execution trace (level can be 0...8)
v all - both SQL messages and network communications messages are written to the
trace file.
v active - lists all active traces
ADMIN COMMAND ’tracemessage <string>’ Outputs the user-defined <string> to the trace message log (soltrace.out).
Abbreviation: trcmsg
ADMIN COMMAND ’userid’ Returns the user identification number of the current connection.
Abbreviation: uid
The lifetime of an Id is that of the user session. After a user logs out, the number may
be reused.
ADMIN COMMAND ’userid’
RC TEXT
-- ----
0 8
1 rows fetched.
For example, the userid can be used in the ADMIN COMMAND 'throwout' command to
disconnect a specific user.
ADMIN COMMAND ’userlist [-l] [name | id]’
Abbreviation: ul
This command displays a list of users that are currently logged into the database, as well as information about various database operations and settings for each user. The option -l (long) displays a more detailed output.
Without the -l option, the following information is displayed: User name, User Id, Type, Machine Id, Login time, Client version, and Appinfo (if available).
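For example, the following command displays the detailed user list:
ADMIN COMMAND ’userlist -l’;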
ADMIN COMMAND ’usertrace { on | off } user username { procedure | trigger | table } entity_name’
Abbreviation: utrc
This turns user tracing in stored procedures and triggers on or off. This command generates output for every WRITETRACE statement in the specified procedure or trigger.
v username is the name of the user whose procedure calls (or triggers) you want to trace. If multiple connections are using the same username, calls from all of those connections will be traced. Furthermore, if you are using advanced replication, the tracing will be done not only for calls on the replica, but also for calls that are propagated to the master and then executed on the master.
v entity_name is the name of the procedure, trigger, or table for which you want to turn tracing on or off. If you specify a table name, the command generates output for all triggers on that table. Trace is activated only when the specified user calls the procedure or trigger.
For more details about usertrace, see the section Tracing facilities for stored procedures and triggers in the IBM solidDB SQL Guide.
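For example, the following command turns on user tracing for WRITETRACE output in all triggers on a table named ORDERS when they are called by the user DBA (the user and table names are illustrative):
ADMIN COMMAND ’usertrace on user DBA table ORDERS’;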
ADMIN COMMAND ’version’ Displays server version information and information related to the solidDB software
Abbreviation: ver license in use.
C connecting to solidDB
basics 20
cache (disk-based) 129 login 20
CacheSize (parameter) 55, 182 ConnectionCheckInterval (parameter) 202
CAST (function) 268 connections
catalogs committed transactions 135
name criteria 15 determining existing 134
CHARACTERSET keyword (solload) 108 ConnectStrForMaster (parameter) 214, 285
CharPadding (parameter) 197 ConnectTimeOut (parameter) 202, 219
checkpoint control file (solidDB Speed Loader)
'makecp' command 314 description 100
CheckpointDeleteLog (parameter) 170 syntax 106
CheckpointInterval (parameter) 133, 170 convert
checkpoints 36 command line option 222
automatic daemon 36 converting database format 222
automating 36 ConvertOrsToUnionsCount (parameter) 198
erasing automatically 36 counters 68
forcing 133 cptime ADMIN COMMAND 313
frequency 133 creating
timed commands 36 checkpoints 36
tuning 133 CursorCloseAtTransEnd (parameter) 198
client-side configuration parameters 217
ClientReadTimeout (parameter) 218
closing solidDB 14
ADMIN COMMAND 14 D
clustering D-table 6
data clustering 6 database
columns automating 36
setting LONG VARCHAR 20 backing up 27
command line options 221 block size 19
COMMIT WORK statement cache 129
application code 135 changing dynamically 129
troubleshooting 135 size 129
communication checking last backup status 66
between client and server 81 checking overall status 65
selecting a protocol 87 closing 13, 36
tracing problems 144 compacting 37
communication protocols 87 configuring 44
Named Pipes 90 converting format 222
selecting 87 creating 15
summary 91 creation time 313
supported protocols 87 currently connected users 66
TCP/IP 88 decreasing database file size 53
UNIX Pipes 89 defining objects 20
communication session layer disconnecting a user 66
description 8 file size
communication tracing 57 decreasing 53
configuration file free space in 313
description 19 in-memory 44
server-side 44 index file 53
setting 49 location 19, 53
solidDB Speed Loader 119 login 12
configuring maximum size 19
client-side configuration file 44 monitoring 67
configuration file 44 opening 36
default settings 44 performance 67
example 44 querying last backup 66
factory values 44 recovery 35
managing parameters 44, 45, 46 restoring master and replica 27
parameter settings 44 several databases on one computer 24
server-side configuration file 44 shutting down 14
setting parameters 45, 47 size 15, 53
solid.ini 44 troubleshooting 67
viewing parameter descriptions 46 using in-memory database 131
viewing parameters 44, 45 database mode
Connect (parameter) 58, 219 partial Unicode 16
connect string 58 Unicode 16
clients 84 DatabaseSizeReportInterval (parameter) 203
F
E file locations 17
Echo (parameter) 203 file system 17
EmulateOldTimestampDiff (parameter) 198 FileFlush (parameter) 185
EnableHints (parameter) 198 FileNameTemplate (parameter) 56, 186
ENCLOSURE (solidDB Speed Loader) 109 FileSpec (parameter) 19, 53
encryption FileWriteFlushMode (parameter) 171
DES ForceThreadsToSystemScope (parameter) 204
changing password 40 free space in database 313
creating 39
decrypting 41
enabling 39
password 40
H
HealthCheckEnabled (parameter) 204
starting encrypted database 40
HealthCheckInterval (parameter) 204
disabling 39
HealthCheckTimeout (parameter) 204
level 41
overview 38
entering timed commands 36
environment variables I
SOLTRACE 144 I/O
SOLTRACEFILE 144 distributing 132
error codes tuning 132
error handling 227 IBMPC (reserved word) 106
error handling IgnoreOnDisabled (parameter) 193
AT messages 293 ImdbMemoryLimit (parameter) 189
BCKP messages 293 ImdbMemoryLowPercentage (parameter) 190
COM messages 288 ImdbMemoryWarningPercentage (parameter) 190
communication errors 257 imdbsize ADMIN COMMAND 313
CP messages 293 ImplicitStart (parameter) 165
database errors 232 import file (solidDB Speed Loader) 101
DBE errors 291 index file
error codes 227 splitting to multiple disks 53
executable errors 306 Info (parameter) 199
FIL messages 298 InfoFileFlush (parameter) 199
HotStandby errors 286 InfoFileName (parameter) 199
HSB errors 295 InfoFileSize (parameter) 199
INI messages 294 InifileLineSplitting 205
LOG messages 294 intelligent join constraint transfer 7
passthrough errors 299 InternalCharEncoding (parameter) 172
procedure errors 266 INTO_TABLE_PART
RPC errors 270 solidDB Speed Loader 109
SA API errors 269 IOThreads (parameter) 172
server errors 260 isolation levels
SMA errors 299 read committed 125
SNC errors 297 repeatable read 125
isolation levels (continued) MaxNestedProcedures (parameter) 200
serializable 125 MaxNestedtriggers (parameter) 200
IsolationLevel (parameter) 199 MaxOpencursors (parameter) 207
MaxOpenFiles (parameter) 174
MaxPhysMsgLen (parameter) 166
J MaxRPCDataLen (parameter) 207
MaxSharedMemorySize (parameter) 195
JDBC 2, 3
MaxSpace (parameter) 187, 188
MaxStartStatements (parameter) 207
MaxTransactionSize (parameter) 191
K MaxUsers (parameter) 207
KeepAllOutFiles (parameter) 205 maxusers ADMIN COMMAND 313
MaxWriteConcurrency (parameter) 175
memory
L physical 128
tuning 126
Latin1CaseSemantics (parameter) 199
virtual 128
Listen (parameter) 166
MemoryPoolScope (parameter) 192
listen name 81, 84
MemoryReportDelta (parameter) 207
listing users 320
MemoryReportLimit (parameter) 208
local backup 28
MemorySizeReportInterval (parameter) 208
LocalStartTasks (parameter) 205
memtotal ADMIN COMMAND 313
LockEscalationEnabled (parameter) 190
MergeInterval (parameter) 132, 175
LockEscalationLimit (parameter) 190
message log 63
LockHashSize (parameter) 173, 191
MessageLogSize (parameter) 208
LockWaitTimeOut (parameter) 174
MinCheckpointTime (parameter) 133, 175
log files
MinMergeTime (parameter) 175
overview 35
MinSplitSize (parameter) 187
solerror.out 63
monitoring 63
solmsg.out 63
monitorstate ADMIN COMMAND 313
Speed Loader 101
MSWINDOWS (reserved word) 106
LogDir (parameter) 186
MultiprocessingLevel (parameter) 175
LogEnabled (parameter) 186
multithread processing
logging
description 8
transaction durability 123
transactions 35
logical data source names
defining in solid.ini 86 N
login Name (parameter) 208
description 12 name ADMIN COMMAND 313
incorrect username or password 12 Named Pipes 90
LogReaderEnabled (parameter) 187 netbackup 29
logsize ADMIN COMMAND 313 NetBackupConnect (parameter) 175
LogWriteMode (parameter) 186 NetBackupConnectTimeout (parameter) 176
LongSequentialSearchLimit (parameter) 174 NetBackupCopyIniFile (parameter) 176
NetBackupCopyLog (parameter) 176
NetBackupCopySolmsgOut (parameter) 176
M NetBackupDeleteLog (parameter) 176
NetBackupDirectory (parameter) 176
M-tables 6
NetBackupDirectory (parameters) 56
makecp 133
NetBackupReadTimeout (parameter) 176
manual administration 5
NetBackupRootDir (parameter) 208
master database
network backup
backing up 27
directory 56
restoring 27
overview 28
MasterStatementCache (parameter) 214
network communication
MaxBgTaskInterval (parameter) 206
communication session layer 8
MaxBlobExpressionSize (parameter) 20, 200
network services 8
MaxBytesCachedInPrivateMemoryPool (parameter) 191
tracing 57
MaxCacheUsage (parameter) 191
troubleshooting 162
MaxCacheUsePercent (parameter) 196
network messages
MaxConstraintLength (parameter) 207
tuning 131
MaxFilesTotal (parameter) 196
network names 81, 84
MaxLogSize (parameter) 187, 188
adding 84
MaxMemLogSize (parameter) 187, 189
clients 84
MaxMemPerSort (parameter) 196
defining 53, 58
MaxMergeParts (parameter) 174
modifying 84
MaxMergeTasks (parameter) 174
Named Pipes 90
ReferenceCacheSizeForHash (parameter) 184 solidDB (continued)
RefreshIsolationLevel (parameter) 215 executable program 11
RefreshReadLevelRows (parameter) 215 processes 1
relaxed durability 123 starting 11
RelaxedMaxDelay (parameter) 187 solidDB AT messages 293
ReleaseMemoryAtShutdown (parameter) 192 solidDB BCKP messages 293
RemoteServerDriverPath (parameter) 193 solidDB Bonsai Tree 134
RemoteServerDSN (parameter) 193 concurrency 6
RemoteStartTasks (parameter) 212 multiversion 6
REPEATABLE READ 215 reducing size 134
replica databases solidDB COM (communication) messages 288
backing up 27 solidDB communication errors 257
restoring 27 solidDB CP messages 293
ReplicaRefreshLoad (parameter) 215 solidDB Data Dictionary 116
ReportInterval (parameter) 212 starting 117
reports solidDB data management tools
automating 36 overview 93
creating a continuous performance monitoring report 68 solcon 93
creating a report for troubleshooting 67 soldd 93
creating a status report 67 solexp 93
full list of perfmon counters 68 solload 93
RestoreThreads (parameter) 192 solidDB database errors 232
Restoring backups 34 solidDB DBE errors 291
roles solidDB executable
database administration 11 -x execute command line option 122
roll-forward recovery 27 command line options 221
RowsPerMessage (parameter) 212, 218 errors 306
RPC 8 solidDB Export 114
RpcEventThresholdByteCount (parameter) 215 starting 114
running several servers 24 solidDB FIL messages 298
solidDB HotStandby errors 286
solidDB HSB errors 295
S solidDB INI messages 294
solidDB JDBC Driver
SA API 3
troubleshooting 161
SCAND7BIT (reserved word) 106
solidDB LOG messages 294
scripts
solidDB ODBC Driver
calling 99
troubleshooting 160
executing SQL script from file 99
solidDB Remote Control (solcon) 93
SearchBufferLimit (parameter) 177
commands 95
secondarystarttime ADMIN COMMAND 313
starting 94
sernum ADMIN COMMAND 313
solidDB RPC errors 270
server errors 260
solidDB SA API errors 269
server names
solidDB server shortcut (Windows) 13
network names 81
solidDB session errors 256
server-side configuration parameters 165
solidDB SMA errors 299
SharedMemoryAccessRights (parameter) 195
solidDB SNC errors 297
shortcut (Windows)
solidDB Speed Loader
server 13
control file 100
solsql 13
control file syntax 106
shutdown 14
description 100
Silent (parameter) 189, 212
errors 307
SimpleOptimizerRules (parameter) 200
import file 101
SocketLinger (parameter) 167, 219
ini file 119
SocketLingerTime (parameter) 167, 219
log file 101
soldd 116, 117
solidDB SQL
solerror.out
errors 300
description 63
troubleshooting 160
solexp 114
solidDB SQL API Errors 287
solid.ini
solidDB SQL Editor 95
configuration parameters 165, 217
executing SQL statements 98
configuring solidDB 43
starting 96
description 19
solidDB SQL Editor (solsql) shortcut (Windows) 13
solidDB
solidDB SQL optimizer
administering solidDB 11
description 7
command line options 221
solidDB SRV errors 260, 289
components 1
solidDB TAB messages 299
connecting to 20
V
VersionedPessimisticReadCommitted (parameter) 178
VersionedPessimisticRepeatableRead (parameter) 178
virtual memory 128
W
Windows shortcuts 13
working directory 17
WriteBufSize (parameter) 168
WriterIOThreads (parameter) 178
No portion of this product may be used in any way except as expressly authorized
in writing by Oy International Business Machines Ab.
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law: INTERNATIONAL
BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. Some states do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:
© your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs.
© Copyright IBM Corp. _enter the year or years_. All rights reserved.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, ibm.com, Solid, solidDB, InfoSphere, DB2, Informix®, and
WebSphere® are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and
service names might be trademarks of IBM or other companies. A current list of
IBM trademarks is available on the Web at “Copyright and trademark information”
at www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other product and service names might be trademarks of IBM or other companies.
Printed in USA
SC23-9869-05