Firebird 2.5 Language Reference
Beta Release 1
Dmitry Filippov
Alexander Karpeykin
Alexey Kovyazin
Dmitry Kuzmenko
Denis Simonov
Paul Vinkenoog
Dmitry Yemanov
11 August 2016, document version 0.906
Firebird 2.5 Language Reference
Beta Release 1
Abstract
This volume represents a compilation of topics concerning Firebird's SQL language written by members of the Rus-
sian-speaking community of Firebird developers and users. In 2014, it culminated in a language reference manual, in Rus-
sian. At the instigation of Alexey Kovyazin, a campaign was launched amongst Firebird users world-wide to raise funds
to pay for a professional translation into English, from which translations into other languages would proceed under the
auspices of the Firebird Documentation Project.
Table of Contents
1. About the Firebird SQL Language Reference ...................................................................................... 1
Subject Matter ............................................................................................................................... 1
Authorship ..................................................................................................................................... 1
Language Reference Updates .................................................................................................. 1
Gestation of the Big Book ...................................................................................................... 2
Contributors ........................................................................................................................... 2
Acknowledgments .......................................................................................................................... 3
2. SQL Language Structure .................................................................................................................... 5
Background to Firebird's SQL Language ......................................................................................... 5
SQL Flavours ........................................................................................................................ 5
SQL Dialects ......................................................................................................................... 6
Error Conditions .................................................................................................................... 7
Basic Elements: Statements, Clauses, Keywords .............................................................................. 7
Identifiers ...................................................................................................................................... 8
Literals .......................................................................................................................................... 9
Operators and Special Characters .................................................................................................... 9
Comments .................................................................................................................................... 10
3. Data Types and Subtypes ................................................................................................................. 12
Integer Data Types .......................................................................................................................... 14
SMALLINT ........................................................................................................................... 14
INTEGER ............................................................................................................................. 14
BIGINT ................................................................................................................................ 14
Hexadecimal Format for Integer Numbers ............................................................................. 15
Floating-Point Data Types ................................................................................................................ 15
FLOAT ................................................................................................................................. 16
DOUBLE PRECISION ............................................................................................................ 16
Fixed-Point Data Types .................................................................................................................... 16
NUMERIC ............................................................................................................................ 17
DECIMAL ............................................................................................................................. 17
Data Types for Dates and Times ................................................................................................... 17
DATE ................................................................................................................................... 18
TIME ................................................................................................................................... 19
TIMESTAMP ......................................................................................................................... 19
Operations Using Date and Time Values ............................................................................... 19
Character Data Types ................................................................................................................... 20
Unicode ............................................................................................................................... 21
Client Character Set ............................................................................................................. 21
Special Character Sets .......................................................................................................... 21
Collation Sequence ............................................................................................................... 21
Character Indexes ................................................................................................................. 23
Character Types in Detail ..................................................................................................... 24
Binary Data Types ....................................................................................................................... 25
BLOB Subtypes ................................................................................................................... 25
BLOB Specifics .................................................................................................................... 26
ARRAY Type ....................................................................................................................... 26
Special Data Types ...................................................................................................................... 28
SQL_NULL Data Type .......................................................................................................... 28
Conversion of Data Types ............................................................................................................ 29
List of Tables
3.1. Overview of Data Types ................................................................................................................ 12
3.2. Method of Physical Storage for Real Numbers ................................................................................ 16
3.3. Arithmetic Operations for Date and Time Data Types ..................................................................... 19
3.4. Collation Sequences for Character Set UTF8 .................................................................................. 22
3.5. Maximum Index Lengths by Page Size and Character Size .............................................................. 23
3.6. Conversions with CAST ................................................................................................................ 30
3.7. Date and Time Literal Format Arguments ....................................................................................... 31
3.8. Literals with Predefined Values of Date and Time ........................................................................... 32
3.9. Rules for Overriding Domain Attributes in Column Definition ......................................................... 35
4.1. Description of Expression Elements ............................................................................................... 38
4.2. Operator Type Precedence ............................................................................................................. 42
4.3. Arithmetic Operator Precedence ..................................................................................................... 43
4.4. Comparison Operator Precedence ................................................................................................... 43
4.5. Logical Operator Precedence .......................................................................................................... 44
5.1. CREATE DATABASE Statement Parameters ................................................................................. 67
5.2. ALTER DATABASE Statement Parameters ................................................................................... 71
5.3. CREATE SHADOW Statement Parameters .................................................................................... 74
5.4. DROP SHADOW Statement Parameter .......................................................................................... 76
5.5. CREATE DOMAIN Statement Parameters ..................................................................................... 77
5.6. ALTER DOMAIN Statement Parameters ........................................................................................ 82
5.7. CREATE TABLE Statement Parameters ......................................................................................... 87
5.8. ALTER TABLE Statement Parameters ......................................................................................... 100
5.9. DROP TABLE Statement Parameter ............................................................................................. 105
5.10. CREATE INDEX Statement Parameters ..................................................................................... 107
5.11. Maximum Indexes per Table ...................................................................................................... 108
5.12. Maximum indexable (VAR)CHAR length ................................................................................... 108
5.13. ALTER INDEX Statement Parameter ......................................................................................... 110
5.14. DROP INDEX Statement Parameter ........................................................................................... 111
5.15. SET STATISTICS Statement Parameter ...................................................................................... 112
5.16. CREATE VIEW Statement Parameters ....................................................................................... 113
5.17. ALTER VIEW Statement Parameters .......................................................................................... 117
5.18. CREATE OR ALTER VIEW Statement Parameters .................................................................... 118
5.19. DROP VIEW Statement Parameter ............................................................................................. 119
5.20. RECREATE VIEW Statement Parameters ................................................................................... 119
5.21. CREATE TRIGGER Statement Parameters ................................................................................. 121
5.22. ALTER TRIGGER Statement Parameters ................................................................................... 126
5.23. DROP TRIGGER Statement Parameter ....................................................................................... 129
5.24. CREATE PROCEDURE Statement Parameters ........................................................................... 131
5.25. ALTER PROCEDURE Statement Parameters .............................................................................. 136
5.26. DROP PROCEDURE Statement Parameter ................................................................................. 139
5.27. DECLARE EXTERNAL FUNCTION Statement Parameters ........................................................ 141
5.28. ALTER EXTERNAL FUNCTION Statement Parameters ............................................................. 143
5.29. DROP EXTERNAL FUNCTION Statement Parameter ................................................................ 144
5.30. DECLARE FILTER Statement Parameters .................................................................................. 145
5.31. DROP FILTER Statement Parameter .......................................................................................... 147
5.32. CREATE SEQUENCE | CREATE GENERATOR Statement Parameter ........................................ 148
5.33. ALTER SEQUENCE Statement Parameters ................................................................................ 149
5.34. SET GENERATOR Statement Parameters .................................................................................. 150
5.35. DROP SEQUENCE | DROP GENERATOR Statement Parameter ................................................ 151
5.36. CREATE EXCEPTION Statement Parameters ............................................................................. 151
Chapter 1
About the Firebird SQL Language Reference
This Firebird SQL Language Reference is the first comprehensive manual to cover all aspects of the query
language used by developers to communicate, through their applications, with the Firebird relational
database management system. It has a long history.
Subject Matter
The subject matter of this volume is wholly Firebird's implementation of the SQL relational database language.
Firebird conforms closely with international standards for SQL, from data type support, data storage structures,
referential integrity mechanisms, to data manipulation capabilities and access privileges. Firebird also implements
a robust procedural language, procedural SQL (PSQL), for stored procedures, triggers and dynamically
executable code blocks. These are the areas addressed in this volume.
Authorship
The material for assembling this Language Reference has been accumulating in the tribal lore of the open source
community of Firebird core developers and user-developers for 15 years. The gift of the InterBase 6 open source
codebase in July 2000 from the (then) Inprise/Borland conglomerate was warmly welcomed. However, it came
without rights to existing documentation. Once the code base had been forked by its owners for private, com-
mercial development, it became clear that the open source, non-commercial Firebird community would never
be granted right of use.
The two important books from the InterBase 6 published set were the Data Definition Guide and the Language
Reference. The former covered the data definition language (DDL) subset of the SQL language, while the latter
covered most of the rest. Fortunately for Firebird users over the years, both have been easy to find on-line as
PDF books.
The leader of the Firebird Project's documentation team, Paul Vinkenoog, took up the cause for documenting
the huge volume of improvements and additions to DML and PSQL as Firebird advanced through its releases.
Paul was personally responsible for collating, assembling and, to a great extent, authoring a cumulative series
of Language Reference Updates, one for every major release from v.1.5 forward.
Then, in 2013-14, two benefactor companies, MICEX and IBSurgeon, funded three writers to take up this
stalled book outline and publish a Firebird 2.5 Language Reference in Russian. They wrote the bulk of the
missing DDL section from scratch and wrote, translated or reused DML and PSQL material from the LangRef
Updates, Russian language support forums, Firebird release notes, read-me files and other sources. By the end
of 2014, they had the task almost complete, in the form of a Microsoft Word document.
Translation . . .
The Russian sponsors, recognising that their efforts needed to be shared with the world-wide Firebird commu-
nity, asked some Project members to initiate a crowd-funding campaign to have the Russian text professionally
translated into English. The translated text would be edited and converted to the Project's standard DocBook
format for addition to the open document library of the Firebird Project. From there, the source text would be
available for translation into other languages for addition to the library.
The fund-raising campaign happened at the end of 2014 and was successful. In June, 2015, professional trans-
lator Dmitry Borodin began translating the Russian text. From him, the raw English text went in stages for edit-
ing and DocBook conversion by Helen Borrie, and here is The Firebird SQL Language Reference for V.2.5,
by...everyone!
Contributors
Direct Content
Resource Content
Translation
Helen Borrie
Acknowledgments
The first full language reference manual for Firebird would not have eventuated without the funding that finally
brought it to fruition. We acknowledge these contributions with gratitude and thank you all for stepping up.
Moscow Exchange is the largest exchange holding in Russia and Eastern Europe, founded on De-
cember 19, 2011, through the consolidation of the MICEX (founded in 1992) and RTS (founded in
1995) exchange groups. Moscow Exchange ranks among the world's top 20 exchanges by trading
in bonds and by the total capitalization of shares traded, as well as among the 10 largest exchange
platforms for trading derivatives.
IBSurgeon: Technical support and developer of administrator tools for the Firebird DBMS.
Other Donors
Listed below are the names of companies and individuals whose cash contributions covered the costs for trans-
lation into English, editing of the raw, translated text and conversion of the whole into the Firebird Project's
standard DocBook 4 documentation source format.
Chapter 2
SQL Language Structure
SQL Flavours
Distinct subsets of SQL apply to different sectors of activity. The SQL subsets in Firebird's language implemen-
tation are:
Dynamic SQL is the major part of the language which corresponds to the Part 2 (SQL/Foundation) part of the
SQL specification. DSQL represents statements passed by client applications through the public Firebird API
and processed by the database engine.
Procedural SQL augments Dynamic SQL to allow compound statements containing local variables, assign-
ments, conditions, loops and other procedural constructs. PSQL corresponds to the Part 4 (SQL/PSM) part of the
SQL specifications. Originally, PSQL extensions were available in persistent stored modules (procedures and
triggers) only, but in more recent releases they were surfaced in Dynamic SQL as well (see EXECUTE BLOCK).
Embedded SQL defines the DSQL subset supported by Firebird gpre, the application which allows you to
embed SQL constructs into your host programming language (C, C++, Pascal, Cobol, etc.) and preprocess those
embedded constructs into the proper Firebird API calls.
Note
Only a portion of the statements and expressions implemented in DSQL are supported in ESQL.
Interactive ISQL refers to the language that can be executed using Firebird isql, the command-line application
for accessing databases interactively. As a regular client application, its native language is DSQL. It also offers
a few additional commands that are not available outside its specific environment.
Both DSQL and PSQL subsets are completely presented in this reference. Neither ESQL nor ISQL flavours are
described here unless mentioned explicitly.
SQL Dialects
SQL dialect is a term that defines the specific features of the SQL language that are available when accessing a
database. SQL dialects can be defined at the database level and specified at the connection level. Three dialects
are available:
Dialect 1 is intended solely to allow backward compatibility with legacy databases from very old InterBase
versions, v.5 and below. Dialect 1 databases retain certain language features that differ from Dialect 3, the
default for Firebird databases.
- Date and time information are stored in a DATE data type. A TIMESTAMP data type is also available,
which is identical to this DATE implementation.
- Double quotes may be used as an alternative to apostrophes for delimiting string data. This is contrary to
the SQL standard: double quotes are reserved for a distinct syntactic purpose both in standard SQL and
in Dialect 3. Double-quoting strings is therefore to be avoided strenuously.
- The precision for NUMERIC and DECIMAL data types is smaller than in Dialect 3 and, if the precision
of a fixed decimal number is greater than 9, Firebird stores it internally as a long floating point value.
- Identifiers are case-insensitive and must always comply with the rules for regular identifiers; see the
section entitled Identifiers, below.
- Although generator values are stored as 64-bit integers, a Dialect 1 client request, SELECT GEN_ID
(MyGen, 1), for example, will return the generator value truncated to 32 bits.
Dialect 2 is available only on the Firebird client connection and cannot be set in the database. It is intended
to assist debugging of possible problems with legacy data when migrating a database from dialect 1 to 3.
In Dialect 3 databases,
- numbers (DECIMAL and NUMERIC data types) are stored internally as long fixed point values (scaled
integers) when the precision is greater than 9.
- The TIME data type is available for storing time-of-day data only.
- Double quotes are reserved for delimiting non-regular identifiers, enabling object names that are case-
sensitive or that do not meet the requirements for regular identifiers in other ways.
Important
Use of Dialect 3 is strongly recommended for newly developed databases and applications. Both database and
connection dialects should match, except under migration conditions with Dialect 2.
This reference describes the semantics of SQL Dialect 3 unless specified otherwise.
Error Conditions
Processing of every SQL statement either completes successfully or fails due to a specific error condition.
Basic Elements: Statements, Clauses, Keywords

Clauses: A clause defines a certain type of directive in a statement. For instance, the WHERE clause in a SELECT
statement and in some other data manipulation statements (UPDATE, DELETE) specifies criteria for searching
one or more tables for the rows that are to be selected, updated or deleted. The ORDER BY clause specifies how
the output data result set should be sorted.
Options: Options, being the simplest constructs, are specified in association with specific keywords to provide
qualification for clause elements. Where alternative options are available, it is usual for one of them to be the
default, used if nothing is specified for that option. For instance, the SELECT statement will return all of the
rows that match the search criteria unless the DISTINCT option restricts the output to non-duplicated rows.
Keywords: All words that are included in the SQL lexicon are keywords. Some keywords are reserved, meaning
their usage as identifiers for database objects, parameter names or variables is prohibited in some or all contexts.
Non-reserved keywords can be used as identifiers, although it is not recommended. From time to time, non-
reserved keywords may become reserved when some new language feature is introduced.
For instance, the following statement will be executed without errors because, although ABS is a keyword, it
is not a reserved word.
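A minimal sketch of such a statement, with an illustrative table name:

CREATE TABLE T (ABS INT NOT NULL);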
On the contrary, the following statement will return an error because ADD is both a keyword and
a reserved word.
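A sketch of a statement that fails for this reason, again with an illustrative table name:

CREATE TABLE T (ADD INT NOT NULL);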
Refer to the list of reserved words and keywords in the chapter Reserved Words and Keywords.
Identifiers
All database objects have names, often called identifiers. Two types of names are valid as identifiers: regular
names, similar to variable names in regular programming languages, and delimited names that are specific to
SQL. To be valid, each type of identifier must conform to a set of rules, as follows:
The name must start with an unaccented, 7-bit ASCII alphabetic character. It may be followed by other 7-
bit ASCII letters, digits, underscores or dollar signs. No other characters, including spaces, are valid. The
name is case-insensitive, meaning it can be declared and used in either upper or lower case. Thus, from the
system's point of view, the following names are the same:
fullname
FULLNAME
FuLlNaMe
FullName
<name> ::=
<letter> | <name><letter> | <name><digit> | <name>_ | <name>$
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
It may contain characters from any Latin character set, including accented characters, spaces and special
characters
Trailing spaces in delimited names are removed, as with any string constant
Delimited identifiers are available in Dialect 3 only. For more details on dialects, see SQL Dialects.
Note
A delimited identifier such as "FULLNAME" is the same as the regular identifiers FULLNAME, fullname,
FullName, and so on. The reason is that Firebird stores all regular names in upper case, regardless of how they
were defined or declared. Delimited identifiers are always stored according to the exact case of their definition
or declaration. Thus, "FullName" (quoted) is different from FullName (unquoted, i.e., regular) which is stored
as FULLNAME in the metadata.
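A short sketch of the difference, using assumed object names:

CREATE TABLE FRUIT (FULLNAME VARCHAR(30));       -- regular identifier, stored as FULLNAME
SELECT FullName FROM fruit;                      -- same object: regular names are case-insensitive

CREATE TABLE "Fruit" ("FullName" VARCHAR(30));   -- delimited identifiers, case-sensitive
SELECT "FullName" FROM "Fruit";                  -- must be written with the exact case and quotes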
Literals
Literals are used to represent data in a direct format. Examples of standard types of literals are:
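As an informal illustration, typical literals of the standard kinds look like this:

0, -34, 45                -- integer literals
0.0, -3.14, 3.23e-23      -- real (floating-point) literals
'text', 'don''t!'         -- string literals
DATE '2016-12-25'         -- a date literal using the shorthand cast syntax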
Details about handling the literals for each data type are discussed in the next chapter, Data Types and Subtypes.
Operators and Special Characters

Some of these characters, alone or in combinations, may be used as operators (arithmetical, string, logical), as
SQL command separators, to quote identifiers and to mark the limits of string literals or comments.
Operator Syntax:
<operator> ::=
<string concatenation operator> |
<arithmetic operator> |
<comparison operator> |
<logical operator>
Comments
Comments may be present in SQL scripts, SQL statements and PSQL modules. A comment can be any text
specified by the code writer, usually used to document how particular parts of the code work. The parser ignores
the text of comments.
Syntax:
Block comments start with the /* character pair and end with the */ character pair. Text in block comments may
be of any length and can occupy multiple lines.
In-line comments start with a pair of hyphen characters, -- and continue up to the end of the current line.
Example:
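A sketch of both kinds of comment inside a small PSQL module (object names are illustrative):

CREATE PROCEDURE DOUBLE_IT (AVALUE INT)
RETURNS (RESULT INT)
AS
BEGIN
  /* This block comment
     spans two lines */
  RESULT = AVALUE * 2;  -- this in-line comment runs to the end of the line
END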
Chapter 3
Data Types and Subtypes
define columns in a database table in the CREATE TABLE statement or change columns using ALTER TABLE
declare or change a domain using the CREATE DOMAIN or ALTER DOMAIN statements
declare local variables in stored procedures, PSQL blocks and triggers and specify parameters in stored pro-
cedures
indirectly specify arguments and return values when declaring external functions (UDFs, user-defined func-
tions)
provide arguments for the CAST() function when explicitly converting data from one type to another
Bear in mind that a time series consisting of dates in past centuries is processed without taking into account the
actual historical facts, as though the Gregorian calendar were applicable throughout the entire series.
SMALLINT
The 16-bit SMALLINT data type is for compact data storage of integer data for which only a narrow range of
possible values is required for storing them. Numbers of the SMALLINT type are within the range from -2^15 to
2^15 - 1, that is, from -32,768 to 32,767.
SMALLINT Examples:
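A hedged sketch of typical SMALLINT usage, with assumed domain and table names:

CREATE DOMAIN D_FLAG AS SMALLINT
  CHECK (VALUE IN (-1, 0, 1));

CREATE TABLE SETTINGS (
  ID SMALLINT NOT NULL PRIMARY KEY,
  ENABLED SMALLINT DEFAULT 0
);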
INTEGER
The INTEGER data type is a 32-bit integer. The shorthand name of the data type is INT. Numbers of the
INTEGER type are within the range from -2^31 to 2^31 - 1, that is, from -2,147,483,648 to 2,147,483,647.
INTEGER Example:
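A sketch with an assumed table name:

CREATE TABLE ITEMS (
  ID INTEGER NOT NULL PRIMARY KEY,
  QUANTITY INT DEFAULT 0
);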
BIGINT
BIGINT is an SQL:99-compliant 64-bit integer data type, available only in Dialect 3. If a client uses Dialect 1,
the generator value sent by the server is reduced to a 32-bit integer (INTEGER). When Dialect 3 is used for
connection, the generator value is of type BIGINT.
Numbers of the BIGINT type are within the range from -2^63 to 2^63 - 1, or from -9,223,372,036,854,775,808 to
9,223,372,036,854,775,807.
Hexadecimal Format for Integer Numbers

The usage and numerical value ranges of hexadecimal notation are described in more detail in the discussion of
number constants in the chapter entitled Common Language Elements.
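The sketch below, with an assumed single-column table, illustrates the hexadecimal notation referred to in the next paragraph:

CREATE TABLE T_HEX (B BIGINT);
INSERT INTO T_HEX VALUES (0x80000000);   -- 8 hex digits: read as a signed 32-bit INTEGER (negative)
INSERT INTO T_HEX VALUES (0x080000000);  -- 9 hex digits: read as a 64-bit value (positive)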
The hexadecimal INTEGERs in the above example are automatically cast to BIGINT before being inserted
into the table. However, this happens after the numerical value is determined, so 0x80000000 (8 digits) and
0x080000000 (9 digits) will be saved as different BIGINT values.
Floating-Point Data Types

Considering the peculiarities of storing floating-point numbers in a database, these data types are not recom-
mended for storing monetary data. For the same reasons, columns with floating-point data are not recommended
for use as keys or to have uniqueness constraints applied to them.
For testing data in columns with floating-point data types, expressions should check using a range, for instance,
BETWEEN, rather than searching for exact matches.
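For example, a range test of the kind described, with an illustrative column and tolerance:

SELECT * FROM MEASUREMENTS
WHERE TEMPERATURE BETWEEN 36.59 AND 36.61;  -- rather than TEMPERATURE = 36.6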
When using these data types in expressions, extreme care is advised regarding the rounding of evaluation results.
FLOAT
This data type has an approximate precision of 7 digits after the decimal point. To ensure the safety of storage,
rely on 6 digits.
DOUBLE PRECISION
This data type is stored with an approximate precision of 15 digits.
Fixed-Point Data Types

Different treatments limit precision for each type: precision for NUMERIC columns is exactly as declared,
while DECIMAL columns accept numbers whose precision is at least equal to what was declared.
For instance, NUMERIC(4, 2) defines a number consisting altogether of four digits, including two digits after
the decimal point; that is, it can have up to two digits before the point and no more than two digits after the
point. If the number 3.1415 is written to a column with this data type definition, the value of 3.14 will be saved
in the NUMERIC(4, 2) column.
The form of declaration for fixed-point data, for instance, NUMERIC(p, s), is common to both types. It is
important to realise that the s argument in this template is scale, rather than a count of digits after the decimal
point. Understanding the mechanism for storing and retrieving fixed-point data should help to visualise why:
for storage, the number is multiplied by 10^s (10 to the power of s), converting it to an integer; when read, the
integer is converted back.
The method of storing fixed-point data in the DBMS depends on several factors: declared precision, database
dialect, declaration type.
NUMERIC
Data Declaration Format:
NUMERIC(p, s)
Storage Examples: Further to the explanation above, the DBMS will store NUMERIC data according to the
declared precision (p) and scale (s). Some more examples are:
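A hedged illustration of the commonly documented mapping in Dialect 3 (precision 1-4 stored as SMALLINT, 5-9 as INTEGER, 10-18 as BIGINT):

CREATE TABLE NUM_SAMPLES (
  N1 NUMERIC(4, 2),   -- stored as a scaled SMALLINT
  N2 NUMERIC(9, 3),   -- stored as a scaled INTEGER
  N3 NUMERIC(18, 4)   -- stored as a scaled BIGINT (Dialect 3)
);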
Caution
Always keep in mind that the storage format depends on the precision. For instance, you define the column
type as NUMERIC(2,2) presuming that its range of values will be -0.99...0.99. However, the actual range of
values for the column will be -327.68..327.67, which is due to storing the NUMERIC(2,2) data type in the
SMALLINT format. In storage, the NUMERIC(4,2), NUMERIC(3,2) and NUMERIC(2,2) data types are the
same, in fact. It means that if you really want to store data in a column with the NUMERIC(2,2) data type and
limit the range to -0.99...0.99, you will have to create a constraint for it.
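A sketch of such a constraint, with an assumed table and column:

CREATE TABLE PRICES (
  DISCOUNT NUMERIC(2, 2) CHECK (DISCOUNT BETWEEN -0.99 AND 0.99)
);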
DECIMAL
Data Declaration Format:
DECIMAL(p, s)
Storage Examples: The storage format in the database for DECIMAL is very similar to NUMERIC, with some
differences that are easier to observe with the help of some more examples:
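As a hedged comparison with NUMERIC, DECIMAL with a small declared precision is commonly stored in a larger format (INTEGER for precision up to 9, BIGINT above that in Dialect 3), so the actual range of storable values can be wider than the declaration suggests:

CREATE TABLE DEC_SAMPLES (
  D1 DECIMAL(4, 2),   -- stored as a scaled INTEGER, not SMALLINT
  D2 DECIMAL(12, 2)   -- stored as a scaled BIGINT in Dialect 3
);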
Data Types for Dates and Times

The Dialect 1 DATE type stores both date and time-of-day, equivalent to TIMESTAMP in Dialect 3. Dialect 1
has no date-only type.
Note
Dialect 1 DATE data can be defined alternatively as TIMESTAMP and this is recommended for new definitions
in Dialect 1 databases.
Fractions of Seconds: If fractions of seconds are stored in date and time data types, Firebird stores them to ten-
thousandths of a second. If a lower granularity is preferred, the fraction can be specified explicitly as thousandths,
hundredths or tenths of a second in Dialect 3 databases of ODS 11 or higher.
The time-part of a TIME or TIMESTAMP is a 4-byte WORD, with room for deci-milliseconds precision, and time
values are stored as the number of deci-milliseconds elapsed since midnight. The actual precision of values
stored in or read from time(stamp) functions and variables is:
CURRENT_TIME defaults to seconds precision and can be specified up to milliseconds precision with
CURRENT_TIME (0|1|2|3)
The EXTRACT() function returns up to deci-milliseconds precision with the SECOND and MILLISECOND
arguments
For TIME and TIMESTAMP literals Firebird happily accepts up to deci-milliseconds precision, but truncates
(not rounds) the time part to the nearest lower or equal millisecond. Try, for example, SELECT TIME
'14:37:54.1249' FROM rdb$database
the '+' and '-' operators work with deci-milliseconds precision, but only within the expression. As soon as
something is stored or even just SELECTed from RDB$DATABASE, it reverts to milliseconds precision
Deci-milliseconds precision is rare and is not currently stored in columns or variables. The best assumption to
make from all this is that, although Firebird stores TIME and the TIMESTAMP time-part values as the number
of deci-milliseconds (10^-4 seconds) elapsed since midnight, the actual precision could vary from seconds to
milliseconds.
DATE
The DATE data type in Dialect 3 stores only date without time. The available range for storing data is from
January 01, 1 to December 31, 9999.
Tip
In Dialect 1, date literals without a time part, as well as 'TODAY', 'YESTERDAY' and 'TOMORROW' automati-
cally get a zero time part.
If, for some reason, it is important to you to store a Dialect 1 timestamp literal with an explicit zero time-part,
the engine will accept a literal like '25.12.2016 00:00:00.0000'. However, '25.12.2016' would
have precisely the same effect, with fewer keystrokes!
TIME
The TIME data type is available in Dialect 3 only. It stores the time of day within the range from 00:00:00.0000
to 23:59:59.9999.
If you need to get the time-part from DATE in Dialect 1, you can use the EXTRACT function.
See also the EXTRACT() function in the chapter entitled Built-in Functions and Variables.
TIMESTAMP
The TIMESTAMP data type is available in Dialect 3 and Dialect 1. It comprises two 32-bit words, a date-part
and a time-part, to form a structure that stores both date and time-of-day. It is the same as the DATE type in
Dialect 1.
The EXTRACT function works equally well with TIMESTAMP as with the Dialect 1 DATE type.
Operations Using Date and Time Values

An example is to subtract an earlier date, time or timestamp from a later one, resulting in an interval of time,
in days and fractions of days.
Table 3.3. Arithmetic Operations for Date and Time Data Types
Notes
Character Data Types

If no character set is explicitly specified when defining a character object, the default character set specified
when the database was created will be used. If the database does not have a default character set defined, the
field gets the character set NONE.
Unicode
Most current development tools support Unicode, implemented in Firebird with the character sets UTF8 and
UNICODE_FSS. UTF8 comes with collations for many languages. UNICODE_FSS is more limited and is used
mainly by Firebird internally for storing metadata. Keep in mind that one UTF8 character occupies up to 4 bytes,
thus limiting the size of CHAR fields to 8,191 characters (32,767/4).
Note
The actual bytes per character value depends on the range the character belongs to. Non-accented Latin letters
occupy 1 byte, Cyrillic letters from the WIN1251 encoding occupy 2 bytes, characters from other encodings
may occupy up to 4 bytes.
The UTF8 character set implemented in Firebird supports the latest version of the Unicode standard, which
makes it the recommended choice for international databases.
Character set OCTETS: Data in OCTETS encoding are treated as bytes that may not actually be interpreted as
characters. OCTETS provides a way to store binary data, which could be the results of some Firebird functions.
The database engine has no concept of what it is meant to do with a string of bits in OCTETS, other than just
store it and retrieve it. Again, the client side is responsible for validating the data, presenting them in formats
that are meaningful to the application and its users and handling any exceptions arising from decoding and
encoding them.
Collation Sequence
Each character set has a default collation sequence (COLLATE) that specifies the collation order. Usually, it
provides nothing more than ordering based on the numeric code of the characters and a basic mapping of upper-
and lower-case characters. If some behaviour is needed for strings that is not provided by the default collation
sequence and a suitable alternative collation is supported for that character set, a COLLATE collation clause
can be specified in the column definition.
A COLLATE collation clause can be applied in other contexts besides the column definition. For greater-
than/less-than comparison operations, it can be added in the WHERE clause of a SELECT statement. If output
needs to be sorted in a special alphabetic sequence, or case-insensitively, and the appropriate collation exists,
then a COLLATE clause can be included with the ORDER BY clause when rows are being sorted on a character
field and with the GROUP BY clause in case of grouping operations.
Case-Insensitive Searching
For a case-insensitive search, the UPPER function could be used to convert both the search argument and the
searched strings to upper-case before attempting a match:
where upper(name) = upper(:flt_name)
For strings in a character set that has a case-insensitive collation available, you can simply apply the collation,
to compare the search argument and the searched strings directly. For example, using the WIN1251 character set,
the collation PXW_CYRL is case-insensitive for this purpose:
WHERE FIRST_NAME COLLATE PXW_CYRL >= :FLT_NAME
ORDER BY NAME COLLATE PXW_CYRL
The collation sequences available for the UTF8 character set are:

UCS_BASIC: Collation works according to the position of the character in the table (binary). Added in Firebird 2.0.

UNICODE: Collation works according to the UCA algorithm (Unicode Collation Algorithm), alphabetical. Added in Firebird 2.0.

UTF8: The default, binary collation, identical to UCS_BASIC, which was added for SQL compatibility.

UNICODE_CI: Case-insensitive collation, works without taking character case into account. Added in Firebird 2.1.

UNICODE_CI_AI: Case-insensitive, accent-insensitive collation, works alphabetically without taking character case or accents into account. Added in Firebird 2.5.
Example: An example of collation for the UTF8 character set without taking into account the case or accentuation
of characters (similar to COLLATE PXW_CYRL).
...
ORDER BY NAME COLLATE UNICODE_CI_AI
Character Indexes
In Firebird earlier than version 2.0, a problem can occur with building an index for character columns that use
a non-standard collation sequence: the length of an indexed field is limited to 252 bytes with no COLLATE
specified or 84 bytes if COLLATE is specified. Multi-byte character sets and compound indexes limit the size
even further.
Starting from Firebird 2.0, the maximum length for an index equals one quarter of the page size, i.e. from 1,024
to 4,096 bytes. The maximum length of an indexed string is 9 bytes less than that quarter-page limit.
Calculating Maximum Length of an Indexed String Field: The following formula calculates the maximum length
of an indexed string (in characters):
max_char_length = FLOOR((page_size / 4 - 9) / N)
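As a hedged worked example, with N taken as the number of bytes per character: for a 4,096-byte page and the 4-byte UTF8 character set,

max_char_length = FLOOR((4096 / 4 - 9) / 4)
                = FLOOR(1015 / 4)
                = 253 characters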
The table below shows the maximum length of an indexed string (in characters), according to page size and
character set, calculated using this formula.
Table 3.5. Maximum Index Lengths by Page Size and Character Size
Note
With case-insensitive collations (_CI), one character in the index will occupy not 4, but 6 (six) bytes, so the
maximum key length for a page of, for example, 4,096 bytes, will be 169 characters.
See also: CREATE DATABASE, Collation sequence, SELECT, WHERE, GROUP BY, ORDER BY
CHAR
CHAR is a fixed-length data type. If the entered number of characters is less than the declared length, trailing
spaces will be added to the field. Generally, the pad character does not have to be a space: it depends on the
character set. For example, the pad character for the OCTETS character set is zero.
The full name of this data type is CHARACTER, but there is no requirement to use full names and people rarely
do so.
Fixed-length character data can be used to store codes whose length is standard and has a definite width in
directories. An example of such a code is an EAN13 barcode: 13 characters, all filled.
Declaration Syntax:
Note
A valid length is from 1 to the maximum number of characters that can be accommodated within 32,767
bytes.
VARCHAR
VARCHAR is the basic string type for storing texts of variable length, up to a maximum of 32,765 bytes. The
stored structure is equal to the actual size of the data plus 2 bytes where the length of the data is recorded.
All characters that are sent from the client application to the database are considered meaningful, including the
leading and trailing spaces. However, trailing spaces are not stored: they will be restored upon retrieval, up to
the recorded length of the string.
The full name of this type is CHARACTER VARYING. Another variant of the name is written as CHAR VARYING.
Syntax:
NCHAR
NCHAR is a fixed-length character data type with the ISO8859_1 character set predefined. In all other respects
it is the same as CHAR.
Syntax:
NCHAR (length)
The synonymous name is NATIONAL CHAR. A similar data type is available for the variable-length string type:
NATIONAL CHARACTER VARYING.
Syntax:
Shortened syntax:
Segment Size: Specifying the BLOB segment is a throwback to times past, when applications for working with
BLOB data were written in C (Embedded SQL) with the help of the gpre pre-compiler. Nowadays, it is effec-
tively irrelevant. The segment size for BLOB data is determined by the client side and is usually larger than
the data page size, in any case.
BLOB Subtypes
The optional SUB_TYPE parameter specifies the nature of data written to the column. Firebird provides two pre-
defined subtypes for storing user data:
Subtype 0: BINARY: If a subtype is not specified, the specification is assumed to be for untyped data and the
default SUB_TYPE 0 is applied. The alias for subtype zero is BINARY. This is the subtype to specify when the
data are any form of binary file or stream: images, audio, word-processor files, PDFs and so on.
Subtype 1: TEXT: Subtype 1 has an alias, TEXT, which can be used in declarations and definitions. For instance,
BLOB SUB_TYPE TEXT. It is a specialized subtype used to store plain text data that is too large to fit into a
string type. A CHARACTER SET may be specified, if the field is to store text with a different encoding to that
specified for the database. From Firebird 2.0, a COLLATE clause is also supported.
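A sketch of declarations using both pre-defined subtypes, with illustrative names:

CREATE TABLE DOCUMENTS (
  ID INTEGER NOT NULL PRIMARY KEY,
  SCAN BLOB SUB_TYPE 0,                        -- untyped binary data (alias: BINARY)
  BODY BLOB SUB_TYPE TEXT CHARACTER SET UTF8   -- large text with an explicit character set
);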
Custom Subtypes: It is also possible to add custom data subtypes, for which the range of enumeration from -1 to
-32,768 is reserved. Custom subtypes enumerated with positive numbers are not allowed, as the Firebird engine
uses the numbers from 2 upward for some internal subtypes in metadata.
BLOB Specifics
Size: The maximum size of a BLOB field is limited to 4GB, regardless of whether the server is 32-bit or 64-bit.
(The internal structures related to BLOBs maintain their own 4-byte counters.) For a page size of 4 KB (4,096
bytes) the maximum size is lower: slightly less than 2 GB.
Operations and Expressions: Text BLOBs of any length and any character set, including multi-byte, can be
operands for practically any statement or internal function. The following operators are supported completely:
= (assignment)
=, <>, <, <=, >, >= (comparison)
|| (concatenation)
BETWEEN, IS [NOT] DISTINCT FROM,
IN, ANY|SOME,
ALL
Partial support:
An error occurs with these if the search argument is larger than or equal to 32 KB:
Aggregation clauses work not on the contents of the field itself, but on the BLOB ID. Aside from that, there
are some quirks:
BLOB Storage:
By default, a regular record is created for each BLOB and it is stored on a data page that is allocated for it.
If the entire BLOB fits onto this page, it is called a level 0 BLOB. The number of this special record is stored
in the table record and occupies 8 bytes.
If a BLOB does not fit onto one data page, its contents are put onto separate pages allocated exclusively to it
(blob pages), while the numbers of these pages are stored into the BLOB record. This is a level 1 BLOB.
If the array of page numbers containing the BLOB data does not fit onto a data page, the array is put on
separate blob pages, while the numbers of these pages are put into the BLOB record. This is a level 2 BLOB.
ARRAY Type
The support of arrays in the Firebird DBMS is a departure from the traditional relational model. Supporting
arrays in the DBMS could make it easier to solve some data-processing tasks involving large sets of similar data.
Arrays in Firebird are stored in a BLOB of a specialized type. Arrays can be one-dimensional or multi-dimensional
and of any data type except BLOB and ARRAY.
Example:
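A sketch of such a definition, with assumed table and column names:

CREATE TABLE SAMPLE_ARR (
  ID INTEGER NOT NULL PRIMARY KEY,
  ARR_INT INTEGER[4]
);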
This example will create a table with a field of the array type consisting of four integers. The subscripts of this
array are from 1 to 4.
Explicit subscript boundaries can be specified in the declaration, in the form [<lower>:<upper>].
The DBMS does not offer much in the way of language or tools for working with the contents of arrays.
The database employee.fdb, found in the ../examples/empbuild directory of any Firebird distribution
package, contains a sample stored procedure showing some simple work with arrays:
If the features described are enough for your tasks, you might consider using arrays in your projects. Currently,
no improvements are planned to enhance support for arrays in Firebird.
SQL_NULL Data Type

An evaluation problem occurs when optional filters are used to write queries of the following type:
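A sketch of the pattern, with a hypothetical column COL1 and named parameter :PARAM1:

WHERE COL1 = :PARAM1 OR :PARAM1 IS NULL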
After processing, at the API level, the query will look like this:
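Sketched with the same hypothetical column, now with positional parameters:

WHERE COL1 = ? OR ? IS NULL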
This is a case where the developer writes an SQL query and considers :param1 as though it were a variable that
he can refer to twice. However, at the API level, the query contains two separate and independent parameters.
The server cannot determine the type of the second parameter since it comes in association with IS NULL.
The SQL_NULL data type solves this problem. Whenever the engine encounters a '? IS NULL' predicate
in a query, it assigns the SQL_NULL type to the parameter, which indicates that the parameter is only about
nullness; the data type or the value need not be addressed.
The following example demonstrates its use in practice. It assumes two named parameters, say :size and
:colour, which might, for example, get values from on-screen text fields or drop-down lists. Each named
parameter corresponds with two positional parameters in the query.
SELECT
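  SH.SIZE, SH.COLOUR, SH.PRICE
FROM SHIRTS SH
WHERE (SH.SIZE = ? OR ? IS NULL)
  AND (SH.COLOUR = ? OR ? IS NULL)
-- The continuation above is a reconstruction: the SHIRTS table and its SIZE,
-- COLOUR and PRICE columns are assumed names, used for illustration only.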
Explaining what happens here assumes the reader is familiar with the Firebird API and the passing of parameters
in XSQLVAR structures; what happens under the surface will not be of interest to those who are not writing
drivers or applications that communicate using the naked API.
The application passes the parameterized query to the server in the usual positional ?-form. Pairs of identical
parameters cannot be merged into one so, for two optional filters, for example, four positional parameters are
needed: one for each ? in our example.
After the call to isc_dsql_describe_bind(), the SQLTYPE of the second and fourth parameters will be
set to SQL_NULL. Firebird has no knowledge of their special relation with the first and third parameters: that
responsibility lies entirely on the application side.
Once the values for size and colour have been set (or left unset) by the user and the query is about to be executed,
each pair of XSQLVARs must be filled as follows:
Second parameter (NULL test): set sqldata to null (null pointer, not SQL NULL) and *sqlind to 0 (for NOT
NULL)
Syntax:
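A sketch of the general form, consistent with the Type cast entry in the table of expression elements in the next chapter (the TYPE OF variants are described below):

CAST (<value> AS <data_type>)
CAST (<value> AS [TYPE OF] <domain>)
CAST (<value> AS TYPE OF COLUMN <table>.<column>)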
Casting to a Domain
When you cast to a domain, any constraints declared for it are taken into account, i.e., NOT NULL or CHECK
constraints. If the <value> does not pass the check, the cast will fail.
If TYPE OF is additionally specified, casting to its base type, any domain constraints are ignored during the
cast. If TYPE OF is used with a character type (CHAR/VARCHAR), the character set and collation are retained.
Casting to TYPE OF COLUMN

Only the type of the column itself is used. For character types, the cast includes the character set, but not the
collation. The constraints and default values of the source column are not applied.
Example:
SELECT
CAST ('I have many friends' AS TYPE OF COLUMN TTT.S)
FROM RDB$DATABASE;
Important
Keep in mind that partial information loss is possible. For instance, when you cast the TIMESTAMP data type
to the DATE data type, the time-part is lost.
Literal Formats
To cast string data types to the DATE, TIME or TIMESTAMP data types, you need the string argument to be
one of the predefined date and time literals (see Table 3.7) or a representation of the date in one of the allowed
date-time literal formats:
<datetime_literal> ::= {
[YYYY<p>]MM<p>DD[<p>HH[<p>mm[<p>SS[<p>NNNN]]]] |
MM<p>DD[<p>YYYY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]] |
DD<p>MM[<p>YYYY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]] |
MM<p>DD[<p>YY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]] |
DD<p>MM[<p>YY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]] |
NOW |
TODAY |
TOMORROW |
YESTERDAY
}
<date_literal> ::= {
[YYYY<p>]MM<p>DD |
MM<p>DD[<p>YYYY] |
DD<p>MM[<p>YYYY] |
MM<p>DD[<p>YY] |
DD<p>MM[<p>YY] |
TODAY |
TOMORROW |
YESTERDAY
}
<time_literal> := HH[<p>mm[<p>SS[<p>NNNN]]]
Argument            Description
datetime_literal    Date and time literal
time_literal        Time literal
date_literal        Date literal
YYYY                Four-digit year
YY                  Two-digit year
MM                  Month. It may contain 1 or 2 digits (1-12 or 01-12). You can also specify the
                    three-letter shorthand name or the full name of a month in English. Case-insensitive
DD                  Day. It may contain 1 or 2 digits (1-31 or 01-31)
HH                  Hour. It may contain 1 or 2 digits (0-23 or 00-23)
mm                  Minutes. It may contain 1 or 2 digits (0-59 or 00-59)
SS                  Seconds. It may contain 1 or 2 digits (0-59 or 00-59)
NNNN                Ten-thousandths of a second. It may contain from 1 to 4 digits (0-9999)
p                   A separator, any of the permitted characters. Leading and trailing spaces are ignored
Literal        Description               Data Type, Dialect 1    Data Type, Dialect 3
'NOW'          Current date and time     DATE                    TIMESTAMP
'TODAY'        Current date              DATE with zero time     DATE
'TOMORROW'     Current date + 1 (day)    DATE with zero time     DATE
'YESTERDAY'    Current date - 1 (day)    DATE with zero time     DATE
Important
Use of the complete specification of the year in the four-digit form, YYYY, is strongly recommended, to
avoid confusion in date calculations and aggregations.
select
cast('04.12.2014' as date) as d1, -- DD.MM.YYYY
cast('04 12 2014' as date) as d2, -- MM DD YYYY
cast('4-12-2014' as date) as d3, -- MM-DD-YYYY
cast('04/12/2014' as date) as d4, -- MM/DD/YYYY
cast('04,12,2014' as date) as d5, -- MM,DD,YYYY
cast('04.12.14' as date) as d6, -- DD.MM.YY
-- DD.MM with current year
cast('04.12' as date) as d7,
-- MM/DD with current year
cast('04/12' as date) as d8,
cast('2014/12/04' as date) as d9, -- YYYY/MM/DD
cast('2014 12 04' as date) as d10, -- YYYY MM DD
cast('2014.12.04' as date) as d11, -- YYYY.MM.DD
cast('2014-12-04' as date) as d12, -- YYYY-MM-DD
cast('4 Jan 2014' as date) as d13, -- DD MM YYYY
cast('2014 Jan 4' as date) as dt14, -- YYYY MM DD
cast('Jan 4, 2014' as date) as dt15, -- MM DD, YYYY
cast('11:37' as time) as t1, -- HH:mm
cast('11:37:12' as time) as t2, -- HH:mm:ss
cast('11:31:12.1234' as time) as t3, -- HH:mm:ss.nnnn
Syntax:
data_type 'date_literal_string'
Example:
-- 1
UPDATE PEOPLE
SET AGECAT = 'SENIOR'
WHERE BIRTHDATE < DATE '1-Jan-1943';
-- 2
INSERT INTO APPOINTMENTS
(EMPLOYEE_ID, CLIENT_ID, APP_DATE, APP_TIME)
VALUES (973, 8804, DATE 'today' + 2, TIME '16:00');
-- 3
NEW.LASTMOD = TIMESTAMP 'now';
Note
These shorthand expressions are evaluated directly during parsing, as though the statement were already pre-
pared for execution. Thus, even if the query is run several times, the value of, for instance, timestamp
'now' remains the same no matter how much time passes.
If you need the time to be evaluated at each execution, use the full CAST syntax. An example of using such
an expression in a trigger:
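A sketch of such a trigger fragment, with a hypothetical column name:

NEW.CHANGE_DATE = CAST('now' AS TIMESTAMP);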
In Dialect 1, in many expressions, one type is implicitly cast to another without the need to use the CAST
function. For instance, the following statement in Dialect 1 is valid:
UPDATE ATABLE
SET ADATE = '25.12.2016' + 1
and the date literal will be cast to the date type implicitly.
In Dialect 3, this statement will throw error 335544569, "Dynamic SQL Error: expression evaluation not sup-
ported, Strings cannot be added or subtracted in dialect 3"; a cast will be needed:
UPDATE ATABLE
SET ADATE = CAST ('25.12.2016' AS DATE) + 1
UPDATE ATABLE
SET ADATE = DATE '25.12.2016' + 1
In Dialect 1, mixing integer data and numeric strings is usually possible because the parser will try to cast the
string implicitly. For example,
2 + '1'
In Dialect 3, an expression like this will raise an error, so you will need to write it as a CAST expression:
2 + CAST('1' AS SMALLINT)
Example:
SELECT 30||' days hath September, April, June and November' CONCAT$
FROM RDB$DATABASE
CONCAT$
------------------------------------------------
30 days hath September, April, June and November
Domain usage is not limited to column definitions for tables and views. Domains can be used to declare input
and output parameters and variables in PSQL code.
Domain Attributes
A domain definition contains required and optional attributes. The data type is a required attribute. Optional
attributes include:
a default value
CHECK constraints
character set (for character data types and text BLOB fields)
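A sketch bringing these attributes together in one hypothetical domain definition:

CREATE DOMAIN D_LOGIN AS VARCHAR(20) CHARACTER SET UTF8
  DEFAULT 'guest'
  CHECK (VALUE <> '');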
See also: Explicit Data Type Conversion for the description of differences in the data conversion mechanism
when domains are specified for the TYPE OF and TYPE OF COLUMN modifiers.
Domain Override
While defining a column using a domain, it is possible to override some of the attributes inherited from the
domain. Table 3.9 summarises the rules for domain override.
Short Syntax:
See also: CREATE DOMAIN in the Data Definition Language (DDL) section.
Altering a Domain
To change the attributes of a domain, use the DDL statement ALTER DOMAIN. With this statement you can
Short Syntax:
When planning to alter a domain, its dependencies must be taken into account: whether there are table columns,
any variables, input and/or output parameters with the type of this domain declared in the PSQL code. If you
change domains in haste, without carefully checking them, your code may stop working!
Important
When you convert data types in a domain, you must not perform any conversions that may result in data loss.
Also, for example, if you convert VARCHAR to INTEGER, check carefully that all data using this domain can
be successfully converted.
See also: ALTER DOMAIN in the Data Definition Language (DDL) section.
Deleting a Domain
To delete a domain, use the DDL statement DROP DOMAIN.
Syntax:
DROP DOMAIN domain_name
Important
A domain cannot be deleted while it is still in use, i.e., while any table column, PSQL variable or parameter still references it.
Example:
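For instance, assuming a previously created domain named D_COUNTRYNAME:

DROP DOMAIN D_COUNTRYNAME;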
See also: DROP DOMAIN in the Data Definition Language (DDL) section.
Chapter 4
Common Language Elements
Expressions
SQL expressions provide formal methods for evaluating, transforming and comparing values. SQL expressions
may include table columns, variables, constants, literals, various statements and predicates and also other ex-
pressions. The complete list of possible tokens in expressions follows.
Column name: Identifier of a column from a specified table, used in evaluations or as a search condition. A column of the array type cannot be an element in an expression, except when used with the IS [NOT] NULL predicate.
Array element: An expression may contain a reference to an array member, i.e., <array_name>[s], where s is the subscript of the member in the array <array_name>.
Context variable: An internally-defined context variable.
Local variable: Declared local variable, input or output parameter of a PSQL module (stored procedure, trigger, unnamed PSQL block in DSQL).
Positional parameter: A member of an ordered group of one or more unnamed parameters passed to a stored procedure or prepared query.
Subquery: A SELECT statement enclosed in parentheses that returns a single (scalar) value or, when used in existential predicates, a set of values.
Function identifier: The identifier of an internal or external function in a function expression.
Type cast: An expression explicitly converting data of one data type to another using the CAST function (CAST (<value> AS <datatype>)). For date/time literals only, the shorthand syntax <datatype> <value> is also supported (DATE '25.12.2016').
Conditional expression: Expressions using CASE and related internal functions.
Parentheses: Bracket pairs () used to group expressions. Operations inside the parentheses are performed before operations outside them. When nested parentheses are used, the most deeply nested expressions are evaluated first and then the evaluations move outward through the levels of nesting.
COLLATE clause: Clause applied to CHAR and VARCHAR types to specify the character-set-specific collation sequence to use in string comparisons.
NEXT VALUE FOR sequence: Expression for obtaining the next value of a specified generator (sequence). The internal GEN_ID() function does the same.
Constants
A constant is a value that is supplied directly in an SQL statement, not derived from an expression, a parameter,
a column reference nor a variable. It can be a string or a number.
Note
Double quotes are NOT VALID for quoting strings. SQL reserves a different purpose for them.
If a literal apostrophe is required within a string constant, it is escaped by prefixing it with another apostrophe. For example, 'Mother O''Reilly''s home-made hooch'.
Care should be taken with the string length if the value is to be written to a VARCHAR column. The maximum
length for a VARCHAR is 32,765 bytes.
The character set of a string constant is assumed to be the same as the character set of its destined storage.
From Firebird 2.5 forward, string literals can be entered in hexadecimal notation, so-called binary strings.
Each pair of hex digits defines one byte in the string. Strings entered this way will have character set OCTETS
by default but the introducer syntax can be used to force a string to be interpreted as another character set.
Syntax:
{x|X}'<hexstring>'
Examples:
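For instance (the byte values shown here happen to spell out an ASCII string; the query is purely illustrative):

select x'4E657276656E' as hexstr
from rdb$database;
-- isql displays the OCTETS value as the hex digits 4E657276656E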
Notes
The client interface determines how binary strings are displayed to the user. The isql utility, for example,
uses upper case letters A-F, while FlameRobin uses lower case letters. Other client programs may use other
conventions, such as displaying spaces between the byte pairs: '4E 65 72 76 65 6E'.
The hexadecimal notation allows any byte value (including 00) to be inserted at any position in the string.
However, if you want to coerce it to anything other than OCTETS, it is your responsibility to supply the bytes
in a sequence that is valid for the target character set.
If necessary, a string literal may be preceded by a character set name, itself prefixed with an underscore _.
This is known as introducer syntax. Its purpose is to inform the engine about how to interpret and store the
incoming string.
Example
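A possible illustration, assuming the bytes are meant to be read as ISO8859_1:

select _ISO8859_1 x'53E46765' as word
from rdb$database;
-- the engine interprets the bytes 53 E4 67 65 as the ISO8859_1 string 'Säge'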
Number Constants
A number constant is any valid number in a supported notation.
In SQL, for numbers in the standard decimal notation, the decimal point is always represented by a period (full stop, dot) character and thousands are not separated. Inclusion of commas, blanks, etc. will cause errors.
From Firebird 2.5 forward, integer values can be entered in hexadecimal notation. Numbers with 1-8 hex digits
will be interpreted as type INTEGER; numbers with 9-16 hex digits as type BIGINT.
Syntax:
0{x|X}<hexdigits>
Examples:
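A few illustrative values (the comments show the resulting type and decimal value):

select 0x10 as n1,          -- INTEGER, 16
       0x7FFFFFFF as n2,    -- INTEGER, 2147483647
       0x080000000 as n3    -- BIGINT (9 hex digits), 2147483648
from rdb$database;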
Hex numbers in the range 0 .. 7FFF FFFF are positive INTEGERs with values between 0 .. 2147483647
decimal. To coerce a number to BIGINT, prepend enough zeroes to bring the total number of hex digits to
nine or above. That changes the type but not the value.
Hex numbers between 8000 0000 .. FFFF FFFF require some attention:
- When written with eight hex digits, as in 0x9E44F9A8, a value is interpreted as 32-bit INTEGER. Since
the leftmost bit (sign bit) is set, it maps to the negative range -2147483648 .. -1 decimal.
- With one or more zeroes prepended, as in 0x09E44F9A8, a value is interpreted as 64-bit BIGINT in the
range 0000 0000 8000 0000 .. 0000 0000 FFFF FFFF. The sign bit is not set now, so they map to the
positive range 2147483648 .. 4294967295 decimal.
Thus, in this range, and only in this range, prepending a mathematically insignificant 0 results in a totally different value. This is something to be aware of.
Hex numbers between 1 0000 0000 .. 7FFF FFFF FFFF FFFF are all positive BIGINT.
Hex numbers between 8000 0000 0000 0000 .. FFFF FFFF FFFF FFFF are all negative BIGINT.
A SMALLINT cannot be written in hex, strictly speaking, since even 0x1 is evaluated as INTEGER. How-
ever, if you write a positive integer within the 16-bit range 0x0000 (decimal zero) to 0x7FFF (decimal 32767)
it will be converted to SMALLINT transparently.
It is possible to write a negative SMALLINT in hex, using a 4-byte hex number within the range
0xFFFF8000 (decimal -32768) to 0xFFFFFFFF (decimal -1).
SQL Operators
SQL operators comprise operators for comparing, calculating, evaluating and concatenating values.
Operator Precedence
SQL operators are divided into four types. Each operator type has a precedence, a ranking that determines the order in which operators and the values obtained with their help are evaluated in an expression. The higher the precedence of the operator type, the earlier it will be evaluated. Each operator also has its own precedence within its type, which determines the order in which the operators of that type are evaluated in an expression.
Operators with the same precedence are evaluated from left to right. To force a different evaluation order,
operations can be grouped by means of parentheses.
Concatenation Operator
The concatenation operator, two pipe characters (||, known as double pipe), concatenates (connects together) two character strings to form a single string. Character strings can be constants or values obtained from columns or other expressions.
Example:
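An illustrative query against Firebird's employee sample database (the column names are assumed):

SELECT LAST_NAME || ', ' || FIRST_NAME AS FULL_NAME
FROM EMPLOYEE;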
Arithmetic Operators
Example:
UPDATE T
SET A = 4 + 1/(B-C)*D
Note
Where operators have the same precedence, they are evaluated in left-to-right sequence.
Comparison Operators
This group also includes comparison predicates BETWEEN, LIKE, CONTAINING, SIMILAR TO, IS and others.
Example:
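For illustration (the SALARY column of the employee sample database is assumed):

SELECT *
FROM EMPLOYEE
WHERE SALARY >= 10000 AND SALARY <= 20000;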
Logical Operators
Example:
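A sketch combining NOT, AND and OR (table and column names from the employee sample database are assumed):

SELECT *
FROM EMPLOYEE
WHERE NOT (SALARY > 10000)
  AND (FIRST_NAME = 'Pete' OR FIRST_NAME = 'Ann');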
NEXT VALUE FOR
NEXT VALUE FOR returns the next value of a sequence. SEQUENCE is an SQL-compliant term for a generator in Firebird and its ancestor, InterBase. The NEXT VALUE FOR operator is equivalent to the legacy GEN_ID (..., 1) function and is the recommended syntax for retrieving the next sequence value.
Example:
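For instance, in a trigger body (the sequence name CUSTSEQ is assumed):

NEW.CUST_NO = NEXT VALUE FOR CUSTSEQ;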
Note
Unlike GEN_ID (..., 1), the NEXT VALUE FOR variant does not take any parameters and thus, provides no way
to retrieve the current value of a sequence, nor to step the next value by more than 1. GEN_ID (..., <step value>)
is still needed for these tasks. A <step value> of 0 returns the current sequence value.
Conditional Expressions
A conditional expression is one that returns different values according to how a certain condition is met. It
is composed by applying a conditional function construct, of which Firebird supports several. This section de-
scribes only one conditional expression construct: CASE. All other conditional expressions apply internal func-
tions derived from CASE and are described in Conditional Functions.
CASE
The CASE construct returns a single value from a number of possible values. Two syntactic variants are supported: the simple CASE, which compares a single test expression against a list of values, and the searched CASE, which works like a series of if ... else if ... else if clauses.
Simple CASE
Syntax:
CASE <test-expr>
WHEN <expr> THEN <result>
[WHEN <expr> THEN <result> ...]
[ELSE <defaultresult>]
END
When this variant is used, <test-expr> is compared to <expr> 1, <expr> 2 etc., until a match is found
and the corresponding result is returned. If no match is found, <defaultresult> from the optional ELSE clause is
returned. If there are no matches and no ELSE clause, NULL is returned.
The matching works identically to the "=" operator. That is, if <test-expr> is NULL, it does not match any <expr>,
not even an expression that resolves to NULL.
The returned result does not have to be a literal value: it might be a field or variable name, compound expression
or NULL literal.
Example:
SELECT
NAME,
AGE,
CASE UPPER(SEX)
WHEN 'M' THEN 'Male'
WHEN 'F' THEN 'Female'
ELSE 'Unknown'
END GENDER,
RELIGION
FROM PEOPLE
A short form of the simple CASE construct is used in the DECODE function.
Searched CASE
Syntax:
CASE
WHEN <bool_expr> THEN <result>
[WHEN <bool_expr> THEN <result> ]
[ELSE <defaultresult>]
END
The <bool_expr> expression is one that gives a ternary logical result: TRUE, FALSE or NULL. The first expres-
sion to return TRUE determines the result. If no expressions return TRUE, <defaultresult> from the optional
ELSE clause is returned as the result. If no expressions return TRUE and there is no ELSE clause, the result will
be NULL.
As with the simple CASE construct, the result need not be a literal value: it might be a field or variable name,
a compound expression, or be NULL.
Example:
CANVOTE = CASE
WHEN AGE >= 18 THEN 'Yes'
WHEN AGE < 18 THEN 'No'
ELSE 'Unsure'
END
NULL in Expressions
NULL is not a value in SQL, but a state indicating that the value of the element is either unknown or does not exist. It is not a zero, nor a void, nor an empty string, and it does not act like any value.
When you use NULL in numeric, string or date/time expressions, the result will always be NULL. When you
use NULL in logical (Boolean) expressions, the result will depend on the type of the operation and on other
participating values. When you compare a value to NULL, the result will be unknown.
Important to Note
NULL means NULL but, in Firebird, the logical result unknown is also represented by NULL. For example, each of the following expressions evaluates to NULL:
1 + 2 + 3 + NULL
'Home ' || 'sweet ' || NULL
MyField = NULL
MyField <> NULL
NULL = NULL
not (NULL)
If it seems difficult to understand why, remember that NULL is a state that stands for unknown.
Up to and including Firebird 2.5.x, there is no implementation for a logical (Boolean) data type; that is coming in Firebird 3. However, there are logical expressions (predicates) that can return true, false or unknown.
Examples:
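A few illustrative predicates and the results they can produce (F and G stand for arbitrary fields or expressions):

F IS NULL                  -- TRUE or FALSE, never unknown
F IS NOT DISTINCT FROM G   -- TRUE or FALSE, never unknown
F = G                      -- TRUE, FALSE, or unknown (NULL) if F or G is NULL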
Subqueries
A subquery is a special form of expression that is actually a query embedded within another query. Subqueries
are written in the same way as regular SELECT queries, but they must be enclosed in parentheses. Subquery
expressions can be used in the following ways:
- To obtain values or conditions for search predicates (the WHERE and HAVING clauses).
- To produce a set that the enclosing query can select from, as though it were a regular table or view. Subqueries like this appear in the FROM clause (derived tables) or in a Common Table Expression (CTE).
Correlated Subqueries
A subquery can be correlated. A query is correlated when the subquery and the main query are interdependent.
To process each record in the subquery, it is necessary to fetch a record in the main query; i.e., the subquery
fully depends on the main query.
SELECT *
FROM Customers C
WHERE EXISTS
(SELECT *
FROM Orders O
WHERE C.cnum = O.cnum
AND O.adate = DATE '10.03.1990');
When subqueries are used to get the values of the output column in the SELECT list, a subquery must return
a scalar result.
Scalar Results
Subqueries used in search predicates, other than existential and quantified predicates, must return a scalar result;
that is, not more than one column from not more than one matching row or aggregation. If the result would
return more, a run-time error will occur (Multiple rows in a singleton select...).
Note
Although it is reporting a genuine error, the message can be slightly misleading. A singleton SELECT is
a query that must not be capable of returning more than one row. However, singleton and scalar are not
synonymous: not all singleton SELECTS are required to be scalar; and single-column selects can return multiple
rows for existential and quantified predicates.
Subquery Examples:
1. A subquery in the SELECT list, for obtaining the employee's most recent salary change:
SELECT
  e.first_name,
  e.last_name,
  (SELECT
       sh.new_salary
   FROM
       salary_history sh
   WHERE
       sh.emp_no = e.emp_no
   ORDER BY sh.change_date DESC
   ROWS 1
  ) AS last_salary
FROM
  employee e
2. A subquery in the WHERE clause for obtaining the employee's maximum salary and filtering by it:
SELECT
e.first_name,
e.last_name,
e.salary
FROM
employee e
WHERE
e.salary = (
SELECT MAX(ie.salary)
FROM employee ie
)
Predicates
A predicate is a simple expression asserting some fact, let's call it P. If P resolves as TRUE, it succeeds. If it
resolves to FALSE or NULL (UNKNOWN), it fails. A trap lies here, though: suppose the predicate, P, returns
FALSE. In this case NOT(P) will return TRUE. On the other hand, if P returns NULL (unknown), then NOT(P)
returns NULL as well.
In SQL, predicates can appear in CHECK constraints, WHERE and HAVING clauses, CASE expressions, the IIF()
function and in the ON condition of JOIN clauses.
Assertions
An assertion is a statement about the data that, like a predicate, can resolve to TRUE, FALSE or NULL. Asser-
tions consist of one or more predicates, possibly negated using NOT and connected by AND and OR operators.
Parentheses may be used for grouping predicates and controlling evaluation order.
A predicate may embed other predicates. Evaluation sequence is in the outward direction, i.e., the innermost
predicates are evaluated first. Each level is evaluated in precedence order until the truth of the ultimate asser-
tion is resolved.
Comparison Predicates
A comparison predicate consists of two expressions connected with a comparison operator. There are six traditional comparison operators: =, <>, >, <, >= and <=.
(For the complete list of comparison operators with their variant forms, see Comparison Operators.)
If one of the sides (left or right) of a comparison predicate has NULL in it, the value of the predicate will be
UNKNOWN.
Examples:
1. Retrieve information about computers with the CPU frequency not less than 500 MHz and the price lower
than $800:
SELECT *
FROM Pc
WHERE speed >= 500 AND price < 800;
2. Retrieve information about all dot matrix printers that cost less than $300:
SELECT *
FROM Printer
WHERE ptrtype = 'matrix' AND price < 300;
3. The following query will return no data, even if there are printers with no type specified for them, because
a predicate that compares NULL with NULL returns NULL:
SELECT *
FROM Printer
WHERE ptrtype = NULL AND price < 300;
On the other hand, ptrtype can be tested for NULL and return a result: it is just that it is not a comparison
test:
SELECT *
FROM Printer
WHERE ptrtype IS NULL AND price < 300;
When CHAR and VARCHAR fields are compared for equality, trailing spaces are ignored in all cases.
BETWEEN
Syntax:
The BETWEEN predicate tests whether a value falls within a specified range of two values. (NOT BETWEEN
tests whether the value does not fall within that range.)
The operands for the BETWEEN predicate are two arguments of compatible data types. Unlike in some other DBMS, the BETWEEN predicate in Firebird is not symmetrical: if the lower value is not the first argument, the BETWEEN predicate will always return False. The search is inclusive (the values represented by both arguments are included in the search). In other words, <value> BETWEEN <value_1> AND <value_2> could be rewritten as <value> >= <value_1> AND <value> <= <value_2>.
When BETWEEN is used in the search conditions of DML queries, the Firebird optimizer can use an index on
the searched column, if it is available.
Example:
SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE BETWEEN date '01.01.1992' AND CURRENT_DATE
LIKE
Syntax:
The LIKE predicate compares the character-type expression with the pattern defined in the second expression.
Case- or accent-sensitivity for the comparison is determined by the collation that is in use. A collation can be
specified for either operand, if required.
Wildcards
Two wildcard symbols are available for use in the search pattern:
- the percentage symbol (%) will match any sequence of zero or more characters in the tested value
- the underscore character (_) will match any single character in the tested value
If the tested value matches the pattern, taking into account wildcard symbols, the predicate is True.
If the search string contains either of the wildcard symbols, the ESCAPE clause can be used to specify an escape
character. The escape character must precede the '%' or '_' symbol in the search string, to indicate that the symbol
is to be interpreted as a literal character.
1. Find the numbers of departments whose names start with the word Software:
SELECT DEPT_NO
FROM DEPT
WHERE DEPT_NAME LIKE 'Software%';
Actually, the LIKE predicate does not use an index. However, if the predicate takes the form of LIKE
'string%', it will be converted to the STARTING WITH predicate, which will use an index.
So, if you need to search for the beginning of a string, it is recommended to use the STARTING WITH
predicate instead of the LIKE predicate.
2. Search for employees whose names consist of 5 letters, start with the letters Sm and end with th. The
predicate will be true for such names as Smith and Smyth.
SELECT
first_name
FROM
employee
WHERE first_name LIKE 'Sm_th'
3. Search for all clients whose address contains the string Rostov:
SELECT *
FROM CUSTOMER
WHERE ADDRESS LIKE '%Rostov%'
Note
If you need to do a case-insensitive search for something enclosed inside a string ( LIKE '%Abc%' ),
use of the CONTAINING predicate is recommended, in preference to the LIKE predicate.
4. Search for tables containing the underscore character in their names. The # character is specified as the
escape character:
SELECT
RDB$RELATION_NAME
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME LIKE '%#_%' ESCAPE '#'
STARTING WITH
Syntax:
The STARTING WITH predicate searches for a string or a string-like type that starts with the characters in its
<value> argument. The search is case-sensitive.
When STARTING WITH is used in the search conditions of DML queries, the Firebird optimizer can use an index
on the searched column, if it exists.
Example: Search for employees whose last names start with Jo:
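A sketch of such a query (the LAST_NAME column of the employee sample table is assumed):

SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE LAST_NAME STARTING WITH 'Jo';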
CONTAINING
Syntax:
The CONTAINING predicate searches for a string or a string-like type looking for the sequence of characters
that matches its argument. It can be used for an alphanumeric (string-like) search on numbers and dates. A
CONTAINING search is not case-sensitive. However, if an accent-sensitive collation is in use then the search
will be accent-sensitive.
When CONTAINING is used in the search conditions of DML queries, the Firebird optimizer can use an index
on the searched column, if a suitable one exists.
Examples:
1. Search for projects whose names contain the string Map:
SELECT *
FROM PROJECT
WHERE PROJ_NAME CONTAINING 'Map';
Two rows with the names AutoMap and MapBrowser port are returned.
2. Search for changes in salaries with the date containing number 84 (in this case, it means changes that took
place in 1984):
SELECT *
FROM SALARY_HISTORY
WHERE CHANGE_DATE CONTAINING 84;
SIMILAR TO
Syntax:
SIMILAR TO matches a string against an SQL regular expression pattern. Unlike in some other languages, the
pattern must match the entire string in order to succeed; matching a substring is not enough. If any operand is
NULL, the result is NULL. Otherwise, the result is TRUE or FALSE.
The following syntax defines the SQL regular expression format. It is a complete and correct top-down defini-
tion. It is also highly formal, rather long and probably perfectly fit to discourage everybody who hasn't already
some experience with regular expressions (or with highly formal, rather long top-down definitions). Feel free
to skip it and read the next section, Building Regular Expressions, which uses a bottom-up approach, aimed
at the rest of us.
<quantifier> ::= ?
| *
| +
| '{' <m> [,[<n>]] '}'
<m>, <n> ::= unsigned int, with <m> <= <n> if both present
Building Regular Expressions
In this section are the elements and rules for building SQL regular expressions.
Characters
Within regular expressions, most characters represent themselves. The only exceptions are the special characters
below:
[]()|^-+*%_?{}
A regular expression that contains no special or escape characters matches only strings that are identical to itself
(subject to the collation in use). That is, it functions just like the = operator:
Wildcards
The known SQL wildcards _ and % match any single character and a string of any length, respectively:
Character Classes
A bunch of characters enclosed in brackets define a character class. A character in the string matches a class in
the pattern if the character is a member of the class:
As can be seen from the second line, the class only matches a single character, not a sequence.
Within a class definition, two characters connected by a hyphen define a range. A range comprises the two
endpoints and all the characters that lie between them in the active collation. Ranges can be placed anywhere in
the class definition without special delimiters to keep them apart from the other elements.
The following predefined character classes can also be used in a class definition:
[:ALPHA:]: Latin letters a..z and A..Z. With an accent-insensitive collation, this class also matches accented
forms of these characters.
[:UPPER:]: Uppercase Latin letters A..Z. Also matches lowercase with case-insensitive collation and accented
forms with accent-insensitive collation.
[:LOWER:]: Lowercase Latin letters a..z. Also matches uppercase with case-insensitive collation and accented
forms with accent-insensitive collation.
[:WHITESPACE:]: Matches horizontal tab (ASCII 9), linefeed (ASCII 10), vertical tab (ASCII 11), formfeed
(ASCII 12), carriage return (ASCII 13) and space (ASCII 32).
Including a predefined class has the same effect as including all its members. Predefined classes are only allowed
within class definitions. If you need to match against a predefined class and nothing more, place an extra pair
of brackets around it.
If a class definition starts with a caret, everything that follows is excluded from the class. All other characters
match:
If the caret is not placed at the start of the sequence, the class contains everything before the caret, except for
the elements that also occur after the caret:
Lastly, the already mentioned wildcard _ is a character class of its own, matching any single character.
Quantifiers
A question mark immediately following a character or class indicates that the preceding item may occur 0 or
1 times in order to match:
An asterisk immediately following a character or class indicates that the preceding item may occur 0 or more
times in order to match:
A plus sign immediately following a character or class indicates that the preceding item must occur 1 or more
times in order to match:
If a character or class is followed by a number enclosed in braces, it must be repeated exactly that number of
times in order to match:
If the number is followed by a comma, the item must be repeated at least that number of times in order to match:
If the braces contain two numbers separated by a comma, the second number not smaller than the first, then the
item must be repeated at least the first number and at most the second number of times in order to match:
The quantifiers ?, * and + are shorthand for {0,1}, {0,} and {1,}, respectively.
OR-ing Terms
Regular expression terms can be OR'ed with the | operator. A match is made when the argument string matches
at least one of the terms:
Subexpressions
One or more parts of the regular expression can be grouped into subexpressions (also called subpatterns) by
placing them between parentheses. A subexpression is a regular expression in its own right. It can contain all
the elements allowed in a regular expression, and can also have quantifiers added to it.
Escaping Special Characters
In order to match against a character that is special in regular expressions, that character has to be escaped. There
is no default escape character; rather, the user specifies one when needed:
The last line demonstrates that the escape character can also escape itself, if needed.
IS [NOT] DISTINCT FROM
Syntax:
Two operands are considered DISTINCT if they have a different value or if one of them is NULL and the other
non-null. They are NOT DISTINCT if they have the same value or if both of them are NULL.
IS [NOT] NULL
Syntax:
Since NULL is not a value, these operators are not comparison operators. The IS [NOT] NULL predicate tests
the assertion that the expression on the left side has a value (IS NOT NULL) or has no value (IS NULL).
Example: Search for sales entries that have no shipment date set for them:
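A sketch, assuming the SALES table and its SHIP_DATE column from the employee sample database:

SELECT *
FROM SALES
WHERE SHIP_DATE IS NULL;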
Up to and including Firebird 2.5, the IS predicates, like the other comparison predicates, do not have precedence over the others. In Firebird 3.0 and higher, these predicates take precedence over the others.
Existential Predicates
This group of predicates includes those that use subqueries to submit values for all kinds of assertions in search
conditions. Existential predicates are so called because they use various methods to test for the existence or non-
existence of some assertion, returning TRUE if the existence or non-existence is confirmed or FALSE otherwise.
EXISTS
Available: DSQL, PSQL, ESQL
Syntax:
[NOT] EXISTS(<select_stmt>)
The EXISTS predicate uses a subquery expression as its argument. It returns TRUE if the subquery result would
contain at least one row; otherwise it returns FALSE.
NOT EXISTS returns FALSE if the subquery result would contain at least one row; it returns TRUE otherwise.
Note
The subquery can specify multiple columns, or SELECT *, because the evaluation is made on the number of
rows that match its criteria, not on the data.
Examples:
1. Find those employees who are assigned to at least one project:
SELECT *
FROM employee
WHERE EXISTS(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
2. Find those employees who are not assigned to any project:
SELECT *
FROM employee
WHERE NOT EXISTS(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
IN
Syntax:
The IN predicate tests whether the value of the expression on the left side is present in the set of values specified on the right side. The set of values cannot have more than 1500 items. The IN predicate could be replaced with the following equivalent form: <value> = <value_1> OR <value> = <value_2> OR ... OR <value> = <value_N>.
When the IN predicate is used in the search conditions of DML queries, the Firebird optimizer can use an index
on the searched column, if a suitable one exists.
In its second form, the IN predicate tests whether the value of the expression on the left side is present (or not
present, if NOT IN is used) in the result of the executed subquery on the right side.
The subquery must be specified to result in only one column, otherwise the error "count of column list and variable list do not match" will occur.
Queries specified using the IN predicate with a subquery can be replaced with a similar query using the EXISTS
predicate. For instance, the following query:
SELECT
model, speed, hd
FROM PC
WHERE
model IN (SELECT model
FROM product
WHERE maker = 'A');
can be rewritten as:
SELECT
model, speed, hd
FROM PC
WHERE
EXISTS (SELECT *
FROM product
WHERE maker = 'A'
AND product.model = PC.model);
However, a query using NOT IN with a subquery does not always give the same result as its NOT EXISTS
counterpart. The reason is that EXISTS always returns TRUE or FALSE, whereas IN returns NULL in one of
these two cases:
1. when the test value is NULL and the IN () list is not empty
2. when the test value has no match in the IN () list and at least one list element is NULL
It is in only these two cases that IN () will return NULL while the corresponding EXISTS predicate will return
FALSE ('no matching row found'). In a search or, for example, an IF (...) statement, both results mean failure
and it makes no difference to the outcome.
But, for the same data, NOT IN () will return NULL, while NOT EXISTS will return TRUE, leading to opposite
results.
Now, assume that the NY celebrities list is not empty and contains at least one NULL birthday. Then for every
citizen who does not share his birthday with a NY celebrity, NOT IN will return NULL, because that is what IN
does. The search condition is thereby not satisfied and the citizen will be left out of the SELECT result, which
is wrong.
For citizens whose birthday does match with a celebrity's birthday, NOT IN will correctly return FALSE, so they
will be left out too, and no rows will be returned.
If NOT EXISTS is used instead, non-matches will have a result of TRUE and their records will be in the result set.
Advice
If there is any chance of NULLs being encountered when searching for a non-match, you will want to use
NOT EXISTS.
Examples of use:
1. Find employees with the names Pete, Ann and Roger:
SELECT *
FROM EMPLOYEE
WHERE FIRST_NAME IN ('Pete', 'Ann', 'Roger');
2. Find all computers that have models whose manufacturer starts with the letter A:
SELECT
model, speed, hd
FROM PC
WHERE
model IN (SELECT model
FROM product
WHERE maker STARTING WITH 'A');
SINGULAR
Syntax:
[NOT] SINGULAR(<select_stmt>)
The SINGULAR predicate takes a subquery as its argument and evaluates it as True if the subquery returns exactly
one result row; otherwise the predicate is evaluated as False. The subquery may list several output columns
since the rows are not returned anyway. They are only tested for (singular) existence. For brevity, people usually
specify 'SELECT *'. The SINGULAR predicate can return only two values: TRUE or FALSE.
Example: Find those employees who have exactly one project:
SELECT *
FROM employee
WHERE SINGULAR(SELECT *
FROM
employee_project ep
WHERE
ep.emp_no = employee.emp_no)
Quantified Subquery Predicates
In subquery expressions, quantified predicates make it possible to compare separate values with the results of
subqueries; they have the following common form:
ALL
Syntax:
When the ALL quantifier is used, the predicate is TRUE if every value returned by the subquery satisfies the
condition in the predicate of the main query.
Example: Show only those clients whose ratings are higher than the rating of every client in Paris.
SELECT c1.*
FROM Customers c1
WHERE c1.rating > ALL
(SELECT c2.rating
FROM Customers c2
WHERE c2.city = 'Paris')
Important
If the subquery returns an empty set, the predicate is TRUE for every left-side value, regardless of the operator.
This may appear to be contradictory, because every left-side value will thus be considered both smaller and
greater than, both equal to and unequal to, every element of the right-side stream.
Nevertheless, it aligns perfectly with formal logic: if the set is empty, the predicate is true 0 times, i.e., for
every row in the set.
ANY and SOME
Syntax:
The quantifiers ANY and SOME are identical in their behaviour. Apparently, both are present in the SQL standard
so that they could be used interchangeably in order to improve the readability of operators. When the ANY or the
SOME quantifier is used, the predicate is true if any of the values returned by the subquery satisfies the condition
in the predicate of the main query. If the subquery would return no rows at all, the predicate is automatically
considered as False.
Example: Show only those clients whose ratings are higher than those of one or more clients in Rome.
SELECT *
FROM Customers
WHERE rating > ANY
(SELECT rating
FROM Customers
WHERE city = 'Rome')
Chapter 5
Data Definition (DDL) Statements
DDL is the data definition language subset of Firebird's SQL language. DDL statements are used to create,
modify and delete database objects that have been created by users. When a DDL statement is committed, the
metadata for the object are created, changed or deleted.
DATABASE
This section describes how to create a database, connect to an existing database, alter the file structure of a
database and how to delete one. It also explains how to back up a database in two quite different ways and how
to switch the database to the copy-safe mode for performing an external backup safely.
CREATE DATABASE
Used for: Creating a new database
Syntax:
Parameters:
filespec: File specification for the primary database file
server_spec: Remote server specification in TCP/IP or Windows Networking style. Optionally includes a port number or service name
filepath: Full path and file name, including its extension. The file name must be specified according to the rules of the platform file system being used
db_alias: Database alias previously created in the aliases.conf file
servername: Host name or IP address of the server where the database is to be created
username: User name of the owner of the new database. It may consist of up to 31 characters. Case-insensitive
password: Password of the user name as the database owner. The maximum length is 31 characters; however, only the first 8 characters are considered. Case-sensitive
size: Page size for the database, in bytes. Possible values are 4096 (the default), 8192 and 16384
num: Maximum size of the primary database file, or a secondary file, in pages
charset: Specifies the character set of the connection available to a client connecting after the database is successfully created. Single quotes are required
default_charset: Specifies the default character set for string data types
collation: Default collation for the default character set
sec_file: File specification for a secondary file
pagenum: Starting page number for a secondary database file
diff_file: File path and name for DIFFERENCE files (.delta files)
The CREATE DATABASE statement creates a new database. You can use CREATE DATABASE or CREATE
SCHEMA. They are synonymous.
A database may consist of one or several files. The first (main) file is called the primary file, subsequent files
are called secondary file[s].
Multi-file Databases
Nowadays, multi-file databases are considered a throwback. It made sense to use multi-file databases on old file
systems where the size of any file is limited. For instance, you could not create a file larger than 4 GB on FAT32.
The primary file specification is the name of the database file and its extension with the full path to it according
to the rules of the OS platform file system being used. The database file must not exist at the moment when the
database is being created. If it does exist, you will get an error message and the database will not be created.
If the full path to the database is not specified, the database will be created in one of the system directories. The
particular directory depends on the operating system. For this reason, unless you have a strong reason to prefer
that situation, always specify the absolute path, when creating either the database or an alias for it.
alias = filepath
servername[/{port|service}]:{filepath | db_alias}
If you use the Named Pipes protocol to create a database on a Windows server, the primary file specification
should look like this:
\\servername\{filepath | db_alias}
Optional PAGE_SIZE: Clause for specifying the database page size. This size will be set for the primary file
and all secondary files of the database. If you specify the database page size less than 4,096, it will be changed
automatically to the default page size, 4,096. Other values not equal to either 4,096, 8,192 or 16,384 will be
changed to the closest smaller supported value. If the database page size is not specified, it is set to the default
value of 4,096.
Optional LENGTH: Clause specifying the maximum size of the primary or secondary database file, in pages.
When a database is created, its primary and secondary files will occupy the minimum number of pages necessary
to store the system data, regardless of the value specified in the LENGTH clause. The LENGTH value does not
affect the size of the only (or last, in a multi-file database) file. The file will keep increasing its size automatically
when necessary.
Optional SET NAMES: Clause specifying the character set of the connection available after the database is suc-
cessfully created. The character set NONE is used by default. Notice that the character set should be enclosed
in a pair of apostrophes (single quotes).
Optional DEFAULT CHARACTER SET: Clause specifying the default character set for creating data structures of
string data types. Character sets are applied to CHAR, VARCHAR and BLOB TEXT data types. The character
set NONE is used by default. It is also possible to specify the default COLLATION for the default character set,
making that collation sequence the default for the default character set. The default will be used for the entire
database except where an alternative character set, with or without a specified collation, is used explicitly for
a field, domain, variable, cast expression, etc.
STARTING AT: Clause that specifies the database page number at which the next secondary database file should
start. When the previous file is completely filled with data according to the specified page number, the system
will start adding new data to the next database file.
Optional DIFFERENCE FILE: Clause specifying the path and name for the file delta that stores any mutations to
the database file after it has been switched to the copy-safe mode by the ALTER DATABASE BEGIN BACKUP
statement. For the detailed description of this clause, see ALTER DATABASE.
SET SQL DIALECT: Databases are created in Dialect 3 by default. For the database to be created in SQL dialect
1, you will need to execute the statement SET SQL DIALECT 1 from script or the client application, e.g. isql,
before the CREATE DATABASE statement.
2. Creating a database in the Linux operating system with a page size of 4,096. The owner of the database
will be the user wizard. The database will be in Dialect 3 and it will use UTF8 as its default character set,
with UNICODE_CI_AI as the default collation.
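A sketch of what such a statement could look like (the file path, user name and password are assumed):

CREATE DATABASE '/home/firebird/test.fdb'
USER 'wizard' PASSWORD 'player'
PAGE_SIZE = 4096
DEFAULT CHARACTER SET UTF8 COLLATION UNICODE_CI_AI;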
3. Creating a database on the remote server baseserver with the path specified in the alias test that has been
defined previously in the file aliases.conf. The TCP/IP protocol is used. The owner of the database
will be the user wizard. The database will be in Dialect 3 and will use UTF8 as its default character set.
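For illustration, assuming the password shown is the owner's password:

CREATE DATABASE 'baseserver:test'
USER 'wizard' PASSWORD 'player'
DEFAULT CHARACTER SET UTF8;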
4. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will contain up
to 10,000 pages with a page size of 8,192. As soon as the primary file has reached the maximum number
of pages, Firebird will start allocating pages to the secondary file test.fdb2. If that file is filled up to
its maximum as well, test.fdb3 becomes the recipient of all new page allocations. As the last file, it
has no page limit imposed on it by Firebird. New allocations will continue for as long as the file system
allows it or until the storage device runs out of free space. If a LENGTH parameter were supplied for this
last file, it would be ignored.
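A possible statement matching this description (file names, user name and password are assumed):

CREATE DATABASE 'test.fdb'
USER 'wizard' PASSWORD 'player'
PAGE_SIZE = 8192
LENGTH 10000 PAGES
DEFAULT CHARACTER SET UTF8
FILE 'test.fdb2'
LENGTH 10000 PAGES
FILE 'test.fdb3';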
5. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will contain up to
10,000 pages with a page size of 8,192. As far as file size and the use of secondary files are concerned, this
database will behave exactly like the one in the previous example.
ALTER DATABASE
Used for: Altering the file organisation of a database or toggling its copy-safe state
Syntax:
Note
ALTER DATABASE
ADD FILE x LENGTH 8000
FILE y LENGTH 8000
FILE z
Multiple ADD FILE clauses are allowed; and an ADD FILE clause that adds multiple files (as in the example
above) can be mixed with others that add only one file. The statement was incorrectly documented in the old
InterBase 6 Language Reference.
Parameters:
add_sec_clause: Adding a secondary database file
sec_file: File specification for a secondary file
filepath: Full path and file name of the delta file or the secondary database file
pagenum: Page number from which the secondary database file is to start
num: Maximum size of the secondary file in pages
diff_file: File path and name of the .delta file (difference file)
The ADD DIFFERENCE FILE clause: specifies the path and name of the delta file that stores any mutations to
the database whenever it is switched to the copy-safe mode. This clause does not actually add any file. It just
overrides the default name and path of the .delta file. To change the existing settings, you should delete the
previously specified description of the .delta file using the DROP DIFFERENCE FILE clause before specifying
the new description of the delta file. If the path and name of the .delta file are not overridden, the file will have
the same path and name as the database, but with the .delta file extension.
Caution
If only a file name is specified, the .delta file will be created in the current directory of the server. On Windows,
this will be the system directory, a very unwise location to store volatile user files and contrary to Windows
file system rules.
DROP DIFFERENCE FILE: This is the clause that deletes the description (path and name) of the .delta file speci-
fied previously in the ADD DIFFERENCE FILE clause. The file is not actually deleted. DROP DIFFERENCE FILE
deletes the path and name of the .delta file from the database header. Next time the database is switched to the
copy-safe mode, the default values will be used (i.e. the same path and name as those of the database, but
with the .delta extension).
BEGIN BACKUP: This is the clause that switches the database to the copy-safe mode. ALTER DATABASE with
this clause freezes the main database file, making it possible to back it up safely using file system tools, even
if users are connected and performing operations with data. Until the backup state of the database is reverted to
NORMAL, all changes made to the database will be written to the .delta (difference) file.
Important
Despite its syntax, a statement with the BEGIN BACKUP clause does not start a backup process but just creates
the conditions for doing a task that requires the database file to be read-only temporarily.
END BACKUP: is the clause used to switch the database from the copy-safe mode to the normal mode. A
statement with this clause merges the .delta file with the main database file and restores the normal operation
of the database. Once the END BACKUP process starts, the conditions no longer exist for creating safe backups
by means of file system tools.
Warning
Use of BEGIN BACKUP and END BACKUP and copying the database files with filesystem tools, is not safe
with multi-file databases! Use this method only on single-file databases.
Making a safe backup with the gbak utility remains possible at all times, although it is not recommended to
run gbak while the database is in LOCKED or MERGE state.
1. Adding a secondary file to the database. As soon as 30000 pages are filled in the previous primary or
secondary file, the Firebird engine will start adding data to the secondary file test4.fdb.
ALTER DATABASE
ADD FILE 'D:\test4.fdb'
STARTING AT PAGE 30001;
2. Specifying the path and name of the delta file:
ALTER DATABASE
ADD DIFFERENCE FILE 'D:\test.diff';
3. Deleting the description (path and name) of the delta file:
ALTER DATABASE
DROP DIFFERENCE FILE;
4. Switching the database to the copy-safe mode:
ALTER DATABASE
BEGIN BACKUP;
5. Switching the database back from the copy-safe mode to the normal operation mode:
ALTER DATABASE
END BACKUP;
DROP DATABASE
Used for: Deleting the database to which you are currently connected
Syntax:
DROP DATABASE
The DROP DATABASE statement deletes the current database. Before deleting a database, you have to connect
to it. The statement deletes the primary file, all secondary files and all shadow files.
DROP DATABASE;
SHADOW
A shadow is an exact, page-by-page copy of a database. Once a shadow is created, all changes made in the
database are immediately reflected in the shadow. If the primary database file becomes unavailable for some
reason, the DBMS will switch to the shadow.
CREATE SHADOW
Used for: Creating a shadow for the current database
Syntax:
<secondary_file> ::=
FILE 'filepath'
[STARTING [AT [PAGE]] pagenum]
[LENGTH [=] num [PAGE[S]]]
Parameters:
sh_num: Shadow number, a positive number identifying the shadow set
filepath: The name of the shadow file and the path to it, in accord with the rules of the operating system
num: Maximum shadow size, in pages
secondary_file: Secondary file specification
page_num: The number of the page at which the secondary shadow file should start
The CREATE SHADOW statement creates a new shadow. The shadow starts duplicating the database right at the
moment it is created. It is not possible for a user to connect to a shadow.
Like a database, a shadow may be multi-file. The number and size of a shadow's files are not related to the number and size of the files of the database it is shadowing.
The page size for shadow files is set to be equal to the database page size and cannot be changed.
If a calamity occurs involving the original database, the system converts the shadow to a copy of the database
and switches to it. The shadow is then unavailable. What happens next depends on the MODE option.
When a shadow is converted to a database, it becomes unavailable. A shadow might alternatively become un-
available because someone accidentally deletes its file, or the disk space where the shadow files are stored is
exhausted or is itself damaged.
If the AUTO mode is selected (the default value), shadowing ceases automatically, all references to it are
deleted from the database header and the database continues functioning normally.
If the CONDITIONAL option was set, the system will attempt to create a new shadow to replace the lost one.
It does not always succeed, however, and a new one may need to be created manually.
If the MANUAL mode attribute is set when the shadow becomes unavailable, all attempts to connect to the
database and to query it will produce error messages. The database will remain inaccessible until either the
shadow again becomes available or the database administrator deletes it using the DROP SHADOW statement.
MANUAL should be selected if continuous shadowing is more important than uninterrupted operation of
the database.
STARTING AT: Clause specifying the shadow page number at which the next shadow file should start. The
system will start adding new data to the next shadow file when the previous file is filled with data up to the
specified page number.
Tip
You can verify the sizes, names and location of the shadow files by connecting to the database using isql and
running the command SHOW DATABASE;
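For illustration, creating an automatic single-file shadow might look like this (the file path is assumed):

CREATE SHADOW 1 'g:\data\test.shd';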
DROP SHADOW
Used for: Deleting a shadow from the current database
Syntax:
Parameters:
sh_num: Shadow number, a positive number identifying the shadow set
The DROP SHADOW statement deletes the specified shadow for the database one is connected to. When a shadow
is dropped, all files related to it are deleted and shadowing to the specified sh_num ceases.
DROP SHADOW 1;
DOMAIN
A domain is one of the object types in a relational database. A domain is created as a specific data type with some
attributes attached to it. Once it has been defined in the database, it can be reused repeatedly to define table
columns, PSQL arguments and PSQL local variables. Those objects inherit all of the attributes of the domain.
Some attributes can be overridden when the new object is defined, if required.
This section describes the syntax of statements used to create, modify and delete domains. A detailed description
of domains and their usage can be found in Custom Data Types (Domains).
CREATE DOMAIN
Used for: Creating a new domain
Syntax:
<datatype> ::=
{SMALLINT | INTEGER | BIGINT} [<array_dim>]
| {FLOAT | DOUBLE PRECISION} [<array_dim>]
| {DATE | TIME | TIMESTAMP} [<array_dim>]
| {DECIMAL | NUMERIC} [(precision [, scale])] [<array_dim>]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[<array_dim>] [CHARACTER SET charset_name]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)] [<array_dim>]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset_name]
| BLOB [(seglen [, subtype_num])]
<dom_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
| <val> [NOT] CONTAINING <val>
| <val> [NOT] STARTING [WITH] <val>
| <val> [NOT] LIKE <val> [ESCAPE <val>]
| <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
| <val> <operator> {ALL | SOME | ANY} (<select_list>)
| [NOT] EXISTS (<select_expr>)
| [NOT] SINGULAR (<select_expr>)
| (<dom_condition>)
| NOT <dom_condition>
| <dom_condition> OR <dom_condition>
| <dom_condition> AND <dom_condition>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >= | !< | ^< | ~< | !> | ^> | ~>
<val> ::=
VALUE
| literal
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <datatype>)
| (<select_one>)
| func([<val> [, <val> ...]])
Parameters:
name: Domain name, consisting of up to 31 characters
datatype: SQL data type
literal: A literal value that is compatible with datatype
context_var: Any context variable whose type is compatible with datatype
dom_condition: Domain condition
collation_name: Name of a collation sequence that is valid for charset_name, if it is supplied with datatype or, otherwise, is valid for the default character set of the database
array_dim: Array dimensions
m, n: INTEGER numbers defining the index range of an array dimension
precision: The total number of significant digits that a value of the datatype can hold (1..18)
scale: The number of digits after the decimal point (0..precision)
size: The maximum size of a string in characters
charset_name: The name of a valid character set, if the character set of the domain is to be different to the default character set of the database
subtype_num: BLOB subtype number
subtype_name: BLOB subtype mnemonic name
seglen: Segment size (max. 65535)
select_one: A scalar SELECT statement, selecting one column and returning only one row
select_list: A SELECT statement selecting one column and returning zero or more rows
select_expr: A SELECT statement selecting one or more columns and returning zero or more rows
expression: An expression resolving to a value that is compatible with datatype
genname: Sequence (generator) name
func: Internal function or UDF
Type-specific Details
ARRAY Types:
If the domain is to be an array, the base type can be any SQL data type except BLOB and ARRAY.
The dimensions of the array are specified between square brackets. (In the Syntax block, these brackets appear
in boldface to distinguish them from the square brackets that identify optional syntax elements.)
For each array dimension, one or two integer numbers define the lower and upper boundaries of its index
range:
- By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be
specified. A single number smaller than 1 defines the range num..1 and a number greater than 1 defines
the range 1..num.
- Two numbers separated by a colon (':') and optional whitespace, the second greater than the first, can be
used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper
boundary is greater than the lower.
When the array has multiple dimensions, the range definitions for each dimension must be separated by
commas and optional whitespace.
Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid
subscripts will be returned if selecting a specific element returns nothing or if an array field is NULL.
CHARACTER Types: You can use the CHARACTER SET clause to specify the character set for the CHAR, VAR-
CHAR and BLOB (SUB_TYPE TEXT ) types. If the character set is not specified, the character set specified as
DEFAULT CHARACTER SET in creating the database will be used. If no character set was specified then, the
character set NONE is applied by default when you create a character domain.
Warning
With character set NONE, character data are stored and retrieved the way they were submitted. Data in any
encoding can be added to a column based on such a domain, but it is impossible to add this data to a column with
a different encoding. Because no transliteration is performed between the source and destination encodings,
errors may result.
DEFAULT Clause: The optional DEFAULT clause allows you to specify a default value for the domain. This
value will be added to the table column that inherits this domain when the INSERT statement is executed, if no
value is specified for it in the DML statement. Local variables and arguments in PSQL modules that reference
this domain will be initialized with the default value. For the default value, use a literal of a compatible type
or a context variable of a compatible type.
NOT NULL Constraint: Columns and variables based on a domain with the NOT NULL constraint will be
prevented from being written as NULL, i.e., a value is required.
Caution
When creating a domain, take care to avoid specifying limitations that would contradict one another. For in-
stance, NOT NULL and DEFAULT NULL are contradictory.
CHECK Constraint[s]: The optional CHECK clause specifies constraints for the domain. A domain constraint
specifies conditions that must be satisfied by the values of table columns or variables that inherit from the
domain. A condition must be enclosed in parentheses. A condition is a logical expression (also called a predicate)
that can return the Boolean results TRUE, FALSE and UNKNOWN. A condition is considered satisfied if the
predicate returns the value TRUE or unknown value (equivalent to NULL). If the predicate returns FALSE, the
condition for acceptance is not met.
VALUE Keyword: The keyword VALUE in a domain constraint substitutes for the table column that is based on
this domain or for a variable in a PSQL module. It contains the value assigned to the variable or the table column.
VALUE can be used anywhere in the CHECK constraint, though it is usually used in the left part of the condition.
COLLATE: The optional COLLATE clause allows you to specify the collation sequence if the domain is based
on one of the string data types, including BLOBs with text subtypes. If no collation sequence is specified, the
collation sequence will be the one that is default for the specified character set at the time the domain is created.
2. Creating a domain that can take the values 'Yes' and 'No' in the default character set specified during the
creation of the database.
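A sketch of such a domain (the domain name is assumed):

CREATE DOMAIN D_BOOLEAN AS CHAR(3)
  CHECK (VALUE IN ('Yes', 'No'));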
3. Creating a domain with the UTF8 character set and the UNICODE_CI_AI collation sequence.
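For instance (the domain name and string length are assumed):

CREATE DOMAIN D_NAME AS VARCHAR(30)
  CHARACTER SET UTF8
  COLLATE UNICODE_CI_AI;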
4. Creating a domain of the DATE type that will not accept NULL and uses the current date as the default value.
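For instance (the domain name is assumed):

CREATE DOMAIN D_DATE AS DATE
  DEFAULT CURRENT_DATE
  NOT NULL;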
5. Creating a domain defined as an array of 2 elements of the NUMERIC(18, 3) type. The starting array index
is 1.
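For instance (the domain name is assumed):

CREATE DOMAIN D_POINT AS NUMERIC(18, 3) [2];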
Note
Domains defined over an array type may be used only to define table columns. You cannot use array
domains to define local variables in PSQL modules.
6. Creating a domain whose elements can be only country codes defined in the COUNTRY table.
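A possible definition, assuming the COUNTRY table and its COUNTRY column from the employee sample database:

CREATE DOMAIN D_COUNTRYNAME AS CHAR(15)
  CHECK (EXISTS(SELECT * FROM COUNTRY
                WHERE COUNTRY = VALUE));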
Note
The example is given only to show the possibility of using predicates with queries in the domain test
condition. It is not recommended to create this style of domain in practice unless the lookup table contains
data that are never deleted.
ALTER DOMAIN
Used for: Altering the current attributes of a domain or renaming it
Syntax:
<datatype> ::=
{SMALLINT | INTEGER | BIGINT}
| {FLOAT | DOUBLE PRECISION}
| {DATE | TIME | TIMESTAMP}
| {DECIMAL | NUMERIC} [(precision [, scale])]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[CHARACTER SET charset_name]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING] [(size)]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset_name]
| BLOB [(seglen [, subtype_num])]
<dom_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
| <val> [NOT] CONTAINING <val>
| <val> [NOT] STARTING [WITH] <val>
| <val> [NOT] LIKE <val> [ESCAPE <val>]
| <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
| <val> <operator> {ALL | SOME | ANY} (<select_list>)
| [NOT] EXISTS (<select_expr>)
| [NOT] SINGULAR (<select_expr>)
| (<dom_condition>)
| NOT <dom_condition>
| <dom_condition> OR <dom_condition>
| <dom_condition> AND <dom_condition>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >= | !< | ^< | ~< | !> | ^> | ~>
<val> ::=
VALUE
| literal
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <datatype>)
| (<select_one>)
| func([<val> [, <val> ...]])
Parameter Description
new_name New name for domain, consisting of up to 31 characters
datatype SQL data type
literal A literal value that is compatible with datatype
context_var Any context variable whose type is compatible with datatype
precision The total number of significant digits that a value of the datatype can hold (1..18)
scale The number of digits after the decimal point (0..precision)
size The maximum size of a string in characters
charset_name The name of a valid character set, if the character set of the domain is to be changed
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size (max. 65535)
select_one A scalar SELECT statement selecting one column and returning only one row
select_list A SELECT statement selecting one column and returning zero or more rows
select_expr A SELECT statement selecting one or more columns and returning zero or more rows
expression An expression resolving to a value that is compatible with datatype
genname Sequence (generator) name
func Internal function or UDF
The ALTER DOMAIN statement enables changes to the current attributes of a domain, including its name. You
can make any number of domain alterations in one ALTER DOMAIN statement.
TO <name>: Use the TO clause to rename the domain, as long as there are no dependencies on the domain,
i.e. table columns, local variables or procedure arguments referencing it.
SET DEFAULT: With the SET DEFAULT clause you can set a new default value. If the domain already has a
default value, there is no need to delete it first: it will be replaced by the new one.
DROP DEFAULT: Use this clause to delete a previously specified default value and replace it with NULL.
ADD CONSTRAINT CHECK: Use the ADD CONSTRAINT CHECK clause to add a CHECK constraint to the
domain. If the domain already has a CHECK constraint, it will have to be deleted first, using an ALTER DOMAIN
statement that includes a DROP CONSTRAINT clause.
TYPE: The TYPE clause is used to change the data type of the domain to a different, compatible one. The system
will forbid any change to the type that could result in data loss. An example would be if the number of characters
in the new type were smaller than in the existing type.
Important
When you alter the attributes of a domain, existing PSQL code may become invalid. For information on how
to detect it, read the piece entitled The RDB$VALID_BLR Field in Appendix A.
Any user connected to the database can alter a domain, provided it is not prevented by dependencies from objects
to which that user does not have sufficient privileges.
1. Changing the data type to INTEGER and setting or changing the default value to 2,000:
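One way to express this, combining both alterations in a single statement as described above (the domain name is illustrative):

ALTER DOMAIN CUSTNO
  TYPE INTEGER
  SET DEFAULT 2000;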
2. Renaming a domain.
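For example (names are illustrative):

ALTER DOMAIN D_BOOLEAN TO D_BOOL;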
3. Deleting the default value and adding a constraint for the domain:
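A possible statement (the domain name and check condition are illustrative):

ALTER DOMAIN D_DATE
  DROP DEFAULT
  ADD CONSTRAINT CHECK (VALUE >= '2000-01-01');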
DROP DOMAIN
Used for: Deleting an existing domain
Syntax:
The DROP DOMAIN statement deletes a domain that exists in the database. It is not possible to delete a domain
if it is referenced by any database table columns or used in any PSQL module. In order to delete a domain that is
in use, all columns in all tables that refer to the domain will have to be dropped and all references to the domain
will have to be removed from PSQL modules.
Example
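A minimal example (the domain name is illustrative):

DROP DOMAIN COUNTRYNAME;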
TABLE
As a relational DBMS, Firebird stores data in tables. A table is a flat, two-dimensional structure containing any
number of rows. Table rows are often called records.
All rows in a table have the same structure and consist of columns. Table columns are often called fields. A table
must have at least one column. Each column contains a single type of SQL data.
This section describes how to create, alter and delete tables in a database.
CREATE TABLE
Used for: creating a new table (relation)
Syntax:
<regular_col_def> ::=
colname {<datatype> | domainname}
[DEFAULT {literal | NULL | <context_var>}]
[NOT NULL]
[<col_constraint>]
[COLLATE collation_name]
<computed_col_def> ::=
colname [<datatype>]
{COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<datatype> ::=
{SMALLINT | INTEGER | BIGINT} [<array_dim>]
| {FLOAT | DOUBLE PRECISION} [<array_dim>]
| {DATE | TIME | TIMESTAMP} [<array_dim>]
| {DECIMAL | NUMERIC} [(precision [, scale])] [<array_dim>]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[<array_dim>] [CHARACTER SET charset_name]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)] [<array_dim>]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset_name]
| BLOB [(seglen [, subtype_num])]
<col_constraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY [<using_index>]
| UNIQUE [<using_index>]
| REFERENCES other_table [(colname)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>) }
<tconstraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY (col_list) [<using_index>]
| UNIQUE (col_list) [<using_index>]
| FOREIGN KEY (col_list)
REFERENCES other_table [(col_list)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>) }"
<check_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
| <val> [NOT] CONTAINING <val>
| <val> [NOT] STARTING [WITH] <val>
| <val> [NOT] LIKE <val> [ESCAPE <val>]
| <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
| <val> <operator> {ALL | SOME | ANY} (<select_list>)
| [NOT] EXISTS (<select_expr>)
| [NOT] SINGULAR (<select_expr>)
| (<check_condition>)
| NOT <check_condition>
| <check_condition> OR <check_condition>
| <check_condition> AND <check_condition>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >= | !< | ^< | ~< | !> | ^> | ~>
<val> ::=
colname [[<array_idx> [, <array_idx> ...]]]
| literal
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <datatype>)
| (<select_one>)
| func([<val> [, <val> ...]])
Parameter Description
tablename Name (identifier) for the table. It may consist of up to 31 characters and must be unique in the database.
filespec File specification (only for external tables). Full file name and path, enclosed in single quotes, correct for the local file system and located on a storage device that is physically connected to Firebird's host computer.
colname Name (identifier) for a column in the table. May consist of up to 31 characters and must be unique in the table.
datatype SQL data type
col_constraint Column constraint
tconstraint Table constraint
constr_name The name (identifier) of a constraint. May consist of up to 31 characters.
other_table The name of the table referenced by the foreign key constraint
other_col The name of the column in other_table that is referenced by the foreign key
literal A literal value that is allowed in the given context
context_var Any context variable whose data type is allowed in the given context
check_condition The condition applied to a CHECK constraint that will resolve as either true, false or NULL
collation Collation
array_dim Array dimensions
m, n INTEGER numbers defining the index range of an array dimension
precision The total number of significant digits that a value of the datatype can hold (1..18)
scale The number of digits after the decimal point (0..precision)
size The maximum size of a string in characters
charset_name The name of a valid character set, if the character set of the column is to be different to the default character set of the database
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size (max. 65535)
select_one A scalar SELECT statement selecting one column and returning only one row
select_list A SELECT statement selecting one column and returning zero or more rows
select_expr A SELECT statement selecting one or more columns and returning zero or more rows
expression An expression resolving to a value that is allowed in the given context
genname Sequence (generator) name
func Internal function or UDF
The CREATE TABLE statement creates a new table. Any user can create one. The table name must be unique among
the names of all tables, views and stored procedures in the database.
A table must contain at least one column that is not computed and the names of columns must be unique in
the table.
A column must have either an explicit SQL data type, the name of a domain whose attributes will be copied for
the column, or be defined as COMPUTED BY an expression (a calculated field).
Character Columns
You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB (text
subtype) types. If the character set is not specified, the character set specified during the creation of the database
will be used by default. If no character set was specified during the creation of the database, the NONE character
set is applied by default. In this case, data is stored and retrieved the way it was submitted. Data in any encoding
can be added to such a column, but it is not possible to add this data to a column with a different encoding. No
transliteration is performed between the source and destination encodings, which may result in errors.
The optional COLLATE clause allows you to specify the collation sequence for character data types, including
BLOB SUB_TYPE TEXT. If no collation sequence is specified, the default collation for the specified character set
at the time the column is created is applied.
The default value can be a literal of a compatible type, a context variable that is type-compatible with the data
type of the column, or NULL, if the column allows it. If no default value is explicitly specified, NULL is implied.
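A sketch combining these clauses in column definitions (table and column names are illustrative):

CREATE TABLE NOTES (
  ID INTEGER NOT NULL,
  TXT VARCHAR(100) CHARACTER SET UTF8 DEFAULT 'n/a' COLLATE UNICODE_CI,
  BODY BLOB SUB_TYPE TEXT CHARACTER SET UTF8
);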
Domain-based Columns
To define a column, you can use a previously defined domain. If the definition of a column is based on a domain,
it may contain a new default value, additional CHECK constraints and a COLLATE clause that will override
the values specified in the domain definition. The definition of such a column may contain additional column
constraints (for instance, NOT NULL), if the domain does not already have them.
Important
It is not possible to define a domain-based column that is nullable if the domain was defined with the NOT
NULL attribute. If you want to have a domain that might be used for defining both nullable and non-nullable
columns and variables, it is better practice to make the domain nullable and apply NOT NULL in the downstream
column definitions and variable declarations.
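A sketch of a domain-based column that overrides the domain's attributes, assuming a nullable domain D_AMOUNT exists (all names are illustrative):

CREATE TABLE ORDERS (
  ID INTEGER NOT NULL,
  TOTAL D_AMOUNT DEFAULT 0 NOT NULL CHECK (TOTAL >= 0)
);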
Calculated Fields
Calculated fields can be defined with the COMPUTED [BY] or GENERATED ALWAYS AS clause (according
to the SQL:2003 standard). They mean the same. Describing the data type is not required (but possible) for
calculated fields, as the DBMS calculates and stores the appropriate type as a result of the expression analysis.
Appropriate operations for the data types included in an expression must be specified precisely.
If the data type is explicitly specified for a calculated field, the calculation result is converted to the specified
type. This means, for instance, that the result of a numeric expression could be rendered as a string.
In a query that selects a COMPUTED BY column, the expression is evaluated for each row of the selected data.
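Both spellings of a calculated field, shown side by side (names are illustrative):

CREATE TABLE ITEMS (
  PRICE NUMERIC(18, 2),
  QUANTITY INTEGER,
  TOTAL1 COMPUTED BY (PRICE * QUANTITY),
  TOTAL2 GENERATED ALWAYS AS (PRICE * QUANTITY)
);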
Tip
Instead of a computed column, in some cases it makes sense to use a regular column whose value is evaluated
in triggers for adding and updating data. It may reduce the performance of inserting/updating records, but it
will increase the performance of data selection.
Constraints
Four types of constraints can be specified: primary key (PRIMARY KEY), unique key (UNIQUE), foreign key
(REFERENCES) and CHECK constraints (CHECK).
Constraints can be specified at column level (column constraints) or at table level (table constraints). Ta-
ble-level constraints are needed when keys (uniqueness constraint, Primary Key, Foreign Key) are to be formed
across multiple columns and when a CHECK constraint involves other columns in the row besides the column
being defined. Syntax for some types of constraint may differ slightly according to whether the constraint is
being defined at column or table level.
A column-level constraint is specified during a column definition, after all column attributes except COLLA-
TION are specified, and can involve only the column specified in that definition
Table-level constraints are specified after all of the column definitions. They are a more flexible way to set
constraints, since they can cater for constraints involving multiple columns
You can mix column-level and table-level constraints in the same CREATE TABLE statement
The system automatically creates the corresponding index for a primary key (PRIMARY KEY), a unique key
(UNIQUE) and a foreign key (REFERENCES for a column-level constraint, FOREIGN KEY REFERENCES for one
at the table level).
The constraint name has the form INTEG_n, where n represents one or more numerals
The index name has the form RDB$PRIMARYn (for a primary key index), RDB$FOREIGNn (for a foreign key
index) or RDB$n (for a unique key index). Again, n represents one or more numerals
Automatic naming of table-level constraints and their indexes follows the same pattern, unless the names are
supplied explicitly.
Named Constraints
A constraint can be named explicitly if the CONSTRAINT clause is used for its definition. While the CON-
STRAINT clause is optional for defining column-level constraints, it is mandatory for table-level. By default,
the constraint index will have the same name as the constraint. If a different name is wanted for the constraint
index, a USING clause can be included.
The USING clause allows you to specify a user-defined name for the index that is created automatically and,
optionally, to define the direction of the index: either ascending (the default) or descending.
PRIMARY KEY
The PRIMARY KEY constraint is built on one or more key columns, each column having the NOT NULL constraint
specified for it. The values across the key columns in any row must be unique. A table can have only one primary
key.
UNIQUE
The UNIQUE constraint defines the requirement of content uniqueness for the values in a key throughout the
table. A table can contain any number of unique key constraints.
As with the Primary Key, the Unique constraint can be multi-column. If so, it must be specified as a table-level
constraint.
Firebird's SQL-99-compliant rules for UNIQUE constraints allow one or more NULLs in a column with a UNIQUE
constraint. That makes it possible to define a UNIQUE constraint on a column that does not have the NOT NULL
constraint.
For UNIQUE keys that span multiple columns, the logic is a little complicated:
Multiple rows having null in all the columns of the key are allowed
Multiple rows having keys with different combinations of nulls and non-null values are allowed
Multiple rows having the same key columns null and the rest filled with non-null values are allowed, provided
the values differ in at least one column
Multiple rows having the same key columns null and the rest filled with non-null values that are the same
in every column will violate the constraint
The rules for uniqueness can be summarised thus: two keys violate the constraint only if they have NULL in
exactly the same key columns and identical non-null values in all of the remaining columns.
Illustration:
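A sketch of how these rules play out, using an illustrative three-column unique key:

CREATE TABLE T (
  A INTEGER, B INTEGER, C INTEGER,
  CONSTRAINT UK_T UNIQUE (A, B, C)
);
-- Allowed: NULL in all key columns, more than once
INSERT INTO T (A, B, C) VALUES (NULL, NULL, NULL);
INSERT INTO T (A, B, C) VALUES (NULL, NULL, NULL);
-- Allowed: different combinations of NULLs and non-null values
INSERT INTO T (A, B, C) VALUES (1, NULL, NULL);
INSERT INTO T (A, B, C) VALUES (NULL, 1, NULL);
-- Allowed: the same columns NULL, non-null values differing in at least one column
INSERT INTO T (A, B, C) VALUES (NULL, 1, 1);
INSERT INTO T (A, B, C) VALUES (NULL, 1, 2);
-- Violates the constraint: the same columns NULL and identical non-null values
INSERT INTO T (A, B, C) VALUES (NULL, 2, 2);
INSERT INTO T (A, B, C) VALUES (NULL, 2, 2);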
FOREIGN KEY
A Foreign Key ensures that the participating column(s) can contain only values that also exist in the referenced
column(s) in the master table. These referenced columns are often called target columns. They must be the
primary key or a unique key in the target table. They need not have a NOT NULL constraint defined on them
although, if they are the primary key, they will, of course, have that constraint.
The foreign key columns in the referencing table itself do not require a NOT NULL constraint.
A single-column Foreign Key can be defined in the column declaration, using the keyword REFERENCES:
... ,
ARTIFACT_ID INTEGER REFERENCES COLLECTION (ARTIFACT_ID),
The column ARTIFACT_ID in the example references a column of the same name in the table COLLECTION.
Both single-column and multi-column foreign keys can be defined at the table level. For a multi-column Foreign
Key, the table-level declaration is the only option. This method also enables the provision of an optional name
for the constraint:
...
CONSTRAINT FK_ARTSOURCE FOREIGN KEY(DEALER_ID, COUNTRY)
REFERENCES DEALER (DEALER_ID, COUNTRY),
Notice that the column names in the referenced (master) table may differ from those in the Foreign Key.
Note
If no target columns are specified, the Foreign Key automatically references the target table's Primary Key.
With the sub-clauses ON UPDATE and ON DELETE it is possible to specify an action to be taken on the affected
foreign key column(s) when referenced values in the master table are changed:
CASCADE - The change in the master table is propagated to the corresponding row(s) in the child table. If
a key value changes, the corresponding key in the child records changes to the new value; if the master row
is deleted, the child records are deleted.
SET DEFAULT - The Foreign Key columns in the affected rows will be set to their default values as they
were when the foreign key constraint was defined.
SET NULL - The Foreign Key columns in the affected rows will be set to NULL.
The specified action, or the default NO ACTION, could cause a Foreign Key column to become invalid. For
example, it could get a value that is not present in the master table, or it could become NULL while the column
has a NOT NULL constraint. Such conditions will cause the operation on the master table to fail with an error
message.
Example:
...
CONSTRAINT FK_ORDERS_CUST
FOREIGN KEY (CUSTOMER) REFERENCES CUSTOMERS (ID)
ON UPDATE CASCADE ON DELETE SET NULL
CHECK Constraint
The CHECK constraint defines the condition the values inserted in this column must satisfy. A condition is
a logical expression (also called a predicate) that can return the TRUE, FALSE and UNKNOWN values. A
condition is considered satisfied if the predicate returns TRUE or UNKNOWN (equivalent to NULL). If
the predicate returns FALSE, the value will not be accepted. This condition is used for inserting a new row into
the table (the INSERT statement) and for updating the existing value of the table column (the UPDATE statement)
and also for statements where one of these actions may take place (UPDATE OR INSERT, MERGE).
Important
A CHECK constraint on a domain-based column does not replace an existing CHECK condition on the domain,
but becomes an addition to it. The Firebird engine has no way, during definition, to verify that the extra CHECK
does not conflict with the existing one.
CHECK conditions, whether defined at table level or column level, refer to table columns by their names.
The use of the keyword VALUE as a placeholder, as in domain CHECK constraints, is not valid in the context
of defining column constraints.
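A table-level CHECK constraint that involves more than one column of the row might look like this (names are illustrative):

CREATE TABLE PLACES (
  LAT NUMERIC(9, 6),
  LON NUMERIC(9, 6),
  CONSTRAINT CHK_COORDS CHECK (ABS(LAT) <= 90 AND ABS(LON) <= 180)
);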
Global Temporary Tables
Syntax:
Syntax notes
ON COMMIT DELETE ROWS creates a transaction-level GTT (the default), ON COMMIT PRESERVE ROWS
a connection-level GTT
An EXTERNAL [FILE] clause is not allowed in the definition of a global temporary table
Restrictions on GTTs
GTTs can be dressed up with all the features and paraphernalia of ordinary tables (keys, references, indexes,
triggers and so on) but there are a few restrictions:
Tip
In an existing database, it is not always easy to distinguish a regular table from a GTT, or a transaction-level
GTT from a connection-level GTT. Use this query to find out what type of table you are looking at:
select t.rdb$type_name
from rdb$relations r
join rdb$types t on r.rdb$relation_type = t.rdb$type
where t.rdb$field_name = 'RDB$RELATION_TYPE'
and r.rdb$relation_name = 'TABLENAME'
The RDB$TYPE_NAME field will show PERSISTENT for a regular table, VIEW
for a view, GLOBAL_TEMPORARY_PRESERVE for a connection-bound GTT and
GLOBAL_TEMPORARY_DELETE for a transaction-bound GTT.
External Tables
The optional EXTERNAL [FILE] clause specifies that the table is stored outside the database in an external text
file of fixed-length records. The columns of a table stored in an external file can be of any type except BLOB or
ARRAY, although for most purposes, only columns of CHAR types would be useful.
All you can do with a table stored in an external file is insert new rows (INSERT) and query the data (SELECT).
Updating existing data (UPDATE) and deleting rows (DELETE) are not possible.
A file that is defined as an external table must be located on a storage device that is physically present on
the machine where the Firebird server runs and, if the parameter ExternalFileAccess in the firebird.conf
configuration file is Restrict, it must be in one of the directories listed there as the argument for Restrict.
If the file does not exist yet, Firebird will create it on first access.
Important
The ability to use external files for a table depends on the value set for the ExternalFileAccess parameter in
firebird.conf:
If it is set to None (the default), any attempt to access an external file will be denied.
The Restrict setting is recommended, for restricting external file access to directories created explicitly
for the purpose by the server administrator; for an example setting, see the sample line after this note.
If this parameter is set to Full, external files may be accessed anywhere on the host file system. It creates
a security vulnerability and is not recommended.
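A sample firebird.conf setting of this kind (the directory path is illustrative):

ExternalFileAccess = Restrict /opt/firebird/extfiles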
The row format of the external table is fixed length. There are no field delimiters: both field and row boundaries
are determined by maximum sizes, in bytes, of the field definitions. It is important to keep this in mind, both
when defining the structure of the external table and when designing an input file for an external table that is
to import data from another application. The ubiquitous .csv format, for example, is of no use as an input file
and cannot be generated directly into an external file.
The most useful data type for the columns of external tables is the fixed-length CHAR type, of suitable lengths
for the data they are to carry. Date and number types are easily cast to and from strings whereas, unless the
files are to be read by another Firebird database, the native data types will appear to external applications as
unparseable alphabetti.
Of course, there are ways to manipulate typed data so as to generate output files from Firebird that can be read
directly as input files to other applications, using stored procedures, with or without employing external tables.
Such techniques are beyond the scope of a language reference. Here, we provide some guidelines and tips for
producing and working with simple text files, since the external table feature is often used as an easy way to
produce or read transaction-independent logs that can be studied off-line in a text editor or auditing application.
Row Delimiters
Generally, external files are more useful if rows are separated by a delimiter, in the form of a newline sequence
that is recognised by reader applications on the intended platform. For most contexts on Windows, it is the two-
byte 'CRLF' sequence, carriage return (ASCII code decimal 13) and line feed (ASCII code decimal 10). On
POSIX, LF on its own is usual; for some MacOSX applications, it may be LFCR. There are various ways to
populate this delimiter column. In our example below, it is done by using a Before Insert trigger and the internal
function ASCII_CHAR.
For our example, we will define an external log table that might be used by an exception handler in a stored
procedure or trigger. The external table is chosen because the messages from any handled exceptions will be
retained in the log, even if the transaction that launched the process is eventually rolled back because of another,
unhandled exception. For demonstration purposes, it has just two data columns, a time stamp and a message.
The third column stores the row delimiter:
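A definition along these lines would match the trigger shown next; the file path and the message column are illustrative, while the stamp and crlf columns correspond to the trigger that follows:

CREATE TABLE ext_log
  EXTERNAL FILE 'D:\externals\log_me.txt' (
  stamp CHAR(24),
  msg CHAR(100),
  crlf CHAR(2)  -- for a Windows context
);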
Now, a trigger, to write the timestamp and the row delimiter each time a message is written to the file:
SET TERM ^;
CREATE TRIGGER bi_ext_log FOR ext_log
ACTIVE BEFORE INSERT
AS
BEGIN
IF (new.stamp is NULL) then
new.stamp = CAST (CURRENT_TIMESTAMP as CHAR(24));
new.crlf = ASCII_CHAR(13) || ASCII_CHAR(10);
END ^
COMMIT ^
SET TERM ;^
Inserting some records (which could have been done by an exception handler or a fan of Shakespeare):
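For instance (the messages are illustrative):

INSERT INTO ext_log (msg) VALUES ('Shall I compare thee to a summer''s day?');
INSERT INTO ext_log (msg) VALUES ('Thou art more lovely and more temperate');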
The output:
2. Creating the STOCK table with the named primary key specified at the column level and the named unique
key specified at the table level.
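A statement matching this description might be (names are illustrative):

CREATE TABLE STOCK (
  MODEL SMALLINT NOT NULL CONSTRAINT PK_STOCK PRIMARY KEY,
  MODELNAME CHAR(10) NOT NULL,
  ITEMID INTEGER NOT NULL,
  CONSTRAINT MOD_UNIQUE UNIQUE (MODELNAME, ITEMID)
);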
3. Creating the JOB table with a primary key constraint spanning two columns, a foreign key constraint for
the COUNTRY table and a table-level CHECK constraint. The table also contains an array of 5 elements.
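A sketch of such a table, assuming the COUNTRY table's key column is named COUNTRY (all names and types are illustrative):

CREATE TABLE JOB (
  JOB_CODE VARCHAR(5) NOT NULL,
  JOB_GRADE SMALLINT NOT NULL,
  JOB_COUNTRY VARCHAR(15),
  JOB_TITLE VARCHAR(25) NOT NULL,
  MIN_SALARY NUMERIC(18, 2),
  MAX_SALARY NUMERIC(18, 2),
  LANGUAGE_REQ VARCHAR(15) [5],
  CONSTRAINT PK_JOB PRIMARY KEY (JOB_CODE, JOB_GRADE),
  CONSTRAINT FK_JOB_COUNTRY FOREIGN KEY (JOB_COUNTRY)
    REFERENCES COUNTRY (COUNTRY),
  CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY)
);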
4. Creating the PROJECT table with primary, foreign and unique key constraints with custom index names
specified with the USING clause.
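A possible definition, assuming an EMPLOYEE table with an EMP_NO key exists (all names are illustrative):

CREATE TABLE PROJECT (
  PROJ_ID CHAR(5) NOT NULL
    CONSTRAINT PK_PROJECT PRIMARY KEY USING INDEX IDX_PROJ_ID,
  PROJ_NAME VARCHAR(20) NOT NULL
    CONSTRAINT UQ_PROJ_NAME UNIQUE USING INDEX IDX_PROJNAME,
  TEAM_LEADER SMALLINT,
  CONSTRAINT FK_PROJ_LEADER FOREIGN KEY (TEAM_LEADER)
    REFERENCES EMPLOYEE (EMP_NO) USING INDEX IDX_LEADER
);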
5. Creating the SALARY_HISTORY table with two computed fields. The first one is declared according
to the SQL:2003 standard, while the second one is declared according to the traditional declaration of
computed fields in Firebird.
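A sketch of such a table (names are illustrative):

CREATE TABLE SALARY_HISTORY (
  EMP_NO SMALLINT NOT NULL,
  CHANGE_DATE TIMESTAMP DEFAULT 'NOW' NOT NULL,
  OLD_SALARY NUMERIC(18, 2) NOT NULL,
  PERCENT_CHANGE DOUBLE PRECISION DEFAULT 0 NOT NULL,
  NEW_SALARY GENERATED ALWAYS AS
    (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100),
  NEW_SALARY2 COMPUTED BY
    (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100)
);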
7. Creating a transaction-scoped global temporary table that uses a foreign key to reference a connec-
tion-scoped global temporary table. The ON COMMIT sub-clause is optional because DELETE ROWS
is the default.
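A sketch of the two tables (names are illustrative); the transaction-scoped table omits the ON COMMIT sub-clause, since DELETE ROWS is the default:

CREATE GLOBAL TEMPORARY TABLE MYCONNGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  TXT VARCHAR(32)
) ON COMMIT PRESERVE ROWS;

CREATE GLOBAL TEMPORARY TABLE MYTXGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  PARENT_ID INTEGER NOT NULL REFERENCES MYCONNGTT (ID),
  TXT VARCHAR(32)
);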
ALTER TABLE
Used for: altering the structure of a table.
Syntax:
<regular_col_def> ::=
colname {<datatype> | domainname}
[DEFAULT {literal | NULL | <context_var>}]
[NOT NULL]
[<col_constraint>]
[COLLATE collation_name]
<computed_col_def> ::=
colname [<datatype>]
{COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<regular_col_mod> ::=
TO newname
| POSITION newpos
| TYPE {<datatype> | domainname}
| SET DEFAULT {literal | NULL | <context_var>}
| DROP DEFAULT
<computed_col_mod> ::=
TO newname
| POSITION newpos
| [TYPE <datatype>] {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<datatype> ::=
{SMALLINT | INTEGER | BIGINT} [<array_dim>]
| {FLOAT | DOUBLE PRECISION} [<array_dim>]
| {DATE | TIME | TIMESTAMP} [<array_dim>]
| {DECIMAL | NUMERIC} [(precision [, scale])] [<array_dim>]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[<array_dim>] [CHARACTER SET charset_name]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)] [<array_dim>]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset_name]
| BLOB [(seglen [, subtype_num])]
<col_constraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY [<using_index>]
| UNIQUE [<using_index>]
| REFERENCES other_table [(colname)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>) }
<tconstraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY (col_list) [<using_index>]
| UNIQUE (col_list) [<using_index>]
| FOREIGN KEY (col_list)
REFERENCES other_table [(col_list)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>) }
<check_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
| <val> [NOT] CONTAINING <val>
| <val> [NOT] STARTING [WITH] <val>
| <val> [NOT] LIKE <val> [ESCAPE <val>]
| <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
| <val> <operator> {ALL | SOME | ANY} (<select_list>)
| [NOT] EXISTS (<select_expr>)
| [NOT] SINGULAR (<select_expr>)
| (<check_condition>)
| NOT <check_condition>
| <check_condition> OR <check_condition>
| <check_condition> AND <check_condition>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >= | !< | ^< | ~< | !> | ^> | ~>
<val> ::=
colname [[<array_idx> [, <array_idx> ...]]]
| literal
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <datatype>)
| (<select_one>)
| func([<val> [, <val> ...]])
Parameter Description
tablename Name (identifier) of the table
operation One of the available operations altering the structure of the table
colname Name (identifier) for a column in the table, max. 31 characters. Must be unique in the table.
newname New name (identifier) for the column, max. 31 characters. Must be unique in the table.
newpos The new column position (an integer between 1 and the number of columns in the table)
col_constraint Column constraint
tconstraint Table constraint
constr_name The name (identifier) of a constraint. May consist of up to 31 characters.
other_table The name of the table referenced by the foreign key constraint
literal A literal value that is allowed in the given context
context_var A context variable whose type is allowed in the given context
check_condition The condition of a CHECK constraint that will be satisfied if it evaluates to TRUE or UNKNOWN/NULL
collation Name of a collation sequence that is valid for charset_name, if it is supplied with datatype or, otherwise, is valid for the default character set of the database
array_dim Array dimensions
m, n INTEGER numbers defining the index range of an array dimension
precision The total number of significant digits that a value of the datatype can hold (1..18)
scale The number of digits after the decimal point (0..precision)
size The maximum size of a string in characters
charset_name The name of a valid character set, if the character set of the column is to be different to the default character set of the database
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size (max. 65535)
select_one A scalar SELECT statement selecting one column and returning only one row
select_list A SELECT statement selecting one column and returning zero or more rows
select_expr A SELECT statement selecting one or more columns and returning zero or more rows
expression An expression resolving to a value that is allowed in the given context
genname Sequence (generator) name
func Internal function or UDF
The ALTER TABLE statement changes the structure of an existing table. With one ALTER TABLE statement it
is possible to perform multiple operations, adding/dropping columns and constraints and also altering column
specifications.
Some changes in the structure of a table increment the metadata change counter (version count) assigned to
every table. The number of metadata changes is limited to 255 for each table. Once the counter reaches the 255
limit, you will not be able to make any further changes to the structure of the table without resetting the counter.
To reset the metadata change counter: You should back up and restore the database using the gbak utility.
Each time a new column is added, the metadata change counter grows by one
Adding a new table constraint does not increase the metadata change counter
Points to Be Aware of
1. Be careful about adding a new column with the NOT NULL constraint set. It may lead to breaking the
logical integrity of data, since you will have existing records containing NULL in a non-nullable column.
When adding a non-nullable column, it is recommended either to set a default value for it or to update the
column in existing rows with a non-null value.
2. When a new CHECK constraint is added, existing data is not tested for compliance. Prior testing of existing
data against the new CHECK expression is recommended.
Effect on Version Count: Each time a column is dropped, the table's metadata change counter is increased by one.
A PRIMARY KEY or UNIQUE key constraint cannot be deleted if it is referenced by a FOREIGN KEY constraint in
another table. It will be necessary to drop that FOREIGN KEY constraint before attempting to drop the PRIMARY
KEY or UNIQUE key constraint it references.
Effect on Version Count: Deleting a column constraint or a table constraint does not increase the metadata
change counter.
ALTER [COLUMN]: With the ALTER [COLUMN] clause, the attributes of existing columns can be modified. The
possible alterations are:
change the name (does not affect the metadata change counter)
change the data type (increases the metadata change counter by one)
change the column position in the column list of the table (does not affect the metadata change counter)
delete the default column value (does not affect the metadata change counter)
set a default column value or change the existing default (does not affect the metadata change counter)
change the type and expression for a computed column (does not affect the metadata change counter)
It will not be possible to change the name of a column that is included in any constraint: PRIMARY KEY, UNIQUE
key, FOREIGN KEY, column constraint or the CHECK constraint of the table.
Renaming a column will also be disallowed if the column is used in any trigger, stored procedure or view.
If the column was declared as an array, no change to its type or its number of dimensions is permitted.
The data type of a column that is involved in a foreign key, primary key or unique constraint cannot be changed
at all.
If the column is based on a domain with a default value, the default value will revert to the domain default
An execution error will be raised if an attempt is made to delete the default value of a column which has no
default value or whose default value is domain-based
The optional SET DEFAULT clause sets a default value for the column. If the column already has a default value,
it will be replaced with the new one. The default value applied to a column always overrides one inherited from
a domain.
Only the table owner and administrators have the authority to use ALTER TABLE.
2. Adding the CAPITAL column with the UNIQUE constraint and deleting the CURRENCY column.
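For instance (types are illustrative):

ALTER TABLE COUNTRY
  ADD CAPITAL VARCHAR(25) UNIQUE,
  DROP CURRENCY;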
3. Adding the CHK_SALARY check constraint and a foreign key to the JOB table.
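A possible statement (column and constraint names other than CHK_SALARY are illustrative):

ALTER TABLE JOB
  ADD CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY),
  ADD CONSTRAINT FK_JOB_COUNTRY FOREIGN KEY (JOB_COUNTRY)
    REFERENCES COUNTRY (COUNTRY);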
4. Setting a default value for the MODEL column, changing the type of the ITEMID column and renaming the
MODELNAME column.
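One way to express all three changes in a single statement (the new values and names are illustrative):

ALTER TABLE STOCK
  ALTER COLUMN MODEL SET DEFAULT 1,
  ALTER COLUMN ITEMID TYPE BIGINT,
  ALTER COLUMN MODELNAME TO NAME;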
DROP TABLE
Used for: deleting a table
Syntax:
Parameter Description
tablename Name (identifier) of the table
The DROP TABLE statement deletes an existing table. If the table has dependencies, the DROP TABLE statement
will fail with an execution error.
When a table is dropped, all triggers for its events and indexes built for its fields will be deleted as well.
Only the table owner and administrators have the authority to use DROP TABLE.
RECREATE TABLE
Used for: creating a new table (relation) or recreating an existing one
Syntax:
See the CREATE TABLE section for the full syntax of CREATE TABLE and descriptions of defining tables,
columns and constraints.
RECREATE TABLE creates or recreates a table. If a table with this name already exists, the RECREATE TABLE
statement will try to drop it and create a new one. Existing dependencies will prevent the statement from exe-
cuting.
INDEX
An index is a database object used for faster data retrieval from a table or for speeding up the sorting of a query.
Indexes are also used to enforce the referential integrity constraints PRIMARY KEY, FOREIGN KEY and UNIQUE.
This section describes how to create indexes, activate and deactivate them, delete them and collect statistics
(recalculate selectivity) for them.
CREATE INDEX
Used for: Creating an index for a table
Syntax:
Parameter Description
indexname Index name. It may consist of up to 31 characters
tablename The name of the table for which the index is to be built
col Name of a column in the table. Columns of the types BLOB and ARRAY and computed fields cannot be used in an index
expression The expression that will compute the values for a computed index, also known as an expression index
The CREATE INDEX statement creates an index for a table that can be used to speed up searching, sorting and
grouping. Indexes are created automatically in the process of defining constraints, such as primary key, foreign
key or unique constraints.
An index can be built on the content of columns of any data type except for BLOB and arrays. The name (iden-
tifier) of an index must be unique among all index names.
Key Indexes
When a primary key, foreign key or unique constraint is added to a table or column, an index with the same name
is created automatically, without an explicit directive from the designer. For example, the PK_COUNTRY
index will be created automatically when you execute and commit the following statement:
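A statement of this kind (the column definitions are illustrative):

CREATE TABLE COUNTRY (
  COUNTRY VARCHAR(15) NOT NULL CONSTRAINT PK_COUNTRY PRIMARY KEY,
  CURRENCY VARCHAR(10) NOT NULL
);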
Unique Indexes
Specifying the keyword UNIQUE in the index creation statement creates an index in which uniqueness will be
enforced throughout the table. The index is referred to as a unique index. A unique index is not a constraint.
Unique indexes cannot contain duplicate key values (or duplicate key value combinations, in the case of com-
pound, or multi-column, or multi-segment, indexes). Duplicated NULLs are permitted, in accordance with the
SQL:99 standard, in both single-segment and multi-segment indexes.
Index Direction
All indexes in Firebird are uni-directional. An index may be constructed from the lowest value to the highest
(ascending order) or from the highest value to the lowest (descending order). The keywords ASC[ENDING] and
DESC[ENDING] are used to specify the direction of the index. The default index order is ASC[ENDING]. It is
quite valid to define both an ascending and a descending index on the same column or key set.
Tip
A descending index can be useful on a column that will be subjected to searches on the high values (newest,
maximum, etc.)
Computed (Expression) Indexes
Note
You can actually create a computed index on a computed field, but the index will never be used.
Limits on Indexes
Certain limits apply to indexes.
The number of indexes that can be accommodated for each table is limited. The actual maximum for a specific
table depends on the page size and the number of columns in the indexes.
The maximum indexed string length is 9 bytes less than the maximum key length. The maximum indexable
string length depends on the page size and the character set.
Only the table owner and administrators have the authority to use CREATE INDEX.
2. Creating an index with keys sorted in the descending order for the CHANGE_DATE column in the
SALARY_HISTORY table
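For example (the index name is illustrative):

CREATE DESCENDING INDEX IDX_CHANGE ON SALARY_HISTORY (CHANGE_DATE);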
3. Creating a multi-segment index for the ORDER_STATUS, PAID columns in the SALES table
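For example (the index name is illustrative):

CREATE INDEX IDX_SALESTAT ON SALES (ORDER_STATUS, PAID);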
4. Creating an index that does not permit duplicate values for the NAME column in the COUNTRY table
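For example (the index name is illustrative):

CREATE UNIQUE INDEX UNQ_COUNTRY_NAME ON COUNTRY (NAME);

The query that follows would be able to make use of an expression index; a plausible definition for such an index (the index name is illustrative) is:

CREATE INDEX IDX_NAME_UPPER ON PERSONS
  COMPUTED BY (UPPER(NAME));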
SELECT *
FROM PERSONS
WHERE UPPER(NAME) STARTING WITH UPPER('Iv');
ALTER INDEX
Used for: Activating or deactivating an index; rebuilding an index
Syntax:
Parameter Description
indexname Index name
The ALTER INDEX statement activates or deactivates an index. There is no facility on this statement for altering
any attributes of the index.
With the INACTIVE option, the index is switched from the active to inactive state. The effect is similar to the
DROP INDEX statement except that the index definition remains in the database. Altering a constraint index
to the inactive state is not permitted.
An active index can be deactivated if there are no queries using that index; otherwise, an "object in use"
error is returned.
Activating an inactive index is also safe. However, if there are active transactions modifying the table, the
transaction containing the ALTER INDEX statement will fail if it has the NOWAIT attribute. If the transaction
is in WAIT mode, it will wait for completion of concurrent transactions.
On the other side of the coin, if our ALTER INDEX succeeds and starts to rebuild the index at COMMIT, other
transactions modifying that table will fail or wait, according to their WAIT/NO WAIT attributes. The situation
is exactly the same for CREATE INDEX.
How is it Useful?
It might be useful to switch an index to the inactive state whilst inserting, updating or deleting a large batch
of records in the table that owns the index.
With the ACTIVE option, if the index is in the inactive state, it will be switched to active state and the system
rebuilds the index.
How is it Useful?
Even if the index is active when ALTER INDEX ... ACTIVE is executed, the index will be rebuilt. Rebuilding
indexes can be a useful piece of housekeeping to do, occasionally, on the indexes of a large table in a database
that has frequent inserts, updates or deletes but is infrequently restored.
Only the table owner and administrators have the authority to use ALTER INDEX.
2. Switching the IDX_UPDATER index back to the active state and rebuilding it
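For example:

ALTER INDEX IDX_UPDATER ACTIVE;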
DROP INDEX
Used for: Deleting an index
Syntax:
Parameter Description
indexname Index name
The DROP INDEX statement deletes the named index from the database.
Note
A constraint index cannot be deleted using DROP INDEX. Constraint indexes are dropped during the process of
executing the command ALTER TABLE ... DROP CONSTRAINT ....
Only the table owner and administrators have the authority to use DROP INDEX.
SET STATISTICS
Used for: Recalculating the selectivity of an index
Syntax:
Parameter Description
indexname Index name
The SET STATISTICS statement recalculates the selectivity of the specified index.
Index Selectivity
The selectivity of an index is the result of evaluating the number of rows that can be selected in a search on
every index value. A unique index has the maximum selectivity because it is impossible to select more than one
row for each value of an index key if it is used. Keeping the selectivity of an index up to date is important for
the optimizer's choices in seeking an optimal query plan.
Index statistics in Firebird are not automatically recalculated in response to large batches of inserts, updates
or deletions. It may be beneficial to recalculate the selectivity of an index after such operations because the
selectivity tends to become outdated.
Note
The statements CREATE INDEX and ALTER INDEX ACTIVE both store index statistics that completely corre-
spond to the contents of the newly-[re]built index.
The selectivity of an index can be recalculated by the owner of the table or an administrator. It can be performed
under concurrent load without risk of corruption. However, be aware that, under concurrent load, the newly
calculated statistics could become outdated as soon as SET STATISTICS finishes.
Example Using SET STATISTICS: Recalculating the selectivity of the index IDX_UPDATER
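For example:

SET STATISTICS INDEX IDX_UPDATER;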
VIEW
A view is a virtual table that is actually a stored and named SELECT query for retrieving data of any complexity.
Data can be retrieved from one or more tables, from other views and also from selectable stored procedures.
Unlike regular tables in relational databases, a view is not an independent data set stored in the database. The
result is dynamically created as a data set when the view is selected.
The metadata of a view are available to the process that generates the binary code for stored procedures and
triggers, just as though they were concrete tables storing persistent data.
CREATE VIEW
Used for: Creating a view
Syntax:
Parameter Description
viewname View name, maximum 31 characters
select_statement SELECT statement
full_column_list The list of columns in the view
colname View column name. Duplicate column names are not allowed.
The CREATE VIEW statement creates a new view. The identifier (name) of a view must be unique among the
names of all views, tables and stored procedures in the database.
The name of the new view can be followed by the list of column names that should be returned to the caller
when the view is invoked. Names in the list do not have to be related to the names of the columns in the base
tables from which they derive.
If the view column list is omitted, the system will use the column names and/or aliases from the SELECT state-
ment. If duplicate names or non-aliased expression-derived columns make it impossible to obtain a valid list,
creation of the view fails with an error.
The number of columns in the view's list must exactly match the number of columns in the selection list of the
underlying SELECT statement in the view definition.
Additional Points
If the full list of columns is specified, it makes no sense to specify aliases in the SELECT statement because
the names in the column list will override them
The column list is optional if all of the columns in the SELECT are explicitly named and are unique in the
selection list
Updatable Views
A view can be updatable or read-only. If a view is updatable, the data retrieved when this view is called can be
changed by the DML statements INSERT, UPDATE, DELETE, UPDATE OR INSERT or MERGE. Changes made
in an updatable view are applied to the underlying table(s).
A read-only view can be made updatable with the use of triggers. Once triggers have been defined on a view,
changes posted to it will never be written automatically to the underlying table, even if the view was updatable
to begin with. It is the responsibility of the programmer to ensure that the triggers update (or delete from, or
insert into) the base tables as needed.
A view will be automatically updatable if all of the following conditions are met:
the SELECT statement queries only one table or one updatable view
each base table (or base view) column not present in the view definition is covered by one of the following
conditions:
- it is nullable
- it has a non-NULL default value
- it has a trigger that supplies a permitted value
the SELECT statement contains no fields derived from subqueries or other expressions
the SELECT statement does not contain fields defined through aggregate functions, such as MIN, MAX, AVG,
SUM, COUNT, LIST
the SELECT statement does not include the keyword DISTINCT or row-restrictive keywords such as ROWS,
FIRST, SKIP
WITH CHECK OPTION
If a view is created with the WITH CHECK OPTION clause, every attempt to insert a new record or to update an
existing one is checked as to whether the new or updated record would meet the WHERE criteria.
If they fail the check, the operation is not performed and an appropriate error message is returned.
WITH CHECK OPTION can be specified only in a CREATE VIEW statement in which a WHERE clause is
present to restrict the output of the main SELECT statement. An error message is returned otherwise.
Please note:
If WITH CHECK OPTION is used, the engine checks the input against the WHERE clause before passing anything
to the base relation. Therefore, if the check on the input fails, any default clauses or triggers on the base relation
that might have been designed to correct the input will never come into action.
Furthermore, view fields omitted from the INSERT statement are passed as NULLs to the base relation, regard-
less of their presence or absence in the WHERE clause. As a result, base table defaults defined on such fields
will not be applied. Triggers, on the other hand, will fire and work as expected.
For views that do not have WITH CHECK OPTION, fields omitted from the INSERT statement are not passed
to the base relation at all, so any defaults will be applied.
Ownership of a View
The creator of a view becomes its owner.
To create a view, a non-admin user needs at least SELECT access to the underlying table(s) and/or view(s), and
the EXECUTE privilege on any selectable stored procedures involved.
To enable insertions, updates and deletions through the view, the creator/owner must also possess the corre-
sponding INSERT, UPDATE and DELETE rights on the base object(s).
Granting other users privileges on the view is only possible if the view owner himself has these privileges on
the underlying objects WITH GRANT OPTION. It will always be the case if the view owner is also the owner
of the underlying objects.
1. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where
MAX_SALARY is less than $15,000.
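A possible definition (the view name is illustrative):

CREATE VIEW ENTRY_LEVEL_JOBS AS
SELECT JOB_CODE, JOB_TITLE
FROM JOB
WHERE MAX_SALARY < 15000;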
2. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where
MAX_SALARY is less than $15,000. Whenever a new record is inserted or an existing record is updated,
the MAX_SALARY < 15000 condition will be checked. If the condition is not true, the insert/update op-
eration will be rejected.
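A possible definition (the view name is illustrative):

CREATE VIEW ENTRY_LEVEL_JOBS2 AS
SELECT JOB_CODE, JOB_TITLE
FROM JOB
WHERE MAX_SALARY < 15000
WITH CHECK OPTION;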
4. Creating a view with the help of aliases for fields in the SELECT statement (the same result as in Example
3).
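A sketch of such a view, matching the PRICE_WITH_MARKUP view used in later examples (the alias is illustrative):

CREATE VIEW PRICE_WITH_MARKUP AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.15 AS COST_WITH_MARKUP
FROM PRICE;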
See also: ALTER VIEW, CREATE OR ALTER VIEW, RECREATE VIEW, DROP VIEW
ALTER VIEW
Used for: Modifying an existing view
Syntax:
Parameter Description
viewname Name of an existing view
select_statement SELECT statement
full_column_list The list of columns in the view
colname View column name. Duplicate column names are not allowed.
Use the ALTER VIEW statement for changing the definition of an existing view. Privileges for views remain
intact and dependencies are not affected.
The syntax of the ALTER VIEW statement corresponds completely with that of CREATE VIEW.
Caution
Be careful when you change the number of columns in a view. Existing application code and PSQL modules
that access the view may become invalid. For information on how to detect this kind of problem in stored
procedures and triggers, see The RDB$VALID_BLR Field in the Appendix.
Only the view owner and administrators have the authority to use ALTER VIEW.
CREATE OR ALTER VIEW
Used for: Creating a new view or altering an existing view
Syntax:
Parameter Description
viewname Name of a view which may or may not exist
select_statement SELECT statement
full_column_list The list of columns in the view
colname View column name. Duplicate column names are not allowed.
Use the CREATE OR ALTER VIEW statement for changing the definition of an existing view or creating it if it
does not exist. Privileges for an existing view remain intact and dependencies are not affected.
The syntax of the CREATE OR ALTER VIEW statement corresponds completely with that of CREATE VIEW.
Example: Creating the PRICE_WITH_MARKUP view or altering it if it already exists:
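A statement matching this description, mirroring the RECREATE VIEW example further on:

CREATE OR ALTER VIEW PRICE_WITH_MARKUP (
  CODE_PRICE,
  COST,
  COST_WITH_MARKUP
) AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.15
FROM PRICE;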
DROP VIEW
Used for: Deleting (dropping) a view
Syntax:
Parameter Description
viewname View name
The DROP VIEW statement deletes an existing view. The statement will fail if the view has dependencies.
Only the view owner and administrators have the authority to use DROP VIEW.
RECREATE VIEW
Used for: Creating a new view or recreating an existing view
Syntax:
Parameter Description
viewname View name, maximum 31 characters
select_statement SELECT statement
full_column_list The list of columns in the view
colname View column name. Duplicate column names are not allowed.
Creates or recreates a view. If there is a view with this name already, the engine will try to drop it before creating
the new instance. If the existing view cannot be dropped, because of dependencies or insufficient rights, for
example, RECREATE VIEW fails with an error.
Example: Creating the PRICE_WITH_MARKUP view or recreating it, if it already exists:
RECREATE VIEW PRICE_WITH_MARKUP (
CODE_PRICE,
COST,
COST_WITH_MARKUP
) AS
SELECT
CODE_PRICE,
COST,
COST * 1.15
FROM PRICE;
TRIGGER
A trigger is a special type of stored procedure that is not called directly, instead being executed when a specified
event occurs in the associated table or view. A trigger is specific to one and only one relation (table or view)
and one phase in the timing of the event (BEFORE or AFTER). It can be specified to execute for one specific
event (insert, update, delete) or for some combination of two or three of those events.
Another form of trigger, known as a database trigger, can be specified to fire in association with the start
or end of a user session (connection) or a user transaction.
CREATE TRIGGER
Used for: Creating a new trigger
Syntax:
<relation_trigger_legacy> ::=
FOR {tablename | viewname}
[ACTIVE | INACTIVE]
{BEFORE | AFTER} <mutation_list>
[POSITION number]
<relation_trigger_sql2003> ::=
[ACTIVE | INACTIVE]
{BEFORE | AFTER} <mutation_list>
[POSITION number]
ON {tablename | viewname}
<database_trigger> ::=
[ACTIVE | INACTIVE] ON db_event [POSITION number]
<mutation_list> ::=
<mutation> [OR <mutation> [OR <mutation>]]
<mutation> ::= { INSERT | UPDATE | DELETE }
<db_event> ::= {
CONNECT |
DISCONNECT |
TRANSACTION START |
TRANSACTION COMMIT |
TRANSACTION ROLLBACK
}
Parameter Description
trigname Trigger name consisting of up to 31 characters. It must be unique among all trigger names in the database.
relation_trigger_legacy Legacy style of trigger declaration for a relation trigger
relation_trigger_sql2003 Relation trigger declaration compliant with the SQL:2003 standard
database_trigger Database trigger declaration
tablename Name of the table with which the relation trigger is associated
viewname Name of the view with which the relation trigger is associated
mutation_list List of relation (table | view) events
number Position of the trigger in the firing order. From 0 to 32,767
db_event Connection or transaction event
declarations Section for declaring local variables and named cursors
declare_var Local variable declaration
declare_cursor Named cursor declaration
PSQL_statements Statements in Firebird's programming language (PSQL)
The CREATE TRIGGER statement is used for creating a new trigger. A trigger can be created either for a relation
(table | view) event (or a combination of events), or for a database event.
CREATE TRIGGER, along with its associates ALTER TRIGGER, CREATE OR ALTER TRIGGER and RECREATE
TRIGGER, is a compound statement, consisting of a header and a body. The header specifies the name of the
trigger, the name of the relation (for a relation trigger), the phase of the trigger and the event[s] it applies to. The
body consists of optional declarations of local variables and named cursors followed by one or more statements,
or blocks of statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the
keyword END. Declarations and embedded statements are terminated with semi-colons (;).
The name of the trigger must be unique among all trigger names.
Statement Terminators
Some SQL statement editors, specifically the isql utility that comes with Firebird and possibly some third-
party editors, employ an internal convention that requires all statements to be terminated with a semi-colon.
This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with
this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the
Terminator in isql.
Forms of Declaration
A relation trigger specifies, among other things, a phase and one or more events.
Phase
Phase concerns the timing of the trigger with regard to the change-of-state event in the row of data:
A BEFORE trigger is fired before the specified database operation (insert, update or delete) is carried out
An AFTER trigger is fired after the database operation has been completed
Row Events
A relation trigger definition specifies at least one of the DML operations INSERT, UPDATE and DELETE, to
indicate one or more events on which the trigger should fire. If multiple operations are specified, they must be
separated by the keyword OR. No operation may occur more than once.
Within the statement block, the Boolean context variables INSERTING, UPDATING and DELETING can be used
to test which operation is currently executing.
The keyword POSITION allows an optional execution order (firing order) to be specified for a series of triggers
that have the same phase and event as their target. The default position is 0. If no positions are specified, or if several triggers share the same position number, the triggers will be executed in alphabetical order of their names.
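For illustration, a sketch of two BEFORE INSERT triggers whose firing order is fixed with POSITION (the CUSTOMER table, the CUST_NO_GEN sequence and the trigger bodies are assumptions):

CREATE TRIGGER SET_CUST_NO FOR CUSTOMER
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.CUST_NO IS NULL) THEN
    NEW.CUST_NO = NEXT VALUE FOR CUST_NO_GEN;
END

CREATE TRIGGER CUSTOMER_DEFAULTS FOR CUSTOMER
ACTIVE BEFORE INSERT POSITION 1
AS
BEGIN
  IF (NEW.ON_HOLD IS NULL) THEN
    NEW.ON_HOLD = 0;
END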
Variable Declarations
The optional declarations section beneath the keyword AS in the header of the trigger is for defining variables and
named cursors that are local to the trigger. For more details, see DECLARE VARIABLE and DECLARE CURSOR
in the Procedural SQL chapter.
The local declarations (if any) are the final part of a trigger's header section. The trigger body follows, where
one or more blocks of PSQL statements are enclosed in a structure that starts with the keyword BEGIN and
terminates with the keyword END.
Only the owner of the view or table and administrators have the authority to use CREATE TRIGGER.
1. Creating a trigger in the legacy form, firing before the event of inserting a new record into the CUS-
TOMER table occurs.
2. Creating a trigger firing before the event of inserting a new record into the CUSTOMER table in the
SQL:2003 standard-compliant form.
3. Creating a trigger that will fire after either inserting, updating or deleting a record in the CUSTOMER table.
OLD.CUST_NO,
'CUSTOMER',
CASE
WHEN INSERTING THEN 'INSERT'
WHEN UPDATING THEN 'UPDATE'
WHEN DELETING THEN 'DELETE'
END);
END
Database Triggers
Triggers can be defined to fire upon database events, which really refers to a mixture of events that act across
the scope of a session (connection) and events that act across the scope of an individual transaction:
CONNECT
DISCONNECT
TRANSACTION START
TRANSACTION COMMIT
TRANSACTION ROLLBACK
CONNECT and DISCONNECT triggers are executed in a transaction created specifically for this purpose. If all
goes well, the transaction is committed. Uncaught exceptions cause the transaction to roll back, and:
- for a CONNECT trigger, the connection is then broken and the exception is returned to the client
- for a DISCONNECT trigger, exceptions are not reported. The connection is broken as intended
TRANSACTION triggers are executed within the transaction whose start, commit or rollback evokes them. The
action taken after an uncaught exception depends on the event:
- In a TRANSACTION START trigger, the exception is reported to the client and the transaction is rolled back
- In a TRANSACTION COMMIT trigger, the exception is reported, the trigger's actions so far are undone and the commit is cancelled
- In a TRANSACTION ROLLBACK trigger, the exception is not reported and the transaction is rolled back as intended.
Traps
Obviously there is no direct way of knowing if a DISCONNECT or TRANSACTION ROLLBACK trigger caused
an exception. It also follows that the connection to the database cannot happen if a CONNECT trigger causes an
exception and a transaction cannot start if a TRANSACTION START trigger causes one, either. Both phenomena
effectively lock you out of your database until you get in there with database triggers suppressed and fix the
bad code.
Trigger Suppression
Some Firebird command-line tools have been supplied with switches that an administrator can use to suppress
the automatic firing of database triggers. So far, they are:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
Two-phase Commit
In a two-phase commit scenario, TRANSACTION COMMIT triggers fire in the prepare phase, not at the commit.
Some Caveats
1. The use of the IN AUTONOMOUS TRANSACTION DO statement in the database event triggers related to
transactions (TRANSACTION START, TRANSACTION ROLLBACK, TRANSACTION COMMIT) may cause
the autonomous transaction to enter an infinite loop
2. The DISCONNECT and TRANSACTION ROLLBACK event triggers will not be executed when clients are
disconnected via monitoring tables (DELETE FROM MON$ATTACHMENTS)
Only the database owner and administrators have the authority to create database triggers.
1. Creating a trigger for the event of connecting to the database that logs users logging into the system. The
trigger is created as inactive.
2. Creating a trigger for the event of connecting to the database that does not permit any users, except for
SYSDBA, to log in during off hours.
CREATE EXCEPTION E_INCORRECT_WORKTIME 'The working day has not started yet.';
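Possible sketches of these two triggers (the log table, trigger names and bodies are assumptions):

CREATE TRIGGER TR_LOG_CONNECT
INACTIVE ON CONNECT POSITION 0
AS
BEGIN
  INSERT INTO LOG_CONNECT (USERNAME, ATIME)
  VALUES (CURRENT_USER, CURRENT_TIMESTAMP);
END

CREATE TRIGGER TR_LIMIT_WORKTIME
ACTIVE ON CONNECT POSITION 1
AS
BEGIN
  IF ((CURRENT_USER <> 'SYSDBA') AND
      (EXTRACT(HOUR FROM CURRENT_TIMESTAMP) NOT BETWEEN 9 AND 17)) THEN
    EXCEPTION E_INCORRECT_WORKTIME;
END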
See also: ALTER TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER
ALTER TRIGGER
Used for: Modifying and deactivating an existing trigger
Syntax:
<mutation_list> ::=
<mutation> [OR <mutation> [OR <mutation>]]
<db_event> ::= {
CONNECT |
DISCONNECT |
TRANSACTION START |
TRANSACTION COMMIT |
TRANSACTION ROLLBACK
}
Parameter Description
trigname Name of an existing trigger
mutation_list List of relation (table | view) events
number Position of the trigger in the firing order. From 0 to 32,767
declarations Section for declaring local variables and named cursors
declare_var Local variable declaration
declare_cursor Named cursor declaration
PSQL_statements Statements in Firebird's programming language (PSQL)
The ALTER TRIGGER statement allows certain changes to the header and body of a trigger.
The trigger events can be changed; however, relation trigger events cannot be changed to database trigger events, nor vice versa
Reminders
The BEFORE keyword directs that the trigger be executed before the associated event occurs; the AFTER
keyword directs that it be executed after the event.
More than one relation event (INSERT, UPDATE, DELETE) can be covered in a single trigger. The events
should be separated with the keyword OR. No event should be mentioned more than once.
The keyword POSITION allows an optional execution order (firing order) to be specified for a series of triggers
that have the same phase and event as their target. The default position is 0. If no positions are specified, or if several triggers share the same position number, the triggers will be executed in alphabetical order of their names.
Administrators and the following users have the authority to use ALTER TRIGGER:
3. Switching the TR_CUST_LOG trigger to the inactive status and modifying the list of events.
4. Switching the tr_log_connect trigger to the active status, changing its position and body.
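Possible sketches of these two statements (the event lists and the body are illustrative):

ALTER TRIGGER TR_CUST_LOG
INACTIVE AFTER INSERT OR UPDATE;

ALTER TRIGGER tr_log_connect
ACTIVE POSITION 1
AS
BEGIN
  INSERT INTO LOG_CONNECT (USERNAME, ATIME, ROLENAME)
  VALUES (CURRENT_USER, CURRENT_TIMESTAMP, CURRENT_ROLE);
END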
See also: CREATE TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER
Syntax:
The CREATE OR ALTER TRIGGER statement creates a new trigger if it does not exist; otherwise it alters and
recompiles it with the privileges intact and dependencies unaffected.
Example using CREATE OR ALTER TRIGGER: Creating a new trigger if it does not exist or altering it if it
does exist.
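A minimal sketch (the trigger, table and sequence names are assumptions):

CREATE OR ALTER TRIGGER set_cust_no FOR customer
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.cust_no IS NULL) THEN
    NEW.cust_no = NEXT VALUE FOR cust_no_gen;
END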
DROP TRIGGER
Used for: Deleting an existing trigger
Syntax:
Parameter Description
trigname Trigger name
Administrators and the following users have the authority to use DROP TRIGGER:
RECREATE TRIGGER
Used for: Creating a new trigger or recreating an existing trigger
Syntax:
[<PSQL_statements>]
END
The RECREATE TRIGGER statement creates a new trigger if no trigger with the specified name exists; otherwise
the RECREATE TRIGGER statement tries to delete the existing trigger and create a new one. The operation will fail on COMMIT if the trigger has dependencies.
Warning
Be aware that dependency errors are not detected until the COMMIT phase of this operation.
PROCEDURE
A stored procedure is a software module that can be called from a client, another procedure, an executable block
or a trigger. Stored procedures, executable blocks and triggers are written in procedural SQL (PSQL). Most SQL
statements are available in PSQL as well, sometimes with limitations or extensions. Among notable exceptions
are DDL and transaction control statements.
CREATE PROCEDURE
Used for: Creating a new stored procedure
Syntax:
BEGIN
[<PSQL_statements>]
END
<type> ::=
<datatype> |
[TYPE OF] domain |
TYPE OF COLUMN rel.col
<datatype> ::=
{SMALLINT | INT[EGER] | BIGINT}
| {FLOAT | DOUBLE PRECISION}
| {DATE | TIME | TIMESTAMP}
| {DECIMAL | NUMERIC} [(precision [, scale])]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[CHARACTER SET charset]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset]
| BLOB [(seglen [, subtype_num])]
<declarations> ::=
{<declare_var> | <declare_cursor>};
[{<declare_var> | <declare_cursor>}; ]
Parameter Description
procname Stored procedure name consisting of up to 31 characters. Must be unique among all table, view and procedure names in the database
inparam Input parameter description
outparam Output parameter description
declarations Section for declaring local variables and named cursors
declare_var Local variable declaration
declare_cursor Named cursor declaration
PSQL_statements Procedural SQL statements
literal A literal value that is assignment-compatible with the data type of the parameter
context_var Any context variable whose type is compatible with the data type of the parameter
paramname The name of an input or output parameter of the procedure. It may consist of up to 31 characters. The name of the parameter must be unique among input and output parameters of the procedure and its local variables
datatype SQL data type
collation Collation sequence
domain Domain name
rel Table or view name
col Table or view column name
precision The total number of significant digits that the parameter should be able to hold (1..18)
scale The number of digits after the decimal point (0..precision)
size The maximum size of a string type parameter or variable, in characters
charset Character set of a string type parameter or variable
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size (max. 65535)
The CREATE PROCEDURE statement creates a new stored procedure. The name of the procedure must be unique
among the names of all stored procedures, tables and views in the database.
CREATE PROCEDURE is a compound statement, consisting of a header and a body. The header specifies the
name of the procedure and declares input parameters and the output parameters, if any, that are to be returned
by the procedure.
The procedure body consists of declarations for any local variables and named cursors that will be used by
the procedure, followed by one or more statements, or blocks of statements, all enclosed in an outer block that
begins with the keyword BEGIN and ends with the keyword END. Declarations and embedded statements are
terminated with semi-colons (;).
Statement Terminators
Some SQL statement editors, specifically the isql utility that comes with Firebird and possibly some third-party editors, employ an internal convention that requires all statements to be terminated with a semi-colon.
This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with
this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the
Terminator in isql.
Parameters
Each parameter has a data type specified for it. The NOT NULL constraint can also be specified for any parameter,
to prevent NULL being passed or assigned to it.
A collation sequence can be specified for string-type parameters, using the COLLATE clause.
Input Parameters:
Input parameters are presented as a parenthesized list following the name of the procedure. They are
passed into the procedure as values, so anything that changes them inside the procedure has no effect
on the parameters in the calling program.
Input parameters may have default values. Those that do have values specified for them must be
located at the end of the list of parameters.
Output Parameters:
The optional RETURNS clause is for specifying a parenthesised list of output parameters for the stored
procedure.
A domain name can be specified as the type of a parameter. The parameter will inherit all domain attributes. If
a default value is specified for the parameter, it overrides the default value specified in the domain definition.
If the TYPE OF clause is added before the domain name, only the data type of the domain is used: any of the other attributes of the domain (NOT NULL constraint, CHECK constraints, default value) are neither checked
nor used. However, if the domain is of a text type, its character set and collation sequence are always used.
Input and output parameters can also be declared using the data type of columns in existing tables and views.
The TYPE OF COLUMN clause is used for that, specifying relationname.columnname as its argument.
When TYPE OF COLUMN is used, the parameter inherits only the data type and, for string types, the character
set and collation sequence. The constraints and default value of the column are ignored.
For input parameters, the collation that comes with the column's type is ignored in comparisons (e.g. equality
tests). For local variables, the behaviour varies.
The header section is followed by the procedure body, consisting of one or more PSQL statements enclosed
between the outer keywords BEGIN and END. Multiple BEGIN ... END blocks of terminated statements may be
embedded inside the procedure body.
Any user connected to the database can create a new stored procedure. The user who creates a stored procedure
becomes its owner.
Examples: Creating a stored procedure that inserts a record into the BREED table and returns the code of the
inserted record:
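A possible sketch of such a procedure (the BREED table structure and column names are assumptions):

CREATE PROCEDURE ADD_BREED (
  NAME VARCHAR(30) NOT NULL,
  SHORTNAME VARCHAR(12)
)
RETURNS (CODE_BREED INT)
AS
BEGIN
  INSERT INTO BREED (NAME, SHORTNAME)
  VALUES (:NAME, :SHORTNAME)
  RETURNING CODE_BREED INTO :CODE_BREED;
END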
Creating a selectable stored procedure that generates data for mailing labels (from employee.fdb):
See also: CREATE OR ALTER PROCEDURE, ALTER PROCEDURE, RECREATE PROCEDURE, DROP PROCEDURE
ALTER PROCEDURE
Used for: Modifying an existing stored procedure
Syntax:
[COLLATE collation]
<type> ::=
<datatype> |
[TYPE OF] domain |
TYPE OF COLUMN rel.col
<datatype> ::=
{SMALLINT | INT[EGER] | BIGINT}
| {FLOAT | DOUBLE PRECISION}
| {DATE | TIME | TIMESTAMP}
| {DECIMAL | NUMERIC} [(precision [, scale])]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[CHARACTER SET charset]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset]
| BLOB [(seglen [, subtype_num])]
Parameter Description
procname Name of an existing stored procedure
inparam Input parameter description
outparam Output parameter description
declarations Section for declaring local variables and named cursors
declare_var Local variable declaration
declare_cursor Named cursor declaration
PSQL_statements Procedural SQL statements
literal A literal value that is assignment-compatible with the data type of the parameter
context_var Any context variable whose type is compatible with the data type of the parameter
paramname The name of an input or output parameter of the procedure. It may consist of up to 31 characters. The name of the parameter must be unique among input and output parameters of the procedure and its local variables
datatype SQL data type
collation Collation sequence
domain Domain name
rel Table or view name
col Table or view column name
precision The total number of significant digits that the parameter should be able to hold (1..18)
scale The number of digits after the decimal point (0..precision)
size The maximum size of a string type parameter or variable, in characters
charset Character set of a string type parameter or variable
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size (max. 65535)
The ALTER PROCEDURE statement allows the following changes to a stored procedure definition:
Caution
Take care about changing the number and type of input and output parameters in stored procedures. Existing
application code and procedures and triggers that call it could become invalid because the new description
of the parameters is incompatible with the old calling format. For information on how to troubleshoot such a
situation, see the article The RDB$VALID_BLR Field in the Appendix.
The procedure owner and Administrators have the authority to use ALTER PROCEDURE.
See also: CREATE PROCEDURE, CREATE OR ALTER PROCEDURE, RECREATE PROCEDURE, DROP PROCEDURE
Syntax:
The CREATE OR ALTER PROCEDURE statement creates a new stored procedure or alters an existing one. If the
stored procedure does not exist, it will be created by invoking a CREATE PROCEDURE statement transparently.
If the procedure already exists, it will be altered and compiled without affecting its existing privileges and
dependencies.
DROP PROCEDURE
Used for: Deleting a stored procedure
Syntax:
Parameter Description
procname Name of an existing stored procedure
The DROP PROCEDURE statement deletes an existing stored procedure. If the stored procedure has any depen-
dencies, the attempt to delete it will fail and the appropriate error will be raised.
The procedure owner and Administrators have the authority to use DROP PROCEDURE.
RECREATE PROCEDURE
Used for: Creating a new stored procedure or recreating an existing one
Syntax:
The RECREATE PROCEDURE statement creates a new stored procedure or recreates an existing one. If there is
a procedure with this name already, the engine will try to delete it and create a new one. Recreating an existing
procedure will fail at the COMMIT request if the procedure has dependencies.
Warning
Be aware that dependency errors are not detected until the COMMIT phase of this operation.
After a procedure is successfully recreated, privileges to execute the stored procedure and the privileges of the
stored procedure itself are dropped.
Example: Creating the new GET_EMP_PROJ stored procedure or recreating the existing GET_EMP_PROJ
stored procedure.
EXTERNAL FUNCTION
REVIEW STATUS
All sections from this point forward to the end of the chapter are awaiting technical and editorial review.
External functions, also known as user-defined functions (UDFs), are programs written in an external programming language and stored in dynamically loaded libraries. Once declared to a database, they become available in dynamic and procedural statements as though they were implemented in the SQL language internally.
External functions extend the possibilities for processing data with SQL considerably. To make a function available to a database, it is declared using the statement DECLARE EXTERNAL FUNCTION.
The library containing a function is loaded when any function included in it is called.
Note
External functions may be contained in more than one library (or module, as it is referred to in the syntax).
Syntax:
<arg_type_decl> ::=
sqltype [{BY DESCRIPTOR} | NULL] |
CSTRING(length) [NULL]
Parameter Description
funcname Function name in the database. It may consist of up to 31 characters. It should be unique among all internal and external function names in the database and need not be the same name as the name exported from the UDF library via ENTRY_POINT.
The DECLARE EXTERNAL FUNCTION statement makes a user-defined function available in the database. UDF
declarations must be made in each database that is going to use them. There is no need to declare UDFs that
will never be used.
The name of the external function must be unique among all function names. It may be different from the
exported name of the function, as specified in the ENTRY_POINT argument.
The input parameters of the function follow the name of the function and are separated with commas. Each
parameter has an SQL data type specified for it. Arrays cannot be used as function parameters. As well as the
SQL types, the CSTRING type is available for specifying a null-terminated string with a maximum length of
LENGTH bytes.
By default, input parameters are passed by reference. The BY DESCRIPTOR clause may be specified instead,
if the input parameter is passed by descriptor. Passing a parameter by descriptor makes it possible to process
NULLs.
RETURNS clause: (Required) specifies the output parameter returned by the function. A function is scalar:
it returns one and only one parameter. The output parameter can be of any SQL type (except an array or an
array element) or a null-terminated string (CSTRING). The output parameter can be passed by reference (the
default), by descriptor or by value. If the BY DESCRIPTOR clause is specified, the output parameter is passed
by descriptor. If the BY VALUE clause is specified, the output parameter is passed by value.
PARAMETER keyword: specifies that the function returns the value from the parameter under number
param_num. It is necessary if you need to return a value of data type BLOB.
FREE_IT keyword: means that the memory allocated for storing the return value will be freed after the function
is executed. It is used only if the memory was allocated dynamically in the UDF. In such a UDF, the memory
must be allocated with the help of the ib_util_malloc function from the ib_util module, a requirement
for compatibility with the functions used in Firebird code and in the code of the shipped UDF modules, for
allocating and freeing memory.
ENTRY_POINT clause: specifies the name of the entry point (the name of the imported function), as exported
from the module.
MODULE_NAME clause: defines the name of the module where the exported function is located. The link to
the module should not be the full path and extension of the file, if that can be avoided. If the module is located in
the default location (in the ../UDF subdirectory of the Firebird server root) or in a location explicitly configured
in firebird.conf, it makes it easier to move the database between different platforms. The UDFAccess
parameter in the firebird.conf file allows access restrictions to external functions modules to be configured.
Any user connected to the database can declare an external function (UDF).
1. Declaring the addDate external function located in the fbudf module. The input and output parameters are
passed by reference.
2. Declaring the invl external function located in the fbudf module. The input and output parameters are passed
by descriptor.
3. Declaring the isLeapYear external function located in the fbudf module. The input parameter is passed by
reference, while the output parameter is passed by value.
4. Declaring the i64Truncate external function located in the fbudf module. The input and output parameters
are passed by descriptor. The second parameter of the function is used as the return value.
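As a rough illustration of the declaration syntax only (the function, entry point and module below are hypothetical and are not part of the shipped fbudf library):

DECLARE EXTERNAL FUNCTION day_diff
  TIMESTAMP, TIMESTAMP
RETURNS INTEGER BY VALUE
ENTRY_POINT 'day_diff' MODULE_NAME 'mylib';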
Syntax:
Parameter Description
funcname Function name in the database
new_entry_point The new exported name of the function
new_library_name The new name of the module (MODULE_NAME) from which the function is exported. This will be the name of the file, without the .dll or .so file extension.
The ALTER EXTERNAL FUNCTION statement changes the entry point and/or the module name for a user-defined
function (UDF). Existing dependencies remain intact after the statement containing the change[s] is executed.
The ENTRY_POINT clause is for specifying the new entry point (the name of the function as exported from
the module).
The MODULE_NAME clause is for specifying the new name of the module where the exported function is
located.
Any user connected to the database can change the entry point and the module name.
Syntax:
Parameter Description
funcname Function name in the database
The DROP EXTERNAL FUNCTION statement deletes the declaration of a user-defined function from the database.
If there are any dependencies on the external function, the statement will fail and the appropriate error will be
raised.
Any user connected to the database can delete the declaration of an external function.
Example using DROP EXTERNAL FUNCTION: Deleting the declaration of the addDay function.
FILTER
A BLOB filter is a database object that is actually a special type of external function, with the sole purpose of taking a BLOB object in one format and converting it to a BLOB object in another format. The formats of the BLOB objects are specified with user-defined BLOB subtypes.
External functions for converting BLOB types are stored in dynamic libraries and loaded when necessary.
DECLARE FILTER
Used for: Declaring a BLOB filter to the database
Syntax:
Parameter Description
filtername Filter name in the database. It may consist of up to 31 characters. It need not be the same name as the name exported from the filter library via ENTRY_POINT.
sub_type BLOB subtype
The DECLARE FILTER statement makes a BLOB filter available to the database. The name of the BLOB filter
must be unique among the names of BLOB filters.
Note
Mnemonic names can be defined for custom BLOB subtypes and inserted manually into the RDB$TYPES system table:
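For instance (the subtype number and mnemonic name here are invented for illustration):

INSERT INTO RDB$TYPES (RDB$FIELD_NAME, RDB$TYPE, RDB$TYPE_NAME)
VALUES ('RDB$FIELD_SUB_TYPE', -33, 'MIDI');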
After the transaction is confirmed, the mnemonic names can be used in declarations when you create new filters.
Warning: From Firebird 3 onward, the system tables will no longer be writable by users.
Parameters
ENTRY_POINT: clause defining the name of the entry point (the name of the imported function) in the module.
MODULE_NAME: The clause defining the name of the module where the exported function is located. By
default, modules must be located in the UDF folder of the root directory on the server. The UDFAccess parameter
in firebird.conf allows editing of access restrictions to filter libraries.
Examples:
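A sketch of a filter declaration (the subtype numbers, entry point and module name are illustrative):

DECLARE FILTER DESC_FILTER
  INPUT_TYPE 1
  OUTPUT_TYPE -4
  ENTRY_POINT 'desc_filter'
  MODULE_NAME 'FILTERLIB';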
DROP FILTER
Used for: Removing a BLOB filter declaration from the database
Syntax:
Parameter Description
filtername Filter name in the database
The DROP FILTER statement removes the declaration of a BLOB filter from the database. Removing a BLOB
filter from a database makes it unavailable for use from that database. The dynamic library where the conversion
function is located remains intact and the removal from one database does not affect other databases in which
the same BLOB filter is still declared.
SEQUENCE (GENERATOR)
A sequence or a generator is a database object used to get unique number values to fill a series. Sequence
is the SQL-compliant term for what Firebird has traditionally called a generator. Both terms are implemented in Firebird, which recognises and provides syntax for both.
Sequences (or generators) are always stored as 64-bit integers, regardless of the SQL dialect of the database.
Caution
If a client is connected using Dialect 1, the server sends sequence values to it as 32-bit integers. Passing a
sequence value to a 32-bit field or variable will not cause errors as long as the current value of the sequence
does not exceed the limits of a 32-bit number. However, as soon as the sequence value exceeds this limit, a
database in Dialect 3 will produce an error. A database in Dialect 1 will keep cutting the values, which will
compromise the uniqueness of the series.
CREATE SEQUENCE
Used for: Creating a new SEQUENCE (GENERATOR)
Syntax:
Parameter Description
seq_name Sequence (generator) name. It may consist of up to 31 characters
The statements CREATE SEQUENCE and CREATE GENERATOR are synonymous: both create a new sequence.
Either can be used but CREATE SEQUENCE is recommended if standards-compliant metadata management is
important.
When a sequence is created, its value is set to 0. Each time the NEXT VALUE FOR seq_name operator is used
with that sequence, its value increases by 1. The GEN_ID(seq_name, <step>) function can be called instead, to
step the series by a different integer number.
Examples:
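Sketches of both forms (the sequence name is illustrative):

CREATE SEQUENCE EMP_NO_GEN;

CREATE GENERATOR EMP_NO_GEN;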
See also: ALTER SEQUENCE, SET GENERATOR, DROP SEQUENCE (GENERATOR), NEXT VALUE FOR,
GEN_ID() function
ALTER SEQUENCE
Used for: Setting the value of a sequence or generator to a specified value
Syntax:
Parameter Description
seq_name Sequence (generator) name
new_val New sequence (generator) value. A 64-bit integer from -2^63 to 2^63 - 1.
The ALTER SEQUENCE statement sets the current value of a sequence or generator to the specified value.
Warning
Incorrect use of the ALTER SEQUENCE statement (changing the current value of the sequence or generator) is
likely to break the logical integrity of data.
Any user connected to the database can set the sequence (generator) value.
Examples:
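A sketch (the sequence name and value are illustrative):

ALTER SEQUENCE EMP_NO_GEN RESTART WITH 145;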
See also: SET GENERATOR, CREATE SEQUENCE (GENERATOR), DROP SEQUENCE (GENERATOR), NEXT
VALUE FOR, GEN_ID() function
SET GENERATOR
Used for: Setting the value of a sequence or generator to a specified value
Syntax:
Parameter Description
seq_name Generator (sequence) name
new_val New sequence (generator) value. A 64-bit integer from -2^63 to 2^63 - 1.
The SET GENERATOR statement sets the current value of a sequence or generator to the specified value.
Note
Although SET GENERATOR is considered outdated, it is retained for backward compatibility. Using the standards-compliant ALTER SEQUENCE is current and is recommended.
Any user connected to the database can set the sequence (generator) value.
Examples:
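A sketch (the generator name and value are illustrative):

SET GENERATOR EMP_NO_GEN TO 145;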
DROP SEQUENCE
Used for: Deleting SEQUENCE (GENERATOR)
Syntax:
Parameter Description
seq_name Sequence (generator) name. It may consist of up to 31 characters
The statements DROP SEQUENCE and DROP GENERATOR are equivalent: both delete an existing sequence (generator). Either is valid but DROP SEQUENCE, being current, is recommended.
EXCEPTION
This section describes how to create, modify and delete custom exceptions for use in error handlers in PSQL
modules.
CREATE EXCEPTION
Used for: Creating a new exception for use in PSQL modules
Syntax:
Parameter Description
exception_name Exception name. The maximum length is 31 characters
message Default error message. The maximum length is 1,021 characters
The statement CREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception of the
same name exists, the statement will fail with an appropriate error message.
The exception name is a standard identifier. In a Dialect 3 database, it can be enclosed in double quotes to make
it case-sensitive and, if required, to use characters that are not valid in regular identifiers. See Identifiers for
more information.
The default message is stored in character set NONE, i.e., in characters of any single-byte character set. The text
can be overridden in the PSQL code when the exception is thrown.
Examples:
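A sketch (the exception name and message are illustrative):

CREATE EXCEPTION E_LARGE_VALUE
  'The value is out of range';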
Tips
Grouping CREATE EXCEPTION statements together in system update scripts will simplify working with them
and documenting them. A system of prefixes for naming and categorising groups of exceptions is recommended.
See also: ALTER EXCEPTION, CREATE OR ALTER EXCEPTION, DROP EXCEPTION, RECREATE EXCEPTION
ALTER EXCEPTION
Used for: Modifying the message returned from a custom exception
Syntax:
Parameter Description
exception_name Exception name
message New default error message. The maximum length is 1,021 characters
The statement ALTER EXCEPTION can be used at any time, to modify the default text of the message. Any user
connected to the database can alter an exception message.
Examples:
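A sketch (the exception name and new message are illustrative):

ALTER EXCEPTION E_LARGE_VALUE
  'The value exceeds the prescribed limit';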
See also: CREATE EXCEPTION, CREATE OR ALTER EXCEPTION, DROP EXCEPTION, RECREATE EXCEPTION
Syntax:
Parameter Description
exception_name Exception name
message Error message. The maximum length is limited to 1,021 characters
The statement CREATE OR ALTER EXCEPTION is used to create the specified exception if it does not exist, or
to modify the text of the error message returned from it if it exists already. If an existing exception is altered by
this statement, any existing dependencies will remain intact.
Any user connected to the database can use this statement to create an exception or alter the text of one that
already exists.
DROP EXCEPTION
Used for: Deleting a custom exception
Syntax:
Parameter Description
exception_name Exception name
The statement DROP EXCEPTION is used to delete an exception. Any dependencies on the exception will cause
the statement to fail and the exception will not be deleted.
If an exception is used only in stored procedures, it can be deleted at any time. If it is used in a trigger, it cannot
be deleted.
In planning to delete an exception, all references to it should first be removed from the code of stored procedures,
to avoid its absence causing errors.
Examples:
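A sketch (the exception name is illustrative):

DROP EXCEPTION E_LARGE_VALUE;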
RECREATE EXCEPTION
Used for: Creating a new custom exception or recreating an existing one
Syntax:
Parameter Description
exception_name Exception name. The maximum length is 31 characters
message Error message. The maximum length is limited to 1,021 characters
The statement RECREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception of the
same name exists already, the RECREATE EXCEPTION statement will try to delete it and create a new one. If
there are any dependencies on the existing exception, the attempted deletion fails and RECREATE EXCEPTION
is not executed.
COLLATION
CREATE COLLATION
Used for: Making a new collation for a supported character set available to the database
Syntax:
Parameter Description
collname The name to use for the new collation. The maximum length is 31 characters
charset A character set present in the database
basecoll A collation already present in the database
extname The collation name used in the .conf file
The CREATE COLLATION statement does not create anything: its purpose is to make a collation known to a
database. The collation must already be present on the system, typically in a library file, and must be properly
registered in a .conf file in the intl subdirectory of the Firebird installation.
The collation may alternatively be based on one that is already present in the database.
The single-quoted 'extname' is case-sensitive and must correspond exactly with the collation name in the
.conf file. The collname, charset and basecoll parameters are case-insensitive unless enclosed in
double-quotes.
Specific Attributes
The available specific attributes are listed in the table below. Not all specific attributes apply to every collation,
even if specifying them does not cause an error.
Important
In the table, 1 bpc indicates that an attribute is valid for collations of character sets using 1 byte per character
(so-called narrow character sets). UNI stands for UNICODE collations.
Tip
If you want to add a new character set with its default collation into your database, declare and run the stored
procedure sp_register_character_set(name, max_bytes_per_character), found in misc/intl.sql under the Firebird installation directory.
Note: in order for this to work, the character set must be present on the system and registered in a .conf file
in the intl subdirectory.
Any user connected to the database can use CREATE COLLATION to add a new collation.
1. Creating a collation using the name found in the fbintl.conf file (case-sensitive).
2. Creating a collation using a special (user-defined) name (the external name must completely match the
name in the fbintl.conf file).
4. Creating a case-insensitive collation based on one already existing in the database with specific attributes.
5. Creating a case-insensitive collation by the value of numbers (the so-called natural collation).
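Sketches of the two basic forms described in items 1 and 2 (the collation and character set names are illustrative):

CREATE COLLATION ISO8859_1_UNICODE FOR ISO8859_1;

CREATE COLLATION LAT_UNI
  FOR ISO8859_1
  FROM EXTERNAL ('ISO8859_1_UNICODE');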
DROP COLLATION
Used for: Removing a collation from the database
Syntax:
Parameter Description
collname The name of the collation
The DROP COLLATION statement removes the specified collation from the database, if it exists. An error will
be raised if the specified collation is not present.
Tip
If you want to remove an entire character set, with all its collations, from the database, declare and execute the stored procedure sp_unregister_character_set(name), found in misc/intl.sql under the Firebird installation directory.
Any user connected to the database can use DROP COLLATION to remove a collation.
CHARACTER SET
Syntax:
Parameter Description
charset Character set identifier
collation The name of the collation
The ALTER CHARACTER SET statement changes the default collation for the specified character set. It will affect the future usage of the character set, except for cases where the COLLATE clause is explicitly overridden. The collation sequence of existing domains, columns and PSQL variables will remain intact after the change to the default collation of the underlying character set.
NOTES
If you change the default collation for the database character set (the one defined when the database was cre-
ated), it will change the default collation for the database.
If you change the default collation for the character set that was specified during the connection, string constants
will be interpreted according to the new collation value, except in those cases where the character set and/or
the collation have been overridden.
Example of use: Setting the default UNICODE_CI_AI collation for the UTF8 encoding.
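A sketch of the statement described:

ALTER CHARACTER SET UTF8 SET DEFAULT COLLATION UNICODE_CI_AI;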
ROLE
A role is a database object that packages a set of SQL privileges. Roles implement the concept of access control
at a group level. Multiple privileges are granted to the role and then that role can be granted to or revoked from
one or many users.
A user that is granted a role must supply that role in his login credentials in order to exercise the associated
privileges. Any other privileges granted to the user are not affected by his login with the role. Logging in with
multiple roles simultaneously is not supported.
In this section the tasks of creating and dropping roles are discussed.
CREATE ROLE
Used for: Creating a new ROLE object
Syntax:
Parameter Description
rolename Role name. The maximum length is 31 characters
The statement CREATE ROLE creates a new role object, to which one or more privileges can be granted subse-
quently. The name of a role must be unique among the names of roles in the current database.
Warning
It is advisable to make the name of a role unique among user names as well. The system will not prevent the
creation of a role whose name clashes with an existing user name but, if it happens, the user will be unable
to connect to the database.
Any user connected to the database can create a role. The user that creates a role becomes its owner.
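For instance (the role name is illustrative):

CREATE ROLE SELLERS;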
ALTER ROLE
ALTER ROLE has no place in the create-alter-drop paradigm for database objects since a role has no attributes
that can be modified. Its actual effect is to alter an attribute of the database: Firebird uses it to enable and disable
the capability for Windows Administrators to assume administrator privileges automatically when logging in.
This procedure can affect only one role: the system-generated role RDB$ADMIN that exists in every database of
ODS 11.2 or higher. Several factors are involved in enabling this feature.
DROP ROLE
Used for: Deleting a role
Syntax:
The statement DROP ROLE deletes an existing role. It takes just a single argument, the name of the role. Once
the role is deleted, the entire set of privileges is revoked from all users and objects that were granted the role.
COMMENTS
Database objects and a database itself may contain comments. It is a convenient mechanism for documenting
the development and maintenance of a database. Comments created with COMMENT ON will survive a gbak
backup and restore.
COMMENT ON
Used for: Documenting metadata
Syntax:
<object> ::=
DATABASE
| <basic-type> objectname
| COLUMN relationname.fieldname
| PARAMETER procname.paramname
<basic-type> ::=
CHARACTER SET |
COLLATION |
DOMAIN |
EXCEPTION |
EXTERNAL FUNCTION |
FILTER |
GENERATOR |
INDEX |
PROCEDURE |
ROLE |
SEQUENCE |
TABLE |
TRIGGER |
VIEW
Parameter Description
sometext Comment text
basic-type Metadata object type
objectname Metadata object name
relationname Name of table or view
procname Name of stored procedure
paramname Name of a stored procedure parameter
The COMMENT ON statement adds comments for database objects (metadata). Comments are saved to text fields
of the BLOB type in the RDB$DESCRIPTION column of the corresponding system tables. Client applications
can view comments from these fields.
Note
If you add an empty comment (''), it will be saved as NULL in the database.
The table or procedure owner and Administrators have the authority to use COMMENT ON.
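A couple of sketches (the object names and comment texts are illustrative):

COMMENT ON DATABASE IS 'Test database';

COMMENT ON TABLE METALS IS 'Metal directory';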
Chapter 6
Data Manipulation
(DML) Statements
REVIEW STATUS
All sections from this point forward to the end of the chapter are awaiting technical and editorial review.
DML (data manipulation language) is the subset of SQL that is used by applications and procedural modules
to extract and change data. Extraction, for the purpose of reading data, both raw and manipulated, is achieved
with the SELECT statement. INSERT is for adding new data and DELETE is for erasing data that are no longer
required. UPDATE, MERGE and INSERT OR UPDATE all modify data in various ways.
SELECT
Used for: Retrieving data
Global syntax:
SELECT
[WITH [RECURSIVE] <cte> [, <cte> ...]]
SELECT
[FIRST m] [SKIP n]
[DISTINCT | ALL] <columns>
FROM
source [[AS] alias]
[<joins>]
[WHERE <condition>]
[GROUP BY <grouping-list>
[HAVING <aggregate-condition>]]
[PLAN <plan-expr>]
[UNION [DISTINCT | ALL] <other-select>]
[ORDER BY <ordering-list>]
[ROWS m [TO n]]
[FOR UPDATE [OF <columns>]]
[WITH LOCK]
[INTO <variables>]
Description
The SELECT statement retrieves data from the database and hands them to the application or the enclosing SQL
statement. Data are returned in zero or more rows, each containing one or more columns or fields. The total of
rows returned is the result set of the statement.
At a minimum, a SELECT statement has two parts: the SELECT keyword, followed by a columns list, which specifies what you want to retrieve; and the FROM keyword, followed by a selectable object, which tells the engine where you want to get it from.
In its most basic form, SELECT retrieves a number of columns from a single table or view, like this:
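A sketch (the table and column names are illustrative):

SELECT ID, NAME, ADDRESS
FROM CONTACTS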
In practice, the rows retrieved are often limited by a WHERE clause. The result set may be sorted by an ORDER
BY clause, and FIRST, SKIP or ROWS may further limit the number of output rows. The column list may contain
all kinds of expressions instead of just column names, and the source need not be a table or view: it may also be
a derived table, a common table expression (CTE) or a selectable stored procedure (SP). Multiple sources may
be combined in a JOIN, and multiple result sets may be combined in a UNION.
The following sections discuss the available SELECT subclauses and their usage in detail.
FIRST, SKIP
Used for: Retrieving a slice of rows from an ordered set
Syntax:
SELECT
[FIRST <m>] [SKIP <n>]
FROM ...
...
Argument Description
integer literal Integer literal
query parameter Query parameter place-holder. ? in DSQL and :paramname in PSQL
integer-expression Expression returning an integer value
FIRST and SKIP are Firebird-specific, non-SQL-compliant keywords. You are advised to use the ROWS syntax
wherever possible.
Description
FIRST limits the output of a query to the first m rows. SKIP will suppress the given n rows before starting to
return output.
FIRST and SKIP are both optional. When used together as in FIRST m SKIP n, the n topmost rows of the output
set are discarded and the first m rows of the rest of the set are returned.
If a SKIP lands past the end of the dataset, an empty set is returned.
If the number of rows in the dataset (or the remainder left after a SKIP) is less than the value of the m argument
supplied for FIRST, that smaller number of rows is returned. These are valid results, not error conditions.
Caution
A DELETE statement that selects its target rows with a FIRST-limited subquery on the same table, for example WHERE ID IN (SELECT FIRST 10 ID FROM T), will delete ALL records from the table. The subquery retrieves 10 rows each time, deletes them, and the operation is repeated until the table is empty. Keep this in mind! Or, better, use the ROWS clause in the DELETE statement.
Examples
The following query will return the first 10 names from the People table:
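A sketch, assuming id and name columns:

select first 10 id, name
from People
order by name asc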
The following query will return everything but the first 10 names:
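With the same assumptions:

select skip 10 id, name
from People
order by name asc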
And this one returns the last 10 rows. Notice the double parentheses:
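With the same assumptions; the SKIP argument is an expression, so it needs its own parentheses around the subquery:

select skip ((select count(*) - 10 from People))
  id, name
from People
order by name asc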
Syntax:
SELECT
[...]
[DISTINCT | ALL] <output-column> [, <output-column> ...]
[...]
FROM ...
Argument Description
qualifier Name of relation (view, stored procedure, derived table); or an alias for it
collation Only for character-type columns: a collation name that exists and is valid for the character set of the data
alias Column or field alias
table-column Name of a table column
view-column Name of a view column
selectable-SP-outparm Declared name of an output parameter of a selectable stored procedure
constant A constant
context-variable Context variable
function-call Scalar or aggregate function call expression
single-value-subselect A subquery returning one scalar value (singleton)
CASE-construct CASE construct setting conditions for a return value
other-single-value-expr Any other expression returning a single value of a Firebird data type; or NULL
Description
It is always valid to qualify a column name (or *) with the name or alias of the table, view or se-
lectable SP to which it belongs, followed by a dot. e.g., relationname.columnname, relationname.*,
alias.columnname, alias.*. Qualifying is required if the column name occurs in more than one relation
taking part in a join. Qualifying * is always mandatory if it is not the only item in the column list.
Important
Aliases obfuscate the original relation name: once a table, view or procedure has been aliased, only the alias
can be used as its qualifier throughout the query. The relation name itself becomes unavailable.
The column list may optionally be preceded by one of the keywords DISTINCT or ALL:
- DISTINCT filters out any duplicate rows. That is, if two or more rows have the same values in every corresponding column, only one of them is included in the result set
- ALL is the default: it returns all of the rows, including duplicates. ALL is rarely used; it is supported for compliance with the SQL standard.
A COLLATE clause will not change the appearance of the column as such. However, if the specified collation
changes the case or accent sensitivity of the column, it may influence:
- The ordering, if an ORDER BY clause is also present and it involves that column
- Grouping, if the column is part of a GROUP BY clause
- The rows retrieved (and hence the total number of rows in the result set), if DISTINCT is used
A query featuring a concatenation expression and a function call in the columns list:
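A sketch (the contacts table, its columns and the date_last_purchase function are assumptions):

select 'Mr./Mrs. ' || lastname, street, zip, upper(town)
from contacts
where date_last_purchase(id) = current_date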
select p.fullname,
(select name from classes c where c.id = p.class) as class,
(select name from mentors m where m.id = p.mentor) as mentor
from pupils p
The following query accomplishes the same as the previous one using joins instead of subselects:
select p.fullname,
c.name as class,
m.name as mentor
from pupils p
join classes c on c.id = p.class
join mentors m on m.id = p.mentor
This query uses a CASE construct to determine the correct title, e.g. when sending mail to a person:
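A sketch (the people table and its gender column are assumptions):

select p.fullname,
  case p.gender
    when 'F' then 'Mrs.'
    when 'M' then 'Mr.'
    else ''
  end as title
from people p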
Selecting from columns of a derived table. A derived table is a parenthesized SELECT statement whose result
set is used in an enclosing query as if it were a regular table or view. The derived table is shown in bold here:
select fieldcount,
count(relation) as num_tables
from (select r.rdb$relation_name as relation,
count(*) as fieldcount
from rdb$relations r
join rdb$relation_fields rf
on rf.rdb$relation_name = r.rdb$relation_name
group by relation)
group by fieldcount
For those not familiar with RDB$DATABASE: this is a system table that is present in all Firebird databases and
is guaranteed to contain exactly one row. Although it wasn't created for this purpose, it has become standard
practice among Firebird programmers to select from this table if you want to select from nothing, i.e., if you
need data that are not bound to any table or view, but can be derived from the expressions in the output columns
alone. Another example is:
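For instance, purely illustrative:

select current_date as today, current_time as now
from rdb$database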
Finally, an example where you select some meaningful information from RDB$DATABASE itself:
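Presumably along these lines:

select rdb$character_set_name
from rdb$database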
As you may have guessed, this will give you the default character set of the database.
See also: Scalar Functions, Aggregate Functions, Context Variables, CASE, Subqueries
This section concentrates on single-source selects. Joins are discussed in a following section.
Syntax:
SELECT
...
FROM <source>
[<joins>]
[...]
<common-table-expression>
::= WITH [RECURSIVE] <cte-def> [, <cte-def> ...]
select-statement
Argument Description
table Name of a table
view Name of a view
selectable-stored-procedure Name of a selectable stored procedure
args Selectable stored procedure arguments
derived table Derived table query expression
cte-def Common table expression (CTE) definition, including an ad hoc name
select-statement Any SELECT statement
column-aliases Alias for a column in a relation, CTE or derived table
name The ad hoc name for a CTE
alias The alias of a data source (table, view, procedure, CTE, derived table)
When selecting from a single table or view, the FROM clause need not contain anything more than the name.
An alias may be useful or even necessary if there are subqueries that refer to the main select statement (as they
often do; subqueries like this are called correlated subqueries).
Examples
select firstname,
middlename,
lastname,
date_of_birth,
(select name from schools s where p.school = s.id) schoolname
from pupils p
where year_started = '2012'
order by schoolname, date_of_birth
If you specify an alias for a table or a view, you must always use this alias in place of the table name whenever
you query the columns of the relation (and wherever else you make a reference to columns, such as ORDER
BY, GROUP BY and WHERE clauses).
Correct use:
SELECT PEARS
FROM FRUIT
SELECT FRUIT.PEARS
FROM FRUIT
SELECT PEARS
FROM FRUIT F
SELECT F.PEARS
FROM FRUIT F
Incorrect use:
SELECT FRUIT.PEARS
FROM FRUIT F
The output parameters of a selectable stored procedure correspond to the columns of a regular table.
Selecting from a stored procedure without input parameters is just like selecting from a table or view:
Any required input parameters must be specified after the procedure name, enclosed in parentheses:
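For instance, calling the visible_stars procedure discussed below with its three required arguments (the argument values are illustrative):

select name, az, alt
from visible_stars('Brugge', current_date, '22:30')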
Values for optional parameters (that is, parameters for which default values have been defined) may be omitted
or provided. However, if you provide them only partly, the parameters you omit must all be at the tail end.
Supposing that the procedure visible_stars from the previous example has two optional parameters:
min_magn (numeric(3,1)) and spectral_class (varchar(12)), the following queries are all valid:
select name, az, alt from visible_stars('Brugge', current_date, '22:30', 4.0, 'G')
But this one isn't, because there's a hole in the parameter list:
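For instance, a call that supplies spectral_class but omits min_magn:

select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 'G')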
An alias for a selectable stored procedure is specified after the parameter list:
select number,
(select name from contestants c where c.number = gw.number)
from get_winners('#34517', 'AMS') gw
If you refer to an output parameter (column) by qualifying it with the full procedure name, the parameter list
should be omitted:
select number,
(select name from contestants c where c.number = get_winners.number)
from get_winners('#34517', 'AMS')
Syntax:
(select-query)
[[AS] derived-table-alias]
[(<derived-column-aliases>)]
The data set returned by this SELECT FROM (SELECT FROM ...) style of statement is a virtual table that can
be queried within the enclosing statement, as if it were a regular table or view.
The derived table in the query below returns the list of table names in the database and the number of columns
in each. A drill-down query on the derived table returns the counts of fields and the counts of tables having
each field count:
SELECT
FIELDCOUNT,
COUNT(RELATION) AS NUM_TABLES
FROM (SELECT
R.RDB$RELATION_NAME RELATION,
COUNT(*) AS FIELDCOUNT
FROM RDB$RELATIONS R
JOIN RDB$RELATION_FIELDS RF
ON RF.RDB$RELATION_NAME = R.RDB$RELATION_NAME
GROUP BY RELATION)
GROUP BY FIELDCOUNT
A trivial example demonstrating how the alias of a derived table and the list of column aliases (both optional)
can be used:
SELECT
DBINFO.DESCR, DBINFO.DEF_CHARSET
FROM (SELECT *
FROM RDB$DATABASE) DBINFO
(DESCR, REL_ID, SEC_CLASS, DEF_CHARSET)
A derived table may:
- be nested
- have WHERE, ORDER BY and GROUP BY clauses, FIRST, SKIP or ROWS directives, et al.
Furthermore:
- Each column in a derived table must have a name. If it does not have a name, such as when it is a constant or a run-time expression, it should be given an alias, either in the regular way or by including it in the list of column aliases in the derived table's specification.
- The list of column aliases is optional but, if it exists, it must contain an alias for every column in the derived table.
The optimizer can process derived tables very effectively. However, if a derived table is included in an inner
join and contains a subquery, the optimizer will be unable to use any join order.
Suppose we have a table COEFFS which contains the coefficients of a number of quadratic equations we have
to solve. It has been defined like this:
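Presumably something along these lines (illustrative only):

create table coeffs (
  a double precision not null,
  b double precision not null,
  c double precision not null
);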
Depending on the values of a, b and c, each equation may have zero, one or two solutions. It is possible to
find these solutions with a single-level query on table COEFFS, but the code will look rather messy and several
values (like the discriminant) will have to be calculated multiple times per row. A derived table can help keep
things clean here:
select
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
(select b, b*b - 4*a*c, 2*a from coeffs) (b, D, denom)
If we want to show the coefficients next to the solutions (which may not be a bad idea), we can alter the query
like this:
select
a, b, c,
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
(select a, b, c, b*b - 4*a*c as D, 2*a as denom
from coeffs)
Notice that whereas the first query used a column aliases list for the derived table, the second adds aliases
internally where needed. Both methods work, as long as every column is guaranteed to have a name.
For a full discussion of CTEs, please refer to the section Common Table Expressions (WITH ... AS ... SELECT).
Except for the fact that the calculations that have to be made first are now at the beginning, this isn't a great
improvement over the derived table version. But we can now also eliminate the double calculation of sqrt(D)
for every row:
with vars (b, D, denom) as (
  select b, b*b - 4*a*c, 2*a
  from coeffs
),
vars2 (b, D, denom, sqrtD) as (
  select b, D, denom, iif(D >= 0, sqrt(D), null)
  from vars
)
select
  iif (D >= 0, (-b - sqrtD) / denom, null) sol_1,
  iif (D > 0, (-b + sqrtD) / denom, null) sol_2
from vars2
The code is a little more complicated now, but it might execute more efficiently (depending on what takes more
time: executing the SQRT function or passing the values of b, D and denom through an extra CTE). Incidentally,
we could have done the same with derived tables, but that would involve nesting.
Joins
Joins combine data from two sources into a single set. This is done on a row-by-row basis and usually involves
checking a join condition in order to determine which rows should be merged and appear in the resulting dataset.
There are several types (INNER, OUTER) and classes (qualified, natural, etc.) of joins, each with its own syntax
and rules.
Since joins can be chained, the datasets involved in a join may themselves be joined sets.
Syntax:
SELECT
...
FROM <source>
[<joins>]
[...]
Argument Description
table Name of a table
view Name of a view
selectable-stored-procedure Name of a selectable stored procedure
args Selectable stored procedure input parameter[s]
derived-table Reference, by name, to a derived table
common-ta-
Reference, by name, to a common table expression (CTE)
ble-expression
alias An alias for a data source (table, view, procedure, CTE, derived table)
condition Join condition (criterion)
column-list The list of columns used for an equi-join
Table A:
ID S
87 Just some text
235 Silence
Table B:
CODE X
-23 56.7735
87 416.0
select *
from A
join B on A.id = B.code
ID S CODE X
87 Just some text 87 416.0
The first row of A has been joined with the second row of B because together they met the condition A.id =
B.code. The other rows from the source tables have no match in the opposite set and are therefore not included
in the join. Remember, this is an INNER join. We can make that fact explicit by writing:
select *
from A
inner join B on A.id = B.code
It is perfectly possible that a row in the left set matches several rows from the right set or vice versa. In that case,
all those combinations are included, and we can get results like:
ID S CODE X
87 Just some text 87 416.0
87 Just some text 87 -1.0
-23 Don't know -23 56.7735
-23 Still don't know -23 56.7735
-23 I give up -23 56.7735
Sometimes we want (or need) all the rows of one or both of the sources to appear in the joined set, regardless of
whether they match a record in the other source. This is where outer joins come in. A LEFT outer join includes
all the records from the left set, but only matching records from the right set. In a RIGHT outer join it's the other
way around. FULL outer joins include all the records from both sets. In all outer joins, the holes (the places
where an included source record doesn't have a match in the other set) are filled up with NULLs.
In order to make an outer join, you must specify LEFT, RIGHT or FULL, optionally followed by the keyword
OUTER.
Below are the results of the various outer joins when applied to our original tables A and B:
select *
from A
left [outer] join B on A.id = B.code
ID S CODE X
87 Just some text 87 416.0
235 Silence <null> <null>
select *
from A
right [outer] join B on A.id = B.code
ID S CODE X
<null> <null> -23 56.7735
87 Just some text 87 416.0
select *
from A
full [outer] join B on A.id = B.code
ID S CODE X
<null> <null> -23 56.7735
87 Just some text 87 416.0
235 Silence <null> <null>
Qualified joins
Qualified joins specify conditions for the combining of rows. This happens either explicitly in an ON clause or
implicitly in a USING clause.
Syntax:
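In outline, a qualified join takes one of these forms (a sketch of the general shape; the details of both
variants follow below):

<qualified-join> ::= <source> [<join-type>] JOIN <source>
  {ON <condition> | USING (<column-list>)}

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]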
Explicit-condition joins
Most qualified joins have an ON clause, with an explicit condition that can be any valid boolean expression but
usually involves some comparison between the two sources involved.
Quite often, the condition is an equality test (or a number of ANDed equality tests) using the = operator. Joins
like these are called equi-joins. (The examples in the section on inner and outer joins were all equi-joins.)
/* For each man, select the women who are taller than he.
Men for whom no such woman exists are not included. */
select m.fullname as man, f.fullname as woman
from males m
join females f on f.height > m.height
Equi-joins often compare columns that have the same name in both tables. If this is the case, we can also use
the second type of qualified join: the named columns join.
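For illustration, take the tables FLOTSAM and JETSAM mentioned in the Note below, each having columns SEA
and SHIP (any other columns are immaterial here). An explicit-condition join on those columns:

select *
from flotsam f
join jetsam j
  on f.sea = j.sea and f.ship = j.ship

can also be written as a named columns join:

select *
from flotsam
join jetsam using (sea, ship)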
Note
The named columns join is considerably shorter. The result set is a little different though, at least when
using SELECT *:
- The explicit-condition join (with the ON clause) will contain each of the columns SEA and SHIP twice: once
from table FLOTSAM, and once from table JETSAM. Obviously, they will have the same values.
- The named columns join (with the USING clause) will contain these columns only once.
If you want all the columns in the result set of the named columns join, set up your query like this:
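For instance, with the same tables as above:

select f.*, j.*
from flotsam f
join jetsam j using (sea, ship)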
This will give you the exact same result set as the explicit-condition join.
For an OUTER named columns join, there's an additional twist when using SELECT * or an unqualified column
name from the USING list:
If a row from one source set doesn't have a match in the other but must still be included because of the LEFT,
RIGHT or FULL directive, the merged column in the joined set gets the non-NULL value. That is fair enough,
but now you can't tell whether this value came from the left set, the right set, or both. This can be especially
deceiving when the value came from the right hand set, because * always shows combined columns in the
left hand part, even in the case of a RIGHT join.
Whether this is a problem or not depends on the situation. If it is, use the a.*, b.* approach shown above, with
a and b the names or aliases of the two sources. Or better yet, avoid * altogether in your serious queries and
qualify all column names in joined sets. This has the additional benefit that it forces you to think about which
data you want to retrieve and where from.
It is your responsibility to make sure that the column names in the USING list are of compatible types between
the two sources. If the types are compatible but not equal, the engine converts them to the type with the broadest
range of values before comparing the values. This will also be the data type of the merged column that shows
up in the result set if SELECT * or the unqualified column name is used. Qualified columns on the other hand
will always retain their original data type.
Natural joins
Taking the idea of the named columns join a step further, a natural join performs an automatic equi-join on all the
columns that have the same name in the left and right table. The data types of these columns must be compatible.
Note
Syntax:
create table TA (
a bigint,
s varchar(12),
ins_date date
)
create table TB (
a bigint,
descr varchar(12),
x float,
ins_date date
)
A natural join on TA and TB would involve the columns a and ins_date, and the following two statements
would have the same effect:
select * from TA
natural join TB
select * from TA
join TB using (a, ins_date)
Like all joins, natural joins are inner joins by default, but you can turn them into outer joins by specifying LEFT,
RIGHT or FULL before the JOIN keyword.
Caution: if there are no columns with the same name in the two source relations, a CROSS JOIN is performed.
We'll get to this type of join in a minute.
A Note on Equality
Important
This note about equality and inequality operators applies everywhere in Firebird's SQL language, not just in
JOIN conditions.
The = operator, which is explicitly used in many conditional joins and implicitly in named column joins and
natural joins, only matches values to values. According to the SQL standard, NULL is not a value and hence
two NULLs are neither equal nor unequal to one another. If you need NULLs to match each other in a join, use
the IS NOT DISTINCT FROM operator. This operator returns true if the operands have the same value or if they
are both NULL.
select *
from A join B
on A.id is not distinct from B.code
Likewise, in the (extremely rare) cases where you want to join on inequality, use IS DISTINCT FROM, not
<>, if you want NULL to be considered different from any value and two NULLs considered equal:
select *
from A join B
on A.id is distinct from B.code
Cross joins
A cross join produces the full set product of the two data sources. This means that it successfully matches every
row in the left source to every row in the right source.
Syntax:
Please notice that the comma syntax is deprecated! It is only supported to keep legacy code working and may
disappear in some future version.
Cross-joining two sets is equivalent to joining them on a tautology (a condition that is always true). The following
two statements have the same effect:
select * from TA
cross join TB
select * from TA
join TB on 1 = 1
Cross joins are inner joins, because they only include matching records; it just so happens that every record
matches! An outer cross join, if it existed, wouldn't add anything to the result, because what outer joins add are
non-matching records, and these don't exist in cross joins.
Cross joins are seldom useful, except if you want to list all the possible combinations of two or more variables.
Suppose you are selling a product that comes in different sizes, different colors and different materials. If these
variables are each listed in a table of their own, this query would return all the combinations:
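A sketch, assuming three hypothetical single-column tables:

select s.name, c.name, m.name
from sizes s
cross join colours c
cross join materials m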
Ambiguous field names in joins

Firebird rejects unqualified field names in a query if those names exist in more than one dataset involved in
a join. This is even true for inner equi-joins where the field name figures in the ON clause, like this:

select a, b, c
from TA
join TB on TA.a = TB.a

There is one exception to this rule: with named columns joins and natural joins, the unqualified field name of
a column taking part in the matching process may be used legally and refers to the merged column of the same
name. For named columns joins, these are the columns listed in the USING clause. For natural joins, they are
the columns that have the same name in both relations. But please notice again that, especially in outer joins,
plain colname isn't always the same as left.colname or right.colname. Types may differ, and one of the
qualified columns may be NULL while the other isn't. In that case, the value in the merged, unqualified column
may mask the fact that one of the source values is absent.
A special case arises when a stored procedure in a join takes its input arguments from another data source
in the same FROM clause, as in:

SELECT *
FROM MY_TAB
JOIN MY_PROC(MY_TAB.F) ON 1 = 1
Here, the procedure will be executed before a single record has been retrieved from the table, MY_TAB. The
isc_no_cur_rec error (no current record for fetch operation) is raised, interrupting the execution.
The solution is to use syntax that specifies the join order explicitly:
SELECT *
FROM MY_TAB
LEFT JOIN MY_PROC(MY_TAB.F) ON 1 = 1
This forces the table to be read before the procedure and everything works correctly.
Tip
This quirk has been recognised as a bug in the optimizer and will be fixed in the next version of Firebird.
WHERE

The condition in the WHERE clause is often called the search condition, the search expression or simply the
search.
In DSQL and ESQL, the search expression may contain parameters. This is useful if a query has to be repeated
a number of times with different input values. In the SQL string as it is passed to the server, question marks are
used as placeholders for the parameters. They are called positional parameters because they can only be told
apart by their position in the string. Connectivity libraries often support named parameters of the form :id,
:amount, :a etc. These are more user-friendly; the library takes care of translating the named parameters to
positional parameters before passing the statement to the server.
The search condition may also contain local (PSQL) or host (ESQL) variable names, preceded by a colon.
Syntax:
SELECT ...
FROM ...
[...]
WHERE <search-condition>
[...]
Only those rows for which the search condition evaluates to TRUE are included in the result set. Be careful with
possible NULL outcomes: if you negate a NULL expression with NOT, the result will still be NULL and the row
will not pass. This is demonstrated in one of the examples below.
Examples
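A couple of simple illustrations first (the tables and columns are hypothetical):

select genus, species
from mammals
where family = 'Felidae'
order by genus

select *
from persons
where birthyear in (1880, 1881)
   or birthyear between 1891 and 1898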
The following example shows what can happen if the search condition evaluates to NULL.
Suppose you have a table listing some children's names and the number of marbles they possess. At a certain
moment, the table contains these data:
CHILD MARBLES
Anita 23
Bob E. 12
Chris <null>
Deirdre 1
Eve 17
Fritz 0
Gerry 21
Hadassah <null>
Isaac 6
First, please notice the difference between NULL and 0: Fritz is known to have no marbles at all, Chris's and
Hadassah's marble counts are unknown.
If you ask for the children who have more than ten marbles, using a condition like where marbles > 10,
you will get the names Anita, Bob E., Eve and Gerry. These children all have more than 10 marbles. If you
instead ask for the children who do not have more than ten marbles, e.g. with where not (marbles > 10),
it's the turn of Deirdre, Fritz and Isaac to fill the list. Chris and Hadassah are not included, because they
aren't known to have ten marbles or less. Should you change that last query to where marbles <= 10, the
result will still be the same, because the expression NULL <= 10 yields UNKNOWN. This is not the same as
TRUE, so Chris and Hadassah are not listed. If you want them listed with the poor children, change the query
to use where marbles <= 10 or marbles is null. Now the search condition becomes true for Chris and
Hadassah, because marbles is null obviously returns TRUE in their case. In fact, the search condition
cannot be NULL for anybody now.
Lastly, two examples of SELECT queries with parameters in the search. It depends on the application how you
should define query parameters and even if it is possible at all. Notice that queries like these cannot be executed
immediately: they have to be prepared first. Once a parameterized query has been prepared, the user (or calling
code) can supply values for the parameters and have it executed many times, entering new values before every
call. How the values are entered and the execution started is up to the application. In a GUI environment, the
user typically types the parameter values in one or more text boxes and then clicks an Execute, Run or
Refresh button.
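Two sketches, the first with positional and the second with named parameters (the STORES table is an
assumption):

select name, address, phone
from stores
where city = ? and class = ?

select name, address, phone
from stores
where city = :city and class = :class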
The last query cannot be passed directly to the engine; the application must convert it to the other format first,
mapping named parameters to positional parameters.
GROUP BY

If the select list only contains aggregate columns or, more generally, columns whose values don't depend on
individual rows in the underlying set, GROUP BY is optional. When omitted, the final result set will consist
of a single row (provided that at least one aggregated column is present).
If the select list contains both aggregate columns and columns whose values may vary per row, the GROUP BY
clause becomes mandatory.
Syntax:
Argument             Description
non-aggr-expression  Any non-aggregating expression that is not included in the SELECT list, i.e. unselected columns from the source set or expressions that do not depend on the data in the set at all
column-copy          A literal copy, from the SELECT list, of an expression that contains no aggregate function
column-alias         The alias, from the SELECT list, of an expression (column) that contains no aggregate function
column-position      The position number, in the SELECT list, of an expression (column) that contains no aggregate function
A general rule of thumb is that every non-aggregate item in the SELECT list must also be in the GROUP BY list.
You can do this in three ways:
1. By copying the item verbatim from the select list, e.g. class or 'D:' || upper(doccode).

2. By specifying the column alias, if one exists.
3. By specifying the column position as an integer literal between 1 and the number of columns. Integer values
resulting from expressions or parameter substitutions are simply invariables and will be used as such in the
grouping. They will have no effect though, as their value is the same for each row.
Note
If you group by a column position, the expression at that position is copied internally from the select list. If it
concerns a subquery, that subquery will be executed again in the grouping phase. That is to say, grouping by
the column position, rather than duplicating the subquery expression in the grouping clause, saves keystrokes
and bytes, but it is not a way of saving processing cycles!
In addition to the required items, the grouping list may also contain:
- Columns from the source table that are not in the select list, or non-aggregate expressions based on such
columns. Adding such columns may further subdivide the groups. But since these columns are not in the
select list, you can't tell which aggregated row corresponds to which value in the column. So, in general, if
you are interested in this information, you also include the column or expression in the select list, which
brings you back to the rule: every non-aggregate column in the select list must also be in the grouping list.
- Expressions that aren't dependent on the data in the underlying set, e.g. constants, context variables,
single-value non-correlated subselects etc. This is only mentioned for completeness, as adding such items is
utterly pointless: they don't affect the grouping at all. Harmless but useless items like these may also figure
in the select list without being copied to the grouping list.
Examples
When the select list contains only aggregate columns, GROUP BY is not mandatory:
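For instance, a sketch using the STUDENTS table that also appears in the examples further on:

select count(*), avg(age)
from students
where sex = 'M'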
This will return a single row listing the number of male students and their average age. Adding expressions that
don't depend on values in individual rows of table STUDENTS doesn't change that:
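For example (same assumed table; current_date does not depend on any row):

select count(*), avg(age), current_date
from students
where sex = 'M'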
The above query has a major drawback though: it gives you information about the different classes, but it doesn't
tell you which row applies to which class. In order to get that extra bit of information, the non-aggregate column
CLASS must be added to the select list:
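For instance (an unaliased sketch; compare the aliased version a little further on):

select class, count(*), avg(age)
from students
where sex = 'M'
group by class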
Now we have a useful query. Notice that the addition of column CLASS also makes the GROUP BY clause
mandatory. We can't drop that clause anymore, unless we also remove CLASS from the column list.
The output of our last query may look something like this:
The headings COUNT and AVG are not very informative. In a simple case like this, you might get away
with that, but in general you should give aggregate columns a meaningful name by aliasing them:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
As you may recall from the formal syntax of the columns list, the AS keyword is optional.
Adding more non-aggregate (or rather: row-dependent) columns requires adding them to the GROUP BY clause
too. For instance, you might want to see the above information for girls as well; and you may also want to
differentiate between boarding and day students:
select class,
sex,
boarding_type,
count(*) as number,
avg(age) as avg_age
from students
group by class, sex, boarding_type
Each row in the result set corresponds to one particular combination of the variables class, sex and boarding type.
The aggregate results (number and average age) are given for each of these rather specific groups individually.
In a query like this, you don't see a total for boys as a whole, or day students as a whole. That's the tradeoff: the
more non-aggregate columns you add, the more you can pinpoint very specific groups, but the more you also
lose sight of the general picture. Of course you can still obtain the coarser aggregates through separate queries.
HAVING
Just as a WHERE clause limits the rows in a dataset to those that meet the search condition, so the HAVING
subclause imposes restrictions on the aggregated rows in a grouped set. HAVING is optional, and can only be
used in conjunction with GROUP BY.
The condition of a HAVING subclause can refer to:
- Any aggregated column in the select list. This is the most widely used alternative.
- Any aggregated expression that is not in the select list, but allowed in the context of the query. This is
sometimes useful too.
- Any column in the GROUP BY list. While legal, it is more efficient to filter on these non-aggregated data at
an earlier stage: in the WHERE clause.
- Any expression whose value doesn't depend on the contents of the dataset (like a constant or a context
variable). This is valid but utterly pointless, because it will either suppress the entire set or leave it
untouched, based on conditions that have nothing to do with the set itself.
Examples
Building on our earlier examples, this could be used to skip small groups of students:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having count(*) >= 5
To select only the classes where the age spread (max(age) - min(age)) is greater than 1.2:

select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having max(age) - min(age) > 1.2
Notice that if you're really interested in this information, you'd normally include min(age) and max(age)
or the expression max(age) - min(age) in the select list as well!
The next query selects only the classes whose names start with '3':

select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having class starting with '3'
Better would be to specify this condition in the WHERE clause instead:

select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M' and class starting with '3'
group by class
PLAN

Syntax:
PLAN <plan-expr>
Argument  Description
table     Table name or its alias
view      View name
index     Index name
Every time a user submits a query to the Firebird engine, the optimizer computes a data retrieval strategy. Most
Firebird clients can make this retrieval plan visible to the user. In Firebird's own isql utility, this is done with
the command SET PLAN ON. If you are studying query plans rather than running queries, SET PLANONLY ON
will show the plan without executing the query.
In most situations, you can trust that Firebird will select the optimal query plan for you. However, if you have
complicated queries that seem to be underperforming, it may very well be worth your while to examine the plan
and see if you can improve on it.
Simple plans
The simplest plans consist of just a relation name followed by a retrieval method. E.g., for an unsorted single-ta-
ble select without a WHERE clause:
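For example (the STUDENTS table and the index names used in the following plan sketches are assumptions):

select * from students
plan (students natural)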
If there's a WHERE or a HAVING clause, you can specify the index to be used for finding matches:
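A sketch, assuming an index IX_STUD_CLASS on the CLASS column:

select * from students
where class = '3C'
plan (students index (ix_stud_class))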
The INDEX directive is also used for join conditions (to be discussed a little later). It can contain a list of indexes,
separated by commas.
ORDER specifies the index for sorting the set if an ORDER BY or GROUP BY clause is present:
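For instance, assuming a primary key index PK_STUDENTS on the ID column:

select * from students
plan (students order pk_students)
order by id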
For sorting sets when there's no usable index available (or if you want to suppress its use), leave out ORDER
and prepend the plan expression with SORT:
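A sketch:

select * from students
plan sort (students natural)
order by name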
Notice that SORT, unlike ORDER, is outside the parentheses. This reflects the fact that the data rows are retrieved
unordered and sorted afterwards by the engine.
When selecting from a view, specify the view and the table involved. For instance, if you have a view FRESHMEN
that selects just the first-year students:
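A sketch, assuming FRESHMEN is defined over the STUDENTS table:

select * from freshmen
plan (freshmen students natural)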
Important
If a table or view has been aliased, it is the alias, not the original name, that must be used in the PLAN clause.
Composite plans
When a join is made, you can specify the index which is to be used for matching. You must also use the JOIN
directive on the two streams in the plan:
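A sketch; the CLASSES table, its columns and the index IX_CLASS_NAME are assumptions:

select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.name = s.class
plan join (s natural, c index (ix_class_name))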
If there is no index available to match the join criteria (or if you don't want to use it), the plan must first sort both
streams on their join column(s) and then merge them. This is achieved with the SORT directive (which we've
already met) and MERGE instead of JOIN:
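For example (the same hypothetical tables, now joined on columns for which no index exists):

select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.cookie = s.cookie
plan merge (sort (c natural), sort (s natural))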
Adding an ORDER BY clause means the result of the merge must also be sorted:
As follows from the formal syntax definition, JOINs and MERGEs in the plan may combine more than two
streams. Also, every plan expression may be used as a plan item in an encompassing plan. This means that plans
of certain complicated queries may have various nesting levels.
Finally, instead of MERGE you may also write SORT MERGE. As this makes absolutely no difference and may
create confusion with real SORT directives (the ones that do make a difference), it's probably best to stick
to plain MERGE.
Warning
Occasionally, the optimizer will accept a plan and then not follow it, even though it does not reject it as invalid.
One such example was
UNION
A UNION concatenates two or more datasets, thus increasing the number of rows but not the number of columns.
Datasets taking part in a UNION must have the same number of columns, and columns at corresponding positions
must be of the same type. Other than that, they may be totally unrelated.
By default, a union suppresses duplicate rows. UNION ALL shows all rows, including any duplicates. The op-
tional DISTINCT keyword makes the default behaviour explicit.
Syntax:
Unions take their column names from the first select query. If you want to alias union columns, do so in the
column list of the topmost SELECT. Aliases in other participating selects are allowed and may even be useful,
but will not propagate to the union level.
If a union has an ORDER BY clause, the only allowed sort items are integer literals indicating 1-based column
positions, optionally followed by an ASC/DESC and/or a NULLS FIRST/LAST directive. This also implies that
you cannot order a union by anything that isn't a column in the union. (You can, however, wrap it in a derived
table, which gives you back all the usual sort options.)
Unions are allowed in subqueries of any kind and can themselves contain subqueries. They can also contain
joins, and can take part in a join when wrapped in a derived table.
Examples
This query presents information from different music collections in one dataset using unions:
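A sketch; the three tables are assumptions, chosen so that the remarks below about the qualified stars and
the c aliases apply (the tables are presumed to have matching column counts and types):

select c.*, 'CD' as medium
from cds c
union
select l.*, 'LP'
from lps l
union
select c.*, 'cassette'
from cassettes c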
Qualifying the stars is necessary here because they are not the only item in the column list. Notice how the
c aliases in the first and third select do not conflict with each other: their scopes are not union-wide but apply
only to their respective select queries.
The next query retrieves names and phone numbers from translators and proofreaders. Translators who also
work as proofreaders will show up only once in the result set, provided their phone number is the same in both
tables. The same result can be obtained without DISTINCT. With ALL, these people would appear twice.
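A sketch (TRANSLATORS and PROOFREADERS are hypothetical tables):

select name, phone
from translators
union distinct
select name, phone
from proofreaders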
ORDER BY
When a SELECT statement is executed, the result set is not sorted in any way. It often happens that rows appear
to be sorted chronologically, simply because they are returned in the same order they were added to the table by
INSERT statements. To specify a sorting order for the set specification, an ORDER BY clause is used.
Syntax:
<ordering-item> ::=
{col-name | col-alias | col-position | expression}
[COLLATE collation-name]
[ASC[ENDING] | DESC[ENDING]]
[NULLS {FIRST|LAST}]
Argument Description
col-name Full column name
col-alias Column alias
col-position Column position in the SELECT list
expression Any expression
collation-name Collation name (sorting order for string types)
Description
The ORDER BY consists of a comma-separated list of the columns on which the result data set should be sorted.
The sort order can be specified by the name of the column, but only if the column was not previously aliased in
the SELECT columns list. The alias must be used if it was used there. The column's ordinal position number in
the SELECT list, or the alias given to the column with the help of the AS keyword, can be used without restriction.
The three forms of expressing the columns for the sort order can be mixed in the same ORDER BY clause. For
instance, one column in the list can be specified by its name and another column can be specified by its number.
Note
If you use the column position to specify the sort order for a query of the SELECT * style, the server expands
the asterisk to the full column list in order to determine the columns for the sort. It is, however, considered
sloppy practice to design ordered sets this way.
Sorting Direction
The keyword ASCENDING, usually abbreviated to ASC, specifies a sort direction from lowest to highest. AS-
CENDING is the default sort direction.
The keyword DESCENDING, usually abbreviated to DESC, specifies a sort direction from highest to lowest.
Specifying ascending order for one column and the descending order for another is allowed.
Collation Order
The keyword COLLATE specifies the collation order for a string column if you need a collation that is different
from the normal one for this column. The normal collation order will be either the default one for the database
character set or one that has been set explicitly in the column's definition.
NULLs Position
The keyword NULLS defines where NULL in the associated column will fall in the sort order: NULLS FIRST
places the rows with the NULL column above rows ordered by that column's value; NULLS LAST places those
rows after the ordered rows.
Ordering UNION-ed Sets

For the output of a UNION, the simplest (and, in some cases, the only) method for specifying the sort order
is by the ordinal column position. However, it is also valid to use the column names or aliases, from the first
contributing query only. The ASC/DESC and/or NULLS directives are available for this global set.
If discrete ordering within the contributing set is required, use of derived tables or common table expressions
for those sets may be a solution.
Examples
Sorting the result set in ascending order, ordering by the RDB$CHARACTER_SET_ID and
RDB$COLLATION_ID columns of the RDB$COLLATIONS table:
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY RDB$CHARACTER_SET_ID, RDB$COLLATION_ID
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY CHARSET_ID, COLL_ID
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY 1, 2
Sorting a SELECT * query by position numbers: possible, but nasty and not recommended:
SELECT *
FROM RDB$COLLATIONS
ORDER BY 3, 2
SELECT
BOOKS.*,
FILMS.DIRECTOR
FROM BOOKS, FILMS
ORDER BY 2
Caution
Expressions whose calculation results are non-negative integers will be interpreted as column position numbers
and will cause an exception if they fall outside the range from 1 to the number of columns.
Example:
SELECT
X, Y, NOTE
FROM PAIRS
ORDER BY X+Y DESC
The number returned by a function or a procedure is unpredictable, regardless of whether the sort order is
defined by the expression itself or by the column number
Examples, continued
Sorting in descending order by the values of column PROCESS_TIME, with NULLS placed at the beginning
of the set:
SELECT *
FROM MSG
ORDER BY PROCESS_TIME DESC NULLS FIRST
Sorting the set obtained by a UNION of two queries. Results are sorted in descending order for the values in
the second column, with NULLs at the end of the set; and in ascending order for the values of the first column
with NULLs at the beginning.
SELECT
DOC_NUMBER, DOC_DATE
FROM PAYORDER
UNION ALL
SELECT
DOC_NUMBER, DOC_DATE
FROM BUDGORDER
ORDER BY 2 DESC NULLS LAST, 1 ASC NULLS FIRST
ROWS
Used for: Retrieving a slice of rows from an ordered set
Syntax:
Argument Description
m, n Any integer expressions
Description: Limits the amount of rows returned by the SELECT statement to a specified number or range.
The FIRST and SKIP clauses do the same job as ROWS but are not SQL-compliant. Using ROWS is thus preferable
in new code. Unlike FIRST and SKIP, the ROWS and TO clauses accept any type of integer expression as their
arguments, without parentheses. Of course, parentheses may still be needed for nested evaluations inside the
expression and a subquery must always be enclosed in parentheses.
Important
Numbering of rows in the intermediate set (the overall set cached on disk before the slice is extracted)
starts at 1.
Both FIRST/SKIP and ROWS can be used without the ORDER BY clause, although it rarely makes sense to
do so, except perhaps when you want to take a quick look at the table data and don't care that rows will be
in random order. For this purpose, a query like SELECT * FROM TABLE1 ROWS 20 would return the
first 20 rows instead of a whole table that might be rather big.
Calling ROWS m retrieves the first m records from the set specified.
If m is greater than the total number of records in the intermediate data set, the entire set is returned
If m = 0, an empty set is returned
If m < 0, the SELECT statement call fails with an error
Calling ROWS m TO n retrieves the rows from the set, starting at row m and ending after row n; the range is
inclusive.
If m is greater than the total number of rows in the intermediate set and n >= m, an empty set is returned
If m is not greater than n and n is greater than the total number of rows in the intermediate set, the result set
will be limited to rows starting from m, up to the end of the set
If m < 1 and n < 1, the SELECT statement call fails with an error
If n = m - 1, an empty set is returned
If n < m - 1, the SELECT statement call fails with an error
While ROWS replaces the non-standard FIRST and SKIP syntax, there is one situation where the standard syntax
does not provide the same behaviour: specifying SKIP n on its own returns the entire intermediate set, without
the first n rows. The ROWS...TO syntax needs a little help to achieve this.
With the ROWS syntax, you need a ROWS clause in association with the TO clause and deliberately make the
second (n) argument greater than the size of the intermediate data set. This is achieved by creating an expression
for n that uses a subquery to retrieve the count of rows in the intermediate set and adds 1 to it.
When ROWS is used in a UNION query, the ROWS directive is applied to the unioned set and must be placed
after the last SELECT statement.
If a need arises to limit the subsets returned by one or more SELECT statements inside UNION, there are a
couple of options:
1. Use FIRST/SKIP syntax in these SELECT statements, bearing in mind that an ordering clause (ORDER BY)
cannot be applied locally to the discrete queries, but only to the combined output.
2. Convert the queries to derived tables with their own ROWS clauses.
Examples
The following examples rewrite the examples used in the section about FIRST and SKIP, earlier in this chapter.
Retrieve the first ten names from the output of a sorted query on the PEOPLE table:
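A possible form, assuming the PEOPLE table has ID and NAME columns:

select id, name from People
order by name asc
rows 10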
or its equivalent
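Under the same assumptions:

select id, name from People
order by name asc
rows 1 to 10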
Return all records from the PEOPLE table except for the first 10 names:
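A possible form, using a subquery to supply the upper bound (same assumptions):

select id, name from People
order by name asc
rows 11 to (select count(*) from People)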
And this query will return the last 10 records (pay attention to the parentheses):
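A possible form (same assumptions):

select id, name from People
order by name asc
rows (select count(*) - 9 from People)
  to (select count(*) from People)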
This one will return rows 81-100 from the PEOPLE table:
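A possible form (same assumptions):

select id, name from People
order by name asc
rows 81 to 100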
Note
ROWS can also be used with the UPDATE and DELETE statements.
[WHERE ...]
[FOR UPDATE [OF ...]]
FOR UPDATE does not do what it suggests. Its only effect currently is to disable the pre-fetch buffer.
Tip
It is likely to change in future: the plan is to validate whether cursors marked with FOR UPDATE are truly
updateable, and to reject positioned updates and deletes for cursors evaluated as non-updateable.
WITH LOCK
Available in: DSQL, PSQL
Description: WITH LOCK provides a limited explicit pessimistic locking capability for cautious use in conditions
where the affected row set is:
- extremely small (ideally a singleton), and
- precisely controlled by the application code.
The need for a pessimistic lock in Firebird is very rare indeed and should be well understood before use of
this extension is considered.
It is essential to understand the effects of transaction isolation and other transaction attributes before attempting
to implement explicit locking in your application.
Syntax:
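In outline, the clause is appended to a single-table SELECT, following the FOR UPDATE syntax shown earlier
(a sketch):

SELECT ... FROM single_table
  [WHERE ...]
  [FOR UPDATE [OF <columns>]]
  WITH LOCK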
If the WITH LOCK clause succeeds, it will secure a lock on the selected rows and prevent any other transaction
from obtaining write access to any of those rows, or their dependants, until your transaction ends.
WITH LOCK can only be used with a top-level, single-table SELECT statement. It is not available:
in a subquery specification
for joined sets
with the DISTINCT operator, a GROUP BY clause or any other aggregating operation
with a view
with the output of a selectable stored procedure
with an external table
As the engine considers, in turn, each record falling under an explicit lock statement, it returns either the record
version that is the most currently committed, regardless of database state when the statement was submitted,
or an exception.
Wait behaviour and conflict reporting depend on the transaction parameters specified in the TPB block:
Tip
As an alternative, it may be possible in your access components to set the size of the fetch buffer to 1. This
would enable you to process the currently-locked row before the next is fetched and locked, or to handle errors
without rolling back your transaction.
OF <column-names>
The engine guarantees that all records returned by an explicit lock statement are actually locked and do meet
the search conditions specified in WHERE clause, as long as the search conditions do not depend on any other
tables, via joins, subqueries, etc. It also guarantees that rows not meeting the search conditions will not be locked
by the statement. It can not guarantee that there are no rows which, though meeting the search conditions, are
not locked.
Note
This situation can arise if other, parallel transactions commit their changes during the course of the locking
statement's execution.
The engine locks rows at fetch time. This has important consequences if you lock several rows at once. Many
access methods for Firebird databases default to fetching output in packets of a few hundred rows (buffered
fetches). Most data access components cannot bring you the rows contained in the last-fetched packet, where
an error occurred.
While explicit locks can be used to prevent and/or handle unusual update conflict errors, the volume of
deadlock errors will grow unless you design your locking strategy carefully and control it rigorously.
Most applications do not need explicit locks at all. The main purposes of explicit locks are (1) to prevent
expensive handling of update conflict errors in heavily loaded applications and (2) to maintain integrity of
objects mapped to a relational database in a clustered environment. If your use of explicit locking doesn't fall
in one of these two categories, then it's the wrong way to do the task in Firebird.
Explicit locking is an advanced feature; do not misuse it! While solutions for these kinds of problems may be
very important for web sites handling thousands of concurrent writers, or for ERP/CRM systems operating
in large corporations, most application programs do not need to work in such conditions.
i. Simple:
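A minimal sketch (the DOCUMENT table is an assumption):

SELECT * FROM DOCUMENT WHERE ID = ? WITH LOCK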
INTO
Used for: Passing SELECT output into variables
In PSQL code (triggers, stored procedures and executable blocks), the results of a SELECT statement can be
loaded row-by-row into local variables. It is often the only way to do anything with the returned values at all.
The number, order and types of the variables must match the columns in the output row.
A plain SELECT statement can only be used in PSQL if it returns at most one row, i.e., if it is a singleton select.
For multi-row selects, PSQL provides the FOR SELECT loop construct, discussed later in the PSQL chapter.
PSQL also supports the DECLARE CURSOR statement, which binds a named cursor to a SELECT statement. The
cursor can then be used to walk the result set.
Syntax: In PSQL the INTO clause is placed at the very end of the SELECT statement.
Note
Examples
Selecting some aggregated values and passing them into previously declared variables min_amt, avg_amt
and max_amt:
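A sketch, assuming an ORDERS table with an integer AMOUNT column:

select min(amount), avg(cast(amount as numeric(18, 2))), max(amount)
from orders
where artno = 372218
into :min_amt, :avg_amt, :max_amt;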
Note
The CAST serves to make the average a real number; otherwise, since amount is presumably an integer field,
SQL rules would truncate it to the nearest lower integer.
A PSQL trigger that retrieves two values as a BLOB field (using the LIST() function) and assigns it INTO a third
field:
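A sketch of such an assignment inside the trigger body (the table and column names are assumptions):

select list(name, ', ')
from persons
where id in (new.father, new.mother)
into new.parentnames;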
Common Table Expressions (WITH ... AS ... SELECT)

A common table expression or CTE can be described as a virtual table or view, defined in a preamble to a main
query, and going out of scope after the main query's execution. The main query can reference any CTEs defined
in the preamble as if they were regular tables or views. CTEs can be recursive, i.e. self-referencing, but they
cannot be nested.
Syntax:
Argument      Description
cte-stmt      Any SELECT statement, including UNION
main-query    The main SELECT statement, which can refer to the CTEs defined in the preamble
name          Alias for a table expression
column-alias  Alias for a column in a table expression
Example:
with dept_year_budget as (
select fiscal_year,
dept_no,
sum(projected_budget) as budget
from proj_dept_budget
group by fiscal_year, dept_no
)
select d.dept_no,
d.department,
dyb_2008.budget as budget_08,
dyb_2009.budget as budget_09
from department d
left join dept_year_budget dyb_2008
on d.dept_no = dyb_2008.dept_no
and dyb_2008.fiscal_year = 2008
left join dept_year_budget dyb_2009
on d.dept_no = dyb_2009.dept_no
and dyb_2009.fiscal_year = 2009
where exists (
select * from proj_dept_budget b
where d.dept_no = b.dept_no
)
CTE Notes
A CTE definition can contain any legal SELECT statement, as long as it doesn't have a WITH... preamble
of its own (no nesting).
CTEs defined for the same main query can reference each other, but care should be taken to avoid loops.
Each CTE can be referenced multiple times in the main query, using different aliases if necessary.
When enclosed in parentheses, CTE constructs can be used as subqueries in SELECT statements, but also in
UPDATEs, MERGEs etc.
for
with my_rivers as (select * from rivers where owner = 'me')
select name, length from my_rivers into :rname, :rlen
do
begin
..
end
Important
If a CTE is declared, it must be used later: otherwise, you will get an error like this: 'CTE "AAA" is not used
in query'.
Recursive CTEs
A recursive (self-referencing) CTE is a UNION which must have at least one non-recursive member, called the
anchor. The non-recursive member(s) must be placed before the recursive member(s). Recursive members are
linked to each other and to their non-recursive neighbour by UNION ALL operators. The unions between non-
recursive members may be of any type.
Recursive CTEs require the RECURSIVE keyword to be present right after WITH. Each recursive union member
may reference itself only once, and it must do so in a FROM clause.
A great benefit of recursive CTEs is that they use far less memory and CPU cycles than an equivalent recursive
stored procedure.
Execution Pattern
- The engine begins execution from a non-recursive member.
- For each row evaluated, it starts executing each recursive member one by one, using the current values from
the outer row as parameters.
- If the currently executing instance of a recursive member produces no rows, execution loops back one level
and gets the next row from the outer result set.
The next example returns the pedigree of a horse. The main difference is that recursion occurs simultaneously
in two branches of the pedigree.
MARK,
DEPTH
FROM
PEDIGREE
Aggregates (DISTINCT, GROUP BY, HAVING) and aggregate functions (SUM, COUNT, MAX etc) are not
allowed in recursive union members.
INSERT
Used for: Inserting rows of data into a table
Syntax:
Argument   Description
target     The name of the table or view to which a new row, or batch of rows, should be added
colname    Column in the table or view
value      An expression whose value is used for inserting into the table
ret_value  The expression to be returned in the RETURNING clause
varname    Name of a PSQL local variable
Description: The INSERT statement is used to add rows to a table or to one or more tables underlying a view:
If the column values are supplied in a VALUES clause, exactly one row is inserted
The values may be provided instead by a SELECT expression, in which case zero to many rows may be inserted
With the DEFAULT VALUES clause, no values are provided at all and exactly one row is inserted.
Restrictions
Columns returned to the NEW.column_name context variables in triggers should not have a colon (:)
prefixed to their names
No column may appear more than once in the column list.
Regardless of the method used for inserting rows, be mindful of any columns in the target table or view that
are populated by BEFORE INSERT triggers, such as primary keys and case-insensitive search columns. Those
columns should be excluded from both the column_list and the VALUES list if, as they should, the triggers
test the NEW.column_name for NULL.
Note
Introducer syntax provides a way to identify the character set of a value that is a string constant (literal).
Introducer syntax works only with literal strings: it cannot be applied to string variables, parameters, column
references or values that are expressions.
Examples:
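Sketches of single-row inserts; the CARS and PEOPLE tables are assumptions, and the second statement uses
the introducer syntax just described:

INSERT INTO cars (make, model, byyear)
VALUES ('Ford', 'T', 1908);

INSERT INTO People (name)
VALUES (_ISO8859_1 'Hans-Jörg Schäfer');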
Literal values, context variables or expressions of compatible type can be substituted for any column in the
source row. In this case, a source column list and a corresponding VALUES list are required.
If the column list is absent (as it is when SELECT * is used for the source expression), the column_list must
contain the names of every column in the target table or view (computed columns excluded).
Examples:
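Sketches with hypothetical tables; the second form presumes that NEW_CARS and CARS have exactly matching
columns:

INSERT INTO cars (make, model, byyear)
SELECT make, model, byyear
FROM new_cars;

INSERT INTO cars
SELECT * FROM new_cars;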
Of course, the column names in the source table need not be the same as those in the target table. Any type of
SELECT statement is permitted, as long as its output columns exactly match the insert columns in number, order
and type. Types need not be exactly the same, but they must be assignment-compatible.
INSERT INTO T
SELECT * FROM T
known affectionately as the infinite insertion loop, will continuously select rows and insert them, over and
over, until the system runs out of storage space.
This is a quirk that affects all data-changing DML operations, with a variety of effects. It happens because, in
the execution layers, DML statements use implicit cursors for performing the operations. Thus, using our simple
example, execution works as follows:
The implementation results in behaviour that does not accord with the SQL standards. Future versions of Firebird
will comply with the standard.
Example:
In DSQL, a statement with RETURNING always returns only one row. If the RETURNING clause is specified
and more than one row is inserted by the INSERT statement, the statement fails and an error message is returned.
This behaviour may change in future Firebird versions.
Examples:
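A sketch; the SCHOLARS table reappears in the UPDATE examples later in this chapter, and the ID column is
presumed to be filled by a BEFORE INSERT trigger:

INSERT INTO Scholars (firstname, lastname, address)
VALUES ('Henry', 'Higgins', '27A Wimpole Street')
RETURNING id, firstname, lastname;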
Notes:
RETURNING is only supported for VALUES inserts and singleton SELECT inserts.
In DSQL, a statement with a RETURNING clause always returns exactly one row. If no record was actually
inserted, the fields in this row are all NULL. This behaviour may change in a later version of Firebird. In
PSQL, if no row was inserted, nothing is returned, and the target variables keep their existing values.
Editor's note :: These notes were missing from the raw translation: perhaps they were overlooked by the trans-
lator?
Inserting into BLOB columns is only possible under the following circumstances:

1. The client application has made special provisions for such inserts, using the Firebird API. In this case, the
modus operandi is application-specific and outside the scope of this manual.

2. The value is a text string of no more than 32767 bytes.

Caution
If the value is not a string literal, beware of concatenations, as the output from the expression may exceed
the maximum length.

3. You are using the INSERT ... SELECT form and one or more columns in the result set are BLOBs.
UPDATE
Used for: Modifying rows in tables and views
Syntax:
Argument           Description
target             The name of the table or view where the records are updated
alias              Alias for the table or view
col                Name or alias of a column in the table or view
newval             New value for a column that is to be updated in the table or view by the statement
search-conditions  A search condition limiting the set of the rows to be updated
cursorname         The name of the cursor through which the row[s] to be updated are positioned
plan_items         Clauses in the query plan
sort_items         Columns listed in an ORDER BY clause
m, n               Integer expressions for limiting the number of rows to be updated
ret_value          A value to be returned in the RETURNING clause
varname            Name of a PSQL local variable
Description: The UPDATE statement changes values in a table or in one or more of the tables that underlie a
view. The columns affected are specified in the SET clause. The rows affected may be limited by the WHERE
and ROWS clauses. If neither WHERE nor ROWS is present, all the records in the table will be updated.
Using an alias
If you assign an alias to a table or a view, the alias must be used when specifying columns and also in any column
references included in other clauses.
Examples:
Correct usage:
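A sketch with a hypothetical FRUIT table:

update Fruit F
set soort = 'pisang'
where F.id = 123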
Not possible:
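Once the alias has been assigned, the original table name may no longer be used for qualification (same
hypothetical table):

update Fruit F
set soort = 'pisang'
where Fruit.id = 123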
A column name can be used in expressions on the right. The old value of the column will always be used in
these right-side values, even if the column was already assigned a new value earlier in the SET clause.
A B
----
1 0
2 0
The statement below, in which the table name T is an assumption for this illustration,
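update T
set a = 5, b = a

changes the contents of the table to: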
A B
----
5 1
5 2
Notice that the old values (1 and 2) are used to update the b column even after the column was assigned a
new value (5).
Note
It was not always like that. Before version 2.5, columns got their new values immediately upon assignment. It
was non-standard behaviour that was fixed in version 2.5.
To maintain compatibility with legacy code, the configuration file firebird.conf includes the parameter
OldSetClauseSemantics, that can be set to True (1) to restore the old, bad behaviour. It is a temporary
measure; the parameter will be removed in the future.
In PSQL, if a named cursor is being used for updating a set, using the WHERE CURRENT OF clause, the action
is limited to the row where the cursor is currently positioned. This is a positioned update.
Note
The WHERE CURRENT OF clause is available only in PSQL, since there is no statement for creating and ma-
nipulating an explicit cursor in DSQL. Searched updates are also available in PSQL, of course.
Examples:
UPDATE People
SET firstname = 'Boris'
WHERE lastname = 'Johnson';
UPDATE employee e
SET salary = salary * 1.05
WHERE EXISTS(
SELECT *
FROM employee_project ep
WHERE e.emp_no = ep.emp_no);
UPDATE addresses
SET city = 'Saint Petersburg', citycode = 'PET'
WHERE city = 'Leningrad'
UPDATE employees
SET salary = 2.5 * salary
WHERE title = 'CEO'
For string literals with which the parser needs help to interpret the character set of the data, the introducer syntax
may be used. The string literal is preceded by the character set name, prefixed with an underscore character:
UPDATE People
SET name = _ISO8859_1 'Hans-Jörg Schäfer'
WHERE id = 53662
UPDATE T
SET ...
WHERE ID IN (SELECT FIRST 1 ID FROM T)
known affectionately as the infinite update loop, will continuously update rows, over and over, and give the
impression that the server has hung.
Quirks like this can affect any data-changing DML operations, most often when the selection conditions involve
a subquery. Cases have been reported where sort order interferes with expectations, without involving a sub-
query. It happens because, in the execution layers, instead of establishing a stable target set and then execut-
ing the data changes to each set member, DML statements use implicit cursors for performing the operations
on whatever row currently meets the conditions, without knowledge of whether that row formerly failed the
condition or was updated already. Thus, using a simple example pattern:
Firebird's implementation does not accord with the SQL standards, which require that a stable set be established
before any data are changed. Versions of Firebird from V.3 onward will comply with the standard.
If ROWS has one argument, m, the rows to be updated will be limited to the first m rows.
Points to note:
If m > the number of rows being processed, the entire set of rows is updated
If m = 0, no rows are updated
If m < 0, an error occurs and the update fails
If two arguments are used, m and n, ROWS limits the rows being updated to rows from m to n inclusively. Both
arguments are integers and start from 1.
Points to note:
ROWS Example:
UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 20
When the RETURNING set contains data from the current row, the returned values report changes made in the
BEFORE UPDATE triggers, but not those made in AFTER UPDATE triggers.
The context variables OLD.fieldname and NEW.fieldname can be used as column names. If OLD. or NEW. is not
specified, the column values returned are the NEW. ones.
In DSQL, a statement with RETURNING always returns a single row. If the statement updates no records, the
returned values contain NULL. This behaviour may change in future Firebird versions.
Note
When a value is returned and assigned to a NEW context variable, it is not valid to use a colon prefix on it.
For example, this is invalid:

...
into :var1, :var2, :new.id
...

and this is valid:

...
into :var1, :var2, new.id
UPDATE Scholars
SET firstname = 'Hugh', lastname = 'Pickering'
WHERE firstname = 'Henry' and lastname = 'Higgins'
RETURNING id, old.lastname, new.lastname
1. The client application has made special provisions for this operation, using the Firebird API. In this case,
the modus operandi is application-specific and outside the scope of this manual.
2. The new value is a text string of at most 32767 bytes. Please notice: if the value is not a string literal, beware
of concatenations, as these may exceed the maximum length.
3. The source is itself a BLOB column or, more generally, an expression that returns a BLOB.
UPDATE OR INSERT
Used for: Updating an existing record in a table or, if it does not exist, inserting it
Syntax:
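In outline (a sketch of the general form):

UPDATE OR INSERT INTO target [(<column_list>)]
  VALUES (<value_list>)
  [MATCHING (<column_list>)]
  [RETURNING <values> [INTO <variables>]]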
Argument   Description
target     The name of the table or view where the record[s] is to be updated or a new record inserted
colname    Name of a column in the table or view
value      An expression whose value is to be used for inserting or updating the table
ret_value  An expression returned in the RETURNING clause
varname    Variable name (PSQL only)
Description: UPDATE OR INSERT inserts a new record or updates one or more existing records. The action
taken depends on the values provided for the columns in the MATCHING clause (or, if the latter is absent, in the
primary key). If there are records found matching those values, they are updated. If not, a new record is inserted.
Restrictions
In DSQL, a statement with a RETURNING clause always returns exactly one row. If a RETURNING clause is
present and more than one matching record is found, an error is raised. This behaviour may change in a later
version of Firebird.
Example: Modifying data in a table, using UPDATE OR INSERT in a PSQL module. The return value is passed
to a local variable, whose colon prefix is not optional.
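A sketch; the COWS table and the REC_ID column, presumed to be filled by a BEFORE INSERT trigger, are
assumptions:

update or insert into Cows (Name, Number, Location)
  values ('Suzy Creamcheese', 3278823, 'Green Pastures')
  matching (Number)
  returning rec_id into :id;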
Because of the way the execution of data-changing DML is implemented in Firebird, up to and including this
version, the sets targeted for updating sometimes produce unexpected results. For more information, refer to
The Unstable Cursor Problem in the UPDATE section.
DELETE
Used for: Deleting rows from a table or view
Syntax:
DELETE
FROM {target} [[AS] alias]
[WHERE {search-conditions | CURRENT OF cursorname}]
[PLAN plan_items]
[ORDER BY sort_items]
[ROWS <m> [TO <n>]]
[RETURNING <returning_list> [INTO <variables>]]
Argument Description
target The name of the table or view from which the records are to be deleted
alias Alias for the target table or view
search-conditions Search condition limiting the set of rows being targeted for deletion
cursorname The name of the cursor in which current record is positioned for deletion
plan_items Query plan clause
sort_items ORDER BY clause
m, n Integer expressions for limiting the number of rows being deleted
ret_value An expression to be returned in the RETURNING clause
varname Name of a PSQL variable
Description: DELETE removes rows from a database table or from one or more of the tables that underlie a
view. WHERE and ROWS clauses can limit the number of rows deleted. If neither WHERE nor ROWS is present,
DELETE removes all the rows in the relation.
Aliases
If an alias is specified for the target table or view, it must be used to qualify all field name references in the
DELETE statement.
Examples:
Supported usage:
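Sketches with a hypothetical CITIES table:

delete from Cities
where name starting with 'Alex'

delete from Cities C
where C.name = 'New York'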
Not possible:
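Once an alias has been assigned, the original table name may no longer be used for qualification (same
hypothetical table):

delete from Cities C
where Cities.name = 'New York'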
WHERE
The WHERE clause sets the conditions that limit the set of records for a searched delete.
In PSQL, if a named cursor is being used for deleting a set, using the WHERE CURRENT OF clause, the action
is limited to the row where the cursor is currently positioned. This is a positioned delete.
Note
The WHERE CURRENT OF clause is available only in PSQL and ESQL, since there is no statement for creating
and manipulating an explicit cursor in DSQL. Searched deletes are also available in PSQL, of course.
Examples:
PLAN
A PLAN clause allows the user to optimize the operation manually.
Example:
The ROWS clause limits the number of rows being deleted. Integer literals or any integer expressions can be
used for the arguments m and n.
If ROWS has one argument, m, the rows to be deleted will be limited to the first m rows.
Points to note:
- If m > the number of rows being processed, the entire set of rows is deleted
- If m = 0, no rows are deleted
- If m < 0, an error occurs and the deletion fails
If two arguments are used, m and n, ROWS limits the rows being deleted to rows from m to n inclusively. Both
arguments are integers and start from 1.
Points to note:
Examples:
No sorting (ORDER BY) is specified so 8 found records, starting from the fifth one, will be deleted:
ROWS 5 TO 12
RETURNING
A DELETE statement removing at most one row may optionally include a RETURNING clause in order to return
values from the deleted row. The clause, if present, need not contain all the relation's columns and may also
contain other columns or expressions.
Notes
- In DSQL, a statement with RETURNING always returns a singleton, never a multi-row set. If no records are deleted, the returned columns contain NULL. This behaviour may change in future Firebird versions.
- If the row is not deleted, nothing is returned and the target variables keep their values.
Examples:
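A sketch of a singleton delete that returns values from the deleted row; the Scholars table and its columns are illustrative:

DELETE FROM Scholars
  WHERE first_name = 'Henry' AND last_name = 'Higgins'
  RETURNING last_name, fullname, id;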
Because of the way the execution of data-changing DML is implemented in Firebird, up to and including this
version, the sets targeted for deletion sometimes produce unexpected results. For more information, refer to
The Unstable Cursor Problem in the UPDATE section.
MERGE
Used for: Merging data from a source set into a target relation
Syntax:
MERGE INTO target [[AS] target-alias]
  USING source [[AS] source-alias]
  ON join-conditions
  WHEN MATCHED THEN UPDATE SET colname = value [, colname = value ...]
  WHEN NOT MATCHED THEN INSERT [(<columns>)] VALUES (<values>)
Argument Description
target Name of target relation (table or updatable view)
source Data source. It can be a table, a view, a stored procedure or a derived table
target-alias Alias for the target relation (table or updatable view)
source-alias Alias for the source relation or set
join-conditions The (ON) condition[s] for matching the source records with those in the target
colname Name of a column in the target relation
value The value assigned to a column in the target table. It is an expression that may be a literal value, a PSQL variable, a column from the source or a compatible context variable
Description
The MERGE statement merges data into a table or updatable view. The source may be a table, view or anything
you can SELECT from in general. Each source record will be used to update one or more target records, insert
a new record in the target table, or neither.
The action taken depends on the supplied join condition and the WHEN clause(s). The condition will typically
contain a comparison of fields in the source and target relations.
Notes
Only one of each WHEN clause can be supplied. This will change in the next major version of Firebird, when
compound matching conditions will be supported.
WHEN NOT MATCHED is evaluated from the source viewpoint, that is, the table or set specified in USING.
It has to work this way because, if the source record does not match a target record, INSERT is executed. Of
course, if there is a target record that does not match a source record, nothing is done.
Currently, the ROW_COUNT variable returns the value 1, even if more than one record is modified or inserted.
For details and progress, refer to Tracker ticket CORE-4400.
If the WHEN MATCHED clause is present and several records match a single record in the target table, an
UPDATE will be executed on that one target record for each one of the matching source records, with each
successive update overwriting the previous one. This behaviour does not comply with the SQL:2003 standard,
which requires that this situation throw an exception (an error).
Examples:
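A sketch of a typical merge, assuming illustrative books and purchases tables matched on their title columns:

MERGE INTO books b
  USING purchases p
  ON p.title = b.title
  WHEN MATCHED THEN
    UPDATE SET b.descr = b.descr || '; ' || p.descr
  WHEN NOT MATCHED THEN
    INSERT (title, descr, bought) VALUES (p.title, p.descr, p.bought);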
Because of the way the execution of data-changing DML is implemented in Firebird, up to and including this
version, the sets targeted for merging sometimes produce unexpected results. For more information, refer to
The Unstable Cursor Problem in the UPDATE section.
EXECUTE PROCEDURE
Used for: Executing a stored procedure
Syntax:
Argument Description
procname Name of the stored procedure
inparam An expression evaluating to the declared data type of an input parameter
varname A PSQL variable to receive the return value
Description: Executes an executable stored procedure, taking a list of one or more input parameters, if they are
defined for the procedure, and returning a one-row set of output values, if they are defined for the procedure.
Invoking the other style of stored procedure (a selectable one) is possible with EXECUTE PROCEDURE but
it returns only the first row of an output set which is almost surely designed to be multi-row. Selectable stored
procedures are designed to be invoked by a SELECT statement, producing output that behaves like a virtual table.
Notes
In PSQL and DSQL, input parameters may be any expression that resolves to the expected type.
Although parentheses are not required after the name of the stored procedure to enclose the input parameters,
their use is recommended for the sake of good housekeeping.
Where output parameters have been defined in a procedure, the RETURNING_VALUES clause can be used
in PSQL to retrieve them into a list of previously declared variables that conforms in sequence, data type
and number with the defined output parameters.
The list of RETURNING_VALUES may be optionally enclosed in parentheses and their use is recommended.
When DSQL applications call EXECUTE PROCEDURE using the Firebird API or some form of wrapper for
it, a buffer is prepared to receive the output row and the RETURNING_VALUES clause is not used.
Examples:
In Firebird's command-line utility isql, with literal parameters and optional parentheses:
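For example, assuming an executable procedure GET_COEFF that takes two input parameters:

EXECUTE PROCEDURE GET_COEFF (0.5, 12);
-- or, without parentheses:
EXECUTE PROCEDURE GET_COEFF 0.5, 12;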
Note: In isql, RETURNING_VALUES is not used. Any output values are captured by the application
and displayed automatically.
EXECUTE BLOCK
Used for: Creating an anonymous block of PSQL code in DSQL for immediate execution
Syntax:
datatype ::=
{SMALLINT | INTEGER | BIGINT}
| {FLOAT | DOUBLE PRECISION}
| {DATE | TIME | TIMESTAMP}
| {DECIMAL | NUMERIC} [(precision [, scale])]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[CHARACTER SET charset]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING] [(size)]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset]
| BLOB [(seglen [, subtype_num])]
Argument Description
param_decl Name and description of an input or output parameter
declarations A section for declaring local variables and named cursors
declare_var Local variable declaration
declare_cursor Declaration of a named cursor
paramname The name of an input or output parameter of the procedural block, up to 31 characters long. The name must be unique among input and output parameters and local variables in the block
datatype SQL data type
collation Collation sequence
domain Domain
rel Name of a table or view
col Name of a column in a table or view
precision Precision. From 1 to 18
scale Scale. From 0 to 18. It must be less than or equal to precision
size The maximum size of a string, in characters
charset Character set
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size, it cannot be greater than 65,535
Description: Executes a block of PSQL code as if it were a stored procedure, optionally with input and output
parameters and variable declarations. This allows the user to perform on-the-fly PSQL within a DSQL context.
Examples:
EXECUTE BLOCK
AS
declare i INT = 0;
BEGIN
WHILE (i < 128) DO
BEGIN
INSERT INTO AsciiTable VALUES (:i, ascii_char(:i));
i = i + 1;
END
END
The next example calculates the geometric mean of two numbers and returns it to the user:
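A sketch of such a block; the parameter and output names are illustrative:

EXECUTE BLOCK (x DOUBLE PRECISION = ?, y DOUBLE PRECISION = ?)
RETURNS (gmean DOUBLE PRECISION)
AS
BEGIN
  gmean = SQRT(x * y);
  SUSPEND;
END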
Because this block has input parameters, it has to be prepared first. Then the parameters can be set
and the block executed. It depends on the client software how this must be done and even if it is
possible at all (see the notes below).
Our last example takes two integer values, smallest and largest. For all the numbers in the range
smallest .. largest, the block outputs the number itself, its square, its cube and its fourth power.
Again, it depends on the client software if and how you can set the parameter values.
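A sketch of such a block (names are illustrative):

EXECUTE BLOCK (smallest INT = ?, largest INT = ?)
RETURNS (number INT, square BIGINT, cube BIGINT, fourth BIGINT)
AS
BEGIN
  number = smallest;
  WHILE (number <= largest) DO
  BEGIN
    square = number * number;
    cube   = number * square;
    fourth = number * cube;
    SUSPEND;
    number = number + 1;
  END
END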
The input parameters of the block get their values after the statement is prepared, but before it is executed. This requires special provisions, which
not every client application offers. (Firebird's own isql, for one, doesn't.)
The server only accepts question marks (?) as placeholders for the input values, not :a, :MyParam etc.,
or literal values. Client software may support the :xxx form though, and will preprocess it before sending
it to the server.
If the block has output parameters, you must use SUSPEND or nothing will be returned.
Output is always returned in the form of a result set, just as with a SELECT statement. You can't use
RETURNING_VALUES or execute the block INTO some variables, even if there is only one result row.
PSQL Links
For more information about parameter and variable declarations, and <PSQL statements> consult
Chapter 7, Procedural SQL (PSQL) Statements.
For <declarations> in particular, see DECLARE [VARIABLE] and DECLARE CURSOR for the exact
syntax.
Statement Terminators
Some SQL statement editors, specifically the isql utility that comes with Firebird and possibly some third-party editors, employ an internal convention that requires all statements to be terminated with a semi-colon.
This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with
this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the
Terminator in isql.
Chapter 7
Procedural SQL
(PSQL) Statements
REVIEW STATUS
All sections from this point forward to the end of the chapter are awaiting technical and editorial review.
Procedural SQL (PSQL) is a procedural extension of SQL. This language subset is used for writing stored
procedures, triggers, and PSQL blocks.
PSQL provides all the basic constructs of traditional structured programming languages, and also includes DML
statements (SELECT, INSERT, UPDATE, DELETE, etc.), with slight modifications to syntax in some cases.
Elements of PSQL
A procedural extension may contain declarations of local variables and cursors, assignments, conditional state-
ments, loops, statements for raising custom exceptions, error handling and sending messages (events) to client
applications. Triggers have access to special context variables, two arrays that store, respectively, the NEW
values for all columns during insert and update activity and the OLD values during update and delete work.
When a DML statement with parameters is included in PSQL code, the parameter name must be prefixed by
a colon (:) in most situations. The colon is optional in statement syntax that is specific to PSQL, such as
assignments and conditionals. The colon prefix on parameters is not required when calling stored procedures
from within another PSQL module or in DSQL.
Transactions
Stored procedures are executed in the context of the transaction in which they are called. Triggers are executed
as an intrinsic part of the operation of the DML statement: thus, their execution is within the same transaction
context as the statement itself. Individual transactions are launched for database event triggers.
Statements that start and end transactions are not available in PSQL, but it is possible to run a statement or a
block of statements in an autonomous transaction.
Module Structure
PSQL code modules consist of a header and a body. The DDL statements for defining them are complex statements; that is, they consist of a single statement that encloses blocks of multiple statements. These statements
begin with a verb (CREATE, ALTER, DROP, RECREATE, CREATE OR ALTER) and end with the last END
statement of the body.
The header provides the module name and defines any parameters and variables that are used in the body. Stored
procedures and PSQL blocks may have input and output parameters. Triggers do not have either input or output
parameters.
The header of a trigger indicates the database event (insert, update or delete, or a combination) and the phase of
operation (BEFORE or AFTER that event) that will cause it to fire.
The body of a PSQL module is a block of statements that run in a logical sequence, like a program. A block
of statements is contained within a BEGIN and an END statement. The main BEGIN...END block may contain
any number of other BEGIN...END blocks, both embedded and sequential. All statements except BEGIN and
END are terminated by semicolons (;). No other character is valid for use as a terminator for PSQL statements.
Here we digress a little, to explain how to switch the terminator character in the isql utility to make it
possible to define PSQL modules in that environment without conflicting with isql itself, which uses the
same character, semicolon (;), as its own statement terminator.
SET TERM
Used for: Changing the terminator character[s] to avoid conflict with the terminator character in PSQL
statements
Syntax:
Argument Description
new_terminator New terminator
old_terminator Old terminator
When you write your triggers and stored procedures in isql, either in the interactive interface or in scripts,
running a SET TERM statement is needed to switch the normal isql statement terminator from the semicolon
to some other character or short string, to avoid conflict with the non-changeable semicolon terminator in
PSQL. The switch to an alternative terminator needs to be done before you begin defining PSQL objects
or running your scripts.
The alternative terminator can be any string of characters except for a space, an apostrophe or the current
terminator character[s]. Any letter character[s] used will be case-sensitive.
Example: Changing the default terminator from the semicolon to '^' (caret), using it to submit a stored procedure definition, and then restoring the semicolon as the terminator character:
SET TERM ^;
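/* A minimal, illustrative procedure definition goes here, terminated
   with the alternative terminator that was just defined: */
CREATE OR ALTER PROCEDURE HELLO
RETURNS (GREETING VARCHAR(20))
AS
BEGIN
  GREETING = 'Hello!';
  SUSPEND;
END^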
SET TERM ;^
Stored Procedures
A stored procedure is a program stored in the database metadata for execution on the server. A stored procedure
can be called by stored procedures (including itself), triggers and client applications. A procedure that calls itself
is known as recursive.
Stored procedures offer several advantages:
1. Modularity: applications working with the database can use the same stored procedure, thereby reducing the size of the application code and avoiding code duplication.
2. Simpler Application Support: when a stored procedure is modified, changes appear immediately to all host applications, without the need to recompile them if the parameters were unchanged.
3. Enhanced Performance: since stored procedures are executed on a server instead of at the client, network traffic is reduced, which improves performance.
Executable Procedures
Executable procedures usually modify data in a database. They can receive input parameters and return a single
set of output (RETURNS) parameters. They are called using the EXECUTE PROCEDURE statement. See an
example of an executable stored procedure at the end of the CREATE PROCEDURE section of Chapter 5.
Selectable Procedures
Selectable stored procedures usually retrieve data from a database, returning an arbitrary number of rows to the
caller. The caller receives the output one row at a time from a row buffer that the database engine prepares for it.
Selectable procedures can be useful for obtaining complex sets of data that are often impossible or too difficult
or too slow to retrieve using regular DSQL SELECT queries. Typically, this style of procedure iterates through
a looping process of extracting data, perhaps transforming it before filling the output variables (parameters) with
fresh data at each iteration of the loop. A SUSPEND statement at the end of the iteration fills the buffer and waits
for the caller to fetch the row. Execution of the next iteration of the loop begins when the buffer has been cleared.
Selectable procedures may have input parameters and the output set is specified by the RETURNS clause in the
header.
A selectable stored procedure is called with a SELECT statement. See an example of a selectable stored proce-
dure at the end of the CREATE PROCEDURE section of Chapter 5.
Syntax (partial):
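The header and body have the following general shape (a sketch only; CREATE PROCEDURE in Chapter 5 gives the complete syntax):

CREATE PROCEDURE procname
  [(<inparam> [, <inparam> ...])]
  [RETURNS (<outparam> [, <outparam> ...])]
AS
  [<declarations>]
BEGIN
  [<PSQL_statements>]
END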
The header of a stored procedure must contain the procedure name, and it must be unique among the names of
stored procedures, tables, and views. It may also define some input and output parameters. Input parameters are
listed after the procedure name inside a pair of brackets. Output parameters, which are mandatory for selectable
procedures, are bracketed inside one RETURNS clause.
The final item in the header (or the first item in the body, depending on your opinion of where the border lies)
is one or more declarations of any local variables and/or named cursors that your procedure might require.
Following the declarations is the main BEGIN...END block that delineates the procedure's PSQL code. With-
in that block could be PSQL and DML statements, flow-of-control blocks, sequences of other BEGIN...END
blocks, including embedded blocks. Blocks, including the main block, may be empty and the procedure will still
compile. It is not unusual to develop a procedure in stages, from an outline.
For more information about creating stored procedures: See CREATE PROCEDURE in Chapter 5, Data Defi-
nition (DDL) Statements.
Syntax (partial):
[<PSQL_statements>]
END
For more information about modifying stored procedures: See ALTER PROCEDURE, CREATE OR ALTER PRO-
CEDURE, RECREATE PROCEDURE, in Chapter 5, Data Definition (DDL) Statements.
Syntax (complete):
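The statement takes just the name of the procedure to be dropped:

DROP PROCEDURE procname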
For more information about deleting stored procedures: See DROP PROCEDURE in Chapter 5, Data Definition
(DDL) Statements.
Stored Functions
Stored PSQL scalar functions are not supported in this version but they are coming in Firebird 3. In Firebird
2.5 and below, you can instead write a selectable stored procedure that returns a scalar result and SELECT it
from your DML query or subquery.
Example: A scalar function call such as
SELECT
  PSQL_FUNC(T.col1, T.col2) AS col3,
  col2
FROM T
can be emulated by selecting from the procedure in a subquery:
SELECT
  (SELECT output_column FROM PSQL_PROC(T.col1)) AS col3,
  col2
FROM T
or
SELECT
  output_column AS col3,
  col2
FROM T
LEFT JOIN PSQL_PROC(T.col1)
PSQL Blocks
A self-contained, unnamed (anonymous) block of PSQL code can be executed dynamically in DSQL, using
the EXECUTE BLOCK syntax. The header of an anonymous PSQL block may optionally contain input and
output parameters. The body may contain local variable and cursor declarations; and a block of PSQL statements
follows.
An anonymous PSQL block is not defined and stored as an object, unlike stored procedures and triggers. It is executed at run time and cannot reference itself.
Just like stored procedures, anonymous PSQL blocks can be used to process data and to retrieve data from the
database.
Syntax (incomplete):
EXECUTE BLOCK
[(<inparam> = ? [, <inparam> = ? ...])]
[RETURNS (<outparam> [, <outparam> ...])]
AS
[<declarations>]
BEGIN
[<PSQL_statements>]
END
Argument Description
inparam Input parameter description
outparam Output parameter description
declarations A section for declaring local variables and named cursors
PSQL statements PSQL and DML statements
Triggers
A trigger is another form of executable code that is stored in the metadata of the database for execution by
the server. A trigger cannot be called directly. It is called automatically (fired) when data-changing events
involving one particular table or view occur.
One trigger applies to exactly one table or view and only one phase in an event (BEFORE or AFTER the event).
A single trigger might be written to fire only when one specific data-changing event occurs (INSERT/UPDATE/DELETE) or it might be written to apply to more than one of those.
A DML trigger is executed in the context of the transaction in which the data-changing DML statement is
running. For triggers that respond to database events, the rule is different: for some of them, a default transaction
is started.
If a POSITION clause is omitted, or if several matching event-phase triggers have the same position number,
then the triggers will fire in alphabetical order.
DML Triggers
DML triggers are those that fire when a DML operation changes the state of data: modifies rows in tables, inserts
new rows or deletes rows. They can be defined for both tables and views.
Trigger Options
Six base options are available for the event-phase combination for tables and views: BEFORE INSERT, AFTER INSERT, BEFORE UPDATE, AFTER UPDATE, BEFORE DELETE and AFTER DELETE.
These base forms are for creating single phase/single-event triggers. Firebird also supports forms for creating
triggers for one phase and multiple-events, BEFORE INSERT OR UPDATE OR DELETE, for example, or AFTER
UPDATE OR DELETE: the combinations are your choice.
Note
Database Triggers
A trigger associated with a database or transaction event can be defined for the following events: CONNECT, DISCONNECT, TRANSACTION START, TRANSACTION COMMIT and TRANSACTION ROLLBACK.
Creating Triggers
Syntax:
AS
[<declarations>]
BEGIN
[<PSQL_statements>]
END
<db_event> ::=
CONNECT
| DISCONNECT
| TRANSACTION START
| TRANSACTION COMMIT
| TRANSACTION ROLLBACK
The header must contain a name for the trigger that is unique among trigger names. It must include the event
or events that will fire the trigger. Also, for a DML trigger it is mandatory to specify the event phase and the
name of the table or view that is to own the trigger.
The body of the trigger can be headed by the declarations of local variables and cursors, if any. Within the
enclosing main BEGIN...END wrapper will be one or more blocks of PSQL statements, which may be empty.
For more information about creating triggers: See CREATE TRIGGER in Chapter 5, Data Definition (DDL)
Statements.
Modifying Triggers
Altering the status, phase, table or view event(s), firing position and code in the body of a DML trigger are all
possible. However, you cannot modify a DML trigger to convert it to a database trigger, nor vice versa. Any
element not specified is left unchanged by ALTER TRIGGER. The alternative statements CREATE OR ALTER
TRIGGER and RECREATE TRIGGER will replace the original trigger definition entirely.
Syntax:
<mutation_list> ::=
<mutation> [OR <mutation> [OR <mutation>]]
<db_event> ::=
CONNECT
| DISCONNECT
| TRANSACTION START
| TRANSACTION COMMIT
| TRANSACTION ROLLBACK
For more information about modifying triggers: See ALTER TRIGGER, CREATE OR ALTER TRIGGER, RECRE-
ATE TRIGGER in Chapter 5, Data Definition (DDL) Statements.
Deleting a Trigger
The DROP TRIGGER statement is used to delete triggers.
Syntax (complete):
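The statement takes just the name of the trigger to be dropped:

DROP TRIGGER trigname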
For more information about deleting triggers: See DROP TRIGGER in Chapter 5, Data Definition (DDL) State-
ments.
The colon marker prefix (:) is used in PSQL to mark a reference to a variable in a DML statement. The
colon marker is not required before variable names in other code and it should never be applied to context
variables.
Assignment Statements
Used for: Assigning a value to a variable
Syntax:
varname = <value_expr>
Argument Description
varname Name of a parameter or local variable
An expression, constant or variable whose value resolves to the same data type
value_expr
as <varname>
PSQL uses the equivalence symbol (=) as its assignment operator. The assignment statement assigns an SQL
expression value on the right to the variable on the left of the operator. The expression can be any valid SQL
expression: it may contain literals, internal variable names, arithmetic, logical and string operations, calls to
internal functions or to external functions (UDFs).
DECLARE CURSOR
Used for: Declaring a named cursor
Syntax:
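The general form is sketched below; the parentheses around the SELECT statement follow the usage shown in the examples:

DECLARE [VARIABLE] cursorname CURSOR FOR (<select statement>);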
Argument Description
cursorname Cursor name
select SELECT statement
The DECLARE CURSOR ... FOR statement binds a named cursor to the result set obtained in the SELECT state-
ment specified in the FOR clause. In the body code, the cursor can be opened, used to walk row-by-row through
the result set and closed. While the cursor is open, the code can perform positioned updates and deletes using
the WHERE CURRENT OF in the UPDATE or DELETE statement.
Cursor Idiosyncrasies
- The optional FOR UPDATE clause can be included in the SELECT statement but its absence does not prevent successful execution of a positioned update or delete
- Care should be taken to ensure that the names of declared cursors do not conflict with any names used subsequently in statements for AS CURSOR clauses
- If the cursor is needed only to walk the result set, it is nearly always easier and less error-prone to use a FOR SELECT statement with the AS CURSOR clause. Declared cursors must be explicitly opened, used to fetch data and closed. The context variable ROW_COUNT has to be checked after each fetch and, if its value is zero, the loop has to be terminated. A FOR SELECT statement checks it automatically.
Nevertheless, declared cursors provide a high level of control over sequential events and allow several cursors
to be managed in parallel.
Each parameter has to have been declared beforehand as a PSQL variable, even if it originates as an input or output parameter. When the cursor is opened, the parameter is assigned the current value of the variable.
Attention!
If the value of a PSQL variable used in the SELECT statement changes during the loop, its new value may (but
not always) be used for the remaining rows. It is better to avoid having such situations arise unintentionally.
If you really need this behaviour, you should test your code carefully to be certain that you know exactly how
changes in the variable affect the result.
Note particularly that the behaviour may depend on the query plan, specifically on the indexes being used. No
strict rules are in place for situations like this currently, but that could change in future versions of Firebird.
2. A collection of scripts for creating views with a PSQL block using named cursors.
EXECUTE BLOCK
RETURNS (
SCRIPT BLOB SUB_TYPE TEXT)
AS
DECLARE VARIABLE FIELDS VARCHAR(8191);
DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
DECLARE VARIABLE RELATION RDB$RELATION_NAME;
DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
DECLARE VARIABLE CUR_R CURSOR FOR (
SELECT
RDB$RELATION_NAME,
RDB$VIEW_SOURCE
FROM
RDB$RELATIONS
WHERE
FIELDS = NULL;
-- The CUR_F cursor will use the value
-- of the RELATION variable initialized above
OPEN CUR_F;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_F
INTO :FIELD_NAME;
IF (ROW_COUNT = 0) THEN
LEAVE;
IF (FIELDS IS NULL) THEN
FIELDS = TRIM(FIELD_NAME);
ELSE
FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
END
CLOSE CUR_F;
SUSPEND;
END
CLOSE CUR_R;
END
DECLARE VARIABLE
Used for: Declaring a local variable
Syntax:
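The general form, reconstructed as a sketch from the argument list and description below:

DECLARE [VARIABLE] varname
  {<datatype> | domain | TYPE OF {domain | COLUMN rel.col}}
  [NOT NULL] [CHARACTER SET charset] [COLLATE collation]
  [{DEFAULT | =} <initvalue>];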
<datatype> ::=
{SMALLINT | INTEGER | BIGINT}
| {FLOAT | DOUBLE PRECISION}
| {DATE | TIME | TIMESTAMP}
| {DECIMAL | NUMERIC} [(precision [, scale])]
| {CHAR | CHARACTER | CHARACTER VARYING | VARCHAR} [(size)]
[CHARACTER SET charset]
| {NCHAR | NATIONAL CHARACTER | NATIONAL CHAR} [VARYING]
[(size)]
| BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset]
| BLOB [(seglen [, subtype_num])]
Argument Description
varname Name of the local variable
datatype An SQL data type
domain The name of an existing domain in this database
rel.col Relation name (table or view) in this database and the name of a column in that relation
precision Precision. From 1 to 18
scale Scale. From 0 to 18, it must be less than or equal to precision
size The maximum size of a string in characters
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size, not greater than 65,535
initvalue Initial value for this variable
literal Literal of a type compatible with the type of the local variable
context_var Any context variable whose type is compatible with the type of the local variable
charset Character set
collation Collation sequence
The statement DECLARE [VARIABLE] is used for declaring a local variable. The keyword VARIABLE can be
omitted. One DECLARE [VARIABLE] statement is required for each local variable. Any number of DECLARE
[VARIABLE] statements can be included and in any order. The name of a local variable must be unique among
the names of local variables and input and output parameters declared for the module.
A domain name can be specified as the type and the variable will inherit all of its attributes.
If the TYPE OF <domain> clause is used instead, the variable will inherit only the domain's data type, and,
if applicable, its character set and collation attributes. Any default value or constraints such as NOT NULL
or CHECK constraints are not inherited.
If the TYPE OF COLUMN <relation.column> option is used to borrow from a column in a table or
view, the variable will inherit only the column's data type, and, if applicable, its character set and collation
attributes. Any other attributes are ignored.
NOT NULL Constraint: The variable can be constrained NOT NULL if required. If a domain has been specified
as the data type and already carries the NOT NULL constraint, it will not be necessary. With the other forms,
including use of a domain that is nullable, the NOT NULL attribute should be included if needed.
CHARACTER SET and COLLATE clauses: Unless specified, the character set and collation sequence of a string
variable will be the database defaults. A CHARACTER SET clause can be included, if required, to handle string
data that is going to be in a different character set. A valid collation sequence (COLLATE clause) can also be
included, with or without the character set clause.
Initializing a Variable: Local variables are NULL when execution of the module begins. They can be initialized
so that a starting or default value is available when they are first referenced. The DEFAULT <initvalue> form
can be used, or just the assignment operator, "=": = <initvalue>. The value can be any type-compatible literal
or context variable.
Important
Be sure to use this clause for any variables that are constrained to be NOT NULL and do not otherwise have
a default value available.
See also: Data Types and Subtypes, Custom Data Types (Domains), CREATE DOMAIN
BEGIN ... END
Used for: Delimiting a block of statements
Syntax:
<block> ::=
BEGIN
<compound_statement>
[<compound_statement> ...]
END
The BEGIN ... END construct is a two-part statement that wraps a block of statements that are executed as one unit
of code. Each block starts with the half-statement BEGIN and ends with the other half-statement END. Blocks
can be nested to unlimited depth. They may be empty, allowing them to act as stubs, without the need to write
dummy statements.
The BEGIN and END statements have no line terminators. However, when defining or altering a PSQL module
in the isql utility, that application requires that the last END statement be followed by its own terminator character,
which was previously switched, using SET TERM, to some string other than a semicolon. That terminator is not
part of the PSQL syntax.
The final, or outermost, END statement in a trigger terminates the trigger. What the final END statement does
in a stored procedure depends on the type of procedure:
In a selectable procedure, the final END statement returns control to the caller, returning SQLCODE 100,
indicating that there are no more rows to retrieve
In an executable procedure, the final END statement returns control to the caller, along with the current values
of any output parameters defined.
Example: A sample procedure from the employee.fdb database, showing simple usage of BEGIN...END
blocks:
SET TERM ^;
CREATE OR ALTER PROCEDURE DEPT_BUDGET (
DNO CHAR(3))
RETURNS (
TOT DECIMAL(12,2))
AS
DECLARE VARIABLE SUMB DECIMAL(12,2);
DECLARE VARIABLE RDNO CHAR(3);
DECLARE VARIABLE CNT INTEGER;
BEGIN
TOT = 0;
SELECT
BUDGET
FROM
DEPARTMENT
WHERE DEPT_NO = :DNO
INTO :TOT;
SELECT
COUNT(BUDGET)
FROM
DEPARTMENT
WHERE HEAD_DEPT = :DNO
INTO :CNT;
IF (CNT = 0) THEN
SUSPEND;
FOR
SELECT
DEPT_NO
FROM
DEPARTMENT
WHERE HEAD_DEPT = :DNO
INTO :RDNO
DO
BEGIN
EXECUTE PROCEDURE DEPT_BUDGET(:RDNO)
RETURNING_VALUES :SUMB;
TOT = TOT + SUMB;
END
SUSPEND;
END^
SET TERM ;^
IF ... THEN ... ELSE
Used for: Conditional jumps
Syntax:
IF (<condition>)
THEN <single_statement> ; | BEGIN <compound_statement> END
[ELSE <single_statement> ; | BEGIN <compound_statement> END]
Argument Description
condition A logical condition returning TRUE, FALSE or UNKNOWN
single_statement A single statement terminated with a semicolon
compound_statement Two or more statements wrapped in BEGIN ... END
The conditional jump statement IF ... THEN is used to branch the execution process in a PSQL module. The
condition is always enclosed in parentheses. If it returns the value TRUE, execution branches to the statement
or the block of statements after the keyword THEN. If an ELSE is present and the condition returns FALSE or
UNKNOWN, execution branches to the statement or the block of statements after it.
Multi-branch Jumps
PSQL does not provide multi-branch jumps, such as CASE or SWITCH. Nevertheless, the CASE search
statement from DSQL is available in PSQL and is able to satisfy at least some use cases in the manner
of a switch:
CASE <test_expr>
WHEN <expr> THEN result
[WHEN <expr> THEN result ...]
[ELSE defaultresult]
END
CASE
WHEN <bool_expr> THEN result
[WHEN <bool_expr> THEN result ...]
[ELSE defaultresult]
END
Example in PSQL:
...
C = CASE
WHEN A=2 THEN 1
WHEN A=1 THEN 3
ELSE 0
END;
...
Example: An example using the IF statement. Assume that the FIRST, LINE2 and LAST variables were declared
earlier.
...
IF (FIRST IS NOT NULL) THEN
LINE2 = FIRST || ' ' || LAST;
ELSE
LINE2 = LAST;
...
WHILE ... DO
Used for: Looping constructs
Syntax:
WHILE <condition> DO
<single_statement> ; | BEGIN <compound_statement> END
Argument Description
condition A logical condition returning TRUE, FALSE or UNKNOWN
single_statement A single statement terminated with a semicolon
compound_statement Two or more statements wrapped in BEGIN ... END
A WHILE statement implements the looping construct in PSQL. The statement or the block of statements will be executed as long as the condition returns TRUE. Loops can be nested to any depth.
Example: A procedure calculating the sum of numbers from 1 to I shows how the looping construct is used.
i = i - 1;
END
END
S
==========
10
See also: IF ... THEN ... ELSE, LEAVE, EXIT, FOR SELECT, FOR EXECUTE STATEMENT
LEAVE
Used for: Terminating a loop
Syntax:
[label:]
<loop>
BEGIN
...
LEAVE [label];
...
END
<loop_stmt> ::=
FOR <select_stmt> INTO <var_list> DO
| FOR EXECUTE STATEMENT ... INTO <var_list> DO
| WHILE (<condition>) DO
Argument Description
label Label
select_stmt SELECT statement
condition A logical condition returning TRUE, FALSE or UNKNOWN
A LEAVE statement immediately terminates the inner loop of a WHILE or FOR looping statement. The LABEL
parameter is optional.
LEAVE can cause an exit from outer loops as well. Code continues to be executed from the first statement after
the termination of the outer loop block.
Examples:
1. Leaving a loop if an error occurs on an insert into the NUMBERS table. The code continues to be executed
from the line C = 0.
...
WHILE (B < 10) DO
BEGIN
INSERT INTO NUMBERS(B)
VALUES (:B);
B = B + 1;
WHEN ANY DO
BEGIN
EXECUTE PROCEDURE LOG_ERROR (
CURRENT_TIMESTAMP,
'ERROR IN B LOOP');
LEAVE;
END
END
C = 0;
...
2. An example using labels in the LEAVE statement. LEAVE LOOPA terminates the outer loop and LEAVE
LOOPB terminates the inner loop. Note that the plain LEAVE statement would be enough to terminate the
inner loop.
...
STMT1 = 'SELECT NAME FROM FARMS';
LOOPA:
FOR EXECUTE STATEMENT :STMT1
INTO :FARM DO
BEGIN
STMT2 = 'SELECT NAME ' || 'FROM ANIMALS WHERE FARM = ''';
LOOPB:
FOR EXECUTE STATEMENT :STMT2 || :FARM || ''''
INTO :ANIMAL DO
BEGIN
IF (ANIMAL = 'FLUFFY') THEN
LEAVE LOOPB;
ELSE IF (ANIMAL = FARM) THEN
LEAVE LOOPA;
ELSE
SUSPEND;
END
END
...
EXIT
Used for: Terminating module execution
Syntax:
EXIT;
The EXIT statement causes execution of the procedure or trigger to jump to the final END statement from any
point in the code, thus terminating the program.
SUSPEND
Used for: Passing output to the buffer and suspending execution while waiting for caller to fetch it
Syntax:
SUSPEND;
The SUSPEND statement is used in a selectable stored procedure to pass the values of output parameters to a
buffer and suspend execution. Execution remains suspended until the calling application fetches the contents
of the buffer. Execution resumes from the statement directly after the SUSPEND statement. In practice, this is
likely to be a new iteration of a looping process.
Important Notes
1. Applications using interfaces that wrap the API perform the fetches from selectable procedures transpar-
ently.
2. When a SUSPEND statement is executed in an executable stored procedure, it is the same as executing the
EXIT statement, resulting in immediate termination of the procedure.
3. SUSPEND breaks the atomicity of the block in which it is located. If an error occurs in a selectable
procedure, statements executed after the final SUSPEND statement will be rolled back. Statements that
executed before the final SUSPEND statement will not be rolled back unless the transaction is rolled back.
EXECUTE STATEMENT
Used for: Executing dynamically created SQL statements
Syntax:
Argument Description
paramless_stmt Literal string or variable containing a non-parameterized SQL query
stmt_with_params Literal string or variable containing a parameterized SQL query
paramname SQL query parameter name
value_expr SQL expression resolving to a value
user User name. It can be a string, CURRENT_USER or a string variable
password Password. It can be a string or a string variable
role Role. It can be a string, CURRENT_ROLE or a string variable
connection_string Connection string. It can be a string or a string variable
filepath Path to the primary database file
db_alias Database alias
hostname Computer name or IP address
varname Variable
The statement EXECUTE STATEMENT takes a string parameter and executes it as if it were a DSQL statement.
If the statement returns data, it can be passed to local variables by way of an INTO clause.
Parameterized Statements
You can use parameters, either named or positional, in the DSQL statement string. Each parameter must be assigned a value.
2. If the statement has parameters, they must be enclosed in parentheses when EXECUTE STATEMENT is
called, regardless of whether they come directly as strings, as variable names or as expressions
3. Each named parameter must be prefixed by a colon (:) in the statement string itself, but not when the
parameter is assigned a value
4. Positional parameters must be assigned their values in the same order as they appear in the query text
5. The assignment operator for parameters is the special operator ":=", similar to the assignment operator in
Pascal
6. Each named parameter can be used in the statement more than once, but its value must be assigned only once
7. With positional parameters, the number of assigned values must match the number of parameter placehold-
ers (question marks) in the statement exactly
...
DECLARE license_num VARCHAR(15);
DECLARE connect_string VARCHAR (100);
DECLARE stmt VARCHAR (100) =
'SELECT license
FROM cars
WHERE driver = :driver AND location = :loc';
BEGIN
...
SELECT connstr
FROM databases
WHERE cust_id = :id
INTO connect_string;
...
FOR
SELECT id
FROM drivers
INTO current_driver
DO
BEGIN
FOR
SELECT location
FROM driver_locations
WHERE driver_id = :current_driver
INTO current_location
DO
BEGIN
...
EXECUTE STATEMENT (stmt)
(driver := current_driver,
loc := current_location)
ON EXTERNAL connect_string
INTO license_num;
...
Traditionally, the executed SQL statement always ran within the current transaction, and this is still the default.
WITH AUTONOMOUS TRANSACTION causes a separate transaction to be started, with the same parameters as
the current transaction. It will be committed if the statement runs to completion without errors and rolled back
otherwise. WITH COMMON TRANSACTION uses the current transaction if possible.
If the statement must run in a separate connection, an already started transaction within that connection is used,
if available. Otherwise, a new transaction is started with the same parameters as the current transaction. Any new
transactions started under the COMMON regime are committed or rolled back with the current transaction.
By default, the SQL statement is executed with the privileges of the current user. Specifying WITH CALLER
PRIVILEGES adds to this the privileges of the calling procedure or trigger, just as if the statement were executed
directly by the routine. WITH CALLER PRIVILEGES has no effect if the ON EXTERNAL clause is also
present.
Connection Pooling
External connections made by statements WITH COMMON TRANSACTION (the default) will remain open
until the current transaction ends. They can be reused by subsequent calls to EXECUTE STATEMENT, but
only if the connect string is exactly the same, including case
External connections made by statements WITH AUTONOMOUS TRANSACTION are closed as soon as the
statement has been executed
Notice that statements WITH AUTONOMOUS TRANSACTION can and will re-use connections that were
opened earlier by statements WITH COMMON TRANSACTION. If this happens, the reused connection will
be left open after the statement has been executed. (It must be, because it has at least one un-committed
transaction!)
Transaction Pooling
If WITH COMMON TRANSACTION is in effect, transactions will be reused as much as possible. They will be
committed or rolled back together with the current transaction
If WITH AUTONOMOUS TRANSACTION is specified, a fresh transaction will always be started for the state-
ment. This transaction will be committed or rolled back immediately after the statement's execution
Exception Handling
Exception handling: When ON EXTERNAL is used, the extra connection is always made via a so-called external
provider, even if the connection is to the current database. One of the consequences is that exceptions cannot be
caught in the usual way. Every exception caused by the statement is wrapped in either an eds_connection
or an eds_statement error. In order to catch them in your PSQL code, you have to use WHEN GDSCODE
eds_connection, WHEN GDSCODE eds_statement or WHEN ANY.
Note
Without ON EXTERNAL, exceptions are caught in the usual way, even if an extra connection is made to the
current database.
Miscellaneous Notes
The character set used for the external connection is the same as that for the current connection
If ON EXTERNAL is used:
- If at least one of AS USER, PASSWORD and ROLE is present, native authentication is attempted with the given parameter values (locally or remotely, depending on the connect string). No defaults are used for missing parameters
- If all three are absent and the connect string contains no hostname, then the new connection is established
on the local host with the same user and role as the current connection. The term 'local' means 'on the same
machine as the server' here. This is not necessarily the location of the client
- If all three are absent and the connect string contains a hostname, then trusted authentication is attempted
on the remote host (again, 'remote' from the perspective of the server). If this succeeds, the remote operating
system will provide the user name (usually the operating system account under which the Firebird process
runs)
If ON EXTERNAL is absent:
- If at least one of AS USER, PASSWORD and ROLE is present, a new connection to the current database is
opened with the supplied parameter values. No defaults are used for missing parameters
- If all three are absent, the statement is executed within the current connection
Notice
If a parameter value is NULL or '' (empty string), the entire parameter is considered absent. Additionally, AS US-
ER is considered absent if its value is equal to CURRENT_USER, and ROLE if it is the same as CURRENT_ROLE.
2. There are no dependency checks to discover whether tables or columns have been dropped
3. Even though the performance in loops has been significantly improved in Firebird 2.5, execution is still
considerably slower than when the same statements are launched directly
4. Return values are strictly checked for data type in order to avoid unpredictable type-casting exceptions.
For example, the string '1234' would convert to an integer, 1234, but 'abc' would give a conversion error
All in all, this feature is meant to be used very cautiously and you should always take the caveats into account.
If you can achieve the same result with PSQL and/or DSQL, it will almost always be preferable.
FOR SELECT
Used for: Looping row-by-row through a selected result set
Syntax:
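A sketch of the general form, with the INTO clause written as part of the SELECT statement itself (see the description below):

FOR <select_stmt> [AS CURSOR cursorname]
  DO {<single_statement> | <compound_statement>}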
Argument Description
select_stmt SELECT statement
cursorname Cursor name. It must be unique among cursor names in the PSQL module (stored procedure, trigger or PSQL block)
single_statement A single statement, terminated with a semicolon, that performs all the processing for this FOR loop
compound_statement A block of statements wrapped in BEGIN...END, that performs all the processing for this FOR loop
The FOR SELECT statement:
- retrieves each row sequentially from the result set and executes the statement or block of statements on the row. In each iteration of the loop, the field values of the current row are copied into pre-declared variables. Including the AS CURSOR clause enables positioned deletes and updates to be performed (see the notes below)
- can carry named parameters that must be previously declared in the DECLARE VARIABLE statement or exist as input or output parameters of the procedure
- requires an INTO clause that is located at the end of the SELECT ... FROM ... specification. In each iteration of the loop, the field values in the current row are copied to the list of variables specified in the INTO clause. The loop repeats until all rows are retrieved, after which it terminates
- can be terminated before all rows are retrieved by using a LEAVE statement
1. the OPEN, FETCH and CLOSE statements cannot be applied to a cursor surfaced by the AS CURSOR clause
2. the cursor name argument associated with an AS CURSOR clause must not clash with any names created
by DECLARE VARIABLE or DECLARE CURSOR statements at the top of the body code, nor with any other
cursors surfaced by an AS CURSOR clause
3. The optional FOR UPDATE clause in the SELECT statement is not required for a positioned update
SUSPEND;
END
END
END
3. Using the AS CURSOR clause to surface a cursor for the positioned delete of a record:
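A sketch of such a procedure; the Towns table, its columns and the cursor name tcur are illustrative:

CREATE PROCEDURE DELTOWN (TOWNTODELETE VARCHAR(24))
RETURNS (TOWN VARCHAR(24), POP INT)
AS
BEGIN
  FOR SELECT Town, Pop FROM Towns INTO :Town, :Pop AS CURSOR tcur
  DO
  BEGIN
    IF (:Town = :TownToDelete) THEN
      DELETE FROM Towns WHERE CURRENT OF tcur;
    ELSE
      SUSPEND;
  END
END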
FOR EXECUTE STATEMENT
Used for: Looping row-by-row through the result set of a dynamically executed statement
Syntax:
Argument Description
execute_stmt An EXECUTE STATEMENT string
single_statement A single statement, terminated with a semicolon, that performs all the processing for this FOR loop
compound_statement A block of statements wrapped in BEGIN...END, that performs all the processing for this FOR loop
The statement FOR EXECUTE STATEMENT is used, in a manner analogous to FOR SELECT, to loop through the
result set of a dynamically executed query that returns multiple rows.
Example: Executing a dynamically constructed SELECT query that returns a data set:
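A sketch of a procedure that builds and runs such a query; the procedure and parameter names are illustrative:

CREATE PROCEDURE DynamicSampleThree (
  Q_FIELD_NAME VARCHAR(100),
  Q_TABLE_NAME VARCHAR(100))
RETURNS (LINE VARCHAR(32000))
AS
  DECLARE VARIABLE P_ONE_LINE VARCHAR(100);
BEGIN
  LINE = '';
  FOR
    EXECUTE STATEMENT
      'SELECT T1.' || :Q_FIELD_NAME || ' FROM ' || :Q_TABLE_NAME || ' T1'
    INTO :P_ONE_LINE
  DO
    IF (:P_ONE_LINE IS NOT NULL) THEN
      LINE = :LINE || :P_ONE_LINE || ' ';
  SUSPEND;
END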
OPEN
Used for: Opening a declared cursor
Syntax:
OPEN cursorname;
Argument Description
Cursor name. A cursor with this name must be previously declared with a DE-
cursorname
CLARE CURSOR statement
An OPEN statement opens a previously declared cursor, executes the SELECT statement declared for it and makes the first record of the result data set ready to fetch. OPEN can be applied only to cursors previously declared in a DECLARE CURSOR statement.
Note
If the SELECT statement declared for the cursor has parameters, they must be declared as local variables or
exist as input or output parameters before the cursor is declared. When the cursor is opened, the parameter is
assigned the current value of the variable.
Examples:
SET TERM ^;
SET TERM ;^
2. A collection of scripts for creating views using a PSQL block with named cursors:
EXECUTE BLOCK
RETURNS (
SCRIPT BLOB SUB_TYPE TEXT)
AS
DECLARE VARIABLE FIELDS VARCHAR(8191);
DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
DECLARE VARIABLE RELATION RDB$RELATION_NAME;
DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
-- named cursor
DECLARE VARIABLE CUR_R CURSOR FOR (
SELECT
RDB$RELATION_NAME,
RDB$VIEW_SOURCE
FROM
RDB$RELATIONS
WHERE
RDB$VIEW_SOURCE IS NOT NULL);
-- named cursor with local variable
FIELDS = NULL;
-- The CUR_F cursor will use the value
-- of the RELATION variable initialized above
OPEN CUR_F;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_F
INTO :FIELD_NAME;
IF (ROW_COUNT = 0) THEN
LEAVE;
IF (FIELDS IS NULL) THEN
FIELDS = TRIM(FIELD_NAME);
ELSE
FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
END
CLOSE CUR_F;
SUSPEND;
END
CLOSE CUR_R;
END
FETCH
Used for: Fetching successive records from a data set retrieved by a cursor
Syntax:
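The general form, sketched from the description and the examples:

FETCH cursorname INTO [:]varname [, [:]varname ...];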
Argument Description
Cursor name. A cursor with this name must be previously declared with a DE-
cursorname
CLARE CURSOR statement and opened by an OPEN statement.
A FETCH statement fetches the first and successive rows from the result set of the cursor and assigns the column
values to PSQL variables. The FETCH statement can be used only with a cursor declared with the DECLARE
CURSOR statement.
The INTO clause gets data from the current row of the cursor and loads them into PSQL variables.
To check whether all of the rows of the data set have been fetched, examine the context variable ROW_COUNT after each fetch: it returns 1 if a row was fetched and 0 if no row remains, so a ROW_COUNT of zero indicates that the end of the set has been reached.
SET TERM ^;
SET TERM ;^
CLOSE
Used for: Closing a declared cursor
Syntax:
CLOSE cursorname;
Argument Description
Cursor name. A cursor with this name must be previously declared with a DE-
cursorname
CLARE CURSOR statement and opened by an OPEN statement
A CLOSE statement closes an open cursor. Any cursors that are still open will be automatically closed after the
module code completes execution. Only a cursor that was declared with DECLARE CURSOR can be closed with
a CLOSE statement.
SET TERM ^;
IN AUTONOMOUS TRANSACTION
Used for: Executing a statement or a block of statements in an autonomous transaction
Syntax:
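The construct simply wraps a statement or block (a sketch):

IN AUTONOMOUS TRANSACTION DO
  <compound_statement>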
Argument Description
compound_statement A statement or a block of statements
An autonomous transaction has the same isolation level as its parent transaction. Any exception that is thrown in
the block of the autonomous transaction code will result in the autonomous transaction being rolled back and all
made changes being cancelled. If the code executes successfully, the autonomous transaction will be committed.
Example: Using an autonomous transaction in a trigger for the database ON CONNECT event, in order to log
all connection attempts, including those that failed:
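A simplified sketch; the LOG table and the message text are illustrative, and any additional logic for refusing connections is omitted:

CREATE TRIGGER TR_CONNECT ON CONNECT
AS
BEGIN
  -- Write the log record in an autonomous transaction so that it is
  -- preserved even if the connection is subsequently refused and the
  -- default transaction is rolled back.
  IN AUTONOMOUS TRANSACTION DO
    INSERT INTO LOG (MSG)
    VALUES ('User ' || CURRENT_USER || ' connects.');
END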
POST_EVENT
Used for: Notifying listening clients about database events in a module
Syntax:
POST_EVENT event_name;
Argument Description
event_name Event name (message) limited to 127 bytes
The POST_EVENT statement notifies the event manager about the event, which saves it to an event table. When
the transaction is committed, the event manager notifies applications that are signalling their interest in the event.
The event name can be some sort of code or a short message: the choice is open as it is just a string up to 127 bytes.
The content of the string can be a string literal, a variable or any valid SQL expression that resolves to a string.
Example: Notifying the listening applications about inserting a record into the SALES table:
SET TERM ^;
CREATE TRIGGER POST_NEW_ORDER FOR SALES
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
POST_EVENT 'new_order';
END^
SET TERM ;^
System Exceptions
An exception is a message that is generated when an error occurs.
All exceptions handled by Firebird have predefined numeric values for context variables (symbols) and text
messages associated with them. Error messages are output in English by default. Localized Firebird builds are
available, where error messages are translated into other languages.
Complete listings of the system exceptions can be found in Appendix B: Exception Codes and Messages:
Custom Exceptions
Custom exceptions can be declared in the database as persistent objects and called in the PSQL code to signal
specific errors; for instance, to enforce certain business rules. A custom exception consists of an identifier and
a default message of approximately 1000 bytes. For details, see CREATE EXCEPTION.
In PSQL code, exceptions are handled by means of the WHEN statement. Handling an exception in the code
involves either fixing the problem in situ, or stepping past it; either solution allows execution to continue without
returning an exception message to the client.
An exception results in execution being terminated in the block. Instead of passing the execution to the END
statement, the procedure moves outward through levels of nested blocks, starting from the block where the
exception is caught, searching for the code of the handler that knows about this exception. It stops searching
when it finds the first WHEN statement that can handle this exception.
EXCEPTION
Used for: Throwing a user-defined exception or re-throwing an exception
Syntax:
Argument Description
exception_name Exception name
custom_message Alternative message text to be returned to the caller interface when an exception is thrown. Maximum length of the text message is 1,021 bytes
An EXCEPTION statement throws the user-defined exception with the specified name. An alternative message
text of up to 1,021 bytes can optionally override the exception's default message text.
The exception can be handled in the code by means of a WHEN ... DO statement. If there is no specific WHEN ... DO handler for it, the trigger or stored procedure is terminated and all its operations are rolled back. The calling application gets the alternative message text, if any was specified; otherwise, it receives the message originally defined for that exception.
Within the exception-handling blockand only within itthe caught exception can be re-thrown by executing
the EXCEPTION statement without parameters. If located outside the block, the re-thrown EXCEPTION call has
no effect.
Note
Examples:
EXCEPTION EX_BAD_TYPE
'Incorrect record type with id ' || new.id;
3. Throwing an exception upon a condition and replacing the original message with an alternative message:
s.order_status,
c.on_hold,
c.cust_no
FROM
sales s, customer c
WHERE
po_number = :po_num AND
s.cust_no = c.cust_no
INTO :ord_stat,
:hold_stat,
:cust_no;
WHEN ... DO
Used for: Catching an exception and handling the error
Syntax:
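The overall shape of the statement, sketched from the description below:

WHEN {<error> [, <error> ...] | ANY}
DO <compound_statement>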
<error> ::= {
EXCEPTION exception_name
| SQLCODE number
| GDSCODE errcode
}
Argument Description
exception_name Exception name
number SQLCODE error code
errcode Symbolic GDSCODE error name
compound_statement A statement or a block of statements
The WHEN ... DO statement is used to handle errors and user-defined exceptions. The statement catches all errors
and user-defined exceptions listed after the keyword WHEN. If WHEN is followed by the keyword
ANY, the statement catches any error or user-defined exception, even if they have already been handled in a
WHEN block located higher up.
The WHEN ... DO block must be located at the very end of a block of statements, before the block's END statement.
The keyword DO is followed by a statement, or a block of statements inside a BEGIN ... END wrapper, that handles the exception. The SQLCODE, GDSCODE, and SQLSTATE context variables are available in the context
of this statement or block. The EXCEPTION statement, with no parameters, can also be used in this context to
re-throw the error or exception.
Targeting GDSCODE
The argument for the WHEN GDSCODE clause is the symbolic name associated with the internally-defined
exception, such as grant_obj_notfound for GDS error 335544551.
After the DO clause, another GDSCODE context variable, containing the numeric code, becomes available for
use in the statement or the block of statements that code the error handler. That numeric code is required if you
want to compare a GDSCODE exception with a targeted error.
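A sketch of the idea (MSG is a hypothetical variable; 335544551 is the numeric code for grant_obj_notfound, as mentioned above):

...
WHEN GDSCODE grant_obj_notfound, GDSCODE grant_fld_notfound DO
BEGIN
  IF (GDSCODE = 335544551) THEN   -- numeric code for grant_obj_notfound
    MSG = 'Object not found';
  ELSE
    MSG = 'Field not found';
END
...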
The WHEN ... DO statement or block is never executed unless one of the events targeted by its conditions occurs
in run-time. If the statement is executed, even if it actually does nothing, execution will continue as if no error
occurred: the error or user-defined exception neither terminates nor rolls back the operations of the trigger or
stored procedure.
However, if the WHEN ... DO statement or block does nothing to handle or resolve the error, the DML statement
(SELECT, INSERT, UPDATE, DELETE, MERGE) that caused the error will be rolled back and none of the state-
ments below it in the same block of statements are executed.
Important
1. If the error is not caused by one of the DML statements (SELECT, INSERT, UPDATE, DELETE, MERGE),
the entire block of statements will be rolled back, not just the one that caused an error. Any operations
in the WHEN ... DO statement will be rolled back as well. The same limitation applies to the EXECUTE
PROCEDURE statement. Read an interesting discussion of the phenomenon in Firebird Tracker ticket
CORE-4483.
2. In selectable stored procedures, output rows that were already passed to the client in previous iterations of a
FOR SELECT DO SUSPEND loop remain available to the client if an exception is thrown subsequently
in the process of retrieving rows.
All changes made before the statement that caused the error are visible to a WHEN ... DO statement. However, if
you try to log them in an autonomous transaction, those changes are unavailable, because the transaction where
the changes took place is not committed at the point when the autonomous transaction is started. Example 4,
below, demonstrates this behaviour.
Tip
When handling exceptions, it is sometimes desirable to handle the exception by writing a log message to mark
the fault and having execution continue past the faulty record. Logs can be written to regular tables but there is
a problem with that: the log records will disappear if an unhandled error causes the module to stop executing
and a rollback ensues. Use of external tables can be useful here, as data written to them is transaction-indepen-
dent. The linked external file will still be there, regardless of whether the overall process succeeds or not.
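A rough sketch of this approach, assuming external file access is permitted by the server configuration (all object names are hypothetical):

CREATE TABLE error_log EXTERNAL FILE 'error_log.dat' (
  logged_at TIMESTAMP,
  message   CHAR(120)
);

-- at the end of the block in the PSQL module:
WHEN ANY DO
BEGIN
  INSERT INTO error_log (logged_at, message)
  VALUES (CURRENT_TIMESTAMP, 'Record skipped, SQLCODE = ' || SQLCODE);
END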
...
WHEN GDSCODE GRANT_OBJ_NOTFOUND,
GDSCODE GRANT_FLD_NOTFOUND,
GDSCODE GRANT_NOPRIV,
GDSCODE GRANT_NOPRIV_ON_BASE
DO
BEGIN
EXECUTE PROCEDURE LOG_GRANT_ERROR(GDSCODE);
EXIT;
END
...
See also: EXCEPTION, CREATE EXCEPTION, SQLCODE and GDSCODE Error Codes and Message Texts and
SQLSTATE Codes and Message Texts
Chapter 8
Built-in functions
and Variables
REVIEW STATUS
All sections from this point forward to the end of the chapter are awaiting technical and editorial review.
Here, the large collection of context variables, scalar functions and aggregate functions are described.
Context variables
CURRENT_CONNECTION
Available in: DSQL, PSQL
Type: INTEGER
Examples:
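A minimal illustration; conn_id in the second line is a hypothetical column of the table a trigger is defined on:

SELECT CURRENT_CONNECTION FROM RDB$DATABASE;

NEW.conn_id = CURRENT_CONNECTION;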
The value of CURRENT_CONNECTION is stored on the database header page and reset to 0 upon restore. Since
version 2.1, it is incremented upon every new connection. (In previous versions, it was only incremented if the
client read it during a session.) As a result, CURRENT_CONNECTION now indicates the number of connections
since the creation, or most recent restoration, of the database.
CURRENT_DATE
Available in: DSQL, PSQL, ESQL
Type: DATE
Syntax:
CURRENT_DATE
Examples:
Notes:
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_DATE will remain
constant every time it is read. If multiple modules call or trigger each other, the value will remain constant
throughout the duration of the outermost module. If you need a progressing value in PSQL (e.g. to measure
time intervals), use 'TODAY'.
CURRENT_ROLE
Available in: DSQL, PSQL
Description: CURRENT_ROLE is a context variable containing the role of the currently connected user. If there
is no active role, CURRENT_ROLE is NONE.
Type: VARCHAR(31)
Example:
CURRENT_ROLE always represents a valid role or NONE. If a user connects with a non-existing role, the engine
silently resets it to NONE without returning an error.
CURRENT_TIME
Available in: DSQL, PSQL, ESQL
Description: CURRENT_TIME returns the current server time. In versions prior to 2.0, the fractional part used
to be always .0000, giving an effective precision of 0 decimals. From Firebird 2.0 onward you can specify
a precision when polling this variable. The default is still 0 decimals, i.e. seconds precision.
Type: TIME
Syntax:
CURRENT_TIME [(precision)]
precision ::= 0 | 1 | 2 | 3
Parameter Description
precision Precision. The default value is 0. Not supported in ESQL
Examples:
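By way of illustration, the first column below has the default seconds precision, the second milliseconds precision:

SELECT CURRENT_TIME, CURRENT_TIME(3) FROM RDB$DATABASE;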
Notes:
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIME will remain
constant every time it is read. If multiple modules call or trigger each other, the value will remain constant
throughout the duration of the outermost module. If you need a progressing value in PSQL (e.g. to measure
time intervals), use 'NOW'.
CURRENT_TIMESTAMP
Available in: DSQL, PSQL, ESQL
Description: CURRENT_TIMESTAMP returns the current server date and time. In versions prior to 2.0, the frac-
tional part used to be always .0000, giving an effective precision of 0 decimals. From Firebird 2.0 onward
you can specify a precision when polling this variable. The default is 3 decimals, i.e. milliseconds precision.
Type: TIMESTAMP
Syntax:
CURRENT_TIMESTAMP [(precision)]
precision ::= 0 | 1 | 2 | 3
Parameter Description
precision Precision. The default value is 3. Not supported in ESQL
Examples:
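For illustration, the first column below has the default milliseconds precision, the second seconds precision:

SELECT CURRENT_TIMESTAMP, CURRENT_TIMESTAMP(0) FROM RDB$DATABASE;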
Notes:
The default precision of CURRENT_TIME is still 0 decimals, so in Firebird 2.0 and up CURRENT_TIMESTAMP
is no longer the exact sum of CURRENT_DATE and CURRENT_TIME, unless you explicitly specify a preci-
sion.
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIMESTAMP will
remain constant every time it is read. If multiple modules call or trigger each other, the value will remain
constant throughout the duration of the outermost module. If you need a progressing value in PSQL (e.g. to
measure time intervals), use 'NOW'.
CURRENT_TRANSACTION
Available in: DSQL, PSQL
Type: INTEGER
Examples:
New.Txn_ID = current_transaction;
The value of CURRENT_TRANSACTION is stored on the database header page and reset to 0 upon restore. It
is incremented with every new transaction.
CURRENT_USER
Available in: DSQL, PSQL
Description: CURRENT_USER is a context variable containing the name of the currently connected user. It is
fully equivalent to USER.
Type: VARCHAR(31)
Example:
DELETING
Available in: PSQL
Description: Available in triggers only, DELETING indicates if the trigger fired because of a DELETE operation.
Intended for use in multi-action triggers.
Type: boolean
Example:
if (deleting) then
begin
insert into Removed_Cars (id, make, model, removed)
values (old.id, old.make, old.model, current_timestamp);
end
GDSCODE
Available in: PSQL
Description: In a WHEN ... DO error handling block, the GDSCODE context variable contains the numeri-
cal representation of the current Firebird error code. Prior to Firebird 2.0, GDSCODE was only set in WHEN
GDSCODE handlers. Now it may also be non-zero in WHEN ANY, WHEN SQLCODE and WHEN EXCEPTION
blocks, provided that the condition raising the error corresponds with a Firebird error code. Outside error han-
dlers, GDSCODE is always 0. Outside PSQL it doesn't exist at all.
Type: INTEGER
Example:
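A brief sketch, reusing the LOG_GRANT_ERROR procedure from the WHEN ... DO example earlier:

...
WHEN GDSCODE grant_obj_notfound DO
BEGIN
  EXECUTE PROCEDURE log_grant_error (GDSCODE);
  EXIT;
END
...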
Notice
After WHEN GDSCODE, you must use symbolic names like grant_obj_notfound etc. But the GDSCODE context
variable is an INTEGER. If you want to compare it against a specific error, the numeric value must be used,
e.g. 335544551 for grant_obj_notfound.
INSERTING
Available in: PSQL
Description: Available in triggers only, INSERTING indicates if the trigger fired because of an INSERT operation.
Intended for use in multi-action triggers.
Type: boolean
Example:
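By analogy with the DELETING example above (created_at is a hypothetical column):

if (inserting) then
  new.created_at = current_timestamp;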
NEW
Available in: PSQL, triggers only
Description: NEW contains the new version of a database record that has just been inserted or updated. Starting
with Firebird 2.0 it is read-only in AFTER triggers.
Note
In multi-action triggers, introduced in Firebird 1.5, NEW is always available. But if the trigger is fired by a
DELETE, there will be no new version of the record. In that situation, reading from NEW will always return
NULL; writing to it will cause a runtime exception.
'NOW'
Available in: DSQL, PSQL, ESQL
Description: 'NOW' is not a variable but a string literal. It is, however, special in the sense that when you CAST()
it to a date/time type, you will get the current date and/or time. The fractional part of the time used to be always
.0000, giving an effective seconds precision. Since Firebird 2.0 the precision is 3 decimals, i.e. milliseconds.
'NOW' is case-insensitive, and the engine ignores leading or trailing spaces when casting.
Note: Please be advised that these shorthand expressions are evaluated immediately at parse time and stay the
same as long as the statement remains prepared. Thus, even if a query is executed multiple times, the value for
e.g. timestamp 'now' won't change, no matter how much time passes. If you need the value to progress (i.e.
be evaluated upon every call), use a full cast.
Type: CHAR(3)
Examples:
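Two illustrations; the first uses the shorthand date/time literal syntax (evaluated once, at parse time), the second a full cast (evaluated at every execution):

select timestamp 'now' from rdb$database;

select cast ('now' as timestamp) from rdb$database;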
Notes:
'NOW' always returns the actual date/time, even in PSQL modules, where CURRENT_DATE, CURRENT_TIME
and CURRENT_TIMESTAMP return the same value throughout the duration of the outermost routine. This
makes 'NOW' useful for measuring time intervals in triggers, procedures and executable blocks.
OLD
Available in: PSQL, triggers only
Description: OLD contains the existing version of a database record just before a deletion or update. Starting
with Firebird 2.0 it is read-only.
Note
In multi-action triggers, introduced in Firebird 1.5, OLD is always available. But if the trigger is fired by
an INSERT, there is obviously no pre-existing version of the record. In that situation, reading from OLD will
always return NULL; writing to it will cause a runtime exception.
ROW_COUNT
Available in: PSQL
Description: The ROW_COUNT context variable contains the number of rows affected by the most recent DML
statement (INSERT, UPDATE, DELETE, SELECT or FETCH) in the current trigger, stored procedure or executable
block.
Type: INTEGER
Example:
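A typical sketch (table and variable names are hypothetical):

update Figures set Number = 0 where id = :id;
if (row_count = 0) then
  insert into Figures (id, Number) values (:id, 0);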
After a singleton SELECT, ROW_COUNT is 1 if a data row was retrieved and 0 otherwise.
In a FOR SELECT loop, ROW_COUNT is incremented with every iteration (starting at 0 before the first).
After a FETCH from a cursor, ROW_COUNT is 1 if a data row was retrieved and 0 otherwise. Fetching more
records from the same cursor does not increment ROW_COUNT beyond 1.
Note
ROW_COUNT cannot be used to determine the number of rows affected by an EXECUTE STATEMENT or
EXECUTE PROCEDURE command.
SQLCODE
Available in: PSQL
Description: In a WHEN ... DO error handling block, the SQLCODE context variable contains the current SQL
error code. Prior to Firebird 2.0, SQLCODE was only set in WHEN SQLCODE and WHEN ANY handlers. Now it
may also be non-zero in WHEN GDSCODE and WHEN EXCEPTION blocks, provided that the condition raising
the error corresponds with an SQL error code. Outside error handlers, SQLCODE is always 0. Outside PSQL
it doesn't exist at all.
Type: INTEGER
Example:
when any
do
begin
if (sqlcode <> 0) then
Msg = 'An SQL error occurred!';
else
Msg = 'Something bad happened!';
exception ex_custom Msg;
end
Important notice: SQLCODE is now deprecated in favour of the SQL-2003-compliant SQLSTATE status code.
Support for SQLCODE and WHEN SQLCODE will be discontinued in some future version of Firebird.
SQLSTATE
Available in: PSQL
Description: In a WHEN ... DO error handler, the SQLSTATE context variable contains the 5-character,
SQL-2003-compliant status code resulting from the statement that raised the error. Outside error handlers, SQL-
STATE is always '00000'. Outside PSQL it is not available at all.
Type: CHAR(5)
Example:
when any
do
begin
Msg = case sqlstate
when '22003' then 'Numeric value out of range.'
when '22012' then 'Division by zero.'
when '23000' then 'Integrity constraint violation.'
else 'Something bad happened! SQLSTATE = ' || sqlstate
end;
exception ex_custom Msg;
end
Notes:
SQLSTATE is destined to replace SQLCODE. The latter is now deprecated in Firebird and will disappear in
some future version.
Firebird does not (yet) support the syntax WHEN SQLSTATE ... DO. You have to use WHEN ANY and test
the SQLSTATE variable within the handler.
Each SQLSTATE code is the concatenation of a 2-character class and a 3-character subclass. Classes 00
(successful completion), 01 (warning) and 02 (no data) represent completion conditions. Every status code
outside these classes is an exception. Because classes 00, 01 and 02 don't raise an error, they won't ever show
up in the SQLSTATE variable.
For a complete listing of SQLSTATE codes, consult the SQLSTATE Codes and Message Texts section in
Appendix B: Exception Handling, Codes and Messages.
'TODAY'
Available in: DSQL, PSQL, ESQL
Description: 'TODAY' is not a variable but a string literal. It is, however, special in the sense that when you
CAST() it to a date/time type, you will get the current date. 'TODAY' is case-insensitive, and the engine ignores
leading or trailing spaces when casting.
Type: CHAR(5)
Examples:
Notes:
'TODAY' always returns the actual date, even in PSQL modules, where CURRENT_DATE, CURRENT_TIME
and CURRENT_TIMESTAMP return the same value throughout the duration of the outermost routine. This
makes 'TODAY' useful for measuring time intervals in triggers, procedures and executable blocks (at least if
your procedures are running for days).
Except in the situation mentioned above, reading CURRENT_DATE is generally preferable to casting 'NOW'.
'TOMORROW'
Available in: DSQL, PSQL, ESQL
Description: 'TOMORROW' is not a variable but a string literal. It is, however, special in the sense that when you
CAST() it to a date/time type, you will get the date of the next day. See also 'TODAY'.
Type: CHAR(8)
Examples:
UPDATING
Available in: PSQL
Description: Available in triggers only, UPDATING indicates if the trigger fired because of an UPDATE opera-
tion. Intended for use in multi-action triggers.
Type: boolean
Example:
'YESTERDAY'
Available in: DSQL, PSQL, ESQL
Description: 'YESTERDAY' is not a variable but a string literal. It is, however, special in the sense that when you
CAST() it to a date/time type, you will get the date of the day before. See also 'TODAY'.
Type: CHAR(9)
Examples:
USER
Available in: DSQL, PSQL
Description: USER is a context variable containing the name of the currently connected user. It is fully equivalent
to CURRENT_USER.
Type: VARCHAR(31)
Example:
New.purchases = 0;
end
Scalar Functions
Upgraders: PLEASE READ!
A large number of functions that were implemented as external functions (UDFs) in earlier versions of
Firebird have been progressively re-implemented as internal (built-in) functions. If some external function
of the same name as a built-in one is declared in your database, it will remain there and it will override
any internal function of the same name.
To make the internal function available, you need either to DROP the UDF, or to use ALTER EXTERNAL
FUNCTION to change the declared name of the UDF.
RDB$GET_CONTEXT()
Note
RDB$GET_CONTEXT and its counterpart RDB$SET_CONTEXT are actually predeclared UDFs. They are listed
here as internal functions because they are always present; the user doesn't have to do anything to make them
available.
Description: Retrieves the value of a context variable from one of the namespaces SYSTEM, USER_SESSION
and USER_TRANSACTION.
Syntax:
Parameter Description
namespace Namespace
varname Variable name. Case-sensitive. Maximum length is 80 characters
The namespaces: The USER_SESSION and USER_TRANSACTION namespaces are initially empty. The user can
create and set variables in them with RDB$SET_CONTEXT() and retrieve them with RDB$GET_CONTEXT(). The
SYSTEM namespace is read-only. It contains a number of predefined variables, shown in the table below.
DB_NAME Either the full path to the database or, if connecting via the path is disallowed, its alias.
NETWORK_PROTOCOL The protocol used for the connection: 'TCPv4', 'WNET', 'XNET' or NULL.
CLIENT_ADDRESS For TCPv4, this is the IP address. For XNET, the local process ID. For all other
protocols this variable is NULL.
CURRENT_USER Same as global CURRENT_USER variable.
CURRENT_ROLE Same as global CURRENT_ROLE variable.
SESSION_ID Same as global CURRENT_CONNECTION variable.
TRANSACTION_ID Same as global CURRENT_TRANSACTION variable.
ISOLATION_LEVEL The isolation level of the current transaction: 'READ COMMITTED', 'SNAPSHOT'
or 'CONSISTENCY'.
ENGINE_VERSION The Firebird engine (server) version. Added in 2.1.
Return values and error behaviour: If the polled variable exists in the given namespace, its value will be returned
as a string of max. 255 characters. If the namespace doesn't exist or if you try to access a non-existing variable
in the SYSTEM namespace, an error is raised. If you poll a non-existing variable in one of the other namespaces,
NULL is returned. Both namespace and variable names must be given as single-quoted, case-sensitive, non-NULL
strings.
Examples:
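For illustration; DB_NAME is one of the predefined SYSTEM variables listed above, while MyVar is a hypothetical user variable:

select rdb$get_context('SYSTEM', 'DB_NAME') from rdb$database;

select rdb$get_context('USER_SESSION', 'MyVar') from rdb$database;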
RDB$SET_CONTEXT()
Note
RDB$SET_CONTEXT and its counterpart RDB$GET_CONTEXT are actually predeclared UDFs. They are listed
here as internal functions because they are always present; the user doesn't have to do anything to make them
available.
Description: Creates, sets or unsets a variable in one of the user-writable namespaces USER_SESSION and
USER_TRANSACTION.
Syntax:
Parameter Description
namespace Namespace
varname Variable name. Case-sensitive. Maximum length is 80 characters
value Data of any type provided it can be cast to VARCHAR(255)
The namespaces: The USER_SESSION and USER_TRANSACTION namespaces are initially empty. The user can
create and set variables in them with RDB$SET_CONTEXT() and retrieve them with RDB$GET_CONTEXT(). The
USER_SESSION context is bound to the current connection. Variables in USER_TRANSACTION only exist in the
transaction in which they have been set. When the transaction ends, the context and all the variables defined
in it are destroyed.
Return values and error behaviour: The function returns 1 if the variable already existed before the call and 0
if it didn't. To remove a variable from a context, set it to NULL. If the given namespace doesn't exist, an error is
raised. Both namespace and variable names must be entered as single-quoted, case-sensitive, non-NULL strings.
Examples:
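Illustrative calls (variable names hypothetical); the second uses the void-function style that is possible in PSQL:

select rdb$set_context('USER_SESSION', 'MyVar', 493) from rdb$database;

rdb$set_context('USER_TRANSACTION', 'DeletingAllowed', 'Yes');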
Notes:
All USER_TRANSACTION variables will survive a ROLLBACK RETAIN (see ROLLBACK Options) or ROLL-
BACK TO SAVEPOINT unaltered, no matter at which point during the transaction they were set.
Due to its UDF-like nature, RDB$SET_CONTEXT can, in PSQL only, be called like a void function, without
assigning the result, as in the second example above. Regular internal functions don't allow this type of use.
Mathematical Functions
ABS()
Syntax:
ABS (number)
Parameter Description
value An expression of a numeric type
ACOS()
Syntax:
ACOS (number)
Parameter Description
value An expression of a numeric type within the range [-1; 1]
ASIN()
Available in: DSQL, PSQL
Syntax:
ASIN (number)
Parameter Description
value An expression of a numeric type within the range [-1; 1]
ATAN()
Available in: DSQL, PSQL
Syntax:
ATAN (number)
Parameter Description
value An expression of a numeric type
Description: The function ATAN returns the arc tangent of the argument. The result is an angle in the range
<-pi/2, pi/2>.
ATAN2()
Available in: DSQL, PSQL
Syntax:
ATAN2 (y, x)
Parameter Description
x An expression of a numeric type
y An expression of a numeric type
Description: Returns the angle whose sine-to-cosine ratio is given by the two arguments, and whose sine and
cosine signs correspond to the signs of the arguments. This allows results across the entire circle, including the
angles -pi/2 and pi/2.
If both y and x are 0, the result is meaningless. Starting with Firebird 3, an error will be raised if both arguments
are 0; as of v.2.5.4, the situation remains unchanged in the 2.x versions. For more details, visit Tracker ticket CORE-3201.
Notes:
A fully equivalent description of this function is the following: ATAN2(y, x) is the angle between the pos-
itive X-axis and the line from the origin to the point (x, y). This also makes it obvious that ATAN2(0, 0)
is undefined.
If both sine and cosine of the angle are already known, ATAN2(sin, cos) gives the angle.
CEIL(), CEILING()
Syntax:
CEIL[ING] (number)
Parameter Description
number An expression of a numeric type
Description: Returns the smallest whole number greater than or equal to the argument.
COS()
Available in: DSQL, PSQL
Syntax:
COS (angle)
Parameter Description
angle An angle in radians
COSH()
Available in: DSQL, PSQL
Syntax:
COSH (number)
Parameter Description
number A number of a numeric type
COT()
Available in: DSQL, PSQL
Syntax:
COT (angle)
Parameter Description
angle An angle in radians
EXP()
Available in: DSQL, PSQL
Syntax:
EXP (number)
Parameter Description
number A number of a numeric type
FLOOR()
Available in: DSQL, PSQL
Syntax:
FLOOR (number)
Parameter Description
number An expression of a numeric type
Description: Returns the largest whole number smaller than or equal to the argument.
LN()
Available in: DSQL, PSQL
Syntax:
LN (number)
Parameter Description
number An expression of a numeric type
LOG()
Available in: DSQL, PSQL
Syntax:
LOG (x, y)
Parameter Description
x Base. An expression of a numeric type
y An expression of a numeric type
If either argument is 0 or below, an error is raised. (Before 2.5, this would result in NaN, INF or 0, depending
on the exact values of the arguments.)
LOG10()
Syntax:
LOG10 (number)
Parameter Description
number An expression of a numeric type
An error is raised if the argument is negative or 0. (In versions prior to 2.5, such values would result in NaN
and -INF, respectively.)
MOD()
Syntax:
MOD (a, b)
Parameter Description
a An expression of a numeric type
b An expression of a numeric type
Non-integer arguments are rounded before the division takes place. So, 7.5 mod 2.5 gives 2 (8 mod 3), not 0.
PI()
Syntax:
PI ()
POWER()
Syntax:
POWER (x, y)
Parameter Description
x An expression of a numeric type
y An expression of a numeric type
RAND()
Syntax:
RAND ()
ROUND()
Available in: DSQL, PSQL
Syntax:
Parameter Description
number An expression of a numeric type
scale An integer specifying the number of decimal places toward which rounding is to be performed
Description: Rounds a number to the nearest integer. If the fractional part is exactly 0.5, rounding is upward
for positive numbers and downward for negative numbers. With the optional scale argument, the number can
be rounded to powers-of-ten multiples (tens, hundreds, tenths, hundredths, etc.) instead of just integers.
Important
If you are used to the behaviour of the external function ROUND, please notice that the internal function
always rounds halves away from zero, i.e. downward for negative numbers.
Examples: If the scale argument is present, the result usually has the same scale as the first argument:
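For instance (illustrative values):

ROUND(123.654, 1) -- returns 123.700 (not 123.7)
ROUND(45.1212, 0) -- returns 45.0000 (not 45)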
ROUND(45.1212) -- returns 45
SIGN()
Syntax:
SIGN (number)
Parameter Description
number An expression of a numeric type
SIN()
Syntax:
SIN (angle)
Parameter Description
angle An angle, in radians
SINH()
Syntax:
SINH (number)
Parameter Description
number An expression of a numeric type
SQRT()
Available in: DSQL, PSQL
Syntax:
SQRT (number)
Parameter Description
number An expression of a numeric type
TAN()
Available in: DSQL, PSQL
Syntax:
TAN (angle)
Parameter Description
angle An angle, in radians
TANH()
Syntax:
TANH (number)
Parameter Description
number An expression of a numeric type
Due to rounding, any non-NULL result is in the range [-1, 1] (mathematically, it's <-1, 1>).
TRUNC()
Syntax:
Parameter Description
number An expression of a numeric type
scale An integer specifying the number of decimal places toward which truncating is to be performed, e.g. -2 for truncating to the nearest multiple of 100
Description: Returns the integer part of a number. With the optional scale argument, the number can be
truncated to powers-of-ten multiples (tens, hundreds, tenths, hundredths, etc.) instead of just integers.
Notes:
If the scale argument is present, the result usually has the same scale as the first argument, e.g.
Important
If you are used to the behaviour of the external function TRUNCATE, please notice that the internal function
TRUNC always truncates toward zero, i.e. upward for negative numbers.
ASCII_CHAR()
Syntax:
ASCII_CHAR (<code>)
Parameter Description
code An integer within the range from 0 to 255
Description: Returns the ASCII character corresponding to the number passed in the argument.
Important
If you are used to the behaviour of the ASCII_CHAR UDF, which returns an empty string if the argument
is 0, please notice that the internal function correctly returns a character with ASCII code 0 here.
ASCII_VAL()
Syntax:
ASCII_VAL (ch)
Parameter Description
ch A string of the [VAR]CHAR data type or a text BLOB with the maximum size of 32,767 bytes
If the argument is a string with more than one character, the ASCII code of the first character is returned.
If the first character of the argument string is multi-byte, an error is raised. (A bug in Firebird 2.1 to 2.1.3
and 2.5 causes an error to be raised if any character in the string is multi-byte. This is fixed in versions 2.1.4
and 2.5.1.)
BIT_LENGTH()
Syntax:
BIT_LENGTH (string)
Parameter Description
string An expression of a string type
Description: Gives the length in bits of the input string. For multi-byte character sets, this may be less
than the number of characters times 8 times the formal number of bytes per character as found in
RDB$CHARACTER_SETS.
Note
With arguments of type CHAR, this function takes the entire formal string length (e.g. the declared length of a
field or variable) into account. If you want to obtain the logical bit length, not counting the trailing spaces,
right-TRIM the argument before passing it to BIT_LENGTH.
BLOB support: Since Firebird 2.1, this function fully supports text BLOBs of any length and character set.
Examples:
select bit_length
  (cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 80: ü and ß take up two bytes each in UTF8

select bit_length
  (cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 208: all 24 CHAR positions count, and two of them are 16-bit
CHAR_LENGTH(), CHARACTER_LENGTH()
Syntax:
CHAR_LENGTH (str)
CHARACTER_LENGTH (string)
Parameter Description
string An expression of a string type
Notes
With arguments of type CHAR, this function returns the formal string length (i.e. the declared length of a
field or variable). If you want to obtain the logical length, not counting the trailing spaces, right-TRIM the
argument before passing it to CHAR[ACTER]_LENGTH.
BLOB support: Since Firebird 2.1, this function fully supports text BLOBs of any length and character set.
Examples:
select char_length
  (cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 8; the fact that ü and ß take up two bytes each is irrelevant

select char_length
  (cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 24: all 24 CHAR positions count
HASH()
Syntax:
HASH (string)
Parameter Description
string An expression of a string type
Description: Returns a hash value for the input string. This function fully supports text BLOBs of any length
and character set.
LEFT()
Syntax:
Parameter Description
string An expression of a string type
number Integer. Defines the number of characters to return
Description: Returns the leftmost part of the argument string. The number of characters is given in the second
argument.
This function fully supports text BLOBs of any length, including those with a multi-byte character set.
If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of
the input string.
If the length argument exceeds the string length, the input string is returned unchanged.
If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes
0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
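A quick illustration of both points:

select left('ABCDEF', 3) from rdb$database    -- returns 'ABC'
select left('ABCDEF', 2.5) from rdb$database  -- returns 'AB' (2.5 rounds to 2)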
LOWER()
Syntax:
LOWER (string)
Parameter Description
string An expression of a string type
Description: Returns the lower-case equivalent of the input string. The exact result depends on the character
set. With ASCII or NONE for instance, only ASCII characters are lowercased; with OCTETS, the entire string is
returned unchanged. Since Firebird 2.1 this function also fully supports text BLOBs of any length and character
set.
Name Clash
Because LOWER is a reserved word, the internal function will take precedence even if the external function
by that name has also been declared. To call the (inferior!) external function, use double-quotes and the exact
capitalisation, as in "LOWER"(str).
Example:
LPAD()
Syntax:
Parameter Description
str An expression of a string type
endlen Output string length
padstr The character or string to be used to pad the source string up to the specified length. Default is space (' ')
Description: Left-pads a string with spaces or with a user-supplied string until a given length is reached.
This function fully supports text BLOBs of any length and character set.
If padstr is given and equals '' (empty string), no padding takes place.
If endlen is less than the current string length, the string is truncated to endlen, even if padstr is the
empty string.
Note
In Firebird 2.1 to 2.1.3, all non-BLOB results were of type VARCHAR(32765), which made it advisable to cast
them to a more modest size. This is no longer the case.
Warning
When used on a BLOB, this function may need to load the entire object into memory. Although it does try to
limit memory consumption, this may affect performance if huge BLOBs are involved.
Examples:
OCTET_LENGTH()
Syntax:
OCTET_LENGTH (string)
Parameter Description
string An expression of a string type
Description: Gives the length in bytes (octets) of the input string. For multi-byte character sets, this may
be less than the number of characters times the formal number of bytes per character as found in
RDB$CHARACTER_SETS.
Note
With arguments of type CHAR, this function takes the entire formal string length (e.g. the declared length of a
field or variable) into account. If you want to obtain the logical byte length, not counting the trailing spaces,
right-TRIM the argument before passing it to OCTET_LENGTH.
BLOB support: Since Firebird 2.1, this function fully supports text BLOBs of any length and character set.
Examples:
select octet_length
  (cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 10: ü and ß take up two bytes each in UTF8

select octet_length
  (cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 26: all 24 CHAR positions count, and two of them are 2-byte
OVERLAY()
Syntax:
Parameter Description
string The string into which the replacement takes place
replacement Replacement string
pos The position from which replacement takes place (starting position)
length The number of characters that are to be overwritten
Description: OVERLAY() overwrites part of a string with another string. By default, the number of characters
removed from (overwritten in) the host string equals the length of the replacement string. With the optional
fourth argument, a different number of characters can be specified for removal.
If string or replacement is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with
n the sum of the lengths of string and replacement.
If pos is beyond the end of string, replacement is placed directly after string.
If the number of characters from pos to the end of string is smaller than the length of replacement (or
than the length argument, if present), string is truncated at pos and replacement placed after it.
The effect of a FOR 0 clause is that replacement is simply inserted into string.
If pos or length is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes 0,
1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
Examples:
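Illustrations of the default behaviour and of the FOR clause:

select overlay('Goodbye World' placing 'Hello' from 1 for 7) from rdb$database
-- returns 'Hello World'

select overlay('Hello World' placing 'Firebird' from 7 for 5) from rdb$database
-- returns 'Hello Firebird'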
Warning
When used on a BLOB, this function may need to load the entire object into memory. This may affect perfor-
mance if huge BLOBs are involved.
POSITION()
Syntax:
Parameter Description
substr The substring whose position is to be searched for
string The string which is to be searched
startpos The position in string where the search is to start
Description: Returns the (1-based) position of the first occurrence of a substring in a host string. With the
optional third argument, the search starts at a given offset, disregarding any matches that may occur earlier in
the string. If no match is found, the result is 0.
Notes:
The optional third argument is only supported in the second syntax (comma syntax).
The empty string is considered a substring of every string. Therefore, if substr is '' (empty string) and
string is not NULL, the result is 1 if no startpos is given; startpos, if it is given and lies within the string;
and 0 if it lies beyond the end of the string.
Notice: A bug in Firebird 2.1 to 2.1.3 and 2.5 causes POSITION to always return 1 if substr is the empty
string. This is fixed in 2.1.4 and 2.5.1.
This function fully supports text BLOBs of any size and character set.
Examples:
Warning
When used on a BLOB, this function may need to load the entire object into memory. This may affect perfor-
mance if huge BLOBs are involved.
REPLACE()
Syntax:
Parameter Description
str The string in which the replacement is to take place
find The string to search for
repl The replacement string
This function fully supports text BLOBs of any length and character set.
If any argument is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n calculated
from the lengths of str, find and repl in such a way that even the maximum possible number of replacements won't
overflow the field.
If repl is the empty string, all occurrences of find are deleted from str.
If any argument is NULL, the result is always NULL, even if nothing would have been replaced.
Examples:
Warning
When used on a BLOB, this function may need to load the entire object into memory. This may affect perfor-
mance if huge BLOBs are involved.
REVERSE()
Syntax:
REVERSE (str)
Parameter Description
string An expression of a string type
Examples:
Tip
This function comes in very handy if you want to group, search or order on string endings, e.g. when dealing
with domain names or email addresses:
RIGHT()
Syntax:
Parameter Description
string An expression of a string type
length Integer. Defines the number of characters to return
Description: Returns the rightmost part of the argument string. The number of characters is given in the second
argument.
This function supports text BLOBs of any length, but has a bug in versions 2.1 to 2.1.3 and 2.5 that makes
it fail with text BLOBs larger than 1024 bytes that have a multi-byte character set. This has been fixed in
versions 2.1.4 and 2.5.1.
If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of
the input string.
If the length argument exceeds the string length, the input string is returned unchanged.
If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes
0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
Warning
When used on a BLOB, this function may need to load the entire object into memory. This may affect perfor-
mance if huge BLOBs are involved.
RPAD()
Available in: DSQL, PSQL
Syntax:
Parameter Description
str An expression of a string type
endlen Output string length
padstr The character or string to be used to pad the source string up to the specified length. Default is space (' ')
Description: Right-pads a string with spaces or with a user-supplied string until a given length is reached.
This function fully supports text BLOBs of any length and character set.
If padstr is given and equals '' (empty string), no padding takes place.
If endlen is less than the current string length, the string is truncated to endlen, even if padstr is the
empty string.
Note
In Firebird 2.1 to 2.1.3, all non-BLOB results were of type VARCHAR(32765), which made it advisable to cast
them to a more modest size. This is no longer the case.
Examples:
Warning
When used on a BLOB, this function may need to load the entire object into memory. Although it does try to
limit memory consumption, this may affect performance if huge BLOBs are involved.
SUBSTRING()
Available in: DSQL, PSQL
Syntax:
Parameter Description
str An expression of a string type
startpos Integer expression, the position from which to start retrieving the substring
length The number of characters to retrieve after the <startpos>
Description: Returns a string's substring starting at the given position, either to the end of the string or with
a given length.
This function returns the substring starting at character position startpos (the first position being 1). Without
the FOR argument, it returns all the remaining characters in the string. With FOR, it returns length characters
or the remainder of the string, whichever is shorter.
In Firebird 1.x, startpos and length must be integer literals. In 2.0 and above they can be any valid integer
expression.
Starting with Firebird 2.1, this function fully supports binary and text BLOBs of any length and character set. If
str is a BLOB, the result is also a BLOB. For any other argument type, the result is a VARCHAR(n). Previously,
the result type used to be CHAR(n) if the argument was a CHAR(n) or a string literal.
For non-BLOB arguments, the width of the result field is always equal to the length of str, regardless of start-
pos and length. So, substring('pinhead' from 4 for 2) will return a VARCHAR(7) containing
the string 'he'.
Bugs
If str is a BLOB and the length argument is not present, the output is limited to 32767 characters.
Workaround: with long BLOBs, always specify char_length(str), or a sufficiently high integer, as the
third argument, unless you are sure that the requested substring fits within 32767 characters.
This bug has been fixed in version 2.5.1; the fix was also backported to 2.1.5.
An older bug in Firebird 2.0, which caused the function to return spurious empty strings if startpos or
length was NULL, has been fixed.
Example:
Warning
When used on a BLOB, this function may need to load the entire object into memory. Although it does try to
limit memory consumption, this may affect performance if huge BLOBs are involved.
TRIM()
Available in: DSQL, PSQL
Syntax:
Parameter Description
str An expression of a string type
where The position the substring is to be removed from: BOTH | LEADING | TRAILING. BOTH is the default
what The substring that should be removed (multiple times if there are several matches) from the beginning | the end | both sides of the input string <str>. By default it is space (' ')
Description: Removes leading and/or trailing spaces (or optionally other strings) from the input string. Since
Firebird 2.1 this function fully supports text BLOBs of any length and character set.
Examples:
select trim (leading from ' Waste no space ') from rdb$database
-- returns 'Waste no space '
select trim (leading '.' from ' Waste no space ') from rdb$database
-- returns ' Waste no space '
select trim ('la' from 'lalala I love you Ella') from rdb$database
-- returns ' I love you El'
select trim ('la' from 'Lalala I love you Ella') from rdb$database
-- returns 'Lalala I love you El'
Notes:
If str is a BLOB, the result is a BLOB. Otherwise, it is a VARCHAR(n) with n the formal length of str.
The substring to be removed, if specified, may not be bigger than 32767 bytes. However, if this substring is
repeated at str's head or tail, the total number of bytes removed may be far greater. (The restriction on the
size of the substring will be lifted in Firebird 3.)
Warning
When used on a BLOB, this function may need to load the entire object into memory. This may affect perfor-
mance if huge BLOBs are involved.
UPPER()
Available in: DSQL, ESQL, PSQL
Syntax:
UPPER (str)
Parameter Description
str An expression of a string type
Description: Returns the upper-case equivalent of the input string. The exact result depends on the character
set. With ASCII or NONE for instance, only ASCII characters are uppercased; with OCTETS, the entire string is
returned unchanged. Since Firebird 2.1 this function also fully supports text BLOBs of any length and character
set.
Examples:
DATEADD()
Syntax:
DATEADD (<args>)
Parameter Description
amount An integer expression of the SMALLINT, INTEGER or BIGINT type. A negative value is subtracted
unit Date/time unit
datetime An expression of the DATE, TIME or TIMESTAMP type
Description: Adds the specified number of years, months, weeks, days, hours, minutes, seconds or milliseconds
to a date/time value. (The WEEK unit is new in 2.5.)
With TIMESTAMP and DATE arguments, all units can be used. (Prior to Firebird 2.5, units smaller than DAY
were disallowed for DATEs.)
With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
Examples:
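Illustrative calls, using both argument forms:

DATEADD (28 DAY TO CURRENT_DATE)
DATEADD (HOUR, -6, CURRENT_TIMESTAMP)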
DATEDIFF()
Syntax:
DATEDIFF (<args>)
Parameter Description
unit Date/time unit
moment1 An expression of the DATE, TIME or TIMESTAMP type
moment2 An expression of the DATE, TIME or TIMESTAMP type
Description: Returns the number of years, months, weeks, days, hours, minutes, seconds or milliseconds elapsed
between two date/time values. (The WEEK unit is new in 2.5.)
DATE and TIMESTAMP arguments can be combined. No other mixes are allowed.
With TIMESTAMP and DATE arguments, all units can be used. (Prior to Firebird 2.5, units smaller than DAY
were disallowed for DATEs.)
With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
Computation:
DATEDIFF doesn't look at any smaller units than the one specified in the first argument. As a result, the value returned for, say, YEAR is simply the difference between the year parts of the two dates, even if the two moments lie less than a year apart.
Examples:
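Illustrative calls, using both argument forms:

DATEDIFF (DAY FROM DATE '1-Jan-2011' TO CURRENT_DATE)
DATEDIFF (YEAR, DATE '1-Jan-2009', DATE '31-Dec-2009')  -- returns 0: the year parts are equal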
EXTRACT()
Syntax:
Parameter Description
part Date/time unit
datetime An expression of the DATE, TIME or TIMESTAMP type
Description: Extracts and returns an element from a DATE, TIME or TIMESTAMP expression. This function was
already added in InterBase 6, but not documented in the Language Reference at the time.
The returned data types and possible ranges are shown in the table below. If you try to extract a part that isn't
present in the date/time argument (e.g. SECOND from a DATE or YEAR from a TIME), an error occurs.
MILLISECOND
Description: Firebird 2.1 and up support extraction of the millisecond from a TIME or TIMESTAMP. The datatype
returned is NUMERIC(9,1).
Note
If you extract the millisecond from CURRENT_TIME, be aware that this variable defaults to seconds precision,
so the result will always be 0. Extract from CURRENT_TIME(3) or CURRENT_TIMESTAMP to get milliseconds
precision.
WEEK
Description: Firebird 2.1 and up support extraction of the ISO-8601 week number from a DATE or TIMESTAMP.
ISO-8601 weeks start on a Monday and always have the full seven days. Week 1 is the first week that has a
majority (at least 4) of its days in the new year. The first 1 to 3 days of the year may belong to the last week (52
or 53) of the previous year. Likewise, a year's final 1 to 3 days may belong to week 1 of the following year.
Caution
Be careful when combining WEEK and YEAR results. For instance, 30 December 2008 lies in week 1 of 2009,
so extract (week from date '30 Dec 2008') returns 1. However, extracting YEAR always gives
the calendar year, which is 2008. In this case, WEEK and YEAR are at odds with each other. The same happens
when the first days of January belong to the last week of the previous year.
Please also notice that WEEKDAY is not ISO-8601 compliant: it returns 0 for Sunday, whereas ISO-8601
specifies 7.
CAST()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
value SQL expression
datatype SQL data type
domain Domain
colname Table or view column name
precision Precision. From 1 to 18
scale Scale. From 0 to 18; it must be less than or equal to precision
size The maximum size of a string in characters
charset Character set
subtype_num BLOB subtype number
subtype_name BLOB subtype mnemonic name
seglen Segment size; it cannot be greater than 65,535
Description: CAST converts an expression to the desired datatype or domain. If the conversion is not possible,
an error is raised.
Shorthand Syntax
Alternative syntax, supported only when casting a string literal to a DATE, TIME or TIMESTAMP:
datatype 'date/timestring'
This syntax was already available in InterBase, but was never properly documented.
Note
The short syntax is evaluated immediately at parse time, causing the value to stay the same until the statement
is unprepared. For datetime literals like '12-Oct-2012' this makes no difference. For the pseudo-variables
'NOW', 'YESTERDAY', 'TODAY' and 'TOMORROW', this may not be what you want. If you need the value to
be evaluated at every call, use the full CAST() syntax.
Examples:
A full-syntax cast:
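(Illustrative sketches; People, AgeCat and BirthDate are hypothetical names. The first statement uses the full syntax, the second a shorthand cast, and the third a cast that cannot be omitted:)

select cast ('12' || '-June-' || '1959' as date) from rdb$database;

update People set AgeCat = 'Old'
  where BirthDate < date '1-Jan-1943';

select cast ('today' as date) - 7 from rdb$database;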
Notice that in the second sketch you could even drop the shorthand cast and write simply BirthDate <
'1-Jan-1943', as the engine will understand from the context (comparison to a DATE field) how to interpret
the string. But this is not always possible: the cast in the third sketch cannot be dropped, otherwise the engine
would find itself with an integer to be subtracted from a string.
The following table shows the type conversions possible with CAST.
From              To
Numeric types     Numeric types, [VAR]CHAR, BLOB
[VAR]CHAR, BLOB   [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP
DATE, TIME        [VAR]CHAR, BLOB, TIMESTAMP
TIMESTAMP         [VAR]CHAR, BLOB, DATE, TIME
Keep in mind that sometimes information is lost, for instance when you cast a TIMESTAMP to a DATE. Also, the
fact that types are CAST-compatible is in itself no guarantee that a conversion will succeed. CAST(123456789
as SMALLINT) will definitely result in an error, as will CAST('Judgement Day' as DATE).
Casting input fields: Since Firebird 2.0, you can cast statement parameters to a datatype:
cast (? as integer)
This gives you control over the type of input field set up by the engine. Please notice that with statement param-
eters, you always need a full-syntax cast; shorthand casts are not supported.
Casting to a domain or its type: Firebird 2.1 and above support casting to a domain or its base type. When casting
to a domain, any constraints (NOT NULL and/or CHECK) declared for the domain must be satisfied or the cast will
fail. Please be aware that a CHECK passes if it evaluates to TRUE or NULL! So, given the following statements:
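As an illustrative sketch (the domain quint and its CHECK threshold of 5000 are assumptions chosen to match the discussion that follows):

create domain quint as int check (value >= 5000);

select cast (2000 as quint) from rdb$database   -- error: the CHECK fails
select cast (8000 as quint) from rdb$database   -- passes
select cast (null as quint) from rdb$database   -- also passes: the CHECK evaluates to NULL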
When the TYPE OF modifier is used, the expression is cast to the base type of the domain, ignoring any con-
straints. With domain quint defined as above, the following two casts are equivalent and will both succeed:
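For instance, continuing the quint sketch, both of the following succeed:

cast (2000 as type of quint)
cast (2000 as int)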
If TYPE OF is used with a (VAR)CHAR type, its character set and collation are retained.
Warning
If a domain's definition is changed, existing CASTs to that domain or its type may become invalid. If these
CASTs occur in PSQL modules, their invalidation may be detected. See the note The RDB$VALID_BLR field,
in Appendix A.
Casting to a column's type: In Firebird 2.5 and above, it is possible to cast expressions to the type of an existing
table or view column. Only the type itself is used; in the case of string types, this includes the character set but
not the collation. Constraints and default values of the source column are not applied.
select cast ('Jag har många vänner' as type of column ttt.s) from rdb$database;
Warnings
For text types, character set and collation are preserved by the cast, just as when casting to a domain.
However, due to a bug, the collation is not always taken into consideration when comparisons (e.g. equality
tests) are made. In cases where the collation is of importance, test your code thoroughly before deploying!
This bug is fixed for Firebird 3.
If a column's definition is altered, existing CASTs to that column's type may become invalid. If these CASTs
occur in PSQL modules, their invalidation may be detected. See the note The RDB$VALID_BLR field, in
Appendix A.
Casting BLOBs: Successful casting to and from BLOBs is possible since Firebird 2.1.
BIN_AND()
Syntax:
Parameter Description
number Any integer number (literal, smallint/integer/bigint, numeric/decimal with scale 0)
Note
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Description: Returns the result of the bitwise AND operation on the argument(s).
BIN_NOT()
Syntax:
BIN_NOT (number)
Parameter Description
number Any integer number (literal, smallint/integer/bigint, numeric/decimal with scale 0)
Note
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Description: Returns the result of the bitwise NOT operation on the argument, i.e., ones complement.
BIN_OR()
Syntax:
Parameter Description
number Any integer number (literal, smallint/integer/bigint, numeric/decimal with scale 0)
Note
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
BIN_SHL()
Syntax:
Parameter Description
number A number of an integer type
shift The number of bits the number value is shifted by
Description: Returns the first argument bitwise left-shifted by the second argument, i.e. a << b or a * 2^b.
BIN_SHR()
Syntax:
Parameter Description
number A number of an integer type
shift The number of bits the number value is shifted by
Description: Returns the first argument bitwise right-shifted by the second argument, i.e. a >> b or a/2^b.
The operation performed is an arithmetic right shift (SAR), meaning that the sign of the first operand is
always preserved.
BIN_XOR()
Syntax:
Parameter Description
number Any integer number (literal, smallint/integer/bigint, numeric/decimal with scale 0)
Description: Returns the result of the bitwise XOR operation on the argument(s).
Note
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
CHAR_TO_UUID()
Syntax:
CHAR_TO_UUID (ascii_uuid)
Parameter Description
ascii_uuid A 36-character representation of UUID. '-' (hyphen) in positions 9, 14, 19 and 24; valid hexadecimal digits in any other positions, e.g. 'A0bF4E45-3029-2a44-D493-4998c9b439A3'
Description: Converts a human-readable 36-char UUID string to the corresponding 16-byte UUID.
Examples:
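For instance, using the sample string from the table above:

select char_to_uuid('A0bF4E45-3029-2a44-D493-4998c9b439A3') from rdb$database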
GEN_UUID()
Syntax:
GEN_UUID ()
Example:
UUID_TO_CHAR()
Syntax:
UUID_TO_CHAR (uuid)
Parameter Description
uuid 16-byte UUID
Examples:
GEN_ID()
Syntax:
Parameter Description
generator-name Name of a generator (sequence) that exists. If it has been defined in double quotes with a case-sensitive identifier, it must be used in the same form unless the name is all upper-case.
step An integer expression
Description: Increments a generator or sequence and returns its new value. If step equals 0, the function will
leave the value of the generator unchanged and return its current value.
From Firebird 2.0 onward, the SQL-compliant NEXT VALUE FOR syntax is preferred, except when an incre-
ment other than 1 is needed.
Example:
Warning
If the value of the step parameter is less than zero, it will decrease the value of the generator. Attention! You
should be extremely cautious with such manipulations in the database, as they could compromise data integrity.
Conditional Functions
COALESCE()
Syntax:
Parameter Description
exp1, exp2 ... expN A list of expressions of any compatible types
Description: The COALESCE function takes two or more arguments and returns the value of the first non-NULL
argument. If all the arguments evaluate to NULL, the result is NULL.
Example: This example picks the Nickname from the Persons table. If it happens to be NULL, it goes on to
FirstName. If that too is NULL, Mr./Mrs. is used. Finally, it adds the family name. All in all, it tries to use
the available data to compose a full name that is as informal as possible. Notice that this scheme only works if
absent nicknames and first names are really NULL: if one of them is an empty string instead, COALESCE will
happily return that to the caller.
select
coalesce (Nickname, FirstName, 'Mr./Mrs.') || ' ' || LastName
as FullName
from Persons
DECODE()
Syntax:
DECODE(testexpr,
  expr1, result1
  [, expr2, result2 ...]
  [, defaultresult])

The above is equivalent to the following CASE construct:

CASE testexpr
  WHEN expr1 THEN result1
  [WHEN expr2 THEN result2 ...]
  [ELSE defaultresult]
END
Parameter Description
testexpr An expression of any compatible type that is compared to the expressions expr1, expr2 ... exprN
expr1, expr2 ... exprN Expressions of any compatible types, to which the <testexpr> expression is compared
result1, result2 ... resultN Returned values of any type
defaultresult The expression to be returned if none of the conditions is met
Description: DECODE is a shortcut for the so-called simple CASE construct, in which a given expression is
compared to a number of other expressions until a match is found. The result is determined by the value listed
after the matching expression. If no match is found, the default result is returned, if present. Otherwise, NULL
is returned.
Caution
Matching is done with the = operator, so if <testexpr> is NULL, it won't match any of the <expr>s, not
even those that are NULL.
Example:
select name,
age,
decode( upper(sex),
'M', 'Male',
'F', 'Female',
'Unknown' ),
religion
from people
IIF()
Syntax:
Parameter Description
condition A true|false expression
resultT The value returned if the condition is true
resultF The value returned if the condition is false
Description: IIF takes three arguments. If the first evaluates to true, the second argument is returned; otherwise
the third is returned.
Example:
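A sketch, reusing the people table and sex column that appear in the DECODE example above:

select name, iif(upper(sex) = 'M', 'Sir', 'Madam') as salutation
from people;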
Note
IIF(Cond, Result1, Result2) is a shortcut for CASE WHEN Cond THEN Result1 ELSE Result2 END.
MAXVALUE()
Syntax:
Parameter Description
expr1 ... exprN List of expressions of compatible types
Result type: Varies according to input; the result will be of the same data type as the first expression in the list (<expr1>).
Description: Returns the maximum value from a list of numerical, string, or date/time expressions. This function
fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MAXVALUE returns NULL. This behaviour differs from the
aggregate function MAX.
Example:
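A sketch with hypothetical column and table names:

select maxvalue(price_1, price_2) as best_price
from pricelist;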
MINVALUE()
Syntax:
Parameter Description
expr1 ... exprN List of expressions of compatible types
Result type: Varies according to input; the result will be of the same data type as the first expression in the list (<expr1>).
Description: Returns the minimum value from a list of numerical, string, or date/time expressions. This function
fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MINVALUE returns NULL. This behaviour differs from the
aggregate function MIN.
Example:
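A sketch along the same lines as MAXVALUE, again with hypothetical names:

select minvalue(price_1, price_2) as lowest_price
from pricelist;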
NULLIF()
Syntax:
Parameter Description
exp1 An expression
exp2 Another expression of a data type compatible with <exp1>
Description: NULLIF returns the value of the first argument, unless it is equal to the second. In that case, NULL
is returned.
Example:
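A sketch of the query the explanation below refers to:

select avg( nullif(Weight, -1) )
from FatPeople;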
This will return the average weight of the persons listed in FatPeople, excluding those having a weight of -1,
since AVG skips NULL data. Presumably, -1 indicates weight unknown in this table. A plain AVG(Weight)
would include the -1 weights, thus skewing the result.
Aggregate Functions
Aggregate functions operate on groups of records, rather than on individual records or variables. They are often
used in combination with a GROUP BY clause.
AVG()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
expr Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns a numeric data type. Aggregate functions are not allowed as expressions
Description: AVG returns the average argument value in the group. NULL is ignored.
Parameter ALL (the default) applies the aggregate function to all values.
Parameter DISTINCT directs the AVG function to consider only one instance of each unique value, no matter
how many times this value occurs.
If the set of retrieved records is empty or contains only NULL, the result will be NULL.
Result type: A numeric data type, the same as the data type of the argument.
Syntax:
AVG (expression)
Example:
SELECT
dept_no,
AVG(salary)
FROM employee
GROUP BY dept_no
COUNT()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
expr Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns a numeric data type. Aggregate functions are not allowed as expressions
ALL is the default: it simply counts all values in the set that are not NULL.
If COUNT (*) is specified instead of the expression <expr>, all rows will be counted. COUNT (*)
- does not take an <expr> argument, since its context is column-unspecific by definition
- counts each row separately and returns the number of rows in the specified table or group without omitting
duplicate rows
If the result set is empty or contains only NULL in the specified column[s], the returned count is zero.
Example:
SELECT
dept_no,
COUNT(*) AS cnt,
COUNT(DISTINCT name) AS cnt_name
FROM employee
GROUP BY dept_no
LIST()
Available in: DSQL, PSQL
Syntax:
Parameter Description
expr Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns the string data type or a BLOB. Fields of numeric and date/time types are converted to strings. Aggregate functions are not allowed as expressions
separator Optional alternative separator, a string expression. Comma is the default separator
Description: LIST returns a string consisting of the non-NULL argument values in the group, separated either
by a comma or by a user-supplied separator. If there are no non-NULL values (this includes the case where the
group is empty), NULL is returned.
ALL (the default) results in all non-NULL values being listed. With DISTINCT, duplicates are removed, except
if expression is a BLOB.
In Firebird 2.5 and up, the optional separator argument may be any string expression. This makes it
possible to specify e.g. ascii_char(13) as a separator. (This improvement has also been backported to
2.1.4.)
The expression and separator arguments support BLOBs of any size and character set.
Date/time and numeric arguments are implicitly converted to strings before concatenation.
The result is a text BLOB, except when expression is a BLOB of another subtype.
The ordering of the list values is undefined: the order in which the strings are concatenated is determined by
read order from the source set which, in tables, is not generally defined. If ordering is important, the source
data can be pre-sorted using a derived table or similar.
Examples:
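A sketch, reusing the employee table from the other aggregate examples in this chapter:

SELECT
  dept_no,
  LIST(name) AS employees
FROM employee
GROUP BY dept_no;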
MAX()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
expr Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
Result type: Returns a result of the same data type as the input expression.
Description: MAX returns the maximum non-NULL element in the result set.
If the input argument is a string, the function will return the value that will be sorted last if COLLATE is used.
This function fully supports text BLOBs of any size and character set.
Note
The DISTINCT parameter makes no sense if used with MAX() and is implemented only for compliance with
the standard.
Example:
SELECT
dept_no,
MAX(salary)
FROM employee
GROUP BY dept_no
MIN()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
expr Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
Result type: Returns a result of the same data type as the input expression.
Description: MIN returns the minimum non-NULL element in the result set.
If the input argument is a string, the function will return the value that will be sorted first if COLLATE is used.
This function fully supports text BLOBs of any size and character set.
Note
The DISTINCT parameter makes no sense if used with MIN() and is implemented only for compliance with
the standard.
Example:
SELECT
dept_no,
MIN(salary)
FROM employee
GROUP BY dept_no
SUM()
Available in: DSQL, ESQL, PSQL
Syntax:
Parameter Description
expr Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
Result type: Returns a result of the same numeric data type as the input expression.
Description: SUM calculates and returns the sum of non-null values in the group.
ALL is the default option: all values in the set that are not NULL are processed. If DISTINCT is specified,
duplicates are removed from the set and the SUM evaluation is done afterwards.
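By way of illustration, a sketch in the style of the other aggregate examples:

SELECT
  dept_no,
  SUM(salary)
FROM employee
GROUP BY dept_no;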
Chapter 9
Transaction Control
REVIEW STATUS
All sections from this point forward to the end of the chapter are awaiting technical and editorial review.
Everything in Firebird happens in transactions. Units of work are isolated between a start point and an end point.
Changes to data remain reversible until the moment the client application instructs the server to commit them.
Transaction Statements
Firebird has a small lexicon of SQL statements that are used by client applications to start, manage, commit and
reverse (roll back) the transactions that form the boundaries of all database tasks:
SET TRANSACTION: to configure and start a transaction
COMMIT: to signal the end of a unit of work and write changes permanently to the database
ROLLBACK: to undo the changes made in the transaction, either completely or back to a named savepoint
SAVEPOINT: to mark a position in the log of work done, in case a partial rollback is needed
RELEASE SAVEPOINT: to erase a savepoint once it is no longer needed
SET TRANSACTION
Used for: Configuring and starting a transaction
Syntax:
SET TRANSACTION
[NAME tr_name]
[READ WRITE | READ ONLY]
[[ISOLATION LEVEL] {
SNAPSHOT [TABLE STABILITY]
| READ COMMITTED [[NO] RECORD_VERSION] }]
[WAIT | NO WAIT]
[LOCK TIMEOUT seconds]
[NO AUTO UNDO]
[IGNORE LIMBO]
[RESERVING <tables> | USING <dbhandles>]
Parameter Description
tr_name Transaction name. Available only in ESQL
seconds The time in seconds for the statement to wait in case a conflict occurs
tables The list of tables to reserve
dbhandles The list of databases the transaction can access. Available only in ESQL
table_spec Table reservation specification
tablename The name of the table to reserve
dbhandle The handle of a database the transaction can access. Available only in ESQL
The SET TRANSACTION statement configures the transaction and starts it. As a rule, only client applications start
transactions. The exceptions are the occasions when the server starts an autonomous transaction or transactions
for certain background system threads/processes, such as sweeping.
A client application can start any number of concurrently running transactions. A limit does exist, for the total
number of running transactions in all client applications working with one particular database from the moment
the database was restored from its backup copy or from the moment the database was created originally. The
limit is 2^31 - 1, or 2,147,483,647.
All clauses in the SET TRANSACTION statement are optional. If the statement starting a transaction has no
clauses specified in it, the transaction will be started with default values for access mode, lock resolution mode
and isolation level, which are:
SET TRANSACTION
READ WRITE
WAIT
ISOLATION LEVEL SNAPSHOT;
The server assigns integer numbers to transactions sequentially. Whenever a client starts any transaction, either
explicitly defined or by default, the server sends the transaction ID to the client. This number can be retrieved
in SQL using the context variable CURRENT_TRANSACTION.
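By way of illustration, a client might start a transaction with non-default settings along these lines (a sketch only; any combination of the optional clauses shown in the syntax above may be used):

SET TRANSACTION
  READ WRITE
  NO WAIT
  ISOLATION LEVEL READ COMMITTED RECORD_VERSION;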
Transaction Parameters
The main parameters of a transaction are:
- data access mode (READ WRITE, READ ONLY)
- lock resolution mode (WAIT, NO WAIT), with an optional LOCK TIMEOUT
- isolation level (SNAPSHOT, SNAPSHOT TABLE STABILITY, READ COMMITTED)
- a mechanism for reserving or releasing tables (the RESERVING clause)
Transaction Name
The optional NAME attribute defines the name of a transaction. Use of this attribute is available only in Embed-
ded SQL. In ESQL applications, named transactions make it possible to have several transactions active simul-
taneously in one application. If named transactions are used, a host-language variable with the same name must
be declared and initialized for each named transaction. This is a limitation that prevents dynamic specification
of transaction names and thus, rules out transaction naming in DSQL.
Access Mode
The two database access modes for transactions are READ WRITE and READ ONLY.
If the access mode is READ WRITE, operations in the context of this transaction can be both read operations
and data update operations. This is the default mode.
If the access mode is READ ONLY, only SELECT operations can be executed in the context of this trans-
action. Any attempt to change data in the context of such a transaction will result in database exceptions.
However, it does not apply to global temporary tables (GTT) that are allowed to be changed in READ ONLY
transactions.
When several client processes work with the same database, locks may occur when one process makes uncom-
mitted changes in a table row, or deletes a row, and another process tries to update or delete the same row. Such
locks are called update conflicts.
Locks may occur in other situations when multiple transaction isolation levels are used.
WAIT Mode
In the WAIT mode (the default mode), if a conflict occurs between two parallel processes executing concurrent
data updates in the same database, a WAIT transaction will wait till the other transaction has finished, by
committing (COMMIT) or rolling back (ROLLBACK). The client application with the WAIT transaction will
be put on hold until the conflict is resolved.
If a LOCK TIMEOUT is specified for the WAIT transaction, waiting will continue only for the number of
seconds specified in this clause. If the lock is unresolved at the end of the specified interval, the error message
Lock time-out on wait transaction is returned to the client.
Lock resolution behaviour can vary a little, depending on the transaction isolation level.
NO WAIT Mode
In the NO WAIT mode, a transaction will immediately throw a database exception if a conflict occurs.
Isolation Level
Keeping the work of one database task separated from others is what isolation is about. Changes made by one
statement become visible to all remaining statements executing within the same transaction, regardless of its
isolation level. Changes that are in process within other transactions remain invisible to the current transaction
as long as they remain uncommitted. The isolation level and, sometimes, other attributes, determine how trans-
actions will interact when another transaction wants to commit work.
The ISOLATION LEVEL attribute defines the isolation level for the transaction being started. It is the most
significant transaction parameter for determining its behavior towards other concurrently running transactions.
The three isolation levels supported in Firebird are:
- SNAPSHOT
- SNAPSHOT TABLE STABILITY
- READ COMMITTED, with two specifications (NO RECORD_VERSION and RECORD_VERSION)
The SNAPSHOT isolation level (the default level) allows the transaction to see only those changes that were com-
mitted before this one was started. Any committed changes made by concurrent transactions will not be seen
in a SNAPSHOT transaction while it is active. The changes will become visible to a new transaction once the
current transaction is either committed or rolled back completely, but not if it is just rolled back to a savepoint.
Autonomous Transactions
Changes made by autonomous transactions are not seen in the context of the SNAPSHOT transaction that
launched it.
The SNAPSHOT TABLE STABILITY isolation level is the most restrictive. As in SNAPSHOT, a transaction
in SNAPSHOT TABLE STABILITY isolation sees only those changes that were committed before the current
transaction was started. After a SNAPSHOT TABLE STABILITY is started, no other transactions can make
any changes to any table in the database that has changes pending. Other transactions are able to read other data,
but any attempt at inserting, updating or deleting by a parallel process will cause conflict exceptions.
The RESERVING clause can be used to allow other transactions to change data in some tables.
If any other transaction has an uncommitted change of data pending in any database table before a transaction
with the SNAPSHOT TABLE STABILITY isolation level is started, trying to start a SNAPSHOT TABLE
STABILITY transaction will result in an exception.
The READ COMMITTED isolation level allows all data changes that other transactions have committed since
it started to be seen immediately by the uncommitted current transaction. Uncommitted changes are not visible
to a READ COMMITTED transaction.
To retrieve the updated list of rows in the table you are interested in (to refresh it), the SELECT statement just
needs to be requested again, whilst still in the uncommitted READ COMMITTED transaction.
RECORD_VERSION
One of two modifying parameters can be specified for READ COMMITTED transactions, depending on the
kind of conflict resolution desired: RECORD_VERSION and NO RECORD_VERSION. As the names suggest,
they are mutually exclusive.
NO RECORD_VERSION (the default value) is a kind of two-phase locking mechanism: it will make the
transaction unable to write to any row that has an update pending from another transaction.
- if NO WAIT is the lock resolution strategy specified, it will throw a lock conflict error immediately
- with WAIT specified, it will wait until the other transaction either commits or is rolled back. If the other
transaction is rolled back, or if it is committed and its transaction ID is older than the current transaction's
ID, then the current transaction's change is allowed. A lock conflict error is returned if the other transaction
was committed and its ID was newer than that of the current transaction.
With RECORD_VERSION specified, the transaction reads the latest committed version of the row, regard-
less of other pending versions of the row. The lock resolution strategy (WAIT or NO WAIT) does not affect
the behavior of the transaction at its start in any way.
NO AUTO UNDO
The NO AUTO UNDO option affects the handling of unused record versions (garbage) in the event of rollback.
With NO AUTO UNDO flagged, the ROLLBACK statement just marks the transaction as rolled back without
deleting the unused record versions created in the transaction. They are left to be mopped up later by garbage
collection.
NO AUTO UNDO might be useful when a lot of separate statements are executed that change data in conditions
where the transaction is likely to be committed successfully most of the time.
The NO AUTO UNDO option is ignored for transactions where no changes are made.
IGNORE LIMBO
This flag is used to signal that records created by limbo transactions are to be ignored. Transactions are left in
limbo if the second stage of a two-phase commit fails.
Historical Note
IGNORE LIMBO surfaces the TPB parameter isc_tpb_ignore_limbo, available in the API since Inter-
Base times and mainly used by gfix.
RESERVING
The RESERVING clause in the SET TRANSACTION statement reserves the tables specified in the table list.
Reserving a table prevents other transactions from making changes in it or even, with the inclusion of certain
parameters, from reading data from it while this transaction is running.
A RESERVING clause can also be used to specify a list of tables that can be changed by other transactions,
even if the transaction is started with the SNAPSHOT TABLE STABILITY isolation level.
If one of the keywords SHARED or PROTECTED is omitted, SHARED is assumed. If the whole FOR clause is
omitted, FOR SHARED READ is assumed. The names and compatibility of the four access options for reserving
tables are not obvious.
                  SHARED READ   SHARED WRITE   PROTECTED READ   PROTECTED WRITE
SHARED READ       Yes           Yes            Yes              Yes
SHARED WRITE      Yes           Yes            No               No
PROTECTED READ    Yes           No             Yes              No
PROTECTED WRITE   Yes           No             No               No
The combinations of these RESERVING clause flags for concurrent access depend on the isolation levels of
the concurrent transactions:
SNAPSHOT isolation
- Concurrent SNAPSHOT transactions with SHARED READ do not affect one other's access
- A concurrent mix of SNAPSHOT and READ COMMITTED transactions with SHARED WRITE do not
affect one another's access but they block transactions with SNAPSHOT TABLE STABILITY isolation
from either reading from or writing to the specified table[s]
- Concurrent transactions with any isolation level and PROTECTED READ can only read data from the
reserved tables. Any attempt to write to them will cause an exception
- With PROTECTED WRITE, concurrent transactions with SNAPSHOT and READ COMMITTED isola-
tion cannot write to the specified tables. Transactions with SNAPSHOT TABLE STABILITY isolation
cannot read from or write to the reserved tables at all.
- All concurrent transactions with SHARED READ, regardless of their isolation levels, can read from or
write (if in READ WRITE mode) to the reserved tables
- Concurrent transactions with SNAPSHOT and READ COMMITTED isolation levels and SHARED
WRITE can read data from and write (if in READ WRITE mode) to the specified tables but concurrent
access to those tables from transactions with SNAPSHOT TABLE STABILITY is blocked completely
whilst these transactions are active
- Concurrent transactions with any isolation level and PROTECTED READ can only read from the reserved
tables
- With PROTECTED WRITE, concurrent SNAPSHOT and READ COMMITTED transactions can read
from but not write to the reserved tables. Access by transactions with the SNAPSHOT TABLE STABIL-
ITY isolation level is blocked completely.
- With SHARED READ, all concurrent transactions with any isolation level can both read from and write
(if in READ WRITE mode) to the reserved tables
- SHARED WRITE allows all transactions in SNAPSHOT and READ COMMITTED isolation to read
from and write (if in READ WRITE mode) to the specified tables and blocks access completely from
transactions with SNAPSHOT TABLE STABILITY isolation
- With PROTECTED READ, concurrent transactions with any isolation level can only read from the re-
served tables
- With PROTECTED WRITE, concurrent transactions in SNAPSHOT and READ COMMITTED isolation
can read from but not write to the specified tables. Access from transactions in SNAPSHOT TABLE
STABILITY isolation is blocked completely.
Tip
In Embedded SQL, the USING clause can be used to conserve system resources by limiting the databases
the transaction can access to an enumerated list (of databases). USING is incompatible with RESERVING. A
USING clause in SET TRANSACTION syntax is not supported in DSQL.
COMMIT
Used for: Committing a transaction
Syntax:
Parameter Description
tr_name Transaction name. Available only in ESQL
The COMMIT statement commits all work carried out in the context of this transaction (inserts, updates, deletes,
selects, execution of procedures). New record versions become available to other transactions and, unless the
RETAIN clause is employed, all server resources allocated to its work are released.
If any conflicts or other errors occur in the database during the process of committing the transaction, the trans-
action is not committed and the reasons are passed back to the user application for handling and the opportunity
to attempt another commit or to roll the transaction back.
COMMIT Options
The optional TRANSACTION <tr_name> clause, available only in Embedded SQL, specifies the name of the
transaction to be committed. With no TRANSACTION clause, COMMIT is applied to the default transaction.
Note
In ESQL applications, named transactions make it possible to have several transactions active simultane-
ously in one application. If named transactions are used, a host-language variable with the same name must
be declared and initialized for each named transaction. This is a limitation that prevents dynamic specifica-
tion of transaction names and thus, rules out transaction naming in DSQL.
The optional keyword WORK is supported just for compatibility with other relational database management
systems that require it.
The keyword RELEASE is available only in Embedded SQL and enables disconnection from all databases
after the transaction is committed. RELEASE is retained in Firebird only for compatibility with legacy versions
of InterBase. It has been superseded in ESQL by the DISCONNECT statement.
The RETAIN [SNAPSHOT] clause is used for the "soft" commit, variously referred to amongst host languages and
their practitioners as COMMIT WITH RETAIN, CommitRetaining, "warm commit", et al. The transaction is
committed but some server resources are retained and the transaction is restarted transparently with the same
Transaction ID. The state of row caches and cursors is kept as it was before the soft commit.
For soft-committed transactions whose isolation level is SNAPSHOT or SNAPSHOT TABLE STABILITY,
the view of database state is not updated to reflect changes by other transactions and the user of the application
instance continues to have the same view as when the transaction started originally. Changes made during
the life of the retained transaction are visible to that transaction, of course.
Recommendation
Use of the COMMIT statement in preference to ROLLBACK is recommended for ending transactions that only
read data from the database, because COMMIT consumes fewer server resources and helps to optimize the
performance of subsequent transactions.
ROLLBACK
Used for: Rolling back a transaction
Syntax:
Parameter Description
tr_name Transaction name. Available only in ESQL
sp_name Savepoint name. Available only in DSQL
The ROLLBACK statement rolls back all work carried out in the context of this transaction (inserts, updates,
deletes, selects, execution of procedures). ROLLBACK never fails and, thus, never causes exceptions. Unless
the RETAIN clause is employed, all server resources allocated to the work of the transaction are released.
ROLLBACK Options
The optional TRANSACTION <tr_name> clause, available only in Embedded SQL, specifies the name of the
transaction to be rolled back. With no TRANSACTION clause, ROLLBACK is applied to the default transaction.
Note
In ESQL applications, named transactions make it possible to have several transactions active simultane-
ously in one application. If named transactions are used, a host-language variable with the same name must
be declared and initialized for each named transaction. This is a limitation that prevents dynamic specifica-
tion of transaction names and thus, rules out transaction naming in DSQL.
The optional keyword WORK is supported just for compatibility with other relational database management
systems that require it.
The RETAIN keyword specifies that, although all of the work of the transaction is to be rolled back,
the transaction context is to be retained. Some server resources are retained and the transaction is restarted
transparently with the same Transaction ID. The state of row caches and cursors is kept as it was before the
soft rollback.
For transactions whose isolation level is SNAPSHOT or SNAPSHOT TABLE STABILITY, the view of
database state is not updated by the soft rollback to reflect changes by other transactions. The user of the
application instance continues to have the same view as when the transaction started originally. Changes that
were made and soft-committed during the life of the retained transaction are visible to that transaction, of
course.
ROLLBACK TO SAVEPOINT
The optional TO SAVEPOINT clause in the ROLLBACK statement specifies the name of a savepoint to which
changes are to be rolled back. The effect is to roll back all changes made within the transaction, from the created
savepoint forward until the point when ROLLBACK TO SAVEPOINT is requested.
Any database mutations performed since the savepoint was created are undone. User variables set with
RDB$SET_CONTEXT() remain unchanged.
Any savepoints that were created after the one named are destroyed. Savepoints earlier than the one named are
preserved, along with the named savepoint itself. Repeated rollbacks to the same savepoint are thus allowed.
All implicit and explicit record locks that were acquired since the savepoint are released. Other transactions
that have requested access to rows locked after the savepoint must continue to wait until the transaction is
committed or rolled back. Other transactions that have not already requested the rows can request and access
the unlocked rows immediately.
SAVEPOINT
Used for: Creating a savepoint
Available: DSQL
Syntax:
SAVEPOINT sp_name
Parameter Description
sp_name Savepoint name. Available only in DSQL
The SAVEPOINT statement creates an SQL:99-compliant savepoint that acts as a marker in the stack of da-
ta activities within a transaction. Subsequently, the tasks performed in the stack can be undone back to this
savepoint, leaving the earlier work and older savepoints untouched. Savepoint mechanisms are sometimes char-
acterised as nested transactions.
If a savepoint already exists with the same name as the name supplied for the new one, the existing savepoint
is deleted and a new one is created using the supplied name.
To roll changes back to the savepoint, the statement ROLLBACK TO SAVEPOINT is used.
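A sketch of the workflow, assuming a table TEST with a single integer column ID that is empty at the start:

INSERT INTO TEST (ID) VALUES (1);
COMMIT;
INSERT INTO TEST (ID) VALUES (2);
SAVEPOINT Y;
DELETE FROM TEST;
SELECT * FROM TEST; -- returns no rows
ROLLBACK TO SAVEPOINT Y;
SELECT * FROM TEST; -- returns rows 1 and 2
ROLLBACK;
SELECT * FROM TEST; -- returns only row 1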
Memory Considerations
The internal mechanism beneath savepoints can consume large amounts of memory, especially if the same
rows receive multiple updates in one transaction. When a savepoint is no longer needed but the transaction still
has work to do, a RELEASE SAVEPOINT statement will erase it and thus free the resources.
RELEASE SAVEPOINT
Used for: Erasing a savepoint
Available: DSQL
Syntax:
Parameter Description
sp_name Savepoint name. Available only in DSQL
The statement RELEASE SAVEPOINT erases a named savepoint, freeing up all the resources it encompasses. By
default, all the savepoints created after the named savepoint are released as well. The qualifier ONLY directs
the engine to release only the named savepoint.
Internal Savepoints
By default, the engine uses an automatic transaction-level system savepoint to perform transaction rollback.
When a ROLLBACK statement is issued, all changes performed in this transaction are backed out via a transac-
tion-level savepoint and the transaction is then committed. This logic reduces the amount of garbage collection
caused by rolled back transactions.
When the volume of changes performed under a transaction-level savepoint is getting large (~50000 records
affected), the engine releases the transaction-level savepoint and uses the Transaction Inventory Page (TIP) as
a mechanism to roll back the transaction if needed.
Tip
If you expect the volume of changes in your transaction to be large, you can specify the NO AUTO UNDO option
in your SET TRANSACTION statement to block the creation of the transaction-level savepoint. Using the API
instead, you would set the TPB flag isc_tpb_no_auto_undo.
In PSQL, automatic system savepoints are used to undo all actions performed by the procedure or trigger or,
in the case of a selectable procedure, all actions performed since the last SUSPEND, when execution terminates
prematurely because of an uncaught error or exception.
Each PSQL exception handling block is also bounded by automatic system savepoints.
Note
A BEGIN...END block does not itself create an automatic savepoint. A savepoint is created only in blocks that
contain the WHEN statement for handling exceptions.
Chapter 10
Security
Databases must be secure and so must the data stored in them. Firebird provides two levels of data security
protection: user authentication at the server level and SQL privileges within databases. This chapter tells you
how to manage security at both levels.
User Authentication
The security of the entire database depends on identifying a user and verifying its authority, a procedure known as
authentication. The information about users authorised to access a specific Firebird server is stored in a special
security database named security2.fdb. Each record in security2.fdb is a user account for one user.
A user name, consisting of up to 31 characters, is a case-insensitive system identifier. A user must have a pass-
word, of which the first eight characters are significant. Whilst it is valid to enter a password longer than eight characters,
any subsequent characters are ignored. Passwords are case-sensitive.
If the user specified during the connection is the SYSDBA, the database owner or a specially privileged user,
that user will have unlimited access to the database.
The default SYSDBA password on Windows and MacOS is 'masterkey' (or 'masterke', to be exact, because
of the 8-character length limit).
Extremely Important!
The default password 'masterkey' is known across the universe. It should be changed as soon as the Firebird
server installation is complete.
Other users can acquire elevated privileges in several ways, some of which are dependent on the operating
system platform. These are discussed in the sections that follow and are summarised in Administrators.
POSIX Hosts
On POSIX systems, including MacOSX, Firebird will interpret a POSIX user account as though it were a Firebird
user account in its own security database, provided the server sees the client machine as a trusted host and the
system user accounts exist on both the client and the server. To establish a trusted relationship with the client
host, the corresponding entries must be included in one of the files /etc/hosts.equiv or /etc/gds_hosts.equiv
on Firebird's host server.
The file hosts.equiv contains trusted relationships at operating system level, encompassing all services
(rlogin, rsh, rcp, and so on)
The file gds_hosts.equiv contains trusted relationships between Firebird hosts only.
The format is identical for both files and looks like this:
hostname [username]
On POSIX hosts, other than MacOSX, the SYSDBA user does not have a default password. If the full installation
is done using the standard scripts, a one-off password will be created and stored in a text file in the same directory
as security2.fdb, commonly /opt/firebird/. The name of the password file is SYSDBA.password.
Note
In an installation performed by a distribution-specific installer, the location of the security database and the
password file may be different from the standard one.
The root user can act directly as SYSDBA on POSIX host systems. Firebird interprets root as though it were
SYSDBA and it provides access to all databases on the server.
Windows Hosts
On Windows server-capable operating systems, operating system accounts can be used. Trusted Authentication
must be enabled by setting the Authentication parameter to Trusted or Mixed in the configuration file,
firebird.conf.
Even with trusted authentication enabled, Windows operating system Administrators are not automatically
granted SYSDBA privileges when they connect to a database. To make that happen, the internally-created role
RDB$ADMIN must be altered by SYSDBA or the database owner, to enable it. For details, refer to the later
section entitled AUTO ADMIN MAPPING.
The embedded version of Firebird server on Windows does not use server-level authentication. However, be-
cause objects within a database are subject to SQL privileges, a valid user name and, if applicable, a role, may
be required in the connection parameters.
Owner is not a user name. The user who is the owner of a database has full administrator rights with respect
to that database, including the right to drop it, to restore it from a backup and to enable or disable the AUTO
ADMIN MAPPING capability.
Note
Prior to Firebird 2.1, the owner had no automatic privileges over any database objects that were created by
other users.
RDB$ADMIN Role
The internally-created role RDB$ADMIN is present in every database. Assigning the RDB$ADMIN role to a
regular user in a database grants that user the privileges of the SYSDBA, in the current database only.
The elevated privileges take effect when the user is logged in to that regular database under the RDB$ADMIN
role and give full control over all objects in the database.
Being granted the RDB$ADMIN role in the security database confers the authority to create, edit and delete user
accounts.
In both cases, the user with the elevated privileges can assign RDB$ADMIN role to any other user. In other words,
specifying WITH ADMIN OPTION is unnecessary because it is built into the role.
Note
GRANT ADMIN ROLE and REVOKE ADMIN ROLE are not statements in the GRANT and REVOKE lexicon.
They are three-word parameters to the statements CREATE USER and ALTER USER.
Parameter Description
new_user Using CREATE USER, name for the new user
existing_user Using ALTER USER, Name of an existing user
password Using CREATE USER, password for the new user. Its theoretical limit is 31 bytes but only the first 8 characters are considered.
An alternative is to use gsec with the -admin parameter to store the RDB$ADMIN attribute on the user's record:
Note
Depending on the adminstrative status of the current user, more parameters may be needed when invoking
gsec, e.g., -user and -pass, or -trusted.
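For instance, a sketch (the user name is illustrative; check your gsec version's help for the exact switches and any additional login options your setup requires):

gsec -user SYSDBA -pass masterkey -modify john -admin yes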
To manage user accounts through SQL, the grantee must specify the RDB$ADMIN role when connecting. No
user can connect to the security database, so the solution is that the user connects to a regular database where
he also has RDB$ADMIN rights, supplying the RDB$ADMIN role in his login parameters. From there, he can
submit any SQL user management command.
The SQL route for the user is blocked for any database in which he has not been granted the RDB$ADMIN role.
To perform user management with gsec, the user must provide the extra switch -role rdb$admin.
In order to grant and revoke the RDB$ADMIN role, the grantor must be logged in as an administrator.
To exercise his RDB$ADMIN privileges, the grantee simply includes the role in the connection attributes when
connecting to the database.
In Firebird 2.1, Windows Administrators would automatically receive SYSDBA privileges if trusted authenti-
cation was configured for server connections. In Firebird 2.5, it is no longer automatic. The setting of the AU-
TO ADMIN MAPPING switch now determines whether Administrators have automatic SYSDBA rights, on a
database-by-database basis. By default, when a database is created, it is disabled.
If AUTO ADMIN MAPPING is enabled in the database, it will take effect whenever a Windows Administrator
connects to the database using trusted authentication.
After a successful auto admin connection, the current role is set to RDB$ADMIN.
Either statement must be issued by a user with sufficient rights, that is:
In regular databases, the status of AUTO ADMIN MAPPING is checked only at connection time. If an Adminis-
trator has the RDB$ADMIN role because auto-mapping was on when he logged in, he will keep that role for the
duration of the session, even if he or someone else turns off the mapping in the meantime.
Likewise, switching on AUTO ADMIN MAPPING will not change the current role to RDB$ADMIN for Adminis-
trators who were already connected.
No SQL statements exist to switch automatic mapping on and off in the security database. Instead, gsec must
be used:
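A sketch, assuming the -mapping switch of gsec, plus whatever login switches your installation requires:

gsec -mapping set
gsec -mapping drop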
More gsec switches may be needed, depending on what kind of log-in you used to connect, e.g., -user and
-pass, or -trusted.
Only SYSDBA can set the auto-mapping on if it is disabled. Any administrator can drop (disable) it.
Administrators
As a general description, an administrator is a user that has sufficient rights to read, write to, create, alter or delete
any object in a database to which that user's administrator status applies. The table summarises how Superuser
privileges are enabled in the various Firebird security contexts.
Note
For a Windows Administrator, AUTO ADMIN MAPPING enabled only in a regular database is not sufficient
to permit management of other users. For instructions to enable it in the security database, see Auto Admin
Mapping in the Security Database.
Non-privileged users can use only the ALTER USER statement and only to edit some data in their own accounts.
CREATE USER
Syntax:
Parameter Description
username User name. The maximum length is 31 characters, following the rules for Firebird regular identifiers. It is always case-insensitive
password User password. Its theoretical limit is 31 bytes but only the first 8 characters are considered. Case-sensitive
firstname Optional: User's first name. Maximum length 31 characters
middlename Optional: User's middle name. Maximum length 31 characters
lastname Optional: User's last name. Maximum length 31 characters
Use a CREATE USER statement to create a new Firebird user account. The user must not already exist in the
Firebird security database, or a primary key violation error message will be returned.
The username argument must follow the rules for Firebird regular identifiers: see Identifiers in the Structure
chapter. User names are always case-insensitive. Supplying a user name enclosed in double quotes will not cause
an exception: the quotes will be ignored. If a space is the only illegal character supplied, the user name will be
truncated back to the first space character. Other illegal characters will cause an exception.
The PASSWORD clause specifies the user's password. A password of more than eight characters is accepted with
a warning but any surplus characters will be ignored.
The optional FIRSTNAME, MIDDLENAME and LASTNAME clauses can be used to specify additional user properties,
such as the person's first name, middle name and last name, respectively. They are just simple VARCHAR(31)
fields and can be used to store anything you prefer.
If the GRANT ADMIN ROLE clause is specified, the new user account is created with the privileges of the
RDB$ADMIN role in the security database (security2.fdb). It allows the new user to manage user accounts
from any regular database he logs into, but it does not grant the user any special privileges on objects in those
databases.
To create a user account, the current user must have administrator privileges in the security database. Adminis-
trator privileges only in regular databases are not sufficient.
Note
CREATE / ALTER / DROP USER are DDL statements. Remember to COMMIT your work. In isql, the com-
mand SET AUTO ON will enable autocommit on DDL statements. In third-party tools and other user applica-
tions, this may not be the case.
Examples:
2. Creating the user john with additional properties (first and last names):
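A sketch of such a statement (the password and name values are placeholders):

CREATE USER john PASSWORD 'SomePassw'
  FIRSTNAME 'John'
  LASTNAME 'Doe';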
ALTER USER
Syntax:
Parameter Description
username User name. Cannot be changed.
password User password. Its theoretical limit is 31 bytes but only the first 8 characters are considered. Case-sensitive
firstname Optional: User's first name, or other optional text. Max. length is 31 characters
middlename Optional: User's middle name, or other optional text. Max. length is 31 characters
lastname Optional: User's last name, or other optional text. Max. length is 31 characters
Use an ALTER USER statement to edit the details in the named Firebird user account. To modify the account
of another user, the current user must have administrator privileges in the security database. Administrator
privileges only in regular databases are not sufficient.
Any user can alter his or her own account, except that only an administrator may use GRANT/REVOKE ADMIN
ROLE.
All of the arguments are optional but at least one of them must be present:
The PASSWORD parameter is for specifying a new password for the user
FIRSTNAME, MIDDLENAME and LASTNAME allow updating of the optional user properties, such as the
person's first name, middle name and last name respectively
Including the clause GRANT ADMIN ROLE grants the user the privileges of the RDB$ADMIN role in the
security database (security2.fdb), enabling him/her to manage the accounts of other users. It does not
grant the user any special privileges in regular databases.
Including the clause REVOKE ADMIN ROLE removes the user's administrator privileges in the security database
which, once the transaction is committed, will deny that user the ability to alter any user account except his or her own.
Note
Remember to commit your work if you are working in an application that does not auto-commit DDL.
Examples:
1. Changing the password for the user bobby and granting him user management privileges:
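A sketch (the password value is a placeholder):

ALTER USER bobby PASSWORD 'NewPasswd'
  GRANT ADMIN ROLE;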
2. Editing the optional properties (the first and last names) of the user dan:
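A sketch (the name values are placeholders):

ALTER USER dan
  FIRSTNAME 'Daniel'
  LASTNAME 'Brown';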
DROP USER
Used for: Deleting a Firebird user account
Syntax:
Parameter Description
username User name
Use the statement DROP USER to delete a Firebird user account. The current user requires administrator priv-
ileges.
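For example (the user name is illustrative):

DROP USER kevin;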
Note
Remember to commit your work if you are working in an application that does not auto-commit DDL.
SQL Privileges
The second level of Firebird's security model is SQL privileges. Whilst a successful login (the first level)
authorises a user's access to the server and to all databases under that server, it does not imply that he has
access to any objects in any databases. When an object is created, only the user that created it (its owner) and
administrators have access to it. The user needs privileges on each object he needs to access. As a general rule,
privileges must be granted explicitly to a user by the object owner or an administrator of the database.
A privilege comprises a DML access type (SELECT, INSERT, UPDATE, DELETE, EXECUTE and REFERENCES),
the name of a database object (table, view, procedure, role) and the name of the user (user, procedure, trigger,
role) to which it is granted. Various means are available to grant multiple types of access on an object to multiple
users in a single GRANT statement. Privileges may be withdrawn from a user with REVOKE statements.
Privileges are stored in the database to which they apply and are not applicable to any other database.
Any authenticated user can access any database and create any valid database object. Up to and including this
release, the issue is not controlled.
Not all database objects are associated with an owner: domains, external functions (UDFs), BLOB filters,
generators (sequences) and exceptions are ownerless. Such objects must be regarded as vulnerable on a server
that is not adequately protected.
SYSDBA, the database owner or the object owner can grant privileges to and revoke them from other users,
including privileges to grant privileges to other users. The process of granting and revoking SQL privileges is
implemented with two statements of the general form:
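A sketch of that general pattern, with placeholders only:

GRANT <privileges> ON <OBJECT-TYPE> <object-name> TO <grantee-list>;

REVOKE <privileges> ON <OBJECT-TYPE> <object-name> FROM <grantee-list>;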
The <OBJECT-TYPE> is not required for every type of privilege. For some types of privilege, extra parameters
are available, either as options or as requirements.
GRANT
Syntax:
GRANT {
<privileges> ON [TABLE] {tablename | viewname}
| EXECUTE ON PROCEDURE procname
}
TO <grantee_list>
[WITH GRANT OPTION] [{GRANTED BY | AS} [USER] grantor];
GRANT <role_granted>
TO <role_grantee_list> [WITH ADMIN OPTION]
[{GRANTED BY | AS} [USER] grantor]
<privilege> ::=
SELECT |
DELETE |
INSERT |
UPDATE [ (col [, col ...] ) ] |
REFERENCES (col [, col ...] )
<grantee> ::=
[USER] username | [ROLE] rolename | GROUP Unix_group
| PROCEDURE procname | TRIGGER trigname | VIEW viewname | PUBLIC
Parameter Description
tablename The name of the table the privilege applies to
viewname The name of the view the privilege applies to
procname The name of the stored procedure the EXECUTE privilege applies to; or the name of the procedure to be granted the privilege[s]
col The table column the privilege is to apply to
Unix_group The name of a user group in a POSIX operating system
username The user name to which the privileges are granted or to which the role is assigned
rolename Role name
trigname Trigger name
grantor The user granting the privilege[s]
A GRANT statement grants one or more privileges on database objects to users, roles, stored procedures, triggers
or views.
A regular, authenticated user has no privileges on any database object until they are explicitly granted, either
to that individual user or to all users bundled as the user PUBLIC. When an object is created, only the user
who has created it (the owner) and administrators have privileges for it and can grant privileges to other users,
roles or objects.
Different sets of privileges apply to different types of metadata objects. The different types of privileges will
be described separately later.
The TO Clause
The TO clause is used for listing the users, roles and database objects (procedures, triggers and views) that are
to be granted the privileges enumerated in <privileges>. The clause is mandatory.
The optional USER and ROLE keywords in the TO clause allow you to specify exactly who or what is granted
the privilege. If a USER or ROLE keyword is not specified, the server checks for a role with this name and, if
there is none, the privileges are granted to the user without further checking.
A role is a container object that can be used to package a collection of privileges. Use of the role is then granted
to each user that requires those privileges. A role can also be granted to a list of users.
The role must exist before privileges can be granted to it. See CREATE ROLE in the DDL chapter for the syntax
and rules. The role is maintained by granting privileges to it and, when required, revoking privileges from it. If
a role is dropped (see DROP ROLE), all users lose the privileges acquired through the role. Any privileges that
were granted additionally to an affected user by way of a different grant statement are retained.
A user that is granted a role must supply that role with his login credentials in order to exercise the associated
privileges. Any other privileges granted to the user are not affected by logging in with a role.
More than one role can be granted to the same user but logging in with multiple roles simultaneously is not
supported.
Please note:
When a GRANT statement is executed, the security database is not checked for the existence of the grantee
user. This is not a bug: SQL permissions are concerned with controlling data access for authenticated users,
both native and trusted, and trusted operating system users are not stored in the security database.
When granting a privilege to a database object, such as a procedure, trigger or view, you must specify the
object type between the keyword TO and the object name.
Although the USER and ROLE keywords are optional, it is advisable to use them, in order to avoid ambiguity.
Firebird has a predefined user named PUBLIC, that represents all users. Privileges for operations on a particular
object that are granted to the user PUBLIC can be exercised by any user that has been authenticated at login.
Important
If privileges are granted to the user PUBLIC, they should be revoked from the user PUBLIC as well.
The optional WITH GRANT OPTION clause allows the users specified in the user list to grant the privileges
specified in the privilege list to other users.
Caution
By default, when privileges are granted in a database, the current user is recorded as the grantor. The GRANTED
BY clause enables the current user to grant those privileges as another user.
If the REVOKE statement is used, it will fail if the current user is not the user that was named in the GRANTED
BY clause.
The non-standard AS clause is supported as a synonym of the GRANTED BY clause to simplify migration from
other database systems.
The clauses GRANTED BY and AS can be used only by the database owner and administrators. The object owner
cannot use it unless he also has administrator privileges.
In theory, one GRANT statement grants one privilege to one user or object. In practice, the syntax allows multiple
privileges to be granted to multiple users in one GRANT statement.
Syntax extract:
...
<privileges> ::= ALL [PRIVILEGES] | <privilege_list>
<privilege> ::= {
SELECT |
DELETE |
INSERT |
UPDATE [ (col [, col ...] ) ] |
REFERENCES (col [, col ...] )
}
Privilege Description
SELECT Permits the user or object to SELECT data from the table or view
INSERT Permits the user or object to INSERT rows into the table or view
UPDATE Permits the user or object to UPDATE rows in the table or view, optionally restricted to specific columns
col (Optional) name of a column to which the user's UPDATE privilege is restricted
DELETE Permits the user or object to DELETE rows from the table or view
REFERENCES Permits the user or object to reference the specified column[s] of the table via a foreign key. If the primary or unique key referenced by the foreign key of the other table is composite then all columns of the key must be specified.
col (Mandatory) name of one column in the referenced foreign key
ALL Combines SELECT, INSERT, UPDATE, DELETE and REFERENCES privileges in a single package
2. The SELECT privilege to the MANAGER, ENGINEER roles and to the user IVAN:
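A statement of that kind might read as follows (the table name SALES is assumed for illustration):

GRANT SELECT ON TABLE SALES
  TO ROLE MANAGER, ROLE ENGINEER, USER IVAN;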
3. All privileges to the ADMINISTRATOR role, together with the authority to grant the same privileges to
others:
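For illustration (the table name CUSTOMER is assumed):

GRANT ALL ON CUSTOMER
  TO ROLE ADMINISTRATOR
  WITH GRANT OPTION;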
4. The SELECT and REFERENCES privileges on the NAME column to all users and objects:
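For illustration (the table name COUNTRY is assumed):

GRANT SELECT, REFERENCES (NAME) ON COUNTRY
  TO PUBLIC;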
5. The SELECT privilege being granted to the user IVAN by the user ALEX:
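For illustration (the table name EMPLOYEE is assumed):

GRANT SELECT ON EMPLOYEE
  TO USER IVAN
  GRANTED BY ALEX;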
The EXECUTE privilege applies to stored procedures. It allows the grantee to execute the stored procedure and,
if applicable, to retrieve its output. In the case of selectable stored procedures, it acts somewhat like a SELECT
privilege, insofar as this style of stored procedure is executed in response to a SELECT statement.
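By way of example, a sketch granting execution rights on a procedure (the procedure and role names are illustrative):

GRANT EXECUTE ON PROCEDURE ADD_EMP_PROJ
  TO ROLE MANAGER;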
Assigning Roles
Assigning a role is similar to granting a privilege. One or more roles can be assigned to one or more users,
including the user PUBLIC, using one GRANT statement.
The optional WITH ADMIN OPTION clause allows the users specified in the user list to grant the role[s] specified
to other users.
Caution
2. Assigning the ADMIN role to the user ALEX with the authority to assign this role to other users:
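A sketch, assuming a role named ADMIN has been created; the name is written in double quotes here because ADMIN is also a keyword:

GRANT "ADMIN" TO USER ALEX WITH ADMIN OPTION;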
REVOKE
Syntax:
<privilege> ::=
SELECT |
DELETE |
INSERT |
UPDATE [ (col [, col ...] ) ] |
REFERENCES (col [, col ...] )
<grantee> ::=
[USER] username | [ROLE] rolename | GROUP Unix_group
| PROCEDURE procname | TRIGGER trigname | VIEW viewname | PUBLIC
Parameter Description
tablename The name of the table the privilege is to be revoked from
viewname The name of the view the privilege is to be revoked from
procname The name of the stored procedure the EXECUTE privilege is to be revoked from; or the name of the procedure that is to have the privilege[s] revoked
trigname Trigger name
col The table column the privilege is to be revoked from
username The user name from which the privileges are to be revoked or the role is to be removed
rolename Role name
Unix_group The name of a user group in a POSIX operating system
grantor The grantor user on whose behalf the privilege[s] are being revoked
The REVOKE statement is used for revoking privileges from users, roles, stored procedures, triggers and views
that were granted using the GRANT statement. See GRANT for detailed descriptions of the various types of
privileges.
Only the user who granted the privilege can revoke it.
The FROM clause is used to specify the list of users, roles and database objects (procedures, triggers and views)
that will have the enumerated privileges revoked. The optional USER and ROLE keywords in the FROM clause
allow you to specify exactly which type is to have the privilege revoked. If a USER or ROLE keyword is not
specified, the server checks for a role with this name and, if there is none, the privileges are revoked from the
user without further checking.
Tips
Although the USER and ROLE keywords are optional, it is advisable to use them in order to avoid ambiguity.
The REVOKE statement does not check for the existence of the user from which the privileges are being
revoked.
When revoking a privilege from a database object, you must specify its object type
Privileges that were granted to the special user named PUBLIC must be revoked from the user PUBLIC. User
PUBLIC provides a way to grant privileges to all users at once but it is not a group of users.
The optional GRANT OPTION FOR clause revokes the user's privilege to grant privileges on the table, view,
trigger or stored procedure to other users or to roles. It does not revoke the privilege with which the grant option
is associated.
One usage of the REVOKE statement is to remove roles that were assigned to a user, or a group of users, by a
GRANT statement. In the case of multiple roles and/or multiple grantees, the REVOKE verb is followed by the
list of roles that will be removed from the list of users specified after the FROM clause.
The optional ADMIN OPTION FOR clause provides the means to revoke the grantee's administrator privilege,
the ability to assign the same role to other users, without revoking the grantee's privilege to the role.
A privilege that has been granted using the GRANTED BY clause is internally attributed explicitly to the grantor
designated by that original GRANT statement. To revoke a privilege that was obtained by this method, the current
user must be logged in either with full administrative privileges or as the user designated as <grantor> by that
GRANTED BY clause.
Note
The same rule applies if the syntax used in the original GRANT statement used the synonymous AS form to
introduce the clause, instead of the standard GRANTED BY form.
If the current user is logged in with full administrator privileges in the database, the statement
can be used to revoke all privileges (including role memberships) on all objects from one or more users and/or
roles. All privileges for the user will be removed, regardless of who granted them. It is a quick way to clear
privileges when access to the database must be blocked for a particular user or role.
If the current user is not logged in as an administrator, the only privileges revoked will be those that were granted
originally by that user.
Note
The REVOKE ALL ON ALL statement cannot be used to revoke privileges that have been granted to stored
procedures, triggers or views.
Examples using REVOKE (illustrative statement sketches for each numbered item follow the list):
1. Revoking the privileges for reading and inserting into the SALES table:
2. Revoking the privilege for reading the CUSTOMER table from the MANAGER and ENGINEER roles
and from the user IVAN:
3. Revoking from the ADMINISTRATOR role the authority to grant any privileges on the CUSTOMER table
to other users or roles:
4. Revoking the privilege for reading the COUNTRY table and the authority to reference the NAME column
of the COUNTRY table from any user, via the special user PUBLIC:
5. Revoking the privilege for reading the EMPLOYEE table from the user IVAN, that was granted by the
user ALEX:
6. Revoking the privilege for updating the FIRST_NAME and LAST_NAME columns of the EMPLOYEE
table from the user IVAN:
7. Revoking the privilege for inserting records into the EMPLOYEE_PROJECT table from the
ADD_EMP_PROJ procedure:
8. Revoking the privilege for executing the procedure ADD_EMP_PROJ from the MANAGER role:
9. Revoking the DIRECTOR and MANAGER roles from the user IVAN:
10. Revoke from the user ALEX the authority to assign the MANAGER role to other users:
11. Revoking all privileges (including roles) on all objects from the user IVAN:
After this statement is executed, the user IVAN will have no privileges whatsoever.
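The statements below are illustrative sketches of what each numbered example describes; they are not authoritative. The table, procedure, role and user names are taken from the descriptions above, except in example 1, where no grantee is named and the user ALEX is a hypothetical stand-in.

-- Example 1
REVOKE SELECT, INSERT ON SALES FROM USER ALEX;

-- Example 2
REVOKE SELECT ON CUSTOMER FROM ROLE MANAGER, ROLE ENGINEER, USER IVAN;

-- Example 3
REVOKE GRANT OPTION FOR ALL ON CUSTOMER FROM ROLE ADMINISTRATOR;

-- Example 4
REVOKE SELECT, REFERENCES (NAME) ON COUNTRY FROM PUBLIC;

-- Example 5
REVOKE SELECT ON EMPLOYEE FROM USER IVAN GRANTED BY ALEX;

-- Example 6
REVOKE UPDATE (FIRST_NAME, LAST_NAME) ON EMPLOYEE FROM USER IVAN;

-- Example 7
REVOKE INSERT ON EMPLOYEE_PROJECT FROM PROCEDURE ADD_EMP_PROJ;

-- Example 8
REVOKE EXECUTE ON PROCEDURE ADD_EMP_PROJ FROM ROLE MANAGER;

-- Example 9
REVOKE DIRECTOR, MANAGER FROM USER IVAN;

-- Example 10
REVOKE ADMIN OPTION FOR MANAGER FROM USER ALEX;

-- Example 11
REVOKE ALL ON ALL FROM IVAN;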
Appendix A: Supplementary Information
In this Appendix are topics that developers may wish to refer to, to enhance understanding of features or changes.
The RDB$VALID_BLR Field
After any domain is altered, including the implicit domains created internally behind column definitions and
output parameters, the engine internally recompiles all of its dependencies.
Note
In V.2.x these comprise procedures and triggers but not blocks coded in DML statements for run-time execution
with EXECUTE BLOCK. Firebird 3 will encompass more module types (stored functions, packages).
Any module that fails to recompile because of an incompatibility arising from a domain change is marked as
invalid ("invalidated") by setting the RDB$VALID_BLR field in its system record (in RDB$PROCEDURES or
RDB$TRIGGERS, as appropriate) to zero. The field is reset to 1 (valid) when either
1. the domain is altered again and the new definition is compatible with the previously invalidated module
definition; OR
2. the previously invalidated module is altered to match the new domain definition
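As a minimal sketch of the mechanism, assume a hypothetical domain D_CODE and a procedure GET_CODE that returns a value of that domain:

CREATE DOMAIN D_CODE AS CHAR(3);

SET TERM ^;
CREATE PROCEDURE GET_CODE RETURNS (OUT_CODE D_CODE)
AS
BEGIN
  OUT_CODE = 'ABC';
  SUSPEND;
END^
SET TERM ;^

-- Altering the domain makes the engine recompile GET_CODE;
-- if the new definition were incompatible with the procedure's compiled BLR,
-- GET_CODE would be invalidated (RDB$VALID_BLR set to zero) until revalidated
ALTER DOMAIN D_CODE TYPE CHAR(5);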
The following query will find the modules that depend on a specific domain and report the state of their RDB$VALID_BLR fields:
SELECT * FROM (
  SELECT
    'Procedure',
    rdb$procedure_name,
    rdb$valid_blr
  FROM rdb$procedures
  UNION ALL
  SELECT
    'Trigger',
    rdb$trigger_name,
    rdb$valid_blr
  FROM rdb$triggers
) (type, name, valid)
WHERE EXISTS
  (SELECT * FROM rdb$dependencies
   WHERE rdb$dependent_name = name
     AND rdb$depended_on_name = 'MYDOMAIN')
The following query will find the modules that depend on a specific table column and report the state of their
RDB$VALID_BLR fields:
SELECT * FROM (
  SELECT
    'Procedure',
    rdb$procedure_name,
    rdb$valid_blr
  FROM rdb$procedures
  UNION ALL
  SELECT
    'Trigger',
    rdb$trigger_name,
    rdb$valid_blr
  FROM rdb$triggers
) (type, name, valid)
WHERE EXISTS
  (SELECT *
   FROM rdb$dependencies
   WHERE rdb$dependent_name = name
     AND rdb$depended_on_name = 'MYTABLE'
     AND rdb$field_name = 'MYCOLUMN')
Important
All PSQL invalidations caused by domain/column changes are reflected in the RDB$VALID_BLR field. How-
ever, other kinds of changes, such as the number of input or output parameters, called routines and so on, do
not affect the validation field even though they potentially invalidate the module. A typical such scenario might
be one of the following:
1. A procedure (B) is defined, that calls another procedure (A) and reads output parameters from it. In this
case, a dependency is registered in RDB$DEPENDENCIES. Subsequently, the called procedure (A) is al-
tered to change or remove one or more of those output parameters. The ALTER PROCEDURE A statement
will fail with an error when commit is attempted.
2. A procedure (B) calls procedure A, supplying values for its input parameters. No dependency is registered
in RDB$DEPENDENCIES. Subsequent modification of the input parameters in procedure A will be allowed.
Failure will occur at run-time, when B calls A with the mismatched input parameter set, as sketched below.
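To illustrate the second scenario, here is a minimal sketch with two hypothetical procedures, A and B:

SET TERM ^;

CREATE PROCEDURE A (PARAM1 INTEGER)
AS
BEGIN
  -- body irrelevant to the example
END^

CREATE PROCEDURE B
AS
BEGIN
  -- B supplies a value for A's input parameter;
  -- no dependency on the parameter is registered in RDB$DEPENDENCIES
  EXECUTE PROCEDURE A(1);
END^

-- Changing A's input parameters is allowed and leaves B's RDB$VALID_BLR untouched
ALTER PROCEDURE A (PARAM1 INTEGER, PARAM2 INTEGER)
AS
BEGIN
END^

SET TERM ;^

-- ...but this call now fails at run time with an input parameter mismatch
EXECUTE PROCEDURE B;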
Other Notes
For PSQL modules inherited from earlier Firebird versions (including a number of system triggers, even if
the database was created under Firebird 2.1 or higher), RDB$VALID_BLR is NULL. This does not imply
that their BLR is invalid.
The isql commands SHOW PROCEDURES and SHOW TRIGGERS display an asterisk in the RDB$VALID_BLR
column for any module for which the value is zero (i.e., invalid). However, SHOW PROCEDURE <procname>
and SHOW TRIGGER <trigname>, which display individual PSQL modules, do not signal invalid BLR at all.
A Note on Equality
Important
This note about equality and inequality operators applies everywhere in Firebird's SQL language.
The = operator, which is explicitly used in many conditions, only matches values to values. According to the
SQL standard, NULL is not a value and hence two NULLs are neither equal nor unequal to one another. If you
need NULLs to match each other in a condition, use the IS NOT DISTINCT FROM operator. This operator returns
true if the operands have the same value or if they are both NULL.
select *
from A join B
on A.id is not distinct from B.code
Likewise, in cases where you want to test against NULL for a condition of inequality, use IS DISTINCT
FROM, not <>. If you want NULL to be considered different from any value and two NULLs to be considered
equal:
select *
from A join B
on A.id is distinct from B.code
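As a further illustration, assume a hypothetical table T with two nullable columns, COL1 and COL2:

-- skips every row in which either column is NULL, even when both are NULL
select * from T where COL1 = COL2;

-- also returns the rows in which both columns are NULL
select * from T where COL1 is not distinct from COL2;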
Appendix B: Exception Codes and Messages
This appendix includes notes on custom exceptions and tables of the SQLSTATE, SQLCODE and GDSCODE error codes and message texts.
Custom Exceptions
Firebird DDL provides a simple syntax for creating custom exceptions for use in PSQL modules, with message
text of up to 1,021 characters. For more information, see CREATE EXCEPTION in DDL Statements and, for
usage, the statement EXCEPTION in PSQL Statements.
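A minimal sketch of declaring a custom exception and raising it from a PSQL module; the exception E_TOO_LARGE and the procedure CHECK_VALUE are hypothetical names used only for illustration:

CREATE EXCEPTION E_TOO_LARGE 'The value exceeds the permitted maximum';

SET TERM ^;
CREATE PROCEDURE CHECK_VALUE (VAL INTEGER)
AS
BEGIN
  IF (VAL > 1000) THEN
    EXCEPTION E_TOO_LARGE;
END^
SET TERM ;^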
The Firebird SQLCODE error codes do not correlate with the standards-compliant SQLSTATE codes. SQLCODE
has been used for many years and should be considered as deprecated now. Support for SQLCODE is likely to
be dropped in a future version.
The structure of an SQLSTATE error code is five characters, comprising the SQL error class (2 characters) followed by
the SQL subclass (3 characters). For example, in SQLSTATE 22012 ("Division by zero"), 22 is the class and 012 is the subclass.
Note
SQLCODE has been used for many years and should be considered as deprecated now. Support for SQLCODE
is likely to be dropped in a future version.
Table B.2. SQLCODE and GDSCODE Error Codes and Message Texts (1)
SQLCODE  GDSCODE  Symbol  Message Text
Segment buffer length shorter than ex-
101 335544366 Segment
pected
100 335544338 from_no_match No match for first value expression
100 335544354 no_record Invalid database key
Attempted retrieval of more segments
100 335544367 segstr_eof
than exist
Attempt to fetch past the last record in a
100 335544374 stream_eof
record stream
0 335741039 gfix_opt_SQL_dialect -sql_dialect | set database dialect n
0 335544875 bad_debug_format Bad debug info format
Table/procedure has non-SQL security
-84 335544554 nonsql_security_rel
class defined
Column has non-SQL security class de-
-84 335544555 nonsql_security_fld
fined
-84 335544668 dsql_procedure_use_err Procedure @1 does not return any values
The username entered is too long. Maxi-
-85 335544747 usrname_too_long
mum length is 31 bytes
The password specified is too long. Maxi-
-85 335544748 password_too_long
mum length is @1 bytes
-85 335544749 usrname_required A username is required for this operation
-85 335544750 password_required A password is required for this operation
-85 335544751 bad_protocol The network protocol specified is invalid
-85 335544752 dup_usrname_found
A duplicate user name was found in the
security database
The user name specified was not found in
-85 335544753 usrname_not_found
the security database
An error occurred while attempting to add
-85 335544754 error_adding_sec_record
the user
An error occurred while attempting to
-85 335544755 error_modifying_sec_record
modify the user record
An error occurred while attempting to
-85 335544756 error_deleting_sec_record
delete the user record
An error occurred while updating the se-
-85 335544757 error_updating_sec_db
curity database
-103 335544571 dsql_constant_err Data type for constant unknown
Precision 10 to 18 changed from DOU-
-104 336003075 dsql_transitional_numeric BLE PRECISION in SQL dialect 1 to 64-
bit scaled integer in SQL dialect 3
Database SQL dialect @1 does not sup-
-104 336003077 sql_db_dialect_dtype_unsupport
port reference to @2 datatype
-104 336003087 dsql_invalid_label Label @1 @2 in the current scope
Datatypes @1are not comparable in ex-
-104 336003088 dsql_datatypes_not_comparable
pression @2
-104 335544343 invalid_blr Invalid request BLR at offset @1
BLR syntax error: expected @1 at offset
-104 335544390 syntaxerr
@2, encountered @3
-104 335544425 ctxinuse Context already in use (BLR error)
-104 335544426 ctxnotdef Context not defined (BLR error)
-104 335544429 badparnum Bad parameter number
-104 335544440 bad_msg_vec -
Invalid slice description language at offset
-104 335544456 invalid_sdl
@1
-104 335544570 dsql_command_err Invalid command
-104 335544579 dsql_internal_err Internal error
-104 335544590 dsql_dup_option Option specified more than once
-104 335544591 dsql_tran_err Unknown transaction option
-104 335544592 dsql_invalid_array Invalid array reference
-104 335544608 command_end_err Unexpected end of command
-104 335544612 token_err Token unknown
-104 335544634 dsql_token_unk_err Token unknown - line @1, column @2
-104 335544709 dsql_agg_ref_err Invalid aggregate reference
-104 335544714 invalid_array_id Invalid blob id
Client/Server Express not supported in
-104 335544730 cse_not_supported
this release
-104 335544743 token_too_long Token size exceeds limit
A string constant is delimited by double
-104 335544763 invalid_string_constant
quotes
-104 335544764 transitional_date DATE must be changed to TIMESTAMP
Client SQL dialect @1 does not support
-104 335544796 sql_dialect_datatype_unsupport
reference to @2 datatype
You created an indirect dependency on
-104 335544798 depend_on_uncommitted_rel uncommitted metadata. You must roll
back the current transaction
Invalid column position used in the @1
-104 335544821 dsql_column_pos_err
clause
Cannot use an aggregate function in a
-104 335544822 dsql_agg_where_err
WHERE clause, use HAVING instead
Cannot use an aggregate function in a
-104 335544823 dsql_agg_group_err
GROUP BY clause
Invalid expression in the @1 (not con-
-104 335544824 dsql_agg_column_err tained in either an aggregate function or
the GROUP BY clause)
Invalid expression in the @1 (neither
-104 335544825 dsql_agg_having_err an aggregate function nor a part of the
GROUP BY clause)
Nested aggregate functions are not al-
-104 335544826 dsql_agg_nested_err
lowed
-104 335544849 malformed_string Malformed string
Unexpected end of command- line @1,
-104 335544851 command_end_err2
column @2
-104 336397215 dsql_max_sort_items Cannot sort on more than 255 items
-104 336397216 dsql_max_group_items Cannot group on more than 255 items
Cannot include the same field (@1.@2)
-104 336397217 dsql_conflicting_sort_field twice in the ORDER BY clause with con-
flicting sorting options
Column list from derived table @1 has
-104 336397218 dsql_derived_table_more_columns more columns than the number of items in
its SELECT statement
Column list from derived table @1 has
-104 336397219 dsql_derived_table_less_columns less columns than the number of items in
its SELECT statement
No column name specified for column
-104 336397220 dsql_derived_field_unnamed
number @1 in derived table @2
Column @1 was specified multiple times
-104 336397221 dsql_derived_field_dup_name
for derived table @2
Internal dsql error: alias type expected by
-104 336397222 dsql_derived_alias_select
pass1_expand_select_node
Internal dsql error: alias type expected by
-104 336397223 dsql_derived_alias_field
pass1_field
Internal dsql error: column position out of
-104 336397224 dsql_auto_field_bad_pos
range in pass1_union_auto_cast
Recursive CTE member (@1) can refer it-
-104 336397225 dsql_cte_wrong_reference
self only in FROM clause
-104 336397226 dsql_cte_cycle CTE '@1' has cyclic dependencies
Recursive member of CTE can't be mem-
-104 336397227 dsql_cte_outer_join
ber of an outer join
Recursive member of CTE can't reference
-104 336397228 dsql_cte_mult_references
itself more than once
-104 336397229 dsql_cte_not_a_union Recursive CTE (@1) must be an UNION
CTE '@1' defined non-recursive member
-104 336397230 dsql_cte_nonrecurs_after_recurs
after recursive
Recursive member of CTE '@1' has @2
-104 336397231 dsql_cte_wrong_clause
clause
Recursive members of CTE (@1) must be
-104 336397232 dsql_cte_union_all linked with another members via UNION
ALL
Non-recursive member is missing in CTE
-104 336397233 dsql_cte_miss_nonrecursive
'@1'
-104 336397234 dsql_cte_nested_with WITH clause can't be nested
Column @1 appears more than once in
-104 336397235 dsql_col_more_than_once_using
USING clause
-104 336397237 dsql_cte_not_used CTE "@1" is not used in query
-105 335544702 like_escape_invalid Invalid ESCAPE sequence
Specified EXTRACT part does not exist
-105 335544789 extract_input_mismatch
in input datatype
-150 335544360 read_only_rel Attempted update of read-only table
-150 335544362 read_only_view Cannot update read-only view @1
-150 335544446 non_updatable Not updatable
-150 335544546 constaint_on_view Cannot define constraints on views
-151 335544359 read_only_field Attempted update of read - only column
@1 is not a valid base table of the speci-
-155 335544658 dsql_base_table
fied view
Must specify column name for view se-
-157 335544598 specify_field_err
lect expression
Number of columns does not match select
-158 335544599 num_field_err
list
Dbkey not available for multi - table
-162 335544685 no_dbkey
views
Input parameter mismatch for procedure
-170 335544512 prcmismat
@1
External functions cannot have morethan
-170 335544619 extern_func_err
10 parametrs
Output parameter mismatch for procedure
-170 335544850 prc_out_param_mismatch
@1
-171 335544439 funmismat Function @1 could not be matched
Column not array or invalid dimensions
-171 335544458 invalid_dimension
(expected @1, encountered @2)
Return mode by value not allowed for this
-171 335544618 return_mode_err
data type
Array data type can use up to @1 dimen-
-171 335544873 array_max_dimensions
sions
-172 335544438 funnotdef Function @1 is not defined
-203 335544708 dyn_fld_ambiguous Ambiguous column reference
Ambiguous field name between @1 and
-204 336003085 dsql_ambiguous_field_name
@2
-204 335544463 gennotdef Generator @1 is not defined
-204 335544502 stream_not_defined Reference to invalid stream number
-204 335544509 charset_not_found CHARACTER SET @1 is not defined
-204 335544511 prcnotdef Procedure @1 is not defined
-204 335544515 codnotdef Status code @1 unknown
-204 335544516 xcpnotdef Exception @1 not defined
Name of Referential Constraint not de-
-204 335544532 ref_cnstrnt_notfound
fined in constraints table
Could not find table/procedure for
-204 335544551 grant_obj_notfound
GRANT
Implementation of text subtype @1 not
-204 335544568 text_subtype
located
-204 335544573 dsql_datatype_err Data type unknown
-204 335544580 dsql_relation_err Table unknown
-204 335544581 dsql_procedure_err Procedure unknown
COLLATION @1 for CHARACTER
-204 335544588 collation_not_found
SET @2 is not defined
COLLATION @1 is not valid for speci-
-204 335544589 collation_not_for_charset
fied CHARACTER SET
-204 335544595 dsql_trigger_err Trigger unknown
Alias @1 conflicts with an alias in the
-204 335544620 alias_conflict_err
same statement
Alias @1 conflicts with a procedure in the
-204 335544621 procedure_conflict_error
same statement
Alias @1 conflicts with a table in the
-204 335544622 relation_conflict_err
same statement
There is no alias or table named @1 at
-204 335544635 dsql_no_relation_alias
this scope level
-204 335544636 indexname There is no index @1 for table @2
Invalid use of CHARACTER SET or
-204 335544640 collation_requires_text
COLLATE
-204 335544662 dsql_blob_type_unknown BLOB SUB_TYPE @1 is not defined
Can not define a not null column with
-204 335544759 bad_default_value
NULL as default value
-204 335544760 invalid_clause Invalid clause - '@1'
Too many Contexts of Relation/Proce-
-204 335544800 too_many_contexts
dure/Views. Maximum allowed is 255
Invalid parameter to FIRST.Only integers
-204 335544817 bad_limit_param
>= 0 are allowed
Invalid parameter to SKIP. Only integers
-204 335544818 bad_skip_param
>= 0 are allowed
Invalid offset parameter @1 to SUB-
-204 335544837 bad_substring_offset STRING. Only positive integers are al-
lowed
Invalid length parameter @1 to SUB-
-204 335544853 bad_substring_length STRING. Negative integers are not al-
lowed
-204 335544854 charset_not_installed CHARACTER SET @1 is not installed
COLLATION @1 for CHARACTER
-204 335544855 collation_not_installed
SET @2 is not installed
Blob sub_types bigger than 1 (text) are
-204 335544867 subtype_for_internal_use
for internal use only
-205 335544396 fldnotdef Column @1 is not defined in table @2
-205 335544552 grant_fld_notfound Could not find column for GRANT
Column @1 is not defined in procedure
-205 335544883 fldnotdef2
@2
-206 335544578 dsql_field_err Column unknown
-206 335544587 dsql_blob_err Column is not a BLOB
-206 335544596 dsql_subselect_err Subselect illegal in this context
-206 336397208 dsql_line_col_error At line @1, column @2
-206 336397209 dsql_unknown_pos At unknown line and column
Column @1 cannot be repeated in @2
-206 336397210 dsql_no_dup_name
statement
-208 335544617 order_by_err Invalid ORDER BY clause
-219 335544395 relnotdef Table @1 is not defined
-219 335544872 domnotdef Domain @1 is not defined
-230 335544487 walw_err WAL Writer error
-231 335544488 logh_small Log file header of @1 too small
-232 335544489 logh_inv_version Invalid version of log file @1
Log file @1 not latest in the chain but
-233 335544490 logh_open_flag
open flag still set
Log file @1 not closed properly; database
-234 335544491 logh_open_flag2
recovery may be required
Database name in the log file @1 is dif-
-235 335544492 logh_diff_dbname
ferent
Unexpected end of log file @1 at offset
-236 335544493 logf_unexpected_eof
@2
Incomplete log record at offset @1 in log
-237 335544494 logr_incomplete
file @2
Log record header too small at offset @1
-238 335544495 logr_header_small2
in log file @
Log block too small at offset @1 in log
-239 335544496 logb_small
file @2
Insufficient memory to allocate page
-239 335544691 cache_too_small
buffer cache
-239 335544693 log_too_small Log size too small
-239 335544694 partition_too_small Log partition size too small
-243 335544500 no_wal Database does not use Write-ahead Log
WAL defined; Cache Manager must be
-257 335544566 start_cm_for_wal
started first
-260 335544690 cache_redef Cache redefined
-260 335544692 log_redef Log redefined
Partitions not supported in series of log
-261 335544695 partition_not_supp
file specification
Total length of a partitioned log must be
-261 335544696 log_length_spec
specified
-281 335544637 no_stream_plan Table @1 is not referenced in plan
Table @1 is referenced more than once in
-282 335544638 stream_twice
plan; use aliases to distinguish
The table @1 is referenced twice; use
-282 335544643 dsql_self_join
aliases to differentiate
Table @1 is referenced twice in view; use
-282 335544659 duplicate_base_table
an alias to distinguish
View @1 has more than one base table;
-282 335544660 view_alias
use aliases to distinguish
Navigational stream @1 references a
-282 335544710 complex_view
view with more than one base table
Table @1 is referenced in the plan but not
-283 335544639 stream_not_found
the from list
Index @1 cannot be used in the specified
-284 335544642 index_unused
plan
Column used in a PRIMARY constraint
-291 335544531 primary_key_notnull
must be NOT NULL
Cannot update constraints (RDB
-292 335544534 ref_cnstrnt_update
$REF_CONSTRAINTS)
Cannot update constraints (RDB
-293 335544535 check_cnstrnt_update
$CHECK_CONSTRAINTS)
Cannot delete CHECK constraint entry
-294 335544536 check_cnstrnt_del
(RDB$CHECK_CONSTRAINTS)
Cannot update constraints (RDB
-295 335544545 rel_cnstrnt_update
$RELATION_CONSTRAINTS)
Internal gds software consistency check
-296 335544547 invld_cnstrnt_type
(invalid RDB$CONSTRAINT_TYPE)
Operation violates check constraint @1
-297 335544558 check_constraint
on view or table @2
UPDATE OR INSERT field list does not
-313 336003099 upd_ins_doesnt_match_pk
match primary key of table @1
UPDATE OR INSERT field list does not
-313 336003100 upd_ins_doesnt_match_matching
match MATCHING clause
Count of column list and variable list do
-313 335544669 dsql_count_mismatch
not match
Cannot transliterate character between
-314 335544565 transliteration_failed
character sets
Cannot change datatype for column
-315 336068815 dyn_dtype_invalid @1.Changing datatype is not supported
for BLOB or ARRAY columns
Column @1 from table @2 is referenced
-383 336068814 dyn_dependency_exists
in @3
Invalid comparison operator for find oper-
-401 335544647 invalid_operator
ation
-402 335544368 segstr_no_op Attempted invalid operation on a BLOB
BLOB and array data types are not sup-
-402 335544414 blobnotsup
ported for @1 operation
-402 335544427 datnotsup Data operation not supported
-406 335544457 out_of_bounds Subscript out of bounds
-407 335544435 nullsegkey Null segment of UNIQUE KEY
-413 335544334 convert_error Conversion error from string "@1"
Table B.3. SQLCODE and GDSCODE Error Codes and Message Texts (2)
SQLCODE  GDSCODE  Symbol  Message Text
Filter not found to convert type @1 to
-413 335544454 nofilter
type @2
Unsupported conversion to target type
-413 335544860 blob_convert_error
BLOB (subtype @1)
Unsupported conversion to target type
-413 335544861 array_convert_error
ARRAY
-501 335544577 dsql_cursor_close_err Attempt to reclose a closed cursor
Statement already has a cursor @1 as-
-502 336003090 dsql_cursor_redefined
signed
Cursor @1 is not found in the current
-502 336003091 dsql_cursor_not_found
context
Cursor @1 already exists in the current
-502 336003092 dsql_cursor_exists
context
-502 336003093 dsql_cursor_rel_ambiguous Relation @1 is ambiguous in cursor @2
-502 336003094 dsql_cursor_rel_not_found Relation @1 is not found in cursor @2
-502 336003095 dsql_cursor_not_open Cursor is not open
-502 335544574 dsql_decl_err Invalid cursor declaration
-502 335544576 dsql_cursor_open_err Attempt to reopen an open cursor
-504 336003089 dsql_cursor_invalid Empty cursor name is not allowed
-504 335544572 dsql_cursor_err Invalid cursor reference
-508 335544348 no_cur_rec No current record for fetch operation
-510 335544575 dsql_cursor_update_err Cursor @1 is not updatable
-518 335544582 dsql_request_err Request unknown
The prepare statement identifies a prepare
-519 335544688 dsql_open_cursor_request
statement with an open cursor
Violation of FOREIGN KEY constraint
-530 335544466 foreign_key
"@1" on table "@2"
Foreign key reference target does not ex-
-530 335544838 foreign_key_target_doesnt_exist
ist
Foreign key references are present for the
-530 335544839 foreign_key_references_present
record
Cannot prepare a CREATE DATABASE/
-531 335544597 dsql_crdb_prepare_err
SCHEMA statement
-532 335544469 trans_invalid Transaction marked invalid by I/O error
-551 335544352 no_priv No permission for @1 access to @2 @3
Service @1 requires SYSDBA permis-
-551 335544790 insufficient_svc_privileges sions. Reattach to the Service Manager
using the SYSDBA account
Only the owner of a table may reassign
-552 335544550 not_rel_owner
ownership
User does not have GRANT privileges for
-552 335544553 grant_nopriv
operation
User does not have GRANT privileges on
-552 335544707 grant_nopriv_on_base
base table/view for operation
-553 335544529 existing_priv_mod Cannot modify an existing user privilege
-595 335544645 stream_crack The current position is on a crack
Illegal operation when at beginning of
-596 335544644 stream_bof
stream
Preceding file did not specify length, so
-597 335544632 dsql_file_length_err
@1 must include starting page number
Shadow number must be a positive inte-
-598 335544633 dsql_shadow_number_err
ger
-599 335544607 node_err Gen.c: node not supported
A node name is not permitted in a sec-
-599 335544625 node_name_err
ondary, shadow, cache or log file name
-600 335544680 crrp_data_err Sort error: corruption in data structure
-601 335544646 db_or_file_exists Database or file exists
-604 335544593 dsql_max_arr_dim_exceeded Array declared with too many dimensions
-604 335544594 dsql_arr_range_error Illegal array dimension range
-605 335544682 dsql_field_ref Inappropriate self-reference of column
Cannot SELECT RDB$DB_KEY from a
-607 336003074 dsql_dbkey_from_non_table
stored procedure
External function should have return posi-
-607 336003086 dsql_udf_return_pos_err
tion between 1 and @1
Data type @1 is not supported for EX-
-607 336003096 dsql_type_not_supp_ext_tab TERNAL TABLES. Relation '@2', field
'@3'
-607 335544351 no_meta_update Unsuccessful metadata update
-607 335544549 systrig_update Cannot modify or erase a system trigger
Array/BLOB/DATE data types not al-
-607 335544657 dsql_no_blob_array
lowed in arithmetic
"REFERENCES table" without "(col-
-607 335544746 reftable_requires_pk umn)" requires PRIMARY KEY on refer-
enced table
-607 335544815 generator_name GENERATOR @1
-607 335544816 udf_name UDF @1
Can't have relation with only computed
-607 335544858 must_have_phys_field
fields or constraints
-607 336397206 dsql_table_not_found Table @1 does not exist
-607 336397207 dsql_view_not_found View @1 does not exist
Array and BLOB data types not allowed
-607 336397212 dsql_no_array_computed
in computed field
Scalar operator used on field @1 which is
-607 336397214 dsql_only_can_subscript_array
not an array
Cannot rename domain @1 to @2. A do-
-612 336068812 dyn_domain_name_exists
main with that name already exists
Cannot rename column @1 to @2.A col-
-612 336068813 dyn_field_name_exists umn with that name already exists in table
@3
Lock on table @1 conflicts with existing
-615 335544475 relation_lock
lock
Requested record lock conflicts with ex-
-615 335544476 record_lock
isting lock
-615 335544507 range_in_use Refresh range number @1 already in use
Cannot delete PRIMARY KEY being
-616 335544530 primary_key_ref
used in FOREIGN KEY definition
Cannot delete index used by an Integrity
-616 335544539 integ_index_del
Constraint
Cannot modify index used by an Integrity
-616 335544540 integ_index_mod
Constraint
Cannot delete trigger used by a CHECK
-616 335544541 check_trig_del
Constraint
Cannot delete column being used in an In-
-616 335544543 cnstrnt_fld_del
tegrity Constraint
-616 335544630 dependency There are @1 dependencies
-616 335544674 del_last_field Last column in a table cannot be deleted
Cannot deactivate index used by an in-
-616 335544728 integ_index_deactivate
tegrity constraint
Cannot deactivate index used by a PRI-
-616 335544729 integ_deactivate_primary
MARY/UNIQUE constraint
Cannot update trigger used by a CHECK
-617 335544542 check_trig_update
Constraint
Cannot rename column being used in an
-617 335544544 cnstrnt_fld_rename
Integrity Constraint
Cannot delete index segment used by an
-618 335544537 integ_index_seg_del
Integrity Constraint
Cannot update index segment used by an
-618 335544538 integ_index_seg_mod
Integrity Constraint
Validation error for column @1, value
-625 335544347 not_valid
"@2"
Validation error for variable @1, value
-625 335544879 not_valid_for_var
"@2"
-625 335544880 not_valid_for Validation error for @1, value "@2"
Duplicate specification of @1- not sup-
-637 335544664 dsql_duplicate_spec
ported
Implicit domain name @1 not allowed in
-637 336397213 dsql_implicit_domain_name
user created domain
-660 336003098 primary_key_required Primary key required on table @1
Non-existent PRIMARY or UNIQUE
-660 335544533 foreign_key_notfound
KEY specified for FOREIGN KEY
-660 335544628 idx_create_err Cannot create index @1
-663 335544624 idx_seg_err Segment count of 0 defined for index @1
-663 335544631 idx_key_err Too many keys defined for index @1
Too few key columns found for index @1
-663 335544672 key_field_err
(incorrect column name?)
Key size exceeds implementation restric-
-664 335544434 keytoobig
tion for index "@1"
-677 335544445 ext_err @1 extension error
-685 335544465 bad_segstr_type Invalid BLOB type for operation
Attempt to index BLOB column in index
-685 335544670 blob_idx_err
@1
Attempt to index array column in index
-685 335544671 array_idx_err
@1
Page @1 is of wrong type (expected @2,
-689 335544403 badpagtyp
found @3)
-689 335544650 page_type_err Wrong page type
Segments not allowed in expression index
-690 335544679 no_segments_err
@1
-691 335544681 rec_size_err New record size of @1 bytes is too big
Maximum indexes per table (@1) exceed-
-692 335544477 max_idx
ed
Too many concurrent executions of the
-693 335544663 req_max_clones_exceeded
same request
-694 335544684 no_field_access Cannot access column @1 in view @2
Arithmetic exception, numeric overflow,
-802 335544321 arith_except
or string truncation
Concatenation overflow. Resulting string
-802 335544836 concat_overflow
cannot exceed 32K in length
Attempt to store duplicate value ( visible
-803 335544349 no_dup to active transactions ) in unique index
"@1"
Violation of PRIMARY or UNIQUE
-803 335544665 unique_key_violation
KEY constraint "@1" on table "@2"
Feature not supported on ODS version
-804 336003097 dsql_feature_not_supported_ods
older than @1.@2
-804 335544380 wronumarg Wrong number of arguments on call
SQLDA missing or incorrect version, or
-804 335544583 dsql_sqlda_err
incorrect number/type of variables
Count of read - write columns does not
-804 335544584 dsql_var_count_err
equal count of values
-804 335544586 dsql_function_err Function unknown
-804 335544713 dsql_sqlda_value_err Incorrect values within SQLDA structure
ODS versions before ODS@1 are not
-804 336397205 dsql_too_old_ods
supported
Only simple column names permitted for
-806 335544600 col_name_err
VIEW WITH CHECK OPTION
No WHERE clause for VIEW WITH
-807 335544601 where_err
CHECK OPTION
Only one table allowed for VIEW WITH
-808 335544602 table_view_err
CHECK OPTION
DISTINCT, GROUP or HAVING not
-809 335544603 distinct_err permitted for VIEW WITH CHECK OP-
TION
No subqueries permitted for VIEW WITH
-810 335544605 subquery_err
CHECK OPTION
-811 335544652 sing_select_err Multiple rows in singleton select
Cannot insert because the file is readonly
-816 335544651 ext_readonly_err
or is on a read only medium
Operation not supported for EXTERNAL
-816 335544715 extfile_uns_op
FILE table @1
DB dialect @1 and client dialect @2 con-
-817 336003079 isc_sql_dialect_conflict_num
flict with respect to numeric precision @3
UPDATE OR INSERT without MATCH-
-817 336003101 upd_ins_with_complex_view ING could not be used with views based
on more than one table
-817 336003102 dsql_incompatible_trigger_type Incompatible trigger type
-817 336003103 dsql_db_trigger_type_cant_change Database trigger type can't be changed
Attempted update during read - only
-817 335544361 read_only_trans
transaction
-817 335544371 segstr_no_write Attempted write to read-only BLOB
-817 335544444 read_only Operation not supported
-817 335544765 read_only_database Attempted update on read - only database
SQL dialect @1 is not supported in this
-817 335544766 must_be_dialect_2_and_up
database
Metadata update statement is not allowed
-817 335544793 ddl_not_allowed_by_db_sql_dial
by the current database SQL dialect @1
-820 335544356 obsolete_metadata Metadata is obsolete
Unsupported on - disk structure for file
-820 335544379 wrong_ods
@1; found @2.@3, support @4.@5
-820 335544437 wrodynver Wrong DYN version
Minor version too high found @1 expect-
-820 335544467 high_minor
ed @2
Difference file name should be set explic-
-820 335544881 need_difference
itly for database on raw device
-823 335544473 invalid_bookmark Invalid bookmark handle
-824 335544474 bad_lock_level Invalid lock level @1
-825 335544519 bad_lock_handle Invalid lock handle
-826 335544585 dsql_stmt_handle Invalid statement handle
-827 335544655 invalid_direction Invalid direction for find operation
-827 335544718 invalid_key Invalid key for find operation
-828 335544678 inval_key_posn Invalid key position
New size specified for column @1 must
-829 336068816 dyn_char_fld_too_small
be at least @2 characters
Cannot change datatype for @1.Conver-
-829 336068817 dyn_invalid_dtype_conversion sion from base type @2 to @3 is not sup-
ported
Cannot change datatype for column @1
-829 336068818 dyn_dtype_conv_invalid from a character type to a non-character
type
Maximum number of collations per char-
-829 336068829 max_coll_per_charset
acter set exceeded
-829 336068830 invalid_coll_attr Invalid collation attributes
New scale specified for column @1 must
-829 336068852 dyn_scale_too_big
be at most @2
New precision specified for column @1
-829 336068853 dyn_precision_too_small
must be at least @2
-829 335544616 field_ref_err Invalid column reference
-830 335544615 field_aggregate_err Column used with aggregate
Attempt to define a second PRIMARY
-831 335544548 primary_key_exists
KEY for the same table
FOREIGN KEY column count does not
-832 335544604 key_field_count_err
match PRIMARY KEY
-833 335544606 expression_eval_err Expression evaluation not supported
-833 335544810 date_range_exceeded Value exceeds the range for valid dates
-834 335544508 range_not_found Refresh range number @1 not found
-835 335544649 bad_checksum Bad checksum
-836 335544517 except Exception @1
-836 335544848 except2 Exception @1
-837 335544518 cache_restart Restart shared cache manager
-838 335544560 shutwarn Database @1 shutdown in @2 seconds
-841 335544677 version_err Too many versions
-842 335544697 precision_err Precision must be from 1 to 18
-842 335544698 scale_nogt Scale must be between zero and precision
-842 335544699 expec_short Short integer expected
-842 335544700 expec_long Long integer expected
-842 335544701 expec_ushort Unsigned short integer expected
-842 335544712 expec_positive Positive value expected
-901 335740929 gfix_db_name Database file name (@1) already given
-901 336330753 gbak_unknown_switch Found unknown switch
-901 336920577 gstat_unknown_switch Found unknown switch
-901 336986113 fbsvcmgr_bad_am Wrong value for access mode
-901 335740930 gfix_invalid_sw Invalid switch @1
-901 335544322 bad_dbkey Invalid database key
-901 336986114 fbsvcmgr_bad_wm Wrong value for write mode
-901 336330754 gbak_page_size_missing Page size parameter missing
-901 336920578 gstat_retry Please retry, giving a database name
-901 336986115 fbsvcmgr_bad_rs Wrong value for reserve space
Wrong ODS version, expected @1, en-
-901 336920579 gstat_wrong_ods
countered @2
Page size specified (@1) greater than lim-
-901 336330755 gbak_page_size_toobig
it (16384 bytes)
-901 335740932 gfix_incmp_sw Incompatible switch combination
-901 336920580 gstat_unexpected_eof Unexpected end of database file
Redirect location for output is not speci-
-901 336330756 gbak_redir_ouput_missing
fied
Unknown tag (@1) in info_svr_db_info
-901 336986116 fbsvcmgr_info_err
block after isc_svc_query()
-901 335740933 gfix_replay_req Replay log pathname required
-901 336330757 gbak_switches_conflict Conflicting switches for backup/restore
Unknown tag (@1) in isc_svc_query() re-
-901 336986117 fbsvcmgr_query_err
sults
-901 335544326 bad_dpb_form Unrecognized database parameter block
Number of page buffers for cache re-
-901 335740934 gfix_pgbuf_req
quired
-901 336986118 fbsvcmgr_switch_unknown Unknown switch "@1"
-901 336330758 gbak_unknown_device Device type @1 not known
-901 335544327 bad_req_handle Invalid request handle
-901 335740935 gfix_val_req Numeric value required
-901 336330759 gbak_no_protection Protection is not there yet
-901 335544328 bad_segstr_handle Invalid BLOB handle
-901 335740936 gfix_pval_req Positive numeric value required
Page size is allowed only on restore or
-901 336330760 gbak_page_size_not_allowed
create
Table B.4. SQLCODE and GDSCODE Error Codes and Message Texts (3)
SQLCODE  GDSCODE  Symbol  Message Text
-901 335544329 bad_segstr_id Invalid BLOB ID
Number of transactions per sweep re-
-901 335740937 gfix_trn_req
quired
-901 336330761 gbak_multi_source_dest Multiple sources or destinations specified
Invalid parameter in transaction parame-
-901 335544330 bad_tpb_content
ter block
-901 336330762 gbak_filename_missing Requires both input and output filenames
Invalid format for transaction parameter
-901 335544331 bad_tpb_form
block
Input and output have the same name.
-901 336330763 gbak_dup_inout_names
Disallowed
-901 335740940 gfix_full_req "full" or "reserve" required
Invalid transaction handle (expecting ex-
-901 335544332 bad_trans_handle
plicit transaction start)
-901 336330764 gbak_inv_page_size Expected page size, encountered "@1"
-901 335740941 gfix_usrname_req User name required
REPLACE specified, but the first file @1
-901 336330765 gbak_db_specified
is a database
-901 335740942 gfix_pass_req Password required
Database @1 already exists.To replace it,
-901 336330766 gbak_db_exists
use the -REP switch
-901 335740943 gfix_subs_name Subsystem name
-901 336723983 gsec_cant_open_db Unable to open database
-901 336330767 gbak_unk_device Device type not specified
-901 336723984 gsec_switches_error Error in switch specifications
-901 335740945 gfix_sec_req Number of seconds required
Attempt to start more than @1 transac-
-901 335544337 excess_trans
tions
-901 336723985 gsec_no_op_spec No operation specified
Numeric value between 0 and 32767 in-
-901 335740946 gfix_nval_req
clusive required
-901 336723986 gsec_no_usr_name No user name specified
-901 335740947 gfix_type_shut Must specify type of shutdown
Information type inappropriate for object
-901 335544339 infinap
specified
No information of this type available for
-901 335544340 infona
object specified
-901 336723987 gsec_err_add Add record error
-901 336723988 gsec_err_modify Modify record error
-901 336330772 gbak_blob_info_failed Gds_$blob_info failed
-901 335740948 gfix_retry Please retry, specifying an option
-901 335544341 infunk Unknown information item
-901 336723989 gsec_err_find_mod Find / modify record error
-901 336330773 gbak_unk_blob_item Do not understand BLOB INFO item @1
Action cancelled by trigger (@1) to pre-
-901 335544342 integ_fail
serve data integrity
-901 336330774 gbak_get_seg_failed Gds_$get_segment failed
-901 336723990 gsec_err_rec_not_found Record not found for user: @1
-901 336723991 gsec_err_delete Delete record error
-901 336330775 gbak_close_blob_failed Gds_$close_blob failed
-901 335740951 gfix_retry_db Please retry, giving a database name
-901 336330776 gbak_open_blob_failed Gds_$open_blob failed
-901 336723992 gsec_err_find_del Find / delete record error
-901 335544345 lock_conflict Lock conflict on no wait transaction
-901 336330777 gbak_put_blr_gen_id_failed Failed in put_blr_gen_id
-901 336330778 gbak_unk_type Data type @1 not understood
-901 336330779 gbak_comp_req_failed Gds_$compile_request failed
-901 336330780 gbak_start_req_failed Gds_$start_request failed
-901 336723996 gsec_err_find_disp Find / display record error
-901 336330781 gbak_rec_failed gds_$receive failed
-901 336920605 gstat_open_err Can't open database file @1
-901 336723997 gsec_inv_param Invalid parameter, no switch defined
Program attempted to exit without finish-
-901 335544350 no_finish
ing database
-901 336920606 gstat_read_err Can't read a database page
-901 336330782 gbak_rel_req_failed Gds_$release_request failed
-901 336723998 gsec_op_specified Operation already specified
-901 336920607 gstat_sysmemex System memory exhausted
-901 336330783 gbak_db_info_failed gds_$database_info failed
-901 336723999 gsec_pw_specified Password already specified
-901 336724000 gsec_uid_specified Uid already specified
-901 336330784 gbak_no_db_desc Expected database description record
-901 335544353 no_recon Transaction is not in limbo
-901 336724001 gsec_gid_specified Gid already specified
-901 336330785 gbak_db_create_failed Failed to create database @1
-901 336724002 gsec_proj_specified Project already specified
-901 336330786 gbak_decomp_len_error RESTORE: decompression length error
-901 335544355 no_segstr_close BLOB was not closed
-901 336330787 gbak_tbl_missing Cannot find table @1
-901 336724003 gsec_org_specified Organization already specified
-901 336330788 gbak_blob_col_missing Cannot find column for BLOB
-901 336724004 gsec_fname_specified First name already specified
Cannot disconnect database with open
-901 335544357 open_trans
transactions (@1 active)
-901 336330789 gbak_create_blob_failed Gds_$create_blob failed
-901 336724005 gsec_mname_specified Middle name already specified
Message length error ( encountered @1,
-901 335544358 port_len
expected @2)
-901 336330790 gbak_put_seg_failed Gds_$put_segment failed
-901 336724006 gsec_lname_specified Last name already specified
-901 336330791 gbak_rec_len_exp Expected record length
-901 336724008 gsec_inv_switch Invalid switch specified
Wrong length record, expected @1 en-
-901 336330792 gbak_inv_rec_len
countered @2
-901 336330793 gbak_exp_data_type Expected data attribute
-901 336724009 gsec_amb_switch Ambiguous switch specified
-901 336330794 gbak_gen_id_failed Failed in store_blr_gen_id
-901 336724010 gsec_no_op_specified No operation specified for parameters
-901 335544363 req_no_trans No transaction for request
-901 336330795 gbak_unk_rec_type Do not recognize record type @1
-901 336724011 gsec_params_not_allowed No parameters allowed for this operation
-901 335544364 req_sync Request synchronization error
-901 336724012 gsec_incompat_switch Incompatible switches specified
-901 336330796 gbak_inv_bkup_ver Expected backup version 1..8. Found @1
Request referenced an unavailable
-901 335544365 req_wrong_db
database
-901 336330797 gbak_missing_bkup_desc Expected backup description record
-901 336330798 gbak_string_trunc String truncated
-901 336330799 gbak_cant_rest_record warning -- record could not be restored
-901 336330800 gbak_send_failed Gds_$send failed
-901 335544369 segstr_no_read Attempted read of a new, open BLOB
-901 336330801 gbak_no_tbl_name No table name for data
Attempted action on blob outside transac-
-901 335544370 segstr_no_trans
tion
-901 336330802 gbak_unexp_eof Unexpected end of file on backup file
Database format @1 is too old to restore
-901 336330803 gbak_db_format_too_old
to
Attempted reference to BLOB in unavail-
-901 335544372 segstr_wrong_db
able database
Array dimension for column @1 is in-
-901 336330804 gbak_inv_array_dim
valid
-901 336330807 gbak_xdr_len_expected Expected XDR record length
Table @1 was omitted from the transac-
-901 335544376 unres_rel
tion reserving list
Request includes a DSRI extension not
-901 335544377 uns_ext
supported in this implementation
-901 335544378 wish_list Feature is not supported
-901 335544382 random @1
Unrecoverable conflict with limbo trans-
-901 335544383 fatal_conflict
action @1
-901 335740991 gfix_exceed_max Internal block exceeds maximum size
-901 335740992 gfix_corrupt_pool Corrupt pool
-901 335740993 gfix_mem_exhausted Virtual memory exhausted
-901 336330817 gbak_open_bkup_error Cannot open backup file @1
-901 335740994 gfix_bad_pool Bad pool id.
Cannot open status and error output file
-901 336330818 gbak_open_error
@1
-901 335740995 gfix_trn_not_valid Transaction state @1 not in valid range
-901 335544392 bdbincon Internal error
Invalid user name (maximum 31 bytes al-
-901 336724044 gsec_inv_username
lowed)
Warning - maximum 8 significant bytes
-901 336724045 gsec_inv_pw_length
of password used
-901 336724046 gsec_db_specified Database already specified
Database administrator name already
-901 336724047 gsec_db_admin_specified
specified
Database administrator password already
-901 336724048 gsec_db_admin_pw_specified
specified
-901 336724049 gsec_sql_role_specified SQL role name already specified
-901 335741012 gfix_unexp_eoi Unexpected end of input
-901 335544407 dbbnotzer Database handle not zero
-901 335544408 tranotzer Transaction handle not zero
Failed to reconnect to a transaction in
-901 335741018 gfix_recon_fail
database @1
-901 335544418 trainlim Transaction in limbo
-901 335544419 notinlim Transaction not in limbo
-901 335544420 traoutsta Transaction outstanding
-901 335544428 badmsgnum Undefined message number
-901 335741036 gfix_trn_unknown Transaction description item unknown
-901 335741038 gfix_mode_req "read_only" or "read_write" required
-901 335544431 blocking_signal Blocking signal has been received
-901 335741042 gfix_pzval_req Positive or zero numeric value required
Database system cannot read argument
-901 335544442 noargacc_read
@1
Database system cannot write argument
-901 335544443 noargacc_write
@1
-901 335544450 misc_interpreted @1
-901 335544468 tra_state Transaction @1 is @2
-901 335544485 bad_stmt_handle Invalid statement handle
-901 336330934 gbak_missing_block_fac Blocking factor parameter missing
Expected blocking factor, encountered
-901 336330935 gbak_inv_block_fac
"@1"
A blocking factor may not be used in con-
-901 336330936 gbak_block_fac_specified
junction with device CT
-901 336068796 dyn_role_does_not_exist SQL role @1 does not exist
-901 336330940 gbak_missing_username User name parameter missing
-901 336330941 gbak_missing_password Password parameter missing
User @1 has no grant admin option on
-901 336068797 dyn_no_grant_admin_opt
SQL role @2
-901 335544510 lock_timeout Lock time-out on wait transaction
-901 336068798 dyn_user_not_role_member User @1 is not a member of SQL role @2
-901 336068799 dyn_delete_role_failed @1 is not the owner of SQL role @2
-901 336068800 dyn_grant_role_to_user @1 is a SQL role and not a user
User name @1 could not be used for SQL
-901 336068801 dyn_inv_sql_role_name
role
-901 336068802 dyn_dup_sql_role SQL role @1 already exists
Keyword @1 can not be used as a SQL
-901 336068803 dyn_kywd_spec_for_role
role name
SQL roles are not supported in on older
-901 336068804 dyn_roles_not_supported versions of the database. A backup and
restore of the database is required
missing parameter for the number of
-901 336330952 gbak_missing_skipped_bytes
bytes to be skipped
Expected number of bytes to be skipped,
-901 336330953 gbak_inv_skipped_bytes
encountered "@1"
-901 336068820 dyn_zero_len_id Zero length identifiers are not allowed
-901 336330965 gbak_err_restore_charset Character set
-901 336330967 gbak_err_restore_collation Collation
Unexpected I/O error while reading from
-901 336330972 gbak_read_error
backup file
Unexpected I/O error while writing to
-901 336330973 gbak_write_error
backup file
-901 336068840 dyn_wrong_gtt_scope @1 cannot reference @2
Could not drop database @1 (database
-901 336330985 gbak_db_in_use
might be in use)
-901 336330990 gbak_sysmemex System memory exhausted
-901 335544559 bad_svc_handle Invalid service handle
-901 335544561 wrospbver Wrong version of service parameter block
-901 335544562 bad_spb_form Unrecognized service parameter block
-901 335544563 svcnotdef Service @1 is not defined
Feature '@1' is not supported in ODS
-901 336068856 dyn_ods_not_supp_feature
@2.@3
-901 336331002 gbak_restore_role_failed SQL role
-901 336331005 gbak_role_op_missing SQL role parameter missing
-901 336331010 gbak_page_buffers_missing Page buffers parameter missing
-901 336331011 gbak_page_buffers_wrong_param Expected page buffers, encountered "@1"
Page buffers is allowed only on restore or
-901 336331012 gbak_page_buffers_restore
create
Size specification either missing or incor-
-901 336331014 gbak_inv_size
rect for file @1
-901 336331015 gbak_file_outof_sequence File @1 out of sequence
-901 336331016 gbak_join_file_missing Can't join - one of the files missing
standard input is not supported when us-
-901 336331017 gbak_stdin_not_supptd
ing join operation
Standard output is not supported when us-
-901 336331018 gbak_stdout_not_supptd
ing split operation
-901 336331019 gbak_bkup_corrupt Backup file @1 might be corrupt
-901 336331020 gbak_unk_db_file_spec Database file specification missing
-901 336331021 gbak_hdr_write_failed Can't write a header record to file @1
-901 336331022 gbak_disk_space_ex Free disk space exhausted
File size given (@1) is less than minimum
-901 336331023 gbak_size_lt_min
allowed (@2)
-901 336331025 gbak_svc_name_missing Service name parameter missing
Cannot restore over current database,
-901 336331026 gbak_not_ownr must be SYSDBA or owner of the exist-
ing database
-901 336331031 gbak_mode_req "read_only" or "read_write" required
-901 336331033 gbak_just_data Just data ignore all constraints etc.
Restoring data only ignoring foreign key,
-901 336331034 gbak_data_only
unique, not null & other constraints
-901 335544609 index_name INDEX @1
-901 335544610 exception_name EXCEPTION @1
-901 335544611 field_name COLUMN @1
-901 335544613 union_err Union not supported
Table B.5. SQLCODE and GDSCODE Error Codes and Message Texts (4)
SQLCODE  GDSCODE  Symbol  Message Text
-901 335544614 dsql_construct_err Unsupported DSQL construct
-901 335544623 dsql_domain_err Illegal use of keyword VALUE
-901 335544626 table_name TABLE @1
-901 335544627 proc_name PROCEDURE @1
Specified domain or source column @1
-901 335544641 dsql_domain_not_found
does not exist
Variable @1 conflicts with parameter in
-901 335544656 dsql_var_conflict
same procedure
Server version too old to support all CRE-
-901 335544666 srvr_version_too_old
ATE DATABASE options
-901 335544673 no_delete Cannot delete
-901 335544675 sort_err Sort error
Service @1 does not have an associated
-901 335544703 svcnoexe
executable
-901 335544704 net_lookup_err Failed to locate host machine
-901 335544705 service_unknown Undefined service @1/@2
The specified name was not found in the
-901 335544706 host_unknown
hosts file or Domain Name Services
Attempt to execute an unprepared dynam-
-901 335544711 unprepared_stmt
ic SQL statement
-901 335544716 svc_in_use Service is currently busy: @1
-901 335544731 tra_must_sweep [no associated message]
A fatal exception occurred during the exe-
-901 335544740 udf_exception
cution of a user defined function
-901 335544741 lost_db_connection Connection lost to database
User cannot write to RDB
-901 335544742 no_write_user_priv
$USER_PRIVILEGES
A fatal exception occurred during the exe-
-901 335544767 blob_filter_exception
cution of a blob filter
Access violation.The code attempted to
-901 335544768 exception_access_violation access a virtual address without privilege
to do so
Datatype misalignment.The attempted to
-901 335544769 exception_datatype_missalignment read or write a value that was not stored
on a memory boundary
Array bounds exceeded. The code at-
-901 335544770 exception_array_bounds_exceeded tempted to access an array element that is
out of bounds.
Float denormal operand.One of the float-
-901 335544771 exception_float_denormal_operand ing-point operands is too small to repre-
sent a standard float value.
Floating-point divide by zero.The code at-
-901 335544772 exception_float_divide_by_zero tempted to divide a floating-point value
by zero.
Floating-point inexact result.The result of
-901 335544773 exception_float_inexact_result a floating-point operation cannot be repre-
sented as a decimal fraction
Floating-point invalid operand.An inde-
-901 335544774 exception_float_invalid_operand terminant error occurred during a float-
ing-point operation
Floating-point overflow.The exponent of
-901 335544775 exception_float_overflow a floating-point operation is greater than
the magnitude allowed
Floating-point stack check.The stack
-901 335544776 exception_float_stack_check overflowed or underflowed as the result
of a floating-point operation
Floating-point underflow.The exponent of
-901 335544777 exception_float_underflow a floating-point operation is less than the
magnitude allowed
Integer divide by zero.The code attempt-
-901 335544778 exception_integer_divide_by_zero ed to divide an integer value by an integer
divisor of zero
Integer overflow.The result of an integer
-901 335544779 exception_integer_overflow operation caused the most significant bit
of the result to carry
An exception occurred that does not have
-901 335544780 exception_unknown
a description.Exception number @1
Stack overflow.The resource require-
-901 335544781 exception_stack_overflow ments of the runtime stack have exceeded
the memory available to it
Segmentation Fault. The code attempted
-901 335544782 exception_sigsegv
to access memory without privileges
Illegal Instruction. The Code attempted to
-901 335544783 exception_sigill
perfrom an illegal operation
Bus Error. The Code caused a system bus
-901 335544784 exception_sigbus
error
Floating Point Error. The Code caused an
-901 335544785 exception_sigfpe Arithmetic Exception or a floating point
exception
-901 335544786 ext_file_delete Cannot delete rows from external files
-901 335544787 ext_file_modify Cannot update rows in external files
Unable to perform operation.You must be
-901 335544788 adm_task_denied
either SYSDBA or owner of the database
-901 335544794 cancelled Operation was cancelled
User name and password are required
-901 335544797 svcnouser
while attaching to the services manager
-901 335544801 datype_notsup Data type not supported for arithmetic
-901 335544803 dialect_not_changed Database dialect not changed
-901 335544804 database_create_failed Unable to create database @1
-901 335544805 inv_dialect_specified Database dialect @1 is not a valid dialect
-901 335544806 valid_db_dialects Valid database dialects are @1
Passed client dialect @1 is not a valid di-
-901 335544811 inv_client_dialect_specified
alect
-901 335544812 valid_client_dialects Valid client dialects are @1
Services functionality will be supported in
-901 335544814 service_not_supported
a later version of the product
Unable to find savepoint with name @1
-901 335544820 invalid_savepoint
in transaction context
Target shutdown mode is invalid for
-901 335544835 bad_shutdown_mode
database "@1"
-901 335544840 no_update Cannot update
-901 335544842 stack_trace @1
Context variable @1 is not found in
-901 335544843 ctx_var_not_found
namespace @2
Invalid namespace name @1 passed to
-901 335544844 ctx_namespace_invalid
@2
-901 335544845 ctx_too_big Too many context variables
-901 335544846 ctx_bad_argument Invalid argument passed to @1
BLR syntax error. Identifier @1... is too
-901 335544847 identifier_too_long
long
Time precision exceeds allowed range (0-
-901 335544859 invalid_time_precision
@1)
-901 335544866 met_wrong_gtt_scope @1 cannot depend on @2
Procedure @1 is not selectable (it does
-901 335544868 illegal_prc_type
not contain a SUSPEND statement)
Datatype @1 is not supported for sorting
-901 335544869 invalid_sort_datatype
operation
-901 335544870 collation_name COLLATION @1
-901 335544871 domain_name DOMAIN @1
A multi database transaction cannot span
-901 335544874 max_db_per_trans_allowed
more than @1 databases
-901 335544876 bad_proc_BLR Error while parsing procedure @1' s BLR
-901 335544877 key_too_big Index key too big
Too many values ( more than @1) in
-901 336397211 dsql_too_many_values
member list to match against
418
Exception Codes and Messages
SQL-
GDSCODE Symbol Message Text
CODE
-901 336397236 dsql_unsupp_feature_dialect Feature is not supported in dialect @1
Internal gds software consistency check
-902 335544333 bug_check
(@1)
-902 335544335 db_corrupt Database file appears corrupt (@1)
-902 335544344 io_error I/O error for file "@2"
-902 335544346 metadata_corrupt Corrupt system table
-902 335544373 sys_request Operating system directive @1 failed
-902 335544384 badblk Internal error
-902 335544385 invpoolcl Internal error
-902 335544387 relbadblk Internal error
Block size exceeds implementation re-
-902 335544388 blktoobig
striction
-902 335544394 badodsver Incompatible version of on-disk structure
-902 335544397 dirtypage Internal error
-902 335544398 waifortra Internal error
-902 335544399 doubleloc Internal error
-902 335544400 nodnotfnd Internal error
-902 335544401 dupnodfnd Internal error
-902 335544402 locnotmar Internal error
-902 335544404 corrupt Database corrupted
-902 335544405 badpage Checksum error on database page @1
-902 335544406 badindex Index is broken
Transaction - request mismatch ( synchro-
-902 335544409 trareqmis
nization error )
-902 335544410 badhndcnt Bad handle count
Wrong version of transaction parameter
-902 335544411 wrotpbver
block
Unsupported BLR version (expected @1,
-902 335544412 wroblrver
encountered @2)
Wrong version of database parameter
-902 335544413 wrodpbver
block
-902 335544415 badrelation Database corrupted
419
Exception Codes and Messages
SQL-
GDSCODE Symbol Message Text
CODE
-902 335544416 nodetach Internal error
-902 335544417 notremote Internal error
-902 335544422 dbfile Internal error
-902 335544423 orphan Internal error
-902 335544432 lockmanerr Lock manager error
-902 335544436 sqlerr SQL error code = @1
-902 335544448 bad_sec_info [no associated message]
-902 335544449 invalid_sec_info [no associated message]
-902 335544470 buf_invalid Cache buffer for page @1 invalid
-902 335544471 indexnotdefined There is no index in table @1 with id @2
Your user name and password are not de-
-902 335544472 login fined. Ask your database administrator to
set up a Firebird login
-902 335544506 shutinprog Database @1 shutdown in progress
-902 335544528 shutdown Database @1 shutdown
-902 335544557 shutfail Database shutdown unsuccessful
-902 335544569 dsql_error Dynamic SQL Error
-902 335544653 psw_attach Cannot attach to password database
Cannot start transaction for password
-902 335544654 psw_start_trans
database
Stack size insufficent to execute current
-902 335544717 err_stack_limit
request
Unable to complete network request to
-902 335544721 network_error
host "@1"
-902 335544722 net_connect_err Failed to establish a connection
Error while listening for an incoming con-
-902 335544723 net_connect_listen_err
nection
Failed to establish a secondary connection
-902 335544724 net_event_connect_err
for event processing
Error while listening for an incoming
-902 335544725 net_event_listen_err
event connection request
-902 335544726 net_read_err Error reading data from the connection
-902 335544727 net_write_err Error writing data to the connection
420
Exception Codes and Messages
SQL-
GDSCODE Symbol Message Text
CODE
Access to databases on file servers is not
-902 335544732 unsupported_network_drive
supported
-902 335544733 io_create_err Error while trying to create file
-902 335544734 io_open_err Error while trying to open file
-902 335544735 io_close_err Error while trying to close file
-902 335544736 io_read_err Error while trying to read from file
-902 335544737 io_write_err Error while trying to write to file
-902 335544738 io_delete_err Error while trying to delete file
-902 335544739 io_access_err Error while trying to access file
Your login @1 is same as one of the SQL
-902 335544745 login_same_as_role_name role name. Ask your database administra-
tor to set up a valid Firebird login.
The file @1 is currently in use by another
-902 335544791 file_in_use
process.Try again later
Unexpected item in service parameter
-902 335544795 unexp_spb_form
block, expected @1
Function @1 is in @2, which is not in a
-902 335544809 extern_func_dir_error
permitted directory for external functions
File exceeded maximum size of 2GB.
-902 335544819 io_32bit_exceeded_err Add another database file or use a 64 bit
I/O version of Firebird
Access to @1 "@2" is denied by server
-902 335544831 conf_access_denied
administrator
-902 335544834 cursor_not_open Cursor is not open
-902 335544841 cursor_already_open Cursor is already open
-902 335544856 att_shutdown Connection shutdown
Login name too long (@1 characters,
-902 335544882 long_login
maximum allowed @2)
Invalid database handle (no active con-
-904 335544324 bad_db_handle
nection)
-904 335544375 unavailable Unavailable database
-904 335544381 imp_exc Implementation limit exceeded
-904 335544386 nopoolids Too many requests
-904 335544389 bufexh Buffer exhausted
421
Exception Codes and Messages
SQL-
GDSCODE Symbol Message Text
CODE
-904 335544391 bufinuse Buffer in use
-904 335544393 reqinuse Request in use
-904 335544424 no_lock_mgr No lock manager available
Unable to allocate memory from operat-
-904 335544430 virmemexh
ing system
-904 335544451 update_conflict Update conflicts with concurrent update
-904 335544453 obj_in_use Object @1 is in use
-904 335544455 shadow_accessed Cannot attach active shadow file
A file in manual shadow @1 is unavail-
-904 335544460 shadow_missing
able
-904 335544661 index_root_page_full Cannot add index, index root page is full
-904 335544676 sort_mem_err Sort error: not enough memory
Request depth exceeded. (Recursive defi-
-904 335544683 req_depth_exceeded
nition?)
Sort record size of @1 bytes is too
-904 335544758 sort_rec_size_err
big ????
-904 335544761 too_many_handles Too many open handles to database
-904 335544792 service_att_err Cannot attach to services manager
-904 335544799 svc_name_missing The service name was not specified
Unsupported field type specified in BE-
-904 335544813 optimizer_between_err
TWEEN predicate
Invalid argument in EXECUTE STATE-
-904 335544827 exec_sql_invalid_arg
MENT-cannot convert to string
Wrong request type in EXECUTE
-904 335544828 exec_sql_invalid_req
STATEMENT '@1'
Variable type (position @1) in EXE-
-904 335544829 exec_sql_invalid_var CUTE STATEMENT '@2' INTO does
not match returned column type
Too many recursion levels of EXECUTE
-904 335544830 exec_sql_max_call_exceeded
STATEMENT
Cannot change difference file name while
-904 335544832 wrong_backup_state
database is in backup mode
Partner index segment no @1 has incom-
-904 335544852 partner_idx_incompat_type
patible data type
-904 335544857 blobtoobig Maximum BLOB size exceeded
422
Exception Codes and Messages
SQL-
GDSCODE Symbol Message Text
CODE
-904 335544862 record_lock_not_supp Stream does not support record locking
Cannot create foreign key constraint @1.
-904 335544863 partner_idx_not_found
Partner index does not exist or is inactive
Transactions count exceeded. Perform
-904 335544864 tra_num_exc backup and restore to make database op-
erable again
-904 335544865 field_disappeared Column has been unexpectedly deleted
-904 335544878 concurrent_transaction Concurrent transaction number is @1
Maximum user count exceeded.Contact
-906 335544744 max_att_exceeded
your database administrator
-909 335544667 drdb_completed_with_errs Drop database completed with errors
Record from transaction @1 is stuck in
-911 335544459 rec_in_limbo
limbo
-913 335544336 deadlock Deadlock
-922 335544323 bad_db_format File @1 is not a valid database
-923 335544421 connect_reject Connection rejected by remote interface
Secondary server attachments cannot vali-
-923 335544461 cant_validate
date databases
Secondary server attachments cannot start
-923 335544464 cant_start_logging
logging
Bad parameters on attach or create
-924 335544325 bad_dpb_content
database
-924 335544441 bad_detach Database detach completed with errors
-924 335544648 conn_lost Connection lost to pipe server
-926 335544447 no_rollback No rollback performed
-999 335544689 ib_error Firebird error
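Any of the codes above can be trapped in a PSQL error handler with WHEN SQLCODE, WHEN GDSCODE or WHEN EXCEPTION. A minimal sketch, not taken from the original text: the table TEST_TABLE and the log table ERROR_LOG are hypothetical names used only for illustration, and the handler traps the deadlock code listed above.

SET TERM ^ ;

CREATE PROCEDURE DELETE_ROW (ROW_ID INTEGER)
AS
BEGIN
  DELETE FROM TEST_TABLE WHERE ID = :ROW_ID;

  WHEN SQLCODE -913 DO
  BEGIN
    -- SQLCODE -913 corresponds to GDSCODE deadlock (335544336) in the table above.
    -- TEST_TABLE and ERROR_LOG are hypothetical tables used only for this sketch.
    INSERT INTO ERROR_LOG (LOGGED_AT, NOTE)
    VALUES (CURRENT_TIMESTAMP,
            'deadlock while deleting row ' || CAST(:ROW_ID AS VARCHAR(12)));
  END
END^

SET TERM ; ^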
Appendix C: Reserved Words and Keywords
Reserved words are part of the Firebird SQL language. They cannot be used as identifiers (e.g. as table or
procedure names), except when enclosed in double quotes in Dialect 3. However, you should avoid this unless
you have a compelling reason.
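For example, in Dialect 3 a reserved word can be pressed into service as an identifier by double-quoting it, which also makes the name case-sensitive. A sketch with deliberately awkward, hypothetical names:

CREATE TABLE "SELECT" (
  "DATE" DATE
);

-- the quoted spelling must be repeated exactly in later statements
SELECT "DATE" FROM "SELECT";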
Keywords are also part of the language. They have a special meaning when used in the proper context, but they
are not reserved for Firebird's own and exclusive use. You can use them as identifiers without double-quoting.
Reserved words
Full list of reserved words in Firebird 2.5:
Keywords
The following terms have a special meaning in Firebird 2.5 DSQL. Some of them are also reserved words,
others are not.
!< ^< ^=
^> , :=
!= !> (
) < <=
<> = >
>= || ~<
~= ~> ABS
ACCENT ACOS ACTION
ACTIVE ADD ADMIN
AFTER ALL ALTER
ALWAYS AND ANY
AS ASC ASCENDING
ASCII_CHAR ASCII_VAL ASIN
AT ATAN ATAN2
AUTO AUTONOMOUS AVG
Appendix D: System Tables
When you create a database, the Firebird engine creates a number of system tables. Metadata (the descriptions and attributes of all database objects) is stored in these system tables.
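The system tables are queried with ordinary SELECT statements. A sketch, not from the original text, that lists the user-defined tables; it assumes the RDB$RELATIONS columns RDB$SYSTEM_FLAG and RDB$VIEW_BLR:

SELECT RDB$RELATION_NAME
FROM RDB$RELATIONS
WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0   -- 0 or NULL: user-defined object (assumed convention)
  AND RDB$VIEW_BLR IS NULL               -- NULL: table, NOT NULL: view (assumed convention)
ORDER BY RDB$RELATION_NAME;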
RDB$BACKUP_HISTORY
RDB$BACKUP_HISTORY stores the history of backups performed using the nBackup utility.
RDB$CHARACTER_SETS
RDB$CHARACTER_SETS names and describes the character sets available in the database.
RDB$CHECK_CONSTRAINTS
RDB$CHECK_CONSTRAINTS provides the cross references between the names of system-generated triggers
for constraints and the names of the associated constraints (NOT NULL constraints, CHECK constraints and
the ON UPDATE and ON DELETE clauses in foreign key constraints).
RDB$COLLATIONS
RDB$COLLATIONS stores collation sequences for all character sets.
RDB$DATABASE
RDB$DATABASE stores basic information about the database. It contains only one record.
RDB$DEPENDENCIES
RDB$DEPENDENCIES stores the dependencies between database objects.
RDB$DEPENDENT_TYPE  SMALLINT
  0 - table
  1 - view
  2 - trigger
  3 - computed column
  4 - CHECK constraint
  5 - procedure
  6 - index expression
  7 - exception
  8 - user
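A sketch of a typical query against this table, listing everything that depends on a given table. The columns RDB$DEPENDENT_NAME and RDB$DEPENDED_ON_NAME and the table name MYTABLE are assumptions used only for illustration:

SELECT RDB$DEPENDENT_NAME,
       RDB$DEPENDENT_TYPE
FROM RDB$DEPENDENCIES
WHERE RDB$DEPENDED_ON_NAME = 'MYTABLE'   -- hypothetical table name
ORDER BY RDB$DEPENDENT_NAME;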
RDB$EXCEPTIONS
RDB$EXCEPTIONS stores custom database exceptions.
RDB$FIELDS
RDB$FIELDS stores definitions of columns and domains, both system and custom. This is where the detailed
data attributes are stored for all columns.
0 - untyped
1 - text
2 - BLR
3 - access control list
4 - reserved for future use
5 - encoded table metadata description
6 - for storing the details of a cross-database transaction that ends abnormally
RDB$EXTERNAL_TYPE       SMALLINT   7 = SMALLINT
                                   8 = INTEGER
                                   10 = FLOAT
                                   12 = DATE
                                   13 = TIME
                                   14 = CHAR
                                   16 = BIGINT
                                   27 = DOUBLE PRECISION
                                   35 = TIMESTAMP
                                   37 = VARCHAR
                                   261 = BLOB
RDB$DIMENSIONS          SMALLINT   Defines the number of dimensions in an array if the column is defined as an array. Always NULL for columns that are not arrays
RDB$NULL_FLAG           SMALLINT   Specifies whether the column can take an empty value (the field will contain NULL) or not (the field will contain the value 1)
RDB$CHARACTER_LENGTH    SMALLINT   The length of CHAR or VARCHAR columns in characters (not in bytes)
RDB$COLLATION_ID        SMALLINT   The identifier of the collation sequence for a character column or domain. If it is not defined, the value of the field will be 0
RDB$CHARACTER_SET_ID    SMALLINT   The identifier of the character set for a character column, BLOB TEXT column or domain
RDB$FIELD_PRECISION     SMALLINT   Specifies the total number of digits for the fixed-point numeric data types (DECIMAL and NUMERIC). The value is 0 for the integer data types and NULL for other data types
RDB$FIELD_DIMENSIONS
RDB$FIELD_DIMENSIONS stores the dimensions of array columns.
RDB$FILES
RDB$FILES stores information about secondary files and shadow files.
RDB$FILTERS
RDB$FILTERS stores information about BLOB filters.
RDB$FORMATS
RDB$FORMATS stores information about changes in tables. Each time any metadata change to a table is com-
mitted, it gets a new format number. When the format number of any table reaches 255, the entire database
becomes inoperable. To return to normal, the database must be backed up with the gbak utility and restored
from that backup copy.
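A sketch, assuming the columns RDB$RELATION_ID and RDB$FORMAT (not shown above), that reports the current format number of each table; it can help spot tables approaching the limit of 255:

SELECT R.RDB$RELATION_NAME,
       MAX(F.RDB$FORMAT) AS CURRENT_FORMAT
FROM RDB$FORMATS F
JOIN RDB$RELATIONS R ON R.RDB$RELATION_ID = F.RDB$RELATION_ID
GROUP BY R.RDB$RELATION_NAME
ORDER BY 2 DESC;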
RDB$FUNCTIONS
RDB$FUNCTIONS stores the information needed by the engine about external functions (user-defined func-
tions, UDFs).
Note
In future major releases (Firebird 3.0 +) RDB$FUNCTIONS will also store the information about stored func-
tions: user-defined functions written in PSQL.
RDB$FUNCTION_ARGUMENTS
RDB$FUNCTION_ARGUMENTS stores the parameters of external functions and their attributes.
RDB$MECHANISM     SMALLINT   0 = by value
                             1 = by reference
                             2 = by descriptor
                             3 = by BLOB descriptor
RDB$FIELD_TYPE    SMALLINT   Data type code defined for the column:
                             7 = SMALLINT
                             8 = INTEGER
                             12 = DATE
                             13 = TIME
                             14 = CHAR
                             16 = BIGINT
                             27 = DOUBLE PRECISION
                             35 = TIMESTAMP
                             37 = VARCHAR
                             40 = CSTRING (null-terminated text)
                             45 = BLOB_ID
                             261 = BLOB
RDB$FIELD_SCALE   SMALLINT   The scale of an integer or a fixed-point argument. It is an exponent of 10
Argument length in bytes:
RDB$GENERATORS
RDB$GENERATORS stores generators (sequences) and keeps them up-to-date.
RDB$INDICES
RDB$INDICES stores definitions of both system- and user-defined indexes. The attributes of each column
belonging to an index are stored in one row of the table RDB$INDEX_SEGMENTS.
RDB$INDEX_SEGMENTS
RDB$INDEX_SEGMENTS stores the segments (table columns) of indexes and their positions in the key. A
separate row is stored for each column in an index.
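A sketch joining the two tables to list the key columns of each user-defined index in key order; the column names used here are assumptions for illustration:

SELECT I.RDB$INDEX_NAME,
       I.RDB$RELATION_NAME,
       S.RDB$FIELD_NAME,
       S.RDB$FIELD_POSITION
FROM RDB$INDICES I
JOIN RDB$INDEX_SEGMENTS S ON S.RDB$INDEX_NAME = I.RDB$INDEX_NAME
WHERE COALESCE(I.RDB$SYSTEM_FLAG, 0) = 0   -- assumed: skip system-generated indexes
ORDER BY I.RDB$INDEX_NAME, S.RDB$FIELD_POSITION;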
RDB$LOG_FILES
RDB$LOG_FILES is not currently used.
RDB$PAGES
RDB$PAGES stores and maintains information about database pages and their usage.
RDB$PROCEDURES
RDB$PROCEDURES stores the definitions of stored procedures, including their PSQL source code and the
binary language representation (BLR) of it. The next table, RDB$PROCEDURE_PARAMETERS, stores the
definitions of input and output parameters.
RDB$PROCEDURE_PARAMETERS
RDB$PROCEDURE_PARAMETERS stores the parameters of stored procedures and their attributes. It holds
one row for each parameter.
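A sketch listing each procedure with its parameters in declaration order; the column names and the 0 = input / 1 = output convention for RDB$PARAMETER_TYPE are assumptions for illustration:

SELECT P.RDB$PROCEDURE_NAME,
       PP.RDB$PARAMETER_TYPE,     -- assumed: 0 = input, 1 = output
       PP.RDB$PARAMETER_NUMBER,
       PP.RDB$PARAMETER_NAME
FROM RDB$PROCEDURES P
JOIN RDB$PROCEDURE_PARAMETERS PP
  ON PP.RDB$PROCEDURE_NAME = P.RDB$PROCEDURE_NAME
ORDER BY P.RDB$PROCEDURE_NAME, PP.RDB$PARAMETER_TYPE, PP.RDB$PARAMETER_NUMBER;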
RDB$REF_CONSTRAINTS
RDB$REF_CONSTRAINTS stores the attributes of referential constraints (Foreign Key relationships and referential actions).
RDB$RELATIONS
RDB$RELATIONS stores the top-level definitions and attributes of all tables and views in the system.
RDB$RELATION_CONSTRAINTS
RDB$RELATION_CONSTRAINTS stores the definitions of all table-level constraints: primary key, unique, foreign key, CHECK and NOT NULL constraints.
RDB$RELATION_FIELDS
RDB$RELATION_FIELDS stores the definitions of table and view columns.
RDB$ROLES
RDB$ROLES stores the roles that have been defined in this database.
RDB$SECURITY_CLASSES
RDB$SECURITY_CLASSES stores the access control lists.
RDB$TRANSACTIONS
RDB$TRANSACTIONS stores the states of distributed transactions and other transactions that were prepared for two-phase commit with an explicit prepare message.
RDB$TRIGGERS
RDB$TRIGGERS stores the trigger definitions for all tables and views.
RDB$TRIGGER_TYPE    SMALLINT    1 - before insert
                                2 - after insert
                                3 - before update
                                4 - after update
                                5 - before delete
                                6 - after delete
                                17 - before insert or update
                                18 - after insert or update
                                25 - before insert or delete
                                26 - after insert or delete
                                27 - before update or delete
                                28 - after update or delete
                                113 - before insert or update or delete
                                114 - after insert or update or delete
                                8192 - on connect
                                8193 - on disconnect
                                8194 - on transaction start
                                8195 - on transaction commit
                                8196 - on transaction rollback
                                Identification of the exact RDB$TRIGGER_TYPE code is a little more complicated, since it is a bitmap, calculated according to which phases and events are covered and the order in which they are defined. For the curious, the calculation is explained in a blog article by Mark Rotteveel.
RDB$TRIGGER_SOURCE  BLOB TEXT   Stores the source code of the trigger in PSQL
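The DML codes can also be unpicked in plain DSQL. The following sketch is not the official formula; it is simply consistent with the values listed above: after subtracting 1, the lowest bit carries the phase and the next two bits the first event (further events occupy higher bit pairs). Database-level types (8192 and up) are excluded.

SELECT RDB$TRIGGER_NAME,
       RDB$RELATION_NAME,
       DECODE(BIN_AND(RDB$TRIGGER_TYPE - 1, 1),
              0, 'BEFORE', 'AFTER') AS TRIGGER_PHASE,
       DECODE(BIN_AND(BIN_SHR(RDB$TRIGGER_TYPE - 1, 1), 3),
              0, 'INSERT', 1, 'UPDATE', 2, 'DELETE') AS FIRST_EVENT
FROM RDB$TRIGGERS
WHERE RDB$TRIGGER_TYPE < 8192;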
RDB$TRIGGER_MESSAGES
RDB$TRIGGER_MESSAGES stores the trigger messages.
RDB$TYPES
RDB$TYPES stores the defining sets of enumerated types used throughout the system.
RDB$USER_PRIVILEGES
RDB$USER_PRIVILEGES stores the SQL access privileges for Firebird users and privileged objects.
RDB$VIEW_RELATIONS
RDB$VIEW_RELATIONS stores the tables that are referred to in view definitions. There is one record for each
table in a view.
456
Appendix E: Monitoring Tables
The Firebird engine can monitor activities in a database and make them available for user queries via the monitoring tables. The definitions of these tables are always present in the database, all named with the prefix MON$. The tables are virtual: they are populated with data only at the moment when the user queries them. That is also one good reason why it is no use trying to create triggers for them!
The key notion in understanding the monitoring feature is an activity snapshot. The activity snapshot represents
the current state of the database at the start of the transaction in which the monitoring table query runs. It delivers
a lot of information about the database itself, active connections, users, transactions prepared, running queries
and more.
The snapshot is created when any monitoring table is queried for the first time. It is preserved until the end of the
current transaction to maintain a stable, consistent view for queries across multiple tables, such as a master-detail
query. In other words, monitoring tables always behave as though they were in SNAPSHOT TABLE STABILITY
(consistency) isolation, even if the current transaction is started with a lower isolation level.
To refresh the snapshot, the current transaction must be completed and the monitoring tables must be re-queried
in a new transaction context.
Access Security
SYSDBA and the database owner have full access to all information available from the monitoring tables
Regular users can see information about their own connections; other connections are not visible to them
Warning
In a highly loaded environment, collecting information via the monitoring tables could have a negative impact
on system performance.
MON$ATTACHMENTS
MON$ATTACHMENTS displays information about active attachments to the database.
Notes
All the current activity in the connection being deleted is immediately stopped and all active transactions
are rolled back
The closed connection will return an error with the isc_att_shutdown code to the application
Later attempts to use this connection (i.e., use its handle in API calls) will return errors
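These notes describe what happens after a record is deleted from MON$ATTACHMENTS, which is how a connection is killed through the monitoring tables. A typical sketch that disconnects every connection except the current one:

DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;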
MON$CALL_STACK
MON$CALL_STACK displays the call stack of queries currently executing in stored procedures and triggers.
EXECUTE STATEMENT calls: information about calls made during the execution of an EXECUTE STATEMENT statement does not appear in the call stack.
Example using MON$CALL_STACK: getting the call stack for all connections except the current one:
WITH RECURSIVE
HEAD AS (
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
WHERE CALL.MON$CALLER_ID IS NULL
UNION ALL
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
JOIN HEAD ON CALL.MON$CALLER_ID = HEAD.MON$CALL_ID
)
SELECT MON$ATTACHMENT_ID, MON$OBJECT_NAME, MON$OBJECT_TYPE
FROM HEAD
JOIN MON$STATEMENTS STMT ON STMT.MON$STATEMENT_ID = HEAD.MON$STATEMENT_ID
WHERE STMT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION
MON$CONTEXT_VARIABLES
MON$CONTEXT_VARIABLES displays information about custom context variables.
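A sketch, not from the original text, that sets a session-level variable with RDB$SET_CONTEXT and reads it back from the monitoring table; the column names MON$ATTACHMENT_ID, MON$VARIABLE_NAME and MON$VARIABLE_VALUE are assumptions for illustration:

SELECT RDB$SET_CONTEXT('USER_SESSION', 'MY_VAR', '42')
FROM RDB$DATABASE;

SELECT MON$VARIABLE_NAME, MON$VARIABLE_VALUE
FROM MON$CONTEXT_VARIABLES
WHERE MON$ATTACHMENT_ID = CURRENT_CONNECTION;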
MON$DATABASE
MON$DATABASE displays the header information from the database the current user is connected to.
MON$IO_STATS
MON$IO_STATS displays input/output statistics. The counters are cumulative for each group of statistics.
MON$STAT_GROUP     SMALLINT   0 - database
                              1 - connection
                              2 - transaction
                              3 - statement
                              4 - call
MON$PAGE_READS     BIGINT     Count of database pages read
MON$PAGE_WRITES    BIGINT     Count of database pages written to
MON$MEMORY_USAGE
MON$MEMORY_USAGE displays memory usage statistics.
MON$STAT_GROUP             SMALLINT   0 - database
                                      1 - connection
                                      2 - transaction
                                      3 - statement
                                      4 - call
MON$MEMORY_USED            BIGINT     The amount of memory in use, in bytes. This data is about the high-level memory allocation performed by the server. It can be useful for tracking down memory leaks and excessive memory usage in connections, procedures, etc.
MON$MEMORY_ALLOCATED       BIGINT     The amount of memory allocated by the operating system, in bytes. This data is about the low-level memory allocation performed by the Firebird memory manager (the amount of memory allocated by the operating system) and can be used to monitor physical memory usage
MON$MAX_MEMORY_USED        BIGINT     The maximum number of bytes used by this object
MON$MAX_MEMORY_ALLOCATED   BIGINT     The maximum number of bytes allocated for this object by the operating system
Note
Not all records in this table have non-zero values. MON$DATABASE and objects related to memory allocation
have non-zero values. Minor memory allocations are not accrued here but are added to the database memory
pool instead.
MON$RECORD_STATS
MON$RECORD_STATS displays record-level statistics. The counters are cumulative for each group of statistics.
MON$STAT_GROUP     SMALLINT   0 - database
                              1 - connection
                              2 - transaction
                              3 - statement
                              4 - call
MON$RECORD_SEQ_READS BIGINT Count of records read sequentially
MON$RECORD_IDX_READS BIGINT Count of records read via an index
MON$RECORD_INSERTS BIGINT Count of inserted records
MON$RECORD_UPDATES BIGINT Count of updated records
MON$RECORD_DELETES BIGINT Count of deleted records
MON$RECORD_BACKOUTS BIGINT Count of records backed out
MON$RECORD_PURGES BIGINT Count of records purged
MON$RECORD_EXPUNGES BIGINT Count of records expunged
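Each statistics row is tied to the object it describes through MON$STAT_ID. A sketch (the join column and MON$USER are assumptions for illustration) that shows record-level counters per connection:

SELECT A.MON$ATTACHMENT_ID,
       A.MON$USER,
       R.MON$RECORD_SEQ_READS,
       R.MON$RECORD_IDX_READS,
       R.MON$RECORD_UPDATES
FROM MON$ATTACHMENTS A
JOIN MON$RECORD_STATS R ON R.MON$STAT_ID = A.MON$STAT_ID
ORDER BY R.MON$RECORD_SEQ_READS DESC;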
MON$STATEMENTS
MON$STATEMENTS displays statements prepared for execution.
The STALLED state indicates that, at the time of the snapshot, the statement had an open cursor and was waiting
for the client to resume fetching rows.
Notes
If no statements are currently being executed in the connection, any attempt to cancel queries will not proceed
After a query is cancelled, calling execute/fetch API functions will return an error with the
isc_cancelled code
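Cancellation itself is requested by deleting rows from MON$STATEMENTS. A typical sketch that cancels all statements currently running in other connections:

DELETE FROM MON$STATEMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;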
MON$TRANSACTIONS
MON$TRANSACTIONS reports started transactions.
Appendix F: Character Sets and Collation Sequences
Table F.1. Character Sets and Collation Sequences

Character Set   ID   Bytes per Char   Collation      Language
ASCII           2    1                ASCII          English
DOS850          11   1                DOS850         Latin I (no Euro symbol)
DOS864          18   1                DOS864         Arabic
ISO8859_1       ''   ''               PT_PT          Portuguese
''              ''   ''               NXT_ITA        Italian
''              ''   ''               WIN_CZ_CI_AI   Czech. Case insensitive and accent insensitive
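The names in Table F.1 are what you specify in DDL and in expressions. A sketch using two of the entries above (the table and column names are hypothetical):

CREATE TABLE PEOPLE (
  ID INTEGER NOT NULL PRIMARY KEY,
  FULL_NAME VARCHAR(60) CHARACTER SET ISO8859_1 COLLATE PT_PT
);

-- a collation can also be applied ad hoc in a query
SELECT FULL_NAME FROM PEOPLE
ORDER BY FULL_NAME COLLATE PT_PT;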
Appendix G: License notice
The contents of this Documentation are subject to the Public Documentation License Version 1.0 (the "License"); you may only use this Documentation if you comply with the terms of this License. Copies of the License are available at http://www.firebirdsql.org/pdfmanual/pdl.pdf (PDF) and http://www.firebirdsql.org/manual/pdl.html (HTML).
The Initial Writers of the Original Documentation are: Paul Vinkenoog, Dmitry Yemanov and Thomas Woinke.
Writers of text originally in Russian are Denis Simonov, Dmitry Filippov, Alexander Karpeykin, Alexey
Kovyazin and Dmitry Kuzmenko.
Copyright (C) 2008-2015. All Rights Reserved. Initial Writers contact: paul at vinkenoog dot nl.
Writers and Editors of included PDL-licensed material are: J. Beesley, Helen Borrie, Arno Brinkman, Frank In-
germann, Vlad Khorsun, Alex Peshkov, Nickolay Samofatov, Adriano dos Santos Fernandes, Dmitry Yemanov.
Included portions are Copyright (C) 2001-2015 by their respective authors. All Rights Reserved.
Appendix H: Document History
The exact file history is recorded in the manual module in our CVS tree; see http://sourceforge.net/cvs/?group_id=9028
Revision History
1.1   1 September 2015   H.E.M.B.   Original was in Russian, translated by Dmitry Borodin (MegaTranslations). Raw translation edited and converted to DocBook, as this revision (1.1), by Helen Borrie.
                                    This revision was distributed as a PDF build only, for review by Dmitry Yemanov, et al.
                                    Reviewers, please pay attention to comments like this: Editor's note :: The sky is falling, take cover!