ASTROS
Programmer’s Manual
for Version 20
The use, duplication, or disclosure of the information contained in this document is subject to the
restrictions set forth in your Software License Agreement with Universal Analytics, Inc. Use, duplica-
tion, or disclosure by the Government of the United States is subject to the restrictions set forth in
Subdivision (b)(3)(ii) of the Rights in Technical Data and Computer Software clause, 48 CFR
252.227-7013.
The information contained herein is subject to change without notice. Universal Analytics Inc. does
not warrant that this document is free of errors or defects and assumes no liability or responsibility to
any person or company for direct or indirect damages resulting from the use of any information
contained herein.
TABLE OF CONTENTS
1. INTRODUCTION
ALPHABETICAL INDEX OF
SOFTWARE MODULES
Chapter 1.
INTRODUCTION
There are five manuals documenting ASTROS, the Automated Structural Optimization System.
This Programmer’s Manual gives the detailed description of ASTROS software. It describes the system in
terms of its software components, documents the procedure for installing ASTROS on different host
machines and provides detailed documentation of the application and utility modules that comprise the
procedure. In addition, the data structures of the database entities are presented in detail. This manual
is intended to provide the system administrator with a guide to the existing software and the researcher
with sufficient information to add application modules or otherwise manipulate the data generated by
the ASTROS system. Using standard ASTROS features does not require a familiarity with the informa-
tion contained in this manual except, perhaps, for the entity documentation, which is useful when
additional database entities are to be viewed.
This document, while useful to the advanced engineering user, is directed toward the system administra-
tor or code developer. This is the individual referred to by the term user unless otherwise indicated. The
Programmer’s Manual is structured in this way because all the information needed by the engineering
user is contained as a subset of that needed by the system administrator. As a consequence, however, the
manual is not as simple for the analyst as might be desired. It is anticipated that the advanced applica-
tion user will need to sift through the module documentation and entity documentation to extract the
information needed to modify the ASTROS execution path or to insert additional modules for performing
alternative computations, printing additional results, writing data in alternative formats or other ad-
vanced features that may be performed.
As an introduction to the ASTROS system, Chapter 2 contains a description of the software structure of
ASTROS, both to provide a resource for the system administrator and to be a road map for the applica-
tion user in identifying specific modules relevant to the task of interest. Chapter 2 attempts to introduce
the user to the totality of the ASTROS source code and its interrelationships so that subsequent reading
will be more readily interpretable: in essence, Chapter 2 provides a nomenclature section enabling the
reader to identify (with the inevitable exceptions) the major unit (module) or functional library to which a
particular program belongs. This chapter provides a framework for subsequent chapters in the
Programmer’s Manual.
Chapter 3 is devoted to the installation of the ASTROS system on various host computers. The steps
involved in installing the system are given, followed by detailed documentation of all the machine and
installation dependent code. Sufficient detail is given to allow someone familiar with the target host
system to write a set of machine-dependent code for that machine or site. This documentation is followed
by the description of the System Generation Process (SYSGEN) and its inputs. These inputs, along with
the SYSGEN program, define the system database which, in turn, defines system data to the ASTROS
executive. It is these inputs which the researcher may wish to modify to define a new module, define a
new set of inputs or make other advanced modifications of the system. A brief presentation of the order of
the operations that follow preparation of the machine dependent library is given to complete an installa-
tion of the system.
Chapters 4 through 8 contain the formal documentation of the ASTROS modules. Chapter 4 documents
those portions of the code that are considered to be at the system level. This means that the user need not
be aware of their existence but they are important in the overall system architecture. Further, they
perform many tasks of which the user may want to be aware if any system modifications are to be made.
Chapters 6 through 8 document the utilities that are associated with the ASTROS application modules,
matrix operations and the database. These chapters are the most important from the viewpoint of the
advanced researcher/user in that these are the software tools from which additional capabilities can be put
together with reasonable rapidity. In each case, the executive (MAPOL) and application interface is fully
defined and the algorithm of the utility is outlined.
Chapter 9 contains the documentation of the data structures on the CADDB database that are used by
the ASTROS system. The contents and structure of each database entity are given along with an
indication of the module that generates the data and which modules use the data. For matrix entities,
the relevant chapter of the Theoretical Manual is also referenced since the entity contents are more
clearly understood in the context of the equations that are highlighted there.
Chapter 10 contains a presentation of notes for the ASTROS application programmer. It is felt that the
ASTROS system has been designed with sufficient flexibility that additional features or minor
enhancements can be added as they are desired. Chapter 10, therefore, attempts to address some issues
involved in writing an ASTROS module. Rules and guidelines are given which will help the programmer
avoid complications arising from the interface of the new module, and the application utilities are also
described. Particular emphasis
is placed on the memory management utilities and the database utilities since these require a more
sophisticated interface than the simple application utilities.
A standard documentation format has been adopted for the modules that are described in Chapters 3
through 8. Figure 1 illustrates this format and provides a key for identifying the data that are given for
each module. While this format is brief, enough information is given for the user to identify the principal
action of the module and the role it plays in the standard ASTROS execution. The utility modules are
documented to the extent necessary for an application programmer to use the utility in any new code to
be inserted in the system.
Chapter 2.
ASTROS SOFTWARE DESCRIPTION
ASTROS is a software system made up of two separate executable programs comprising over 1500
independently addressable code segments containing approximately 300,000 lines of FORTRAN. While
this Programmer’s Manual is devoted primarily to the detailed documentation of the separate modules
and subroutines of the ASTROS system, an overview of that system is necessary to understand how the
individual pieces fit together. This section introduces the ASTROS system and describes the software
structure of ASTROS in terms of its major code blocks. Both the system generation program, SYSGEN,
and the main program, ASTROS, are described and their interrelationships are illustrated. This section
provides a resource for the system administrator and a road map for the application programmer to
identify the section documenting modules relevant to the task of interest. This section also provides a
framework to direct the subsequent sections in the Programmer’s Manual.
In the context of the Programmer’s Manual, the structure of the ASTROS system refers to the interrela-
tionships among the major code blocks. Typically, an analysis of the software associated with an individ-
ual code segment will indicate the nature of the task being performed and provide information on the
mechanisms by which intramodular communication takes place. The larger picture, in which the inter-
modular requirements of a particular code segment become clear, is more difficult to grasp. It is that
picture which this section attempts to provide.
The magnitude of the ASTROS system requires that the code segments be grouped into abstract collec-
tions of code such as utility modules and the database in order to be understood. While necessary, these
abstract collections can also obscure the picture of the system since a great deal of the detail is necessar-
ily lost. Nonetheless, since a discussion of each individual code segment is not possible, a set of code
blocks has been defined for the purpose of writing the Programmer’s Manual. Naturally, there are many
ways in which the code segments could be grouped to aid the user in understanding the code segments
and their interactions. For the Programmer’s Manual, the code is grouped in a hierarchical manner by
function: that is, code segments that perform similar tasks at a similar level (relative to the executive
system) are grouped together. Some segments of the code, of course, do not fit clearly into this sort of
functional abstraction. Their role is such that they could lie in more than one group or really don’t belong
to any group that has been defined. These exceptions complicate the issue but do not destroy the utility of
the functional breakdown of the code. When a module could be documented with more than one code
group, this fact is noted in the appropriate manual sections.
The SYSGEN program is a stand-alone executable program that is used to define ASTROS system
parameters. The use of an executable program that is directed by a set of inputs was adopted to provide a
simple mechanism to expand the capabilities of the ASTROS procedure. The inputs, outputs, and use of
this very important feature of the ASTROS architecture are fully documented in Section 3.2. The SYS-
GEN program consists of five items indicated by the numbered boxes in Figure 2. Each of these is briefly
discussed below:
1. The SYSGEN INPUTS consist of a set of files that define certain system level data that is written
by SYSGEN to the system database, SYSDB.
2. SYSGEN is an executable program that reads the SYSGEN INPUTS and creates a set of
database entities on SYSDB that provide data to the ASTROS executive and high level
engineering modules.
3. The SYSTEM DATABASE, SYSDB, consists of an index file, SYSDBIX, and (typically) a single
data file, SYSDB01. The SYSGEN program creates and loads database entities onto the system
database. These entities define:
a. The set of modules which can be addressed through the MAPOL language
b. The set of relational schemata for all relations declared in the MAPOL sequence
c. The compiled standard executive sequence, written in the MAPOL language
d. The error message texts for most run time error messages
4. XQDRIV is a FORTRAN subroutine written by SYSGEN that must be compiled and linked into
the ASTROS executable during the generation of the ASTROS executable image. It is the
XQDRIV subroutine that forms the FORTRAN link between the MAPOL language and the
application/utility modules.
5. The SYSGEN OUTPUT FILE is a listing generated by SYSGEN that echoes all the data stored
on the system database. As such, it provides a resource for the application user and the system
administrator documenting the current ASTROS system. Since this file represents what is, by
definition, the ASTROS program, any problems that arise or questions in the documentation
should be checked against the data in this file. If any discrepancies exist, either the
documentation is in error or the SYSGEN inputs are in error. In any case, the ASTROS
program is directed by the SYSGEN data.

[Figure 2. The SYSGEN and ASTROS programs. The numbered boxes are: (1) the SYSGEN INPUT, read by (2) the SYSGEN program, which writes (3) the SYSTEM DATABASE files SYSDBIX and SYSDB01, (4) the XQDRIV subroutine, and (5) the SYSGEN OUTPUT file. The ASTROS program, built with XQDRIV, reads the INPUT STREAM and the system database and writes the OUTPUT FILE and the RUN-TIME DATABASE files RUNDBIX, RUNDB01, RUNDB02.]
As illustrated in Figure 2, the XQDRIV subroutine and SYSDB are also part of the ASTROS program. The
XQDRIV subroutine is needed to generate the executable image and the SYSDB files MUST be available
on a read-only basis to the ASTROS program whenever an ASTROS job is run. The ASTROS program is
comprised of the following:
1. XQDRIV is a FORTRAN subroutine written by SYSGEN that must be compiled and linked into
the ASTROS executable during the generation of the ASTROS executable image. It is the
XQDRIV subroutine that forms the FORTRAN link between the MAPOL language and the
application/utility modules.
2. The SYSTEM DATABASE, SYSDB, contains database entities which define sets of data
establishing the extent of some of the capabilities of the ASTROS program. ASTROS requires
these files on a read-only basis for every execution of the system.
3. The ASTROS program is the main executable image associated with the ASTROS procedure. It
is comprised of all the executive, database, utility, and engineering application modules that are
needed to perform the automated multidisciplinary optimization tasks.
4. The INPUT STREAM is the user’s input file containing the directives to execute the ASTROS
program. The User’s Manual is devoted to its documentation.
5. The OUTPUT FILE is the user’s output file, containing those results
of the ASTROS execution that were requested to be printed or that are printed by default.
6. The RUN-TIME DATABASE consists of one index file and one or more data files (called,
respectively, RUNDBIX, and RUNDB01, 02, etc., in Figure 2) that contain the database
generated at run time by ASTROS. Assuming an execution based on the standard MAPOL
sequence, the run-time database will contain some or all of the entities that are documented in
Section 9 of this manual. The application user can direct whether this database is to be saved
or deleted on termination of the execution. The Interactive CADDB Environment (ICE)
(AFWAL-TR-88-3060, August 1988) can be used to view these data, prepare reports or port the
data into other applications.
Figure 3 illustrates the ASTROS source code blocks, including the
highly developed executive system that comprises the major ASTROS code block. Also shown are the five
groups of routines which are used by the SYSGEN and ASTROS programs.
The naming conventions used within each code block are worthy of some discussion since they are useful
in identifying an unknown routine in a piece of ASTROS software. Whenever possible, a set of consistent,
meaningful mnemonics was adopted to identify groups of code that belong together, either functionally or
logically. Where such conventions have been adopted, they are indicated in the discussion of the code
block. One complication to such conventions is the use of existing source code as a resource for the
ASTROS program. When major code units were used from existing software, the convention was not
typically enforced. As a result, there are exceptions to the nomenclatures adopted in some of the source
code blocks presented in this section.
Each of the source code blocks is now briefly discussed by reference to the name assigned to it in Figure 3
and its related Programmer’s Manual section is indicated.
1. SYSGEN is a very small code block containing the SYSGEN driver (SYSGEN), a set of four
output routines (xxxOUT) to print the SYSGEN output file and five routines (TIMxxx) that
compute the timing constants for the large matrix utilities. The SYSGEN program has a single
execution path which is documented in Section 3.2.
2. The ASTROS executive is the code block containing the ASTROS main driver program, ASTROS,
and the ASTROS executive system software. The executive system is embodied in the routines
beginning with the mnemonics XQxxxx. In addition to the pure executive system routines, the
executive initialization routines for the database (DBINIT) and the memory manager (MMINIT)
are also located in this code block. Finally, the general initialization routine PREPAS and the
MAPOL compiler software are considered, for the purposes of the Programmer’s Manual, to be
part of the executive system. These routines are documented in Section 5.
3. The DATABASE code block contains all the software related to the application interface to the
database and memory management systems for the ASTROS procedure. This software is further
subdivided into five groups of code that represent the application interface to the database and
memory manager. These groups are:
a. The General Utilities that comprise the database application interface applicable to all
database entity types. These routines are denoted by the mnemonics DBxxxx and are
documented in Section 8.2.
b. The Memory Management Utilities that comprise the application interface to the ASTROS
dynamic memory manager. These routines are denoted by the mnemonics MMxxxx and are
documented in Section 8.3.
c. The Matrix Utilities that comprise the database application interface applicable to matrix
entities. These routines are denoted by the mnemonics MXxxxx and are documented in
Section 8.4.
d. The Relation Utilities that comprise the database application interface applicable to
relational entities. These routines are denoted by the mnemonics RExxxx and are docu-
mented in Section 8.5.
[Figure 3. ASTROS source code blocks: SYSGEN (SYSGEN main, BDTOUT, ERROUT, MODOUT, RELOUT); ASTROS EXECUTIVE (ASTROS main, DBINIT, MMINIT, PREPAS, MAPOL, DBTERM, Executive (XQ)); CADDB DATABASE (General (DB), Memory (MM), Matrix (MX), Relation (RE), Unstructured (UN)); UTILITIES (General (UT), others); MACHINE DEPENDENT (General (XX), Database (DBMD)).]
e. The Unstructured Utilities that comprise the database application interface applicable to
unstructured entities. These routines are denoted by the mnemonics UNxxxx and are
documented in Section 8.6.
4. The MACHINE DEPENDENT code block contains all the software in the ASTROS system that
has been designated machine dependent. This software supplies the interface between the host
computer and the ASTROS system. It is further subdivided into two groups of code:
a. The General Utilities, comprising the machine dependent code used throughout the AS-
TROS system. These routines are denoted by the mnemonics XXxxxx and are documented
in Section 3.1.1.
b. The Database Utilities, comprising the database machine dependent code used primarily
by the database software. These routines are denoted by the mnemonics DBMDxx and are
documented in Section 3.1.2.
5. The UTILITIES code block contains all the machine independent application utilities developed
for the ASTROS system. This software is a suite of functions that are useful in many places in
the code. They have therefore been formalized to the extent that they may be used by any
ASTROS application routine. The majority of these routines are denoted by the mnemonics
UTxxxx with exceptions corresponding to those in-core utilities that came from COSMIC/NAS-
TRAN. These are documented in Section 6.
6. The LARGE MATRIX UTILITIES code block contains the utilities developed for the ASTROS
system to operate on large matrices stored on the ASTROS database (rather than matrices
stored in memory). This software comprises a suite of matrix operations that have been
formalized to the extent that they may be used by any ASTROS application routine and by the
ASTROS executive system. There is no consistent naming convention for these routines since
they have been derived from their COSMIC/NASTRAN counterparts. The utilities are docu-
mented in Section 7.
7. The APPLICATION MODULES code block is the largest code block within ASTROS. It contains
the engineering and application modules that support the analysis and optimization features
of the ASTROS system. Each of these modules has been designed to be independent of the other
application modules to the maximum extent possible. Typically, consistent naming conventions
have been used for routines within each module. Because of the disparate code resources that
were used in the development of ASTROS, however, no globally consistent naming convention
was adopted. Section 5 documents each of the modules in the application library.
Rather than write and maintain separate code blocks to perform similar functions, SYSGEN makes use
of the suite of general utilities in the UTILITIES CODE BLOCK. The machine dependent code block is
also shared between ASTROS and SYSGEN.
One of the tasks of SYSGEN is to compile and store the standard executive sequence (written in the
MAPOL language) onto the system database. Therefore, the SYSGEN program makes use of the AS-
TROS EXECUTIVE code block to supply the MAPOL compiler. In addition, the SYSGEN driver must
perform the executive functions to initialize the memory manager and the database. Therefore, the
MMINIT and DBINIT routines from the ASTROS EXECUTIVE code block are also used by SYSGEN.
Chapter 3.
SYSTEM INSTALLATION
A software system of the magnitude of ASTROS requires a formal installation of the system on each host
computer. For ASTROS, the installation process can be broken into three distinct phases. In the first
phase, the ASTROS/host interface is defined and the proper machine dependent code is written to create
that interface. The second phase involves the generation of the executable image of the SYSGEN pro-
gram and its execution. Finally, the ASTROS executable image is generated using the outputs from the
SYSGEN program. The purpose of this section is to document all the machine dependent code in a
generic manner and to indicate which parameters and routines are most likely to be site dependent and
which are truly machine dependent. In the typical case, the system manager at each facility will be given
the machine dependent library for the host system that is to be used. For completeness, however,
sufficient detail is presented to allow someone familiar with the host system to write a new set of
machine dependent code.
Following the formal documentation of the machine dependent interface is a discussion of the SYSGEN
program and its inputs. The SYSGEN program is important in that it provides the advanced analyst/user
with a mechanism to add features to the system. It is also important for system installation in that part
of its output is required before the executable image of the ASTROS procedure can be generated. Again,
in the typical case the user will be given a proper set of SYSGEN outputs but the utility of SYSGEN in
increasing the capabilities of the system makes its complete documentation very useful to the majority of
ASTROS users. Finally, a brief section is included to present the total ASTROS installation in a step by
step manner to give an overall view of the process.
The information presented in these sections serves as a guide to the installation of ASTROS on alterna-
tive host machines, but the nature of the machine dependencies makes it impossible to anticipate all
contingencies that may arise. The installation of the ASTROS procedure on a new host computer can
therefore be a complex task despite the relatively small number of machine dependent routines.
The machine dependent code is separated into two libraries: the general library, denoted by names
starting with XX, and the database machine dependent library, denoted by names starting with DBMD.
The general library consists of timing routines, bit manipulation routines, some character string manipu-
lation routines, a random number generator and a BLOCK DATA subroutine containing a number of
machine and installation dependent parameters. The timing routines and the random number generator
are site dependent in that each facility typically has a library of such routines. The BLOCK DATA contains
such parameters as the open core size, the definition of logical units, output paging parameters and other
site dependent parameters. The remainder of the routines are very simple and typically do not vary
substantially from site to site, although they are different between machines. In some cases, the XX-rou-
tines are written in standard FORTRAN and are in the machine dependent library only because some
host systems provide special routines to perform these tasks.
The database machine dependent library (DBMD) is much more complex than the general machine de-
pendent library. The complication arises because of the flexibility of the machine dependent interface and
because of the nature of the interface. Unlike the XX library, the DBMD library deals with file structures
and I/O to the host system and with memory management. These issues are highly machine dependent
and are further complicated because the translation of machine independent parameters like file names
to the actual host system file name may need to be very flexible depending on the nature of the local host
system. The ASSIGN DATABASE entry in ASTROS allows the user to enter machine dependent parame-
ters associated with the data base file attachment. A major task in writing the DBMD library is the
definition of these parameters and the rules for their use: in general, they are used to enable the user to
modify the default file attributes (for example, block sizes) or their location on a physical device (such as
a disk volume). The flexibility inherent in the machine dependent interface can cause difficulties in writing
the DBMD code, however, in that the code developer may find it hard to differentiate those aspects of the
interface that are free to be redefined from those that are required by ASTROS. In the authors’ experi-
ence, however, the task has proven to be tractable for all host systems used thus far by using the existing
routines as a model. The reader should be under no illusion, however, that the task of writing the DBMD
machine dependent library is simple.
The following sections document the XX and DBMD machine dependent libraries in a machine independent
manner. For each routine that is essential to the ASTROS interface, the calling sequence and design
requirements are listed. It is very important to appreciate that the actual machine dependent interface
may require additional routines that are not documented in these sections. The only routines that are
shown here are those that are referenced by the machine independent portions of ASTROS. By definition,
it is these routines that constitute the machine dependent interface. It is often desirable and sometimes
necessary for the machine dependent code to call other machine dependent routines. These internal
interfaces are not documented in this report because of their high degree of dependence on particular
host machines and/or site configurations. It is completely up to the discretion of the code developer to
decide whether such routines are desirable and what tasks they should perform. In fact, there are no
requirements of any kind for the machine dependent code except those imposed by the definition of the
interface (calling sequence and design assumptions). It is that very flexibility that makes the machine
dependent code generation difficult.
The following sections document each of the general machine dependent routines contained in the XX
library. These routines tend to be highly site dependent as well as machine dependent, but are relatively
straightforward to develop. Their functions are simple and do not deal with the major machine depend-
encies like I/O and word sizes.
Method:
DOUBLE returns .TRUE. if the machine precision is double or .FALSE. if it is single. ASTROS
produces all matrix entities in the machine precision and assumes that all matrix entities are of that precision. Mixing
single and double precision matrices is not supported by ASTROS code. DOUBLE should be used by all
application modules that use matrix entities.
Design Requirements:
1. All matrix operations must be either single or double, not mixed.
Error Conditions:
None
Method:
The bit manipulation routines all assume that the BIT identifier can vary from 1 to any positive integer.
A consistent set of assumptions on the correspondence of BIT to a word/bit combination in ARRAY must
be made for all bit routines.
Design Requirements:
1. For machine independent use, application program units should size ARRAY based on 32 or fewer
bits per word.
Error Conditions:
None
Method:
The bit manipulation routines all assume that the BIT identifier can vary from 1 to any positive integer.
A consistent set of assumptions on the correspondence of BIT to a word/bit combination in ARRAY must
be made for all bit routines.
Design Requirements:
1. For machine independent use, application program units should size ARRAY based on 32 or fewer
bits per word.
Error Conditions:
None
Method:
The bit manipulation routines all assume that the BIT identifier can vary from 1 to any positive integer.
A consistent set of assumptions on the correspondence of BIT to a word/bit combination in ARRAY must
be made for all bit routines.
Design Requirements:
1. For machine independent use, application program units should size ARRAY based on 32 or fewer
bits per word.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
On the first call to XXCPU, the utility must initialize the system CPU timer and return 0.0 elapsed
seconds. On subsequent calls, the elapsed CPU time in seconds is returned.
Design Requirements:
None
Error Conditions:
None
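The first-call semantics of XXCPU can be sketched as follows. This is a Python model of the behavior described above, not the FORTRAN implementation; the use of a process CPU clock is an assumption standing in for the host timing library:

```python
import time

_start = None  # internal timer state, as XXCPU must remember its first call

def xxcpu():
    """Model of XXCPU: the first call initializes the CPU timer and returns
    0.0 elapsed seconds; subsequent calls return elapsed CPU seconds."""
    global _start
    if _start is None:
        _start = time.process_time()
        return 0.0
    return time.process_time() - _start
```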
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
The XXINIT routine is typically used to set machine dependent parameters relating to error handling
by the host machine, to perform any initialization of machine dependent parameters that must be done
at run time on certain machines, and to perform any other machine or installation dependent actions
that may be useful. The XXINIT routine is called by the ASTROS main driver as the first executable
statement of the ASTROS program.
Design Requirements:
None
Error Conditions:
None
Method:
This routine may be written in standard FORTRAN 77 using the internal file feature to write the integer
onto the character string. It is often more efficient to crack the integer into its constituent digits. Some
machines have local utilities that may be used.
Design Requirements:
None
Error Conditions:
None
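The digit-cracking approach mentioned above can be sketched as follows. The routine name and field-width convention here are illustrative assumptions; the real routine is FORTRAN and its exact calling sequence is not reproduced in this section:

```python
def int_to_chars(n, width):
    """Crack an integer into its constituent digit characters and
    right-justify them in a field of the given width."""
    digits = []
    negative = n < 0
    n = abs(n)
    while True:                     # peel off digits least significant first
        digits.append(chr(ord('0') + n % 10))
        n //= 10
        if n == 0:
            break
    if negative:
        digits.append('-')
    return ''.join(reversed(digits)).rjust(width)
```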
Method:
The machine independent use of this function requires that NBIT be less than the smallest number of
bits in a word for any target machine (typically 32).
Design Requirements:
None
Error Conditions:
None
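The NBIT restriction above (NBIT strictly less than the smallest host word size, typically 32) can be modeled as below. The routine name, the left-shift behavior, and the error check are assumptions for illustration only, since the calling sequence is not reproduced here:

```python
WORD_BITS = 32                 # assumed smallest word size across target hosts
MASK = (1 << WORD_BITS) - 1    # confines results to a single word

def shift_left(word, nbit):
    """Shift a one-word value left by nbit bits, discarding bits that
    leave the word; nbit must satisfy 0 <= nbit < WORD_BITS."""
    if not 0 <= nbit < WORD_BITS:
        raise ValueError("NBIT must be less than the word size")
    return (word << nbit) & MASK
```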
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
Returns uniformly distributed random numbers.
Design Requirements:
None
Error Conditions:
None
Method:
The machine independent use of this function requires that NBIT be less than the smallest number of
bits in a word for any target machine (typically 32).
Design Requirements:
None
Error Conditions:
None
Method:
This routine may be written in standard FORTRAN 77 using the internal file feature to write the real
onto the character string. It is often more efficient to crack the real into its constituent digits. Some
machines have local utilities that may be used.
Design Requirements:
None
Error Conditions:
None
The following sections document each of the database machine dependent routines contained in the DBMD
library. These routines tend to be site independent, but are highly machine dependent. Their develop-
ment on a new host system can become quite complex depending on the desired sophistication of the
interface. These routines deal with file structures, I/O, and memory management as well as certain CPU
critical string manipulation functions.
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
Phase 1 of the database configuration normally involves the determination of default values for the
database. The values that can be changed are defined in the /DBCONS/ common block. These values
can be hard coded in this routine, hard coded in the DBBD block data routine or read from a configuration
file.
The only required function of the routine is to return the number of words in the system dependent
portion of the DBNT.
Design Requirements:
None
Error Conditions:
None
Method:
When phase 2 of the database configuration is performed, the DBNT table has been allocated and partially
initialized. This routine must initialize the system dependent portion of the table. The location of this
data can be found as follows.
DBENTSD = Z(DBNT + DBNSD)
Z(DBENTSD + xx) = machine dependent data
It is also the responsibility of this routine to make sure that all of the following variables in /DBCONS/
have legal values.
DBDFIL default number of data files
DBMFIL maximum number of data files
DBDEFD default data file block size
DBDEFI default index file block size
DBMAXE maximum number of ENT entries
DBMAXD maximum number of DBNT entries
DBMAXN maximum number of NST entries
DBALGN required buffer alignment
Design Requirements:
None
Error Conditions:
None
Method:
This subroutine should return the current date and time in the appropriate locations. Each value
returned should be stored in two integer words with four hollerith characters per word.
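The two-word, four-characters-per-word packing can be pictured with a short C sketch (an illustrative analog only; the real routine is host-dependent FORTRAN and the function name here is ours):

```c
#include <stdint.h>
#include <string.h>

/* Pack an 8-character field into two integer words, four (hollerith)
   characters per word.  The byte order in memory is what matters for
   hollerith data, not the numeric value of the words. */
static void pack_hollerith(const char text[8], int32_t words[2])
{
    memcpy(&words[0], &text[0], 4);
    memcpy(&words[1], &text[4], 4);
}
```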
Design Requirements:
None
Error Conditions:
None
Method:
The DBMDER routine is intended to be used in two ways. The first, denoted by a blank character string
on input, is to activate any machine dependent error handling. This is the interface to the DBMDER
routine from the machine independent library. For example, the DBMDER routine typically invokes the
host dependent mechanism to obtain a traceback to assist in locating the source of an error. The second
interface, using nonblank character strings on input, is intended for use by the machine dependent
(DBMD) library. In this function, the DBMDER routine typically writes out error messages identifying the
nature of the (machine dependent) error condition. This is useful for error checking the file naming
conventions, host I/O limitations, and other host dependent user interfaces to the ASTROS system.
Design Requirements:
None
Error Conditions:
None
Method:
On certain machines (notably the VAX), the bytes in an integer word are stored in order from right to left.
When hollerith data are used, this feature complicates the comparison of two hollerith words. This
routine is called to reorder the bytes in an integer word to be left to right, independent of the storage
format on the machine. On machines that do not swap bytes, the DBMDFP function value should be set
equal to the INUM value.
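On a byte-swapping host the reordering amounts to reversing the four bytes of the word, as this C sketch shows (illustrative only; the function name is ours, and on a non-swapping machine the routine would simply return INUM unchanged):

```c
#include <stdint.h>

/* Reverse the four bytes of a word so that hollerith data compare
   left to right regardless of the host storage order. */
static uint32_t swap_bytes(uint32_t inum)
{
    return  (inum >> 24)
          | ((inum >> 8)  & 0x0000FF00u)
          | ((inum << 8)  & 0x00FF0000u)
          |  (inum << 24);
}
```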
Design Requirements:
None
Error Conditions:
None
Method:
If a database error occurs, portions of the in core control tables are dumped to help diagnose the problem.
It is most often desirable to see this data in a combined hex/octal and character format. This routine
usually dumps the desired data in the following form:
OFFH OFFD ............hex/octal data .....character data
OFFH - hex/octal offset of the data from /MEMORY/
OFFD - decimal offset of the data from /MEMORY/
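The dump line can be built as in the following C sketch (illustrative: the field widths, separator spacing, and the function name are assumptions, not the actual DBMD layout):

```c
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* Format one dump line: hex offset, decimal offset, the data bytes in
   hex, then the same bytes rendered as printable characters. */
static void dump_line(char *out, size_t outsz,
                      unsigned offset, const unsigned char *data, int n)
{
    char hex[64] = "", chars[32] = "";
    int i;

    for (i = 0; i < n; i++) {
        char byte[4];
        snprintf(byte, sizeof byte, "%02X ", data[i]);
        strcat(hex, byte);
        chars[i] = isprint(data[i]) ? (char)data[i] : '.';
    }
    chars[n] = '\0';
    snprintf(out, outsz, "%08X %10u  %s %s", offset, offset, hex, chars);
}
```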
Design Requirements:
None
Error Conditions:
None
Method:
Phase 1 of the I/O initialization is responsible for determining two values: the number of data files in
the database and the number of system dependent words required for each file in the DBDB. The DBINIT
call is provided with an argument called USRPRM. The contents of this character string are completely
machine dependent and can be used to specify any special processing. Examples of these fields are
provided in Section 1 of the User’s Manual.
The most difficult function of this routine is to determine the number of data files for a database. One
of the following approaches can be used.
1. If the database has a status of NEW or TEMP, the number of data files is either the default or entered
using the USRPRM.
2. If the database has a status of OLD, the number of data files can either be a hard coded value (usually
1) or can be determined by opening files with the appropriate names until an open fails.
For OLD databases this routine should also determine the index and data file block sizes. This can
usually be done by one of the following two methods.
1. Inquire as to the physical attributes of the file to determine the block sizes.
2. Do a sequential read of the first block of the index file and extract the index and data file block sizes
that are stored there.
Design Requirements:
None
Error Conditions:
None
Method:
When phase 2 of the database I/O initialization is performed, the DBDB table has been allocated and
partially initialized. This routine must initialize the system dependent portion of the table. There are
also several words in the machine independent portion of the DBDB that must be initialized for each
index and data file. The following code shows how these words are located.
For the Index file:
DBDBO = DBDB + DBDIFB
This routine will typically do any physical open or assign calls that are required to make all the index
and data files for this database available for processing.
Design Requirements:
None
Error Conditions:
None
Method:
The routine determines the absolute address of BASE and modifies the offset value to account for the
precision of the desired address. For example, the VAX machine returns the byte address from the system
utility %LOCF. To obtain the word address for the VAX, the byte address is divided by the number of
bytes per word (four). A check is made to determine if the byte address is an even multiple of four and/or
eight to check the single and double word alignment.
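The byte-to-word address conversion and the alignment checks translate to C as follows (an illustrative sketch: the function name and the particular STAT return convention are assumptions; four bytes per word follows the VAX example above):

```c
#include <stdint.h>

/* Derive a word address from a byte address (four bytes per word) and
   report the alignment: 0 = double-word aligned, 1 = single-word
   aligned only, -1 = misaligned (an error STAT condition). */
static int word_address(const void *base, uintptr_t *word_addr)
{
    uintptr_t byte_addr = (uintptr_t)base;

    *word_addr = byte_addr / 4;
    if (byte_addr % 8 == 0) return 0;
    if (byte_addr % 4 == 0) return 1;
    return -1;
}
```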
Design Requirements:
1. This routine is identical to DBMDLF except that the BASE array in this routine is character rather
than integer.
Error Conditions:
1. On certain machines, there is a requirement that the memory addresses be aligned on single and/or
double word boundaries. This routine should perform these checks and return the proper STAT value
if the required alignments are not met.
Method:
The routine determines the absolute address of BASE and modifies the offset value to account for the
precision of the desired address. For example, the VAX machine returns the byte address from the system
utility %LOCF. To obtain the word address for the VAX, the byte address is divided by the number of
bytes per word (four). A check is made to determine if the byte address is an even multiple of four and/or
eight to check the single and double word alignment.
Design Requirements:
1. This routine is identical to DBMDLC except that the BASE array in this routine is integer rather than
character.
Error Conditions:
1. On certain machines, there is a requirement that the memory addresses be aligned on single and/or
double word boundaries. This routine should perform these checks and return the proper STAT value
if the required alignments are not met.
Method:
The result, DBMDOF, is a FORTRAN index such that the same memory location is referenced by ARRAY1(DBMDOF)
and ARRAY2(1). In the DBOPEN call, the user provides a 20-word INFO array. The last 10 words
of this block are available for any required user data. These 10 words can be modified anytime up to the
DBCLOS call for the entity. Since the INFO array is not passed on the DBCLOS call, the DBOPEN call must
remember where it is for later access by the DBCLOS call. The DBMDOF function allows the database to
remember where the INFO block is by saving its location relative to the /MEMORY/ common block at
open time.
The actual implementation of the call usually requires some method for obtaining the actual address
for a subroutine argument.
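The offset computation itself is simple pointer arithmetic, as in this C sketch (illustrative; the function name is ours, and the +1 reflects FORTRAN's 1-based indexing):

```c
#include <stddef.h>

/* Compute a FORTRAN-style index such that base[index - 1] and item[0]
   are the same memory location, i.e. remember where a caller's array
   sits relative to a common block. */
static ptrdiff_t fortran_offset(const int *base, const int *item)
{
    return (item - base) + 1;
}
```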
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
The function of this routine is to read a block from the database. The first step is to determine the
database file and block number to be read. The following code will perform this.
IF(FILE .LT. 0) THEN
   IBLK = BLK/DBMFIL
   IFILE = BLK - IBLK*DBMFIL
ELSE
   IBLK = BLK
   IFILE = FILE
ENDIF
The block should then be read into the I/O buffer using the appropriate calls for the target system. The
machine independent DBDB data, referenced from DBDBO, and machine dependent DBDB data, referenced
from DBDBSD can be obtained from the buffer header. The number of words to transfer, BLKSIZ, is
obtained from the DBDB.
IF(IFILE .EQ. 0) THEN
   DBDBO = DBDB + DBDIFB
   BLKSIZ = Z(DBDB+DBDIBS)
ELSE
   DBDBO = DBDB + DBDDTA + (IFILE-1)*LENDDE
   BLKSIZ = Z(DBDB+DBDDBS)
ENDIF
BUFIO = Z(BUFHD+BFIOBF)
DBDBSD = Z(DBDBO+DBDOSD)
After the I/O operation, the following two words of the buffer header should be updated:
Z(BUFHD+BFPBLK) = IBLK*DBMFIL + IFILE
Z(BUFHD+BFDBDB) = DBDB
Design Requirements:
None
Error Conditions:
None
Method:
This routine is called at program termination to do any system dependent termination processing for
each database. It is not required to do anything. Typically it will close all database files.
Design Requirements:
None
Error Conditions:
None
Method:
The function of this routine is to write a block to the database. The processing is similar to DBMDRD and
the same information is available to the routine.
When writing this routine one special case must be considered. Because of the dynamic way in which
database blocks are allocated and used, it can never be assumed that the database blocks are appended
in sequential order. For example, block 10 may be written before block 9. If this situation is not allowed
on the target system, then this routine should write dummy blocks to fill any gap before writing the
target block. The contents of these dummy blocks are unimportant.
After the I/O operation, the "buffer modified" flag in the buffer header should be set to zero.
Z(BUFHD+BFMOD) = 0
Also, this routine should maintain the word in the machine independent portion of the DBDB which
indicates the number of blocks on the physical file.
Z(DBDBO+DBDONB) = MAX(IBLK,Z(DBDBO+DBDONB))
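The gap-filling rule and the block-count bookkeeping can be sketched in C (illustrative only: the function name, the FILE-based I/O, and the fixed dummy-block buffer are assumptions; nblocks plays the role of the Z(DBDBO+DBDONB) word):

```c
#include <stdio.h>

/* Write block iblk (1-based) of blksiz bytes.  If the target block
   lies beyond the current end of the file, append dummy blocks
   (contents irrelevant) until the target position exists, then write
   the real block and maintain the block count.  Assumes blksiz <= 512. */
static int write_block(FILE *f, long blksiz, long iblk,
                       const char *data, long *nblocks)
{
    static const char zero[512] = {0};

    /* position at the current end of data, then fill any gap */
    if (fseek(f, *nblocks * blksiz, SEEK_SET) != 0) return -1;
    while (*nblocks < iblk - 1) {
        if (fwrite(zero, 1, (size_t)blksiz, f) != (size_t)blksiz) return -1;
        (*nblocks)++;
    }
    if (fseek(f, (iblk - 1) * blksiz, SEEK_SET) != 0) return -1;
    if (fwrite(data, 1, (size_t)blksiz, f) != (size_t)blksiz) return -1;
    if (iblk > *nblocks) *nblocks = iblk;   /* blocks on the file */
    return 0;
}
```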
Design Requirements:
None
Error Conditions:
None
unused 01 02 03 ... 31
This bit numbering scheme, in which the bits are numbered 01 through 31 from left to right and the
leftmost bit is unused, must be maintained regardless of the bit numbering scheme of the target
system.
The DBMDZB routine should return the first zero bit, starting from left to right. If all bits are one, then
a –1 is returned. This function typically uses the FORTRAN BTEST function (if one is provided) with
appropriate calculations to use the proper bit numbering scheme.
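A C analog of this scan makes the numbering calculation concrete (illustrative; the function name is ours, and bit n counted from the left corresponds to host bit 31 - n counted from the low end of a 32-bit word):

```c
/* Return the number (01..31, left to right) of the first zero bit in
   the word, or -1 if bits 1 through 31 are all one.  Mirrors the
   FORTRAN BTEST-based approach described in the text. */
static int first_zero_bit(unsigned int word)
{
    int n;
    for (n = 1; n <= 31; n++)
        if ((word & (1u << (31 - n))) == 0u)
            return n;
    return -1;
}
```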
Design Requirements:
None
Error Conditions:
None
The purpose of SYSGEN is to create a system database (SYSDB) defining system parameters through
the interpretation of several input files. Also, a FORTRAN routine is written by SYSGEN that provides
the link between the ASTROS executive system and the application modules that comprise the run-time
library of the procedure. This program unit is then linked with the system during the assembly of the
ASTROS executable image. The resultant procedure makes use of the system database as a pool of data
that defines the system at run time. These data are
1. the contents of the ASTROS run-time library of MAPOL addressable modules including both
utility and application modules, usually delivered as MODDEF.DAT or MODDEF.DATA;
2. the ASTROS standard executive sequence composed of MAPOL source code statements, usually
delivered as MAPOLSEQ.DAT or MAPOLSEQ.DATA;
3. the set of bulk data entries interpretable by the system and defined through the specification
of bulk data templates to be interpreted by the ASTROS Input File Processor (IFP), usually
delivered as TEMPLATE.DAT or TEMPLATE.DATA;
4. the set of relational schemata used by the executive system to satisfy the declaration of relational
variables in the MAPOL sequence without forcing the user to explicitly define each schema at
run time, usually delivered as RELATION.DAT or RELATION.DATA; and
5. the set of error message texts from which the UTMWRT system message writer utility builds error
messages at run time, usually delivered as SERRMSG.DAT or SERRMSG.DATA.
There is an input file for each of these data which is interpreted by SYSGEN and used to write data to
SYSDB in particular formats. These database entities are then used by the ASTROS executive system,
application modules and utilities to perform certain functions. Since these program units are designed to
interpret the set of data that are present in the SYSDB entities, they are flexible in that virtually any
changes to the set of data can be accommodated without modification of the software that uses the data.
The following sections each contain a description of a SYSGEN input file and of the SYSDB database
entities that are filled with the corresponding data. These input files contain the definition of the system
as developed for the ASTROS procedure. The advanced user may, through the appropriate changes to
these inputs, add new modules, add new error messages that may be useful as part of the additional
module(s), add new bulk data inputs, add new relational schemata to those that exist or add new
attributes to an existing schema. Finally, the standard solution algorithm itself can be modified, either to
include (as a permanent modification) a new feature or to modify an existing capability. The advanced
user is cautioned, however, that the standard sequence represents a very tightly interwoven set of
functions and any changes should be carefully considered for their ramifications on the multidisciplinary
features of the system as it is currently defined.
The functional modules form the computational heart of the ASTROS system. A sequential file contains
character data records that define the following information for each module:
2. To allow the validation of module calls including type checking of the input parameters
3. To define the way in which the results of a module are used and to provide the actual FORTRAN
link to activate a module
The format used to provide these data is described in the following section.
MODULE(1) ENTRY
MODULE(2) ENTRY
...
where the first line consists of MODNAME and NPARM and the second line consists of MODTYPE and the first
19 PARMTYPEs. The PARMTYPEs continue, 20 per line, on subsequent lines, as required to supply NPARM
PARMTYPEs.
The lines following the PARMTYPEs consist of FORTRAN code (including, but not requiring,
comments or blank lines) which implements the module interface to MAPOL.
The last line of a module entry is the word END starting in column 1.
Procedures
For procedures, a call is made to the ASTROS name with a parameter list having symbolic
arguments of the correct types. For example, if a module has the following parameters (in order)
with the specified data types:
3 Integers
1 Logical
1 Integer
1 Relation
1 Integer
1 Optional integer
1 Optional matrix
1 Matrix
1 Optional matrix
1 Logical
1 Matrix
1 Optional matrix
2 Matrices
1 Unstructured
2 Optional matrices
1 Optional logical
2 Optional matrices
1 Optional unstructured
AROSNSDR 23
102 1 1 1 4 1 7 1 -1 -8 8 -8 4 8 -8 8 8 9 -8 -8
-4 -8 -8 -9
C
C PROCESS ’AROSDR’ MODULE - SAERO CONSTRAINT SENS. DRIVER
C
CALL AROSDR ( IP(1), IP(2), IP(3), LP(4), IP(5), EP(6), IP(7),
1 IP(8), EP(9), EP(10), EP(11), LP(12), EP(13),
2 EP(14), EP(15), EP(16), EP(17), EP(18), EP(19),
3 LP(20), EP(21), EP(22), EP(23) )
END
The subscripted array elements are used by the ASTROS executive to pass the actual parameter values.
The subscripts must correspond to the order of the arguments in the MAPOL calling list. The following
array names are used:
IP – Integer Parameter
RP – Real Parameter
CP – Complex Parameter
LP – Logical Parameter
EP – Entity name
This method passes only scalar parameters to the FORTRAN driver. No mechanism is available to pass
FORTRAN arrays.
Functions
For a function, the resultant value is returned to MAPOL on the execution stack. To accomplish this, the
programmer must assign the numeric function result to the FORTRAN variables IOUT, ROUT, and COUT
that define the number to the executive. This is analogous to a function in FORTRAN, in which the value
must be assigned to the function name within the function unit.
If the value is integer, only IOUT(2) must be defined. If the value is real, only ROUT(2) must be defined.
If the value is complex, then either COUT must be defined, or ROUT(2) and ROUT(3) must be defined. What
is important is that the data in the second and third words be consistent with the type in IOUT(1).
Further, if the function operation depends on the types of the arguments (as do the FORTRAN generic
functions, e.g., MAX, SIN), the TP array
may be read in the module definition code to determine the type of argument passed. The PARMTYPE
definition should then be 10 to allow any type to be passed. TP uses the same definitions as PARMTYPE,
except the actual type of the argument is stored. In other words, TP contains a 1, 2, or 3 in the location
associated with type 10 parameters, depending on the actual type passed in the current call.
For example, if the sine function is desired, the following module definition would be used:
SIN 1
100 10
C
C SIN - RETURN THE SINE OF THE ARGUMENT
C
IF( TP(1) .LE. 2 ) THEN
   IOUT(1) = 2
   IF ( TP(1) .EQ. 1 ) RP(1) = IP(1)
   ROUT(2) = SIN(RP(1))
ELSE
   IOUT(1) = 3
   COUT = SIN(CP(1))
ENDIF
ENDIF
END
The data defined by the module file are processed and the results are stored on the system database in
two entities. The first is a relation called MODINDEX. This relation has two attributes: the first,
MODLNAME, is the module name and the second, ARGPONTR, is a pointer to the second entity. This second entity
is called MODLARGS. Each record of this unstructured entity contains the MODTYPE and PARMTYPE data
from the module definition file for a particular module. Additionally, the output of SYSGEN includes a
FORTRAN subroutine called XQDRIV. This routine is the module driver for the ASTROS execution
monitor. It must be compiled and linked into the system when adding or changing module definitions.
The standard multidisciplinary solution algorithm, in the form of MAPOL source code statements, is
contained in a sequential file. The SYSGEN program reads this file and compiles a standard sequence.
The results of the compilation are stored on the system database in the form of two relations and an
unstructured entity. The first relation is called &MAPMEM and has three attributes: ADDRESS, VARTYPE,
and CONTENT. This relation stores the execution memory map for the standard MAPOL sequence.
ADDRESS is an integer containing the address of the variable, VARTYPE is an integer denoting the variable
type and CONTENT is a two-word integer array containing the current value of the variable.
The second relation output from the compilation of the standard sequence is called &MAPCOD and has
three attributes: INSSEQ, OPCODE, and ARGUMENT. This relation contains the ASTROS machine instruc-
tions that represent the compiled MAPOL sequence. INSSEQ is an integer containing the instruction
sequence number, OPCODE is the machine operation code to determine the action to be taken, and
ARGUMENT is the argument to the operation -- either a memory address or an immediate operand.
The final output from the standard algorithm definition is not directly related to the compilation of the
sequence. It is a relation called &MAPSOU containing the standard sequence source code statements
verbatim. This is stored on the system database, allowing the user to edit the standard sequence to
generate a new MAPOL program which directs the ASTROS procedure. The relation has two attributes:
LINENO and SOURCE. LINENO is an integer containing the line number and SOURCE is a string attribute
containing the 80-character source code line.
The ASTROS bulk data decoding module (IFP) is driven by templates that are stored on the system
database during system generation. The template format for IFP was adopted to allow for easy installa-
tion of new bulk data entries and for easy modification of existing bulk data entries. The sequential file
used by SYSGEN contains the bulk data templates for all the bulk data inputs defined to the ASTROS
system in arbitrary order.
MAXSET (I8)
NLPTMP (I8)
TEMPLATE 1
TEMPLATE 2
...
...
MAXSET is the maximum number of template sets used to define one bulk data input. Currently, this
value is five. NLPTMP is the number of lines in each template set. Currently there are six lines in each set.
A bulk data template therefore consists of 1,2,3,...MAXSET template sets, each of which consist of
NLPTMP template lines which define the structure of the input entry. The definition includes the field
size, the field label, the field data type, the field defaults, the field checks, the field database loading
position and, if necessary, a list of relational attributes. The structure of the template set is as follows:
CQUAD4 |EID |PID |G1 |G2 |G3 |G4 |TM |ZOFF |CONT |
CHAR INT INT INT INT INT INT INT/REAL REAL CHAR
DEFAULT EID 0. 0.
CHECKS GT 0 GT 0 GT 0 UG 2 UG 3 UG 4 GT 0
1 2 3 4 5 6 7 9
CQUAD4 EID PID1 GRID1 GRID2 GRID3 GRID4 CID1 THETA
+CQUAD4| |TMAX |T1 |T2 |T3 |T4 | |
CHAR REAL REAL REAL REAL REAL
DEFAULT 1.E4
CHECKS GE 0. GE 0. GE 0. GE 0. GE 0.
10 11 12 13 -14
OFFST0 TMAX THICK1 THICK2 THICK3 THICK4 $
The first seven columns of the first template set line define the name of the input data entry. The eighth
column is reserved for a field mark, "|". Columns 9 through 72 define the field labels and these are
separated by the field mark "|". Columns 73 through 79 indicate to the decoding routines if a continu-
ation card is supported for this entry. On the first template set’s LABEL line (the template set for the
parent line of the bulk data entry), the character string CONT indicates that a continuation line is
supported. Otherwise, these columns are ignored and a continuation will not be allowed for the entry
defined by this template. A continuation template set’s LABEL line differs from the parent template in
that the character string ETC can be used in columns 73 through 79 of the continuation template set’s
LABEL line to indicate an open ended entry having a repeating continuation. In this case, the same
continuation template will be used to decode all remaining continuation entries. For free format input,
the continuation entries must have the input extend to the third field (the second data field). Also, note
that ALL template sets’ LABEL lines must have a field mark in column 80 to end the line.
The FIELD DATA TYPE Template Set Line defines the types of data that are allowed for each
field. Possible data types include: blank, INT (integer), REAL (real), CHAR (character), INT/REAL (integer
or real), INT/CHAR (integer or character), and REL/CHAR (real or character). The data type definition
characters (i.e., INT) must be left justified in the fields.
The DEFAULT template set line defines the default values of the fields. If the input data entry has an
empty field, the default value will be used as the input. All values, like the data types, must be left
justified in the fields. Three special cases for default values have been incorporated into the decoding
routines. The first is the case of a special user input entry to define the defaults for a template. In this
case the user supplied values will be substituted for the normal default values. The BAROR and GRDSET
entries are examples of special inputs used to define the defaults. The addition of any other special inputs
like these requires program changes in routines IFPBD, BDMERG, and IFPDEC. Another special case is in
the referral to another field of the same template to obtain the default value. Referral values can exist for
all data types except character (CHAR) data. In the case of a default referral, the current template set
LABEL line is searched for the label referred to and, if the label string is not found, all other template
sets, starting at the parent template set, are searched for the string. When the string is found, the
corresponding entry field is decoded to obtain the default value. An example of a referral is the PID field
of the CQUAD4 card template. The third special case for defaults is the use of a multiplier for an integer
default referral. This is only valid for integer type data and the presence of a multiplier is defined by an
asterisk "*". For example, 3*NDN, where NDN is the label associated with another integer data field.
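Splitting such a referral into its multiplier and label is straightforward, as this C sketch shows (illustrative only; the actual decoding is done inside the FORTRAN IFP routines and the function name here is ours):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a default referral such as "3*NDN": copy the referred-to
   field label into `label` and return the integer multiplier
   (1 when no '*' is present, as for a plain referral like "EID"). */
static int parse_referral(const char *dflt, char *label, size_t labsz)
{
    const char *star = strchr(dflt, '*');

    if (star == NULL) {
        snprintf(label, labsz, "%s", dflt);
        return 1;
    }
    snprintf(label, labsz, "%s", star + 1);
    return atoi(dflt);   /* the digits before the '*' */
}
```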
The ERROR CHECK template set line directs data checking for the decoded fields. Each error check
specifies both the type of check and the check value. The available check types depend on the data type
(for example, Integer, Real, or Character). Checks that are currently encoded are shown in Table 1.
If additional checks are needed, subroutine INTCHK must be modified for integer checks, RELCHK must be
modified for real checks and CHRCHK must be modified for character checks.
When two check values are needed, as for the IB and EB checks, the first is located on the ERROR CHECK
template set line and the second is located in the same column position on the FIELD LOADING
POSITION line.
The FIELD LOADING POSITION template set line is used to place the converted data into the database
loading array. The sequence of the numbering is dependent on three conditions. The first is the existence
of CHAR data on the card. In this case, two hollerith single precision words will be used to store an eight
character input and the numbering must account for the two words. The second condition is the sequence
of the attribute list for a relational bulk database entity. In this case the loading sequence is determined
by the sequence of the attribute list. The third condition occurs when a multiple data type field is
present (e.g., REL/CHAR). In this case, the first variable type is loaded at the given value and the second
variable type is loaded in the next word(s). Again this must be accounted for in the numbering sequence.
Finally, when a negative integer value is given as the loading position, the database loading array will be
loaded onto the database if input errors have not occurred. A negative value should be used on the field
associated with the last attribute of the relational projection. At least one negative loading position must
be defined.
The DATABASE ENTITY DEFINITION template set line names, in the first eight columns of the parent
template set, the database entity to be loaded and, in columns 9 through 72, the database attribute
list for relational entities. Column 80 is reserved for a map-end character ($). The
map-end character indicates the end of the template, and so must occur only on the last database entity
definition line for the final set of the template.
The SYSGEN outputs consist of an unstructured entity called SYSTMPLT which contains the templates
and a relation called TMPPOINT which allows efficient access to particular templates. The SYSTMPLT
entity contains one RECORD for each bulk data template in the order it appears in the input template
definition file. Therefore, the RECORDs are of variable length with the longest RECORD containing 80
characters for each of MAXSET template sets of NLPTMP lines. The TMPPOINT relation has two attributes:
CARD and RECORD. CARD is an eight character string attribute containing the name of the bulk data entry
and RECORD is the number where the template is stored in SYSTMPLT.
Each relational database entity requires a SCHEMA that defines its data attributes. A sequential file,
containing character data, is used to define these schemata. For each relation there is a list of the
attribute names, their types, and, if they are arrays or character data, their length. The details of this file
are described below:
REL(1) ENTRY
REL(2) ENTRY
...
...
REL(NREL) ENTRY
Each RELATION entry has the following form of free field input. Each input may appear anywhere on the
line separated by one or more blanks except "RELATION" and "END".
RELATION RELNAME
ATTRNAME ATTRTYPE ATTRLEN
END
where
RELATION is the keyword "RELATION" which signifies that a new RELATION schema
follows. Must begin in column 1.
RELNAME is the name of the RELATION; it may be one to eight characters beginning
with a letter.
ATTRNAME is the name of the attribute; it may be one to eight characters beginning
with a letter.
ATTRTYPE is the type of the attribute selected from:
’INT’ Integer
’KINT’ Keyed Integer
’AINT’ Array of Integers
’RSP’ Real, single precision
’ARSP’ Array of real, single precision
’RDP’ Real, double precision
’ARDP’ Array of real, double precision
’STR’ Character string
’KSTR’ Keyed character string
ATTRLEN is the optional length of the Attribute. If it is of type AINT, ARSP, ARDP,
STR, or KSTR, the length is not optional. For other types, it should be zero or
not present.
END is the keyword "END" which signifies the END of the RELATION schema.
3.2.4.2 SYSGEN Output for Relations
The data defined by the RELATION schema file are processed and the results are stored in two entities on
the system database. The first is a RELATION called RELINDEX. This entity has an attribute RELTNAME
containing the name of the RELATION and an attribute SCHMPNTR which is an integer pointer to an
unstructured entity called RELSCHEM. The RELSCHEM entity contains a list of attribute names, types and
lengths for each RELATION. Each RELATION has one tuple in the RELINDEX RELATION and one RECORD
in the RELSCHEM entity. Each RELSCHEM RECORD consists of a four-word entry for each attribute: two
hollerith words containing the attribute name, one hollerith word containing the attribute type and an
integer word containing the attribute length.
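The four-word RELSCHEM entry can be pictured as a C struct (an illustrative picture only: the word size, blank-filling convention, and the struct and function names are assumptions, not the ASTROS source):

```c
#include <stdint.h>
#include <string.h>

/* One RELSCHEM attribute entry: two hollerith words for the
   eight-character attribute name, one hollerith word for the type
   mnemonic, and one integer word for the length. */
struct schema_entry {
    char    name[8];   /* two 4-character hollerith words */
    char    type[4];   /* one hollerith word, e.g. "AINT" */
    int32_t length;    /* 0 when the type carries no length */
};

static void make_entry(struct schema_entry *e, const char *name,
                       const char *type, int32_t length)
{
    size_t nl = strlen(name), tl = strlen(type);

    memset(e, ' ', sizeof *e);          /* blank-fill hollerith words */
    memcpy(e->name, name, nl > 8 ? 8 : nl);
    memcpy(e->type, type, tl > 4 ? 4 : tl);
    e->length = length;
}
```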
The text of ASTROS run time messages is stored and maintained on a sequential file which is used
during system generation to create SYSDB entities for use by the ASTROS message writer utility
module. There are two reasons for maintaining the message text on an external file (and on SYSDB).
First, the storage of message text within the functional modules would use a large amount of memory
during execution. Second, storing the messages together in an external file allows for easier maintenance
and aids in avoiding needless duplication in message texts. The messages stored on SYSDB from this file
are used by the ASTROS utility UTMWRT to build error messages during execution.
The header is an optional label of any length or content up to 120 characters that typically would
describe the relationship among the messages for the specified module. In this way, messages that are
logically related (for example, all error conditions from the IFP module) can be grouped together for
simplified maintenance. The module number &lt;n&gt; is a unique integer identifying the base module number
for the group of error messages. The module numbers need not be consecutive, which allows for randomly
numbered modules.
The string is enclosed in single quotation marks because the message will be used as a character string
in a FORTRAN write statement. The $ (dollar sign) is used by the UTMWRT utility to place character
arguments into the string. For example,
would appear for CTRMEM element 100 attached to scalar point 1001:
If the user wishes the message to carry over to the next output record, the FORTRAN format RECORD
terminator (/) can be used outside the quotation marks to cause a record advance. For example:
Currently there is an implementation limit of 128 characters for the length of the message text after
including the arguments. Further details are given in the documentation for the UTMWRT utility module.
The data in the message file are used to create two system database entities. The first is an indexed
unstructured entity called ERRMSG. This entity contains one line of the message text file in each record.
The second is an unstructured entity called ERMSGPNT which has one record. That record has two words
for each module defined in the message file. Those words are the module number <n> and the record
number of the ERRMSG RECORD containing the module header. These are used by the UTMWRT to position
to the proper message text when called.
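The relationship between the two entities can be sketched as follows. This is an illustrative Python model, not the delivered code: the header-marker convention ("MODULE" at the start of a line) is an assumption made for the example, while the (module number, header record number) pairing in ERMSGPNT is as described above.

```python
def build_message_entities(lines):
    """Build in-memory models of ERRMSG and ERMSGPNT from message-file lines."""
    errmsg = []    # one message-file line per record (the ERRMSG entity)
    ermsgpnt = []  # flat pairs: module number, header record number
    for recno, line in enumerate(lines, start=1):
        errmsg.append(line)
        if line.startswith("MODULE"):        # assumed module-header marker
            module_number = int(line.split()[1])
            ermsgpnt.extend([module_number, recno])
    return errmsg, ermsgpnt
```

UTMWRT would scan the ERMSGPNT pairs for the requested module number and position directly to the indicated ERRMSG record.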
The SYSGEN execution is only required once to generate the standard version of ASTROS. This is the
version that is defined by the delivered set of SYSGEN input files described in Section 3.2. If, however,
the users of the system at a particular installation desire to insert additional modules, the SYSGEN
program must be re-executed to recreate the XQDRIV submodule. The users may also want to modify
other SYSGEN inputs to update the system database to include additional input entries or new relational
schema. These changes also require the re-execution of the SYSGEN program (to update the system
database) but do not require an update of the ASTROS system. For most purposes, only the module
definition file described in Section 3.2.1 requires that the ASTROS executable image be recreated.
Chapter 4.
EXECUTIVE SYSTEM
The ASTROS executive system, as described in Chapter 3 of the Theoretical Manual, may be viewed as a
stylized computer with four components: a control unit, a high level memory, an execution monitor and
an Input/Output subsystem. The first three components comprise the executive system modules, while
the I/O subsystem is embodied in the database. The modules that form the executive system perform
tasks to establish the ASTROS/host interface, initiate the execution and, upon completion of the MAPOL
instructions, terminate the execution. These modules also compile the MAPOL sequence, if necessary,
and initiate the execution monitor that interprets the MAPOL instructions and guides the execution.
This Chapter documents the modules of the ASTROS executive system.
The typical user of ASTROS need not be familiar with the executive system modules since their execution
path does not have the flexibility that is available for the engineering modules. The executive modules,
however, are important from the viewpoint of the system manager and the program developer for several
reasons. First, problems with the machine dependent library on a new host system often show up during
the executive modules’ initialization tasks. The executive system modules are also important in under-
standing the treatment of the user’s input data stream. To isolate the use of external files to the
executive system, for example, the PREPAS executive module reads the input data stream and loads those
portions that deal with the MAPOL, Solution Control and Bulk Data packets to the database. The system
manager, therefore, may find it useful to study the nature of the executive modules and their interrela-
tionships to better understand the implementation of the ASTROS architecture.
Method:
This routine completes the page titling information on the TITLE line used by the UTPAGE utility. The
ASTROS version number is placed in TITLE (which is in the /OUTPUT2/ common block) in characters
88 through 107. The current date is obtained using XXDATE and placed in characters 109 to 117. The
page number label is then placed in characters 120 to 121. Thus, the page number itself is left to fit in
characters 123 to 128.
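The fixed-column layout described above can be modeled with a short sketch. This is an assumption-laden Python illustration of the documented column assignments (1-based columns: version in 88 to 107, date in 109 to 117, page label in 120 to 121, page number in 123 to 128); the label text "PG" and the field widths at the boundaries are assumptions, not taken from the source:

```python
def build_title(version, date, page):
    """Assemble a 128-character TITLE line using fixed 1-based column positions."""
    line = [" "] * 128

    def put(start_col, text, width):
        # Convert a 1-based column to a 0-based index and blank-pad the field.
        for i, ch in enumerate(text[:width].ljust(width)):
            line[start_col - 1 + i] = ch

    put(88, version, 20)              # version number, columns 88-107
    put(109, date, 9)                 # current date, columns 109-117
    put(120, "PG", 2)                 # page label, columns 120-121 (label assumed)
    put(123, str(page).rjust(6), 6)   # page number, columns 123-128
    return "".join(line)
```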
Design Requirements:
None
Error Conditions:
None
3. EDIT denoting the start of the MAPOL packet containing edit commands to be applied to the
standard sequence
4. SOLUTION denoting the start of the solution control packet
5. "BEGIN_" denoting the start of the bulk data packet. Note the trailing blank after BEGIN and the
absence of the optional BULK keyword.
6. ENDDATA denoting the end of the bulk data packet
7. INCLUDE naming the secondary file from which to read the input
Note that all the keywords except ENDDATA and INCLUDE mark the beginning of a new packet. The
INCLUDE keyword does not change the current packet and ENDDATA marks the end of the valid input.
If the current record is one of the keyword records, flags are set to indicate that a new packet has been
initiated or, for INCLUDE, the include file is opened and processing continues with the new input file
until it is exhausted. Records that are not keyword records are processed as follows:
1. DEBUG packet records are sent to the CRKBUG utility, which interprets the debug commands, sets the
executive system debug command flags in the /EXEC02/ common block, and sets the other debug command
flags through UTSFLG to activate run-time debugs.
2. MAPOL packet records representing a replacement MAPOL sequence are written to the unstruc-
tured entity &MAPLPKT for processing in the MAPOL module.
3. MAPOL packet records representing an EDIT of the standard sequence are passed to the MAPEDT
subroutine to be interpreted. The resultant MAPOL sequence is written to the unstructured entity
&MAPLPKT for processing in the MAPOL module.
4. SOLUTION packet records are written to the unstructured entity &SOLNPKT for processing in the
SOLUTION module.
5. Bulk Data packet records are written to the unstructured entity &BKDTPKT for processing in the
IFP module.
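The routing of non-keyword records can be sketched as a simple dispatch on the current packet. The entity names below are taken from the list above; the dictionary-based wiring is an illustrative Python model, not the actual executive code (DEBUG records in particular go through CRKBUG rather than to a database entity, so its destination here is only a stand-in):

```python
# Destination for non-keyword records, keyed by the packet currently in effect.
PACKET_DEST = {
    "DEBUG":    "CRKBUG",     # interpreted by the CRKBUG utility (stand-in)
    "MAPOL":    "&MAPLPKT",   # replacement or edited MAPOL sequence records
    "SOLUTION": "&SOLNPKT",   # solution control records
    "BULK":     "&BKDTPKT",   # bulk data records for the IFP module
}

def route_record(current_packet, record, store):
    """Append a non-keyword record to the destination for the current packet."""
    dest = PACKET_DEST[current_packet]
    store.setdefault(dest, []).append(record)
    return dest

store = {}
route_record("BULK", "GRID,1,,0.,0.,0.", store)
```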
Design Requirements:
None
Error Conditions:
1. Input stream does not begin with an ASSIGN DATABASE entry.
2. An input stream keyword appears out of order.
3. An ENDDATA statement appears outside the bulk data packet.
4. No filename was found on an INCLUDE statement.
5. INCLUDE file cannot be opened.
6. Input record lies outside any input packet (typically following an ENDDATA).
7. FORTRAN read error on the primary input stream or included file.
8. An INCLUDE statement appears in an included file.
Method:
This routine establishes the initial block headers for open core memory. A block header is written
representing one free block of SIZE words less those required for the block header. The block header is
either six or eight words depending on whether the MEMORY debug has been selected by the user in the
input stream. The size must correspond to the actual declared length of the open-core common block
/MEMORY/.
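The initial free-block arithmetic can be illustrated with a short sketch. This is a hypothetical Python model of the description above, not the actual routine; the header field names are invented, while the six-versus-eight-word header size (eight when the MEMORY debug is selected) and the "SIZE less the header words" free length follow the text:

```python
def init_open_core(size, memory_debug=False):
    """Model the single initial free block written into open core."""
    header_len = 8 if memory_debug else 6   # 8-word header when MEMORY debug is on
    header = {
        "status": "FREE",                   # open core starts as one free block
        "header_len": header_len,
        "length": size - header_len,        # usable words after the header
    }
    return header
```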
Design Requirements:
Prior to calling this routine, you must get the value of MAXCOR with the call:
CALL SYSGET('MAXCORE',MAXCOR)
Error Conditions:
None
Method:
This routine opens the named database for access. It performs any machine and installation dependent
processing by accessing the database machine dependent library routines DBMDCX and DBMDIX. All the
in-core buffers required for subsequent database access are allocated using the database memory
management routines.
Design Requirements:
1. The first call to DBINIT must define the run-time or scratch database. Any other databases may
then be initialized.
Error Conditions:
1. Any error conditions on the file operations occurring in DBINIT will terminate the execution.
Method:
If no MAPOL packet was in the input stream, XQTMON copies the standard sequence machine instruc-
tions and memory map from the system database into the MCODE and MEMORY entities. If a MAPOL
compilation took place, the current sequence’s data are already in MCODE and MEMORY. Beginning with
the first instruction, which is passed to XQTMON from the MAPOL compiler or retrieved from the system
database, the machine instructions are executed by this module.
Most instructions are processed directly by the XQTMON module; for example, stack operations, entity
creations and scalar arithmetic operations. If the instruction is a module call, however, the XQDRIV
executive subroutine, previously written by the SYSGEN program, is called to access the MAPOL
module to which the machine instruction refers.
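The execution-monitor pattern described above can be sketched as an interpreter loop. The opcode names and driver table below are invented for illustration; the point is only the structure: most instructions (stack operations, scalar arithmetic) are handled in-line, and module-call instructions are deferred to a generated driver, as XQDRIV is in ASTROS:

```python
def run(instructions, drivers, stack=None):
    """Interpret a list of (opcode, argument) machine instructions."""
    stack = stack if stack is not None else []
    for op, arg in instructions:
        if op == "PUSH":                  # stack operation: handled directly
            stack.append(arg)
        elif op == "ADD":                 # scalar arithmetic: handled directly
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "CALL":                # module call: dispatch via the driver
            drivers[arg](stack)
        else:
            raise ValueError("unknown instruction: " + op)
    return stack
```

A driver table such as `{"SAERO": some_module_entry}` would play the role of the XQDRIV submodule written by SYSGEN.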
Design Requirements:
None
Error Conditions:
MAPOL run time errors
Method:
The database entity name table (ENT) is searched and all open entities corresponding to DBNAME are
closed and the ENT deleted. If DBNAME is blank, all open entities on all databases are closed but the ENTs
are not deleted. If all databases are to be closed, the database name table (DBNT) is searched and all
in-core buffers are freed. The first record of each database is updated to indicate that it was properly
closed and any system dependent termination is performed. If a particular database is to be closed, these
operations are done only for the named database. Finally, if all databases are closed, the ENT, DBNT and
the name substitution table (NST) are released at the close of the DBTERM.
Design Requirements:
None
Error Conditions:
None
Chapter 5.
ENGINEERING APPLICATION MODULES
The modules documented in this section fall under the category of engineering application modules.
These modules constitute the majority of the ASTROS software and do the tasks necessary to perform
the analyses supported by the ASTROS system. Unsurprisingly, the difference between an "engineering
application" module and other modules in the ASTROS system is not always clear. The most useful
definition may be that an engineering application module is one that does not fall into any other category.
They do, however, share some common attributes that can be used to help distinguish them from other
modules. First, an engineering application module has no application calling sequence: it is only accessi-
ble through the executive system. A related attribute is that no engineering module may be called by
another module, whereas utility modules may be called by other modules or by the executive system.
Finally, an engineering application module is one that establishes an open core base address by calling
the MMBASE and/or MMBASC utilities and uses that one base address throughout its execution.
The following subsections document each of the engineering application modules that comprise the
ASTROS system. Each module is documented using the standard format shown in the introduction, but
some additional comments are necessary. First, the MAPOL language allows the use of optional argu-
ments in the calling sequences. This feature has been used in many modules to provide optional print
selections or to allow the module to be used in slightly different ways. This is particularly true for the
matrix reduction "modules" (GREDUCE, FREDUCE and RECOVA) which may almost be considered utilities.
When the argument in the MAPOL calling sequence is optional, it is so indicated in the calling list. The
Method section then describes the alternative operations that take place depending on the presence of
the optional argument.
A second point to emphasize is the general nature of the Method sections for the engineering module
documentation. In no way does this documentation attempt to lead the reader through the code segments of
the module. Instead, a general description of the algorithm is given which, in combination with the in-line
comments, should give the programmer an adequate understanding of the module. The system programmer
wanting to make extensive software modifications to existing modules will still need to study the actual
code segments in some detail. The level of detail in the engineering module documentation is considered more
appropriate for the ASTROS analyst/designer who wants to understand how ASTROS uses the existing pool
of engineering modules and to know the "initial state" that the module expects to exist when it is called. The
analyst may then make "nonstandard" use of the module to perform alternative analyses. These, therefore,
are the data emphasized in the module documentation that follows.
A final note should be made relative to the description of the MAPOL module calling sequences. Version
12 of ASTROS has introduced user defined boundary condition identification numbers, called BCID in
this chapter, which are used when specifying user defined functions. These are not to be confused with
the boundary condition index, or subscript, which is a sequential counter used in MAPOL. This counter is
shown as the entity subscript BC.
Method:
The CASE relation is read first to retrieve the SUPPORT set for the current boundary condition. The
number and location of the support DOF are returned from the utility routine SEFCHK. Then the CONST
relation is read for active lift effectiveness (DCONCLA), aileron effectiveness (DCONALE) and stability
coefficient constraints (DCONSCF) for the current boundary condition, subscript and iteration.
The EFFSENS matrix has dimension NSUP*NDV*NAUE, where NSUP is the number of support DOF, NDV is
the number of design variables, and NAUE is the number of active pseudodisplacement fields of the set
computed in SAERO for the applied constraints.
The whole EFFSENS matrix is read into memory and then the loop over active constraints begins. For
each active constraint, the DISPCOL attribute of the CONST relation is used to determine which column
of pseudodisplacements is associated with the constraint. The PCAE entity is then used to determine
which column of the reduced set of active pseudodisplacement fields is the proper column. Once located,
the constraint sensitivities may be computed from the dimensional stability coefficient derivatives and
the normalization data stored in the CONST relation in the SAERO module. The constraint derivatives
are computed from the following relationships.
The flexible stability coefficient response sensitivities which are required by the active user function
constraints are also computed in this module. Those sensitivities are stored into relational and matrix
entities to be used by the user function evaluation utilities.
Lift Effectiveness:
Upper Bound
   CLAREQ > 0.0:   DG/DX =  SENS(ROW) / (CLA_RIGID * CLAREQ)
   CLAREQ < 0.0:   DG/DX = -SENS(ROW) / (CLA_RIGID * CLAREQ)
   CLAREQ = 0.0:   DG/DX =  SENS(ROW) / CLA_RIGID
Lower Bound
   CLAREQ > 0.0:   DG/DX = -SENS(ROW) / (CLA_RIGID * CLAREQ)
   CLAREQ < 0.0:   DG/DX =  SENS(ROW) / (CLA_RIGID * CLAREQ)
   CLAREQ = 0.0:   DG/DX = -SENS(ROW) / CLA_RIGID
where CLA_RIGID is stored in the SENSPRM1 attribute of CONST and CLAREQ is stored in the SENSPRM2
attribute of CONST.
Aileron Effectiveness:
Upper Bound
   AEREQ > 0.0:   DG/DX = (-SENS1*CMXP_FLX + SENS2*CMXA_FLX) / (AEREQ * CMXP_FLX**2)
   AEREQ < 0.0:   DG/DX = ( SENS1*CMXP_FLX - SENS2*CMXA_FLX) / (AEREQ * CMXP_FLX**2)
   AEREQ = 0.0:   DG/DX = (-SENS1*CMXP_FLX + SENS2*CMXA_FLX) / CMXP_FLX**2
Lower Bound
   AEREQ > 0.0:   DG/DX = ( SENS1*CMXP_FLX - SENS2*CMXA_FLX) / (AEREQ * CMXP_FLX**2)
   AEREQ < 0.0:   DG/DX = (-SENS1*CMXP_FLX + SENS2*CMXA_FLX) / (AEREQ * CMXP_FLX**2)
   AEREQ = 0.0:   DG/DX = ( SENS1*CMXP_FLX - SENS2*CMXA_FLX) / CMXP_FLX**2
where CMXA_FLX is stored in the SENSPRM1 attribute of CONST, CMXP_FLX is stored in the SENSPRM2
attribute of CONST, and 2.0*AEREQ/(57.3*REFB) is in SENSPRM3.
Stability Coefficient:
Upper Bound
   REQ > 0.0:   DG/DX =  SENS(ROW) / REQ
   REQ < 0.0:   DG/DX = -SENS(ROW) / REQ
   REQ = 0.0:   DG/DX =  SENS(ROW)
Lower Bound
   REQ > 0.0:   DG/DX = -SENS(ROW) / REQ
   REQ < 0.0:   DG/DX =  SENS(ROW) / REQ
   REQ = 0.0:   DG/DX = -SENS(ROW)
where REQ, the dimensional required value, is stored in the SENSPRM1 attribute of CONST.
The rows of EFFSENS associated with each constraint are dependent on the constraint type in the
following way:
(1) Lift Effectiveness constraints always use the plunge DOF
(2) Aileron Effectiveness constraints always use the roll DOF
(3) Stability Coefficient constraints always use the row associated with the constrained axis. The
constrained axis number (1,2,3,4,5,6) is stored in real form in the SENSPRM2 attribute of CONST.
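The sign and scaling rules for the stability coefficient constraints can be collected into a single sketch. This is an illustrative Python rendering of the tabulated rules, assuming only what the tables above state (upper bound shown first; the lower bound is the negation); the function and argument names are invented:

```python
def stab_coef_dgdx(sens_row, req, upper_bound=True):
    """Apply the tabulated stability-coefficient constraint derivative rules."""
    if req > 0.0:
        scaled = [s / req for s in sens_row]    # DG/DX =  SENS(ROW) / REQ
    elif req < 0.0:
        scaled = [-s / req for s in sens_row]   # DG/DX = -SENS(ROW) / REQ
    else:
        scaled = list(sens_row)                 # REQ = 0.0: DG/DX = SENS(ROW)
    if not upper_bound:                         # lower bound flips every sign
        scaled = [-s for s in scaled]
    return scaled
```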
Design Requirements:
None
Error Conditions:
None
Where:    Represents:
F+K       Number of SUPORT point DOF
F         Set of free accelerations, AR
K         Set of known (FIXED) accelerations, AR
U+S       Number of AERO parameters
U         Set of unknown parameters
S         Set of set (FIXED) parameters
Note that the AR(known) and DEL(S) sensitivities are zero by definition.
These equations must be rearranged to get the free accelerations and unknown deltas on the same side of
the equation:
The degenerate case in which all accelerations or all deltas are known must also be handled. Once the solution
is obtained, the free acceleration derivatives and unknown trim parameter derivatives are unscrambled
and loaded into subcase specific AAR and DDELDV entities.
Finally, if any active DCONTRM constraints exist, the AAR or DDELDV matrix for the current subcase is
used to compute the AMAT terms for them.
Upper Bound
   REQ > 0.0:   DG/DX =  SENS / REQ
   REQ < 0.0:   DG/DX = -SENS / REQ
   REQ = 0.0:   DG/DX =  SENS
Lower Bound
   REQ > 0.0:   DG/DX = -SENS / REQ
   REQ < 0.0:   DG/DX =  SENS / REQ
   REQ = 0.0:   DG/DX = -SENS
Where REQ is stored in the SENSPRM1 attribute of CONST and SENS is the raw acceleration or deflection
sensitivity.
The final operation for the subcase is to merge the NDV AAR and DDELDV columns for the current subcase
into the output matrices. The output matrices have NDV columns for each active subcase in subcase
order of SAERO disciplines in the CASE relation.
The trim parameter response sensitivities which are required by the active user function constraints
are also computed in this module.
Design Requirements:
1. This module assumes that strength and/or DCONTRM constraints exist for the static aeroelastic
analyses in the current boundary condition.
Error Conditions:
None
disciplines are associated with the Mach/SGRP set, the corresponding NJ columns of SKJ are extracted
from the SKJ list input in the calling sequence. Also, the NJ columns of AJJTL are extracted irrespective
of the discipline options. Finally, if the QKK matrix is to be formed, the D1JK and D2JK are processed
depending on the presence of both subsonic and supersonic forms. This processing consists of the
extraction of the second NK columns of D1JK and D2JK on the first supersonic Mach number encountered.
The appropriate matrices are then added together for the current reduced frequency as:
[DCJK] = [D1JK] + (0+ik)[D2JK]
At this point, the module is ready to deal with the AJJT matrix previously extracted. The processing of
this matrix depends on the presence of different interference groups in the unsteady aerodynamics
model. For the case with a single interference group, the extracted AJJT matrix is transposed and then
decomposed. If the QKK matrix is required, the following matrix is formed using the GFBS utility:
[SCRDC] = [AJJ]^-1 [DCJK]
If either the QJJ or QJK matrices are needed, the actual inverse of AJJ is formed and stored as QJJ. If
the QJK matrix is needed as well, the QJJ matrix is used to form the QJK matrix as:
[QJK] = [SKJ][QJJ]
If there is more than one interference group, the alternate path is used to obtain the SCRDC, QJJ and/or
QJK matrices. In this path, a loop is performed for each interference group. The second record of the
UNMK entity is used to determine the number of j-set and k-set degrees of freedom in the current
interference group. These are used to generate the PRTJ partitioning vector for the AJJ matrix. This
vector acts as a floating NJG-sized vector to extract the NJG columns and rows associated with the current
group. The AJJT matrix is then partitioned, transposed and decomposed to form AJJG. If the QKK matrix
is needed, the PRTK partitioning vector is also required. This vector is a floating NKG-sized vector to
extract the NKG columns or rows for the current interference group. The DCJK matrix is then partitioned
for the current group and used as follows:
[AJJDCG] = [AJJG]^-1 [DCJKG]
The INMAT utility is then used to merge this matrix into the SCRDC matrix using the interference group
partitioning information. As before, if the QJJ or QJK matrices are needed, the AJJG matrix is inverted
and stored as QJJG. The INMAT utility merges this matrix into the QJJ matrix. At the conclusion of the
interference group loop, the SCRDC and QJJ matrices are complete. At this point, the logic recombines
for both paths. If the QKK matrix is needed, the SCRDC matrix is used to compute QKK as:
[QKK] = [SKJ][SCRDC]
which is then appended onto the list of QKK matrices, QKKL. If the QJK matrix is needed, the QJJ matrix
is used to compute QJK as:
[QJK] = [SKJ][QJJ]
which is then appended onto the list of QJK matrices, QJKL. Finally, the computed QJJ matrix is
appended to the QJJL matrix list if it is required for this m-k/SGRP matrix. The module then continues
with the next m-k/SGRP matrix in the UNMK entity. Note that all the matrix lists are formed in the order
the m-k/SGRP data appear in the UNMK, although each list need not have all sets. Once the entire set of
m-k/SGRP sets in the UNMK have been processed, the module terminates by destroying the numerous
scratch matrices used in the computations.
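The single interference group path can be sketched compactly. This is an illustrative Python model under stated assumptions: plain 2x2 helpers stand in for the real database matrix utilities, Cramer's rule stands in for the decompose/forward-backward-substitution pair (GFBS), and only the QKK branch is shown:

```python
def solve2(a, b):
    """Solve the 2x2 system a @ x = b for each column of b (Cramer's rule)."""
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    cols = len(b[0])
    x = [[0.0]*cols for _ in range(2)]
    for j in range(cols):
        x[0][j] = (b[0][j]*a[1][1] - a[0][1]*b[1][j]) / det
        x[1][j] = (a[0][0]*b[1][j] - b[0][j]*a[1][0]) / det
    return x

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def qkk_single_group(skj, ajjt, dcjk):
    """Form QKK for one interference group: transpose, solve, multiply."""
    ajj = [[ajjt[j][i] for j in range(2)] for i in range(2)]  # transpose AJJT
    scrdc = solve2(ajj, dcjk)      # [SCRDC] = [AJJ]^-1 [DCJK]  (GFBS step)
    return matmul(skj, scrdc)      # [QKK]   = [SKJ][SCRDC]
```

In the multiple-group path, the same solve would be applied per group on partitioned matrices and the results merged, as the INMAT description above indicates.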
Design Requirements:
1. The FLUTTER bulk data entries and the CASE relation are used to determine the set of m-k/symmetry
pairs for each aerodynamic matrix required for each discipline. The data on the database will be used
to determine the set of matrices to be computed.
Error Conditions:
None
[UGA] Reduced g-set active displacement vectors for all static aero subcases that
have active trim parameter, stress, strain and/or displacement constraints.
This is a subset of the columns of [UAG(BC)] and does not include the GDR
scalar points, if any (Output)
[AGA] Reduced g-set active acceleration vectors for all static aero subcases that have
active trim parameter, stress, strain and/or displacement constraints. This is
a subset of the columns of [AAG(BC)] and does not include the GDR scalar
points, if any (Output)
[PGAA] Partitioning vector used to obtain [UGA] and [AGA] from [UAG(BC)] and
[AAG(BC)] (Output)
[PGAU] Partitioning vector relative to [UAG(BC)] and [AAG(BC)] that marks the
displacement/acceleration columns associated with subcases having active
stress, strain or displacement constraints. This vector will be identical to
[PGAA] unless there are subcases in which DCONTRM constraints are active and
no stress, strain or displacement constraints are active (Output)
PCAA An unstructured entity with one word for each active stress, strain or dis-
placement constraint in the current subscript related subcases. That word is
the subcase number associated with the constraint (Output)
PRAA An unstructured entity with one word for each element stress, strain or dis-
placement response function required by the active user function constraints
in the current subscript related subcases. That word is the subcase number
associated with the response (Character,Output)
[UAGC(BC,SUB)] g-set pseudodisplacement vectors (displacement fields due to loads arising
from unit values of trim configuration parameters) for all aeroelastic effective-
ness constraints (Input), where BC represents the MAPOL boundary condition
loop index number
[AAGC(BC,SUB)] g-set pseudoacceleration vectors (acceleration fields due to loads arising from
unit values of trim configuration parameters) for all aeroelastic effectiveness
constraints (Input), where BC represents the MAPOL boundary condition loop
index number
ACTAEFF Logical flag that is set to TRUE if there are any active constraints that require
the pseudodisplacements or pseudoaccelerations. Those constraints are DCON-
ALE, DCONCLA and DCONSCF (Output, Logical)
[AUAGC] Reduced g-set active pseudodisplacement vectors for all active effectiveness
constraints. This is a subset of the columns of [UAGC(BC)] and does not
include the GDR scalar points, if any (Output)
[AAAGC] Reduced g-set active pseudoacceleration vectors for all active effectiveness
constraints. This is a subset of the columns of [AAGC(BC)] and does not
include the GDR scalar points, if any (Output)
PCAE An unstructured entity with one word for each active effectiveness constraint
(DCONALE, DCONCLA, DCONSCF) in the current subscript’s related subcases.
That word is the column id of the first column associated with the constraint
(Output)
The element stress and strain responses, displacement responses, aeroelastic flexible stability coefficient
responses, and trim parameter responses which are required by active user function constraints at
the current boundary condition and subscript number are treated in a similar manner to the
corresponding constraints. The subcases which have active displacement or element stress/strain
response functions are also defined as active. The partitioning vector, PGAA, and the set of subcase
numbers that are active, PRAA, are loaded if necessary. The ACTAEFF and ACTUAG flags are also set for
active responses.
Design Requirements:
None
Error Conditions:
None
have a column in either MATSUB or MATOUT. For the active subcase id, the TRIM data are searched to
determine the subscript number associated with the subcase. If the subscript is less than SUB, a column
from MATOUT may be taken (if it was stored there on an earlier pass). If the subscript is equal to SUB,
it may be stored on the output matrix from MATSUB. If greater than SUB, it is ignored until later passes.
Once a column is identified as active in MATSUB (PGAA indicates active and subscript = SUB), an
additional check is made to see if the column is active in PGUA. Only those columns that are active in
PGUA are copied to MATOUT. This filtering is done to limit the amount of computational effort in the
stress, strain and displacement constraint sensitivity computations that proceed using the MATOUT
matrix. The MATSUB columns that are active due to DCONTRM constraints are no longer needed as these
sensitivities are assumed to have been computed already in the AEROSENS module.
Once the final matrix is formed, if MATOUT had had data in it, the name of the scratch matrix that was
loaded is switched with that of MATOUT. The scratch entity is then destroyed.
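The column selection rule described above can be sketched directly. This is a hypothetical Python rendering of the documented test: a MATSUB column is kept only when PGAA marks it active, its subscript equals SUB, and PGUA also marks it active, which drops columns that were active solely for DCONTRM constraints:

```python
def select_columns(pgaa, subscripts, pgua, sub):
    """Return the MATSUB column indices to be merged into MATOUT."""
    keep = []
    for col, (active, subscr, ua_active) in enumerate(zip(pgaa, subscripts, pgua)):
        # Active in PGAA, belongs to the current subscript, and active in PGUA.
        if active and subscr == sub and ua_active:
            keep.append(col)
    return keep
```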
Design Requirements:
1. The assumption is that each MATSUB matrix contains the results from the "SUB"th subscript value in
the order in which the trim IDs for that SUB appear in the TRIM relation.
2. The same MATOUT matrix must be passed into the AROSNSMR module on each call since the columns
associated with earlier subscript values are read from MATOUT into a scratch entity. The merged matrix
that results then replaces the input MATOUT.
3. The AEROSENS module is called upstream of the AROSNSMR module to process active DCONTRM
constraints for the current subscript. Thus, those columns that are active only for DCONTRM constraints
may be filtered out for the downstream processing of stress, strain and displacement constraints.
Error Conditions:
None
In each case, these entities contain only those data that relate to the current boundary condition. They will
be replaced in subsequent boundary conditions and/or iterations with the appropriate data on each pass.
Design Requirements:
None.
Error Conditions:
1. Initial error checking of each bulk data entry type is performed within this module.
"GBOX" Shape:
G1 = ( 2*D5 + D6 - D1 ) / ( 3*DMAX)
G2 = ( D3 + D4 - D2 ) / ( 2*DMAX ); where DMAX = MAX(D1,D2,D3,D4,D5,D6)
Note that D1, D2, D3, D4, D5, and D6 are BAR element cross-sectional parameters.
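The two shape functions above translate directly into a small sketch; only the function and argument names are invented:

```python
def gbox_shape(d1, d2, d3, d4, d5, d6):
    """Evaluate the "GBOX" shape functions G1 and G2 from the BAR parameters."""
    dmax = max(d1, d2, d3, d4, d5, d6)
    g1 = (2.0*d5 + d6 - d1) / (3.0*dmax)
    g2 = (d3 + d4 - d2) / (2.0*dmax)
    return g1, g2
```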
Design Requirements:
None
Error Conditions:
None
BGUST Gust option flag; =1 if any dynamic response disciplines include the GUST
option in the current boundary condition (Integer, Output)
NMPC Number of degrees of freedom in the m-set, (Integer, Output)
NSPC Number of degrees of freedom in the s-set, (Integer, Output)
NOMIT Number of degrees of freedom in the o-set, (Integer, Output)
NRSET Number of degrees of freedom in the r-set, (Integer, Output)
NGDR Denotes dynamic reduction in the boundary condition. (Output, Integer)
0 No GDR
–1 GDR is used
GPSP Flag controlling output (Integer, Input)
0 Standard output
≠0 Update standard to show GPSP results
Application Calling Sequence:
None
Method:
The USET entity and CASE relation are read to determine the sizes of the dependent structural sets and
to ensure that no illegal combinations of disciplines and matrix reduction methods reside in the same
boundary condition. The matrix reductions and analysis steps in the standard MAPOL sequence are
then guided by the flags from BOUND. A summary of the structural sets is printed to the output file
followed by a summary of the disciplines and subcases that have been selected.
Design Requirements:
1. The CASE relation must be filled with the information from the Solution Control Packet by the
SOLUTION module. Also, the MKUSET module must have loaded the USET entity.
Error Conditions:
None
dependent loads are encountered with care taken that all active loads (including design independent
loads) are accounted for in the column dimension of the matrix entity.
Design Requirements:
1. This module must be called to initialize the DDFLG flag that is used by the MAPOL sequence to
direct subsequent matrix operations relating to the load sensitivities even if no design dependent
loads exist in the boundary condition.
2. The module assumes that at least one active static applied load exists in the current boundary condition.
Error Conditions:
None
Design Requirements:
1. This module is called after all the analysis and gradient information has been computed for a design
iteration. It is therefore the last module within the design iteration loop.
Error Conditions:
1. The module does not have sufficient memory available.
[PFGLOAD] Applied load matrix for the frequency dependent loads when LOAD print is
requested (Character, Output)
[PFHLOAD] Applied load matrix for the frequency dependent loads when MODAL GUST
print is requested (Character, Output)
Application Calling Sequence:
None
Method:
The module first interrogates the CASE relation to see whether any dynamic analyses are to be performed
for the current boundary condition. If not, the module terminates. Error checking is performed to make
sure legal requests have been made and bookkeeping is performed to set up for matrix reductions and
extra points. Call(s) are then made to DMAPG to generate the applied loads in the p-set. DMAPG reduces
these loads to the d- or h-set, depending on the approach. Separate routines generate loads in the
frequency and time domains.
Design Requirements:
1. Follows computation of quantities in the a-set. If the MODAL approach is being used, the natural mode
shapes must be computed.
Error Conditions:
1. No more than one frequency and/or transient load is allowed per boundary condition.
Method:
The module first interrogates the CASE relation to see whether any dynamic analyses are to be performed
for the current boundary condition. If not, the module terminates. Bookkeeping is performed to set up
for any gust analyses and to process extra points. A loop on the number of cases with dynamic response
requirements for the current boundary condition is then made. Time or frequency points at which the
response is required are established and the required analyses are performed. Separate subroutines
control the performance of requested analyses:
ROUTINES PURPOSE
TRUNCS/D Uncoupled transient analysis
TRCOUP Coupled transient analysis
FRUNCS/D Uncoupled frequency analysis
FRCOUP Coupled frequency analysis
FRGUST Frequency response with gust
These routines fill output vectors with response quantities (displacement, velocity and acceleration). If
there are extra points, a partitioning operation is performed to segregate extra point data into separate
matrix entities.
Design Requirements:
1. Modules DMA and DYNLOAD prepare matrix quantities that are required for this module. If a gust
analysis is being performed, module QHHLGEN must have been processed as well.
Error Conditions:
None
[PHIGB(BC)] Matrix of global eigenvectors for BUCKLING analyses (Input), where BC repre-
sents the MAPOL boundary condition loop index number.
Application Calling Sequence:
None
Method:
The EOSUMMRY relation is opened and read for the current boundary condition. If any element output
requests exist, processing continues by loading the input matrices associated with the discipline
dependent displacement fields into a character array such that the order in which disciplines are
processed corresponds to the order of the matrices. Following this, there is a section of code set aside for
discipline dependent processing. Currently, two tasks are performed:
(1) The number of mode shapes computed in the real eigenanalysis (if one was performed) is determined
by opening the PHIG matrix; and (2) any thermal load set ID’s in the CASE relation are replaced by the
record number in GRIDTEMP that corresponds to the applied load case.
The initialization tasks continue with a call to the PRELDV utility to set up for computation of the local
design variables associated with designed elements. The transformation matrices and material proper-
ties are also prepared for fast retrieval by the element routines. The GPFDATA relation is opened for
output and the EODISC data is read into memory. At this point, the EODISC record number in the
EOSUMMRY data is replaced by the open core pointer where the record begins in memory. With the
initialization complete, the EDR module proceeds to compute the desired element response quantities
for all the "subcases" (considered by EDR to be represented by a single displacement vector) for any or
all disciplines that have been analyzed in the current boundary condition. The computation occurs in
the following three steps:
(1) Determine the set of disciplines and subcases for which any element response quantities are
needed
(2) Read into open core as many displacement vectors (real and/or complex) as will fit
(3) Call element dependent routines to compute the stress, strain, strain energy, forces and grid
point forces for each displacement vector
To perform step (1), the EOSUMMRY data is read for each discipline and the corresponding EODISC data
is used to form an in-core list of the unique subcases for each discipline in the current boundary
condition.
These data are sorted by discipline type in the order defined in the /EDRDIS/ common block and by the
subcase "identification numbers" within each discipline. The subcase ID’s refer to the column number
in the displacement matrix for the discipline. For statics and modes, these numbers are incremented
by one for each new load condition or eigenvector while transient, frequency and blast results use an
increment of three to accommodate the velocity and accelerations that are stored in the same matrix.
After this in-core list has been formed, it is read to determine which displacement vectors are to be
brought into open core. The module determines the amount of remaining memory and brings as many
displacement vectors into memory as possible. The terms are converted to single precision at this point.
Once all the displacements are in memory, or memory is full, the element dependent routines are called.
Within each element dependent routine, the geometrical portion of the element processing is performed
once followed by a loop over all incore displacements to compute the element response quantities. For
each displacement set, all the element response quantities including grid point forces, stresses, strains,
strain energies and element forces are computed and stored on the EOxxxx element response quantity
relations. Note that the exact quantities requested by the user are not used at this point, but will only
be used to determine which data to print. Once all the elements have been processed, the module loops
back for any remaining displacement vectors and, when all of these are processed, terminates.
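The subcase-ID-to-column bookkeeping described above can be sketched as follows; this is a hypothetical Python helper (the discipline names are illustrative), not the module's FORTRAN:

```python
def subcase_columns(discipline, num_cases):
    """Map each case of a discipline to its first column in the
    displacement matrix, per the increments described above."""
    # Statics/modes: one column per load condition or eigenvector.
    # Transient/frequency/blast: displacement, velocity and
    # acceleration occupy three consecutive columns per case.
    step = 3 if discipline in ("TRANSIENT", "FREQUENCY", "BLAST") else 1
    return {case: 1 + (case - 1) * step for case in range(1, num_cases + 1)}
```

For example, `subcase_columns("TRANSIENT", 2)` places the second case at column 4, leaving columns 2 and 3 of the first case for its velocity and acceleration vectors.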
Design Requirements:
1. The PFBULK processing of the element output requests must have been completed and be compatible
with the data currently resident in the CASE relation.
2. The module may be called when no element output requests exist in the Solution Control.
Error Conditions:
None
mass assembly is required. If a discipline which requires a mass matrix is included in the solution
control, the mass terms are assembled in the second pass. If there are OPTIMIZE boundary conditions,
this module calculates the linear portion of sensitivity of the objective to the design variables regardless
of whether the DMVI0 matrices are required. If no mass information is required, control is returned to
the executive and the second pass through the module is not made. For the second pass, MELM data are
used. The structure of the assembly operation is otherwise much the same and GMMCT0 and DMVI0 data
are computed and stored.
Design Requirements:
1. This assembly operation follows the MAKEST and EMG modules.
2. Since gravity loads require DMVI0 data, it is necessary to perform EMA1 prior to calling LODGEN. EMA1
must always be called before EMA2.
Error Conditions:
None
Design Requirements:
1. For OPTIMIZE boundary conditions, EMA2 precedes the optimization boundary condition loop. For
ANALYZE boundary conditions, the module immediately precedes the loop on analyze boundary condi-
tions and the NITER argument is not required. In both cases, EMA2 must always follow NLEMA1.
2. NITER must be nonzero for optimization boundary conditions.
Error Conditions:
None
Method:
The EMG module performs the linear design variable part of the second phase of the structural element
preface operations with the MAKEST module performing the first phase. The NLEMG module performs
the nonlinear design variable part of the second phase. As a result, modules EMG and MAKEST are very
closely related. The first action of the EMG module is to determine if design variables and/or thermal
loads are defined in the bulk data. If they are, the special actions for design variable linking and thermal
stress corrections are taken in the element dependent routines. The PREMAT utility to set up the material
property data also returns the SCON logical flag to denote that there are stress constraints defined in
the bulk data. The initialization of the module continues with the retrieval of the MFORM data to select
lumped or coupled mass matrices in the elements that support both forms. The default is lumped,
although any MFORM/COUPLED (even if MFORM/LUMPED also exists) will cause the coupled form to be
used. If thermal loads exist, the module prepares the TREF entity to be written by the element dependent
routines. The GLBDES relation is opened and the design variable identification numbers are read into
memory. Finally, the DVCT entity is opened and flushed and memory is retrieved to be used in the DVCTLD
submodule to load the DVCT relation. The module then calls each element dependent routine in turn.
The order in which these submodules are called is very important in that it provides an implicit order
for the MAKEST, EMG, SCEVAL, EDR and OFP modules. That order is alphabetical by connectivity bulk
data entry and results in the following sequence:
(1) Bar elements
(2) Scalar spring elements
(3) Linear isoparametric hexahedral elements
(4) Quadratic isoparametric hexahedral elements
(5) Cubic isoparametric hexahedral elements
(6) Scalar mass elements
(7) General concentrated mass elements
(8) Rigid body form of the concentrated mass elements
(9) Isoparametric quadrilateral membrane elements
(10) Quadrilateral bending plate elements
(11) Rod elements
(12) Shear panels
(13) Triangular bending plate elements
(14) Triangular membrane elements
Within each element dependent routine, the xxxEST relation for the element is opened and read one
tuple at a time. If the EST relation indicates that the element is designed, the DESLINK data is used to
write one set of tuples to the DVCT relation for each unique design variable linked to the element. The
set of tuples consists of one row for each node to which the element is connected. If the element is not
designed, a single set of tuples is written connected to the "zeroth" (implicit) design variable. The element
dependent geometry processor is then called to generate the KELM, MELM and TELM entries for the
element. Note that the KELM and TELM entries related to nonlinear design stiffness are empty, as are
the MELM entries related to nonlinear design mass. These data must be generated before the next call
to DVCTLD since the DVCT forms the directory to all three of these entities. Once all the elements are
processed within the current element dependent routine, the TREF entity is appended with the vector
of reference temperatures for the current set of elements. Again, the order of these reference
temperatures is determined by the sequence listed above and is assumed to hold in other modules. When all
the element dependent drivers have been called by the EMG module driver, clean up operations begin.
The entities that have been open for writing by the element routines are closed, the remaining in-core
DVCT tuples are written to the data base and the DVCT relation is sorted. If there are design variables,
the DVCT is sorted on the DVID attribute and, within each unique DVID, by KSIL. If there are no design
variables (all DVID’s are zero), the DVCT is sorted only on KSIL. Finally, if stress or strain constraints
were defined in the bulk data stream, the SMAT matrix of constraint sensitivities to the displacements
is closed. SMAT was opened by the PREMAT module when the SCON constraint flag was set.
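The two-level DVCT sort can be illustrated with a small sketch; the dictionary tuples below stand in for the relational rows and are not the actual entity layout:

```python
def sort_dvct(tuples, have_design_variables):
    """Sort DVCT tuples as described above: on (DVID, KSIL) when
    design variables exist, otherwise on KSIL alone (all DVIDs
    are zero in that case)."""
    if have_design_variables:
        return sorted(tuples, key=lambda t: (t["DVID"], t["KSIL"]))
    return sorted(tuples, key=lambda t: t["KSIL"])

rows = [{"DVID": 2, "KSIL": 5}, {"DVID": 1, "KSIL": 9}, {"DVID": 1, "KSIL": 3}]
sort_dvct(rows, True)   # DVID 1/KSIL 3, DVID 1/KSIL 9, DVID 2/KSIL 5
```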
Design Requirements:
1. The MAKEST module must have been called prior to the EMG module.
Error Conditions:
1. Illegal element geometries and nonexistent material properties are flagged.
[MHHFL(BC,SUB)] Generalized mass matrix for the current flutter subcase in the h-set
(normal modes+extra points) including any transfer functions and M2PP input
(Output), where BC represents the MAPOL boundary condition loop index
number
[BHHFL(BC,SUB)] Generalized damping matrix for the current flutter subcase in the h-set (nor-
mal modes+extra points) including any transfer functions, B2PP input and
VSDAMP input (Output), where BC represents the MAPOL boundary condition
loop index number
[KHHFL(BC,SUB)] Generalized stiffness matrix for the current flutter subcase in the h-set (nor-
mal modes+extra points) including any transfer functions K2PP input and
VSDAMP input (Output), where BC represents the MAPOL boundary condition
loop index number
Application Calling Sequence:
None
Method:
CASE is checked to see if any FLUTTER subcases exist for the current boundary condition. If not, control
is returned to the MAPOL sequence. If FLUTTER subcases exist, the dynamic matrix descriptions for
the current subcase (as indicated by the SUB input) are brought into memory from CASE. Then the BGPDT
data are read into memory and the DMAPVC submodule is called to generate partitioning matrices to
expand the input matrices to the p-set from the g-set and to strip off the GDR extra points where
appropriate. If extra points are defined, the MAA, KAA, PHIA, TMN and GSUBO are then expanded to
include the d-set extra point DOF.
Following the expansion of the input matrices, the direct matrix input M2PP, B2PP and K2PP are
assembled and reduced to the direct d-set DOF in the submodule DMAX2. Modal transformations occur
later in the module. Following the x2PP formation, the VSDAMP data are set depending on the DAMPING
selection for the FLUTTER subcase. Finally, the LAMBDA relation is read into memory to have the modal
frequencies available for modal damping computations.
Following all these preparations, the utility submodules DMAMHH, DMABHH and DMAKHH are used to
assemble the modal mass, damping and stiffness matrices accounting for all the dynamic matrix options.
Control is then returned to the MAPOL program.
Design Requirements:
1. The FLUTDMA module is intended to be called once for each FLUTTER subcase in the boundary
condition. The ordering of subcases is that in the CASE relation. Each set of dynamic matrices in the
standard sequence is saved in a doubly subscripted set of matrices to be used in sensitivity analysis. It
is not necessary to save these matrices unless the sensitivity phase will be performed.
Error Conditions:
1. Missing damping sets called for on the FLUTTER entry are flagged.
2. Errors on TABDMP entries are flagged.
GEOMUA A relation describing the aerodynamic boxes for the unsteady aerodynamics
model. The location of the box centroid, normal and pitch moment axis are
given. It is used in splining the aerodynamics to the structure and to map
responses back to the aerodynamic boxes (Input)
[PHIKH] A modal transformation matrix that relates the box-on-box aerodynamic mo-
tions to unit displacements of the generalized structural coordinates (modes)
(Output)
[QHHLFL(BC,SUB)] A matrix containing the list of h x h unsteady aerodynamics matrices for the
current flutter subcase related to the generalized (modal) coordinates and
including control effectiveness (CONEFFF), extra points and CONTROL matrix
inputs (Output), where BC represents the MAPOL boundary condition loop
index number
OAGRDDSP A relation containing the structural eigenvectors (generalized DOF) mapped
to the aerodynamic boxes for those AIRDISP requests in the Solution Control.
These terms are the columns of PHIKH put in relational form to satisfy the
output requests (Output)
Application Calling Sequence:
None
Method:
The CASE relation is read to obtain the SUB’th flutter subcase parameters: CONTROL and AIRDPRNT.
Then the FLUTTER relation is read for the current subcase and the KLIST and EFFID entries are
recovered.
If there is no CONTROL matrix, PHIA and UGTKA matrices are expanded to include dynamic degrees of
freedom using the utility module QHHEXP. GDR scalar points are handled to ensure that the final matrices
are in the d-set. If a CONTROL matrix does exist, its conformability is checked. The DMAPVC utility
submodule is used to create partitioning vectors and matrix reduction matrices to allow reduction of
the CONTROL matrix to the d-set. The FLCNTR submodule is then called to append the reduced CONTROL
matrix to the expanded UGTKA matrix. The PHIKH matrix is then computed as the product of the
expanded PHIA and the expanded and CONTROL-modified UGTKA:
[PHIKH] = [PHID]T[UGTKD]
Then, if control effectiveness correction factors are selected for the subcase, the PHIKH matrix terms are
adjusted by the input factors. This completes the computation of the PHIKH output matrix. The input
AIRDISP output requests are then processed to load the OAGRDDSP relation with the generalized
displacements on the unsteady aerodynamic geometry.
Finally, the QKK matrices that are associated with the user’s input Mach number and KLIST for the
subcase are reduced to the generalized degrees of freedom using the PHIKH matrix.
[QHHL] = [PHIKH]T[QKKL][PHIKH]
The premultiplication takes place in one MPYAD and the postmultiplication is done by looping over each
reduced frequency in the set, extracting the k columns of each h x k matrix and performing a separate
MPYAD. The results are then appended onto the output QHHL.
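The per-reduced-frequency reduction can be sketched with a dense-matrix stand-in; `matmul` and `transpose` here are hypothetical helpers replacing the MPYAD large-matrix utility, and the matrices are toy-sized:

```python
def matmul(a, b):
    """Plain dense matrix product (stand-in for MPYAD)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def reduce_qkk_list(phikh, qkk_list):
    """QHHL_k = PHIKH^T * QKK_k * PHIKH for each reduced frequency k.
    The premultiplication is formed once per matrix; the loop over
    the list mirrors the per-frequency MPYADs described above."""
    phikh_t = transpose(phikh)
    return [matmul(matmul(phikh_t, qkk), phikh) for qkk in qkk_list]
```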
Design Requirements:
None
Error Conditions:
1. CONTROL matrix errors in conformability are flagged.
2. CONEFFF errors are flagged.
[PHIG(BC)] Matrix of real eigenvectors in the structural set (Input), where BC represents
the MAPOL boundary condition loop index number
[AMAT] Matrix of constraint sensitivities (Output)
−g ∗ KI
 g ∗ KR

if structural damping, g, is included. Similarly, it is:

P1R ∗ (g ⁄ ω3) ∗ KR − P1I ∗ (g ⁄ ω3) ∗ KI

and

P1R ∗ (g ⁄ ω3) ∗ KI − P1I ∗ (g ⁄ ω3) ∗ KR
LAMBDA Relational entity containing the output from the real eigenanalysis
(Input)
HSIZE(BC) Number of modal dynamic degrees of freedom in the current boundary condi-
tion (Input), where BC represents the MAPOL boundary condition loop index
number
ESIZE(BC) The number of extra point degrees of freedom in the boundary condition
(Integer, Input), where BC represents the MAPOL boundary condition loop
index number
[MHHFL(BC,SUB)] Modal mass matrix (Input), where BC represents the MAPOL boundary condi-
tion loop index number
[BHHFL(BC,SUB)] Modal flutter damping matrix (Input), where BC represents the MAPOL
boundary condition loop index number
[KHHFL(BC,SUB)] Modal flutter stiffness matrix (Input), where BC represents the MAPOL
boundary condition loop index number
CLAMBDA Relation containing results of flutter analyses (Character, Output)
CONST Relation of constraint values (Character, Input)
The next task of the module is to prepare for the actual flutter analysis by setting up the FLFACT bulk
data and the UNMK data using the PREFL and PRUNMK utilities, respectively. Then the reference unsteady
aerodynamic model data is retrieved from the AERO relation. Lastly, the velocity conversion factor, if
one has been defined, is read from the CONVERT relation. The generalized mass and damping matrices
are then read into memory and converted to single precision, followed by the natural frequencies
associated with the computed eigenvectors. Lastly, the generalized stiffness matrix is read in and
converted to single precision and the generalized aerodynamic influence coefficients are opened for
retrieval. This completes the preparations for the flutter discipline loop.
For the SUB’th flutter discipline requested in the CASE relation, a number of tasks are performed to set
up for the Mach number requested on the FLUTTER entry. These consist of the retrieval of the set of m-k
pairs for the current FLUTTER entry from the UNMK data and the lists of velocities (which are converted to
the proper units, if necessary) and densities. If the boundary condition is an optimization boundary
condition, the table of required damping values is prepared using the PRFCON utility. Lastly, the set of
normal modes that are to be omitted are retrieved and the data prepared to perform the "partitioning"
of the generalized matrices. As a final step before processing the current FLUTTER entry, the local
memory required by the flutter analysis submodules is retrieved.
The subset of m-k pairs in the QHHL matrix list for the current Mach number is determined, along with
the set of associated reduced frequencies. The FA1PKI submodule is called with this data to
compute the interpolation matrix for the QHHL matrix list if the ORIG curve fit is used. Otherwise, the
fitting coefficients are computed on the fly in the QFDRV module. Then, the subset of the full QHHL matrix
associated with this flutter analysis is read into core and converted to single precision. At this point,
the imaginary part of the QZHH matrix is divided by the reduced frequency. Finally, the Mach number
dependent memory blocks are retrieved and the inner-most analysis loop on the density ratios is begun.
For each density ratio associated with the Mach number for the current flutter analysis discipline, the
FLUTTRAN module performs the flutter analysis. There are two distinct paths through the inner loop:
one for optimization and one for analysis. They differ in that the analysis loop refines the set of user
selected velocities to find a flutter crossing, while the optimization path computes the flutter eigenvalues
only at the user specified velocities and computes the corresponding flutter constraint value based on
the required damping table. Once all the loops have been completed, the module computes the flutter
responses which are required by any user function constraints, and then returns control to the executive.
Design Requirements:
1. The module assumes that at least one flutter subcase exists in the current boundary condition.
Error Conditions:
1. Referenced data on FLUTTER entries that do not exist on the data base are flagged and the execution
is terminated.
[KFF] → | KOO    KOA |
        | KOA^T  KAA |

for the symmetric case, or

[KFF] → | KOO    KOA |
        | KAO    KAA |

for the asymmetric case.
[KOOINV] and [KOOU] are the Lower and Upper triangular factors of [KOO]
The KOOINV, KOOU, KAO, GSUBO and KAA arguments must be nonblank in the calling sequence. Note
that KAO is required since the asymmetric nature of KFF prohibits the transpose operation used in the
symmetric case. The module then checks if PF is nonblank. If so, the loads matrix reduction is performed.
Once again, there are two paths depending on the symmetry flag. If SYM is zero (symmetric), the
following operations are performed:
[PF] → | PO |
       | PA |

[SCR1] = [PO]T[GSUBO]
[SCR2] = [SCR1]T
[PA] = [PA] + [SCR2]
The odd order of operations is dictated by efficiency considerations in the matrix operations. Note
that the GSUBO, PA and PO arguments must be nonblank, with the GSUBO argument an input if the
stiffness matrix was not simultaneously reduced. If SYM is nonzero (asymmetric), the following
operations are performed:
[PF] → | PO |
       | PA |

[SCR1] = [KOO]-1[PO] (Asymmetric FBS)
Note that the KOOINV, KOOU, KAO, PA and PO arguments must be supplied with the KOOINV, KOOU,
and KAO arguments input if the (asymmetric) stiffness matrix is not being reduced in the same call.
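The symmetric load-reduction step (SCR1 = PO^T GSUBO, SCR2 = SCR1^T, PA = PA + SCR2) can be sketched with toy dense matrices; the helper names are hypothetical stand-ins for the large matrix utilities:

```python
def matmul(a, b):
    """Plain dense matrix product (stand-in for MPYAD)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def reduce_loads_symmetric(po, pa, gsubo):
    """PA_bar = PA + GSUBO^T * PO, computed in the order used by
    the module: SCR1 = PO^T * GSUBO, SCR2 = SCR1^T, PA_bar = PA + SCR2.
    The transpose-then-transpose sequence mirrors the MPYAD
    efficiency consideration noted in the text."""
    scr1 = matmul(transpose(po), gsubo)
    scr2 = transpose(scr1)
    return [[pa[i][j] + scr2[i][j] for j in range(len(pa[0]))]
            for i in range(len(pa))]
```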
Design Requirements:
None
Error Conditions:
None
Design Requirements:
1. The module is only called if there are active frequency constraints and therefore must follow the
ABOUND module.
2. The DESIGN module makes the assumption that data were written to AMAT from this module prior
to any subcase dependent sensitivities.
Error Conditions:
None
variable utility module PRELDV with entry points LDVPNT and GETLDV. Finally, the CONST relation
tuples associated with the stress constraints are retrieved. If no stress constraints are found, the module
cannot do any resizing and so modifies the MAPOL control parameters FSDS, FSDE, and MPS as outlined
below to prevent the further use of FSD in subsequent iterations.
If the appropriate constraints were found, the module loops through each local design variable and
determines which (if any) stress constraint is associated with that variable. When the matching
constraint is found, the new local variable is computed from:
tnew = told ( g + 1.0 )^α
If any shape function linked local variables are encountered during this phase, the starting and ending
iterations (FSDS and FSDE) and the appropriate other starting iteration number (MPS) are modified such
that FSD will not be called again, and execution is returned to the executive. This prevents any further
attempts to use FSD with the shape function linking and directs that the current iteration be performed
using the appropriate alternative method. A warning is given and the execution continues.
Once the vector of new local variables is retrieved, the PTRANS data is brought into memory along with
the GLBDES data. The GLBDES data are used to reset any local variable values that are outside their
valid ranges to maximum or minimum gauge. Then the new vector of global variables is computed as:
vnew = max over i ( tnewi ⁄ Pi )
These constitute the new design from the FSD algorithm and are stored back to the GLBDES relation.
The DESHIST relation is updated and an informational message indicating the changes in the objective
function is written. The active and violated constraint tolerances are set to their FSD default values:
CTL = -1.0 x 10^-3 and CTLMIN = 5.0 x 10^-3. This completes the action of the FSD module.
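The resizing and global-variable recovery steps above can be sketched together; this is an illustrative scalar Python version (the update law shown, scaling the old value by the stress ratio (g + 1.0)^α, is the standard fully stressed design iteration, and the argument names are hypothetical):

```python
def fsd_resize(t_old, g, alpha, t_min, t_max, p_coeffs):
    """One FSD cycle: scale each local variable by its stress
    ratio (g + 1.0)**alpha, reset values outside the valid range
    to minimum/maximum gauge, then recover the global variable
    as the most critical linked ratio tnew_i / P_i."""
    t_new = [t * (gi + 1.0) ** alpha for t, gi in zip(t_old, g)]
    t_new = [min(max(t, t_min), t_max) for t in t_new]
    return max(t / p for t, p in zip(t_new, p_coeffs))
```

For example, with two local variables at 1.0 and 2.0, constraint values 0.0 and 1.0, alpha = 1.0 and unit linking coefficients, the violated element doubles and governs the new global variable.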
Design Requirements:
1. Only stress constraints (strain constraints are excluded) are considered in the FSD module. If none
are found, the module terminates cleanly with the FSD selection flags reset to avoid any further FSD
cycles.
2. Shape function design variable linking causes the module to terminate cleanly with the FSD selection
flags reset to avoid any further FSD cycles.
Error Conditions:
None
Design Requirements:
1. This module is an alternative to Guyan reduction and therefore parallels the reduction to the a-set.
Error Conditions:
1. j-set DOF have been constrained
2. o-set does not exist
3. Only a subset of roots are guaranteed to be accurate.
[KGG] → | KMM  KMN |
        | KNM  KNN |

[SCR1] = [KNN] + [KNM][TMN]
[SCR2] = [KMM][TMN]
These operations require the creation of four scratch matrix entities for the intermediate results and
the partitions of the KGG matrix.
If the PG argument is not omitted, the following operations are performed using the large matrix utilities:
[PG] → | PM |
       | PN |

[PN] = [PN] + [TMN]T[PM]
These operations require the use of two scratch matrix entities for the partitions of the PG matrix. When
both KGG and PG are reduced, the scratch partition matrices are shared.
Design Requirements:
None
Error Conditions:
None
The column of the PG matrix associated with each right-hand side is assembled using the SMPLOD (and
LOAD) data. The thermal and gravity loads are special in that the GLBDES information must be retrieved
in order to assemble the loads representing the current design. The case where no design variables are
defined does not represent a special case, however, since the DPVRGI and DPTHGI entities always include
terms representing the "zeroth" design variable. Once all the STATICS cases have been processed, the
module terminates.
Design Requirements:
1. The module assumes that at least one STATIC load case is defined in the CASE relation for the current
boundary condition.
2. The SMPLOD entity from the LODGEN module must exist as must the DPVRGI and DPTHGI gravity and
thermal load sensitivity matrices.
Error Conditions:
1. No simple loads are defined in the SMPLOD entity
Purpose:
To process the Bulk Data Pocket and to load the input data to the data base. Also, to compute the
external coordinate system transformation matrices and to create the basic grid point data.
MAPOL Calling Sequence:
CALL IFP ( GSIZEB, EIDTYPE );
GSIZEB The size of the structural set (Integer, Output)
EIDTYPE Relation containing element identification numbers and their corresponding
element type (Character,Output)
Application Calling Sequence:
None
Method:
The Input File Processor module performs several tasks to initialize the execution of the ASTROS procedure.
It begins by setting the titling information for the IFP bulk data echo (should that option be selected).
Following these tasks, the module continues with the interpretation of the bulk data packet of the input
stream. This packet resides on an unstructured entity called &BKDTPKT which is loaded by the executive
routine PREPAS during the interpretation of the input data stream. The IFP module proceeds in two
phases. In the first phase, the bulk data are read, expanded from free to fixed format and sorted on the
first three fields of each bulk data entry. If an unsorted echo is requested, that echo is performed as the
&BKDTPKT entity is read. If a sorted echo is desired, it is performed after the expansion and sort has
taken place. In either case, the bulk data is sorted by the IFP module. The resultant data are stored on
one or more scratch unstructured entities depending on how many passes must be performed to
accomplish the sort in the available memory. If all the bulk data fits into open core, only a single scratch
file is required.
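The expand-and-sort phase (sorting on the first three fields of each entry) can be sketched as follows; each entry is represented here as a list of already-expanded field strings, and plain string comparison is used, whereas the actual IFP also orders numeric fields correctly:

```python
def sort_bulk_data(entries):
    """Sort expanded, fixed-format bulk data entries on their
    first three fields, as the first IFP phase does.  Fields
    beyond the third do not affect the ordering."""
    return sorted(entries, key=lambda e: tuple(e[:3]))

cards = [
    ["GRID", "20", "0"],
    ["CBAR", "50", "1"],
    ["GRID", "10", "0"],
]
sort_bulk_data(cards)   # CBAR first, then GRID 10, then GRID 20
```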
For the MODEL punch option request, the expanded and sorted input Bulk Data entries are divided into
the following categories:
(1) element definition entries (e.g. CBAR)
(2) property definition entries (e.g. PBAR)
(3) design variable linking and design constraint definition entries (e.g. DESELM, DESVARP, DESVARS
and DCONVMM, DCONxxx)
(4) the rest of the Bulk Data
Those entries in categories (1), (2) and (4) are stored in corresponding unstructured entities for use in
module DESPUNCH. Those in category (3) are not saved for DESPUNCH, since it will output a MODEL
without the design entries.
The second phase of the bulk data interpretation proceeds based on the sorted bulk data from the
expansion phase. This phase begins by reading the first bulk data entry in the sorted list and locating
its bulk data template in the set of templates stored on the system data base by the SYSGEN program.
This template defines the card field labels, the field variable type, the field default value, the field error
checks and information on where to load the field into the data base loading array. The template is
compiled once and all like bulk data entries are processed together. Any user input errors that are
detected are flagged with a message indicating the field that is in error and whether the error consists
of an illegal data type (e.g., an integer value in a real field) or of an illegal value for the given field (e.g.,
a negative element identification number). Note that the IFP module only checks errors on a single
bulk data entry and does not perform any inter-entry compatibility checks.
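The per-field template checks (data type, then value) can be sketched with a hypothetical template record; the real templates come from the system data base written by SYSGEN, so the names below are illustrative only:

```python
def check_field(value, ftype, valid):
    """Validate one bulk data field against its template:
    first flag an illegal data type, then an illegal value.
    Returns an error string, or None if the field is legal."""
    if not isinstance(value, ftype):
        return "illegal data type"   # e.g. an integer in a real field
    if not valid(value):
        return "illegal value"       # e.g. a negative element ID
    return None

check_field(-7, int, lambda v: v > 0)   # → "illegal value"
check_field(3.5, int, lambda v: v > 0)  # → "illegal data type"
```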
This process is then repeated for each different bulk data entry type in the sorted list of bulk data entries.
If any errors have occurred, the module terminates the ASTROS execution. As the final steps, the IFP
module calls the MKTMAT, MKBGPD and MKUSET submodules to create the transformation
matrices for any external coordinate systems, to generate the basic grid point data and to make an error
checking pass over the structural set definitions. These three tasks are not explicitly part of the IFP
module but are so basic to every execution that they cannot properly be considered MAPOL engineering
application modules. Any errors resulting from these tasks will also cause the run to terminate with
the appropriate error messages.
Design Requirements:
None
Error Conditions:
1. User bulk data errors are flagged and cause program termination.
2. Inconsistent or illegal coordinate system definitions.
3. Illegal grid/scalar and/or extra point definitions.
4. Illegal structural set definitions in the MODEL.
where
tmin = ply or laminate minimum gauge
nply = number of designed plies defining the ply or laminate
For laminate composition constraints, the constraint derivatives are different depending on whether an
upper or lower bound constraint is imposed:
∂gupper ⁄ ∂v = (1 ⁄ tlam²) [ tlam Σ(i=1,npp) ∂tplyi ⁄ ∂v − tply Σ(j=1,npl) ∂tlamj ⁄ ∂v ]

∂glower ⁄ ∂v = (1 ⁄ tlam²) [ tply Σ(j=1,npl) ∂tlamj ⁄ ∂v − tlam Σ(i=1,npp) ∂tplyi ⁄ ∂v ]
where
tlam = current laminate thickness
tply = current ply thickness
npp = number of layers in the current ply
npl = number of layers in the current laminate
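The quotient-rule structure of these derivatives can be verified numerically; the sketch below uses arbitrary scalar test functions standing in for the summed ply and laminate thickness sensitivities:

```python
def dg_upper(t_lam, t_ply, dt_ply, dt_lam):
    """Quotient-rule sensitivity of the ply fraction t_ply/t_lam,
    the core of the upper-bound composition constraint derivative."""
    return (t_lam * dt_ply - t_ply * dt_lam) / t_lam ** 2

# Finite-difference check on g(v) = t_ply(v) / t_lam(v) with
# t_ply(v) = 2v and t_lam(v) = 3v + 1 (arbitrary test functions).
def g(v):
    return 2.0 * v / (3.0 * v + 1.0)

v, h = 1.0, 1.0e-6
fd = (g(v + h) - g(v - h)) / (2.0 * h)           # central difference
an = dg_upper(3.0 * v + 1.0, 2.0 * v, 2.0, 3.0)  # analytic form
assert abs(fd - an) < 1.0e-6
```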
Design Requirements:
None
Error Conditions:
None
Design Requirements:
1. For the general case, this should be the last preface module because it may require inputs from EMG
and EMA1.
Error Conditions:
None
matrix in the order that active displacement constraints are encountered in the CONST relation.
Constraints are evaluated for each load condition within the active boundary condition in constraint
type order. The DFDU matrix is thus also formed in this order but the inactive constraints are ignored.
After processing the active displacement constraints (if any), the MAKDFU module processes the active
stress/strain constraints. The CONST relation is conditioned to retrieve the active stress and/or principal
strain constraints (CTYPE’s 4, 5 and 6). For each active constraint, the current boundary condition
number and the load condition number (stored on the CONST relation in the SCEVAL module) are used
to determine the column number of the SMAT or NLSMAT matrix that holds the sensitivity of the current
stress term to the displacements. Having recovered the SMAT or NLSMAT columns for the current active
constraint, the DFDU terms are computed based on the element type and constraint type. Where the
sensitivity is a function of the stress/strain values, the appropriate rows of the GLBSIG or NLGLBSIG
column associated with the current boundary condition/load condition/discipline are retrieved for use
in the computations.
Design Requirements:
None
Error Conditions:
None
After the local variables have been computed, the LOCLVAR relation (also built in the MAKEST module)
is used to determine the current total thickness for a layered composite element. The VFIXED entity
gives that portion of the thickness of composite elements that is not designed. The sensitivities of the
thickness constraints are essentially the appropriate column of the PMINT or PMAXT matrix. The column
is identified by the PNUM attribute of the CONST relation. If the particular local variable constraint is
controlled by move limits, however, the sensitivity becomes a function of the current thickness and must
be adjusted accordingly. This applies only to minimum gauge constraints, however, since move limits
are not applied to maximum thickness constraints. The resulting constraint sensitivities are loaded, in
the order processed, onto the AMAT matrix.
Design Requirements:
1. The move limit that is passed into this routine must match the value used to evaluate the constraints
in the TCEVAL module. If not, the constraint sensitivities will be in error with no warning given.
Error Conditions:
1. The move limit must be greater than 1.0 if it is imposed.
That order is alphabetical by connectivity bulk data entry and results in the following sequence:
(1) Bar elements
(2) Scalar spring elements
(3) Linear isoparametric hexahedral elements
(4) Quadratic isoparametric hexahedral elements
(5) Cubic isoparametric hexahedral elements
(6) Scalar mass elements
(7) General concentrated mass elements
(8) Rigid body form of the concentrated mass elements
(9) Isoparametric quadrilateral membrane elements
(10) Quadrilateral bending plate elements
(11) Rod elements
(12) Shear panels
(13) Triangular bending plate elements
(14) Triangular membrane elements
Within each element dependent routine, the xxxEST relation for the element is opened and flushed. If
design variables exist in the MODEL, the ELIST, PLIST and SHAPE entries associated with this element
type (if the element can be designed) are opened and read into memory for use in the design variable
linking. Then the connectivity relation for the element is opened and the main processing loop begins.
Each tuple is read, the grid point references are resolved into internal sequence numbers and coordi-
nates, the property entry is found from the proper property relation(s) and the EST relation tuple is
formed in memory. Numerous checks on the existence of grid points, property entries and the uniqueness
of the element identification number within each element type are performed.
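The main processing loop just described can be sketched schematically. The container layouts and message texts below are hypothetical; only the sequence of checks (duplicate element ID, undefined grid, missing property) and the final sort mirror the text.

```python
# Schematic of the EST-building loop: resolve grid references to internal
# sequence numbers, look up the property entry, and enforce uniqueness of
# the element ID within the element type. Names are illustrative.

def build_est(conn_tuples, grid_index, properties):
    est, seen_ids, errors = [], set(), []
    for elem in conn_tuples:
        if elem["eid"] in seen_ids:
            errors.append(f"duplicate element id {elem['eid']}")
            continue
        seen_ids.add(elem["eid"])
        try:
            internal = [grid_index[g] for g in elem["grids"]]
        except KeyError as missing:
            errors.append(f"element {elem['eid']}: undefined grid {missing}")
            continue
        prop = properties.get(elem["pid"])
        if prop is None:
            errors.append(f"element {elem['eid']}: missing property {elem['pid']}")
            continue
        est.append({"eid": elem["eid"], "grids": internal, "prop": prop})
    est.sort(key=lambda t: t["eid"])   # relation is loaded sorted by element id
    return est, errors
```

Queued errors would be dumped to the user file at module termination, as described below.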
Finally, if there are design variables, the DESCHK submodule is called to determine whether the element
is linked to a design variable. The DESCHK utility searches the in-core GLBDES, ELIST, PLIST and/or
SHAPE data and determines if the current element is designed. Also, the final attributes of the GLBDES
relation for physical and shape function linking are completed. The module performs error checks to
ensure that the rules for design variable linking are satisfied for each particular global design variable
and element.
On return to the element EST routine, the LOCLVAR, PTRANS, PMINT and/or PMAXT entities are built for
the local design variable if the element was found by DESCHK to be designed. Finally, the constraint
flags, design flags, design variable nonlinear flag, composite type flag and thermal stress information
are set. The constraint and thermal stress attributes will be revised as needed in the EMG and NLEMG
modules.
When all the elements have been processed, the EST relation for the element type is loaded to the data
base. Care is taken that the final relation is sorted by the element identification number. When all the
element routines have been called, the DESLINK entity, which was formed on the fly in the element
routines, is loaded to the data base. This entity contains the number of and identification numbers for
each design variable connected to each designed element. These data are used to generate the DVCT,
DVCTD and/or DDVCT relations in the EMG and NLEMG modules. All the other design variable linking
entities that have been built on the fly are also closed. Any queued error messages are dumped to the
user file and the module terminates.
Design Requirements:
1. The basic connectivity data from the IFP module must be available.
Error Conditions:
1. Numerous error checks are performed on the consistency of the bulk data for structural element
definition as well as of element geometry and connectivity.
2. Design variable linking errors are flagged.
The algorithm is somewhat more complicated than this in that the parts of the matrices that remain
after partitioning are renamed to FIRST and SECOND so that the partitioning operation becomes
successively smaller and no partition is required on the last pass through the loop. The extracted
matrices are then multiplied and the resulting matrix is either AMAT (when there is only one active
subcase and the AMAT matrix was empty on entering the module) or it is appended to AMAT. Once the
loop is completed, any scratch matrices are destroyed and control is returned to the executive.
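The shrinking-partition scheme described above can be sketched with a toy data layout. Here a "matrix" is just a list of columns and a partition is a list of column indices into the current remainder; the FIRST/SECOND renaming is modeled by rebinding the remainder each pass. All of this is illustrative.

```python
# Sketch of successive partitioning: each pass extracts a partition from the
# current remainder, the remainder is renamed and shrinks, and the final
# pass needs no partition at all because only the wanted columns are left.

def extract_by_partitions(matrix, partitions):
    remainder = list(matrix)            # plays the role of FIRST/SECOND
    extracted = []
    for k, part in enumerate(partitions):
        if k == len(partitions) - 1 and len(part) == len(remainder):
            extracted.append(remainder)  # last pass: take what is left
            remainder = []
        else:
            keep = set(part)
            extracted.append([c for i, c in enumerate(remainder) if i in keep])
            remainder = [c for i, c in enumerate(remainder) if i not in keep]
    return extracted, remainder
```

Each extracted group would then be multiplied out and appended to AMAT, as the text describes.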
Design Requirements:
1. This module is invoked at the end of the boundary condition loop in the sensitivity analysis portion
of the MAPOL sequence.
2. It is called only if there are active stress and displacement constraints for the boundary condition.
Error Conditions:
None
Design Requirements:
1. This module must be called prior to MKAMAT and after the active displacement vector is available.
Error Conditions:
None
Method:
This module first reads the USET record which contains the number of degrees of freedom in each
dependent set and the bit masks defining the structural sets to which the degrees of freedom belong.
Then, for each requested partitioning vector, the bit masks in USET are checked for all degrees of freedom
with the related structural sets, and then the vector is generated.
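The bit-mask test and partitioning-vector generation just described can be sketched as follows. The bit positions match the /BITPOS/ table given below in this section; the mask encoding and function names are otherwise illustrative assumptions.

```python
# Minimal sketch of USET-style set membership: each DOF carries an integer
# bit mask, and a DOF belongs to a structural set when that set's bit is on.

UA, UF = 23, 24      # example bit positions from the /BITPOS/ common block

def in_set(mask, bitpos):
    return (mask >> bitpos) & 1 == 1

def partition_vector(uset_masks, bitpos):
    """1.0 where the DOF is in the requested set, 0.0 where it is not."""
    return [1.0 if in_set(m, bitpos) else 0.0 for m in uset_masks]

masks = [(1 << UA) | (1 << UF), (1 << UF), 0]   # three toy DOFs
pv = partition_vector(masks, UA)
```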
Design Requirements:
Any of the partitioning vector names may be blank if the corresponding partition will not be used
subsequently.
Error Conditions:
None
The bit position assigned to each structural set is shown below; these assignments are stored in the
/BITPOS/ common block:
SET     BIT POSITION    DESCRIPTION
UX          16
UJJP        17          Used for dynamic reduction
UJJ         18
UKK         19
USB         20          Single point constraints (SPC)
USG         21          Permanent SPCs
UL          22          Free points left for solution
UA          23          Analysis set
UF          24          Free degrees of freedom
UN          25          Independent degrees of freedom
UG          26          Dependent degrees of freedom
UR          27          Support set DOF
UO          28          Omitted (Guyan Reduction) DOF
US          29          Union of the USB and USG sets
UM          30          Dependent MPC DOF
The MKUSET module begins by preparing memory blocks for use by the module subroutines. The BGPDT
tuples associated with structural nodes are brought into core for use in conversion of external
identification numbers to internal identification numbers. Each separate structural set is processed by
an individual submodule of MKUSET with the defaulting for unspecified DOF taking place in the module
driver. The CASE relation is read to determine the boundary condition definition for the current boundary
condition. The UMSET submodule, responsible for multipoint constraint set definition, also builds the TMN
matrix, while the USSET submodule for single point constraints builds the YS vector. After the USET
masks have been built for the boundary condition, extensive error checking occurs to ensure that each
point is placed in no more than one dependent structural set. If no errors have occurred, the USET record
is written and the associated partitioning vectors are formed.
Design Requirements:
1. The MKUSET module requires that the CASE relation be complete from the SOLUTION module and that
the BGPDT be formed either by the BCBGPDT or IFP modules prior to execution.
Error Conditions:
1. Any inconsistent boundary condition specifications are flagged.
2. Any missing bulk data referenced by Solution Control is flagged.
Error Conditions:
None
GMKCT Relation containing connectivity data for the DKVI sensitivity matrix (Charac-
ter, Output)
DKVI Unstructured entity containing the total stiffness design sensitivity matrix in
a highly compressed format (Character, Output)
GMMCT Relation containing connectivity data for the DMVI sensitivity matrix
(Character, Output)
DMVI Unstructured entity containing the total mass design sensitivity matrix in a
highly compressed format (Character, Output)
GMKCTG Relation containing connectivity data for the DKVIG stiffness matrix
(Character, Output)
DKVIG Unstructured entity containing the stiffness matrix in a highly compressed
format (Character, Output)
GMMCTG Relation containing connectivity data for the DMVIG mass matrix
(Character, Output)
DMVIG Unstructured entity containing the mass matrix in a highly compressed for-
mat (Character, Output)
GMMCTD Relation containing connectivity data for the DMVID mass matrix
(Character, Output)
DMVID Unstructured entity containing the nonlinear part of mass matrix in a highly
compressed format (Character,Output)
DGMMCT Relation containing connectivity data for the DDMVI nonlinear mass sensitiv-
ity matrix (Character, Output)
DDMVI Unstructured entity containing the nonlinearly designed mass sensitivity ma-
trix in a highly compressed format (Character, Output)
DDWGH2 Unstructured entity containing the nonlinear portion of the sensitivity of
weight to the design variables (Character, Output)
Application Calling Sequence:
None
Method:
The module is executed in two passes; once for nonlinear design stiffness matrices and nonlinear
stiffness sensitivity matrices, and a second time for nonlinear design mass matrices and nonlinear mass
sensitivity matrices.
In the first pass, DVCTD information is read into core one record at a time. The algorithm is structured
to maximize the amount of processing done on a given design matrix (typically all of it) in core. Spill
logic is in place if a matrix cannot be completely held in core. For the assembly, subroutine NLRQCR
performs bookkeeping tasks to expedite the assembly and to determine whether spill will be necessary.
Subroutine NLASM1 retrieves KELMD information, performs the actual assembly operations and places
the results into the GMKCTD, DKVID, DGMKCT and DDKVI entities.
If a discipline which requires a mass matrix is included in the solution control, the mass terms are
assembled in the second pass. If there are OPTIMIZE boundary conditions, this module calculates the
nonlinear portion of the sensitivity of the weight to the design variables (DDWGH2) and the nonlinear
portion of the weight (DWGH2) regardless of whether the mass matrices are required. If no mass
information is required, the second pass is not made. For the second pass, MELMD and DMELM data are
used. The structure of the assembly operation is otherwise much the same and GMMCTD, DGMMCT, DMVID
and DDMVI data are computed and stored. After those two passes, the total weight is computed from
DWGH1 and DWGH2. GMKCT0, DKVI0, DGMKCT, DDKVI are merged into the stiffness sensitivity entities GMKCT
and DKVI; GMMCT0, DMVI0, DGMMCT, DDMVI are merged into the mass sensitivity entities GMMCT and DMVI;
GMKCT0, DKVI0, GMKCTD, DKVID are merged into the stiffness matrix entities GMKCTG and DKVIG; and GMMCT0,
DMVI0, GMMCTD, DMVID are merged into the mass matrix entities GMMCTG and DMVIG.
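The merge of linear (0-suffixed) and nonlinear (D-prefixed) contributions described above can be sketched as a term-by-term sum. The {(design_var, row): value} layout below is a stand-in; the highly compressed unstructured format ASTROS actually uses is not reproduced.

```python
# Illustrative sketch: sum linear and nonlinear sensitivity contributions,
# keyed here by (design variable, matrix row), into one total entity.

def merge_contributions(linear, nonlinear):
    total = dict(linear)
    for key, val in nonlinear.items():
        total[key] = total.get(key, 0.0) + val
    return total

kvi0 = {(1, 0): 2.0, (1, 5): 1.0}    # toy linear stiffness sensitivities
ddkvi = {(1, 0): 0.5, (2, 3): 4.0}   # toy nonlinear stiffness sensitivities
dkvi = merge_contributions(kvi0, ddkvi)
```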
Design Requirements:
1. This assembly operation follows NLEMG within the MAPOL OPTIMIZE iterations.
2. Since gravity loads require DMVID and DDMVI data, it is necessary to perform NLEMA1 prior to
calling NLLODGEN. NLEMA1 must always be called before EMA2.
Error Conditions:
None.
TELMD Unstructured entity containing the nonlinearly designed thermal load parti-
tions (Character, Output)
DTELM Unstructured entity containing the nonlinear thermal load sensitivity matrix
partitions (Character, Output)
TREFD Unstructured entity containing the element reference temperatures for non-
linearly designed thermal loads. (Character, Output)
FDSTEP Relative design variable increment for finite difference (Real, Input)
Design Requirements:
None.
Error Conditions:
None.
Design Requirements:
1. If there are nonzero enforced displacements, the stiffness and loads reductions must be done
concurrently or the KFS partition must be included in the loads call as input.
2. The KFS argument is always required when YS is nonblank.
Error Conditions:
None
is encountered, the matrix A is equivalenced to B. That is, the data are not physically copied, but only
a pointer to the data is maintained. To break this pointer, NULLMAT(A) is called.
Design Requirements:
None
Error Conditions:
None
OAGRDLOD A relation containing the rigid, flexible correction and flexible forces and pres-
sures for each SAERO subcase for the trimmed configuration parameters. Out-
puts are for the aerodynamic elements whose TPRESSURE output was requested:
= QDP*[AICMAT]^T[GSTKG]^T[UAG]
where in each case the [DELTA] and [UAG] matrices are partitioned to include only the relevant
subcases for the current subscript.
Finally, the scratch matrices on which these results reside are read and output to the OAGRDLOD and
OAGRDDSP relations for the loads and displacements, respectively.
Design Requirements:
None
Error Conditions:
None
[AAG(BC)] Matrix of accelerations for all SAERO subcases in the current boundary condi-
tion in the order the subcases appear in the CASE relation (Input), where BC
represents the MAPOL boundary condition loop index number
[KFS] The off-diagonal matrix partition of the independent degrees of freedom that
results from the SPC partitioning (Input)
[KSS] The diagonal matrix partition of the dependent (SPC) degrees of free-
dom that results from the SPC partitioning (Input)
[UAF] Matrix of free (f-set) static displacements for all SAERO subcases in the cur-
rent boundary condition in the order the subcases appear in the CASE relation
(Input)
[YS(BC)] Vector of enforced displacements for the boundary condition (one column)
(Input)
[PNSF(BC)] Partitioning vector to divide the independent DOFs into the free and SPC
DOFs (Input), where BC represents the MAPOL boundary condition loop in-
dex number
[PGMN(BC)] Partitioning vector to divide the g-set DOFs into the MPC and independent
DOF’s (Input), where BC represents the MAPOL boundary condition loop in-
dex number
[PFJK] Partitioning vector to divide the f-set DOFs that may include GDR generated
scalar points into the original f-set DOF’s
NGDR Denotes dynamic reduction in the boundary condition. (Input, Integer)
0 No GDR
–1 GDR is used
USET(BC) The unstructured entity of DOF masks for all the points in the current bound-
ary conditions (Input), where BC represents the MAPOL boundary condition
loop index number
OGRIDLOD Relation of loads on structural grid points (Output)
for all the appropriate columns of UAF that are associated with the SUB’th subscript. The input YS vector
is expanded to contain the correct number of columns.
Then the computation of the applied loads is done. First, the BGPDT data are read and the OGRIDLOD
relation is opened for output. Then the loads for each subcase in the subscript are solved for, subject to
the existence of a print request for that subcase (either LOAD or SPCF). The following loads are computed:
= QDP*[GTKG][AIRFRC][DELTA]
Inertial Load
= –[MGG][AAG]
Where the appropriate inputs are not available, the computations are simply ignored with no warning.
Thus, the optional calling arguments may be used to perform parts of the computations without
requiring that all pieces be provided.
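The optional-argument behavior just described can be sketched as follows. The scalar "matrices" and argument names are toy stand-ins; the point is that each load type is computed only when its inputs are present, with no warning otherwise.

```python
# Sketch: compute only those load types whose inputs were supplied,
# silently skipping the rest. Names and the 1-D arithmetic are illustrative.

def compute_loads(qdp=None, gtkg=None, airfrc=None, delta=None,
                  mgg=None, aag=None):
    out = {}
    if None not in (qdp, gtkg, airfrc, delta):
        out["APPLIED"] = qdp * gtkg * airfrc * delta
    if None not in (mgg, aag):
        out["INERTIA"] = -mgg * aag
    return out            # missing pieces are simply ignored
```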
Then, the output LOADs matrices are opened and the CASE LOADs print and punch requests are used to
load the OGRIDLOD relation with the RIGID, FLEXIBLE, APPLIED and INERTIA loads.
Finally, if any SPCF output requests exist the APPLIED loads that were computed are combined with
the QGV1 terms to result in the SPC reaction forces:
The reactions are output for each DOF for which SPC forces have been requested.
Design Requirements:
1. SPC force computations for other disciplines occur in the OFPSPCF module.
2. Only those arguments that are present will be used. If data are missing, the dependent terms will be
omitted from the output.
Error Conditions:
None
been processed, the CASE tuple loop continues for the current discipline. When all disciplines or all CASE
tuples have been processed, the module terminates.
Design Requirements:
1. The OFPDISP module is designed to be called at the conclusion of the boundary condition loop when
all the physical nodal response quantities have been computed for all the analyzed disciplines.
Error Conditions:
None
If any GUST loads are requested, the modal dynamic loads are transformed to the physical degrees of
freedom as:
To perform these operations, the normal modes must be expanded to include extra points for the single
subcase of transient and/or frequency analysis that is allowed. Then the multiplications are performed.
Finally, once all the direct matrices are available, the CASE control print requests are processed, the
corresponding columns are identified by interpreting the TIME or FREQ options and the GRIDLIST data
are read to determine which points are chosen. The terms are then written to the OGRIDLOD relation
as APPLIED loads.
Design Requirements:
None
Error Conditions:
None
Design Requirements:
1. The OFPEDR module is designed to be called at the conclusion of the boundary condition loop when
all the physical nodal response quantities have been computed for all the analyzed disciplines.
2. The EDR module must have been called to store the computed element response quantities onto the
EOxxxx entities which are read by the OFPEDR module.
Error Conditions:
None
structural applied loads are stored in a scratch entity for use in the subsequent print processing. The
main loop in the module now begins. This loop is over all the disciplines that have applied loads. For
each discipline, there is a loop over all the CASE tuples retrieved at the beginning of the module. Only
those CASE tuples matching the current discipline are treated at each pass of the outermost loop. The
LODSUB submodule is called for each CASE tuple to determine the number and identification numbers
for each subcase for which output is desired. A subcase is considered to be one load vector for a particular
time step, frequency step, load condition, etc. Then, depending on the nature of the discipline, one of
two print routines is called to read into memory the proper vector and to print the terms to the user
output file. Once all the subcases for the current CASE tuple have been processed, the CASE tuple loop
continues for the current discipline. When all disciplines or all CASE tuples have been processed, the
module terminates.
Design Requirements:
1. The OFPLOAD module is designed to be called at the conclusion of the boundary condition loop.
Error Conditions:
None
[PGMN(BC)] Partitioning vector to divide the g-set DOFs into the MPC and independent
DOF’s (Input)
[PFJK] Partitioning vector to divide the f-set DOFs that may include GDR generated
scalar points into the original f-set DOF’s (Optional, but required if NGDR
<>0; Input)
[PHIG(BC)] Matrix of normal mode eigenvectors in the structural g-set
(Optional, Input)
[PGLOAD] Matrix of g-set applied dynamic loads for the direct transient or frequency
analyses (as appropriate for DISC) in the current boundary condition
(Optional, Input)
[PHLOAD] Matrix of h-set applied dynamic loads for the modal transient or frequency
GUST analyses (as appropriate for DISC) in the current boundary condition
(Optional, Input)
BGPDT(BC) Relation of basic grid point data for the boundary condition (including any extra
points and GDR scalar points which may be added by the GDR module) (Input),
where BC represents the MAPOL boundary condition loop index number
OGRIDLOD Relation of loads on structural grid points (Output)
where YS has been expanded to have the appropriate number of columns and the proper terms are
ignored if YS or PS is blank or empty.
Then the QSV matrix is expanded to the g-set, the nonzero terms are read and compared to the output
requests and the appropriate terms are loaded to the OGRIDLOD relation. For the dynamic response
disciplines, the applied loads PS are extracted from the g-set output of the DYSPCF submodule and the
reaction forces are adjusted accordingly.
Design Requirements:
1. SAERO single point constraint reactions are computed in the OFPALOAD module where the applied
loads are computed.
Error Conditions:
None
The module also checks that constraint requests specified in the FLUTTER solution control command
have corresponding DCONFLT bulk data entries.
As a final step, the PFBULK module performs the preliminary processing of solution control print
requests that depend on elements. These include all the element response quantities (i.e., stress or
strain) and the grid point force balance. The first stage is performed in the PREGPF submodule which
builds the GPFELEM relation from the element connectivity data and the sets of nodes for which a force
balance is requested. Then the PREEDR submodule is called to build the EOSUMMRY and EODISC entities
which list those elements for which element data recovery should be performed in the EDR module. These
entities are also used in OFPEDR to direct the printing of the computed quantities.
Design Requirements:
1. This is a preface module that is called after the EMG and MAKEST modules.
Error Conditions:
None
aerodynamic degrees of freedom. QHLLGEN then calls the PRUNMK utility to prepare the UNMK data for
the discipline dependent unsteady aerodynamic matrices. The total number of m-k/symmetry sets
associated with the QKK matrix are computed and the requisite memory for the subsequent computations
is obtained. The module then proceeds with the premultiplication of the QKK matrix list by the PHIKH
matrix:
[QHKL] = [PHIKH][QKKL]
The QHLL output matrix is then flushed and computed using one of two paths. If there is only one
m-k/symmetry set (which is very rare), the QHLL matrix may be formed by a post-multiplication of QHKL
in one step. If more than one matrix is in the QHKL matrix list, however, the module extracts each matrix
individually using the EXQKK utility and performs the multiplication:
[QHH] = [QHK][PHIKH]
The X and KRR matrices are then read into memory and two normalization measures are computed. The
first is the overall norm of each matrix:
Xnorm = Σ(i=1..nr) Σ(j=1..nr) |Xij|

KRRnorm = Σ(i=1..nr) Σ(j=1..nr) |KRRij|

εmatrix = Xnorm / KRRnorm

The second is a column-by-column norm:

Xjnorm = Σ(i=1..nr) |Xij|

KRRjnorm = Σ(i=1..nr) |KRRij|

εcol = Xjnorm / KRRjnorm
These error ratios and norms are then printed out along with the associated diagonal of X (the strain
energy) for each support degree of freedom.
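The two normalization measures above translate directly into code. Small dense lists-of-lists stand in for the X and KRR matrices; the use of absolute values in the sums is an assumption consistent with calling these quantities norms.

```python
# Direct sketch of the overall and column-by-column norms and their ratios.

def matrix_norm(m):
    return sum(abs(v) for row in m for v in row)

def column_norms(m):
    nr = len(m)
    return [sum(abs(m[i][j]) for i in range(nr)) for j in range(len(m[0]))]

def error_ratios(x, krr):
    eps_matrix = matrix_norm(x) / matrix_norm(krr)
    eps_col = [xn / kn for xn, kn in zip(column_norms(x), column_norms(krr))]
    return eps_matrix, eps_col
```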
Design Requirements:
None
Error Conditions:
None
The UOO terms are then computed from the inverted KOO terms based on the SYM flag, with the symmetry
flag indicating whether the general or symmetric forward backward substitution is used:
Note that the module assumes that the correct set of KOOINV, KOOU, IFM, AA, and PO matrices are
supplied to match the SYM and NRSET flags. If the PO argument is omitted from the calling sequence,
the UO terms are computed directly from:
[UO] = [GSUBO][UA]
with the GSUBO argument required to perform the computation. Note that these computations are the
same irrespective of the NRSET flag. When UO is complete, the module merges the computed UO terms
with the supplied UA terms to form the UF output.
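The recovery path [UO] = [GSUBO][UA] and the merge back into UF can be sketched with toy data. Plain nested lists stand in for the matrix entities, and the merge order is driven by a partitioning vector; all names are illustrative.

```python
# Sketch: recover omitted-set displacements and merge them with the
# analysis-set displacements. Partitioning vector: 1.0 = a-set row,
# 0.0 = o-set row (assumed convention for this sketch).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_uf(uo, ua, pvec):
    uo_rows, ua_rows = iter(uo), iter(ua)
    return [next(ua_rows) if flag == 1.0 else next(uo_rows) for flag in pvec]

gsubo = [[0.5, 0.0], [0.0, 2.0]]
ua = [[1.0], [3.0]]
uo = matmul(gsubo, ua)                 # [UO] = [GSUBO][UA]
uf = merge_uf(uo, ua, [1.0, 0.0, 1.0, 0.0])
```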
Design Requirements:
None
Error Conditions:
None
where the DELRED matrix is reduced to only the active trim parameters and the effectiveness factors
have been included. Then the rigid and flexible loads are hit with the linking matrix to reduce the
problem to the relevant configuration parameters:
P2RED = P2 * TLINK
P2RED and RHSRED contain one row for each structural acceleration and one column for each label on
the trim entry. This means that the total number of stability parameters (either fixed or free) is the
number of columns in P2 and RHS. Further, the order of the parameters is the order given on the TRIM
tuples.
Now the trim equations can be assembled. From the input, we have the relationship
Where:     Represents:
F+K        Number of SUPORT point DOFs
F          Set of free accelerations, AR
K          Set of known (FIXED) accelerations, AR
U+S        Number of AERO parameters
U          Set of unknown parameters
S          Set of set (FIXED) parameters
These equations must be rearranged to get free accelerations and unknown delta’s on the same side of
the equation:
and we must handle the degenerate case where all accelerations or all delta’s are known.
Following rearrangement of the equations, the unknowns are solved for in the ARTRMS/D routine.
First the rigid masses and loads, P2RED and MRR are used to obtain the rigid trim and then the flexible
inputs RHSRED and LHS are used for the "real" solution.
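The rearrangement of the trim equations can be sketched for a system of the assumed form M*a = P*d, where some accelerations a are FIXED and some parameters d are set. Moving the unknowns to the left gives [M_f | -P_u]*[a_f; d_u] = P_s*d_s - M_k*a_k; the degenerate cases (all accelerations known, or all parameters known) simply leave one block empty. The tiny dense solver and all names are illustrative, not the ARTRMS/D implementation.

```python
# Sketch of trim-equation rearrangement and solution with known/unknown
# partitions of accelerations and configuration parameters.

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[c][c] != 0.0:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def trim_solve(M, P, a_known, d_known):
    """a_known/d_known map index -> fixed value; returns full a and d."""
    na, nd = len(M), len(P[0])
    fa = [i for i in range(na) if i not in a_known]   # free accelerations
    ud = [j for j in range(nd) if j not in d_known]   # unknown parameters
    lhs = [[M[r][i] for i in fa] + [-P[r][j] for j in ud] for r in range(na)]
    rhs = [sum(P[r][j] * d_known[j] for j in d_known)
           - sum(M[r][i] * a_known[i] for i in a_known) for r in range(na)]
    x = solve(lhs, rhs)
    a = [a_known.get(i, 0.0) for i in range(na)]
    d = [d_known.get(j, 0.0) for j in range(nd)]
    for k, i in enumerate(fa):
        a[i] = x[k]
    for k, j in enumerate(ud):
        d[j] = x[len(fa) + k]
    return a, d
```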
Then, the flexible results are unscrambled and the rigid body accelerations (either input on the TRIM
or output from the solution of the above) are stored on the AAR matrix and the same is done with the
trim parameters after the TLINK matrix is used to recover the full vector from the reduced set. Then
the results for the rigid and flexible trim are printed.
Only if the print is requested or if constraints are applied are the stability coefficients computed. These
data are recomputed in each subcase because the effectiveness terms affect the stability derivative
outputs. The ARSCFS/D module is called to compute the flexible data from the forces on the support
degrees of freedom due to the unit configuration parameters:
[F] = [MRR][LHS]^-1[RHS]
The P2 matrix contains the same information for the rigid aerodynamic loads (computed in the MAPOL
sequence). These data are then normalized and the stability coefficient table stored into memory. Once
complete, the stability coefficient table is printed using the effectiveness factors and linking terms to
assemble the "dependent" coefficients and factor all coefficients according to the user input.
Finally, using the in-core table of derivatives, the ARCONS/D submodule is called to evaluate the
constraints for the current subcase. These constraints are evaluated from the stability coefficient table
but, to prepare for eventual sensitivity computations, the additional outputs AEFLG, AARC and DELC are
needed. The first is a logical flag to indicate to the MAPOL sequence that the AARC and DELC matrices
are full. The AARC matrix and DELC matrix contain one or more columns for each constraint (appended
in the order the constraints are evaluated). The AARC contains the accelerations of the support DOFs
due to the unit configuration parameter vectors in DELC. This pair of matrices will allow the computation
of the derivative of the accelerations due to the unit parameters which is an essential ingredient in the
sensitivity computation.
For lift effectiveness constraints
AARC - 1 column due to unit ALPHA
DELC - 1 column containing a unit ALPHA with all others 0.0
For aileron effectiveness constraints
AARC - 2 columns; the first for unit SURFACE rotation and the second for unit roll rate (PRATE).
DELC - 2 columns containing a unit rotation of the named SURFACE and the second a unit PRATE
For stability coefficient constraints (DCONSCF)
AARC - 1 column due to unit PARAMETER where PARAMETER is that named on the constraint entry
DELC - 1 column containing a unit PARAMETER with all others 0.0
DCONTRM constraints are also evaluated at this time, but they do not require any pseudodisplacements for
sensitivity evaluation. The pseudodisplacements are those that arise from the unit accelerations caused
by the unit configuration parameters.
After the stability coefficients (and constraints) are computed and printed, the rigid and flexible trim
results are printed and the module repeats the entire process for all the subcases that are associated
with the current SUBscript. Then the module terminates.
Design Requirements:
None
Error Conditions:
None
(3) Update the "subscript" attribute in TRIM to mark all the cases that are being processed. Also
load the SUBID to assist in re-merging the answers into CASE subcase order
(4) Check if any more SAERO subcases need to be processed and set the "loop" flag
After these steps have been completed, if the PRINT flag is nonzero, a summary of the selected TRIMs
is printed to the output file.
Design Requirements:
1. The TRIM relation is assumed to contain NULL values for SUBSCRPT on the first subscript of the first
design iteration (for OPTIMIZE boundary conditions) and for the first subscript of all ANALYZE boundary
conditions.
Error Conditions:
None
Design Requirements:
1. The assumption is that each MATSUB matrix contains the results from the "SUB"th subscript value in
the order the trim id’s for that SUB appear in the TRIM relation.
2. The same MATOUT matrix must be passed into the AROSNSMR module on each call since the columns
associated with earlier subscript values are read from MATOUT into a scratch entity. The merged matrix
that results then replaces the input MATOUT.
3. The AEROSENS module is called upstream of the AROSNSMR module to process active DCONTRM
constraints for the current subscript. Thus, those columns that are active only for DCONTRM constraints
may be filtered out for the downstream processing of stress, strain and displacement constraints.
Design Requirements:
None
Error Conditions:
None
latter is for thermal load corrections to the stresses and strains. If any thermal load cases were found,
the GRIDTEMP and TREF entities are opened.
If the current boundary condition is the first with stress or strain constraints, the running constraint
type count variables are reinitialized for the current design iteration. This type count provides a link
between the ACTCON print of design constraints and the debug print option supported by the SCEVAL
module. If any thermal loads exist for the current boundary condition, the GRIDTEMP, TREF and TREFD
entities are brought into memory to be available for the computation of the stress-free thermal strain
correction to the element stresses. Once these preparations have been made, the SMAT and NLSMAT
matrices of stress/strain sensitivities and the GLBSIG and NLGLBSIG matrices are opened and the
GLBSIG and NLGLBSIG matrices are positioned to the proper columns to pack additional stress/strain
components. Note that the GLBSIG and NLGLBSIG matrices store all the columns associated with the
current boundary condition since they are required for the constraint sensitivity computations.
Finally, the UG matrix of global displacements is opened. For each column in the UG matrix, the matrix
products
are calculated to obtain the component stress or strain values for linearly designed elements and
nonlinearly designed elements, respectively. Having calculated and stored in core these values, the
element dependent constraint evaluation routines are called to process each constraint. Note that the
order in which the element routines are called must be the same as the order the SMAT and NLSMAT
columns were formed. That order is:
1. Bar elements, BARSC (Using both [GMA] and [NLGMA])
2. Isoparametric quadrilateral membrane elements, QD1SC (Using [GMA] only)
3. Quadrilateral bending plate elements, QD4SC (Using both [GMA] and [NLGMA])
4. Rod elements, RODSC (Using [GMA] only)
5. Shear panels, SHRSC (Using [GMA] only)
6. Triangular bending plate elements, TR3SC (Using both [GMA] and [NLGMA])
7. Triangular membrane elements, TRMSC (Using [GMA] only)
On the first pass through the element dependent routines, all the xxxxEST tuples (e.g., RODEST and
TRMEMEST) with nonzero stress/strain constraint flags are retrieved from the data base. For subsequent
passes, this information is used directly from core. Each constraint is evaluated in turn with the stress
components modified by the thermal stress correction if the displacement field includes thermal strain
effects. The CONST relation is loaded with one tuple for each constraint as they are processed. When all
the constraints have been evaluated for the current loading condition, the adjusted linear design
variable and nonlinear design variable stress/strain constraint terms are packed to the GLBSIG and
NLGLBSIG matrices.
The element stress and strain responses which are required by any user function constraints are also
computed in this module. Those response values are stored into a relation entity to be used by user
function evaluation utilities.
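One constraint evaluation of the kind described above can be sketched as follows. The thermal correction form E*alpha*(T - Tref) is a textbook one-dimensional example, not the ASTROS element formulation, and all names are illustrative.

```python
# Hedged sketch: a recovered stress is corrected by removing an assumed
# stress-free thermal part, then normalized by the allowable. A zero
# allowable would divide by zero, matching the error condition noted below.

def stress_constraint(sigma_mech, allowable, e=None, alpha=None,
                      t=None, t_ref=None):
    sigma = sigma_mech
    if None not in (e, alpha, t, t_ref):
        sigma -= e * alpha * (t - t_ref)   # remove stress-free thermal strain part
    return sigma / allowable - 1.0          # g <= 0 means the constraint is satisfied
```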
Design Requirements:
1. The SMAT (or NLSMAT), GRIDTEMP and TREF (or TREFD) entities must exist.
2. The CASE relation must be complete from SOLUTION.
Error Conditions:
1. A zero material allowable may cause division by zero in the computation of some of the constraints.
Design Requirements:
None
Error Conditions:
1. Each aerodynamic box may appear on only one SPLINE1, SPLINE2 or ATTACH entry although not all
boxes need appear. Missing boxes will not influence the aeroelastic response.
2. Missing structural grids or aerodynamic elements appearing on the spline definitions will be flagged.
Design Requirements:
None
Error Conditions:
1. Each aerodynamic box may appear on only one SPLINE1, SPLINE2 or ATTACH entry although not all
boxes need appear. Missing boxes will not influence the aeroelastic response.
2. Missing structural grids or aerodynamic elements appearing on the spline definitions will be flagged.
CAROGEOM An aerodynamic geometry relation output only for geometry checking. The
"grids" defined in AEROGEOM are "connected" to 2-node (ROD) and 4-node
(QUAD) elements in CAROGEOM in such a way as to emulate the structural
model. ICE may then be used to punch an equivalent structural model to
allow graphical presentation of the STEADY aero model.
Application Calling Sequence:
None
Method:
The STEADY preface module performs initial aerodynamic processing for planar STEADY aerodynamics.
It is driven by the TRIMDATA relation and the MINDEX value.
On each call, the TRIMDATA relation is queried to determine the MINDEX’th Mach number and whether
symmetric, antisymmetric or both boundary conditions are to be applied.
On the first call (determined by MINDEX=1) the STEADY module computes the planar STEADY aerody-
namic geometry in calls to GEOM. It then processes the current Mach number and stores the resultant
AIC terms in the AICMAT and/or AAICMAT entity (depending on the symmetry options) and the
resultant rigid forces in the AIRFRC matrix. The STABCF relation is loaded for the current MINDEX value
with the symmetric and antisymmetric stability derivatives in the same order that the AIRFRC matrix
columns are loaded. Hence, the STABCF relation points to the corresponding AIRFRC column.
Design Requirements:
1. The STEADY module interacts with the executive in that the MINDEX parameter should be unique for
each call (although it need not be monotonically increasing). The MINDEX value must be 1 on the first
call to ensure that the geometry processing is done.
Error Conditions:
1. Errors in the STEADY aerodynamic MODEL specifications are flagged.
specified move limit on the local variables is to be applied to the minimum thickness constraints (note
that the maximum thickness constraints are always computed relative to their gauge limits rather than
to a move limit).
If move limits are applied (as they almost always are), the DCONTHK or DCONTH2 data are also brought
into core to identify those elements whose minimum gauge constraints are always to be retained by the
constraint deletion algorithm in the ACTCON module. The minimum gauge constraints are then
computed by performing the matrix multiplication:
{g} = 1.0 - [PMINT]T{v} = 1.0 - t/tmin
The LOCLVAR data is then used to determine to which element each "g" applies. If the constraint value
is less critical (more negative) than the move limit value of
gretain = 0.10
it is reset to that retention value so that the flagged constraint survives the constraint deletion in ACTCON.
Any minimum thickness constraints that are stored on CONST that do not appear on DCONTHK or
DCONTH2 entries will be subject to the normal constraint deletion criteria. The maximum gauge
constraints are then computed by performing the matrix multiplication:
The LOCLVAR data is then used to determine to which element each "g" applies. No move limits are
applied to these constraints and they are stored directly to the CONST relation to undergo the normal
constraint deletion in ACTCON.
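The minimum gauge computation above can be illustrated with a short Python sketch (the scalar form of the matrix product, the retention value, and the clipping direction are assumptions for illustration only):

```python
def min_gauge_constraint(t, t_min, g_retain=0.10, always_retain=False):
    """g = 1 - t/t_min: negative when the element is above minimum gauge.

    Flagged elements (always_retain) are clipped at the retention value so
    the constraint-deletion step does not discard them; the retention value
    and the clipping direction are assumptions for illustration.
    """
    g = 1.0 - t / t_min
    if always_retain and g < -g_retain:
        g = -g_retain   # keep the constraint "near critical" so it is retained
    return g
```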
Design Requirements:
1. This module should be the first module called in the optimization phase of the MAPOL sequence.
2. The move limit that is passed into this routine must match the value used to evaluate the constraints
in the MAKDFV module. If not, the constraint sensitivities will be in error with no warning given.
Error Conditions:
1. A local variable has become negative due to insufficient DCONTHK or DCONTH2 entries or illegal gauge
constraints.
Design Requirements:
1. The TRIMCHEK module interacts with the executive in that the LOOP variable is output on the first
call and the module expects to be called again as long as LOOP is true. For each time called, the MINDEX
parameter should be unique although it need not be monotonically increasing.
Error Conditions:
1. Errors in the TRIM specifications are flagged.
Once the geometry data are complete, the AMG submodule is called to compute the AJJTL, D1JK, D2JK
and SKJ matrices. These computations are done for all the {symm,m,k} sets in the bulk data. Each AJJT
matrix is appended to the AJJTL output matrix. The D1JK, D2JK and SKJ matrices will have two
separate matrices stored in a similar fashion if and only if both subsonic and supersonic Mach numbers
appear in the UNMK sets. Once these computations are complete, UNSTEADY returns control to the
MAPOL sequence.
Design Requirements:
None
Error Conditions:
None
Design Requirements:
1. The YS matrix entity, if it is included in the calling sequence, must be null (no columns) or be a column
vector. If the matrix is null, the routine acts as though it were not included in the calling sequence.
Error Conditions:
None
Chapter 6.
APPLICATION UTILITY MODULES
Large software systems such as ASTROS require that similar operations be performed in many code
segments. To reduce the maintenance effort and to ease the programming task, a set of commonly used
application utilities was identified and used whenever the application required those tasks to be per-
formed. This section is devoted to the documentation of the set of application utilities in ASTROS. The
suite of utilities in ASTROS includes small (performed entirely in memory) matrix operations like linear
equation solvers, matrix multiplication and others. Another suite of utilities has been written to sort
tables or columns of data on real, integer and character values in the table. Other utilities search lists of
data stored in memory for particular key values, initialize arrays, operate on matrix entities and perform
other disparate tasks of a general nature. The ASTROS user who intends to write application programs
to be used within the ASTROS environment is strongly urged to study the suite of utilities documented in
this section. ASTROS software designed to make use of the suite of application utilities can be much
simpler to write, debug and maintain since these well-tested utilities can be substituted for code that
would otherwise require programming effort.
The following subsections document the interface to the application utilities in two formats: using the
executive system (MAPOL) and using the FORTRAN calling sequence. In most cases, there is no MAPOL
language interface since these utilities are useful only within an application module. In other cases,
however, the utility has been identified as a feature accessible through the executive. Finally, a small
number of these application utilities are intended for access only by the executive system. This family of
utilities is always associated with obtaining formatted output of data stored on the database.
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
The GMMATC routine assumes that sufficient storage space is available in core to perform the multipli-
cation. The matrices are checked to ensure that they are of proper dimensions to be multiplied. Complex
single-precision is used throughout the routine.
Design Requirements:
None
Error Conditions:
None
Method:
The GMMATD routine assumes that sufficient storage space is available in core to perform the multipli-
cation. The matrices are checked to ensure that they are of proper dimensions to be multiplied. Double
precision is used throughout the routine.
Design Requirements:
None
Error Conditions:
None
Method:
The GMMATS routine assumes that sufficient storage space is available in core to perform the multipli-
cation. The matrices are checked to ensure that they are of proper dimensions to be multiplied. Single
precision is used throughout the routine.
Design Requirements:
None
Error Conditions:
None
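A minimal Python analogue of the GMMATC/GMMATD/GMMATS multiplication, showing the conformability check the routines perform (the real routines operate on flat FORTRAN work arrays in the stated precisions):

```python
def gmmat(a, b):
    """In-core matrix product with the dimension check the GMMAT* routines
    perform before multiplying (illustrative Python analogue)."""
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    if cols_a != rows_b:
        raise ValueError("matrices are not conformable for multiplication")
    return [[sum(a[i][k] * b[k][j] for k in range(cols_a))
             for j in range(cols_b)] for i in range(rows_a)]
```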
Method:
Note that all or the upper left square partition of the input array A may be inverted. If on input, the
value of ISING is less than zero, the determinant of the A matrix is not calculated. The value of DETERM
on return will be zero. The matrix inversion routine uses the Gauss-Jordan method with complete
row-column interchange. Sufficient core storage must be set aside in INDEX to complete the inversion.
Error Conditions:
None
Method:
Note that all or the upper left square partition of the input array A may be inverted. If on input, the
value of ISING is less than zero, the determinant of the A matrix is not calculated. The value of DETERM
on return will be zero. The matrix inversion routine uses the Gauss-Jordan method with complete
row-column interchange. Sufficient core storage must be set aside in INDEX to complete the inversion.
Error Conditions:
None
Method:
Note that all or the upper left square partition of the input array A may be inverted. If on input, the
value of ISING is less than zero, the determinant of the A matrix is not calculated. The value of DETERM
on return will be zero. The matrix inversion routine uses the Gauss-Jordan method with complete
row-column interchange. Sufficient core storage must be set aside in INDEX to complete the inversion.
Error Conditions:
None
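The inversion method can be sketched as follows (a Python illustration using partial rather than complete row-column pivoting to keep it short; the ISING/DETERM behaviour of the FORTRAN routine is only loosely mirrored):

```python
def gauss_jordan_invert(a, compute_det=True):
    """Invert a square matrix by Gauss-Jordan elimination with pivoting and
    accumulate the determinant. When compute_det is False the determinant is
    skipped (the ISING < 0 case of the FORTRAN routine) and None is returned
    in its place."""
    n = len(a)
    # augment with the identity
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    det = 1.0 if compute_det else None
    for col in range(n):
        pivot_row = max(range(col, n), key=lambda r: abs(m[r][col]))
        if m[pivot_row][col] == 0.0:
            raise ValueError("singular matrix")
        if pivot_row != col:
            m[col], m[pivot_row] = m[pivot_row], m[col]
            if compute_det:
                det = -det                  # row interchange flips the sign
        pivot = m[col][col]
        if compute_det:
            det *= pivot
        m[col] = [x / pivot for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                factor = m[r][col]
                m[r] = [x - factor * p for x, p in zip(m[r], m[col])]
    inverse = [row[n:] for row in m]
    return inverse, det
```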
Method:
The MSGDMP routine reads the queued messages written by UTMWRT from the queue file and writes them
onto the system output file. The queue file is then reset to accept the next set of messages. The intention
is that MSGDMP will be called after each module’s execution to allow easy determination of the last module
executed, should the execution terminate.
Error Conditions:
None
Method:
This routine performs double-precision polynomial evaluation using fit coefficients obtained from the
solution of Vandermonde matrix equations. It is taken from "Numerical Recipes," Section 3.5, routine POLCOE.
Design Requirements:
1. Use POLCOD to evaluate the fit coefficients.
2. Use POLSLD to evaluate the polynomial derivative values based on the computed coefficients.
Error Conditions:
None
Method:
This routine performs single-precision polynomial evaluation using fit coefficients obtained from the
solution of Vandermonde matrix equations. It is taken from "Numerical Recipes," Section 3.5, routine POLCOE.
Design Requirements:
1. Use POLCOS to evaluate the fit coefficients.
2. Use POLSLS to evaluate the polynomial derivative values based on the computed coefficients.
Error Conditions:
None
Method:
This routine performs double-precision polynomial derivative evaluation using fit coefficients obtained
from the solution of Vandermonde matrix equations. It is taken from "Numerical Recipes," Section 3.5,
routine POLCOE.
Design Requirements:
1. Use POLCOD to evaluate the fit coefficients.
2. Use POLEVD to evaluate the polynomial values based on the computed coefficients.
Error Conditions:
None
Method:
This routine performs single-precision polynomial derivative evaluation using fit coefficients obtained
from the solution of Vandermonde matrix equations. It is taken from "Numerical Recipes," Section 3.5,
routine POLCOE.
Design Requirements:
1. Use POLCOS to evaluate the fit coefficients.
2. Use POLEVS to evaluate the polynomial values based on the computed coefficients.
Error Conditions:
None
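The coefficient-fit/evaluation/derivative division of labor among these routines can be sketched in Python (a compact stand-in that solves the Vandermonde system by Gaussian elimination rather than the polcoe recursion; all names here are illustrative):

```python
def poly_coefficients(xs, ys):
    """Coefficients c[0..n-1] of the polynomial through the points (xs, ys),
    found by solving the Vandermonde system directly (the POLCOD/POLCOS role)."""
    n = len(xs)
    m = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):                      # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = m[r][n] - sum(m[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / m[r][r]
    return coeffs

def poly_eval(coeffs, x):
    """Horner evaluation from the coefficients (the POLEVD/POLEVS role)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def poly_deriv(coeffs, x):
    """Derivative value from the same coefficients (the POLSLD/POLSLS role)."""
    return poly_eval([j * c for j, c in enumerate(coeffs)][1:], x)
```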
Method:
PS returns the character string RDP on double-precision machines or RSP on single-precision machines
for an input TYPFLG of R; in other words, PS(’R’) returns either RSP or RDP. The complex equivalent
CDP or CSP is returned if TYPFLG is C.
Design Requirements:
None
Error Conditions:
1. If TYPFLG in PS is neither "R" nor "C", PS will return blank.
Method:
The matrix is opened, its size determined and a memory block with group name GRPN and block name
BLKN is allocated to hold the matrix data. The matrix is then read into core with special provisions being
taken to handle the case of null columns. The matrix is then closed. The calling routine is responsible
for freeing the memory block.
Design Requirements:
1. The matrix must be closed when this routine is called.
Error Conditions:
1. Insufficient open core memory will cause ASTROS termination.
Method:
The matrix is opened, its size determined and a memory block with group name GRPN and block name
BLKN is allocated to hold the matrix data. The matrix is then read into core with special provisions being
taken to handle the case of null columns. The matrix is then closed. The calling routine is responsible
for freeing the memory block.
Design Requirements:
1. The matrix must be closed when this routine is called.
Error Conditions:
1. Insufficient open core memory will cause ASTROS termination.
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
This module generates relations SHAPE and/or SHAPEM based on the element centroids of the
elements appearing in SHPGEN Bulk Data entries. It performs the linking relationship and, optionally,
prints or punches the resulting SHAPE and/or SHAPEM Bulk Data entries.
Design Requirements:
The DEBUG command SHPGEN must be specified to print or punch the results.
Error Conditions:
None
Method:
The source and destination arrays are operated on as integer arrays inside the UTCOPY routine. If
double-precision data are to be copied, the NWORD argument must be adjusted accordingly.
Design Requirements:
None
Error Conditions:
None
Method:
The UTEXIT routine is called to cleanly terminate the execution of the ASTROS system. It calls the
DBTERM database termination program to provide for normal closing of the database files, and dumps
the queued messages from the UTMWRT utility. When these tasks have been completed, the program
execution is terminated.
Design Requirements:
None
Error Conditions:
None
Method:
The UTMINT routine uses the DOUBLE function to determine if the matrix to be initialized is single or
double-precision. The requested entity is then opened and flushed. No check is made to ensure that the
requested matrix exists. Based on the value of VAL, one of several paths through the utility is taken. If
VAL is not zero, a diagonal matrix with diagonal terms given the value of VAL is created. If the non-zero
value is 1.0 and NROWS equals NCOLS, the resulting identity matrix is specifically declared as such in
the MXINIT call. If the matrix is rectangular, extra columns, if any, are null. If VAL is zero, a null matrix
of the requested row and column dimensions is created. Note that all the matrices created by this utility
are of the machine precision as determined by the DOUBLE function.
Design Requirements:
None
Error Conditions:
None
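The path selection described above can be sketched in Python (an in-memory stand-in; the real utility writes a packed database entity via MXINIT):

```python
def matrix_init(nrows, ncols, val=0.0):
    """Build the matrix UTMINT would create: null when val == 0, otherwise
    diagonal with val on the diagonal (an identity when val == 1 and the
    matrix is square); extra columns of a rectangular matrix remain null."""
    return [[val if i == j else 0.0 for j in range(ncols)]
            for i in range(nrows)]
```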
Method:
The UTMPRG, UTRPRG and UTUPRG MAPOL calls are defined to allow up to 10 entities of a single type
to be purged from the MAPOL sequence. The application interface is the DBFLSH routine which can take
a single argument of an entity name of any type.
Design Requirements:
None
Error Conditions:
None
Method:
If METHOD is zero (or absent from the MAPOL call), the matrix entity MATi is printed in a banded format:
that is, all the terms from the first non-zero term to the last non-zero term (inclusive) are unpacked and
printed. Null columns and groups of null columns are identified as such. Note that the MAPOL sequence
call allows for up to 10 matrix entities to be printed. A nonzero METHOD prints the column by string with
no intervening zeros.
Design Requirements:
None
Error Conditions:
None
Method:
The UTMWRT routine cracks the message number NUMBER into its three component integers: NN, the
module number; MM, the message number; and LL, the message length (in records). If LL is omitted (i.e.,
NUMBER=NN.MM), it defaults to one record in length.
The correct message text is then recovered from the message file by querying the MSGLEN for the module
NN to obtain the starting record and adding the message number (MM) and message length (LL) to obtain
the record numbers where the message text is stored. The message text is of the form:
’---text---$----text--$--.....’
If any $ (dollar signs) exist in the message text, they are replaced by the ARGMTS supplied in the call
statement. Note that the final message text including the ARGMTS must be less than 128 characters in
length.
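The "$" substitution step can be illustrated with a short Python sketch (the queue handling and message-file lookup are omitted; the over-length fallback mirrors item 3 under Error Conditions):

```python
def expand_message(template, args, limit=128):
    """Replace each '$' in the stored message text with the next argument,
    enforcing the 128-character limit described above (illustrative analogue
    of the UTMWRT expansion)."""
    parts = template.split("$")
    if len(parts) - 1 != len(args):
        raise ValueError("argument count does not match '$' placeholders")
    text = "".join(p + str(a) for p, a in zip(parts, args)) + parts[-1]
    if len(text) >= limit:
        # the real routine prints the unexpanded text and echoes the arguments
        return template
    return text
```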
Design Requirements:
1. The pointers to the system database entity that contains the error message texts for each "module"
must be stored in memory. Currently, the array for pointer storage is 200 words long which means
that no more than 100 distinct "modules" can be defined. Note that this does not imply any limit on
the number of error messages within a particular module’s group of messages.
Error Conditions:
1. UTMWRT error: the number of modules exceeds the limit of $. This message results in program
termination and can only be fixed by increasing the size of the message pointer storage array.
2. Error in UTMWRT when processing message number $. This message is a system level error which
usually implies that a non-valid message number NN.MM.LL was passed to the module.
3. If the resultant message is longer than 128 characters, the unexpanded text is printed (with $’s)
and the arguments are echoed.
Method:
The UTPAGE routine keeps track of the total line count and the line count for the current page. The total
number of output lines allowed is maintained for use by this module. These quantities are stored in the
OUTPT1 common block. The OUTPT2 common block is also used to store the header and titling data for
the current execution. When output to the system output file is being performed, the line count is checked
by the current module against the number of lines per page. When the maximum lines per page is
reached, a call to UTPAGE causes a page advance on the system output file and the total number of
printed lines is updated. The header information can be modified by the application modules by simply
overwriting the current entries in the OUTPT2 common block. Note that all system output should be
performed using this utility module.
The UTPAG2 routine performs a page eject if the N lines will not fit on the current page.
Design Requirements:
None
Error Conditions:
None
Method:
The relational entity RELi is printed using the full relation projection. At present, if the full projection
is too large to be output on one 132 character record, the remaining attributes are ignored. Each
attribute, regardless of type, uses a 12 character format for output. The current version of UTRPRT has
a few additional restrictions. The first is that any string attribute that is more than eight characters in
length cannot be printed. The routine will ignore these attributes and write a message to that effect. In
addition, double-precision attributes are first converted to single-precision before output.
Design Requirements:
1. Only the following attribute types are supported:
INT, KINT, AINT
RSP, ARSP
RDP
STR, KSTR (first 8 characters only)
Error Conditions:
1. Relational entity REL does not exist.
2. Relational entity REL is empty.
3. A string attribute cannot be printed when longer than eight characters.
Method:
The UTRSRT routine uses a QUICKSORT algorithm outlined in "The Art Of Computer Programming,
Volume 3 / Sorting And Searching" by D.E. Knuth, Page 116. Several improvements have been made
over the pure quicksort algorithm. The first is a random selection of the key value around which the
array is sorted. This feature allows this routine to handle partially sorted information more rapidly than
the pure quicksort algorithm. The second improvement in this routine is that a cutoff array length is
used to direct further array sorting to an insert sort algorithm (Ibid. Page 81). This method has proven
to be more rapid than allowing small arrays to be sorted by the quicksort algorithm. Presently this cutoff
length is set at 15 entries. Studies should be conducted on each type of machine in order to set this cutoff
length to maximize the speed of this routine. This sorting algorithm requires an integer stack in which
to place link information during the sort. The maximum required size for this stack array is twice the
base-2 logarithm of the number of rows in the table. At present, the UTRSRT routine has a hard-coded
array of size (2,40) which provides for a trillion entries to be sorted.
Design Requirements:
None
Error Conditions:
None
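The two quicksort refinements cited above, random key selection and an insertion-sort cutoff, can be sketched in Python (a recursive illustration; the FORTRAN routine instead manages its own (2,40) link stack to avoid recursion):

```python
import random

CUTOFF = 15  # below this length, fall back to insertion sort (see text)

def insertion_sort(a, lo, hi):
    """Simple insert sort for short subarrays (Knuth Vol. 3, p. 81)."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def quicksort(a, lo=0, hi=None):
    """Quicksort with a random pivot and an insertion-sort cutoff, the two
    refinements the UTRSRT description cites."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 < CUTOFF:
            insertion_sort(a, lo, hi)
            return
        pivot = a[random.randint(lo, hi)]   # random key selection
        i, j = lo, hi
        while i <= j:                       # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        quicksort(a, lo, j)                 # recurse on one side,
        lo = i                              # iterate on the other
```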
Method:
None
Design Requirements:
None
Error Conditions:
None
Notes:
1. Routine SETSYS uses UTSFLG to set output file unit number, PRINT (set to second word of /UNITS/
from XXBD) and to set the system precision PREC (=1 for single-precision; =2 for double-precision)
based on the DOUBLE function. Large matrix utilities fetch these FLAG values by using UTGFLG.
2. Some of the DEBUG parameters are set by UTSFLG and are retrieved by the application modules
using UTGFLG.
3. Routine TIMCOM uses UTSFLR to set system timing constants for matrix operations. The FLAGs are
named: TMUNIO, TMMXPT, TMMXUT, TMMXPK, TMMXUP, TMMXUM, TMMXPM, TMTRSP, TMTRDP,
TMTCSP, TMTCDP, TMLRSP, TMLRDP, TMLCSP, TMLCDP, TMCRSP, TMCRDP, TMCCSP, and
TMCCDP. Large Matrix utilities fetch these constants by using UTGFLR.
Method:
The UTSORT routine uses a QUICKSORT algorithm outlined in "The Art Of Computer Programming,
Volume 3 / Sorting And Searching" by D.E. Knuth, Page 116. Several improvements have been made
over the pure quicksort algorithm. The first is a random selection of the key value around which the
array is sorted. This feature allows this routine to handle partially sorted information more rapidly than
the pure quicksort algorithm. The second improvement in this routine is that a cutoff array length is
used to direct further array sorting to an insert sort algorithm (Ibid. Page 81). This method has proven
to be more rapid than allowing small arrays to be sorted by the quicksort algorithm. Presently this cutoff
length is set at 15 entries. Studies should be conducted on each type of machine in order to set this cutoff
length to maximize the speed of this routine. This sorting algorithm requires an integer stack in which
to place link information during the sort. The maximum required size for this stack array is twice the
base-2 logarithm of the number of rows in the table. At present, the UTSORT routine has a hard-coded
array of size (2,40) which provides for a trillion entries to be sorted.
Design Requirements:
None
Error Conditions:
None
Method:
The UTSRCH routine first calculates the number of key values to be searched. If there are less than a
minimum number of key values (presently 15), then the list is searched sequentially. If more than the
minimum exist, a binary search of the list is performed. If the value cannot be found, the routine returns
to *ERR.
Design Requirements:
None
Error Conditions:
None
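The sequential-versus-binary strategy can be sketched in Python (the *ERR alternate return is represented here by a -1 result; the cutoff of 15 follows the text):

```python
SEQUENTIAL_CUTOFF = 15   # below this, a linear scan beats the binary search

def search_keys(keys, target):
    """Return the index of target in the sorted list keys, or -1 when absent
    (the FORTRAN routine instead branches to the *ERR return). Mirrors the
    UTSRCH strategy: sequential scan for short lists, binary search otherwise."""
    n = len(keys)
    if n < SEQUENTIAL_CUTOFF:
        for i, k in enumerate(keys):
            if k == target:
                return i
        return -1
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if keys[mid] == target:
            return mid
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```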
Method:
The UTSRTD routine uses a QUICKSORT algorithm outlined in "The Art Of Computer Programming,
Volume 3 / Sorting And Searching" by D.E. Knuth, Page 116. Several improvements have been made
over the pure quicksort algorithm. The first is a random selection of the key value around which the
array is sorted. This feature allows this routine to handle partially sorted information more rapidly than
the pure quicksort algorithm. The second improvement in this routine is that a cutoff array length is
used to direct further array sorting to an insert sort algorithm (Ibid. Page 81). This method has proven
to be more rapid than allowing small arrays to be sorted by the quicksort algorithm. Presently this cutoff
length is set at 15 entries. Studies should be conducted on each type of machine in order to set this cutoff
length to maximize the speed of this routine. The algorithm used in this utility requires a stack array
for storing the linking information generated during the sort. The maximum size needed for this stack
is twice the base-2 logarithm of the number of entries in the array. Currently, a stack of dimension (2,40)
is hard-coded, which allows for a trillion entries to be sorted.
Design Requirements:
None
Error Conditions:
None
Method:
The UTSRTI routine uses a QUICKSORT algorithm outlined in "The Art Of Computer Programming,
Volume 3 / Sorting And Searching" by D.E. Knuth, Page 116. Several improvements have been made
over the pure quicksort algorithm. The first is a random selection of the key value around which the
array is sorted. This feature allows this routine to handle partially sorted information more rapidly than
the pure quicksort algorithm. The second improvement in this routine is that a cutoff array length is
used to direct further array sorting to an insert sort algorithm (Ibid. Page 81). This method has proven
to be more rapid than allowing small arrays to be sorted by the quicksort algorithm. Presently this cutoff
length is set at 15 entries. Studies should be conducted on each type of machine in order to set this cutoff
length to maximize the speed of this routine. The algorithm used in this utility requires a stack array
for storing the linking information generated during the sort. The maximum size needed for this stack
is twice the base-2 logarithm of the number of entries in the array. Currently, a stack of dimension (2,40)
is hard-coded, which allows for a trillion entries to be sorted.
Design Requirements:
None
Error Conditions:
None
Method:
The UTSRTR routine uses a QUICKSORT algorithm outlined in "The Art Of Computer Programming,
Volume 3 / Sorting And Searching" by D.E. Knuth, Page 116. Several improvements have been made
over the pure quicksort algorithm. The first is a random selection of the key value around which the
array is sorted. This feature allows this routine to handle partially sorted information more rapidly than
the pure quicksort algorithm. The second improvement in this routine is that a cutoff array length is
used to direct further array sorting to an insert sort algorithm (Ibid. Page 81). This method has proven
to be more rapid than allowing small arrays to be sorted by the quicksort algorithm. Presently this cutoff
length is set at 15 entries. Studies should be conducted on each type of machine in order to set this cutoff
length to maximize the speed of this routine. The algorithm used in this utility requires a stack array
for storing the linking information generated during the sort. The maximum size needed for this stack
is twice the base-2 logarithm of the number of entries in the array. Currently, a stack of dimension (2,40)
is hard-coded, which allows for a trillion entries to be sorted.
Design Requirements:
None
Error Conditions:
None
Method:
For UTSTOD, TOTLEN entries of array RZ are copied to DZ and converted to double-precision.
Similarly, for UTDTOS, the entries of DZ are copied to RZ.
Design Requirements:
None
Error Conditions:
None
Method:
NWORDS of array ARRAY are initialized with the value VALUE. Note that VALUE and ARRAY must be
double-precision.
Design Requirements:
None
Error Conditions:
None
Method:
NWORDS of array ARRAY are initialized with the value VALUE. Both ARRAY and VALUE must be
single-precision.
Design Requirements:
None
Error Conditions:
None
Chapter 7.
LARGE MATRIX UTILITY MODULES
Finite element structural analysis, which forms the core of the ASTROS system, requires a suite of
utilities for matrix operations which are able to efficiently handle very large, often sparse, matrices. This
section is devoted to the documentation of the large matrix utilities in ASTROS. The designation large
comes from the assumption made by each of these utilities that the relevant matrices are stored on the
CADDB database and will be operated on in a fashion that allows them to be of arbitrary order. Other
matrix operations are available in the general utility library documented in Section 6 for small matrices
stored in memory. The suite of large matrix utilities in ASTROS includes partition/merge operations,
decomposition and forward/backward substitutions, multiply/add and pure addition operations, transpose
operations and real and complex eigenvalue extraction.
The following subsections document the interface to the large matrix utilities in two formats: using the
executive system (MAPOL) and using the FORTRAN calling sequence. In some cases, the MAPOL
language supports the particular matrix operation directly. In such cases, the user need not make a call
to the particular utility; instead, the MAPOL compiler automatically generates the correct call to the
appropriate utility. These direct interfaces are so indicated in the documentation.
Note that the calling sequence for CDCOMP is through the MAPOL DECOMP module. The method is
automatically selected if the input matrix is complex.
Application Calling Sequence:
CALL CDCOMP (A, L, U, IKOR, RKOR, DKOR)
[A] The matrix to be decomposed (Input, Character)
[L] The lower triangular factor (Output, Character)
[U] The upper triangular factor (Output, Character)
IKOR,RKOR,DKOR The dynamic memory base addresses (Integer, Real, and Double; Input)
Method:
The CDCOMP module decomposes complex matrices. The resultant lower, [L], and upper, [U], triangular
factors are specially structured matrix entities having control information in the diagonal terms. They
may only be reliably used by the back-substitution module GFBS.
Design Requirements:
1. The back-substitution phase of equation solving is performed with module GFBS.
2. The triangular factors [L] and [U] may not be used reliably by matrix utilities other than GFBS.
Error Conditions:
None
method that appears in the relation will control the extraction. The Inverse Power Method or the Upper
Hessenberg Method, as selected by the EIGC data, is used to solve the eigenvalue problem (subroutines
CINVPR or HESS1). If there is insufficient core for the Upper Hessenberg Method, the Inverse Power
Method will be used if the necessary data exist on EIGC.
Design Requirements:
1. The matrices [KDD], [BDD] and [MDD] must be complex; the matrices [BDD] and [CPHIDL] are
not required.
Error Conditions:
1. EIGC is not in the Bulk Data packet.
2. [KDD] and/or [MDD] do not exist.
3. [KDD] and [MDD] are not compatible.
4. [MDD] is singular in HESS method.
[A] ← [A11 A12]
MAPOL Calling Sequence:
CALL COLMERGE ([A], [A11], [A12], [CP]);
Method:
The partitioning vector [CP] must be a column vector containing zero and nonzero terms. The [A11]
partition will be placed in [A] at positions where [CP] is zero, and the [A12] partition is placed in [A]
at positions where [CP] is nonzero. If either of the partitions [A11] or [A12] is null, it may be omitted
from the MAPOL calling sequence or a BLANK may be used in the application calling sequence.
The COLPART large matrix utility module performs the inverse of this module.
Design Requirements:
None
Error Conditions:
None
[A] → [A11 A12]
MAPOL Calling Sequence:
CALL COLPART ([A], [A11], [A12], [CP]);
Method:
The partitioning vector [CP] must be a column vector containing zero and nonzero terms. The [A11]
partition will then contain those columns of [A] corresponding to a zero value in [CP], and the [A12]
partition will contain those columns corresponding to a nonzero value in [CP]. If either partition is not
desired, it may be omitted from the MAPOL calling sequence or a BLANK may be used in the application calling sequence.
Design Requirements:
None
Error Conditions:
None
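The zero/nonzero partitioning convention shared by COLPART and COLMERGE can be illustrated in Python (columns are represented as plain lists; the streamed database entities of the real utilities are not modeled):

```python
def col_partition(a_cols, cp):
    """Split the columns of [A] by the zero/nonzero pattern of the
    partitioning vector [CP]: zeros select [A11], nonzeros select [A12]."""
    a11 = [col for col, flag in zip(a_cols, cp) if flag == 0]
    a12 = [col for col, flag in zip(a_cols, cp) if flag != 0]
    return a11, a12

def col_merge(a11, a12, cp):
    """Inverse operation (COLMERGE): interleave the partitions back into [A]
    following the same zero/nonzero pattern."""
    it11, it12 = iter(a11), iter(a12)
    return [next(it11) if flag == 0 else next(it12) for flag in cp]
```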
Method:
The DECOMP module can decompose both real and complex matrices. The resultant lower, [L], and upper,
[U], triangular factors are specially structured matrix entities having control information in the
diagonal terms. They may only be reliably used by the back-substitution module GFBS.
Design Requirements:
1. DECOMP can process both real and complex machine-precision matrices.
2. The back-substitution phase of equation solving is performed with module GFBS.
3. The triangular factors [L] and [U] may not be used reliably by matrix utilities other than GFBS.
Error Conditions:
None
Method:
Given a real symmetric system of equations
[K][X] = ±[P]
the SDCOMP large matrix utility is used to compute
[K] = [L][D][L]T
such that [D] is a diagonal matrix. This module then completes the solution for [X] as
[L][Y] = ±[P]
[L]T[X] = [D]-1 [Y]
If [RHS] is blank, the inverse of the decomposed matrix will be returned in [ANS].
Design Requirements:
None
Error Conditions:
None
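The two substitution passes shown above can be sketched in Python for an in-memory unit lower triangular factor (the SDCOMP factors are actually specially structured database entities; this is an illustration of the algebra only):

```python
def ldlt_solve(L, d, p):
    """Forward/backward substitution for [K] = [L][D][L]^T, the steps shown
    above: solve [L]{y} = {p}, then [L]^T{x} = [D]^-1 {y}. L is unit lower
    triangular; d holds the diagonal of [D]."""
    n = len(d)
    y = p[:]
    for i in range(n):                      # forward pass: L y = p
        for j in range(i):
            y[i] -= L[i][j] * y[j]
    x = [y[i] / d[i] for i in range(n)]     # scale by D^-1
    for i in range(n - 1, -1, -1):          # backward pass: L^T x = D^-1 y
        for j in range(i + 1, n):
            x[i] -= L[j][i] * x[j]
    return x
```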
Method:
Given a general, real, or complex system of equations
[K][X] = ±[P]
the DECOMP large matrix utility is used to compute
[K] = [L][U]
This module then completes the solution for [X] as:
[L][Y] = ±[P]
[U][X] = [Y]
If [RHS] is blank, the inverse of the decomposed matrix will be returned in [ANS].
Design Requirements:
None
Error Conditions:
None
Method:
The partitioning vectors [CP] and [RP] must be column vectors containing zero and nonzero terms.
The [A11] partition will be placed in [A] at positions where both [RP] and [CP] are zero. The [A12]
partition will be placed in [A] at positions where [RP] is zero and [CP] is nonzero. The other partitions
are treated in a similar manner.
If some of the partitions are null, they may be omitted from the MAPOL calling sequence or a character
blank may be used in the application calling sequence. In a similar manner, if the row and column
partition vectors are the same, one of them may be omitted or left blank in the MAPOL call. They must
both be present in the application call.
If a row or column merge alone is required in the MAPOL sequence, the special purpose MAPOL utilities
ROWMERGE and COLMERGE may be used.
Design Requirements:
None
Error Conditions:
None
[D] = ±[A]^T[B] ± [C] or [D] = ±[A]^T[B]
MAPOL Calling Sequence:
None; the MAPOL syntax supports algebraic matrix operations directly:
[D] := ±TRANS([A])*[B]±[C];
Method:
If no [C] matrix exists, the C argument should be blank and the SIGNC argument should be zero.
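The operation can be sketched as follows. This is a dense Python illustration; the `mpyad` name and the `C=None` convention standing in for a blank C argument are assumptions for the sketch only.

```python
# D = SIGNAB * A^T B + SIGNC * C; passing C = None corresponds to a
# blank C argument with SIGNC = 0 in the application call.
def mpyad(A, B, C=None, signab=1, signc=0):
    n, m = len(A[0]), len(B[0])
    return [[signab * sum(A[k][i] * B[k][j] for k in range(len(A)))
             + (signc * C[i][j] if C is not None else 0)
             for j in range(m)] for i in range(n)]
```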
Design Requirements:
None
Error Conditions:
None
[C] := (α)[A]±(β)[B]
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
The partitioning vectors [CP] and [RP] must be column vectors containing zero and nonzero terms.
The [A11] partition will be formed from [A] at positions where both [RP] and [CP] are zero. The
[A12] partition will be formed from [A] at positions where [RP] is zero and [CP] is nonzero. The other
partitions are treated in a similar manner.
If some of the partitions are not desired as output, they may be omitted from the MAPOL calling
sequence or a character blank may be used in the application calling sequence. In a similar manner, if
the row and column partition vectors are the same, one of them may be omitted or left blank in the
MAPOL call. They must both be present in the application call.
If a simple row or column partition is required in the MAPOL sequence, the special purpose MAPOL
utilities ROWPART and COLPART may be used.
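The 2 x 2 partitioning rule can be sketched as follows, a dense Python illustration of the zero/nonzero selection; the real module operates on packed entities.

```python
# 2 x 2 partition of A: zeros in RP/CP select the first row/column
# partition, nonzeros select the second.
def partn(A, RP, CP):
    r0 = [i for i, p in enumerate(RP) if p == 0]
    r1 = [i for i, p in enumerate(RP) if p != 0]
    c0 = [j for j, p in enumerate(CP) if p == 0]
    c1 = [j for j, p in enumerate(CP) if p != 0]
    sub = lambda rows, cols: [[A[i][j] for j in cols] for i in rows]
    return sub(r0, c0), sub(r0, c1), sub(r1, c0), sub(r1, c1)
```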
Design Requirements:
None
Error Conditions:
None
Method:
The matrices [K] and [M] must be real and the rigid body mass matrix [MR] and the rigid body
transformation matrix [DM] are not required. The REIG module must query the CASE relational entity
to determine which set of EIGR eigenvalue extraction data to use. Because of the multidisciplinary
nature of the code, REIG assumes that, if called, an eigenanalysis is required. It uses the EIGR data that
correspond to the selection for the current boundary condition, BCID.
Design Requirements:
None
Error Conditions:
None
Method:
The partitioning vector [RP] must be a column vector containing zero and nonzero terms. The [A11]
partition will be placed in [A] at positions where [RP] is zero. If either of the partitions [A11] or [A12]
is null, it may be omitted from the MAPOL calling sequence or a BLANK may be used in the application
calling sequence.
The ROWPART large matrix utility module performs the inverse of this module.
Design Requirements:
None
Error Conditions:
None
Method:
The partitioning vector [RP] must be a column vector containing zero and nonzero terms. The [A11]
partition will then contain those columns of [A] corresponding to a zero value in [RP]. If either partition
is not desired as output, it may be omitted from the MAPOL calling sequence or a BLANK may be used
in the application calling sequence.
Design Requirements:
None
Error Conditions:
None
[A] → [L][D][L]^T
where [L] is a lower triangular factor and [D] is a diagonal matrix whose terms are stored on the
diagonal of [L].
MAPOL Calling Sequence:
CALL SDCOMP ([A], [L], USET(BC), SETNAM);
Method:
The SDCOMP module can decompose real and complex symmetric matrices. The resultant lower factor,
[L], is a specially structured matrix entity having the terms of [D] on the diagonals. It may, therefore,
only be reliably used by the back-substitution module, FBS.
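The factorization can be sketched as follows, a dense Python illustration returning [L] and [D] separately; the real module stores the terms of [D] on the diagonal of the packed [L] entity.

```python
# Symmetric LDL^T factorization with unit lower-triangular L and
# diagonal D. No pivoting; the sketch assumes nonzero pivots.
def sdcomp(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k]
                                     for k in range(j))) / D[j]
    return L, D
```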
Design Requirements:
None
Error Conditions:
1. Matrix A is singular.
[A] → [A]^T
MAPOL Calling Sequence:
CALL TRNSPOSE ([A], [ATRANS]);
Method:
The output matrix entity, [ATRANS], must already exist on the database. It will be flushed and loaded
by the transpose utility. All matrix types and precisions are supported. As a special feature, the
user-controlled 11th through 20th words of the INFO array for the input matrix are copied onto the
transposed matrix.
Design Requirements:
1. The spill logic for the utility has a limit of eight scratch files to perform the transpose. If the transpose
cannot be performed in eight passes using the available memory, the utility will terminate.
Error Conditions:
None
Chapter 8.
THE CADDB APPLICATION INTERFACE
The Computer Automated Design Database (CADDB) is the heart of the ASTROS software system. It has
been designed to provide the structures and access features typically required for scientific software
applications development. CADDB can be viewed as a set of data entities that are accessible by a suite of
utility routines called the application interface as shown below:
[Figure: database entities accessed through the APPLICATION INTERFACE layer, which in turn uses the SYSTEM I/O layer.]
There are three types of entities: Unstructured, Relational, and Matrix. These are described in the
following sections.
Unstructured Entities.
Unstructured entities form the least organized data type that may be used. An unstructured entity may
be considered as a set of variable length records which have no predetermined structure and which may
or may not have any relationship with each other. This is illustrated by the following:
[Figure: a database containing unstructured entities ENT1, ENT2, and ENT3, each a sequence of variable length records.]
Unstructured entities are typically used when "scratch" space is needed in an essentially sequential
manner. Two important points, however, are that each record may be accessed randomly if the entity is
created with an index structure, and that records may be read or written either in their entirety or only
partially. Details of these features are discussed in Section 8.6.
Relational Entities.
Relational entities are completely structured tables of data. The rows of the table are called entries or
tuples and the columns are called attributes, as shown below:
[Figure: a database containing entities ENT1, ENT2, and ENT3, with one relation shown in tabular form. The rows are the entries and the columns are the attributes:]

    GID      X        Y        Z
    101     0.0      0.0      0.0
    102     1.0      0.0      0.0
    103     1.0      1.0      0.0
    104     0.0      1.0      0.0
The definition of the attributes and their types is called the schema of the relation. Because the schema
is an inherent part of the relational data structure, each attribute may be referred to by its name. In
addition, because each of the attributes is independent of the others, it is possible to retrieve or modify
only selected attributes by performing a projection of the relation. Attributes may also be defined with
keys. If an attribute is keyed, an index structure is built that allows rapid direct access to a given entry.
There is a restriction, however, that a keyed attribute must have unique values for all entries.
Another powerful feature is the ability to retrieve entries that have been qualified by one or more
conditions. A condition is a constraint definition for an attribute value. For instance, in the example
above, the condition of X=1.0 might be specified prior to data retrieval. Only those entries that satisfy the
given constraint or constraints are then returned.
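Condition and projection can be sketched together. This Python illustration holds the example relation as a list of entries; the CADDB access routines provide the same selection on packed relational entities.

```python
# A relation as a list of entries; a condition qualifies entries and
# a projection keeps only selected attributes of each qualified entry.
grid = [
    {"GID": 101, "X": 0.0, "Y": 0.0, "Z": 0.0},
    {"GID": 102, "X": 1.0, "Y": 0.0, "Z": 0.0},
    {"GID": 103, "X": 1.0, "Y": 1.0, "Z": 0.0},
    {"GID": 104, "X": 0.0, "Y": 1.0, "Z": 0.0},
]
qualified = [e for e in grid if e["X"] == 1.0]                   # condition X = 1.0
projected = [{"GID": e["GID"], "Y": e["Y"]} for e in qualified]  # projection
```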
Relational entities are used when the data they contain will be accessed or modified on a selective basis.
This eliminates the need to move large sequential sets of data back-and-forth when modifying or retriev-
ing only small amounts of data. An additional feature available with CADDB is the "blast" access of a
relation. This allows the data to be treated sequentially while maintaining the relational form. These and
other features are fully described in Section 8.5.
Matrix Entities.
One of the most important data structures encountered in engineering applications is the matrix. Matrix
algebra forms the basis for the finite element method employed by ASTROS. The efficient performance of
this algebra, along with additional operations such as simultaneous equation solvers, eigensolvers and
integration schemes, is critical to such a software system. CADDB represents matrices in packed format.
This format has been used extensively by the NASTRAN system for the last 30 years with excellent
success. The representation of a matrix on CADDB is shown below:
[Figure: a database containing matrix entities ENT1, ENT2, and ENT3, each stored as a set of non-null columns composed of strings.]
Referring to the figure, note that only the non-null columns of a matrix are stored, thus reducing disk
space utilization. Within each column are one or more strings. A string is a sequential burst of data
entities with a header that indicates the first row position of the data in the given column and "n", the
number of terms in the string. This representation allows a further data compression in that zero terms
in the column are not physically stored.
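The string representation can be sketched as follows. This Python illustration shows the compression idea only; the function names are hypothetical and the physical CADDB layout differs in detail.

```python
# Pack a dense column into strings: each string records its first row
# and a run of consecutive nonzero terms; zero terms are not stored.
def pack_column(col):
    strings, run, first = [], [], None
    for row, v in enumerate(col):
        if v != 0.0:
            if first is None:
                first = row
            run.append(v)
        elif run:
            strings.append((first, run))
            run, first = [], None
    if run:
        strings.append((first, run))
    return strings

def unpack_column(strings, n):
    col = [0.0] * n
    for first, terms in strings:
        col[first:first + len(terms)] = terms
    return col
```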
A complete library of matrix utilities is available within the ASTROS system. These utilities are coded to
use the packed format to its best advantage. All matrix data should be stored in this manner. Many
access methods are available for matrix entities. A matrix may be positioned randomly to a given column,
an entire column may be read or written, individual terms may be read or written and so on. These
functions are described in Subsection 8.4.
The ASTROS internal database is the proprietary eBASE database developed by UAI. In addition to the
CADDB interface described in this Chapter, you may directly use additional eBASE utilities. To do this,
you must license the eBASE software separately. While this is generally not necessary, extremely ad-
vanced users may find that it provides additional power and flexibility for large scale development within
the ASTROS environment. Contact the sales department of UAI for more information.
The performance goals of the database had to address both I/O and CPU issues. The optimization of I/O
performance is usually in direct conflict with minimal memory utilization. When faced with an I/O versus
memory conflict, reduced I/O was generally selected. Summarized in the following are typical design
decisions impacted by this issue:
1. All bit maps required by the database are kept in memory to reduce I/O requirements of free
block management.
2. Directory pointers for all entities, open or closed, are kept in memory to reduce directory search
time.
3. While any type of entity is open, all schema definition data are kept in memory.
CADDB was designed in a top-down, structured manner. It is divided into functional modules that simplify
implementation, testing, and maintenance. Generically, the functions of these modules are:
1. ENTITY CODE: Separate groups of routines are provided for each of the three entity types.
2. RELATIVE BLOCK: These routines process the block allocation tables to convert relative block
numbers used by entity routines into physical blocks.
3. BUFFER MANAGEMENT: These routines manage the memory resident I/O buffers.
4. FREE BLOCK MANAGEMENT: Performs the allocating and freeing of physical blocks.
5. INDEX PROCESSING: All index processing is done by these routines. Two sets of routines,
one for sequential indices and one for binary indices, are provided.
This highly modular design provides several advantages. The most important is that new features can be
added with a minimal effect on the existing code. For example, a double buffering scheme could be added
to reduce I/O wait time by simply modifying the buffer management routines.
Each physical database is comprised of a set of disk files. An index file and at least one data file are
required for each database. The index file contains the necessary control information to find entities on
the database. This information includes the following:
2. FBBM: The Free Block Bit Maps (FBBM) are used to keep track of the blocks which are allocated
and free.
3. BAT: The Block Allocation Table (BAT) is used to keep track of the physical blocks used by each
entity.
4. SCHEMA: The SCHEMA defines the attribute structure for each relational entity.
5. INDEX: Each matrix or unstructured entity can have optional indices built to allow quick access
to any column or record. Relational entities can also have indices built for any attribute.
The data files are used to store the actual information in each entity. Multiple data files can be used to
split the database over several physical disk drives. Free block allocation is performed in a cyclic fashion
among the data files to balance the I/O load on the system.
The design of a new ASTROS database was required to address deficiencies in existing available codes.
The GINO I/O system of NASTRAN, while efficient, is a file management system, not a database.
Separate files are required for each entity and only matrix and unstructured entities are supported. The
RIM database, developed by the IPAD program for NASA, supports the relational entity type but does
not adequately support either unstructured or matrix types. Additionally, the RIM system suffers from
severe restrictions and performance penalties. The following summarize the functional improvements
that make CADDB superior to these existing systems:
1. The three entity types have been combined into one database in as consistent a manner as
possible.
2. The dynamic memory manager (See Subsection 8.3) allows the database to be open-ended
without overburdening an application code which also makes large demands on memory.
3. Multiple databases and as many entities as memory allows may be processed simultaneously.
4. Multiple jobs can have READONLY access to the same database. With CADDB, a system
database, as described in Chapter 3.2, is provided. This database contains data required by each
ASTROS job.
5. An improvement over GINO allows existing records or columns of unstructured and matrix
entities to be rewritten without destroying any other data in the entity.
6. "Garbage-collection" of freed blocks is handled automatically by the database. The dump and
restore requirement of some databases, such as RIM, is eliminated.
7. The concept of projections has been added to all relational entity access calls. This allows
application codes to process only those attributes needed for each entry of the relation. This
allows a new attribute to be added to a relation without impacting previously coded modules
not requiring the new attribute.
As discussed in the introduction to this section, trade-offs in design between memory and I/O
performance were generally made in favor of I/O. In this subsection, the general memory requirements of
CADDB are summarized. The equations below use the following symbols:
Using an index file block size of 256 words and a data file block size of 2,048 words, these relations
indicate that, for a typical engineering module, the memory requirement would be approximately 4,000
words greater than that required by the NASTRAN GINO system.
General Utilities are those which apply to any entity type. Two additional general data utilities are
DBINIT and DBTERM. These are system level modules and are presented in Chapter 4.
To create a new database entity, the routine DBCREA is used. This utility enters the new entity name and
its type into the database directory. Although there are three entity classes, there are two options for
both matrix and unstructured entities, indexed or unindexed. Typical calls to create the three entities
pictured in Subsection 8.1 could be:
The ASTROS executive system automatically creates all database entities that are declared in the
MAPOL program. An application programmer usually creates only scratch entities within a given mod-
ule.
Accessing Entities.
Prior to adding new data, modifying existing data, or accessing old data for an entity, the entity must be
opened, and when I/O is completed it must be closed. This is done to allow optimal use of memory
resources as discussed in Subsection 8.1. Using the same examples as before, I/O is initiated by the calls:
The array INFO is very important. It contains 20 words that provide information about the data contents
of the entity, such as the number of attributes and entries in a relation, the number of records in an
unstructured entity and the number of columns in a matrix. The first 10 words of INFO are used by the
database. The programmer may use the second 10 words for any purpose desired. The INFO array is then
updated when the entity is closed. As an option, access to an entity may request that the data contents of
the entity be destroyed, or FLUSHed, when opening it.
When all activity is completed for a given entity, it must be closed to free memory used for I/O. This is
done with a call such as:
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. ENTNAM may not be open.
Error Conditions:
None
Method:
DBNEMP is a LOGICAL FUNCTION that returns TRUE if and only if the named ENTNAM exists and
contains entries if relational, columns if matrix, or records if unstructured. Any other condition returns
FALSE.
Design Requirements:
1. ENTNAM may not be open.
Error Conditions:
None
Entity types:
    1  REL   relational
    2  MAT   matrix
    3  IMAT  indexed matrix
    4  UN    unstructured
    5  IUN   indexed unstructured

Matrix forms:
    1  rectangular
    2  symmetric
    3  diagonal
    4  identity
    5  square

Matrix precisions:
    1  real, single-precision
    2  real, double-precision
    3  complex, single-precision
    4  complex, double-precision
Method:
None
Design Requirements:
1. The entity may be open.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
[Figure: layout of open core memory. The executable code, data, and local arrays are followed by the dynamically managed region: allocated blocks (BLOCK 1 through BLOCK n in GROUP 1, BLOCK n+1 through BLOCK m in GROUP 2) and then free memory.]
This dynamic memory area can be viewed as a type of virtual memory whose paging is under the control of
the programmer. Static memory languages, such as FORTRAN, are very inefficient users of memory. The
DMM can eliminate some of this inefficiency. As an example, consider a routine to perform the matrix
addition
[C] = [A] + [B]
Three possible implementations are shown, all based on the assumption that the available memory, after
all other components of the program are loaded, is 30,000 words.
The classical brute-force FORTRAN solution to this problem is to observe that three arrays, each
dimensioned 100 by 100, will fit perfectly in the available memory. The routine is duly coded as:

      DIMENSION A(100,100), B(100,100), C(100,100)
      DO 100 J = 1, 100
      DO 100 I = 1, 100
      C(I,J) = A(I,J) + B(I,J)
  100 CONTINUE
With this algorithm, the matrix sizes are fixed at 100 by 100. If the matrices are only 3 by 3, more than
99 percent of the memory is wasted. Further, although a 20 by 500 matrix would occupy the same 10,000
words, it cannot fit into the predefined arrays. This latter problem can easily be fixed by storing the
matrix in a singly dimensioned array of 10,000 words, which already implies that the programmer must
manage the array.
By using the dynamic memory manager both problems shown in the last section disappear. Consider the
code segment:
      COMMON /MEMORY/ Z(1)
C
C ALLOCATE MEMORY FOR EACH MATRIX
C
      CALL MMBASE ( Z )
      CALL MMGETB ( 'AMAT', 'RSP', N*M, 'MAXT', IA, ISTAT )
      CALL MMGETB ( 'BMAT', 'RSP', N*M, 'MAXT', IB, ISTAT )
      CALL MMGETB ( 'CMAT', 'RSP', N*M, 'MAXT', IC, ISTAT )
      DO 100 I = 1, N*M
      II = I - 1
      Z ( IC + II ) = Z ( IA + II ) + Z ( IB + II )
  100 CONTINUE
This code allows all 30,000 words of memory to be used regardless of the shape of the matrices. Addition-
ally, it uses exactly the memory required if the operation is smaller than the available memory.
Spill-logic can be implemented in a number of ways using the matrix utilities described in Subsection 8.4.
When spill-logic is used, only those portions of the matrices involved in an operation are brought into
memory. Operations are then performed and intermediate or final results stored on the database. With
this coding technique, problems of virtually unlimited size may be addressed. There are nine DMM
utilities that may be used by an application programmer. Each routine is prefixed with the letters MM. A
summary of these routines is shown below.
SUBROUTINE  FUNCTION
MMBASE      Used by each module to define the location of the memory base address
MMBASC      Used to define the base address for memory containing character data
Method:
None
Design Requirements:
1. Only one call to MMBASC may be made in a module.
Error Conditions:
None
Method:
None
Design Requirements:
1. This routine must be the first called in each module that uses the memory manager.
2. It cannot be used for memory containing character data (see MMBASC).
Error Conditions:
None
Method:
None
Design Requirements:
1. The utility assumes that MMINIT has been called to initialize the open core memory block.
2. In some cases of corrupted memory, the use of this routine may result in an infinite loop.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
1. Attempt to reduce memory block to zero length.
Method:
None
Design Requirements:
1. This routine should not be used within an application routine; it is an executive memory management
function.
Error Conditions:
None
After a matrix entity has been created it must be initialized before it can be used. The MXINIT call
provides information required for the storing of data into the matrix. For example, to create and initialize
a matrix entity for a real, single-precision, symmetric matrix with 1,000 rows the following code is
required:
Whenever a matrix is flushed, with either a DBOPEN or a DBFLSH call, the initialization data are cleared.
Therefore, an MXINIT call is required before reusing the matrix in this case. Similarly, if a matrix entity
is going to be redefined, it must be flushed before a new MXINIT call may be made.
The simplest method to process a matrix is with the full column routines MXPAK and MXUNP. Each of
these routines may process either a full column or a portion of a column. In either case, only one call is
allowed for each column. The subsequent call will process the next column. The following code illustrates
the packing and unpacking of matrix by columns:
C
C PACK MATRIX BY COLUMNS
C
      DO 100 ICOL = 1, NCOL
      CALL MXPAK ('KGG', COLDTA(1,ICOL), 1, 1000)
  100 CONTINUE
C
C UNPACK MATRIX BY COLUMNS
C
      CALL MXPOS ('KGG', 1)
      DO 200 ICOL = 1, NCOL
      CALL MXUNP ('KGG', DATA(1,ICOL), 1, 1000)
  200 CONTINUE
The MXPAK routine removes any zero terms in the column to reduce the amount of disk space required to
store the matrix. Consecutive nonzero terms are stored in strings. Whenever a zero term is encountered,
the current string is terminated and a new string is started. The MXSTAT routine may be used to obtain
statistics about each column. The following code gives an example of how this information can be used to
unpack only those terms between the first and last nonzero terms:

C
C UNPACK BETWEEN FIRST AND LAST NONZERO TERMS
C
      CALL MXSTAT ('KGG', COLID, FNZ, LNZ, NZT, DEN, NSTR)
      CALL MXUNP ('KGG', DATA, FNZ, LNZ-FNZ+1)
A matrix can also be processed by individual terms. To pack a matrix termwise requires a series of calls
for each column. The first call must be a column initialization call, followed by a series of calls to pack
single terms and, finally, a column termination call. The following code shows the packing of an individ-
ual matrix column by terms:
C
C INITIALIZE TERM-WISE PACKING
C
      CALL MXPKTI ('KGG', IKGG)
C
C READ MATRIX TERM AND PACK
C
  100 READ (5, *, END=200) IROW, VAL
      CALL MXPKT (IKGG, VAL, IROW)
      GO TO 100
C
C TERMINATE COLUMN
C
  200 CALL MXPKTF ('KGG')
Note that the termwise packing must be done in ascending row order.
A similar set of calls is required to unpack a matrix by terms. The MXSTAT routine is used to determine
the number of nonzero terms that exist in the column. The following code will unpack and print the
nonzero terms for a matrix column.
It is not required that each term in the column be unpacked. If any terms are left, the MXUPTF routine
will ignore them and position the matrix to the next column.
As explained earlier, matrix data are actually stored in strings of terms with intervening zero terms
compressed. A series of routines is provided to allow matrices to be accessed by strings. The use of these
routines is similar to the termwise routines in that there is a column initialization call, a call for each
string, and a column termination call. The following code shows the packing of a matrix which contains
two strings, the first with five terms and the second with three terms.
C
C INITIALIZE FOR STRING PACKING
C
      CALL MXPKTI ('KGG', IKGG)
C
C PACK OUT TWO STRINGS
C
      CALL MXPKTM (IKGG, STR1, 10, 5)
      CALL MXPKTM (IKGG, STR2, 20, 3)
C
C TERMINATE STRING PACKING
C
      CALL MXPKTF ('KGG')
Packing a column by strings differs in several respects from packing by columns. First, more than one
MXPKTM call is allowed for each column. With MXPAK only one call per column is allowed. Secondly, no
compression of zero terms is done within a string. This feature can be used to insure that certain terms of
a matrix are stored, even if zero, so they can later be rewritten randomly.
The unpacking of a matrix by strings is shown in the following code example. Note the use of the MXSTAT
routine to determine the number of strings stored in the column.
C
C DETERMINE THE NUMBER OF STRINGS
C
      CALL MXSTAT ('KGG', COLID, FNZ, LNZ, NZT, DEN, NSTR)
C
C UNPACK COLUMN BY STRINGS
C
      CALL MXPKTI ('KGG', IKGG)
It is important that MXSTAT be used to determine the number of strings in a column because, under
several conditions, the number may be different from the number of strings packed. First, if the string
packed does not fit in the current buffer, it will be automatically split over two buffers. Secondly, if two
strings are packed consecutively, they will be automatically merged into one string in the buffer.
Several routines are provided to position a matrix randomly to a given column. This operation can be
done on either indexed, IMAT, or unindexed matrices, but the presence of an index speeds up the
processing greatly. The following code shows the use of these routines to randomly read three matrix
columns.
C
C POSITION TO COLUMN 10 AND UNPACK
C
      CALL MXPOS ('KGG', 10)
      CALL MXUNP ('KGG', DATA, 1, 1000)
C
C POSITION FORWARD 5 COLUMNS
C
      CALL MXRPOS ('KGG', +5)
      CALL MXUNP ('KGG', DATA, 1, 1000)
C
C POSITION TO NEXT NONNULL COLUMN
C
      CALL MXNPOS ('KGG', ICOL)
      CALL MXUNP ('KGG', DATA, 1, 1000)
The first MXUNP retrieves the data for column 10 and leaves the matrix positioned at the start of column
11. The MXRPOS call positions the matrix forward five columns to column 16. The second MXUNP call then
retrieves the data for column 16. The results of the MXNPOS call depend on the data stored in the matrix.
If column 17 has nonzero terms, the matrix will be positioned there. If column 17 is null, the matrix will
be positioned forward until a nonnull column is found. Note that both MXPOS and MXRPOS require that the
column to which the matrix is positioned exists. The MXNPOS utility is more general in that it
determines the next column that exists.

Null matrix columns can be packed in two ways. The following code gives examples which are quite
different: the first creates a column with no nonzero terms, while the second example creates a null
column which does not physically exist.
C
C CREATE A COLUMN OF ZEROS WITH MXPAK
C
      CALL MXPAK ('KGG', 0.0, 1, 1)
C
C CREATE NULL COLUMN
C
      CALL MXPKTI ('KGG', IKGG)
      CALL MXPKTF ('KGG')
For extremely sparse matrices it is possible to pack only the columns which contain data. This feature
can greatly reduce disk space requirements for these matrices because space is not wasted for column
headers and trailers. It also simplifies coding because it is not required to pack null columns. The
following example shows the packing of an extremely sparse matrix which contains only two items.
C
C PACK DIAGONAL TERM IN COLUMN 100
C
      CALL MXPOS ('KGG', 100)
      CALL MXPAK ('KGG', 1.0, 100, 1)
C
C PACK DIAGONAL TERM IN COLUMN 500
C
      CALL MXPOS ('KGG', 500)
      CALL MXPAK ('KGG', 1.0, 500, 1)
When a matrix does not have all its columns stored, care must be used when unpacking it. Since the
routines operate only on columns physically stored in the matrix, only two sets of calls are required to
unpack the matrix.
The following code example shows one method of unpacking this matrix.
C
C UNPACK TWO MATRIX COLUMNS
C
      DO 100 ICOL = 1, 2
      CALL MXSTAT ('KGG', COLID, FNZ, LNZ, NZT, DEN, NSTR)
      WRITE (6,*) 'DATA FOR COLUMN', COLID
      CALL MXUNP ('KGG', DATA, 1, 1000)
  100 CONTINUE
This example illustrates a disadvantage: the code must know the exact number of columns stored in the
matrix, and there is no method provided to determine this. The next example shows how MXNPOS can be used
to produce a code sequence that will work no matter how many physical columns are stored in the matrix.
C
C POSITION TO NEXT COLUMN
C
  100 CALL MXNPOS ('KGG', ICOL)
      IF ( ICOL .GT. 0 ) THEN
        WRITE (6,*) 'DATA FOR COL', ICOL
        CALL MXUNP ('KGG', DATA, 1, 1000)
        GO TO 100
      ENDIF
The MXPOS and MXRPOS utilities should be used with extreme caution if the matrix does not contain all
physical columns. These routines work on actual column numbers and will cause fatal errors if the
column does not exist. For example, an MXPOS to column 200 will cause an error because the column is
not stored in the matrix. If the matrix is positioned at column 100, an MXRPOS of +100 will also fail
because column 200 is not stored in the matrix.
Once a matrix has been packed, it is possible to rewrite certain columns of the matrix without disturbing
the data stored in any other columns. The only restriction is that the topology of the matrix terms cannot
change. For example, if MXPAK was used to pack the column, all zero terms are compressed. Since these
terms are not physically stored in the matrix, they cannot at a later time be replaced by a nonzero term.
This can be avoided by using the termwise or stringwise calls, which perform no zero compression. The
following example shows the packing of a matrix column and the subsequent repacking of it.
C
C PACK COLUMN 1 OF MATRIX
C
      CALL MXPOS ('KGG', 1)
      CALL MXPKTI ('KGG', IKGG)
      CALL MXPKTM (IKGG, STR, 10, 10)
      CALL MXPKTF ('KGG')
C
C READ COLUMN 1 OF MATRIX
C
      CALL MXPOS ('KGG', 1)
      CALL MXPKTI ('KGG', IKGG)
      CALL MXPKTM (IKGG, DATA, IROW, NROW)
      CALL MXPKTF ('KGG')
C
C DOUBLE EACH NUMBER IN THE STRING
C
      DO 100 I = 1, NROW
      DATA(I) = DATA(I) * 2.0
  100 CONTINUE
C
C REPLACE THE STRING
C
      CALL MXPOS ('KGG', 1)
      CALL MXPKTI ('KGG', IKGG)
      CALL MXPKTM (IKGG, DATA, IROW, NROW)
      CALL MXPKTF ('KGG')
All the matrix pack utility calls, i.e., column, term and string, may be used to repack matrix columns.
Method:
None
Design Requirements:
1. If there are no more nonnull columns in the matrix, ICOL is set to zero
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. A matrix may be packed by columns using MXPAK, by terms, or by partial columns, but not by any
combination.
Error Conditions:
None
Method:
None
Design Requirements:
1. ROW1 and NROW must be positive.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. A positive DELCOL positions forward from the current column; a negative DELCOL positions backward.
Error Conditions:
None
Method:
None
Design Requirements:
1. Note that for very large matrices, DEN is a single-precision number and may be numerically zero
even if there are nonzero terms in the matrix.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. A matrix may be unpacked by columns using MXUNP, or by terms or partial columns using MXUPT
and MXUPTM, but not by any combination.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
This subsection provides specific examples of operating with relations. Particular attention should be
given to the use of double-precision data attributes. Special routines are provided for such attributes when
used by themselves or when "mixed" with other data types.
A relational entity has both a name and a schema. The schema defines the attributes of a relation and
their data types. Therefore, a call to the RESCHM routine is required in addition to a DBCREA call in order
to complete the creation of a relational entity. For example, to create relation GRID (shown in the
introduction to Section 8), the following code is required:
C
C     DEFINE ATTRIBUTE NAMES, TYPES, AND LENGTHS
C
      CHARACTER*8 GATTR(4)
      CHARACTER*8 GTYPE(4)
      INTEGER     GLEN(4)
      DATA GATTR / 'GID', 'X', 'Y', 'Z' /
      DATA GTYPE / 'KINT', 'RSP', 'RSP', 'RSP' /
      DATA GLEN  / 0, 0, 0, 0 /
C
C     CREATE A RELATION AND SCHEMA
C
      CALL DBCREA ('GRID', 'REL')
      CALL RESCHM ('GRID', 4, GATTR, GTYPE, GLEN)
The schema is specified by attribute name and data type. Various data types are available. In the
example, the grid ID, GID, is called a keyed integer (KINT). This causes an index structure to be created
that will allow fast direct access to a given entry. The coordinate values X, Y, and Z are defined as real,
single-precision (RSP). The length parameters (GLEN) are only used for character attributes and for arrays
of integers or real numbers. An array of values would be used if the overall data organization is relational
but some groups of values are only used on an all-or-nothing basis.
Once a relation has been created it may be loaded with data. There are two modes of adding data: one
entry at a time, or a "blast" add wherein the entire relation, or a large part of it, has been accumulated in
memory. For each mode, there are two options, one when none of the attributes are real, double-precision,
and a second if one or more attributes are real, double-precision. Using the relation GRID, the example
below indicates how it could be loaded on an entry-by-entry basis.
C
C     ALLOCATE BUFFER AREA FOR ENTRIES AND INFO
C
      INTEGER IBUF(4), INFO(20)
C
C     USE EQUIVALENCES TO HANDLE REAL DATA
C
      EQUIVALENCE (IGID, IBUF(1)), (X, IBUF(2))
      EQUIVALENCE (Y, IBUF(3)), (Z, IBUF(4))
C
C     DEFINE THE PROJECTION AS THE FULL RELATION
C
      CHARACTER*8 PATTR(4)
      DATA PATTR / 'GID', 'X', 'Y', 'Z' /
C
C     OPEN THE ENTITY FOR I/O
C
      CALL DBOPEN ('GRID', INFO, 'R/W', 'FLUSH', ISTAT)
      CALL REPROJ ('GRID', 4, PATTR)
C
C     READ AN ENTRY FROM INPUT, ADD TO RELATION
C
      DO 100 I = 1, IEND
         READ (5, 101) IGID, X, Y, Z
         CALL READD ('GRID', IBUF)
  100 CONTINUE
C
C     I/O COMPLETE, CLOSE THE ENTITY
C
      CALL DBCLOS ('GRID')
Space must first be allocated to contain an entire entry of the relation. Because this buffer is declared
with a single data type, equivalences must be used when the attributes are of mixed data types. The
projection of the relation must be defined (via REPROJ) even if all attributes are being selected. If the
data had been stored in memory first, the REAB routine could have been used to "blast" all of the entries
into the relation with a single call.
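As a minimal sketch of the "blast" mode, assuming REAB takes the relation name, the number of
entries, and the in-memory buffer (this argument list is an assumption, not a documented signature):

```fortran
C
C     SKETCH ONLY -- THE REAB ARGUMENT LIST SHOWN HERE IS AN
C     ASSUMPTION, NOT A DOCUMENTED SIGNATURE
C
      INTEGER IBUF(4,1000), INFO(20)
      CHARACTER*8 PATTR(4)
      DATA PATTR / 'GID', 'X', 'Y', 'Z' /
C     ... FILL IBUF WITH 1000 COMPLETE ENTRIES ...
      CALL DBOPEN ('GRID', INFO, 'R/W', 'FLUSH', ISTAT)
      CALL REPROJ ('GRID', 4, PATTR)
C     ADD ALL 1000 ENTRIES WITH A SINGLE CALL
      CALL REAB ('GRID', 1000, IBUF)
      CALL DBCLOS ('GRID')
```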
A relation is accessed by a set of four routines: REGET, REGETM, REGB, and REGBM. Several other routines
also come into play. The first pair, REPOS and RECPOS, is used to find an entry within a relation whose
key is equal to a specific value. The second is the group of routines RECOND, RESETC, REENDC, and
RECLRC, which allow the specification of more complex "where" clauses that are used to qualify an entry
of the relation.
As an example, suppose that the X, Y, and Z coordinates are to be retrieved for a grid point whose GID is
1. The code segment below could be used to perform this:
C
C     ALLOCATE BUFFER - ALL OUTPUT IS REAL
C
      DIMENSION COORDS(3), INFO(20)
C
C     DEFINE THE PROJECTION
C
      CHARACTER*8 PATTR(3)
      DATA PATTR / 'X', 'Y', 'Z' /
C
C     OPEN THE ENTITY FOR I/O
C
      CALL DBOPEN ('GRID', INFO, 'R/W', 'NOFLUSH', ISTAT)
      CALL REPROJ ('GRID', 3, PATTR)
C
C     POSITION TO THE DESIRED ENTRY
C
      CALL REPOS ('GRID', 'GID', 1)
C
C     GET THE ENTRY
C
      CALL REGET ('GRID', COORDS, ISTAT)
To qualify an entry by more than one attribute, a sequence consisting of a RECOND call, any number of
RESETC calls, and a REENDC call can be used. For instance, to find any or all grid points whose
coordinates are X=1, Y=2, Z=3, the code segment below could be used:
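A minimal sketch of such a condition sequence follows; the argument lists shown for RECOND, RESETC,
and REENDC (attribute name, relational operator, comparison value, and Boolean connective) are
assumptions inferred from the design requirements later in this section, not documented signatures.

```fortran
C
C     SKETCH ONLY -- THE RECOND, RESETC, AND REENDC ARGUMENT
C     LISTS SHOWN HERE ARE ASSUMED, NOT DOCUMENTED SIGNATURES
C
C     BEGIN A NEW CONDITION (CLEARS ANY EXISTING CONDITIONS)
      CALL RECOND ('GRID')
C     ADD THE THREE CONDITIONS, JOINED WITH AND
      CALL RESETC ('GRID', 'X', 'EQ', 1.0, 'AND')
      CALL RESETC ('GRID', 'Y', 'EQ', 2.0, 'AND')
      CALL RESETC ('GRID', 'Z', 'EQ', 3.0, ' ')
C     CLOSE THE CONDITION SPECIFICATION
      CALL REENDC ('GRID')
C
C     RETRIEVE ALL QUALIFYING ENTRIES
   10 CALL REGET ('GRID', COORDS, ISTAT)
      IF (ISTAT .GT. 0) GO TO 20
C        ... USE COORDS ...
      GO TO 10
   20 CONTINUE
```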
Each call to REGET will retrieve an entry that satisfies the specified conditions. An ISTAT value greater
than zero indicates the end of successful retrievals. Conditions may include any of the relational
operators, the MAX and MIN selectors, and the Boolean operators AND and OR. If either MIN or MAX is used,
however, it must be the only condition. Before specifying a new set of conditions on an open relational
entity, the utility RECLRC may be called to destroy the current conditions.
One of the most powerful features of the relational database is the ability to randomly modify a small
number of data items efficiently. To do this, the utilities REDUPD and REDUPM are used. The update
procedure is simple. First, the projection is set. This is followed by positioning to a row or rows with a
REPOS, RECPOS, or RECOND call. Routine REGET is then used to fetch the entry. One or more of the
attributes may then be modified in the buffer, and REDUPD (or REDUPM) is used to accomplish the update.
Note that attributes not in the projection, and attributes not modified in the buffer, remain unchanged.
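The update steps above can be sketched as follows; the DBOPEN, REPROJ, REPOS, and REGET calls follow
the earlier examples, but the REDUPD argument list (relation name and entry buffer) is an assumption,
not a documented signature.

```fortran
C
C     SKETCH OF AN IN-PLACE UPDATE -- THE REDUPD ARGUMENT LIST
C     SHOWN HERE IS AN ASSUMPTION, NOT A DOCUMENTED SIGNATURE
C
      DIMENSION COORDS(3), INFO(20)
      CHARACTER*8 PATTR(3)
      DATA PATTR / 'X', 'Y', 'Z' /
C
      CALL DBOPEN ('GRID', INFO, 'R/W', 'NOFLUSH', ISTAT)
      CALL REPROJ ('GRID', 3, PATTR)
C     POSITION TO GRID POINT 1 AND FETCH ITS COORDINATES
      CALL REPOS ('GRID', 'GID', 1)
      CALL REGET ('GRID', COORDS, ISTAT)
C     MODIFY ONE ATTRIBUTE IN THE BUFFER AND UPDATE THE ENTRY
      COORDS(3) = 100.0
      CALL REDUPD ('GRID', COORDS)
      CALL DBCLOS ('GRID')
```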
If it is necessary for an application to determine the schema of a given relation, this may be done with the
utility REQURY. This routine returns the names and types of each attribute in the schema. Finally, a
relation may be sorted in an ascending or descending manner on one or more of its attributes by using
the utility RESORT.
Method:
None
Design Requirements:
1. Only integer, real single-precision, or string attributes may be added with this routine.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. Only integer, single-precision, or string attributes may be added with this routine.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. VAL must be the same type as the ATTRNAM. All RELOPs are legal for attributes of type ’INT’,
’KINT’, ’RSP’, and ’RDP’. Only ’EQ’ and ’NE’ are valid for attribute types ’STR’ and ’KSTR’.
Attribute types of ’AINT’, ’ARSP’, and ’ARDP’ may not be used in a condition. Also, for attributes
of type ’STR’ or ’KSTR’, their length must be 8 or fewer characters. Note that string attribute
values are passed as hollerith data.
2. Any RECOND call removes any existing conditions, that is, it performs an RECLRC internally.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. The string attribute, FIELDS, must be passed as a hollerith.
Error Conditions:
None
Method:
None
Design Requirements:
1. The attribute must be keyed.
2. If KEY=’ENTRYNUM’ then VAL must be 1.
Error Conditions:
1. A database fatal error occurs if the requested entry does not exist.
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
1. VAL must be the same type as the ATTRNAM. A maximum of 10 conditions may be specified. All
RELOPs are legal for attributes of type ’INT’, ’KINT’, ’RSP’, and ’RDP’. Only ’EQ’ and ’NE’
are valid for attribute types ’STR’ and ’KSTR’. Attribute types of ’AINT’, ’ARSP’, and ’ARDP’
may not be used in a condition. Also, for attributes of type ’STR’ or ’KSTR’, their length must be
8 or fewer characters. Only one condition may have a ’MAX’ or ’MIN’ RELOP. Note that string
attribute values are passed as hollerith data.
Error Conditions:
None
Method:
None
Design Requirements:
1. The sort sequence is performed in the order that the attributes are specified in ATTRLIST.
2. The relation RELNAM must be closed when RESORT is called.
Error Conditions:
None
Method:
None
Design Requirements:
1. Only integer, single-precision or string attributes may be updated with this routine.
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
SUBROUTINE   FUNCTION
UNPOS        Positions to a given unstructured record
UNRPOS
UNSTAT       Returns the length of a record
UNGET        Gets, or fetches, an entire record
UNGETP       Gets, or fetches, a partial record
UNPUT        Adds a new record to the unstructured entity
UNPUTP       Adds a partial record to the entity
As seen in Subsection 8.2, the first step in generating any entity is to perform a DBCREA. This is followed
by a DBOPEN, any desired I/O activity, and finally a DBCLOS. Suppose, for example, that the local
coordinates X, Y, and Z of 1000 grid points have been computed and reside in a block of dynamic memory
called GRID whose location pointer is IGRD (see Section 8.3). Further, assume that these coordinates have
also been converted to the basic coordinate system, and that the transformed coordinates are located in
block NGRD with pointer IGND. These data will be used in a subsequent routine or module in their
entirety. They will therefore be written into an unstructured entity called COORD as two distinct records.
The code segment to perform this is shown below:
C
C     CREATE THE NEW ENTITY AND OPEN FOR I/O
C
      CALL DBCREA ('COORD', 'IUN')
      CALL DBOPEN ('COORD', INFO, 'R/W', 'FLUSH', ISTAT)
C
C     WRITE THE TWO UNSTRUCTURED RECORDS
C
      CALL UNPUT ('COORD', Z(IGRD), 3000)
      CALL UNPUT ('COORD', Z(IGND), 3000)
C
C     I/O COMPLETE, CLOSE ENTITY
C
      CALL DBCLOS ('COORD')
The UNPUT call loads a complete record into the entity. Therefore, the above operations generate two
records in COORD. If an operation is being performed "on-the-fly," or complete records do not fit in
memory, then a "partial" put, UNPUTP, may be performed.
Now assume that the local coordinates are all in memory, but that the transformed coordinates will be
generated on a point-by-point basis and written to the COORD entity. Subroutine TRANSF transforms a set
of three local coordinates to basic coordinates stored in a local array XNEW. This is illustrated below:
C
C     CREATE AND OPEN THE ENTITY
C
      CALL DBCREA ('COORD', 'UN')
      CALL DBOPEN ('COORD', INFO, 'R/W', 'FLUSH', ISTAT)
C
C     FIRST WRITE THE LOCAL COORDINATES
C
      CALL UNPUT ('COORD', Z(IGRD), 3000)
C
C     NEXT, COMPUTE NEW COORDINATES ONE-AT-A-TIME
C
      DO 100 I = 0, 2999, 3
         CALL TRANSF (Z(IGRD+I), XNEW)
         CALL UNPUTP ('COORD', XNEW, 3)
  100 CONTINUE
C
C     TERMINATE THE PARTIAL RECORD AND CLOSE
C
      CALL UNPUT ('COORD', 0, 0)
      CALL DBCLOS ('COORD')
Note that a record of an unstructured entity that is created by partial puts must be "closed" by a call to
UNPUT. In this case, the final put operation does not extend the record but only terminates it.
The UNPUT and UNPUTP utilities have direct analogs in UNGET and UNGETP for the retrieval of data. Three
other utilities are available for data access. The first two, UNPOS and UNRPOS, allow an unstructured
entity to be positioned to a specific record. Note that this access is much faster if the entity was created
with an index structure, that is, the DBCREA call specified type 'IUN'. The third utility, UNSTAT, is used
to find the length of a given record. These utilities are demonstrated in the example below: the second
record of COORD will be accessed and each coordinate set used individually. It is not assumed that the
number of grid points is known to this application.
It is also possible to modify, or update, the contents of an individual record within an unstructured entity.
The only limitation of this feature is that the new length of the record must be the same as, or less than,
the length of the record as originally created.
Method:
If NWORD is less than the total number of words, the remaining data will not be retrieved. UNGET positions
the entity to the next record after the retrieval.
Design Requirements:
None
Error Conditions:
None
Method:
Following the retrieval, the entity is still positioned at the same record; a subsequent UNGET or UNGETP
will get the next words in the record.
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None
Method:
None
Design Requirements:
None
Error Conditions:
None