Containers Best Practices
The Lawn
22-30 Old Bath Road
Newbury, Berkshire RG14 1QN
UK
http://www.microfocus.com
MICRO FOCUS and the Micro Focus logo are trademarks or registered trademarks of Micro
Focus or one of its affiliates.
14 July 2020
The COBOL-related functionality described in this document is available in Enterprise Developer 6.0
and Enterprise Server 6.0. Some, but not all, of the COBOL-related functionality is available in Visual
COBOL 6.0 and COBOL Server 6.0.
The information in this document is intended to supplement the documentation for Enterprise
Developer and Visual COBOL rather than to replace it. As a result, this document includes a number
of links to the current documentation for Enterprise Developer. The links are to the documentation
for Enterprise Developer for Eclipse, so if you are using Enterprise Developer for Visual Studio or
Visual COBOL you will need to ensure that the information referred to is relevant to the product you
are using.
Reference to third party companies, products and web sites is for information purposes only and
constitutes neither an endorsement nor a recommendation. Micro Focus provides this information
only as a convenience. Micro Focus makes no representation whatsoever regarding the content of
any other web site or the use of any such third party products. Micro Focus shall not be liable in
respect of your use or access to such third party products/web sites.
Document Versions

Version  Date       Changes
1.0      June 2020  Initial version of document to accompany release of Visual COBOL and Enterprise Developer 6.0.
1.1      July 2020  Additional information on safely scaling down resources added to section 13.1, Prepare the application for scale-out deployment.
Methodology Overview
The methodology can be broken down into three distinct stages, which can in turn be broken down
into smaller steps:
1. Ensure all your existing application source files are held in a version control system
2. Ensure you can build your application in a consistent way which requires no human
intervention once started, and which uses only source files extracted from version control
3. Prepare your application configuration for version control
4. Store your application configuration in version control
5. Test the application you have built from version control using the application configuration
from version control
Each stage, and the steps comprising each stage, are covered in the remainder of this document.
Using a version control system is not a requirement for moving your application to a containerized
environment, but Micro Focus recommends you use one regardless of whether you are moving to a
containerized environment. This is because using a version control system enables you to track
changes to your application builds and track which version is being used in the different application
lifecycle stages (such as development, QA, and production). Storing your configuration files (as well
as your source files) in version control enables changes to those files to be controlled and tracked in
just the same way as changes to the application programs.
Note: While Micro Focus recommends that you store your source and configuration files in a
version control system, this is not absolutely necessary and you could use a less formal form
of file storage instead. For the sake of brevity, this section uses the term "version control
system" to mean whatever method of version control or file storage that you have adopted.
Neither Enterprise Developer nor container environments impose any special requirements
on how you store application source files in a version control system, so you are free to use
whatever standards and conventions you would normally use when using a version control
system.
You should also store in your version control system any scripts or project files that are used to
build the application. Ensure that none of the files you add to version control contain secrets such as
user credentials or certificates.
This definition of the build and runtime environment should be held as a document in version
control with the rest of the source code, and later in the process it will be used to help containerize
the application.
Build your application using:
• the build environment defined in section 2.1, Define the build and runtime environments
• files extracted from your version control system
Preferably this would be done using a clean machine in order to guarantee that all installed software
is tightly controlled.
Deployment packages should be generated for Interface Mapping Toolkit (IMTK) services - these are
in the form of COBOL archive (.car) files created using the imtkmake command-line utility. You can
integrate these as part of the Visual Studio or Eclipse project builds.
You can use Enterprise Server utilities to export the configuration to XML text files. To reduce the
size of these XML files, before exporting the configuration you should perform routine maintenance
and clean-up on the Enterprise Server configuration, and the configuration should be adjusted to
enable it to be more easily used in a containerized environment. The following sections outline these
steps.
How: Enterprise Developer includes the caspcupg command line utility which you can use to
upgrade your CICS resource definition files.
Note: Targeting a different version of Enterprise Server also requires you to recompile,
rebuild and retest your application. Those activities are not specific to containerization so
are not covered here.
To do this tidying you use the MVSSPLHK spool housekeeping process to archive and remove all
spool data files and job records for jobs that have exceeded the maximum retain period. See
MVSSPLHK Spool Housekeeping Process for more information.
Enterprise Developer includes the mvspcrn command that enables you to make bulk updates such
as this to a catalog file. See Bulk Update of Catalog for more information.
Setting your environment variables in Enterprise Server in this way means that all of the information
required to configure and run the region is contained within the region.
To do this, look for environment variables that are being used in the Enterprise Server configuration.
For example:
$MY-ENV-NAME
or:
$MY-ENV-NAME/myfolder/myfile.txt
You can use the Enterprise Server Administration interface to check that the region listeners are
using fixed ports; that is, they are not listed as *:* or network-adapter:*. You should make a note of
the fixed ports that are in use as you will need this information when you come to containerize the
application.
See Listeners and To set a fixed port in a listener endpoint address for more information.
Credentials that are used by Enterprise Server should be stored in a vault, so if you are not already
using the Vault Facility you should enable it. You enable the vault for use by the Micro Focus
Directory Server (MFDS) by specifying the MFDS_USE_VAULT=Y environment variable.
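For example, a Dockerfile that runs MFDS might enable the vault as follows (a minimal sketch):
ENV MFDS_USE_VAULT=Y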
The files created by the vault are encrypted using information in the file $COBDIR/etc/secrets.cfg
(on Linux) or %PROGRAMDATA%\Micro Focus\Enterprise Developer\mfsecrets\secrets.cfg (on
Windows). You can see the encrypted files created by the vault within the vault file system location
(defined by the location element in the secrets.cfg file).
If the vault is not enabled when you export the Enterprise Server and MFDS configuration,
passwords will appear as plain text in the configuration XML files. For security reasons, you should
not store passwords as plain text in your version control system.
See Vault Facility and Configure the Default Vault for more information.
This section contains information on steps you need to perform to create or export Enterprise Server
data and configuration files into formats that are suitable for use in a version control system. The
key benefits of storing your configuration in version control are as follows:
• you get the same trackability and accountability for your application's configuration files
that you get for the source files
• configuration changes are stored alongside the application programs to which they refer,
preventing the two from becoming out of step
It is possible that your application uses some application-specific configuration files that are not
easily converted into a text-based format. You should still add these files to the version control
system so that they are available in your containerized environment, but you will not be able to use
the version control system's full range of features on them.
• mfds
• casrdtex
• the catalog import/export utility (see Importing and Exporting a Catalog)
• casesxml
Notes:
• If you use the mfds command you must use the /x 5 parameter, or the output produced
will not be an XML file.
• If you use the casrdtex command you must use the /xm switch, or the output produced
will be a resource definition table (.rdt) file rather than an XML file.
It can be helpful when testing your application to be able to set up a test environment consistently,
so you should consider storing some test data in version control. Never store sensitive production
data in version control.
This section of the document looks at the steps involved in the next stage of the process, which is to
take your existing application and get it running in a containerized environment.
If you need to switch from using 32-bit regions to 64-bit regions the easiest way is to edit the
exported XML region definition, search for the “mfCAS64Bit” element and set the value to 1.
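For example, the edited entry might look like this (a sketch; the exact markup depends on the exported XML):
<mfCAS64Bit>1</mfCAS64Bit>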
Alternatively, you can use Enterprise Server Administration to make a copy of a 32-bit region but use
the Working Mode option to specify that the new copy is 64-bit. Copying a region in this way ensures
that the old and new regions are configured the same apart from the bitism.
If you do switch the regions that an application uses from 32-bit to 64-bit mode you must also
recompile, rebuild and retest your application. Those activities are not specific to containerization so
are not covered here. See 64-Bit Native COBOL Programming for more information.
If your application does not use any third-party software, this step just requires you to build the
Enterprise Developer Build Tools for Docker containers, which you can do by running bld.sh or
bld.bat in the container demonstration that comes with Enterprise Developer. See Running the
Container Demonstration for the Enterprise Developer Base Image for more information.
If your application requires additional software you will need to also create container images which
include that software. The process you use to generate these images should be capable of being
automated, and the scripts, batch files, and other files used (such as Dockerfiles) should be stored in
version control.
You run the build using a docker run command; the options used have the following meanings:
• --rm specifies that the container's file system is automatically removed when the container
exits.
• -v specifies the volume to mount into the container.
• -w specifies the default working directory for running binaries within the container.
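For example (the image name, paths, and build command are illustrative assumptions):
docker run --rm -v "$PWD:/home/app" -w /home/app microfocus/entdevhub:latest ./build.sh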
If your application consists of one or more IMTK services, you must create COBOL archive (.car) files
for the services as part of the build process (and deploy these later using the mfdepinst
command-line utility).
Note that although the contents of the containers themselves remain constant once you have
created them, you can configure the container images at run-time by using environment variables or
volume-mounted files and directories, enabling you to configure items such as credential "secrets",
Transport Layer Security (TLS) certificates/keys and database connection strings.
A Dockerfile contains all the necessary instructions to build your application into a container. Micro
Focus recommends using a multi-stage Dockerfile, as this enables you to build the application using
a container that includes the relevant utilities such as compilers, but the production application
container runs without those utilities, lessening the security risk. See Use multi-stage builds on the
Docker web site for more information.
To achieve this, a multi-stage Dockerfile enables you to build the application in one stage, then in a
later stage you assemble the final deployment image. There are other approaches that you can use
to achieve the same thing, such as a Continuous Integration (CI) pipeline, but using a multi-stage
Dockerfile generally provides the most platform portability.
Note: If you are using Docker, you must be using Docker 17.05 or later to use multi-stage
builds.
If you are using Visual COBOL/Enterprise Developer projects to build your application from the IDE,
Visual COBOL/Enterprise Developer can create a template Dockerfile for you. On Eclipse, right-click
your project in the Application Explorer view, COBOL Explorer view or Project Explorer view, and
click New > Dockerfile. On Visual Studio, right-click your project in Solution Explorer and click Add >
COBOL Docker Support. See To add a Dockerfile to a native COBOL project for more information.
The following Dockerfile commands show an example of how this might be done:
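# Build stage: compile the application inside the Build Tools image
# (image names, paths, and the build script are illustrative assumptions)
FROM microfocus/entdevhub:latest AS build
WORKDIR /build
# Copy the application source from the build context into the container
COPY src/ .
RUN . $MFPRODBASE/bin/cobsetenv && ./build.sh

# Deployment stage: copy only the built artifacts into the runtime image
FROM microfocus/entserver:latest
COPY --from=build /build/deploy /home/esadm/deploy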
Building the tests is similar to building your application in that your Dockerfile must first copy all the
test source code into the container from the "build context", then it needs to invoke the necessary
tools from within the container to build your tests.
In a separate stage in the Dockerfile, COPY statements copy the required files (including the
application and test binary modules) from the previous stages, and then the cobmfurun command
executes the tests:
The following Dockerfile commands show an example of how this might be done:
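# Test-build stage: copy the test sources and build the MFUnit tests
# (stage names, paths, and file names are illustrative assumptions)
FROM microfocus/entdevhub:latest AS buildtest
WORKDIR /tests
COPY tests/ .
RUN . $MFPRODBASE/bin/cobsetenv && ./buildtests.sh

# Test-run stage: copy the application and test binaries, then run the
# tests (the exact cobmfurun command name and options depend on the
# platform and bitness; see the cobmfurun documentation)
FROM microfocus/entdevhub:latest AS runtest
COPY --from=build /build/bin /test
COPY --from=buildtest /tests/bin /test
WORKDIR /test
RUN . $MFPRODBASE/bin/cobsetenv && cobmfurun64 TestBankDemo.so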
The Dockerfile should also set any system environment variables which were previously identified:
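# Set the system environment variables previously identified
# (the name and value here are illustrative assumptions)
ENV MY_ENV_NAME=/home/esadm/myfolder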
Building the image should also import the Enterprise Server configuration (that is, the MFDS and
region configuration) so that this does not need to be performed when the container starts. IMTK
services which have been assembled into deployment packages (.car files) during the build stage can
be installed into Enterprise Server using the mfdepinst command-line utility.
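For example, a Dockerfile step might install a generated package as follows (the .car file name and path are assumptions):
RUN . $MFPRODBASE/bin/cobsetenv && mfdepinst /home/esadm/deploy/BANKDEMO.car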
You should ensure that when the application is run, the user name (or UID) is set to be a user with
the minimum permissions necessary to run the application. Do not use "root" unless it is essential to
do so. See USER instruction on the Docker web site for more information.
# Swap away from being the root user so Enterprise Server is not running
# with elevated privileges
USER esadm
One effect of using an alternative user to root is that your user needs special permissions to bind to
network ports < 1024. By default, MFDS uses port 86, so when starting MFDS for a non-root user you
should set the CCITCP2_PORT environment variable to override the default port on which MFDS
listens. You can set this environment variable in the Dockerfile using an ENV statement such as the
following:
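# Use a non-privileged port for MFDS (1086 is an arbitrary choice)
ENV CCITCP2_PORT=1086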
The image should also declare all the listener ports used by the image, and the command it runs
needs to start MFDS and the Enterprise Server:
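# Declare the fixed listener ports noted earlier (values are assumptions)
EXPOSE 1086 9003 9023
# Start MFDS and the region via the start script described below
ENTRYPOINT ["/home/esadm/startserver.sh"]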
Add a HEALTHCHECK statement to your Dockerfile to ensure that your Enterprise Server region is
running:
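# Check that a listener is still accepting requests; the port and
# endpoint used here are illustrative assumptions
HEALTHCHECK --interval=30s --timeout=5s CMD curl -f http://localhost:9003/ || exit 1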
See HEALTHCHECK instruction on the Docker web site for more information.
The following example docker run command shows how you could specify an environment
variable that defines a password and volume mount a folder containing certificate details:
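# Pass a database password and volume-mount a folder of TLS
# certificates (the paths and values are illustrative assumptions)
docker run -e DB_PASSWORD=mysecret -v /secure/certs:/home/esadm/certs bankdemo:latest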
You need to create an executable script that is run when the container is started; that is, the
script specified by the ENTRYPOINT or CMD command in the Dockerfile. The script should
ensure that the environment is set up before starting MFDS and the Enterprise Server.
#!/bin/bash
. $MFPRODBASE/bin/cobsetenv $MFPRODBASE
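The remainder of the script might then start MFDS and the region and keep the console log on the container's standard output, along these lines (the region name and log path are assumptions):
mfds &
casstart /rBANKDEMO
tail -f /var/mfcobol/es/BANKDEMO/console.log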
As noted in section 3.8, Identify secrets and use the vault, you should already have ensured that
the Vault Facility is enabled for your application. You should now ensure that before MFDS is started,
the necessary vault secrets are recreated within the running container.
You can use environment variables or volume mounts to pass the secret values into the container
and then set these into the vault using the mfsecretsadmin command-line utility. See The
mfsecretsadmin Utility for more information.
For example, the following command will recreate the pgsql.pg.postgres.password secret in the
vault using the value of the DB_PASSWORD environment variable:
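# (option spelling per The mfsecretsadmin Utility documentation)
mfsecretsadmin write pgsql.pg.postgres.password "$DB_PASSWORD"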
Micro Focus recommends that you unset these environment variables in the container before
starting MFDS in order to avoid the values of the environment variables being visible to anyone
using ESCWA, MFDS, or Enterprise Server Monitor and Control (ESMAC) to remotely monitor the
server.
The contents of the console.log file should be made easily available outside the container, as can be
seen in the example script above, where the tail command is used for this. Other logs, such as the
Micro Focus Communications Server (MFCS) log.html, are also useful for monitoring and diagnostic
purposes.
For more information see Enterprise Server Log Files and Communications Process Log Files.
Because administrative functions need to be performed using FSView, you need to configure
Fileshare to use the vault with suitable FSView credentials specified. You use the /uv option when
starting Fileshare to do this.
• fs.cfg
• fhredir.cfg
• dbase.ref
• If you are not already using an External Security Manager (ESM) to control access to
Enterprise Server you should start using one. For example, you could use Active Directory or
some other security manager to restrict access to Enterprise Server and ESCWA.
• Restrict the use of listeners as much as possible.
• Do not expose any ports that are not strictly required to run your application.
• Enable auditing within the security manager. For best performance, run a local syslog server
which you can configure to write to a file and/or forward the events to an external syslog
server.
• Run the container using a user name (UID) with the minimum possible permissions.
For more information see Enterprise Server Security and Enterprise Server Auditing.
• If you are moving your application from Windows to Linux (or vice versa), bear in mind the
following:
o The default file extension of data files is different between the two platforms, so
could require additional configuration using the IDXNAMETYPE option in the File
Handler configuration file (extfh.cfg). For more information see Configuration File
and IDXNAMETYPE.
o The different platforms have different behavior in a number of areas such as case
sensitivity, and path and filename separators. You must ensure that your application
correctly handles these areas on the platforms it will be deployed on.
• Your container might require additional software and configuration to allow SQL access. For
example, to use ODBC this would be as follows:
o Install database drivers such as unixODBC.
o Build the appropriate XA switch modules.
o Edit the ODBC configuration file.
The steps required would be different if you were using a different database.
For more information see Building RM Switch Modules and Using XA-compliant Resources
(XARs).
• You need to consider where you are going to store your data files. Any changes made to
data within a container are lost when the container is stopped. This can be useful during
testing but is unlikely to be suitable for production use. Options you could consider using
include the following:
o Volume mounts to some persistent storage
o Data volumes
o An external database
For example:
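# Mount a host directory for persistent data files (the host path,
# container path, and image name are illustrative assumptions)
docker run -v /data/bankdemo:/home/esadm/data bankdemo:latest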
Containerized scale-out environments, such as Kubernetes, provide scaling at the level of a logical
grouping of containers. Kubernetes refers to this as a Pod. Many Pods can run on the same node
(computer) and the Pods can be monitored by Kubernetes controllers so that the number of Pods
scales up or down depending on the application load and any failed Pods are automatically
restarted.
Creating scale-out applications poses the challenge of maintaining data consistency across the
different application instances while increasing availability. To help with this, Enterprise Server includes the
Micro Focus Database File Handler (MFDBFH). MFDBFH enables sequential, relative, and indexed
sequential files to be migrated to an RDBMS in order to provide improved scaling, and is a
requirement for running Performance and Availability Clusters (PACs). MFDBFH is not currently
supported by Visual COBOL file access.
For more information see Micro Focus Native Database File Handling and Enterprise Server Region
Database Management and Scale-Out Performance and Availability Clusters.
Using MFDBFH enables you to store and subsequently access VSAM data in an RDBMS by only
making application configuration changes; that is, without making any changes to your program
source code. Section 10.1. Migrate VSAM to RDBMS using MFDBFH provides more information on
this.
You could use a Fileshare server in a scale-out environment, but the Fileshare technology was not
specifically designed for such an environment so you must ensure that it would be suitable for your
use. In particular, you should ensure that the performance and recovery options of Fileshare meet
your needs.
If data is already stored in an RDBMS you should review whether the current server configuration is
suitable for a scale-out deployment.
You should migrate all non-static data. This includes catalogued and spool files (on which you should
have performed housekeeping, as described in 3. Prepare your application configuration for
version control, when you moved your application configuration files to your version control system),
and any other CICS files that are accessed via FCTs. In your final production move this will need to be
carefully planned. Sections 10.2. Deploy the catalog and data files to an RDBMS and 10.3. Deploy the
CICS resource definition file to an RDBMS provide more information on this.
Enterprise Server supports the use of a scale-out environment with a Performance and Availability
Cluster (PAC). A PAC uses MFDBFH and the open-source, in-memory data structure store Redis to
share data between members of the PAC. You can deploy multiple instances of your Enterprise
Server container and store data in an RDBMS using SQL natively within the application. Visual COBOL
does not support the use of Performance and Availability Clusters or MFDBFH. For more information
see Redis on the Redis web site and Scale-Out Performance and Availability Clusters.
• If you have not already done so, convert any catalog entries that use absolute paths to
relative paths, or convert them to the correct paths for use with MFDBFH. Use Unix-style file
separators ("/" rather than "\") so that the entries work whether deployed on Windows or Linux.
Tip: Micro Focus recommends that you convert your catalog to use only relative paths so
that as soon as you deploy your catalog to an RDBMS, the paths for all the entries will be
correct for using MFDBFH.
If you do not convert all the entries to be relative paths you will need to keep a copy of
the original catalog. This is to enable you to retrieve the physical file location when
deploying the data files to the RDBMS using the dbfhdeploy tool.
• Check to see whether you have any PCDSN overrides in your JCL job cards. Look for
%PCDSN% in your job card. Ideally you should change these to use catalog lookup, although
you could alternatively specify the correct paths for MFDBFH.
• If you are running CICS and are using FCTs rather than the catalog, you will need to change
these entries to MFDBFH locations.
10.2. Deploy the catalog and data files to an RDBMS
When deploying the catalog and data files to an RDBMS you should consider the following points:
• You should use your catalog to retrieve a list of files that need to be deployed.
• When deploying files without headers, that is, fixed block sequential and line sequential
files, you will need to supply information regarding format and record lengths. You can
retrieve this information from the catalog.
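For example, a file identified from the catalog might be deployed with a dbfhdeploy command of the following form (the file name and connection URL are illustrative assumptions):
dbfhdeploy add BNKACC.DAT sql://mydbserver/VSAM/BNKACC.DAT?folder=/data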
Tip: For production this will be a one-time operation. For testing and development,
deploying a "known" catalog and set of data improves the repeatability of your
environment.
• Regions that handle CICS should be part of a Performance and Availability Cluster (PAC).
• Micro Focus recommends that you deploy your CICS resource definition file to MFDBFH and
configure your region to use it, although you can replicate the CICS Resource Definition file
locally within each container image.
Tip: Any static changes that you make to resources in the CICS resource definition file
should be committed to version control.
For more information see Configuring CICS Applications for Micro Focus Database File Handling,
Configuring CICS resources for Micro Focus Database File Handling and PAC Best Practices.
• Reconfigure shared data access to work with a scale-out capable data source, for example
MFDBFH backed by a SQL database. Update the Dockerfile to install any additional RDBMS
drivers that are required:
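# Install an ODBC driver manager and a PostgreSQL ODBC driver; the
# package names are assumptions and vary by base image/distribution
RUN yum install -y unixODBC postgresql-odbc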
• Configure Enterprise Server to specify the database to be used by MFDBFH. This involves
configuring the XA Resource Configuration with settings appropriate to the specific database
instance.
You can specify the XA Switch module as an environment variable within the configuration
and set the value of that environment variable within the Dockerfile, or you can simply
specify the full path to where the XA switch module is located within the container image
file system.
For more information see Building RM Switch Modules and Using XA-compliant Resources
(XARs).
For more information see Database File Handling Environment Variables, Micro Focus Native
Database File Handling and Enterprise Server Region Database Management and Scale-Out
Performance and Availability Clusters.
You must give careful consideration to the security requirements of your application, such as which
user id an application runs as, which network ports are exposed, and what firewall rules are applied.
When considering these aspects, the aim should always be to use only the minimum possible
permissions and minimum open ports in order to reduce security vulnerability. Any files which you
create or modify while implementing the security requirements should be stored in version control.
You should use lifecycle hooks to ensure that scaling down your application's resources does not
result in the unexpected termination of Enterprise Server work, particularly in the case of long-
running batch jobs. By using lifecycle hooks you can prevent new work from being allocated to a
region, and also guarantee that the region will remain running until it has completed its current
workload. The auto-scaling rules that you define should specify a grace period during which this
shutdown procedure will be allowed to run before the host instance is terminated. In the case where
a region is running batch jobs, this grace period should be at least the expected amount of time for
the batch job to complete.
There are many different valid configurations that you can use to deploy the application, either using
cloud provider-managed services (such as Amazon ElastiCache for Redis, Microsoft Azure Cache for
Redis, Microsoft Azure Database for PostgreSQL, or Amazon Aurora) or running equivalents within
the cluster or on premises. You need to evaluate which option best suits your requirements.
If you are using Enterprise Server, in the StatefulSet configuration specify the use of a Performance
and Availability Cluster (PAC). This involves specifying the PAC and the scale-out repository (SOR)
that it uses via the appropriate environment variables.
For more information see PAC and SOR Environment Variables and Database File Handling
Environment Variables.
If you are using Visual COBOL for SOA, specify a label for the application which can be used by
ESCWA's Kubernetes auto-discovery mechanism to efficiently select appropriate Pods (see below for
more details). Also define the port used by the Micro Focus Directory Server (that is, the value of the
CCITCP2_PORT environment variable) in the Pod with the IANA-registered name "mfcobol", even
when the default port (86) is not being used, as this allows ESCWA to automatically discover the
port it should use to communicate with the directory server.
Tip: Kubernetes secrets are used to inject credentials and certificates into the application
using environment variables and files. Use an init container to populate the vault with the
credentials stored in environment variables, then use a shared "Memory" emptyDir volume
to add these into the application container, for example, by volume-mounting to the
location /opt/microfocus/EnterpriseDeveloper/etc/secrets/microfocus. If you were to
directly inject the credentials environment variables into the application container, they
would be visible (unencrypted) to anyone with ESCWA access to the running Pod.
See Distribute Credentials Securely Using Secrets on the Kubernetes web site for more
information.
The following YAML fragment shows how to specify the scale-out features, configure a PAC and SOR
for use with your application, and specify liveness and readiness probes to check that the application
is running correctly:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ed60-bnkd-statefulset-mfdbfh
  labels:
    app: ed60-bnkd-mfdbfh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ed60-bnkd-mfdbfh
  serviceName: ed60-bnkd-svc-mfdbfh
  template:
    metadata:
      labels:
        app: ed60-bnkd-mfdbfh
    spec:
      # Allow time for clean shutdown of Enterprise Server
      terminationGracePeriodSeconds: 120
      nodeSelector:
        "beta.kubernetes.io/os": linux
      securityContext:
        runAsUser: 500
        fsGroup: 500
      initContainers:
      # Initialize the local vault with needed secrets
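      # The continuation below is a sketch; the container and secret
      # names, images, and script paths are illustrative assumptions
      - name: vault-init
        image: bankdemo:latest
        command: ["/home/esadm/vaultconfig.sh"]
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: ed60-bnkd-secrets
              key: db-password
        volumeMounts:
        - name: vault-volume
          mountPath: /opt/microfocus/EnterpriseDeveloper/etc/secrets/microfocus
      containers:
      - name: bankdemo
        image: bankdemo:latest
        lifecycle:
          preStop:
            exec:
              # Stop the region cleanly before the Pod is terminated
              command: ["/home/esadm/pre-stop.sh"]
        volumeMounts:
        - name: vault-volume
          mountPath: /opt/microfocus/EnterpriseDeveloper/etc/secrets/microfocus
      volumes:
      - name: vault-volume
        emptyDir:
          medium: "Memory"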
The above fragment includes references to the files vaultconfig.sh and pre-stop.sh. Example
contents for vaultconfig.sh are shown below:
#!/bin/bash
. $MFPRODBASE/bin/cobsetenv
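# Write the secrets injected by Kubernetes into the vault, then clear
# them from the environment (secret and variable names are assumptions)
mfsecretsadmin write pgsql.pg.postgres.password "$DB_PASSWORD"
unset DB_PASSWORD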
Example contents for pre-stop.sh, which stops the region and then signals the sidecar containers to
exit (see the shutdown.txt check in the syslog sidecar below), are:
#!/bin/bash
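. $MFPRODBASE/bin/cobsetenv
# Stop the region, then signal the sidecar containers to exit
# (the region name and work area path are illustrative assumptions)
casstop /rBANKDEMO
touch /var/mfcobol/es/BANKDEMO/shutdown.txt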
The following YAML fragment shows how to use sidecar containers to output any log files needed for
diagnostic purposes into the Kubernetes logging framework:
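# A minimal sketch of a log-tailing sidecar; the image and the log
# path within the shared region work area are illustrative assumptions
- name: console-log
  image: busybox:latest
  command: ["/bin/sh", "-c", "tail -n+1 -F /var/mfcobol/es/BANKDEMO/console.log"]
  volumeMounts:
  - name: region-workarea-volume
    mountPath: /var/mfcobol/es/BANKDEMO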
The following YAML fragment shows how to use a sidecar to run a syslog daemon within the Pod to
receive Enterprise Server audit output and potentially forward it to a remote syslog daemon:
- name: mfaudit-log
  image: bankdemo:latest
  command: ["rsyslogd", "-n", "-f", "/etc/mfaudit/rsyslog.conf"]
  imagePullPolicy: "Always"
  ports:
  - name: incoming-logs
    containerPort: 2514
  lifecycle:
    preStop:
      exec:
        # Wait for the server to signal to have been shut down, then
        # terminate the syslog daemon
        command: ["/bin/sh", "-c", "while [ ! -f /var/mfcobol/es/BANKDEMO/shutdown.txt ]; do sleep 3; done; RSYSLOG_PID=`pgrep rsyslogd`; kill -s SIGTERM $RSYSLOG_PID"]
  volumeMounts:
  # RSYSLOG Configuration - rsyslog.conf loaded from configmap
  - name: syslog-conf-volume
    mountPath: /etc/mfaudit/
  - name: region-workarea-volume
    mountPath: /var/mfcobol/es/BANKDEMO
The following YAML fragment shows how to use a sidecar to run a Prometheus metrics provider to
allow Horizontal Pod Autoscaling to scale based on the Enterprise Server metrics.
Typically, you will define Kubernetes services for the application listeners that you need to expose,
such as 3270 and Web Service listeners, and will front these with a load balancer. The following YAML
shows how you might do this:
kind: Service
apiVersion: v1
metadata:
  name: ed60-bnkd-svc-mfdbfh
spec:
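  # The remainder of the definition is a sketch; the service type,
  # selector, and port values are illustrative assumptions
  type: LoadBalancer
  selector:
    app: ed60-bnkd-mfdbfh
  ports:
  - name: tn3270
    port: 9023
  - name: web-services
    port: 9003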
Your chosen database server must also be available. Create Kubernetes services to route
connections to Redis and the database servers and use these service addresses when configuring
Enterprise Server resources. For example:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ed60-bnkd-svc-redis
  name: ed60-bnkd-svc-redis
spec:
  externalName: <address of Redis server>
  selector:
    app: ed60-bnkd-svc-redis
  type: ExternalName
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pgsql-svc
  name: ed60-bnkd-svc-pgsql
spec:
  externalName: <address of database server>
  selector:
    app: pgsql-svc
  type: ExternalName
status:
  loadBalancer: {}
If you are using Visual COBOL for SOA within a Kubernetes cluster you can run ESCWA within the
same Kubernetes cluster, and configure it to scan the cluster for Enterprise Server Pods.
For example, use a sidecar container to run kubectl proxy using the Pod's port 8001 and start ESCWA
with the command line:
--K8sConfig={"Direct":false,
"Port":"8001",
"LabelEnvironment":false,
"Label":"app%3Dmyapp-label"}
If you are using Enterprise Server and your servers are running as part of a Performance and
Availability Cluster (PAC) you can alternatively/additionally configure ESCWA to show the members
of the PAC. For example:
--SorList=[{"SorName":"MySorName",
"SorDescription":"My PAC instance",
"SorType":"redis",
"SorConnectPath":"my-redis-server:6379",
"Uid" : "1"}]
ESCWA should be configured with TLS and ESM security enabled, and the ESCWA deployment should
not be replicated as this is currently not supported. For more information see Transport Layer
Security (TLS) in ESCWA and Specifying an External Security Manager.
You should adjust the number of replicas and the number of SEPs within each replica to achieve the
best balance of performance and resilience. You can use tools such as LoadRunner and UFT to test
various scenarios. For more information see LoadRunner Professional and UFT One.
Kubernetes autoscaling works particularly well for stateless applications, such as stateless REST
applications, but many Enterprise Server applications are stateful in nature, as is typical of CICS 3270
applications. Once a 3270 user session is connected to a particular server pod instance, that session
is "sticky" to that pod which means that if the pod becomes overloaded, scaling up the cluster will
not necessarily improve the performance for existing sessions. Also, if the cluster is scaled down,
active sessions connected to a pod which is terminated as part of that scale down will be
disconnected and in-flight transactions might not be completed.
For applications where load is known but varies at different times of day, for example, online load
during the day and batch load at night, manual (or timed) scaling of the application might be more
appropriate than using the Kubernetes autoscaling support. You could use a command of the
following form to achieve this:
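# Manually scale the StatefulSet to the capacity needed for the
# current workload (the resource name is taken from the earlier YAML)
kubectl scale statefulset ed60-bnkd-statefulset-mfdbfh --replicas=3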
Kubernetes provides a number of built-in metrics, such as memory and CPU usage, but these are not
always the most appropriate metrics for use with Enterprise Server. You can supplement the
standard Kubernetes metrics with custom Prometheus metrics. Enterprise Server contains an Event
Manager exit module called casuetst which, when enabled via the environment variable
ES_EMP_EXIT_n (ES_EMP_EXIT_1=casuetst, for example), creates a file called ESmonitor1.csv. This is
a simple comma-separated value file containing information such as the number of tasks processed
per minute, average task latency, average task duration, and task queue length.
YYYYMMDDhhmmssnn,Tasks-PM,AvLatency,AvTaskLen,-Queue--,TotalTasks,SEPcount,-Dumps--,FreeSMem,
where:
• YYYYMMDDhhmmssnn is the date and time that the metrics were recorded
• Tasks-PM is the number of tasks processed per minute
These metrics can be added to a Prometheus server, which the Kubernetes metrics server can be
configured to query (through the use of the Kubernetes Prometheus adapter).
You can expose the Enterprise Server metrics using a sidecar container running a small program
which reads the contents of the ESmonitor1.csv file and exposes the relevant values in the
Prometheus metrics format. Client libraries for a number of programming languages are available to
make this straightforward, one of the easiest to use being the Golang version.
A sample Golang program is shown below. It demonstrates the creation of Prometheus "Gauges" for
the Enterprise Server metrics (which a background thread keeps up to date based on the changing
values in the ESmonitor1.csv file), with the metrics themselves returned by an http GET request to
the pod’s port 8080/metrics:
package main

import (
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Gauges for the Enterprise Server metrics read from ESmonitor1.csv.
// Only es_tasks_per_minute is referenced elsewhere in this document;
// the other metric names are illustrative.
var (
	tPM             = prometheus.NewGauge(prometheus.GaugeOpts{Name: "es_tasks_per_minute", Help: "Tasks processed per minute"})
	avgLatency      = prometheus.NewGauge(prometheus.GaugeOpts{Name: "es_average_latency", Help: "Average task latency"})
	avgTaskDuration = prometheus.NewGauge(prometheus.GaugeOpts{Name: "es_average_task_duration", Help: "Average task duration"})
	workQueued      = prometheus.NewGauge(prometheus.GaugeOpts{Name: "es_work_queued", Help: "Task queue length"})
)

// setField parses column n of the CSV record and updates the gauge
func setField(s []string, n int, g prometheus.Gauge) {
	if v, err := strconv.ParseFloat(strings.TrimSpace(s[n]), 64); err == nil {
		g.Set(v)
	}
}

func recordMetrics() {
	// Start background thread which reads ESmonitor1.csv and
	// updates the gauges, then sleeps 30 seconds and repeats
	go func() {
		arg1 := os.Args[1]
		time.Sleep(60 * time.Second)
		for {
			data, err := ioutil.ReadFile(arg1)
			if err != nil {
				log.Printf("File reading error: %v", err)
				time.Sleep(60 * time.Second)
				continue
			}
			s := strings.Split(string(data), ",")
			if len(s) > 4 {
				setField(s, 1, tPM)             // Tasks-PM
				setField(s, 2, avgLatency)      // AvLatency
				setField(s, 3, avgTaskDuration) // AvTaskLen
				setField(s, 4, workQueued)      // -Queue--
			}
			time.Sleep(30 * time.Second)
		}
	}()
}

func init() {
	// Metrics have to be registered to be exposed:
	prometheus.MustRegister(tPM)
	prometheus.MustRegister(avgLatency)
	prometheus.MustRegister(avgTaskDuration)
	prometheus.MustRegister(workQueued)
}

func main() {
	recordMetrics()
	port := os.Getenv("LISTENING_PORT")
	if port == "" {
		port = "8080"
	}
	// Serve the registered metrics in Prometheus format on /metrics
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
A sidecar container in the Pod template then runs this program, mounting the region work area so
that it can read the ESmonitor1.csv file:
- name: custom-metrics
  image: es-metrics:latest
  imagePullPolicy: "Always"
  ports:
  - name: metrics
    containerPort: 8080
  volumeMounts:
  - name: region-workarea-volume
    mountPath: /esmetric/workarea
Kubernetes autoscaling works by querying the Kubernetes Metrics Server, so in order for
Prometheus metrics to be used for autoscaling they must first be accessible via the Kubernetes
Metrics Server. This is achieved using the k8s-prometheus-adapter with a configuration that
details the required Prometheus metrics, and by adding annotations to the application pod template
specification to make Prometheus scrape the actual metric values from the pods.
For more information see Metrics Server on the Kubernetes web site, k8s-prometheus-adapter on
GitHub, and the Scraping Pod Metrics via Annotations section of Prometheus on Helm Hub.
For example, the following pod annotations would indicate that Prometheus should scrape metrics
from the /metrics URL on the pod's port 8080:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
The Prometheus adapter must be configured with rules that detail which metric values to add to the
Kubernetes Metrics Server, as described in Configuration Walkthroughs on GitHub.
rules:
  default: true
  custom:
  - seriesQuery: 'es_tasks_per_minute'
    seriesFilters: []
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
The metric can then be used with the Kubernetes Horizontal Pod Autoscaler by applying a suitable
HorizontalPodAutoscaler resource, as shown below:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-bankdemo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: ed60-bnkd-statefulset-mfdbfh
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: es_tasks_per_minute
      targetAverageValue: 100