Temenos on IBM LinuxONE Best Practices Guide
Vic Cross
Ernest Horn
Colin Page
Jonathan Page
Robert Schulz
John Smith
Chris Vogan
IBM Redbooks
February 2020
SG24-8462-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction to Temenos and IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Temenos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Temenos on IBM LinuxONE solves an industry challenge . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Why Temenos on IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 IBM LinuxONE value for Temenos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Lab environment testing of Temenos on IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Lab environment results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Hardware configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.4 Configuration (Logical architecture) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.5 Transaction mix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.6 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4 Temenos modules supported/unsupported by IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5 Solution Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Financial Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 Business and Technical Sales Contacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Chapter 3. Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.1 Traditional on-premises (non-containerized) architecture . . . . . . . . . . . . . . . . . . . . . . . 54
3.1.1 Key benefits of architecting a new solution instead of lift-and-shift. . . . . . . . . . . . 54
3.2 Machine configuration on IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2.1 System configuration using IODF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3 IBM LinuxONE LPAR Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3.1 LPAR Layout on IBM LinuxONE CPCs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.4 Virtualization with z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.4.1 z/VM installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.4.2 z/VM SSI and relocation domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.4.3 z/VM memory management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.4.4 z/VM paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.4.5 z/VM dump space and spool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.6 z/VM minidisk caching. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.7 z/VM share . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.8 z/VM External Security Manager (ESM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.4.9 Memory of a Linux virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.4.10 Simultaneous Multi-threading (SMT-2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.4.11 z/VM CPU allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.4.12 z/VM configuration files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.13 Product configuration files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.14 IBM Infrastructure Suite for z/VM and Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.5 Pervasive Encryption for data-at-rest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.5.1 Data-at-rest protection on Linux: encrypted block devices . . . . . . . . . . . . . . . . . . 72
3.5.2 Data-at-rest protection on z/VM: encrypted paging. . . . . . . . . . . . . . . . . . . . . . . . 74
3.6 Networking on IBM LinuxONE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.6.1 Ethernet technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Appendix A. Sample product and part IDs and model numbers . . . . . . . . . . . . . . . . 103
Appendix B. Creating and working with the first IODF for the server . . . . . . . . . . . . 107
The first IODF for the server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Notices
This information was developed for products and services offered in the US. This material might be available from IBM in
other languages. However, you may be required to own a copy of the product or product version in that language in order
to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program,
or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing
of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may
not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve
as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product
and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any
obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual performance results
may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM
products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent
goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely
as possible, the examples include the names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on
various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application
programming interface for the operating platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function
of these programs. The sample programs are provided “AS IS”, without warranty of any kind. IBM shall not be liable for any
damages arising out of your use of the sample programs.
Temenos contributed content and sections to this publication: Copyright Temenos Headquarters SA. Reprinted by
Permission
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
BLU Acceleration®, Db2®, DB2®, DS8000®, Easy Tier®, FICON®, FlashCopy®, GDPS®, HyperSwap®, IBM®, IBM BLU®, IBM Cloud™, IBM Spectrum®, IBM Z®, Insight®, Interconnect®, OMEGAMON®, Parallel Sysplex®, RACF®, Redbooks®, Redbooks (logo)®, Storwize®, System Storage™, System z®, Tivoli®, WebSphere®, z Systems®, z/Architecture®, z/OS®, z/VM®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Ansible, Fedora, JBoss, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
The world’s most successful banks run on IBM®, and increasingly IBM LinuxONE. Temenos,
the global leader in banking software, has worked alongside IBM for many years on banking
deployments of all sizes. This book marks an important milestone in that partnership.
Temenos on IBM LinuxONE Best Practices Guide shows financial organizations how they can
combine the power and flexibility of the Temenos solution with the IBM platform that is
purpose built for the digital revolution.
Authors
This book was produced by a team of specialists from around the world working at IBM
Redbooks Centers in Raleigh, NC and Montpellier, France.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us
your comments about this book or other IBM Redbooks publications in one of the following
ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
The world's most successful banks run on IBM, and increasingly IBM LinuxONE. Temenos,
the global leader in banking software, has worked alongside IBM for many years on banking
deployments of all sizes. This book marks an important milestone in that partnership.
Temenos on IBM LinuxONE Best Practices Guide shows financial organizations how they can
combine the power and flexibility of the Temenos solution with the IBM platform that is
purpose built for the digital revolution.
1.1 Introduction to Temenos and IBM LinuxONE

1.1.1 Temenos
Temenos is a world leader in banking software. Over 3000 clients across the globe, including
41 of the top 50 banks, use Temenos to deliver banking services to more than 500 million
customers.
Temenos' objective is to provide financial organizations of all sizes with the software they
need to thrive in the new era of Open Banking, instant payments and cloud. The integrated
Temenos platform supports traditional Linux deployments.
Temenos' core banking solutions are centered around two products: Temenos Infinity and
Temenos Transact. Both solutions give banks the most complete set of digital front office and
core banking capabilities. Using the latest cloud-native, cloud-agnostic technology, banks
rapidly and elastically scale, benefiting from the highest levels of security and multi-cloud
resilience, generating significant infrastructure savings. Advanced API-first technology is
coupled with leading design-led thinking and continuous deployment. As a result, banks are
empowered to rapidly innovate, connecting to ecosystems and enabling developers to build
in the morning and consume in the afternoon. These substantial benefits apply to banks
whether they are running their software on-premises, on private or public clouds.
Temenos invests 20% of annual revenue in R&D to continue driving technological innovation
for clients. Combined with the largest global community of banks, FinTechs, developers and
partners in the financial industry, Temenos is leading the digital banking revolution.
Temenos products have helped their top performing clients achieve industry-leading
cost-income ratios of 25.2% and returns on equity of 25.0%, which is twice the industry
average.
The modular design of both Infinity and Transact, where banking capability is deployable as
separate, productized modules, allows Temenos to deliver capability on any scale within the
bank's ecosystem.
The deployment of Infinity or Transact is different for every bank. Infinity's marketing and user
modules can renew the bank's digital channels by powering a new mobile application. Infinity
can also replace the bank's entire front office, through a phased roll-out that protects
business continuity. Origination or onboarding can be rolled out first, for example, before the
bank migrates its legacy channels and finally its core front office functionality.
Transact brings a similar flexibility to core banking. Transact can perform a set or subset of core processing services for a bank, ranging from retail banking to treasury and Islamic banking.

Temenos stacks
Temenos uses preferred software stacks to help clients deploy Temenos banking software
quickly. Although every customer is different and might want to use alternative software for
certain tasks, the preferred stacks are an ideal starting point for implementations. The
preferred stacks are tested and supported by Temenos Runbooks, technical approvals and
other customer documentation. Figure 1-2 on page 4 shows four IBM variants on Temenos
preferred software stacks, suitable for IBM LinuxONE on-premises deployments.
Figure 1-2 Temenos stacks (courtesy of Temenos Headquarters SA)
Note: Temenos provides minor software releases every month, leading up to and including
a major annual software release. For example, the R19 AMR (Annual Maintenance
Release) includes all post R18 AMR releases up to and including R19 AMR.
IBM LinuxONE and Temenos' cloud native and cloud agnostic strategy both recognize the
importance of choice, flexibility, scalability, security and low TCO to today's digital banks. See
Figure 1-3.
Organizations can now take advantage of the performance enhancements, cost savings and
improved customer experience offered by cloud technologies without sacrificing what already
works well in their platform and architecture.
The core value proposition for running Temenos on IBM LinuxONE is the ability to:
Consolidate strategic workloads (Scale up and Scale out)
Deliver greater than 40% lower TCO over x86 architecture
Deliver on core to digital strategy, inclusive of payments
Scale capacity for future payments and digital growth
Support regulatory requirements for high availability and minimal recovery times in the
event of a disaster
Deliver in various consumption models, whether on-premises, in a hybrid cloud, or on
public cloud
Keep banking secure through:
– Pervasive encryption to encrypt data at rest and data in flight, with the on-chip Central Processor Assist for Cryptographic Functions (CPACF)
– Hardware Security Module (HSM) which provides tamper protection to meet
regulations such as PCI DSS and PSD2
– Logical Partition with EAL5+ isolation to fully separate several workloads
IBM Secure Service Container (SSC) provides restricted administrator access and isolation
for Linux workloads.
For more information about IBM LinuxONE, see the following link:
https://www.ibm.com/it-infrastructure/linuxone
IBM LinuxONE cores are more powerful than x86 cores. A combination of processor
architecture, clock-speed, multiple cache levels, optimization, and I/O offloading differentiates
this machine. Though security and scalability are the key differentiators of these platforms,
the hardware also provides reliability and performance benefits for many important
workloads.
IBM LinuxONE supports core banking and payment services as follows: 1) delivering fast transaction response times for the growing digital channels, and 2) providing high-volume batch and interbank settlement services with speed, scale, security, and data integrity.
Figure 1-4 shows the scalability from a single frame IBM LinuxONE III machine (up to 30
processors) to a 4-frame machine (max 190 processors).
With IBM LinuxONE you can do more with less. IBM LinuxONE is designed to run at near 100% utilization, in contrast to an x86 machine with a utilization of 50% or less. This makes IBM LinuxONE an ideal platform for workload consolidation. A financial benefit can be gained with software priced on a per-core basis. A fully configured IBM LinuxONE server (generation II) is able to run 2 million Docker containers.
Security is critical. With pervasive encryption configured using a tamper-protected Hardware Security Module and IBM Secure Service Container, you can protect your data without needing to change application or database services. IBM Secure Service Container restricts administrator access to help prevent the misuse of privileged user credentials.
IBM LinuxONE is becoming a key element in cloud strategies by integrating the quality of
service features — namely Reliability, Availability, Security, Serviceability, and Data Integrity
— to deliver the required business services without compromise. Temenos, as the core
banking system, benefits from combining with the IBM LinuxONE platform. For example,
encrypting all client data is not a new concept but it can be a slow process. Encryption with a
high level of hardware support (IBM LinuxONE) makes it faster. Pervasive encryption
methods encrypt all data at rest (data placed on storage) and in flight (data currently in
memory or on wire). Hardware-assisted compression, which is provided by IBM LinuxONE III
prior to the encryption process, delivers additional performance.
For more information about Temenos on IBM LinuxONE, read the following white paper,
“Leveraging IBM LinuxONE and Temenos Transact for Core Banking Solutions,” available at
the following link:
https://www.ibm.com/downloads/cas/NEO7QNLJ
1.3 Lab environment testing of Temenos on IBM LinuxONE
Temenos has worked with IBM for many years, supporting various releases of their solutions
on IBM platforms. In Spring 2017, they ported their Transact Core Banking R17 application to
run on Red Hat Enterprise Linux on IBM LinuxONE for a specific customer opportunity. This
was the TAFJ (full Java) release. No problems were encountered so the customer progressed
with acquiring this solution and adopting IBM LinuxONE with a successful deployment of the
application in 2H 2018.
In 2018, Temenos came to IBM’s Montpellier facilities to complete a full technical evaluation,
in a lab environment. Temenos evaluated Transact on IBM LinuxONE II platform for
functional, performance, and platform-specific capabilities. Those tests are noted in the
following list:
Functional testing was conducted on Transact R1802 running as an Oracle WebLogic Server 12.2.1.3.0 application, using the Oracle 12c R2 database platform, on Red Hat Enterprise Linux 7.2 in native LPARs on an IBM LinuxONE II server with integrated I/O channels, using IBM FICON® attached IBM DS8886 disk storage.
Performance testing was conducted to assess the high-water mark of mixed online and
batch retail banking workloads for peak volumes, alongside End of Day/End of Month
batch and High-Volume Throughput (such as capitalization) workloads.
IBM LinuxONE II-specific testing focused on the use of hardware encryption with CPACF
and dm-crypt functionality to migrate the data volumes to fully encrypted disk volumes.
Temenos returned to the IBM Montpellier facilities in 2019 to evaluate, in a lab environment,
the next release of Temenos Transact R1908 on IBM LinuxONE III platform. The 2019 tests
built on what was learned during the 2018 testing and included additional unique features to
IBM LinuxONE. Those tests are noted in the following list:
Functional testing was conducted on Transact R1908 running as an Oracle WebLogic Server 12.2.1.3.0 application, using the Oracle 19c database platform, on Red Hat Enterprise Linux 7.6 guests on the IBM z/VM® 7.1 operating system on an IBM LinuxONE III server with integrated I/O channels, using FCP-attached IBM DS8886 disk storage.
IBM Spectrum Scale storage 4.2.3.18 was used to demonstrate the capabilities of using
directly attached shared storage on IBM LinuxONE III to host high-volume workloads in
conjunction with Temenos Transact.
Performance testing was conducted on an Oracle 19c database — which was expanded to 7 TB with 50 million accounts to assess the high-water mark of mixed online and batch retail banking workloads for peak volumes — alongside End of Day/End of Month batch and High-Volume Throughput (such as capitalization) workloads.
IBM LinuxONE III-specific testing focused on the use of hardware encryption with CPACF
and dm-crypt functionality to migrate the data volumes to fully encrypted disk volumes
using AES-CBC encryption algorithm.
Note: This section and the book provide the configuration of our IBM lab test environment
and some results. Results in your own lab environment can vary.
The lab environment engaged IBM LinuxONE and Oracle SMEs, WW Java Technology SMEs, and the Temenos UK and India development teams.

In summary, the functional results demonstrated that Temenos Transact and Oracle DB on IBM LinuxONE outperform other platforms (such as x86) by delivering benefits in terms of functionality, performance, security risk, and total cost of ownership. This is achieved through these means:
Reducing the number of processing cores for the applications and DB.
Dedicating hardware to accelerate encryption and Java workloads, thereby increasing throughput and decreasing wait times.
Enabling CPU cores to run at higher utilization rates without the reduction in response times seen on other platforms.
Figure 1-5 shows the example architecture used in our lab environment for testing Temenos
on IBM LinuxONE.
Figure 1-5 IBM LinuxONE III four frame configuration.
The LPARs were defined in PR/SM using shared IFLs and had sufficient weight to ensure that all processors were configured as Vertical-High (both for z/VM and native Linux). Vertical-High processing allows the best performance for IBM LinuxONE processors while avoiding the constraint of dedicated processors, which would make changes to the LPAR less dynamic. In this lab environment, the IFL processors were used with and without SMT-2 to minimize the number of physical cores defined, while providing sufficient bandwidth for applications with many threads.
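The polarization of each logical processor can be checked from inside Linux; a minimal sketch (output formats vary by distribution and s390-tools version):

  # Show one row per logical CPU; on IBM LinuxONE the POLARIZATION
  # column reports horizontal or vertical:high/medium/low
  lscpu -e

  # The same information is available per CPU through sysfs
  cat /sys/devices/system/cpu/cpu0/polarization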
In this configuration, database disks (managed by Oracle ASM) were stored on High-Performance Flash Enclosures (HPFE) and Linux root file systems were stored on HDD. HDD was also used to store backups from the system using IBM FlashCopy® services. From a format perspective, RAID 5 was used for all arrays.
Network
In this lab environment, three types of network interfaces were used, depending on the network's primary usage, as shown in Table 1-1.

Table 1-1 Network interfaces
Administration network, 1 Gbps Ethernet (OSA-Express7S): Used for ssh connections, monitoring, consoles, ftp, and so on.
Data network, 10 Gbps Ethernet (OSA-Express7S): Used to transfer data between the injectors and the main components, and for the exchanges between the components of the solution: MQ, WebLogic Server, and database. In an IBM LinuxONE, OSA-Express cards can be shared by several partitions, allowing lower latency for the transfers.
Inter-node network, HiperSockets: Created through the cross-memory technology called HiperSockets, available on IBM LinuxONE. HiperSockets allows low network latency, but the drawback is that this network is not reachable outside the box. Therefore, it is used here for the RAC Interconnect® network.
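The HiperSockets interface for the RAC interconnect can be brought online from Linux with the znetconf tool from s390-tools. A minimal sketch, assuming a HiperSockets device triplet starting at 0.0.7000 (a placeholder; use the device numbers from your own configuration):

  # List network devices that are sensed but not yet configured
  znetconf -u

  # Group and enable the HiperSockets triplet 0.0.7000,0.0.7001,0.0.7002;
  # the resulting interface is then assigned an IP address as usual
  znetconf -a 7000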
1.3.3 Software
Table 1-2 provides the software used in this lab environment.
Persistent shared storage - Spectrum Scale 4.2.3.18 (for Temenos LOGS and Libraries)
Chapter 1. Introduction 11
Table 1-3 Logical architecture
Oracle WebLogic Server: The Oracle WebLogic Server cluster hosted the Transact/TAFJ application and applied the business logic to the incoming workload. During our tests, the cluster had up to 26 active members at the same time, all running the same deployed application. Each cluster member was a separate JVM with its own pool of resources (JDBC, JMS connections).
Database: The database layer was provided by the Oracle RAC component in an active/active configuration. This means that both members were processing the incoming requests from the WebLogic Server application. The JDBC connection was established through the SCAN hostname, which was resolved using DNS to 3 IP addresses, as shown in Figure 1-5 on page 10.
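For illustration, a SCAN-based JDBC connection uses one cluster-wide hostname rather than individual node addresses. The hostname and service name below are placeholders, not values from the lab:

  jdbc:oracle:thin:@//rac-scan.example.com:1521/TEMENOS

Because DNS resolves the SCAN hostname to three IP addresses, the client can reach a listener on either RAC node, and nodes can be added or removed without changing the client configuration.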
1.3.5 Transaction mix

The transaction mix included payment transactions (PAYMENT.ORDER).
1.3.6 Encryption

The aim was to compare a non-encrypted batch (T80516R1) with a batch run with the database disks encrypted (T80516R2).
When the CPU activity was compared between these, the behavior was roughly the same, as
shown in Figure 1-6.
The encrypted scenario shows a difference of 2.06% CPU and 1.9% elapsed time.
To conclude, only the DB disks were encrypted because these disks were processing many I/O operations during the COB test. The encrypted scenario consumed 2.06% more CPU than the non-encrypted one, and its elapsed time to complete was 1.9% higher. This test shows that IBM LinuxONE CPACF allows encryption of disk devices without significant performance impact.
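One quick way to see the CPACF effect on this workload class is the built-in benchmark in cryptsetup, which exercises the same kernel crypto API that dm-crypt uses. A minimal sketch (figures vary by machine generation and kernel):

  # Measure AES-CBC throughput as used by the encrypted volumes in this test;
  # on IBM LinuxONE the AES results reflect CPACF hardware acceleration
  cryptsetup benchmark --cipher aes-cbc --key-size 256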
1.5 Solution Details
Table 1-5 notes the deployments described in this IBM Redbooks Publication.
1.6 Financial Case

To drive even more value from an IBM LinuxONE implementation, IBM offers a Temenos
financial business case service that optimizes cost versus functional / non-functional
requirements to create the correct solution for your organization. The service assesses not
just the organization's core banking platform but the mission critical estate and applications
that are dependent on it.
1.7 Business and Technical Sales Contacts

Contact at IBM:
John Smith
WW Offering Manager for Temenos | Linux Software Ecosystem Team
WW Offering Management, Ecosystem & Strategy for IBM LinuxONE
[email protected]
Contact at Temenos:
Simon Henman
Temenos Product Manager, Benchmarking and Sizing | Temenos UK Ltd
[email protected]
To contact the Temenos sales team or any of the Temenos offices, use the following
information:
Temenos Headquarters SA
2 Rue de l’Ecole-de-Chimie
CH - 1205 Geneva
Switzerland
Tel: + 41 (0) 22 708 1150
https://www.temenos.com/contact-us
An IBM LinuxONE machine is designed to be fully fault tolerant inside. It has built-in redundancy for all hardware components in the CPC, ranging from dual cooling units to multiple entry points to power the system. It is recommended that power is sourced from different locations in the data center to ensure that no single power interruption can cause an outage. Processor drawers also contain spare processors. If a processor fails, it is replaced automatically and concurrently, preventing any loss of processing power and ensuring that the failure does not cause a system outage. The system memory in each processor drawer is RAIM memory, which corrects errors automatically if a memory module fails. Most failures do not cause processing to stop on the system. When a failure occurs, errors are logged and the system contacts IBM so that the hardware team can review the system. If needed, IBM dispatches an engineer onsite to replace the damaged component. Most replacement work is nondisruptive to the system and customer workloads.
On the external side, plan redundant cabling for power, I/O, and network.
There is additional spare capacity installed in the server that is not normally used. IBM offers several kinds of capacity records that allow you to use this additional capacity for defined periods of time. Such capacity-on-demand records can be used to address outages or special workload peaks (such as month-end or year-end processing, or capitalization) for periods from 1 hour up to 90 days.
Additionally, there are more things to consider when planning for high availability (HA) and disaster recovery (DR). The HA and DR concept depends on the number of installed IBM LinuxONE servers. The following sections explain different scenarios.
PR/SM
Processor Resource/Systems Manager (PR/SM) is the first-level virtualization software on the IBM LinuxONE system.
CPACF
CPACF stands for Central Processor Assist for Cryptographic Functions. It enhances the instruction set of the IBM LinuxONE CPU with accelerated instructions for encryption and message digest (hashing) functions. Combined with the Linux dm-crypt functionality, it lets you encrypt all file paths, including cache and log files and Oracle ASM file structures, to provide a fully encrypted file-based system.
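A minimal sketch of this combination, with placeholder device names and the AES-CBC cipher that matches the lab tests described in Chapter 1:

  # Confirm that CPACF is available to Linux: the 'msa' flags indicate
  # the Message Security Assist facilities
  grep features /proc/cpuinfo

  # Format and open an encrypted volume, then create a file system on it
  # (/dev/dasdc1 is a placeholder device)
  cryptsetup luksFormat --cipher aes-cbc-essiv:sha256 --key-size 256 /dev/dasdc1
  cryptsetup luksOpen /dev/dasdc1 enc_data
  mkfs.xfs /dev/mapper/enc_data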
2.1.2 LPARs
Logical partitions (LPARs) are defined directly in the hardware, where an operating system runs natively without any additional hypervisor. This offers the maximum performance for an application or service but takes away some flexibility: resources are more or less dedicated and not shareable. These LPARs are controlled by an internal management system called Processor Resource/Systems Manager (PR/SM).
Figure 2-1 shows a possible configuration using one IBM LinuxONE server.
LPAR considerations
Ensure that the LPARs are spread across the machines, run more than one instance of each service, and distribute the instances over the defined LPARs. This two-server configuration with these basics provides redundancy for both planned and unplanned outages in hardware and software. Figure 2-2 on page 20 shows a two-site configuration.
Note: A quorum is the majority of nodes in a cluster required to run the service. In an
outage, the cluster needs to know what kind of outage happened and how to continue the
service. A quorum decision is based on the predefined rules that an organization sets. It
often requires some additional coding to make the correct quorum decision or to reach a target state (stop or start services somewhere).
A second, less robust method is to define only one disk storage subsystem as the primary. The disk storage for DR has some amount of cache installed but receives only write updates from the primary storage. In this case, the installed cache is not used by the workload, and the cache in the primary storage must carry the full workload. The same principle applies to the processor caches in the IBM LinuxONE servers.
Quorum placement
The placement of the quorum site is important. It needs to be as independent as possible from the other sites, using separate power, an independent network, and so on. Locating it at a different site location provides stronger autonomy. Another option is to place the quorum in a separate computing room at one site to reduce dependency (shared cooling or power supply).
The quorum itself needs less capacity because it supervises only the main computing sites and addresses split-brain situations. For example, if one of your computing sites has a failure, another surviving site takes over the workload automatically. However, in such a situation the surviving site does not know whether the failed site is completely down or has simply lost connectivity.

Split-brain situations are where both sites are fully available but have lost their intercommunication. Both sites have full access to the data but run in an uncoordinated way, which can damage the data. In this situation, to get certainty, you need to ask a third party, such as the quorum site. The quorum site first checks its own connectivity to all sites. If it has lost connectivity too, the remaining site receives the quorum and performs the takeover automatically. If the quorum site has connectivity to the failed site (a split-brain situation), there must be additional rules in place for how to continue the operation while ensuring data integrity.

The quorum site's function is to arbitrate a decision, so it needs only limited capacity in terms of CPU power and disk space. A quorum cluster must always have an odd number of nodes. In case of a split-brain network failure between the cluster nodes, the portion that holds more than half of the nodes that the cluster initially had — including the quorum nodes — receives the quorum and continues to run. This portion is referred to as the island.
The functions that can use quorum are noted in the following list:
Server Time Protocol (STP) can use a quorum (called the Arbiter) to select the preferred time server.
A clustered file system (GPFS or IBM Spectrum Scale) uses quorum to maintain data consistency in the event of a node failure; see the sketch after this list.
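For the Spectrum Scale case, quorum nodes are designated explicitly. A minimal sketch with placeholder node names (an odd total, such as one node per computing site plus one at the quorum site):

  # Designate an odd number of quorum nodes in the cluster
  mmchnode --quorum -N site1node,site2node,quorumnode

  # Verify which nodes carry the quorum designation
  mmlscluster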
Note: More detail about I/O configuration on IBM LinuxONE can be found in 3.2, “Machine
configuration on IBM LinuxONE” on page 55.
DPM supports most IBM LinuxONE hardware, including OSA Express and FICON Express
adapters.
At the time of this publication, DPM does not support two IBM LinuxONE functions that are important to the reference architecture described in this book. These functions are:
FICON channel-to-channel (CTC) devices
The LPAR to run a GDPS Virtual Appliance
The GDPS Virtual Appliance described in 2.4.6, “GDPS and Virtual Appliance (VA)” on
page 40 requires a specific configuration of the LPAR the appliance runs in. DPM cannot
configure an LPAR to support the GDPS Virtual Appliance.
Note: Using IODF or DPM mode does not change how operating systems run on IBM LinuxONE; it changes only the way that partition configuration is performed. The same underlying LPAR management firmware (PR/SM) is used in either mode. You can choose to use DPM in support of a Temenos implementation on IBM LinuxONE, but you will need to change aspects of the architecture to cater for functions that cannot be used as a result. The needed changes, such as substituting a different Linux clustering capability to make up for not having hypervisor clustering (z/VM SSI), are not described in this book.
STP is a proprietary protocol and requires dedicated adapters and extra cabling. The z/VM members in the SSI cluster are synchronized with the same time source.
TCP/IP is still used to establish the connection and provide security; however, SMC eliminates TCP/IP processing in the data path. The TCP/IP connection can be provided using OSA or HiperSockets connectivity. After the initial handshake is complete, communications then use sockets-based SMC.
SMC-R is a protocol solution based on sockets over RDMA, described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7609 publication. It is confined to socket applications that use Transmission Control Protocol (TCP) sockets over IPv4 or IPv6. The SMC-R solution enables TCP socket applications to transparently use RDMA, which enables direct, high-speed, low-latency, memory-to-memory (peer-to-peer) communications.
Communicating peers, such as TCP/IP stacks, dynamically learn about the shared memory
capability by using the traditional TCP/IP connection establishment flows. This process
enables the TCP/IP stacks to switch from TCP/IP network flows to optimized direct memory
access flows that use RDMA.
Figure 2-5 shows the SMC-R communication flow between two hosts. Using TCP options during the TCP synchronization operation, the peers determine whether both hosts support the SMC-R protocol, and then establish the RoCE network path.
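On Linux, the smc-tools package provides a preload wrapper that lets unmodified TCP socket applications attempt SMC with transparent fallback to TCP. A sketch, assuming smc-tools is installed and the RDMA devices are configured (the application name is a placeholder):

  # Start an existing TCP application over SMC via the preload library
  smc_run ./my-tcp-server

  # Display SMC socket state, including connections that fell back to TCP
  smcss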
Figure 2-6 on page 25 shows a test by the IBM Competitive Project Office team. The test compares the garbage collection pause times, response times, and throughput for the same test workload running on an IBM LinuxONE server and on Linux on an x86 server.
Figure 2-6 Garbage collection pause data: IBM LinuxONE as related to x86.
The DS8950F and DS8910F offer the same functional capabilities and share the same set of
built-in caching algorithms, which enhances disk system performance. The models differ on
available physical resources such as those noted in the following list:
The internal POWER9 processors
The number of processor cores
Total system memory (cache) capacity
Flash drive sizes
Total usable capacity
The number of host adapters (which are used for FICON I/O, Fibre Channel I/O, and Fibre
Channel-based disk replication)
Available zHyperLink (ultra-low latency direct host to disk I/O connectivity) ports
The IBM DS8900F family supports IBM LinuxONE hardware and operating systems, including the IBM hypervisors z/VM and KVM. The supported native Linux operating systems include Red Hat, SUSE, and Ubuntu.
Both the DS8950F and DS8910F offer the same set of industry-leading Copy Services.
These services can be implemented into unique 2-site, 3-site, and 4-site solutions to provide
high availability, disaster recovery, and logical corruption protection. Copy Services include
the following assets:
IBM FlashCopy (point-in-time copy)
Metro Mirror (synchronous copy)
HyperSwap (dynamic automated failover between primary and secondary disk systems
for planned and unplanned outages)
Global Mirror (asynchronous copy)
Metro Global Mirror (sync and async copy)
Safeguarded Copy (logical corruption protection)
Copy Services replication management is available through IBM Copy Services Manager
(application), IBM GDPS (services offering), or through custom scripting.
To learn more about the DS8900F family, visit the Redbooks offerings available at the
following link:
http://www.redbooks.ibm.com
The channel adapters in the IBM LinuxONE machine operate either in FICON mode or in FCP mode. Volumes in ECKD format on an enterprise-class storage subsystem offer some additional useful features.

FBA is more commonly associated with SAN or SCSI-based storage controllers and can be attached to IBM LinuxONE based systems. ECKD mode requires the use of IBM DS8000 or HDS/EMC enterprise-class storage controllers to provide direct-attach storage performance. The IBM LinuxONE III server introduced the use of FCP32S ports capable of 32 Gbps data transfer rates. FBA and ECKD storage can be attached and operated in parallel. When using GDPS VA for HA/DR between two or more sites, only ECKD storage mode is permitted.
An earlier high-performance, high-capacity disk storage system in the DS8000 series is the IBM System Storage DS8800, which uses IBM POWER6+ processor technology to help support higher performance.
The DS8000 series support functions such as point-in-time copy functions with IBM
FlashCopy, FlashCopy Space Efficient, and Remote Mirror and Copy functions with Metro
Mirror, Global Copy, Global Mirror, Metro/Global Mirror, IBM z/OS Global Mirror, and z/OS
Metro/Global Mirror. Easy Tier functions are supported also on DS8800 storage units.
All DS8000 series models consist of a storage unit and one or two management consoles,
two being the recommended configuration. The graphical user interface (GUI) or the
command-line interface (CLI) provide the ability to logically partition storage and use the
built-in Copy Services functions. For high availability, the hardware components are
redundant.
The IBM Tivoli® Key Lifecycle Manager (TKLM) software performs key management tasks for
IBM encryption-enabled hardware, such as the DS8000 series. TKLM protects, stores, and
maintains encryption keys that are used to encrypt information being written to, and decrypt
information being read from, encryption-enabled disks. TKLM operates on a variety of
operating systems.
The DS8000 supports both the ECKD and FBA disk formats together. ECKD requires a FICON attachment; if you plan to run FICON over a SAN, the SAN switches must support the FICON protocol.
The advantages of IBM all-flash storage systems are that they are engineered to meet modern high-performance application requirements. Ultra-low latency, cost-effectiveness, operational efficiency, and mission-critical reliability are built into every product.

For building high-performance storage systems that must efficiently deliver consistently low latency and high throughput for mixed enterprise workloads, NVMe is undoubtedly a better core technology than SCSI. NVMe was developed specifically for flash storage, allowing much more efficient use of flash performance and capacity. Going forward, NVMe will be the I/O protocol of choice to support emerging memory technologies such as storage-class memory. NVMe generally delivers at least 50% lower latency on a per-device basis and up to an order of magnitude higher bandwidth and throughput. IBM storage systems are being expanded to use NVMe.
If there is no aliasing of disk devices, only one I/O transfer can be in progress to a device at a time, regardless of the actual capability of the storage server to handle concurrent access to devices. Parallel access volume support exists for Linux on IBM LinuxONE in the forms of PAV and HyperPAV (HPAV).

PAV is a static assignment of one or more aliases to a single base ECKD disk. HPAV is a dynamic assignment of an alias to a base disk during an I/O operation. HPAV follows the current workload needs and requires fewer alias devices compared to PAV. HPAV is recommended in most cases.

A suggested starting point for configuration is 8-16 HPAV devices per logical control unit in the DS8000. If PAV is used, as a starting point define the same number of PAV devices as base volumes (a 1:1 relation). A sketch of enabling HPAV devices from Linux follows.
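On the Linux side, HyperPAV alias devices only need to be brought online; the DASD driver associates them with their base volumes automatically. A minimal sketch with placeholder device numbers:

  # Bring the base volume and its alias devices online
  chccwdev -e 0.0.7300
  chccwdev -e 0.0.73f0-0.0.73ff

  # List DASD devices; alias devices are reported separately from base volumes
  lsdasd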
The TCW (Transport Control Word) enables multiple channel commands to be sent to the control unit as a single entity. The channel forwards a chain of commands and does not need to keep track of each single CCW. This leads to a reduction in FICON overhead and increases the maximum I/O rate on a channel. The performance improvement depends on your workload.
There are also non-enterprise distributions available for IBM LinuxONE (such as ClefOS (CentOS), openSUSE, and Fedora). These distributions do not provide enterprise support.

IBM also offers z/VM as a hypervisor for Linux on IBM LinuxONE. z/VM builds on over 40 years of virtualization development, and throughout these decades IBM has refined z/VM to run Linux with the best possible performance.
2.2.3 Ubuntu

Ubuntu is the youngest distribution on this platform; it started with version 16.04 in April 2016, shortly after the announcement of IBM LinuxONE in 2015. Ubuntu is sponsored by Canonical, which was founded in 2004 in the UK. Every even-numbered year, Canonical announces a Long Term Support (LTS) version; interim versions are also provided. Only the LTS versions of Ubuntu are officially supported for IBM LinuxONE. The lifecycle for an LTS release is 5 years, after which optional security maintenance extensions are available.
2.2.4 z/VM

z/VM is an operating system developed by IBM and first announced in August 1972. It was designed to run workloads in a virtualized environment. When Linux arrived on the platform, z/VM began to transform into a powerful hypervisor for Linux. z/VM evolves continuously to extract the most performance for Linux on IBM LinuxONE.
2.3 Software
The following sections discuss the required and optional software components needed to run
Temenos on the IBM LinuxONE server. This section further discusses how to get the most
from the facilities of an IBM LinuxONE server.
2.3.1 RACF
Resource Access Control Facility (IBM RACF®) is a software security product that controls
access to protect information. This software also does the following actions:
Controls what a person can do on the operating system and therefore protects resources
Provides this security by identifying and verifying users
Authorizes users to access protected resources
Records and reports access attempts
Red Hat JBoss Enterprise Application Platform (EAP) is based on the open source application server project WildFly. It in turn uses Enterprise JavaBeans APIs and containers to manage its transactions and business logic. Applications can be deployed in a variety of environments, including physical, virtual, private, and public clouds.
2.3.6 IBM MQ
IBM MQ can transport any type of data as messages, enabling businesses to build flexible,
reusable architectures such as service-oriented architecture (SOA) environments. It works
with a broad range of computing platforms, applications, web services, and communications
protocols for security-rich message delivery. IBM MQ provides a communications layer for
visibility and control of the flow of messages and data inside and outside your organization.
IBM MQ is messaging and queuing middleware, with several modes of operation including
point-to-point, publish/subscribe, and file transfer. Applications can publish messages to
many subscribers over multicast.
The following list gives further details about some aspects of IBM MQ:
Messaging
Programs communicate by sending each other data in messages rather than by calling
each other directly.
Queuing
Messages are placed on queues, so that programs can run independently of each other,
at different speeds and times, in different locations, and without having a direct connection
between them.
Point-to-point
Applications send messages to a queue, or to a list of queues. The sender must know the name of the destination but does not have to know where it is (see the sketch after this list).
Publish/subscribe
Applications publish a message on a topic, such as the result of a game played by a team.
IBM MQ sends copies of the message to applications that subscribe to the results topic.
They receive the message with the results of games played by the team. The publisher
does not know the names of subscribers, or where they are located.
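As a brief sketch of both styles (queue manager, queue, and topic names are placeholders), the corresponding objects can be created with standard MQ commands:

  # Create and start a queue manager
  crtmqm QM1
  strmqm QM1

  # Point-to-point: define a local queue that senders and receivers share
  echo "DEFINE QLOCAL('DEV.QUEUE.1')" | runmqsc QM1

  # Publish/subscribe: define an administrative topic object for a topic string
  echo "DEFINE TOPIC('RESULTS') TOPICSTR('/team/results')" | runmqsc QM1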
2.3.7 Databases

The IBM LinuxONE platform is one of the best systems to host databases. IBM LinuxONE has greater per-core processing power than current x86 processors, and it is capable of Simultaneous Multi-Threading (SMT), which allows more instructions through the same processor.
Each virtual Linux guest can be assigned enough memory to allow most data transactions to
happen in real memory. This provides for increased transaction speeds and allows the
customer to reduce the number of virtual servers and processors needed to service database
transactions.
Because of the ability to create Linux guests with large memory sizes, major cost savings in
database software licensing can be realized. Performance gains are also noticed because
you do not have to shard large databases across multiple systems.
There are several database options available for running the Temenos Transact core banking database on IBM LinuxONE:
IBM Db2®: A relational database for transactional processing, designed for high availability and scalability with high performance. Features such as IBM BLU® Acceleration® for in-memory column organization and selective compression further optimize database performance. IBM Db2 can also be optimized for data analytics and AI processing.
Oracle Database (Oracle DB): The preferred database for Temenos Transact on IBM LinuxONE with the traditional deployment architecture. Oracle Real Application Clusters (RAC) is used to ensure high availability (HA) of the database in the event of an outage on one of the database nodes.
Linux KVM is also supported on IBM LinuxONE. KVM is integrated into the Linux distribution of your choice. For Temenos workloads, Temenos currently certifies the use of the Red Hat Enterprise Linux distribution only.

Whether you use z/VM or KVM (or a combination of both) is a decision you need to make for your environment. There are some support considerations in making the decision. The following sections provide some information about the choices available.
The z/VM hypervisor provides deep integration with the IBM LinuxONE platform hardware
and provides rich capabilities for system monitoring and accounting.
z/VM, as the hypervisor for your Linux systems, gives you the ability to share resources in a granular way. It has a long history of sharing system resources with one to many Linux guests running in an LPAR. IBM z/VM was developed to give the Linux guest access to system resources with little hypervisor overhead. z/VM offers a feature to cluster up to four members into a Single System Image (SSI). A useful feature within an SSI cluster is Live Guest Relocation (LGR), which moves a running guest to another z/VM member without interruption. This enables you to perform maintenance on z/VM without stopping your workload.
2.4.2 KVM

KVM stands for Kernel-based Virtual Machine. KVM for IBM LinuxONE is an open source virtualization option for running Linux-centric workloads that uses common Linux-based tools and interfaces. KVM takes full advantage of the robust scalability, reliability, and security that are inherent to the IBM LinuxONE platform. The strengths of the IBM LinuxONE platform have been developed and refined over several decades to provide additional value to any type of IT-based services.
Using KVM as a hypervisor allows an organization to use existing Linux skills to support a virtualized environment. Though KVM provides flexibility in the choice of management tools that can be used, it does not provide as deep a platform integration as z/VM.
KVM provides a facility for relocating virtual machines between KVM hosts.
Note: Oracle does not support their product for use with KVM on IBM LinuxONE, therefore
it is not recommended to use KVM with any Oracle product in a production environment.
A libvirt-based manager is usually the first choice for managing KVM on IBM LinuxONE. There are a number of choices, including virt-manager, a Python-based graphical tool that allows some configuration of hypervisor resources (network connections, disk pools, and so on) and access to and control of virtual machines. Kimchi, a web-based utility that uses libvirt, also works with KVM on IBM LinuxONE and can be used to manage many aspects of hypervisor operation.
A KVM host is a Linux system that runs as a hypervisor. Therefore, managing the basic Linux aspects of a KVM host is essential. Normal Linux command-line utilities can be used for this, but there are other interesting options. One such option is Cockpit, which provides an HTML-based management interface for many aspects of a Linux server. Cockpit is installed by default in RHEL 8.
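Although Cockpit is installed by default on RHEL 8, its web service socket might still need to be enabled. A minimal sketch:

  # Enable and start the Cockpit web console,
  # then browse to https://<kvm-host>:9090 and log in with system credentials
  systemctl enable --now cockpit.socket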
Red Hat OpenStack provides another option for virtual machine management with KVM. This is more complicated, however, because a complex set of services needs to be configured on the KVM host to allow it to be managed by a Red Hat OpenStack orchestrator.
Migration of memory pages is done over TCP connections, over the standard network interfaces of the KVM host. By default, there is no encryption of this TCP connection, so the memory of the guest being migrated appears on the network in the clear. Also, problems in the network during migration can prevent it from completing successfully. Conversely, the bandwidth used by the migration might interfere with other services on the network.
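A sketch of a live migration with virsh (guest and host names are placeholders). An SSH transport protects the libvirt control channel; adding the tunnelled peer-to-peer options routes the memory stream through that connection as well, addressing the cleartext concern at some cost in throughput:

  # Basic live migration; the memory stream travels unencrypted by default
  virsh migrate --live guest1 qemu+ssh://kvmhost2/system

  # Tunnelled migration: memory pages flow through the encrypted libvirt connection
  virsh migrate --live --p2p --tunnelled guest1 qemu+ssh://kvmhost2/system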
A key difference between using a bridge or a MacVTap is that MacVTap connects directly to
the network interface in the KVM host. This direct connection effectively shortens the
codepath by bypassing much of the code and components in the KVM host associated with
connecting to and using a software bridge. This shorter codepath usually improves
throughput and reduces latencies to external systems.
Figure 2-7 on page 36 shows the basic structure of a cluster with four members. The cluster
is self-managed by the nucleus using ISFC messages that flow across channel-to-channel
(CTC) devices between the members. All members can access shared DASD volumes, the
same Ethernet LAN segments, and the same storage area networks (SANs).
A useful feature in an SSI cluster is Live Guest Relocation (LGR). LGR can move a running Linux guest to another member in the cluster, as shown in the sketch after this list. This can be useful in the following situations:
A member is memory- or CPU-constrained
You can either move more hardware resources to this member or move heavy or important guest systems off this member.
Applying service to the z/VM hypervisor
After the installation of fixes or a recommended service, you need to restart the z/VM member. LGR moves the active Linux guests away without taking them down, so you can restart this z/VM member without stopping your services.
Distributing Linux guests
You can move some Linux guests away to prepare a member for a special workload, such as Ultimo. You can also distribute the guests to reach a nearly equal utilization of your z/VM members.
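A sketch of the corresponding CP commands, issued from a suitably privileged z/VM user (the guest name LINUX01 and the member name MEMBER2 are placeholders). The TEST form verifies eligibility without moving anything:

  vmrelocate test linux01 to member2
  vmrelocate move linux01 to member2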
Figure 2-8 on page 37 shows a sample panel. All the panels can be tailored individually.
Figure 2-8 Sample window of IBM Tivoli OMEGAMON XE on z/VM and Linux.
You can define and control network, storage, and communication devices, and view server and storage utilization graphically with customizable views. It also allows you to monitor and manage your system from a single point of control. The low-maintenance agentless discovery process detects servers, networks, storage, and more. You have the flexibility to instantly provision, clone, and activate virtual resources, optionally using scripts for additional customization. Routine management tasks are performed with ease. Accountability is built in, as you can assign and delegate role-based administrative access with an audit trail of all activities performed. See Figure 2-10 on page 39.
GDPS was initially designed for z/OS and extended to include z/VM and Linux running in z/VM. For IBM LinuxONE-only platforms, this solution has become known as GDPS VA (Virtual Appliance). GDPS is a collection of offerings, each addressing a different set of IT resiliency goals, that can be tailored to meet the recovery objectives for your business.
With IBM GDPS, you can be confident that your key business applications will be available
when your employees, partners, and customers need them. Through proper planning and
maximization of the IBM GDPS technology, enterprises can help protect their critical business
applications from an unplanned or planned outage event.
When using ECKD formatted disk, GDPS Metro can provide the reconfiguration capabilities
for Linux on IBM LinuxONE and data in the same manner as for z/OS systems and data.
GDPS Metro can be used for planned and unplanned outages.
For a pure IBM LinuxONE environment, IBM offers the GDPS Virtual Appliance (GDPS VA). It includes a predefined, black-box LPAR running z/OS with GDPS policies, but requires no z/OS knowledge. In an IBM LinuxONE server, it requires that one processor is configured as a general-purpose engine (CP). GDPS VA runs with this limited CP capacity, and it is the only authorized workload for this processor.
Figure 2-11 on page 40 shows an IBM LinuxONE server with one LPAR running GDPS VA.
If you plan to implement GDPS and HyperSwap, the ECKD disk format is a requirement.
GDPS can be a key component addressing your Recovery Point Objective (RPO) and
Recovery Time Objective (RTO) requirements.
The functions provided by the GDPS Virtual Appliance fall into two categories: protecting your
data and controlling the resources managed by GDPS. These functions include the following:
Protecting your data:
– Ensures the consistency of the secondary data if there is a disaster or suspected
disaster, including the option to also ensure zero data loss
– Transparent switching to the secondary disk using HyperSwap
– Management of the remote copy configuration
Controlling the resources managed by GDPS during normal operations, planned changes,
and following a disaster:
– Monitoring and managing the state of the production Linux for IBM LinuxONE guest
images and LPARs (shutdown, activating, deactivating, IPL, and automated recovery)
– Support for switching your disk, systems, or both to another site
– User-customizable scripts that control the GDPS Virtual Appliance action workflow for
planned and unplanned outage scenarios
For more information about GDPS and Virtual Appliance, visit the following link to download
“IBM GDPS: An introduction to Concepts and Capabilities:”
https://www.redbooks.ibm.com/redbooks/pdfs/sg246374.pdf
2.4.7 HyperSwap
HyperSwap provides the ability to dynamically switch to secondary volumes without requiring
applications to be quiesced. Typically done in 3–15 seconds in actual customer experience,
this provides near-continuous data availability for planned actions and unplanned events.
GDPS can perform a graceful shutdown of z/VM and its guests and perform hardware actions
such as LOAD and RESET against the z/VM system's partition. GDPS supports taking a PSW
restart dump of a z/VM system. Also, GDPS can manage CBU/OOCoD for IFLs and CPs on
which z/VM systems are running.
The following components and minimum release levels are certified to run Temenos Transact.
Red Hat Enterprise Linux 7
Java 1.8
IBM WebSphere MQ 9
Application Server (noted in the following list)
Oracle DB 12c
The Temenos Transact software is Java-based and requires an application server to run. There are several application server options:
IBM WebSphere 9
Red Hat JBoss EAP
Oracle WebLogic Server 12c (JDBC driver)
The Temenos Stack Runbooks provide more information about using Temenos stacks with
different application servers. Temenos customers and partners can access the Runbooks
through either of the following links:
The Temenos Customer Support Portal: https://tcsp.temenos.com/
The Temenos Partner Portal: https://tpsp.temenos.com/
IBM LinuxONE III has several hardware and firmware features for running Java-based
workloads. These features would apply to Temenos applications running TAFJ. The
performance benefits were based on using IBM SDK for Java 8 SR6 and are described in the
informational graphic in Figure 2-12.
Infinity uses APIs, rather than tight coupling, to connect to the bank's core banking platform, its peripheral systems, and independent resources (such as Temenos Analytics or Risk and Compliance). This helps Infinity fit anywhere, and allows the bank to swap in, swap out, or develop resources as its digital strategy evolves. Temenos open APIs (a readymade repository of over 700 customizable banking APIs), backed by a developer portal, make it straightforward for banks to bring innovative third-party providers into the customer experience.
Infinity is design led, making it easier for banks to acquire, service, retain, and cross-sell to customers based on their needs. Infinity's design tools allow banks to quickly adjust or create products and services that directly address customer needs. (Courtesy of Temenos Headquarters SA)
Higher satisfaction rates are partly about speed: for example, one credit union achieved loan application times of between 3 and 15 minutes, from application to funding. But the deeper explanation for Infinity's agility and flexibility lies in the decision to bring the marketing catalog, product details, and banking processes out of Transact (the core banking product) and into a completely stand-alone front office.
Features
Temenos Infinity has the following features:
Deployable in part or in whole, independent of the core banking system in use
Easily integrated with other Temenos products, third-party providers and the bank's own
peripheral systems
Designed to offer a consistent customer experience across all banking capability, from
acquisition and origination through to mobile banking and customer retention
Design led, allowing banks to quickly adapt or create products and services that directly
address customer needs
Built to maximize analytics and AI to improve business agility and the customer
experience
Since its creation over 25 years ago, Temenos has committed a significant proportion of its
annual revenue to improving and expanding the functionality of Transact. For example, 148
enhancements were delivered to Transact in 2018 alone. Recent product announcements
include Advanced Cash Pooling, Personalized Pricing and Fee Bundles, Capital Efficient Risk
Limits, Analytics packs for the Retail, Corporate and Wealth sectors, and additional PSD2
and Customer Data Protection support.
Temenos has always emphasized its commitment to relentless innovation. But what this
continuous development also recognizes is that banks need responsiveness in their core
banking solution, not just their front office, to meet the challenges of Open Banking and the
new technologies.
Transact, like Infinity, operates on a cloud-native, cloud-agnostic technology platform. This doesn't commit organizations to a future in the cloud, of course, but it does allow banks to keep their options open, maximize cloud technologies in on-premises deployments, and avoid vendor lock-in when the time is right to move to the cloud.
The API led design of Transact allows banks to deploy the product independently of the front
office. APIs make it easier to integrate Transact with the bank's wider ecosystem, including
third-party providers, and even extend and modify the behavior of its banking capability. The
introduction of open APIs, covering every aspect of core banking, together with a dedicated
API Developer portal, helps banks maximize Fintech innovations and tailor their products to
customer needs.
Temenos helps its core banking customers accelerate product development and shorten
upgrade cycles still further through Temenos Continuous Deployment. This managed service
applies the latest DevOps methodology to the design and delivery of functional innovations
within the bank's implementation of Transact. Enhancements from both Temenos and the
bank are automatically assembled, tested, and delivered in very short cycles, dramatically
increasing the bank's ability to respond to both changes in the marketplace and new
opportunities.
(Courtesy of Temenos Headquarters SA)
The minimum recommendation is an IBM LinuxONE configuration with two servers. This enables both high availability and disaster recovery functions. Install them at a separation distance as described in 2.1, “Hardware” on page 18. The configuration should provide sufficient resource capacity on both servers for running production and non-production environments. These resources include processors, memory, I/O channels, networking products (such as SAN switches), security firmware, and both primary and secondary storage controllers and disks.
LPARs
The target is to run a z/VM SSI cluster with four members for production workload. Also, plan
additional z/VM SSI clusters for your additional stages (test, pre-production, and so on).
Distribute the LPARs equally across your IBM LinuxONE servers.
Each LPAR has a weight factor. The weight ultimately determines the share of capacity that the LPAR receives. This calculation is not easy because the weight is a relative value: it is relative to the other active LPARs. The key message is the phrase active LPAR; any LPAR that is defined but not activated does not enter this calculation. Sum the weights of all active LPARs; this sum represents 100%. Then divide the LPAR's weight by the sum, and the result is the percentage of the machine that this LPAR is able to receive (a worked example follows the list). Note that this is only a good approximation, as more items influence the share. Some of those factors are identified in the following list:
Too few logical processors assigned to the LPAR, which then cannot use all of the share
A processor-bound workload instead of an I/O-bound workload
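As a worked example with illustrative numbers only: consider three active LPARs with weights 500 (production), 300 (pre-production), and 200 (test). The sum of the active weights is 1,000, so the production LPAR is entitled to 500 / 1000, or 50%, of the shared processor capacity. If the test LPAR is deactivated, the sum drops to 800 and the production LPAR's entitlement rises to 500 / 800, or 62.5%, even though its weight did not change.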
Processor cores
Depending on the model of the IBM LinuxONE server, there is a maximum number of cores (IFLs) available. The best way to get maximum flexibility in computing power is to define the cores as shared and assign a number of shared cores to each LPAR. Do not dedicate cores to LPARs; if a dedicated core is not fully utilized, its remaining capacity is lost to the other LPARs. Plan for Capacity on Demand (either Capacity BackUp (CBU) or
On/Off Capacity on Demand (OOCoD)) in the activation profile based on the variability of your
peak workloads and planned or unplanned downtimes. These logical processors are initially
offline but they can be activated dynamically as your workload demands change. With these
settings, you gain additional flexibility in the logical processor assignment.
Note: Software licensing must be considered when defining shared IFLs, OOCoD, and
CBU with products that are core licensed with the total number of IFLs available.
Each IFL has a predetermined capacity based on the server model and capacity marker indicator. These processors run at 100% capacity all the time. By enabling Simultaneous Multithreading (SMT-2), you can increase the concurrent transaction processing capacity, depending on the individual task assignment. For Java-based workloads (and according to Temenos testing in 2017), SMT-2 enablement increased transaction throughput by approximately 25%.
IBM LinuxONE has an internal function called HiperDispatch (HD) which attempts to utilize
the cores as effectively as possible and to manage dispatching of the LPAR workload.
HiperDispatch monitors the cache misses in the internal processor caches. A cache miss
means that the core waits and does nothing until the data is available in cache. The fewer cache misses that occur, the more effective the core is. The fewer LPARs a core has to work for, the fewer cache misses occur. This is exactly the function of HD: HD tries to always dispatch an LPAR on the same physical cores. If this is not possible, HD parks a logical processor for a specific LPAR to reduce the number of LPARs one core is working for. HD performs this calculation periodically, so any change in the current workload (prime shift, off shift, peaks, and so on) is incorporated in the calculation.
This calculation is called vertical assignment. The following list shows the four levels of
vertical assignment ranked from highest to lowest performance for a core:
1. Vertical High
2. Vertical Medium
3. Vertical Low
4. Parked
Cores assigned to vertical high allow the best performance when using IBM LinuxONE processors.
Processor memory
The IBM LinuxONE server has a specific amount of memory installed. Calculate the memory requirements of your workload for each application instance and assign these portions of memory to the corresponding LPARs in the activation profile. z/VM offers the possibility to overcommit memory, but do not apply this to your production environment(s).
It is also recommended to plan for reserved storage for each LPAR in the activation profile. Doing so enables you to add memory to or shift memory between LPARs without taking them down.
If you still have memory left at the end of your calculation, do not keep this memory unused in
the machine. Assign this portion again to the LPARs. System memory that is not used by an
application will be used for caching (both in z/VM and in Linux).
2.6.2 Network
IBM LinuxONE offers several types of external and internal network interfaces as noted in the
following list:
Open Systems Adapter (OSA)
HiperSockets (HS)
Shared Memory Communication (SMC)
Both z/VM and Linux have the ability to group these interfaces. OSA also supports VLAN for
network separation.
HiperSockets (HS)
HiperSockets provides a virtual high-speed low-latency network technology within the IBM
LinuxONE server. LPARs in an IBM LinuxONE server can be attached to a HiperSockets
channel, which functions like an IP network. Up to 32 HiperSockets networks can be created
on an IBM LinuxONE server, with each HiperSockets network also providing VLAN support
for further traffic separation if needed.
Another key feature of HiperSockets is support for large transmission frame sizes, which can
allow for an IP Maximum Transmission Unit (MTU) of up to 56 kilobytes to be used.
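As a hedged sketch, the large MTU can be set on the Linux HiperSockets interface. The interface name encf500 is hypothetical, and an MTU of 57344 bytes (56 KB) assumes the HiperSockets channel was defined with the largest frame size:

ip link set dev encf500 mtu 57344
ip link show dev encf500        # verify the new MTU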
All TCP connections start with the three-way handshake, the standard flow that starts a TCP connection. Systems that support SMC include extra information about the SMC support they each have in their handshake packets. If it is determined that they are both attached to the same SMC environment, they switch their data transfer from the Ethernet path used to start the connection over to SMC.
Note: The RoCE Express2 feature, available on IBM LinuxONE III, IBM LinuxONE
Emperor II, and IBM LinuxONE Rockhopper II servers, can be used by Linux to support
both standard Ethernet-based IP communication and SMC-R.
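As a hedged sketch, the smc-tools package lets unmodified TCP applications use SMC through a preload library and provides a utility to display SMC sockets; the server path shown is hypothetical:

smc_run /opt/app/server       # start an application with the SMC preload library
smcss                         # list sockets and show whether SMC is in use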
z/VM VSWITCH
VSWITCH virtualizes an OSA by connecting a real OSA to it and defining a virtual NIC device
to the guest. It allows easy and central management of the network connectivity for the guest
and it offers several security functions such as network separation and isolation. This simple
management is useful especially if you clone Linux guests or install them from templates.
Each guest must be granted use of VSWITCH. This grant is granular and can be qualified
down to VLANs or port numbers.
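The following hedged sketch shows the general shape of these definitions; the switch name, device numbers, user ID, and VLAN number are all hypothetical:

CP DEFINE VSWITCH VSW1 RDEV 1000 2000 ETHERNET
CP SET VSWITCH VSW1 GRANT LINUX01 VLAN 100

In the guest's directory entry, a simulated NIC is then coupled to the VSwitch, for example with a NICDEF 0600 TYPE QDIO LAN SYSTEM VSW1 statement.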
To achieve redundancy, a VSWITCH can drive up to three OSA ports in active-backup mode. Link aggregation support provides the ability to aggregate physical OSA-Express features for use by a single virtual switch (Exclusive mode) or by multiple virtual switches (Multi-VSwitch LAG mode).
Both Exclusive and Multi-VSwitch LAG configurations provide the same industry-standard IEEE 802.3ad Link Aggregation protocol support with an external partner switch. Both LAG configurations are completely transparent to the z/VM guest, which connects through a simulated NIC attached directly to the VSwitch.
For general networking, netiucv has been replaced by the z/VM Guest LAN and Virtual
Switch. Because Guest LANs and VSwitches implement Ethernet-like networks, they are
generally simpler to configure and more flexible. Note that netiucv links are point-to-point, so
connecting many virtual machines using netiucv is cumbersome. However, there are still
cases where netiucv can be used:
Restricted Linux administrative access
An extra Linux instance can be installed offering administrative access to all the other
Linux systems in the z/VM SSI cluster using IUCV. This network is isolated from the LAN.
You can also use this technique to create a secure transport between a Linux system and
z/VM service virtual machines (such as the System Management API servers).
Heartbeat network
A heartbeat network is a private network within a cluster, shared only by specific cluster nodes. It is used to monitor each individual node in the cluster and for coordination within the nodes. A heartbeat network needs to be as distinct and as separate as possible from the primary LAN network. IUCV is a recommended choice as a fallback or emergency heartbeat because it does not require a physical interface and runs through the CTC connections within the SSI cluster.
Using the IUCV-based console system, one or more Linux guests can be configured to
connect to all others using IUCV. On all the other guests, a terminal manager program
(usually agetty or mingetty) is run against the /dev/hvc0 device that represents the IUCV
connection. From the management Linux guest, the iucvconn program is then used to
connect over IUCV to the destination Linux console.
Note: For more information about the IUCV-based console support, including full
instructions on how to set it up, see the manual, “How to Set up a Terminal Server
Environment on z/VM,” SC34-2596.
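As a hedged sketch of this setup: on each managed guest, the hvc_iucv=1 kernel parameter enables the IUCV terminal device, and a getty is started on /dev/hvc0 (for example, systemctl enable serial-getty@hvc0.service on systemd distributions). From the management guest, the console of a hypothetical guest LINUX01 is then reached with:

iucvconn LINUX01 lnxhvc0

where lnxhvc0 is the terminal ID that typically corresponds to /dev/hvc0; see the manual referenced above for the exact configuration.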
Copy functions
Disk storage can have copy functions implemented but it depends on the type and model as
to what functions are available. An IBM DS8000 provides advanced copy services including
FlashCopy and Mirroring.
z/VM
The backup and restore of the z/VM hypervisor is based on FICON-attached disk.
Infrastructure Suite
IBM Backup and Restore Manager, part of the IBM Infrastructure Suite, is the product to
manage this type of backup and restore. Backups can be performed in a periodic manner at
the file or image level and as incremental or full backups.
If your environment does not use FICON attached disk, you need to think about how to back
up and restore the z/VM hypervisor. There are several options available using built-in tools
but there is no common best practice available. You need to have a discussion with your IBM
representative to develop an individual solution.
Operating system
An operating system offers some built-in tools for backup purposes. A storage subsystem can
offer some functionality that can be used to back up application user data. There are
commercial solutions available that offer full and incremental backups with snapshots.
Depending on the failure that you want to address and the time you have available before
services are required up again (RTO), the appropriate technology must be chosen.
Both products can monitor Linux systems and the z/VM hypervisor. Linux distributions have
commands and packages included that are used for monitoring. You can also buy other
commercially available solutions.
Chapter 3. Architecture
Running Temenos applications on IBM LinuxONE provides a robust enterprise platform for mission-critical banking services. In designing the correct solution, there are a number of architectural options. The correct path varies based on your own or your clients' architectural foundations, which are often influenced by budgetary constraints. Architectural workshops should be run to reach agreement about the correct ingredients, and should encompass both the functional (application, database, system) and non-functional (availability, security, integrity, reliability) characteristics of your requirements.
In this chapter, a sample reference architecture is proposed, based on a two-server HA/DR configuration with additional components and decision points as appropriate.
Note: This chapter focuses on the Traditional on-premises, non-containerized solution and
later updates to this book will include the other types of architecture including cloud native
and on-premises cloud as they become available.
Perhaps unlike any other system architecture on which the Temenos applications can be
installed, IBM LinuxONE provides alternatives for hypervisor and other aspects of
deployment. The following paragraphs describe some of the architectural choices available on
IBM LinuxONE and their considerations.
However, architecting a new solution specifically for IBM LinuxONE allows you to take
advantage of the following important capabilities:
Scalability, both horizontal and vertical
Hypervisor clustering
Reliability, Availability, and Serviceability
On other architectures, the total number of instances deployed might be greater than the
number required on IBM LinuxONE. A single virtual instance on IBM LinuxONE can scale
vertically to support a greater transaction volume than is possible in a single instance on other
platforms. Alternatively, you can decide to employ horizontal scaling at the virtual level and
use the greater capacity per IBM LinuxONE footprint to deploy more virtual instances. This
can provide more flexibility in workload management by lessening the impact of removing a
single virtual instance from the pool of working instances.
Hypervisor clustering
The z/VM hypervisor on IBM LinuxONE provides a clustering technology known as Single
System Image (SSI). SSI allows a group of z/VM systems to be managed as a single virtual
compute environment. Most system definitions are shared between the members of the
cluster, providing these benefits:
Consistency in the system definition process: no need to replicate changes between
systems as the systems all read the same configuration
Single source for user directory: all definitions of the virtual instances are maintained in a
single source location, again eliminating the need to replicate changes between systems
Flexibility for deployment of virtual instances: functions such as starting and stopping, and live-relocating virtual instances between member z/VM systems
An IBM LinuxONE server is designed to provide the highest levels of availability in the
industry. First, the internal design of the system provides a high degree of redundancy to
greatly reduce the likelihood that a component failure will affect the machine availability.
Secondly, the IBM LinuxONE server provides functions that allow it to remain fully operational
during service actions such as firmware updates. This means that in the majority of cases an
IBM LinuxONE server does not have to be removed from service for hardware upgrades or
firmware updates.
Note: Though DPM is simpler to use for newcomers to the IBM LinuxONE platforms, there
are some limitations in supported configurations. Using the traditional IODF method
ensures that partitions can utilize all hardware and software capabilities of the IBM
LinuxONE platform. The recommended architecture, which uses z/VM SSI, requires IODF
mode. This is because DPM is not able to configure the FICON CTC devices needed for
SSI.
Some degree of knowledge of I/O configuration on IBM LinuxONE is needed to perform this
process. Understanding how to use the tools to create an I/O configuration and channel
subsystem concepts is required to achieve a functional configuration.
Hardware Configuration and Definition (HCD)
HCD is the set of utilities used to create and manage I/O Definition Files (IODFs).
On the z/OS operating system, HCD includes a rich Interactive System Productivity Facility
(ISPF) interface for hardware administrators to manage IODFs. The ISPF interface for HCD is
not provided on z/VM. So instead, a graphical Microsoft Windows-based tool called Hardware
Configuration Manager (HCM) is used to interact with the HCD code in z/VM and perform
IODF management tasks.
HCM is a client/server program that needs access to a z/VM host (over TCP/IP, to a server process called the HCD Dispatcher). HCM also has a stand-alone mode that works separately from the Dispatcher. However, in the stand-alone mode, no changes can be made to IODFs.
After the changes are complete, the work IODF is converted to a new production IODF. This
new production IODF can then be dynamically applied to the IBM LinuxONE server.
Stand-Alone IOCP is described in the IBM manual “Stand-Alone I/O Configuration Program
User’s Guide”, SB10-7173-01.
If there is already an existing IBM LinuxONE server, the IODF for the new machine can be created there and then exported from the existing machine. Using Stand-Alone IOCP on the new machine, the IODF is written to the IOCDS of the new machine and can then be activated.
However, what if this machine is the first IBM LinuxONE server at your installation? In this case, Stand-Alone IOCP must be used to create a valid IODF. To make the process easier, rather than attempting to define the entire machine using this method, a minimal IOCP deck defining a single LPAR and basic DASD can be used. This simple IOCP can be activated to make available a single system into which a z/VM system can be installed. This z/VM system is then used to download the HCM code to a workstation and start the HCD Dispatcher. HCM is then installed and used to create an IODF with more complete definitions of the system.
When an IODF is written to the IOCDS of an IBM LinuxONE machine, HCD knows to write
only the portions of the IODF that apply to the current machine.
We have created some definitions to describe the roles that various systems have in the I/O
Definition process:
I/O Definition system This system is the one from which you do all of the HCM work of
defining your I/O Configurations. This is the system you use to run the
HCD Dispatcher when needed, and all of the work IODF files are kept
there. As noted previously, there should be one I/O Definition system
across your IBM LinuxONE environment.
I/O Managing system This system runs the HCD programs to dynamically activate a new
IODF and to save the IOCDS. Each CPC requires at least one z/VM
system to be the I/O Managing system. The I/O Definition system is
also the I/O Managing system for the CPC on which it runs.
I/O Client system These are all the other z/VM systems in your IBM LinuxONE
environment. These systems do not need a copy of the IODF, and they
are not directly involved in the I/O definition process. When a dynamic
I/O operation takes place (driven by the I/O Managing system), the
channel subsystem signals the operating system about the status
changes to devices in the configuration.
For backup and availability reasons, it is a good idea to back up or copy the IODF files (by
default the files are kept on the A disk of the CBDIODSP user on the I/O Definition system). This
allows another system to be used to create an IODF in an emergency.
In an SSI cluster, the PMAINT CF0 disk is common between the members of the cluster. This
means that, if the I/O Managing systems for two CPCs are members of the same SSI cluster,
those I/O Managing systems can share the same copy of the IODF. This reduces the number
of IODF copies that exist across the IBM LinuxONE environment.
The following paragraphs describe the layout of LPARs in the recommended architecture.
The division and setup for logical partitions (LPARs) include the following aspects:
Two LPARs for Core Banking Database
There are a number of database solutions available for the TEMENOS Banking platform.
When implementing any core banking database, high availability is key. Best practices
suggest each IBM LinuxONE CPC have a z/VM LPAR with a minimum of two Linux guests
hosting the core banking database. Isolating core banking databases in their own LPAR
reduces the core licensing costs by dedicating the fewest number of IFLs to the core
banking database.
Oracle is the prevalent database used in Temenos deployments.
Two LPARs for Non-Core Database Farm (this can include any databases needed for
banking operations)
In these LPARs, we can run the databases that support banking operations. These
include credit card, mobile banking, and others. Each database will be running in a virtual
Linux Guest running on the z/VM hypervisor.
Four LPARs for Application servers
The four LPARs run the z/VM hypervisor managed as a single system image (SSI). SSI allows sharing of virtual Linux guests across all four LPARs. Linux guests can be moved between any of the four LPARs in the cluster. This movement can be done by either of these methods:
– Bringing a server down and then bringing that same server up on another LPAR
– By issuing the Live Guest Relocation (LGR) command to move the guest to another
LPAR without an outage of the Linux guest
SSI also lets you install maintenance once and push it to the other LPARs in the SSI cluster. In this cluster, the Temenos Transact application server runs in each LPAR. This allows the banking workload to be balanced across all four LPARs. Each Temenos Transact application server can handle any of the banking requests.
3.4 Virtualization with z/VM
The z/VM hypervisor provides deep integration with the IBM LinuxONE platform hardware
and provides rich capabilities for system monitoring and accounting.
z/VM was selected for this architecture to take advantage of several unique features of IBM LinuxONE that reduce downtime and system administration costs, as noted in the following list:
GDPS for reduced and automatic failover in the event of an outage
SSI clustering to manage the resources and maintenance of systems
Live guest migration between LPARs or CPCs
z/VM provides a clustering capability known as Single System Image (SSI). This capability
provides many alternatives for managing the virtual machines of a compute environment,
including Live Guest Relocation (LGR). LGR provides a way for a running virtual machine to
be relocated from one z/VM system to another, without downtime, to allow for planned
maintenance.
It is recommended that z/VM systems have Recommended Service Upgrades (RSUs) applied approximately every six months. When an RSU is applied to z/VM, it is usually necessary to
restart the z/VM system. In addition, z/VM development uses a model known as Continuous
Delivery to provide new z/VM features and functions in the service stream. If one of these
new function System Programming Enhancements (SPEs) updates the z/VM Control
Program, a restart of z/VM is required for those changes to take effect. Whenever z/VM is
restarted, all of the virtual machines supported by that z/VM system must be shut down,
causing an outage to service.
With SSI and LGR, instead of taking down the Linux virtual machines they can be relocated to
another member of the SSI cluster. The z/VM system to be maintained can be restarted
without any impact to service.
LGR is a highly reliable method of moving running guests between z/VM systems. Before you
perform a relocation, test the operation to see whether any conditions might prevent the
relocation from completing. Example 3-1 shows examples of testing two targets for
relocations.
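The following hedged commands illustrate the pattern; the guest and member names are hypothetical:

CP VMRELOCATE TEST LINUX01 TO MEMBER2
CP VMRELOCATE TEST LINUX01 TO MEMBER3
CP VMRELOCATE MOVE LINUX01 TO MEMBER2

The TEST form reports any conditions that would prevent the relocation; the MOVE form performs it.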
When a guest is being relocated, its memory pages are transferred from the source to the
destination over FICON CTC devices. FICON provides a large transfer bandwidth, and the
CTC connections are not used for anything else in the system other than SSI. This means
guest memory can be transferred quickly and safely.
z/VM also provides interfaces to allow third-party programs to enhance these built-in
functions. IBM provides two of these on the z/VM installation media as additional products
that can be licensed for use:
Directory Maintenance Facility for z/VM
RACF Security Server for z/VM
Broadcom also provides products in this area, such as their CA VM:Manager suite, which
includes both user and security management products.
When installed, configured, and activated, the directory manager takes responsibility for management of the system directory. A directory manager also helps with (but does not eliminate) the issue of the clear-text password.
Note: The directory manager might not remove the USER DIRECT file from MAINT 2CC for
you. Usually the original USER DIRECT file is kept as a record of the original supplied
directory source file, but this can lead to confusion.
We recommend that you perform the following actions if you use a directory manager:
Rename the USER DIRECT file (to perhaps USERORIG DIRECT) to reinforce that the original
file is not used for directory management
Regularly export the managed directory source from your directory manager and store
it on MAINT 2CC (perhaps as USERDIRM DIRECT if you use DirMaint). This file can be used
as an emergency method of managing the directory in case the directory manager is
unavailable. The DirMaint USER command can export the directory.
If you are not using an External Security Manager (such as IBM RACF), you can export
this file with the user passwords in place. This helps its use as a directory backup, but it
potentially exposes user passwords.
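As a hedged sketch with DirMaint, an authorized administrator can request the current directory source with the following command; the resulting file arrives in the issuing user's virtual reader and can then be stored on MAINT 2CC:

DIRM USER WITHPASS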
When a directory manager is used, it can manage user passwords. In the case of IBM
DirMaint, z/VM users can have enough access to DirMaint to change their own passwords.
Also, when a directory entry is queried using the DirMaint REVIEW command, a
randomly-generated string is substituted for the password. However, it is still possible for
privileged DirMaint users to set or query the password of any user. For this reason, the only
completely effective way to protect against clear text passwords in the directory is to use an
External Security Manager (such as IBM RACF).
The recommendation is to install z/VM as a two- or four-member SSI cluster with one or two
z/VM members on each IBM LinuxONE server. You will be prompted to select an SSI or
non-SSI installation during the installation.
If z/VM 7.1 is installed into an SSI, at least one extended count key data (ECKD) volume is
necessary for the Persistent Data Record (PDR). If you plan to implement RACF, the
database must be configured as being shared and at least two ECKD DASD volumes are
necessary. Concurrent virtual and real reserve/release must always be used for the RACF
database DASD when RACF is installed in an SSI.
FICON CTC
An SSI cluster requires CTC connections; always provision them in pairs. If possible, use different paths for the cables. During normal operation, there is not much traffic on the CTC connections. However, LGR depends on the capacity of these channels, especially for large Linux guests. The more channels you have between the members, the faster a relocation of a guest completes. This is a valid reason to plan four to eight CTC connections between the IBM LinuxONE servers. Keep in mind that if you run only two machines, this cabling is not an obstacle. But if you plan to run three or four servers, the cabling effort can become significant because the connections must be point-to-point and you need any-to-any connectivity.
Note: FICON CTCs can be defined on switched CHPIDs, which can relieve the physical
cable requirement. For example, by connecting CTC paths using a switched FICON fabric
the same CHPIDs can be used to connect to multiple CPCs.
Also, FICON CTC control units can be defined on FICON CHPIDs that are also used for DASD. Sharing CHPIDs between DASD and CTCs can be workable for a development or test SSI cluster; this can further reduce the physical connectivity requirements.
Relocation domains
A relocation domain defines a set of members of an SSI cluster among which virtual
machines can relocate freely. A domain can be used to define the subset of members of an
SSI cluster to which a specific guest can be relocated. Relocation domains can be defined for
business or technical reasons. For example, a domain can be defined having all of the
architectural facilities necessary for a particular application, or a domain can be defined to
allow access only to systems with a particular software tool.
When a guest system logs on, z/VM assigns the maximum common subset of the available
hardware and z/VM features for all members belonging to this relocation domain. This means
that by default, in the configuration described previously, guests started on the IBM LinuxONE
III server have access to only the architectural features of the IBM LinuxONE Emperor. There
also can be z/VM functions that might not be presented to the guests under z/VM 7.1
because the cluster contains members running z/VM 6.4.
To avoid this, a relocation domain spanning only the z/VM systems running on the IBM LinuxONE III server is defined. Guests that require the architectural capabilities of the IBM LinuxONE III or of z/VM 7.1 are assigned to that domain, and are permitted to execute only on the IBM LinuxONE III servers.
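The following hedged commands illustrate the idea; the domain, member, and guest names are hypothetical:

CP DEFINE RELODOMAIN LNX3DOM MEMBERS MEMBER3 MEMBER4
CP SET VMRELOCATE LINUX01 ON DOMAIN LNX3DOM

The assignment can also be made persistent with a VMRELOCATE ON DOMAIN LNX3DOM statement in the guest's directory entry.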
Memory overcommitment
Virtual machines do not always use every single page of memory allocated to them. Some
programs read in data during initialization but only rarely reference that memory during run
time. A virtual machine with 64 GB of memory that runs a variety of programs can actually be
actively using significantly less than the memory allocated to it.
z/VM employs many memory management techniques on behalf of virtual machines. One
technique is to allocate a real memory page only to a virtual machine when the virtual
machine references that page. For example, after our 64 GB virtual machine has booted it
might have referenced only a few hundred MB of its assigned memory, so z/VM actually
allocates only those few hundred MB to the virtual machine. As programs start and workload
builds the guest uses more memory. In response, z/VM allocates it, but only at the time that
the guest actually requires it. This technique allows z/VM to manage only the memory pages
used by virtual machines, reducing its memory management overhead.
These capabilities are why memory can be overcommitted on IBM LinuxONE to a higher
degree with lower performance impact than on other platforms.
Note these considerations when working with the AGELIST, EARLYWRITE, and KEEPSLOT settings. It is important to conserve I/O to the paging space, especially for systems with a large amount of memory. EARLYWRITE specifies how the frame replenishment algorithm backs up page content to auxiliary storage (paging space). When Yes is specified, pages are backed up in advance of frame reclaim to maintain a pool of readily reclaimable frames. When No is specified, pages are backed up only when the system is in need of frames. KEEPSLOT indicates whether the auxiliary storage address (ASA) to which a page is written during frame replenishment should remain allocated when the page is later read and made resident. Specifying Yes preserves a copy of the page on the paging device and eliminates the need to write the page out again if it has not changed.
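These settings are made on the STORAGE statement in SYSTEM CONFIG. The following fragment is a hedged sketch with illustrative values; check the CP Planning and Administration manual referenced below for the exact syntax and defaults:

STORAGE AGELIST BIAS 1.00 EARLYWRITE Yes KEEPSLOT Yes SIZE 2%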
See also the page space calculation guidelines in the CP Planning and Administration manual, available in the z/VM 7.1 library:
https://www-01.ibm.com/servers/resourcelink/svc0302a.nsf/pages/zVMV7R1Library?OpenDocument
One important item is dump space. At IPL time, z/VM reserves a space in spool for a system
dump. The size depends on the amount of memory in the LPAR. It is important to ensure that
there is sufficient dump space in the spool.
The SFPURGER tool can be used to maintain the spool. If you use an automation capability
(such as the Programmable Operator facility or IBM Operations Manager for z/VM) you can
schedule regular runs of SFPURGER to keep spool usage well managed.
With the CP QUERY MDCACHE command, you can check the setting and the usage. Deactivate minidisk caching for Linux swap disks. To do so, code a MINIOPT NOMDC statement after the MDISK directory statement of the appropriate disk.
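A hedged directory fragment follows; the device numbers and volume label are illustrative:

MDISK 0201 3390 2101 300 VOL001 MR
MINIOPT NOMDC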
Relative share works similarly to the LPAR weight factor: the share an individual virtual machine receives is determined by its setting relative to the sum of the relative shares of all active virtual machines. Relative share values range from 1 to 9999.
Absolute share is expressed as a percentage and defines a real portion of the available CPU capacity of the LPAR dedicated to a specific virtual machine. This portion of the CPU capacity is reserved for that virtual machine for as long as it can be consumed. The remaining piece, which cannot be consumed, is returned to the system for further distribution. Absolute share ranges from 0.1% to 100%. If the sum of absolute shares is greater than 99%, it is normalized to 99%. Absolute share users are given resources first.
The default share is RELATIVE 100 to each virtual machine. The value can be changed
dynamically by the command CP SET SHARE or permanently at the user entry in the z/VM
directory.
To ensure that adding virtual CPUs actually results in extra CPU capacity for a virtual machine, increase its SHARE value when virtual CPUs are added.
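As a hedged illustration, the share of a hypothetical guest LINUX01 can be raised dynamically and then verified:

CP SET SHARE LINUX01 RELATIVE 400
CP QUERY SHARE LINUX01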
z/VM allows the built-in security structure to be enhanced through the use of an External
Security Manager (ESM). When an ESM is enabled on z/VM, various security decisions can
be handled by the ESM rather than by CP. This allows for greater granularity of configuration,
better auditing capability and the elimination of queryable passwords for resources.
The IBM Resource Access Control Facility for z/VM (RACF) is one ESM available for z/VM. It
is a priced optional feature preinstalled on a z/VM system. Broadcom also offers ESM
products for z/VM, such as CA ACF2 and CA VM:Secure.
Note: IBM strongly recommends the use of an ESM on all z/VM systems.
The evaluation process is performed against a specific configuration of z/VM which includes
RACF. The configuration that IBM applies to the systems evaluated for Common Criteria
certification is described in the z/VM manual “z/VM: Secure Configuration Guide,” document
number SC24-6323-00. This document is located at the following link:
http://www.vm.ibm.com/library/710pdfs/71632300.pdf
By following the steps in this manual you can configure your z/VM system in a way that meets
the standard evaluated for Common Criteria certification.
Virtualized x86 systems often retain the memory usage patterns of their physical counterparts. Because memory is considered to be inexpensive, virtual machines are often configured with more memory than is actually needed. This leads to accumulation of Linux buffer cache in virtual machines; in a typical x86 virtualized environment, a large amount of memory is used up in such caching.
In z/VM, the virtual machine is sized as small as possible, generally providing enough memory for the application to function well without allowing the same buffer cache accumulation that occurs on other platforms. Assigning a Linux virtual machine too much memory can allow too much cache to accumulate, which both Linux and z/VM must then maintain. z/VM sees the working set of the user as being much larger than it actually needs to be to support the application, which can put unnecessary stress on z/VM paging.
Real memory is a shared resource. Caching disk pages in a Linux guest reduces memory
available to other Linux guests. The IBM LinuxONE I/O subsystem provides extremely high
I/O performance across a large number of virtual machines, so individual virtual machines do
not need to keep disk buffer cache in an attempt to avoid I/O.
This creates a tension in the best configuration approach to take. Linux needs enough
memory for programs to work efficiently without incurring swapping, yet not so much memory
that needless buffer cache accumulates.
One technology that can help is the z/VM Virtual Disk (VDISK). VDISK is a disk-in-memory
technology that can be used by Linux as a swap device. The Linux guest is given one or two
VDISK-based swap devices, and a memory size sufficient to cover the expected memory
consumption of the workload. The guest is then monitored for any swap I/O. If swapping
occurs, the performance penalty is small because it is a virtual disk I/O to memory instead of
a real disk I/O. Like virtual machine memory, z/VM does not allocate memory to a VDISK until
the virtual machine writes to it. So memory usage of a VDISK swap device is only slightly
more than if the guest had the memory allocated directly. If the guest swaps, the nature of the
activity can be measured to see whether the guest memory allocation needs to be increased
(or if it was just a short-term usage bubble).
Using VDISK swap in Linux has an additional benefit. The disk space that normally is allocated to Linux as swap space can be allocated to z/VM instead to give greater capacity and performance to the z/VM paging subsystem.
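A hedged directory fragment for two VDISK swap devices follows; the sizes are illustrative (FB-512 blocks, so 1048576 blocks is 512 MB):

MDISK 0111 FB-512 V-DISK 1048576 MR
MDISK 0112 FB-512 V-DISK 1048576 MR

Within Linux, the devices are brought online, initialized with mkswap, and enabled with swapon, typically at different priorities.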
Hotplug memory
Another memory management technique is the ability to dynamically add and remove
memory from a Linux guest under z/VM, known as hotplug memory. Hotplug memory added
to a Linux guest allows it to handle a workload spike or other situation that could result in a
memory shortage.
We recommend that you use this feature carefully and sparingly. Importantly, do not configure
large amounts of hotplug memory on small virtual machines. This is because the Linux kernel
needs 64 bytes of memory to manage every 4 kB page of hotplug memory, so a large amount
of memory gets used up simply to manage the ability to plug more memory. For example,
configuring a guest with 1 TB of hotplug memory consumes 16 GB of the guest’s memory. If
the guest only had 32 GB of memory, half of its memory is used just to manage the hotplug
memory.
When configuring hotplug memory, be aware of this management requirement. You might
need to increase the base memory allocation of your Linux guests to make sure that
applications can still operate effectively.
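As a hedged sketch from within the Linux guest, the memory hotplug interface can be driven with standard tools:

lsmem                 # show online and standby memory blocks
chmem --enable 2g     # bring 2 GB of standby memory online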
SMT can be activated only from the operating system and requires a restart of z/VM. We
recommend activating multithreading in z/VM by defining MULTITHREADING ENABLE in the
system configuration file. The remaining defaults of this parameter set the maximum number
of possible threads (currently two) for all processor types. This parameter also enables the
command CP SET MULTITHREAD to switch multithreading back and forth dynamically without a
restart.
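A minimal SYSTEM CONFIG fragment, per the description above:

MULTITHREADING ENABLE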
Linux Guests
It is important that Linux is given enough opportunity to access the amount of CPU capacity
needed for the work being done. However, allocating too much virtual CPU capacity can, in
some cases, reduce performance.
z/VM does not virtualize SMT for guests. Guest virtual processors in z/VM are single-thread
processors. z/VM uses the threads provided by SMT-enabled CPUs to run more virtual CPUs
against them.
On the IBM LinuxONE IFL, up to two instruction queues can be used (referred to as SMT-2).
These multiple instruction queues are referred to as threads.
Two steps are required to enable SMT for a z/VM system. First, the LPAR needs to be defined to permit SMT mode. Second, z/VM must be configured to enable it. This is done by using the MULTITHREADING statement in the SYSTEM CONFIG file.
When z/VM is not enabled for SMT, logical processors are still referred to as processors.
When SMT is enabled, z/VM creates a distinction between cores and threads, and treats
threads in the same way as processors in non-SMT.
The following section introduces details about CPU configuration in IBM LinuxONE.
LPAR weight
IBM LinuxONE is capable of effectively controlling the CPU allocated to LPARs. In their
respective Activation Profile, all LPARs are assigned a value called a weight. The weight is
used by the LPAR management firmware to decide the relative importance of different LPARs.
Note: When HiperDispatch is enabled, the weight is also used to determine the
polarization of the logical IFLs assigned to an LPAR. More about HiperDispatch and its
importance for Linux workloads is in the following section.
LPAR weight is usually used to favor CPU capacity toward your important workloads. For
example, on a production IBM LinuxONE system, it is common to assign higher weight to
production LPARs and lower weight to workloads that might be considered discretionary for
that system (such as testing or development).
z/VM HiperDispatch
The z/VM HiperDispatch feature uses the System Resource Manager (SRM) to control the dispatching of virtual CPUs on physical CPUs (scheduling virtual CPUs). The prime objective of z/VM HiperDispatch is to help virtual servers achieve enhanced performance from the IBM LinuxONE memory subsystem.
z/VM HiperDispatch works toward this objective by managing the partition and dispatching
virtual CPUs in a way that takes into account the physical machine's organization (especially
its memory caches). Therefore, depending upon the type of workload, this z/VM dispatching
method can help to achieve enhanced performance on IBM LinuxONE hardware.
The processors of an IBM LinuxONE are physically placed in hardware in a hierarchical,
layered fashion:
CPU cores are fabricated together on chips, perhaps 10 or 12 cores to a chip, depending
upon the model
Chips are assembled onto nodes, perhaps three to six chips per node, again, depending
upon model
The nodes are then fitted into the machine's frame
To help improve data access times, IBM LinuxONE uses high-speed memory caches at
important points in the CPU placement hierarchy:
Each core has its own L1 and L2
Each chip has its own L3
Each node has its own L4
Beyond L4 lies memory
One way z/VM HiperDispatch tries to achieve its objective is by requesting that the PR/SM hypervisor provision the LPAR in vertical mode. In a vertical mode partition, the PR/SM hypervisor repeatedly attempts to run the partition's logical CPUs on the same physical cores (and to run other partitions' logical CPUs elsewhere). As a result, the partition's workload benefits from having its memory references build up context in the caches, and the overall system behavior is more efficient.
z/VM works to assist HiperDispatch to achieve its objectives by repeatedly running the guests'
virtual CPUs on the same logical CPUs. This strategy ensures guests experience the benefit
of having their memory references build up context in the caches. This also enables the
individual workloads to run more efficiently.
If you run directory management software such as the IBM Directory Maintenance Facility for z/VM (DirMaint), this file is no longer used. See “User and security management” on page 61 for more information about using a directory manager.
Autostart file
Some commands must be invoked every time the z/VM system starts. The user AUTOLOG1 is automatically started after IPL. Inside AUTOLOG1, the file PROFILE EXEC on minidisk 191 is automatically executed. Every command in that file is automatically invoked after the system starts.
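A hedged sketch of such a PROFILE EXEC follows; the server names are examples only:

/* PROFILE EXEC on the AUTOLOG1 191 minidisk */
'CP XAUTOLOG TCPIP'
'CP XAUTOLOG PERFSVM'
'CP XAUTOLOG LINUX01'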
Note: IBM Wave for z/VM also uses the AUTOLOG1 user for configuration of entities (such as
z/VM VSwitches) managed by IBM Wave.
Consult the product documentation for each of the products being customized for the role and
correct contents for these files.
First, for the base Infrastructure Suite, you need to install and configure DirMaint and
Performance Toolkit. Then for the advanced tools, a suggested setup is to create five LPARs
to host the following parts:
IBM Wave UI Server (Wave)
Tivoli Storage Manager Server (TSM)
Tivoli Data Warehouse (TDW) with Warehouse Proxy and Summarization and Pruning
Agents
IBM Tivoli Monitoring (ITM) Servers: Tivoli Enterprise Portal Server (TEPS) and Tivoli
Enterprise Management Server (TEMS)
JazzSM server for Dashboard Application Services Hub (DASH) and Tivoli Common
Reporting (TCR)
These five LPARs need to be set up only once for your enterprise. You can use any existing
servers that meet the capacity requirements.
Before installing IBM Wave, check for the latest fixpack for IBM Wave and install it. The initial
setup for IBM Wave is simple. All required setup in Linux and z/VM is done automatically by
the installation scripts. IBM Wave has a granular role-based user model. Plan the roles in IBM
Wave carefully according to your business needs.
Protected key encryption uses an encryption key that is derived from a master key and kept
within the Crypto Express card to generate a wrapped key that is stored in the Hardware
System Area (HSA) of the IBM LinuxONE system. The key is used by the CPACF instructions
to perform high-speed encryption and decryption of data, but it is not visible to the operating
system in any way.
Note: LUKS2 format is the preferred option for IBM LinuxONE data at-rest encryption.
The plain format does not include a header on the volume and no formatting of the volume is
required. However, the key must be stored in a file in the file system. The key and cipher
information must be supplied with every volume open.
3. The dm-crypt passes the secure key to paes for conversion into a protected key by using
pkey
4. The pkey module starts the process for converting the secure key to a protected key
5. The secure key is unwrapped by the CCA coprocessor in the Crypto Express adapter by
using the master key
6. The unwrapped secure key (effective key) is rewrapped by using a transport key that is
specific to the assigned domain ID
7. By using firmware, CPACF creates a protected key
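From the Linux side, this secure-key flow is typically set up with the zkey and cryptsetup tools from s390-tools. The following is a hedged sketch; the file and device names are hypothetical, and option names vary by release:

zkey generate secure.key --xts --keybits 256
cryptsetup luksFormat --type luks2 --cipher paes-xts-plain64 \
    --key-size 1024 --master-key-file secure.key /dev/dasdc1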
Paging occurs when the z/VM system does not have enough physical memory available to
satisfy a guest’s request for memory. To obtain memory to meet the request, z/VM finds some
currently allocated but not recently used memory and stores the contents onto persistent
storage (a disk device). z/VM then reuses the memory to satisfy the guest’s request.
When a paging operation occurs, the content of the memory pages is written to disk (to
paging volumes). It is during this process that the possible exposure occurs. If the memory
being paged-out happened to contain a password, the private key of a digital certificate, or
other secret data, z/VM has stored that sensitive data onto a paging volume outside the
control of Linux. Whatever protections were available to that memory while it was resident are
no longer in effect.
To protect against this situation occurring, z/VM Encrypted Paging uses the advanced
encryption capability of the IBM LinuxONE system to encrypt memory being paged out and
decrypt it after the page-in operation. Encrypted Paging uses a temporary key (also known as
an ephemeral key) which is generated each time a z/VM system is IPLed. If Encrypted Paging
is enabled, pages are encrypted using the ephemeral key before they are written to the
paging device.
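As a hedged sketch, Encrypted Paging can be enabled dynamically and verified from a privileged z/VM user (verify the exact operands for your z/VM level):

CP SET ENCRYPT PAGING ON
CP QUERY ENCRYPT

It can also be set at IPL time with an ENCRYPT PAGING statement in SYSTEM CONFIG.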
Adding dedicated OSA ports to Linux guests can be ideal for use as interconnect interfaces
for databases or clustered file systems. Using a dedicated OSA can reduce the path length to
the interface, but you will need to decide your own method for providing failover.
Also, if you dedicate an OSA interface it can be used for only one IP network by default. You
can use the Linux 8021q module to provide VLANs, managed within Linux.
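A hedged sketch of this configuration follows; the interface name enc1000, the VLAN ID, and the address are hypothetical:

modprobe 8021q
ip link add link enc1000 name enc1000.100 type vlan id 100
ip addr add 192.0.2.10/24 dev enc1000.100
ip link set enc1000.100 up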
OSA Express cards can be used in conjunction with software networking facilities in z/VM and Linux (such as the z/VM Virtual Switch and Open vSwitch in Linux). Together, they support connectivity for virtual machines in virtualized environments under z/VM.
HiperSockets
HiperSockets (HS) interconnects LPARs that are active on the same machine by doing a
memory-to-memory transfer at processor speed. HS can be used for both TCP and UDP
connections.
VSwitch
A z/VM Virtual Switch (VSwitch) provides the network communication path for the Linux guests in the environment. Refer to sections 2.6.3, “z/VM networking” on page 49 and 3.6.3, “Connecting virtual machines to the network” for more information about VSwitch.
We recommend that a Port Group is used, for maximum load sharing and redundancy
capability. The Link Aggregation Control Protocol (LACP) can enhance the usage of all the
ports installed in the Port Group.
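As an illustrative sketch (the group name, switch name, and OSA device numbers are placeholders), a link aggregation Port Group and its VSwitch can be defined with CP commands such as:
SET PORT GROUP ETHGRP1 LACP ACTIVE
SET PORT GROUP ETHGRP1 JOIN 1100 2100
DEFINE VSWITCH TEMVSW1 ETHERNET GROUP ETHGRP1
For a permanent definition, the equivalent statements belong in the z/VM SYSTEM CONFIG file.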
z/VM Virtual Switch also provides a capability called Multi-VSwitch Link Aggregation, also
known as Global Switch. This allows the ports in a Port Group to be shared between several
LPARs.
The RoCE Express card can be used by Linux for both SMC communication and for standard
Ethernet traffic. However, the RoCE Express card does not currently have the same level of
availability as the OSA Express card (for example, firmware updates are disruptive). For this
reason, at the time of this publication we recommend that if RoCE Express is being used for
Linux, it is used in addition to a standard OSA Express-based communications path (either
direct-OSA or VSwitch). For more information about SMC, see 2.1.8, “Shared Memory
Communication (SMC)” on page 24.
To allow a guest to access an OSA Express card or HiperSockets network directly, the z/VM
ATTACH command is used to connect devices accessible by z/VM directly to the Linux virtual
machine. If the Linux guest needs access to the network device at startup, use the DEDICATE
directory control statement. This statement attaches the required devices to the Linux guest
when it is logged on.
Note: This is the only way that HiperSockets can be used by a Linux guest under z/VM. For
OSA Express, the z/VM Virtual Switch is an alternative. See “z/VM Virtual Switch” on
page 76.
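For example (the guest name and device numbers are placeholders), a QETH device triplet can be attached dynamically, or dedicated in the user directory so that it is available at logon:
ATTACH 0600-0602 TO LINUX01
USER LINUX01 ...
DEDICATE 0600 0600
DEDICATE 0601 0601
DEDICATE 0602 0602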
The adapter can still be shared with other LPARs on the IBM LinuxONE server. It can also be
shared with other Linux guests in the same LPAR. However, adapter sharing has a
dependency: there must be enough subchannel devices defined in the channel subsystem
to allow more than one Linux guest in the LPAR to use the adapter at the same time.
Note: The way adapter sharing is done is different between the IODF mode and the DPM
mode of the IBM LinuxONE server.
When you attach a Linux guest to an OSA Express adapter in this way, you need to consider
how you will handle possible adapter or switch failures. Usually you attach at least two OSA
Express adapters to the guest and use Linux channel bonding to provide interface
redundancy. You can use either the Linux bonding driver or the newer team driver for
channel bonding. You have to repeat this configuration on every Linux guest. Managing this
configuration across a large number of guests is challenging, and is one reason this is not
the preferred connection method for Linux guests.
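A minimal sketch using the bonding driver under NetworkManager (the interface names and the bonding mode are placeholders that depend on your device numbers and failover policy):
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
nmcli con add type bond-slave ifname enccw0.0.0600 master bond0
nmcli con add type bond-slave ifname enccw0.0.0700 master bond0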
A VSwitch can support IEEE 802.1Q Virtual LANs (VLANs). It can either manage VLAN
tagging on behalf of a virtual machine or can let the virtual machine do its own VLAN support.
VSwitches also provide fault tolerance on behalf of virtual machines. This is provided either
by using a warm standby mode, or a link aggregation mode using a Port Group. In the warm
standby mode, up to three OSA Express ports are attached to a VSwitch, with one carrying
the traffic and the others standing by, ready to take over if the active port fails.
A z/VM VSwitch can also provide isolation capability, using the Virtual Edge Port Aggregator
(VEPA) mode. In this mode, the VSwitch no longer performs any switching between guests
that are attached to it. Instead, all packets generated by a guest are transmitted out to the
adjacent network switch by the OSA uplink. The adjacent switch must support Reflective
Relay (also known as hairpinning) for guests attached to the VSwitch to communicate.
Any of the network technologies described in Section 3.6.3, “Connecting virtual machines to
the network”, can be used for a cluster interconnect. Our architecture recommends the use of
OSA Express adapters for cluster interconnect for the following reasons:
Provides cluster connectivity between CPCs without changes
Provides support for all protocols supported over Ethernet
Cross-CPC connectivity
HiperSockets is a natural first choice for use as a cluster interconnect: it is fast, and highly
secure. It can be configured with a large MTU size, making it ideal as a database or file
storage interconnect.
However, because HiperSockets exists only within a single CPC, it cannot be used when the
systems being clustered span CPCs. If a HiperSockets-based cluster interconnect is
implemented for nodes on a single CPC, the cluster must be changed to a different
interconnect technology if the nodes are ever split across CPCs.
Because an OSA Express-based interconnect can be configured with a large MTU size
(referred to as Ethernet jumbo frames), OSA Express is a good choice, particularly given its
flexibility in deploying cluster nodes across CPCs.
Protocol flexibility
The SMC networking technologies, SMC-D and SMC-R, can also be considered as cluster
interconnect technologies. They offer high throughput with low CPU utilization. Unlike
HiperSockets, the technology can be used between CPCs (SMC-R).
SMC can increase only the performance of TCP connections; therefore, it might not be
usable for all cluster applications (Oracle RAC, for example, uses both TCP and UDP on the
interconnect network). SMC operates as an adjunct to the standard network interface and not
as a separate physical network. Because of this, it doesn’t meet the usual cluster
interconnect requirement of being a logically and physically separate communication path.
3.7 DS8K Enterprise disk subsystem
Determining what type of storage your organization needs can depend on many factors at
your site. The IBM LinuxONE can use FCP/SCSI, FICON ECKD or a mix of storage types.
The storage decision should be made early and with the future in mind; changing it later in
the process can mean a long migration to another storage type. The
architecture described in this book is based around FICON and ECKD storage, which is
required for SSI and the high availability features that it brings.
There are two options available for which disk storage type to choose:
A 512-byte fixed block open system storage based on the FCP protocol, which is the
same storage as is used for the x86 platform. On this storage you need to define LUNs with
the appropriate sizes, which can be found in the product documentation.
ECKD storage, which requires an enterprise class storage subsystem (IBM DS8000)
based on the FICON protocol. ECKD volumes need to be defined in the storage subsystem. If
the product documentation defines the disk sizes in GB or TB, you need to convert the
sizes into numbers of cylinders or 3390 models.
The following section helps you to do the calculations for ECKD volume size.
The 3390-9 can be used for the operating system (especially for z/VM) and the other types for
data. In an enterprise class storage subsystem, you find these volume models as predefined
selections in the configuration dialog. However, you are not restricted to these specific sizes:
you can define any number of cylinders up to the maximum of 262,668. The ECKD EAV is the
extended address volume; this means that no further specific model type is defined beyond
the 3390-54.
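As a rough worked example (usable capacity varies with block size and formatting): a 3390 cylinder holds 15 tracks of 56,664 bytes, or 849,960 bytes. A volume that the product documentation sizes at 50 GB therefore needs roughly 50,000,000,000 / 849,960, or about 58,827 cylinders, well within the 262,668-cylinder limit. For comparison, a 3390-9 (10,017 cylinders) provides approximately 8.5 GB.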
FBA storage is more common because it is also usable as storage for the x86 platform. FBA
storage has the following attributes:
Any kind of SCSI disk storage can be used
It fits into your already implemented monitoring environment
It does not need any special hardware (SAN switches) or dedicated cabling
It is less expensive than an enterprise class storage
If you run IBM LinuxONE in DPM mode, FBA storage is the preferred storage
Some functions are not available when compared to an enterprise class storage
Multipathing must be done at the operating system level
It has limits in scalability
It does not support GDPS
ECKD or enterprise class storage is unique to the IBM LinuxONE architecture. ECKD storage
has the following attributes:
It supports all the functions available for disk storage systems
It offers the most performance and scalability
No additional driver is necessary for multi-pathing; it is implemented in the FICON protocol
It is supported by GDPS
It is more expensive when compared to FBA storage
It requires enterprise class SAN switches
FICON needs dedicated cabling
If you are considering running GDPS, you are required to use FICON and enterprise class
storage. Otherwise, FBA storage is also a good option.
This new application suite approach allows clients to integrate new modules and modify or
update existing services without impacting the runtime services. It also reduces the
development and testing effort required and appeals to the larger community of Java
developers. This has also become the de facto standard for Cloud-based adoption using
containers (such as Docker and Podman) and orchestration technologies (for example,
Kubernetes) based on Java frameworks.
In summary, the use of the latest Temenos TAFJ-based suite brings many functional
advantages. This is based on the ability to exploit the latest Java, Cloud and associated
runtime technologies, while allowing the non-functional architectural requirements such as
Availability, Scalability, (Transaction) Reliability and Security to be fully exploited on the IBM
LinuxONE platform.
The software used when running Temenos Transact is specific and exact. In fact, only
one specific version of the Linux operating system is compatible for use. If an
organization deviates from the recommended list of software, Temenos can deny support.
The following components and minimum release levels are certified to run Temenos Transact:
Red Hat Enterprise Linux 7
Java 1.8
IBM WebSphere MQ 9
Application Server (noted in the next list)
Oracle DB 12c
The Temenos Transact software is Java based and requires an application server to run.
There are several application server options:
IBM WebSphere 9
Red Hat JBoss EAP
Oracle WebLogic Server 12c (JDBC driver)
The Temenos Stack Runbooks provide more information about using Temenos stacks with
different application servers. Temenos customers and partners can access the Runbooks
through either of the following links:
The Temenos Customer Support Portal: https://tcsp.temenos.com/
The Temenos Partner Portal: https://tpsp.temenos.com/
For the installation and configuration process of IBM MQ see “Installing IBM MQ server on
Linux,” located at the following link:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.ins.doc/q008640_.htm
Each DB Node is a stand-alone Linux server. However, Oracle Clusterware allows all Oracle
RAC nodes to communicate with each other. Installation of Oracle DB, and updates to it, can
be applied across all DB nodes automatically.
Oracle Clusterware has additional shared storage requirements: a voting disk to record node
membership and the Oracle Cluster Registry (OCR) for cluster configuration information.
3.12.4 Oracle Automatic Storage Management (ASM)
ASM is a volume manager and file system that groups storage devices into disk groups.
ASM simplifies the management of storage devices by balancing the workload across the
disks in a disk group, and it exposes a file system interface for the Oracle database files. ASM
is used as an alternative to conventional volume managers, file systems, and raw devices.
The use of ASM is optional, and Oracle now supports IBM Spectrum Scale (GPFS) on IBM
LinuxONE as an alternative with Oracle RAC.
High availability of Oracle RAC is achieved by removing single points of failure of single node
or single-server architectures with multi-node deployments while maintaining the operational
efficiency of a single node database. Node failures do not affect the availability of the
database because Oracle Clusterware migrates and balances DB traffic to the remaining
nodes whether the outage was planned or unplanned. IBM LinuxONE can achieve high
availability Oracle RAC clusters on a single IBM LinuxONE. This is done with the use of
multiple LPARs to a server as individual DB nodes or with multiple IBM LinuxONE systems in
a data center. See the architectural diagram shown in Figure 4-6 on page 91.
With IBM LinuxONE, scalability can be achieved in multiple ways. One way is to add compute
capacity to existing Oracle DB LPARs by adding IFLs. A second way is to add additional
Oracle DB LPARs to the environment. The ability to scale the Oracle DB by adding IFLs
one at a time is another unique feature of IBM LinuxONE. This can yield distinct CPU
core savings compared with deploying an entire Linux server, which can add dozens of cores
to your Oracle architecture.
Oracle RAC, in an active/active configuration, offers the lowest Recovery Time Objective
(RTO). However, this mode is the most resource intensive. Better DB performance has been
observed using Oracle RAC One Node. In the event of a failure, Oracle RAC One Node will
relocate database services to the standby node automatically. Oracle RAC One Node is a
great fit with the scale-up capability of IBM LinuxONE.
See the Oracle documentation for system prerequisites and detailed information for
installation and operation at the following link:
https://docs.oracle.com/cd/E11882_01/install.112/e41962/toc.htm
Chapter 4. Temenos Deployment on IBM LinuxONE and IBM Public Cloud
The Temenos Runbook architecture described in this chapter is based on Stack 2, IBM Java
1.8, IBM MQ, WebSphere, and Oracle DB 18c. See Figure 4-1 on page 86.
The standard solution that is presented in Figure 4-2 allows you to build a strong foundation
for the future. This solution gives the customer the ability to provide maintenance to the base
infrastructure with minimal impact to production. Building on the standard solution provides a
pathway for the customer to continuous availability with other IBM products like GDPS and
other storage mirroring solutions. Figure 4-2 shows the overall deployment architecture for a
standard Temenos Solution on IBM LinuxONE Systems. The orange box represents the IBM
LinuxONE III CPC.
(Figure 4-2: LPARs CORBNK1, OTHERDB1, APP1, APP3, PRECOR1, PREOTH1, PREAPP1, PREAPP3, Banking DB, APP, and Sand Box, each running Linux guests under z/VM on ECKD storage.)
When deploying any production hardware or application, it is important to ensure that there is
no single point of failure. Considering this, always plan to have at least two or more of each:
hardware, Linux systems, application systems, network equipment and connections, storage
infrastructure, and so on.
Working together, the customer and the IBM engineer create the I/O configuration and
logical partition layout. The Input Output Definition File (IODF) configures the IBM
LinuxONE server hardware and defines the partitions (LPARs) on the IBM LinuxONE. IBM
engineers use the IODF to create the mapping of where each I/O cable is to be plugged into
the IBM LinuxONE.
Each LPAR is set up with real memory layout and the number of IFLs assigned to the LPAR.
Working together, you and the IBM Team set up the system using IBM LinuxONE best
practices and IBM LinuxONE and Temenos recommendations.
The following sections give a visual perspective and a high-level overview of the installation
journey.
(Figure: the target LPAR layout across the environments: Production (core banking database, non-core databases, and application workloads), Pre-Production/Test, Development, and Sand Box. Each LPAR runs Linux guests under z/VM on ECKD storage.)
We are now ready to install the Sandbox LPAR systems. IBM Hypervisor (IBM LinuxONE
z/VM) is the first operating system that is made operational. These Sandbox systems are
used to provide training and help in verifying that all network connections and hardware
connections are working correctly. Each LPAR is set up as a Single System Image (SSI), so
they are the same as all the other IBM LinuxONE z/VM LPARs. This provides the foundation
where future IBM LinuxONE z/VM maintenance and upgrades are installed.
(Figure: the Sandbox LPARs within the overall layout, alongside the second set of application and pre-production LPARs APP2, APP4, PRECOR2, PREOTH2, PREAPP2, and PREAPP4.)
The Development and Test environment defines the first set of LPARs used to create or
develop Temenos and IBM LinuxONE systems on this platform. The Development or Test
environment consists of four LPARs within a single SSI cluster with each LPAR running an
IBM LinuxONE z/VM Hypervisor.
Two LPARs run the Temenos Application software, web services, and any other
non-database software. The virtual Linux guests running on these two LPARs have many
versions or levels of applications and Linux operating systems installed on them. It is only on
these virtual Linux guests that development and initial testing should occur.
The other two LPARs run both core and non-core banking databases. One of the benefits of
running these databases only on these two LPARs is a reduction in database licensing costs.
The segregation of development or test databases to their own two LPARs ensures that
application development processes (running on the other Development or Test LPARs) can
proceed unaffected by database workloads.
(Figure: the LPAR layout across both CPCs, with each LPAR running Linux guests under z/VM on ECKD storage.)
Within the systems environment hierarchy, the Pre-Production environment is second only to
the Production environment. Pre-production systems provide for last chance verification of
how any changes might affect production. This is a set of systems that mimic the real
production environment. It is in this environment that errors or performance issues due to
changes or updates can be caught. It is important that this environment is set up to replicate
production as closely as possible.
This configuration matches the Production environment setup also shown in Figure 4-5.
(Figure: the Pre-Production LPAR layout, which matches the Production environment; each LPAR runs Linux guests under z/VM on ECKD storage.)
The work that has been done setting up Sandbox, Development and Test, and Pre-Production
systems provides the foundation to create a Production environment that can take advantage
of the IBM LinuxONE.
Clustering the virtual banking application Linux guests across four LPARs allows room for
each LPAR to grow when workload increases. The database servers are split between core
banking and non-core banking databases. This split of the databases provides savings in
software licensing costs.
(Figure: GDPS-managed ECKD DASD mirrored between the two production sites, with HyperSwap providing transparent disk failover.)
IBM LinuxONE has a unique disaster recovery capability. Every IBM LinuxONE is
engineered to strict standards, which ensures that no IBM LinuxONE differs architecturally
from another, no matter the version. Because they are architecturally identical, any virtual
Linux guest from any LPAR or IBM LinuxONE can run on any other LPAR or IBM LinuxONE,
as long as it has access to the same network and storage (or a copy of the storage). This
means that no changes are needed for any virtual or native Linux guest to run on another
IBM LinuxONE CPC; this portability does not exist on any other hardware platform.
The practice of instantaneous data storage mirroring between production and DR sites
ensures any change, modification, or update applied in production to Linux guests is
automatically replicated on the DR site.
Capacity Backup (CBU) processors are another unique and cost advantageous feature of the
IBM LinuxONE offering. CBUs are processors that are available only on the DR system and
are priced at a lower cost than production processors. These DR CBU CPCs are based on
the permanent production configuration. They are not active while your production CPC is
operational. As such, IBM software licensing and requisite fees apply only to those
processors that are active (based on the CPC permanent configuration).
There can be additional fees for non-IBM software. In addition, some non-IBM software
packages can require new license keys to take advantage of the additional capacity. Check
with your software vendor for details.
Figure 4-8 shows how disaster recovery (DR) matches the production environment.
(Figure 4-8: the disaster recovery site, IBM LinuxONE DR CPCs with CBU processors, mirroring the Production and Sand Box environments.)
Disaster recovery (DR) CPCs match the production environments. This includes the number
of processors, memory, network, and I/O configuration. You can design the DR site to
handle only the production workload or you can build the DR site to handle both production
and non-production workloads.
With this type of disaster recovery setup, a runbook can be created documenting the steps to
transfer Production to the DR site. This runbook allows anyone in the Support Team structure
to execute the process. It can be as simple as the process shown in the following steps:
1. Verify that all Production LPARs are down
2. Activate DR Production LPARs
3. Bring up DR Production systems (IPL systems)
4. Verify DR production virtual servers are active and ready to accept workloads
IMPORTANT: For DR site planning and setup purposes, all non-IBM equipment and
workloads running in Production should also be replicated on the Disaster Recovery site.
This ensures a complete and seamless recovery process.
Figure 4-11 shows the disaster recovery process with IBM GDPS Virtual Appliance.
(Figure 4-11: the DR site CPCs with CBU processors, with the GDPS Virtual Appliance (GDPS VA) LPAR added.)
IBM GDPS Virtual Appliance (IBM GDPS (VA)) is designed to facilitate near-continuous
availability and disaster recovery by extending GDPS capabilities for IBM LinuxONE. It
substantially reduces recovery time and the complexity associated with manual disaster
recovery.
Virtual Appliance requires its own LPAR with a dedicated special processor.
4.2 Tuning
This section contains the Linux and Java tuning considerations to optimize Temenos Transact
and its dependent software on IBM LinuxONE.
This section covers IBM LinuxONE specifics for the Linux operating system.
Huge pages
Defining large frames allows the operating system to work with memory frames of 1 MB
rather than the default 4 KB. This allows smaller page tables and more efficient Dynamic
Address Translation. Enabling fixed large frames can save CPU cycles when looking for data
in memory. Disable transparent huge pages (enabled by default) to ensure that the 1 MB
pool is assigned as specified. Transparent huge pages assigns huge pages dynamically,
which succeeds only while enough contiguous memory is available; the longer the system
runs, the more memory fragmentation occurs and the less effective this becomes. Large page
support entails support for the Linux hugetlbfs file system. To check whether 1 MB large
pages are supported in your environment, issue the following command:
grep edat /proc/cpuinfo
An output line that lists edat as a feature indicates 1 MB large page support.
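For example (the pool size is illustrative and must be calculated for your workload), the fixed 1 MB pool can be defined and transparent huge pages disabled with kernel boot parameters such as:
transparent_hugepage=never hugepagesz=1M hugepages=8192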
Defining huge pages with these kernel parameters allocates the memory as part of a pool. To
monitor the pool usage, check the cat /proc/meminfo output.
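For example, output similar to the following would indicate a fully unused pool of 8192 fixed 1 MB pages:
grep HugePages /proc/meminfo
HugePages_Total:    8192
HugePages_Free:     8192
HugePages_Rsvd:        0
HugePages_Surp:        0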
The two components that can benefit from the large frames are Java and Oracle.
Calculate the number of pages according to the application requirements. The number can be
about three-fourths (3/4) of the memory allocated to the instance.
Heap size
Set the minimum (option -Xms) and the maximum (option -Xmx) Java heap size to the same
value. This ensures that the heap size does not change during run time. Make the heap size
large enough to accommodate the requirements of your applications, but small enough not to
impact performance. A heap size that is too large can also impact performance: you can run
out of memory, or increase the time that it takes the system to clean up unreferenced objects
in the heap. This process is known as Garbage Collection.
The Pause-less GC mode is not enabled by default. To enable the new Pause-less GC mode
in your application, introduce -Xgc:concurrentScavenge to the JVM options.
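As a minimal sketch (the 4 GB heap and the application JAR name are placeholders), the resulting IBM JVM invocation might look like:
java -Xms4g -Xmx4g -Xgc:concurrentScavenge -jar transact-app.jar
Note that -Xgc:concurrentScavenge generally requires an IBM JVM on hardware with the guarded storage facility.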
Large pages
JVM "-Xlp" startup-option is set in the WebLogic Application server for the Temenos Transact
application. This setting indicates that the JVM should use Large pages for the heap. If you
use the SysV shared memory interface, which includes java -Xlp, you must adjust the
shared memory allocation limits to match the workload requirements.
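For example (the values are illustrative and must cover the heap plus all other shared memory users), the limits can be raised persistently through sysctl settings:
# /etc/sysctl.d/90-shm.conf
# Maximum size of a single shared memory segment, in bytes (16 GB here)
kernel.shmmax = 17179869184
# Maximum total shared memory, in 4 KB pages (32 GB here)
kernel.shmall = 8388608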
Customers who feel confident that their systems are well protected might want to disable
some or all of the protection mechanisms. For more information about controlling the impact
of microcode and security patches, read the following Red Hat article, which describes the
vulnerabilities patched by Red Hat and how to disable some or all of these mitigations:
https://access.redhat.com/articles/3311301
As discussed previously, it might be possible to use a single IBM LinuxONE server. However,
using two IBM LinuxONE servers provides greater flexibility in managing situations that
require a server to be removed from service temporarily.
Step 2: Hypervisor
We use z/VM as the hypervisor, using the SSI function. This improves the manageability of
virtual instances by eliminating the need to synchronize configuration details between shadow
virtual instances. It also offers easier options for local recovery of virtual instances (restart on
the same IBM LinuxONE server or restart on the other one) in the event of a restart being
needed.
Running the members of the SSI cluster across the two IBM LinuxONE servers provides the
maximum flexibility.
z/VM also allows a high degree of horizontal scalability by supporting large numbers of virtual
instances per system. This provides the option of adjusting the number of instances to make
sure that there are enough to prevent a noticeable impact to operation, for example, if a
virtual instance needs to be removed from the environment for maintenance or in the event
of a failure.
Step 4: Java
Migrating Java applications from one platform to another is easy compared to the migration
effort required for C or C++ applications. Even though Java applications are operating system
independent, the following implementation and distribution specifics need to be considered:
Most of the Java distributions have their own Java virtual machine (JVM) implementations.
There will be differences in the JVM switches, which are used to make the JVM and the
Java application run as optimally as possible on that platform. Each JVM switch that is
used in the source Java environment needs to be checked against a similar switch in the
target Java environment.
Even though Java SE Developer Kits (JDKs) are expected to conform to common Java
specifications, each distribution will have slight differences. These differences are in the
helper classes that provide functions to implement specific Java application programming
interfaces (APIs). If the application is written to conform to a particular Java distribution,
the helper classes referenced in the application must be changed to refer to the new Java
distribution classes.
Special procedures must be followed to obtain the best application migration. One critical
point is to update the JVM to the current stable version. Compatibility with earlier
versions is largely maintained, and the performance improvements benefit applications.
Ensure that the just-in-time (JIT) compiler is enabled.
Set the minimal heap size (-Xms) equal to the maximal heap size (-Xmx). The heap size
should always be less than the total memory configured for the server.
For detailed guidance on migrating IBM WebSphere Application Server see the following link:
http://www.redbooks.ibm.com/redbooks/pdfs/sg248218.pdf
Deploying Oracle database in a z/VM SSI environment gives some choices for how the
system can be configured. Oracle RAC One Node is a configuration of Oracle specifically
designed to work with virtualized environments like z/VM. It can offer most of the availability
benefits of full RAC without most of the cluster overhead. It does this by sharing some of the
availability responsibility with the hypervisor. For example, being able to relocate a database
guest from one z/VM system to another might be enough to provide database service levels
high enough for your installation.
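As an illustrative sketch (the database unique name and target node name are placeholders), an administrator can initiate an online relocation of a RAC One Node database with:
srvctl relocate database -d temdb -n dbnode2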
A typical migration consists of running the TAFC and TAFJ environments side by side during
the migration process. Then a phased approach is used to upgrade the multiple parts of the
core banking solution with the least amount of impact on the core banking operations. An
important consideration is to ensure that all customizations and applications support the
JDBC driver for connectivity to the database. This is the only driver supported by Temenos
and IBM LinuxONE. See Figure 4-12.
Figure 4-14 on page 102 shows one option for a cloud architecture.
Figure 4-14 Cloud architecture.
This offering provides the highest levels of security and secure data residency for your core
and delivers the benefits that are inherent with cloud native architecture.
This offering also allows for the concept of hybrid cloud: maintaining the core data
on premises in a cloud-native deployment while simultaneously using IBM Cloud (or other
cloud providers) in a consistent and governed manner. This is achieved through the
combination of Red Hat OpenShift and IBM Cloud Paks.
A possible use case is Temenos Transact deployed on IBM LinuxONE on-premises cloud
native and Temenos Infinity on IBM Hyper Protect public cloud.
Table A-1 provides a sample eConfig for a single instance of an IBM LinuxONE
Rockhopper II server (machine type 3907-LR1) with 4 x IFLs, 832 GB memory, Dynamic
Partition Manager, and key I/O technologies OSA-Express6S and FCP Express32S port channels.
Feature code   Description       Quantity
16             HW for DPM        1
19             Manage FW Suite   1
1064           IFL               4
Table A-2 provides a sample IBM Software Bill of Materials (includes both systems software
and middleware) based upon 4 x IFLs on IBM LinuxONE II (same as in the previous
configuration).
Appendix B. Creating and working with the first IODF for the server
This appendix provides an example of how to create the server’s first IODF, covering the
following aspects:
An example of a minimal IOCP deck to perform this operation
Important aspects and parts of the operation
Enabling the IOCP
An example of verifying that the process succeeded
If there is already an existing IBM LinuxONE server on which the IODF for the new machine
can be created, the IODF can be exported from the existing machine to be installed (using
Standalone IOCP) on the new machine. However, what if this machine is the first IBM
LinuxONE server at your installation? In that scenario, the Standalone IOCP must be used.
Rather than attempting to do the initial definition for the entire machine using this method, a
minimal IOCP deck defining a single LPAR and basic DASD can be used. This simple IOCP
can be activated to make available a single system into which a z/VM system can be installed.
This z/VM system is then used to download the HCM code to a workstation and start the HCD
Dispatcher. HCM is installed and used to create an IODF with more complete definitions of
the system.
An example of a minimal IOCP deck to perform this operation is shown in Example 4-1.
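Because the content of such a deck depends entirely on your configuration, the following hedged sketch shows only the general shape of a minimal deck; every name and number in it (LPAR name, CHPID, PCHID, control unit, link address, and device numbers) is a placeholder that must match your hardware:
ID    MSG1='MINIMAL CONFIG',SYSTEM=(3906,1)
RESOURCE PARTITION=((CSS(0),(LNXVM1,1)))
CHPID PATH=(CSS(0),50),SHARED,PARTITION=((CSS(0),(LNXVM1))),TYPE=FC,PCHID=11C
CNTLUNIT CUNUMBR=9000,PATH=((CSS(0),50)),UNIT=2107,LINK=((CSS(0),23)),UNITADD=((00,64))
IODEVICE ADDRESS=(9000,64),CUNUMBR=9000,UNIT=3390B,UNITADD=00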
The important parts of this IOCP deck are as noted in the following list:
The SYSTEM keyword on the ID macro indicates the machine type as 3906, which is an IBM
LinuxONE Emperor II. This value must be updated to suit the machine being installed.
To enable this minimal IOCP, the Input/output (I/O) Configuration task is started from the
Support Element.
Note: The most convenient way to access the Support Element for this operation is to use
the Single Object Operations function from the HMC.
If the machine is not already operating, a Power-on Reset (POR) is performed using the
Diagnostic (D0) IOCDS. After the POR has completed, you can select one of the diagnostic
LPARs and start the Input/output (I/O) Configuration task.
Four entries, labeled A0 to A3, are shown. D0 is also shown, but it is not user-modifiable.
These are the IOCDS slots, which contain the hardware portion of the I/O definition information.
To update and generate the minimal IOCP deck, select one of the A-slots and choose Edit
source from the menu. When the edit window appears, copy and paste the minimal IOCP
deck you have edited into the edit window. From the File menu click Save, and then close the
editor window. You can then select Build dataset from the menu. After confirming your
selection, the Standalone IOCP program is loaded into the Diagnostic LPAR and processes
the IOCP deck. Progress messages appear in the status box. Ideally, the IOCP deck was
successfully processed and the binary IOCDS slot has been updated with your configuration.
If an error in processing occurred, the Standalone IOCP program updates the source file with
comments to explain the error.
If Standalone IOCP successfully generated your deck, the source file is also updated with
comments that provide some information. Example 4-2 shows an example of this information.
Example 4-2 IOCP comments after successful run
*ICP ICP071I IOCP GENERATED A DYNAMIC TOKEN HOWEVER DYNAMIC I/O
*ICP CHANGES ARE NOT POSSIBLE WITH THIS IOCDS
*ICP ICP063I ERRORS=NO, MESSAGES=YES, REPORTS PRINTED=NO,
*ICP IOCDS WRITTEN=YES, IOCS WRITTEN=YES
*ICP ICP073I IOCP VERSION 05.04.01
The most important part of this output is in ICP063I, where we see IOCDS WRITTEN=YES.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
These IBM Redbooks publications provide additional information about topics in this book.
Note some books are available only in soft copy.
Practical Migration from x86 to LinuxONE
http://www.redbooks.ibm.com/abstracts/sg248377.html?Open
Oracle on LinuxONE
http://www.redbooks.ibm.com/abstracts/sg248384.html
Scale up for Linux on LinuxONE
https://www.redbooks.ibm.com/abstracts/redp5540.html?Open
Securing Your Cloud: IBM Security for LinuxONE
https://www.redbooks.ibm.com/abstracts/sg248447.html?Open
OpenShift OKD on IBM LinuxONE, Installation Guide
http://www.redbooks.ibm.com/abstracts/redp5561.html?Open
IBM DB2 with BLU Acceleration
http://www.redbooks.ibm.com/abstracts/tips1204.html?Open
WebSphere Application Server V8.5 Migration Guide
http://www.redbooks.ibm.com/redbooks/pdfs/sg248218.pdf
GDPS Virtual Appliance V1R1 Installation and Service Guide
https://www.redbooks.ibm.com/redbooks/pdfs/sg246374.pdf
Implementing IBM Spectrum Scale
https://www.redbooks.ibm.com/redpapers/pdfs/redp5254.pdf
IBM Spectrum Scale (GPFS) for Linux on z Systems
https://www.redbooks.ibm.com/redpapers/pdfs/redp5308.pdf
Best practices and Getting Started Guide for Oracle on IBM LinuxONE
http://www.redbooks.ibm.com/redpieces/pdfs/redp5499.pdf
Maximizing Security with LinuxONE
https://www.redbooks.ibm.com/redpapers/pdfs/redp5535.pdf
Hyper Protect Services
https://www.ibm.com/cloud/hyper-protect-services
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
Leveraging IBM LinuxONE and Temenos Transact for Core Banking Solutions
https://www.ibm.com/downloads/cas/NEO7QNLJ
LinuxONE for Dummies
https://www.ibm.com/downloads/cas/LBOVYYJJ
Installing IBM MQ server on Linux
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.ins.doc/
q008640_.htm
SG24-8462-00
ISBN 0738458457
Printed in U.S.A.