AWS Cloud Native Reference Architecture
Contents
1 History
2 Legal
3 Introduction
3.1 Introduction
3.2 Architecture diagram
3.3 Virtual Private Cloud (VPC)
3.4 Users
4 Peripheral services
4.1 Amazon Route 53
4.2 VPC endpoints
4.3 API gateway
4.4 AWS WAF (Web Application Firewall)
4.4.1 Customisable web security rules
4.4.2 Deploying AWS WAF
4.5 EC2 load balancer
4.5.1 Network load balancer
4.5.2 Application load balancer
4.6 Secrets Manager
5 Amazon Elastic Container Service (ECS)
5.1 Automating containers
5.2 About ECS
5.3 EC2 and Fargate
5.4 ECS terminology
5.5 ECS in the reference architecture
5.5.1 Deployment model
5.5.1.1 Example deployment model
6 Amazon MQ
7 Microservices
7.1 Amazon Kinesis
7.2 AWS Lambda
7.3 Amazon DynamoDB
8 Database
8.1 Amazon RDS for Oracle
8.1.1 Licensing models
8.1.2 Replication
8.2 NuoDB
1 History
Version Date Change Author
2 Legal
© Copyright 2020 Temenos Headquarters SA. All rights reserved.
The information in this guide relates to TEMENOS information, products and services. It also includes
information, data and keys developed by other parties.
While all reasonable attempts have been made to ensure accuracy, currency and reliability of the content
in this guide, all information is provided "as is".
There is no guarantee as to the completeness, accuracy, timeliness or the results obtained from the use of
this information. No warranty of any kind is given, expressed or implied, including, but not limited to
warranties of performance, merchantability and fitness for a particular purpose.
In no event will TEMENOS be liable to you or anyone else for any decision made or action taken in
reliance on the information in this document or for any consequential, special or similar damages, even if
advised of the possibility of such damages.
TEMENOS does not accept any responsibility for any errors or omissions, or for the results obtained from
the use of this information. Information obtained from this guide should not be used as a substitute for
consultation with TEMENOS.
References and links to external sites and documentation are provided as a service. TEMENOS is not
endorsing any provider of products or services by facilitating access to these sites or documentation from
this guide.
The content of this guide is protected by copyright and trademark law. Apart from fair dealing for the
purposes of private study, research, criticism or review, as permitted under copyright law, no part may be
reproduced or reused for any commercial purposes whatsoever without the prior written permission of the
copyright owner. All trademarks, logos and other marks shown in this guide are the property of their
respective owners.
3 Introduction
3.1 Introduction
Our AWS Cloud Native Reference Architecture, though tried and tested, is strictly a model
architecture. You’re free to change any part of it to meet your particular requirements.
We designed this architecture for Transact, our next-generation core banking solution. Transact is a classic three-tier application – Web, Application and Database. We’ve included a message broker for scalability and availability reasons.
Our reference architecture is designed to run in containers in the public AWS cloud. The containers run on EC2 instances. That’s different from traditional on-premises architectures, which run on either virtual or physical hardware.
3.2 Architecture diagram
1. Amazon Web Services (AWS) is a subsidiary of Amazon that provides cloud computing platforms to both individuals and organisations.
3.4 Users
Our channels users are internet users. Our branch users are the bank’s employees and their customers, who access the bank’s system from a branch.
Branch connections are usually made through a secure VPN, but our architecture still needs internet gateways and API gateways – these ensure that our infrastructure is protected from the public internet.
4 Peripheral services
4.1 Amazon Route 53
Our reference architecture uses a managed service offering from AWS called Amazon Route 53. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. DNS servers play an important part in your architecture if you’re exposing part of your architecture over the internet.
Route 53 supports routing features such as the following (a configuration sketch follows the list):
- Geo DNS
- Geo Proximity
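As a hedged illustration of how a Geo DNS record might be set up, the boto3 sketch below creates a geolocation record set; the hosted zone ID, domain name and IP address are hypothetical placeholders rather than values from this architecture.

import boto3

route53 = boto3.client("route53")

# Create a geolocation record so that European users resolve to a regional endpoint.
# The hosted zone ID, record name and IP address below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "transact.example.com",
                "Type": "A",
                "SetIdentifier": "europe",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)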
4.2 VPC endpoints
The architecture consists of two VPCs. One hosts the Transact services, and the other acts as a gateway to this VPC. Branch users access the gateway VPC, which has a VPC endpoint attached that routes traffic to the API gateway service, which then forwards the traffic to the Transact VPC endpoint.
1. A virtual private cloud (VPC) is an on-demand configurable pool of shared computing resources allocated within a public cloud environment, providing a certain level of isolation between the different organizations (denoted as users) using the resources.
From there, and only there, the Transact services are reachable, creating a secure, ring-fenced, one-way-in, one-way-out architecture. There are also endpoints deployed for ECR to allow the services to pull container images, one for S3 as this is required by Fargate, and one for logging to CloudWatch.
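As a rough sketch of how such endpoints could be provisioned, the boto3 calls below create an interface endpoint for ECR and a gateway endpoint for S3; the VPC ID, region, subnet and route-table IDs are hypothetical placeholders rather than values from this architecture.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Interface endpoint so the ECS tasks can pull images from ECR without internet access.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",          # hypothetical Transact VPC
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.eu-west-1.ecr.dkr",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],
    PrivateDnsEnabled=True,
)

# Gateway endpoint for S3, which Fargate requires for pulling image layers.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.eu-west-1.s3",
    RouteTableIds=["rtb-0ccc"],
)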
4.3 API gateway
The API gateway handles all requests from the client, determines which services are needed, and combines them into a synchronous experience for the user. In this architecture, we use it for REST calls against Transact APIs, as well as HTTP traffic requesting BrowserWeb. The API gateway is reachable from the public internet, so using it for all incoming traffic makes a separate internet gateway unnecessary.
The API Gateway can help you with several aspects of creating and managing APIs:
- Metering.
NOTE: You’ll need to define a plan to meter or restrict third-party access to your APIs.
- Access control
- Resiliency
Each API created in API gateway has a stage, and each stage has a unique HTTP endpoint. The name of the stage is present in the endpoint request URL. Going directly via the stage endpoint therefore means that the BrowserWeb container running in ECS receives the request path as /<stage_name>/BrowserWeb, which returns a 404 error code because WildFly doesn’t recognise the stage name. To circumvent this, we need either a custom domain name or, failing that, a CloudFront distribution.
Using a custom domain name for the API allows us to configure the routing ourselves, which means that we can prevent the BrowserWeb container from receiving the stage name in the HTTP request.
If a custom domain name is not available, CloudFront can be used instead. Creating a distribution that has the API gateway stage as its origin, with the stage name included, allows the user to navigate to the CloudFront distribution endpoint, which routes via the API gateway, back to CloudFront, and onwards to the BrowserWeb container. This achieves the goal of removing the API stage name from the request that is sent to the load balancer and consumed by WildFly.
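To illustrate the custom domain approach, the sketch below maps a domain to a deployed API stage at the domain root, so the stage name never appears in the path that reaches BrowserWeb; the domain name, certificate ARN, API ID and stage name are assumptions for illustration only.

import boto3

apigw = boto3.client("apigateway")

# Create a regional custom domain for the API (certificate ARN is a placeholder).
apigw.create_domain_name(
    domainName="banking.example.com",
    regionalCertificateArn="arn:aws:acm:eu-west-1:111111111111:certificate/example",
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Omitting basePath maps the domain root to the stage, so requests arrive at the
# backend as /BrowserWeb rather than /<stage_name>/BrowserWeb.
apigw.create_base_path_mapping(
    domainName="banking.example.com",
    restApiId="abc123defg",
    stage="prod",
)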
4.4 AWS WAF (Web Application Firewall)
AWS WAF gives you control over which traffic to allow or block to your web applications by defining customisable web security rules. You can use AWS WAF to create:
- Custom rules that block common attack patterns, such as SQL injection or cross-site scripting.
4.4.2 Deploying AWS WAF
AWS WAF enables you to deploy new rules within minutes, letting you respond quickly to changing traffic
patterns. It also includes a fully featured API that you can use to automate the creation, deployment, and
maintenance of web security rules.
- The Application Load Balancer (ALB) that fronts your web servers or origin servers running on EC2.
AWS WAF pricing is based on how many rules you deploy and how many web requests your web
application receives. There are no upfront commitments.
4.5 EC2 load balancer
Elastic Load Balancing offers three types of load balancers – the Application Load Balancer, the Network Load Balancer and the Classic Load Balancer – that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant.
4.5.1 Network load balancer
From the API gateway, the request is routed via the VPC link to a network load balancer in the target Transact VPC; the link cannot route directly to an application load balancer.
The network load balancer’s target group is periodically updated with the IP addresses of the application load balancer by a scheduled Lambda function, which performs a DNS lookup against the application load balancer to return its private IP addresses and then registers those IPs as targets of the network load balancer. Access to the network load balancer is locked down so that it is reachable only from the API gateway.
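A minimal sketch of such a scheduled function is shown below, assuming the ALB DNS name and the NLB target group ARN are passed in as environment variables; both values are placeholders, and a real function would also deregister IPs that disappear from the lookup.

import os
import socket
import boto3

elbv2 = boto3.client("elbv2")

def handler(event, context):
    # Resolve the ALB's current private IP addresses via DNS.
    alb_dns_name = os.environ["ALB_DNS_NAME"]          # placeholder env var
    target_group_arn = os.environ["NLB_TARGET_GROUP"]  # placeholder env var
    _, _, ip_addresses = socket.gethostbyname_ex(alb_dns_name)

    # Register the resolved IPs as targets of the network load balancer's
    # IP-based target group (port 80, matching the BrowserWeb listener).
    elbv2.register_targets(
        TargetGroupArn=target_group_arn,
        Targets=[{"Id": ip, "Port": 80} for ip in ip_addresses],
    )
    return {"registered": ip_addresses}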
You can deploy AWS WAF on the Application Load Balancer (ALB) that fronts your web servers.
1. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides compute capacity in the cloud.
2. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services.
4.5.2 Application load balancer
The traffic is routed via the network load balancer to the application load balancer. Once it reaches the ALB, the request is handled according to the load balancer’s rules. A rule dictates the action for a given port and/or request path. The rule in this architecture is to forward all requests on port 80 (HTTP) to the web target group, which has the BrowserWeb service as a registered target.
When more than one Transact Browser container is running, the load balancer distributes the load
between them. The traffic to web containers is HTTP or HTTPS, so an application load balancer is perfect for this requirement.
Operating at the individual request level (Layer 7), the application load balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request. The application load balancer is only reachable from the network load balancer.
4.6 Secrets Manager
Secrets Manager allows secure transmission of secrets to the applications running in the ECS services. These secrets can be database credentials, Amazon MQ credentials, passwords, API keys or even arbitrary text.
In this architecture, we have a Secrets Manager endpoint attached to the cluster where the Transact services are running, allowing Secrets Manager to inject the required credentials for the services that the applications rely on through a call to its API. This allows credentials to be updated periodically without having to redeploy the applications.
1. A managed message broker service for ActiveMQ that makes it easy to set up and operate message brokers in the cloud.
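As a simple hedged illustration, an application (or an ECS task whose execution role has the right permissions) can fetch a credential at start-up with a single API call; the secret name below is a hypothetical example, not one defined by this architecture.

import json
import boto3

secrets = boto3.client("secretsmanager")

# Retrieve database credentials stored as a JSON secret (secret name is a placeholder).
response = secrets.get_secret_value(SecretId="transact/database-credentials")
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]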
5 Amazon Elastic Container Service (ECS)
5.1 Automating containers
Kubernetes (K8S) is the most popular container management and orchestration system. Most cloud providers offer K8S as a managed service to make it simple for users. However, a number of alternatives to K8S are now available – Azure offers Service Fabric Mesh and AWS offers ECS (Elastic Container Service).
5.2 About ECS
We’ve chosen to use Amazon ECS in our reference architecture. ECS is a highly scalable, high-
performance container orchestration service that supports Docker containers.
1. Open-source system for automating deployment, scaling, and management of containerized applications.
2. A cloud-scale platform from Microsoft Azure for hosting Windows or Linux container applications.
3. AWS container orchestration service that supports Docker containers.
5.3 EC2 and Fargate
With the EC2 launch type, the user has to launch and manage the EC2 instances manually. Fargate does this for you, allowing you to focus on building and running applications.
However, Fargate doesn’t provide as much control over your ECS container instances – that is, the low-level access needed to support compliance and governance requirements or broader customisation options.
NOTE: The launch type you use may vary, depending on your requirements. Tony Coleman used AWS
Fargate for Transact and APIs for his main stage cloud demo at GSM2019. However, for the NuoDB layer he
used ECS without Fargate.
5.4 ECS terminology
Task definition
Task definitions specify the container information for your application, such as what containers are part
of your task, what resources they will use, how they are linked together, and which host ports they will
expose.
Service
A service lets you specify how many copies of your task definition to run and maintain in a cluster. You
can optionally use an Elastic Load Balancing load balancer to distribute incoming traffic to containers in
your service.
Amazon ECS maintains that number of tasks and coordinates task scheduling and routing with the
load balancer. You can also use Service Auto Scaling to automatically adjust the number of tasks in
your service, based on load.
Cluster
An Amazon ECS cluster is a regional grouping of one or more container instances on which you can
run task requests.
Task
Tasks are created when your service scales up and stopped when you scale back down.
1. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides compute capacity in the cloud.
2. A compute engine for Amazon ECS that lets you run containers without having to manage servers or clusters.
- One cluster for public facing applications (user agents and public facing APIs).
We could use just one cluster and place all our services within it. We can secure the services by running
them in their own subnets. To make it simpler to manage, we can use multiple clusters too. Multiple
clusters are an inexpensive option – with AWS Fargate we’re only paying for the vCPUs and RAM we
actually use.
Our architecture deploys user agents and public facing APIs in their own services. We use the same approach with our applications. Sometimes we may have to run more than one task definition in a service. For example, we might run NuoDB’s TE (Transaction Engine) and Transact as different tasks within the same Transact service. A sketch of how such a service could be defined follows the notes below.
1. Temenos' digital front office, focused on customer journeys from acquisition through retention.
2. Temenos payments solution for high-value, low-volume payments in the corporate banking space.
3. Distributed SQL database built for the Enterprise.
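The sketch below shows how a service of this kind might be registered with boto3 when using the Fargate launch type; the cluster name, image URI, roles, subnets, security group, target group and secret ARN are all hypothetical placeholders rather than values taken from this reference architecture.

import boto3

ecs = boto3.client("ecs")

# Register a task definition for the BrowserWeb container (image URI is a placeholder).
task_def = ecs.register_task_definition(
    family="browserweb",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "browserweb",
        "image": "111111111111.dkr.ecr.eu-west-1.amazonaws.com/browserweb:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        # Credentials injected from Secrets Manager rather than baked into the image.
        "secrets": [{
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:eu-west-1:111111111111:secret:transact-db",
        }],
    }],
)

# Create a service that keeps two copies of the task running behind the web target group.
ecs.create_service(
    cluster="transact-cluster",
    serviceName="browserweb",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa", "subnet-0bbb"],
            "securityGroups": ["sg-0ccc"],
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/web/abc",
        "containerName": "browserweb",
        "containerPort": 80,
    }],
)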
6 Amazon MQ
As mentioned earlier in this document, we need a message broker for scalability and availability reasons.
TAFJ, our Platform framework, uses JMS, and Amazon MQ supports JMS. This makes Amazon MQ an ideal choice for us.
Message brokers allow different software systems - often using different programming languages, and on
different platforms - to communicate and exchange information.
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and
operate message brokers in the cloud. Amazon MQ reduces your operational load by managing the
provisioning, setup, and maintenance of ActiveMQ, a popular open-source message broker.
Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and
protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using
standards means that in most cases, there’s no need to rewrite any messaging code when you migrate to
AWS.
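For example, a Python client can talk to the Amazon MQ ActiveMQ broker over STOMP using the third-party stomp.py library; the broker endpoint, credentials and queue name below are placeholder assumptions, and the Transact services themselves would typically use JMS from Java instead.

import stomp

# Amazon MQ exposes a STOMP+SSL endpoint on port 61614 (broker endpoint is a placeholder).
broker = ("b-1234abcd-1.mq.eu-west-1.amazonaws.com", 61614)
conn = stomp.Connection([broker])
conn.set_ssl(for_hosts=[broker])
conn.connect("mq_user", "mq_password", wait=True)

# Send a message to a hypothetical queue consumed by the Transact application tier.
conn.send(destination="/queue/transact.inbound", body='{"transactionId": "12345"}')
conn.disconnect()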
We are running Amazon MQ in an Active-Standby, multi-availability-zone deployment. This offers a great deal of resilience by providing failover in the event of a broker, or even an entire availability zone, going down, which means zero downtime for the running Transact services.
1. Java Message Service (JMS) is an application program interface (API) from Sun Microsystems that supports the formal communication known as messaging between computers in a network.
7 Microservices
It’s sensible to isolate online transaction processing from read-only requests. In our reference architecture:
2. We push the data into a high-performance NOSQL database and make it available for READ requests.
To do this, we use a data streaming service to collect and process the real-time data. Once the data is
processed, it’s then pushed into the NOSQL database for consumption.
We also need Java code to retrieve the data from the NOSQL database. We can call this code either a function, because it performs a specific function, or a microservice, since it is a fine-grained service. The function or microservice only needs to run when it’s invoked, which reduces hardware utilisation.
7.1 Amazon Kinesis
Our reference architecture uses Amazon Kinesis to capture, process, and store data streams. Kinesis enables you to process and analyse data as it arrives and respond to new information instantly – you don’t have to wait until all your data is collected before the processing can begin.
With Kinesis, you can:
- Choose the tools that best suit the requirements of your application.
- Ingest real-time data, including video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
NOTE: When we started working with Kinesis, AWS MSK - Kafka as a managed service - wasn’t yet
available.
1. Processes big data in real time. AWS Kinesis can process hundreds of terabytes per hour from high volumes of streaming data from sources such as operating logs, financial transactions and social media feeds.
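As a small hedged sketch of the producer side, a service could push an event onto a Kinesis data stream like this; the stream name and record contents are illustrative assumptions only.

import json
import boto3

kinesis = boto3.client("kinesis")

# Publish a transaction event onto a hypothetical stream for downstream processing.
kinesis.put_record(
    StreamName="transact-events",
    Data=json.dumps({"accountId": "GB001", "amount": 250.00, "type": "DEBIT"}).encode("utf-8"),
    PartitionKey="GB001",  # records for the same account land on the same shard
)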
7.2 AWS Lambda
AWS Lambda is an event-driven serverless computing platform. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.
With Lambda, you can:
- Run code for virtually any type of application or backend service, all with zero administration.
- Set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
Transact microservices are developed using Java and AWS Lambda can run a Java function. Although
you can see only one Lambda in the architecture diagram, our reference architecture uses two Lambdas:
- The second Lambda is used for servicing READ requests from users.
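A minimal Python sketch of a read-serving handler is shown below for illustration (the actual Transact functions are written in Java, as noted above); the DynamoDB table name, environment variable and key shape are hypothetical assumptions.

import os
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "transact-read-model"))  # placeholder

def handler(event, context):
    # Expect the account identifier as a path parameter from the API gateway.
    account_id = event["pathParameters"]["accountId"]

    response = table.get_item(Key={"accountId": account_id})
    item = response.get("Item")

    if item is None:
        return {"statusCode": 404, "body": json.dumps({"message": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}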
7.3 Amazon DynamoDB
Our architecture uses Amazon DynamoDB, a high-performance NOSQL database, for servicing read-only queries.
Amazon DynamoDB:
- Is a key-value and document database that delivers single-digit millisecond performance at any scale.
- Is a fully managed, multiregion, multimaster database with built-in security, backup and restore.
- Can handle more than 10 trillion requests per day and support peaks of more than 20 million requests per second.
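To make the read model concrete, the table that the read Lambda above queries could be created as follows; the table name and key are hypothetical, and on-demand billing is just one reasonable choice.

import boto3

dynamodb = boto3.client("dynamodb")

# Create an on-demand table keyed by account identifier (names are placeholders).
dynamodb.create_table(
    TableName="transact-read-model",
    AttributeDefinitions=[{"AttributeName": "accountId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "accountId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # capacity scales automatically with request volume
)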
1. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services.
2. A fully managed proprietary NoSQL database service that supports key-value and document data structures. It is offered by Amazon.com as part of the Amazon Web Services portfolio.
8 Database
8.1 Amazon RDS for Oracle
Amazon RDS makes it easy to set up, operate, and scale Oracle Database deployments in the cloud.
Amazon RDS allows you to:
- Deploy multiple editions of Oracle Database in minutes with cost-efficient and resizable hardware capacity.
8.1.1 Licensing models
Amazon RDS for Oracle runs under two different licensing models.
License Included
In this service model, you don’t need separately purchased Oracle licenses – the Oracle Database software has already been licensed by AWS.
Bring-Your-Own-License (BYOL)
If you already own Oracle Database licenses, you can use the BYOL model to run Oracle databases on
Amazon RDS. The BYOL model is designed for customers who prefer to use existing Oracle database
licenses or purchase new licenses directly from Oracle.
NOTE: For more information, see Licensing Amazon RDS for Oracle. Oracle EE runs under the BYOL model.
8.1.2 Replication
Amazon RDS for Oracle makes it easy to use replication to enhance availability and reliability for
production workloads.
Using the Multi-AZ deployment option, you can run mission-critical workloads with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database in case of a failure (so there is no need for RAC or GoldenGate).
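As a hedged sketch of these options, the boto3 call below provisions an Oracle EE instance under the BYOL licensing model with Multi-AZ replication enabled; the identifier, instance class, storage size and credentials are placeholders.

import boto3

rds = boto3.client("rds")

# Provision an Oracle EE instance with BYOL licensing and a Multi-AZ standby.
rds.create_db_instance(
    DBInstanceIdentifier="transact-oracle",       # placeholder identifier
    Engine="oracle-ee",
    LicenseModel="bring-your-own-license",        # Oracle EE runs under BYOL
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=500,
    MultiAZ=True,                                 # synchronous standby in a second AZ
    MasterUsername="transact_admin",
    MasterUserPassword="change-me",               # in practice, source from Secrets Manager
)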
8.2 NuoDB
NuoDB’s distributed SQL database combines the elastic scale and continuous availability of the cloud with
the transactional consistency and durability that databases of record demand. Most importantly, NuoDB supports active-active deployment across clouds.