Introducing Streaming at Nationwide Building Society
Kafka Summit London, May 2019
Rob Jackson and Pete Cracknell
Nationwide Building Society
Contents
1. Nationwide Building Society
2. What is the business challenge we’re responding to?
3. What is the Speed Layer?
4. Typical current state architecture
5. Target state architecture
6. How does data flow through the Speed Layer?
7. How we consume data from the Speed Layer
8. How the Speed Layer is deployed
9. Progress
10. Streaming assessment
11. Value achieved
12. Demo
Nationwide Building Society
• Formed in 1884 and renamed to become the Nationwide Building Society in 1970
• We’re the largest building society in the world.
• A major provider of mortgages, loans, savings and current accounts in the UK; we launched the first (or second) Internet Banking Service in 1997.
• We recently announced an investment of an additional £1.4 billion (total £4.1bn) over 5 years to simplify, digitise and transform our IT estate.
• Confluent and Kafka are at the heart of an important part of that investment.
What is the business challenge we’re responding to?
• Regulation such as Open Banking
• Business growth
• 24x7 availability expectations from customers and regulators
• Cloud adoption
• Capitalising on our data
• A need for agility and innovation
… and our existing platforms were making this harder than we’d like.
DEFINITION: The Speed Layer will be the preferred source of data for high-volume read-only data requests and event sourcing. It will deliver secure, near-real-time data from back-end systems with speed and resilience. It will use the latest technologies, built for cloud and designed to be highly available. It will provide NBS with its first event-based, real-time data platform, ready for digital.
FOUR KEY CHARACTERISTICS
SCALABILITY: The Speed Layer platform will be built using internet-scale technologies hosted on a cloud-ready PaaS architecture.
FAST AND AGILE: The Speed Layer will unlock data from our Systems of Record, enabling digital and agile development teams to rapidly deliver new features and services.
RICH DATA SET: Provide a rich data set enhanced with third-party data and analytics.
RESILIENT: Reduce the load on core systems and isolate them from the demands of the digital platforms. Designed to be highly resilient and to tolerate multiple infrastructure failures.
What is the Speed Layer?
As-is logical E2E Architecture
API Gateway → Channel Web Services → Enterprise Web Services → Back-end Services → Mainframes
Fairly normal, so is there a problem?
Target System Architecture
(Diagram components: API Gateway; Channel Services; Enterprise Services; Protocol adapters; Mainframes + other sources of data; CDC; Kafka Topics; Stream Processing; Kafka Topics; Microservices; with separate paths for writes and reads.)
(Diagram: System of Record(s): Source DB → CDC Replication Engine → Kafka Raw Topic (raw data) → Stream Processing Microservice → Kafka Published Topic (processed data) → Materialisation Microservice → NoSQL tables → {REST APIs} → Consuming Applications)
1. Change Data Capture (CDC) is deployed to the System of Record (SoR) and pushes changes from the source database to a Kafka topic.
2. Kafka raw topics contain data in the format of the source system. There is one raw topic per replicated table. Data is typically held here for around 7 days.
3. Stream processing (the Kafka Streams framework) transforms the raw data into processed data that is made available to consumers through “Published Topics” (see the sketch below).
4. Kafka Published Topics retain data long term (in line with retention policies and GDPR) and can be used by many Speed Layer Microservices.
5. Speed Layer Microservices are consumers of Kafka Published Topics and push the data they need into their own persistence store (NoSQL, in-memory, etc.).
6. APIs expose data to consumers.
7. Channel applications call Speed Layer Microservices to request data.
8. Note that applications can also subscribe to events and respond to them without materialising them in a database, e.g., sending a push notification to a device.
Data Flow Diagram
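For illustration, here is a minimal Kafka Streams sketch of steps 2-4 above: read one raw CDC topic, transform each record into the published format and write it to a published topic. The topic names, serdes and transformation logic are assumptions for the example only; the real Speed Layer services would use the source systems’ own schemas and serialisation.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import java.util.Properties;

// Sketch only: raw CDC topic -> transformation -> published topic.
public class RawToPublishedSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "speed-layer-accounts-processor"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");              // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Steps 2-3: one raw topic per replicated table (short retention), transformed into a
        // consumer-friendly published topic (long-term retention, per policy and GDPR).
        KStream<String, String> raw = builder.stream("accounts-raw");            // hypothetical topic
        raw.mapValues(RawToPublishedSketch::toPublishedFormat)
           .to("accounts-published");                                            // hypothetical topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    // Placeholder for the mapping from the source-system record layout to the published shape.
    private static String toPublishedFormat(String rawValue) {
        return rawValue.trim();
    }
}

Retention itself is a property of the topics rather than of this application: roughly 7 days on the raw topic and long-term retention on the published topic, in line with the policies described above.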
There are three main approaches for consuming data from the Speed Layer: 1. immediate, real-time message consumption in the Event Driven pattern; 2. usage-specific data sets materialised and exposed through APIs in the Request Driven pattern; 3. functionally aligned, enterprise-level data stores materialised in the Functional Service pattern.
EVENT DRIVEN: Kafka consumers listen and respond to messages as they arrive in near real time and take immediate action on receipt of a message. In this pattern there is no need to materialise the data (see the sketch after this overview).
REQUEST DRIVEN: Consumers are microservices that subscribe to topics and materialise data to their own requirements, exposing it to consuming microservices and apps.
FUNCTIONAL SERVICE: A set of functional, enterprise-level microservices is created, for example an “account” microservice from which all consuming microservices and applications read account data when needed.
Applications and/or services can be re-written to consume data from the Speed Layer to improve performance and reduce demand on back-end systems.
Consumption Patterns Overview
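To illustrate the Event Driven pattern, here is a minimal consumer sketch that subscribes to a published topic and acts on each event as it arrives, without materialising it. The topic name, consumer group and notification call are illustrative assumptions, and the comments note how the Request Driven variant would differ.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Sketch only: an event-driven Speed Layer consumer.
public class EventDrivenConsumerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");        // hypothetical
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "payment-notification-service");   // hypothetical
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("accounts-published"));                        // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Event Driven: react immediately, e.g., push a notification to a device.
                    sendPushNotification(record.key(), record.value());
                    // Request Driven variant: instead of notifying, upsert record.value() into a
                    // NoSQL store keyed by record.key() and serve it later through a REST API.
                }
            }
        }
    }

    // Stand-in for whatever notification mechanism the consuming application actually uses.
    private static void sendPushNotification(String customerId, String event) {
        System.out.printf("push to %s: %s%n", customerId, event);
    }
}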
Multi-site deployment and resilience
(Diagram: deployment across the Primary DC for SoRs, the Standby DC for SoRs and Cloud hosting)
1. CDC writes to a local Kafka cluster, i.e., in the same DC as the mainframe.
2. Kafka topics are replicated to a separate Kafka cluster in our second DC.
3. Independent database clusters run in each datacentre.
4. When required, Kafka topics are replicated to cloud providers using Confluent Replicator (a configuration sketch follows).
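A minimal sketch of point 4, assuming Confluent Replicator is deployed as a Kafka Connect source connector. The property names follow the Confluent Replicator connector configuration but should be verified against the documentation for the version in use; the cluster addresses and topic name are placeholders.

import java.util.Map;

// Sketch only: connector configuration for replicating a Speed Layer topic between clusters.
public class ReplicatorConfigSketch {

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "connector.class", "io.confluent.connect.replicator.ReplicatorSourceConnector",
                // Source: the Kafka cluster co-located with the mainframe/CDC in the primary DC.
                "src.kafka.bootstrap.servers", "kafka-dc1:9092",          // hypothetical address
                // Destination: the Kafka cluster in the second DC or at a cloud provider.
                "dest.kafka.bootstrap.servers", "kafka-dc2:9092",         // hypothetical address
                // Topics the other site's Speed Layer consumers need.
                "topic.whitelist", "accounts-published",                  // hypothetical topic
                "tasks.max", "4");

        // In practice this map would be wrapped as {"name": ..., "config": {...}} and POSTed to
        // the Kafka Connect REST API (POST /connectors) on the Connect cluster doing the copying.
        config.forEach((key, value) -> System.out.println(key + "=" + value));
    }
}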
Progress so far…
• Architectural PoC completed:
1. Initial logical proving
2. Functional and non-functional proving
3. Load testing/benchmarking in Azure and IBM labs, > 80k TPS through a single broker
• Project launched to deliver the production capability and first use cases
1. Split into three use cases, with the first code complete and use cases 2 & 3 progressing well
• Adopting Confluent Kafka across multiple lines of business
1. Speed Layer
2. Event-based designs for originations journeys
3. High volume messaging in Payments
• Working on Streaming Maturity Assessment with Confluent
Adopting an Enterprise Event-Streaming Platform is a Journey
Nationwide is nearly here, with the Speed Layer plus platforms for Mortgages & Payments, but there is more potential to share common ways of working and utilise a common platform for more use cases.
(Maturity curve: value increases as adoption moves from individual projects to a shared platform.)
1. Early interest: a developer downloads Kafka and experiments; pilot(s).
2. Identify a project / start to set up a pipeline: LOB(s); small teams experimenting; 1-3 basic pipeline use cases moved into production, but fragmented.
3. Mission-critical, but disparate LOBs: multiple mission-critical use cases in production with scale, DR & SLAs; streaming clearly delivering business value, with C-suite visibility but fragmented across LOBs.
4. Mission-critical, connected LOBs: a Streaming Platform managing the majority of mission-critical data processes, globally, with multi-datacenter replication across on-prem and hybrid clouds.
5. Central Nervous System: all data in the organization managed through a single Streaming Platform; typically digital natives / digital pure players, probably using Machine Learning & AI.
Expected value (this time next year)
• Enables agility and autonomy in digital development teams
• The first use case alone will remove c. 7bn requests per year from the mainframes
• Will help us maintain our service availability despite unprecedented demand
• Kafka and streaming being adopted across multiple lines of business
• The move to microservices and Kafka enables Nationwide to onboard new use cases quickly and easily
• Speed Layer, streaming and Kafka will help Nationwide head off the threat from agile challenger banks
The Speed Layer will help Nationwide provide customers with a better experience, leading to better customer retention and new revenue streams.
Demo of Speed Layer
• Why we did the Proof of Concept
• Functional walk-through
• Non-functional view