IBM Agile Integration Architecture

The document discusses agile integration architecture using lightweight integration runtimes to implement a container-based and microservices-aligned integration architecture. It covers three aspects: fine-grained integration deployment, decentralized integration ownership, and using cloud-native integration infrastructure. The document argues that this approach allows for more agile and scalable application and integration development compared to past approaches like SOA and centralized ESBs.


Agile integration architecture

Using lightweight integration runtimes to implement a container-based and microservices-aligned integration architecture

Contents:

Authors
How to navigate the book

Section 1: The Impact of Digital Transformation on Integration

Chapter 1: Integration has changed
  The impact of digital transformation
  The value of application integration for digital transformation

Chapter 2: The journey so far: SOA, ESBs and APIs
  The forming of the ESB pattern
  What went wrong for the centralized ESB pattern?
  The API economy and bi-modal IT
  The rise of lightweight runtimes
  Microservices architecture: A more agile and scalable way to build applications
  A comparison of SOA and microservice architecture

Chapter 3: The case for agile integration architecture
  Microservice architecture
  Agile integration architecture
  Aspect 1: Fine-grained integration deployment
  Aspect 2: Decentralized integration ownership
  Aspect 3: Cloud-native integration infrastructure
  How has the modern integration runtime changed to accommodate agile integration architecture?

Section 2: Exploring agile integration architecture in detail

Chapter 4: Aspect 1: Fine-grained integration deployment
  Breaking up the centralized ESB
  What characteristics does the integration runtime need?
  Granularity
  Conclusion on fine-grained integration deployment
  Lessons Learned

Chapter 5: Aspect 2: Decentralized integration ownership
  Decentralizing integration ownership
  Does decentralized integration also mean decentralized infrastructure?
  Benefits for cloud
  Traditional centralized technology-based organization
  Moving to a decentralized, business-focused team structure
  Big bangs generally lead to big disasters
  Prioritizing Project Delivery First
  Evolving the role of the Architect
  Enforcing governance in a decentralized structure
  How can we have multi-skilled developers?
  Conclusions on decentralized integration ownership
  Lessons Learned

Chapter 6: Aspect 3: Cloud-native integration infrastructure
  Cattle not pets
  Integration pets: The traditional approach
  Integration cattle: An alternative lightweight approach
  What's so different with cattle
  Pros and cons
  Application and integration handled by the same team
  Common infrastructure enabling multi-skilled development
  Portability: Public, private, multicloud
  Conclusion on cloud-native integration infrastructure
  Lessons Learned

Section 3: Moving Forward with an Agile Integration Architecture

Chapter 7: What path should you take?
  Don't worry… we haven't returned to point-to-point
  Deployment options for fine-grained integration
  Agile integration architecture and IBM

Chapter 8: Agile integration architecture for the Integration Platform
  What is an integration platform?
  The IBM Cloud Integration Platform
  Emerging use cases and the integration platform

Appendix One: References

Authors

Kim Clark
Integration Architect
[email protected]
Kim is a technical strategist on IBM's integration portfolio, working as an architect providing guidance to the offering management team on current trends and challenges. He has spent the last couple of decades working in the field implementing integration and process-related solutions.

Tony Curcio
Director, Application Integration
[email protected]
After years of implementing integration solutions in a variety of technologies, Tony joined the IBM offering management team in 2008. He now leads the Application Integration team in working with customers as they adopt more agile models for building integration solutions and embrace cloud as part of their IT landscape.

Nick Glowacki
Technical Specialist
[email protected]
Nick is a technical evangelist for IBM's integration portfolio, working as a technical specialist exploring current trends and building leading-edge solutions. He has spent the last 5 years working in the field, guiding a series of teams through their microservices journey. Before that he spent 5+ years in various other roles, as a developer, an architect and an IBM DataPower specialist. Over the course of his career he's been a user of node, xsl, JSON, Docker, Solr, IBM API Connect, Kubernetes, Java, SOAP, XML, WAS, Filenet, MQ, C++, CastIron, IBM App Connect, and IBM Integration Bus.

Sincere thanks go to the following people for their significant and detailed input and review of the material: Carsten Bornert, Andy Garratt, Alan Glickenhouse, Rob Nicholson, Brian Petrini, Claudio Tagliabue, and Ben Thompson.

Executive Summary

Organizations pursuing digital transformation must embrace new ways to use and deploy integration technologies so they can move quickly in a manner appropriate to the goals of multicloud, decentralization and microservices. The application integration layer must transform to allow organizations to move boldly in building new customer experiences, rather than forcing models for architecture and development that pull away from maximizing the organization's productivity.

Many organizations have started embracing agile application techniques such as microservice architecture and are now starting to see the benefits of that shift. This approach complements and accelerates an enterprise's API strategy. Businesses should also seek to use this approach to modernize their existing ESB infrastructure to achieve more effective ways to manage and operate their integration services in their private or public cloud.

This book explores the merits of what we refer to as agile integration architecture¹: a container-based, decentralized and microservice-aligned approach for integration solutions that meets the demands of agility, scalability and resilience required by digital transformation.

Agile integration architecture enables building, managing and operating integration effectively and efficiently to achieve the goals of digital transformation. It includes three distinct aspects that we will explore in detail:
a) Fine-grained integration deployment, b) Decentralized integration ownership, and c) Cloud-native integration infrastructure.

¹ Note that we have used the term "lightweight integration" in the past, but have moved to the more appropriate "agile integration architecture".

How to navigate the book

The book is divided into three sections.

Section 1: The Impact of Digital Transformation on Integration

- Chapter 1: Integration has changed
Explores the effect that digital transformation has had on both the application and integration landscape, and the limitations of previous techniques.

- Chapter 2: The journey so far: SOA, ESBs and APIs
Explores what led us up to this point, the pros and cons of SOA and the ESB pattern, the influence of APIs and the introduction of microservices architecture.

- Chapter 3: The case for agile integration architecture
Explains how agile integration architecture exploits the principles of microservices architecture to address these new needs.

Section 2: Exploring agile integration architecture in detail

- Chapter 4: Aspect 1: Fine-grained integration deployment
Addresses the benefits an organization gains by breaking up the centralized ESB.

- Chapter 5: Aspect 2: Decentralized integration ownership
Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.

- Chapter 6: Aspect 3: Cloud-native integration infrastructure
Provides a description of how adopting key technologies and practices from the cloud-native application discipline can provide similar benefits to application integration.

Section 3: Moving Forward with an Agile Integration Architecture

- Chapter 7: What path should you take?
Explores several ways agile integration architecture can be approached.

- Chapter 8: Agile integration architecture for the Integration Platform
Surveys the wider landscape of integration capabilities and relates agile integration architecture to other styles of integration as part of a holistic strategy.

Section 1:
The Impact of Digital Transformation on Integration

The impact of digital transformation

The rise of the digital economy, like most of the seismic technology shifts over the past several centuries, has fundamentally changed not only technology but business as well. The very concept of the "digital economy" continues to evolve. Where once it was just the part of the economy built on digital technologies, it has evolved to become almost indistinguishable from the "traditional economy", growing to include almost any new technology, such as mobile, the Internet of Things, cloud computing, and augmented intelligence.

At the heart of the digital economy is the basic need to connect disparate data no matter where
it lives. This has led to the rise of application integration, the need to connect multiple applications
and data to deliver the greatest insight to the people and systems who can act on it. In this section
we will explore how the digital economy created and then altered our concept of application
integration.

- Chapter 1: Integration has changed


Explores the effect that digital transformation has had on both the application and integration
landscape, and the limitations of previous techniques.

- Chapter 2: The journey so far: SOA, ESBs and APIs


Explores what led us up to this point, the pros and cons of SOA and the ESB pattern, the influence
of APIs and the introduction of microservices architecture.

- Chapter 3: The case for agile integration architecture


Explains how agile integration architecture exploits the principles of microservices architecture
to address these new needs.

Chapter 1: Integration has changed

The impact of digital transformation

Over the last two years we've seen a tremendous acceleration in the pace at which customers are establishing digital transformation initiatives. In fact, IDC estimates that digital transformation initiatives represent a $20 trillion market opportunity over the next 5 years.² That is a staggering figure with respect to the impact across all industries and companies of all sizes. A primary focus of this digital transformation is to build new customer experiences through connected experiences across a network of applications that leverage data of all types.

However, bringing together these processes and information sources at the right time and within the right context has become increasingly complicated. Consider that many organizations have aggressively adopted SaaS business applications, which have spread their key data sources across a much broader landscape. Additionally, new data sources that are available from external data providers must be injected into business processes to create competitive differentiation.

"To drive new customer experiences organizations must tap into an ever-growing set of applications, processes and information sources, all of which significantly expand the enterprise's need for and investment in integration capabilities."

Finally, AI capabilities, which are being attached to many customer-facing applications, require a broad range of information to train, improve and correctly respond to business events. These processes and information sources need to be integrated by making them accessible synchronously via APIs, propagated in near real time by event streams, and through a multitude of other mechanisms, more so than ever before.

It is no wonder that this growing complexity has increased the enterprise's need for, and investment in, integration capabilities. The pace of these investments, in both digital transformation generally and integration specifically, has led to a series of changes in how organizations are building solutions. Progressive IT shops have sought out, and indeed found, more agile ways to develop than were typical even just a few years ago.

² IDC MaturityScape Benchmark: Digital Transformation Worldwide, 2017, Shawn Fitzgerald.
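The two integration styles named above, synchronous access via an API and near-real-time propagation via an event stream, can be contrasted in a minimal sketch. The order-status domain, function names and in-memory "stream" below are invented for illustration; a real system would sit behind an API gateway and a messaging backbone such as Kafka or MQ.

```python
# Synchronous API style: the consumer asks and waits for the answer.
ORDER_STATUS = {"A123": "shipped"}

def get_order_status(order_id: str) -> str:
    """A synchronous API call: the caller blocks until the state is returned."""
    return ORDER_STATUS[order_id]

# Event-stream style: producers publish changes; subscribers are pushed
# the new state in near real time, without polling.
subscribers = []

def subscribe(callback):
    subscribers.append(callback)

def publish(event: dict):
    ORDER_STATUS[event["orderId"]] = event["status"]
    for callback in subscribers:
        callback(event)

received = []
subscribe(received.append)
publish({"orderId": "A123", "status": "delivered"})

print(get_order_status("A123"))  # synchronous pull of the latest state
print(received)                  # the same change, pushed as an event
```

The point of the contrast is that the API answers only when asked, while the event stream pushes each change to every subscriber the moment it happens.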

The value of application integration for digital transformation

When we consider the agenda for building new customer experiences and focus on how data is accessed and made available for the services and APIs that power these initiatives, we can clearly recognize several significant benefits that application integration brings to the table.

1. Effectively address disparity: One of the key strengths of integration tooling is the ability to access data from any system with any sort of data in any sort of format and build homogeneity. The application landscape is only growing more diverse as organizations adopt SaaS applications and build new solutions in the cloud, spreading their data further across a hybrid set of systems. Even in the world of APIs, there are variations in data formats and structures that must be addressed. Furthermore, every system has subtleties in the way it enables updates and surfaces events. The need for the organization to address information disparity is therefore growing at that same pace, and application integration must remain equipped to address the challenge of emerging formats.

2. Expertise of the endpoints: Each system has its own peculiarities that must be understood and responded to. Modern integration includes smarts around complex protocols and data formats, but it goes much further than that. It also incorporates intelligence about the actual objects, business and functions within the end systems. Application integration tooling is compassionate, understanding how to work with each system distinctly. This knowledge of the endpoint must include not only errors, but authentication protocols, load management, performance optimization, transactionality, idempotence, and much, much more. By including such features "in the box", application integration yields tremendous gains in productivity over coding, and arguably a more consistent level of enterprise-class resiliency.

3. Innovation through data: Applications in a digital world owe much of their innovation to their opportunity to combine data that is beyond their boundaries and create new meaning from it. This is particularly visible in microservices architecture, where the ability of application integration technologies to intelligently draw multiple sources of data together is often a core business requirement. Whether composing multiple API calls together or interpreting event streams, the main task of many microservices components is essentially integration.

4. Enterprise-grade artifacts: Integration flows developed through application integration tooling inherit a tremendous amount of value from the runtime. Users can focus on building the business logic without having to worry about the surrounding infrastructure. The application integration runtime includes enterprise-grade features for error recovery, fault tolerance, log capture, performance analysis, message tracing, and transactional update and recovery. Additionally, in some tools the artifacts are built using open standards and consistent best practices, without requirements for the IT team to be experts in those domains.

Each of these factors (data disparity, expert endpoints, innovation through data, and enterprise-grade artifacts) is causing a massive shift in how an integration architecture needs to be conceived, implemented and managed. The result is that organizations, and architects in particular, are reconsidering what integration means in the new digital age. Enter agile integration architecture, a container-based, decentralized and microservices-aligned approach for integration solutions that meets the demands of agility, scalability and resilience required by digital transformation.

The integration landscape is changing apace with enterprise and marketplace computing demands, but how did we get from SOA and ESBs to modern, containerized, agile integration architecture?

Application integration benefits organizations building digital transformation solutions by effectively addressing information disparity, providing expert knowledge of application endpoints, easily orchestrating activities across applications, and lowering the cost of building expert-level artifacts.

Chapter 2: The journey so far: SOA, ESBs and APIs

Before we dive into agile integration architecture, we first need to understand what came before in a little more detail. In this chapter we will briefly look at the challenges of SOA by taking a closer look at what the ESB pattern was, how it evolved, where APIs came onto the scene, and the relationship between all that and microservices architecture. Let's start with SOA and the ESB, and what went wrong.

The forming of the ESB pattern

As we started the millennium, we saw the beginnings of the first truly cross-platform protocol for interfaces. The internet, and with it HTTP, had become ubiquitous, XML was limping its way into existence off the back of HTML, and the SOAP protocols for providing synchronous web service interfaces were just taking shape. Relatively wide acceptance of these standards hinted at a brighter future where any system could discover and talk to any other system via a real-time synchronous remote procedure call, without reams of integration code as had been required in the past.

From this series of events, service-oriented architecture was born. The core purpose of SOA was to expose data and functions buried in systems of record over well-formed, simple-to-use, synchronous interfaces, such as web services. Clearly, SOA was about more than just providing those services, and often involved some significant re-engineering to align the back-end systems with the business needs, but the end goal was a suite of well-defined, common, re-usable services collating disparate systems. This would enable new applications to be implemented without the burden of deep integration every time: once the integration was done for the first time and exposed as a service, it could be re-used by the next application.

However, this simple integration was a one-sided equation. We might have been able to standardize these protocols and data formats, but the back-end systems of record were typically old and had antiquated protocols and data formats for their current interfaces. Figure 1 below shows where the breakdown typically occurred. Something was needed to mediate between the old system and the new cross-platform protocols.

Figure 1. Synchronous centralized exposure pattern: engagement applications call enterprise APIs exposed by a central integration runtime (the scope of the ESB pattern), which mediates request/response and asynchronous integration with the systems of record.
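That mediation role can be made concrete with a small sketch: the integration runtime's job, at its simplest, was to translate an antiquated back-end format into the cross-platform one the new services spoke. The fixed-width record layout and field names below are invented for illustration, not taken from any particular system of record.

```python
import json

# Hypothetical fixed-width record from a legacy system of record:
# columns 0-5 customer id, 6-26 name, 27-35 balance in cents.
def mediate(legacy_record: str) -> str:
    """Translate a legacy fixed-width record into the JSON a
    cross-platform web service would return."""
    customer_id = legacy_record[0:6].strip()
    name = legacy_record[6:27].strip()
    balance_cents = int(legacy_record[27:36])
    return json.dumps({
        "customerId": customer_id,
        "name": name,
        "balance": balance_cents / 100.0,
    })

record = "000042" + "Jane Smith".ljust(21) + "000012550"
print(mediate(record))
# {"customerId": "000042", "name": "Jane Smith", "balance": 125.5}
```

Multiply this by every protocol, character encoding and error convention of every back-end system, and the appeal of doing the work once and exposing the result as a re-usable service becomes clear.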

This synchronous exposure pattern via web services was what the enterprise service bus (ESB) term was introduced for. It's all in the name: a centralized "bus" that could provide web "services" across the "enterprise". We already had the technology (the integration runtime) to provide connectivity to the back-end systems, coming from the preceding hub-and-spoke pattern. These integration runtimes could simply be taught to offer integrations synchronously via SOAP/HTTP, and we'd have our ESB.

What went wrong for the centralized ESB pattern?

While many large enterprises successfully implemented the ESB pattern, the term is often disparaged in the cloud-native space, and especially in relation to microservices architecture. It is seen as heavyweight and lacking in agility. What has happened to make the ESB pattern appear so outdated?

SOA turned out to be a little more complex than just the implementation of an ESB, for a host of reasons, not the least of which was the question of who would fund such an enterprise-wide program. Implementing the ESB pattern itself also turned out to be no small task.

The ESB pattern often took the "E" in ESB very literally and implemented a single infrastructure for the whole enterprise, or at least one for each significant part of the enterprise. Tens or even hundreds of integrations might have been installed on a production server cluster, and if that was scaled up, they would be present on every clone within that cluster. Although this heavy centralization isn't required by the ESB pattern itself, it was almost always present in the resultant topology. There were good reasons for this, at least initially: hardware and software costs were shared, provisioning of the servers only had to be performed once, and due to the relative complexity of the software, only one dedicated team of integration specialists needed to be skilled up to perform the development work.

The centralized ESB pattern had the potential to deliver significant savings in integration costs if interfaces could be re-used from one project to the next (the core benefit proposition of SOA). However, coordinating such a cross-enterprise initiative, and ensuring that it would get continued funding, and that the funding only applied to services that would be sufficiently re-usable to cover their creation costs, proved to be very difficult indeed. Standards and tooling were maturing at the same time as the ESB patterns were being implemented, so the implementation cost and time for providing a single service were unrealistically high.

"ESB patterns have had issues ensuring continued funding for cross-enterprise initiatives, since those do not apply specifically within the context of a business initiative."

Often, line-of-business teams that were expecting a greater pace of innovation in their new applications became increasingly frustrated with SOA, and by extension the ESB pattern.

Some of the challenges of a centralized ESB pattern were:

• Deploying changes could potentially destabilize other unrelated interfaces running on the centralized ESB.

• Servers containing many integrations had to be kept running and patched live wherever possible.

• Topologies for high availability and disaster recovery were complex and expensive.

• For stability, servers typically ran many versions behind the current release of software, reducing productivity.

• The integration specialist teams often didn't know much about the applications they were trying to integrate with.

• Pooling of specialist integration-skilled people resulted in more waterfall-style engagement with application teams.

• Service discovery was immature, so documentation became quickly outdated.

The result was that creation of services by this specialist SOA team became a bottleneck for projects rather than the enabler that it was intended to be. This typically gave the centralized ESB pattern, by association, a bad name.

Formally, as we've described, ESB is an architectural pattern that refers to the exposure of services. However, as mentioned above, the term is often over-simplified and applied to the integration engine that's used to implement the pattern. This erroneously ties the static and aging centralized ESB pattern to integration engines that have changed radically over the intervening time.

Integration engines of today are significantly more lightweight, easier to install and use, and can be deployed in more decentralized ways that would have been unimaginable at the time the ESB concept was born. As we will see, agile integration architecture enables us to overcome the limitations of the ESB pattern.

If you would like a deeper introduction to where the ESB pattern came from, and a detailed look at the benefits and the challenges that came with it, take a look at the source material for this section in the following article: http://ibm.biz/FateOfTheESBPaper

The API economy and bi-modal IT

External APIs have become an essential part of the online persona of many companies, and are at least as important as their websites and mobile applications. Let's take a brief look at how that evolved from the maturing of internal SOA-based services.

SOAP-style RPC interfaces proved complex to understand and use, and simpler and more consistent RESTful services provided using JSON/HTTP became a popular mechanism. But the end goal was the same: to make functions and data available via standardized interfaces so that new applications could be built on top of them more quickly.

With the broadening usage of these service interfaces, both within and beyond the enterprise, more formal mechanisms for providing services were required. It quickly became clear that simply making something available over a web service interface, or latterly as a RESTful JSON/HTTP API, was only part of the story. That service needed to be easily discovered by potential consumers, who needed a path of least resistance for gaining access to it and learning how to use it. Additionally, the providers of the service or API needed to be able to place controls on its usage, such as traffic control and an appropriate security model. Figure 2 below demonstrates how the introduction of service/API gateways affects the scope of the ESB pattern.

Externally exposed services/APIs

While logically, the provisioning of APIs


Exposure Gateway (external)
outside the enterprise looks like just an
extension of the ESB pattern, there are
Engagement
Applications
both significant infrastructural and design
differences between externally facing
APIs and internal services/APIs.

Internally exposed services/APIs • From an infrastructural point of view,


it is immediately obvious that the APIs
are being used by consumers and
Exposure Gateway
devices that may exist anywhere from
Integration Runtime
a geographical and network point of
view. As a result, it is necessary to
Public API
design the APIs differently to take into
Enterprise API
account the bandwidth available and
of Record

API Gateway
Systems

Integration runtime
the capabilities of the devices used
Request/response integration as consumers.
Asynchronous integration

Scope of the ESB pattern • From a design perspective, we should


Integration
Runtime
not underestimate the difference in
the business objectives of these APIs.
Figure 2. Introduction of service/API gateways internally and externally
External APIs are much less focused
on re-use, in the way that internal
APIs/ services were in SOA, and more
The typical approach was to separate the role of service/API exposure out into a separate gateway.
focused on creating services targeting
These capabilities evolved into what is now known as API management and enabled simple
specific niches of potential for new
administration of the service/API. The gateways could also be specialized to focus on API
business. Suitably crafted channel
management-specific capabilities, such as traffic management (rate/throughput limiting),
specific APIs provide an enterprise
encryption/decryption, redaction, and security patterns. The gateways could also be supplemented
with the opportunity to radically
with portals that describe the available APIs which enable self-subscription to use the APIs along
broaden the number of innovation
with provisioning analytics for both users and providers of the APIs.
partners that it can work with
(enabling crowd sourcing of new ideas),
and they play a significant role in the disruption of industries that is so common today. This realization caused the birth of what we now call the API Economy, and it is a well-covered topic on IBM’s “API Economy” blog.

The main takeaway here is that this progression exacerbated an already growing divide between the older traditional systems of record that still perform all the most critical transactions fundamental to the business, and what became known as the systems of engagement, where innovation occurred at a rapid pace, exploring new ways of interacting with external consumers. This resulted in bi-modal IT, where new decentralized, fast-moving areas of IT needed much greater agility in their development and led to the invention of new ways of building applications using, for example, microservices architecture.

The rise of lightweight runtimes

Earlier, we covered the challenges of the heavily centralized integration runtime—hard to safely and quickly make changes without affecting other integrations, expensive and complex to scale, and so on.

Sound familiar? It should. These were exactly the same challenges that application development teams were facing at the same time: bloated, complex application servers that contained too much interconnected and cross-dependent code, on a fragile, cumbersome topology that was hard to replicate or scale. Ultimately, it was this common paradigm that led to the emergence of the principles of microservices architecture. As lightweight runtimes and application servers such as Node.js and IBM WAS Liberty were introduced—runtimes that started in seconds and had tiny footprints—it became easier to run them on smaller virtual machines, and then eventually within container technologies such as Docker.

Microservices architecture: A more agile and scalable way to build applications

In order to meet the constant need for IT to improve agility and scalability, a next logical step in application development was to break up applications into smaller pieces and run them completely independently of one another. Eventually, these pieces became small enough that they deserved a name, and they were termed microservices.

If you take a closer look at microservices concepts, you will see that they have a much broader intent than simply breaking things up into smaller pieces. There are implications for architecture, process, organization, and more—all focused on enabling organizations to better use cloud-native technology advances to increase their pace of innovation.

However, focusing back on the core technological difference, these small independent microservices components can be changed in isolation to create greater agility, scaled individually to make better use of cloud-native infrastructure, and managed more ruthlessly to provide the resilience required by 24/7 online applications. Figure 3 below visualizes the microservices architecture we’ve just described.
Not least is your challenge of deciding the shape and size of your microservices components. Add to that equally critical design choices around the extent to which you decouple them. You need to constantly balance practical reality with aspirations for microservices-related benefits. In short, your microservices-based application is only as agile and scalable as your design is good, and your methodology is mature.

Figure 3. Microservices architecture: A new way to build applications

In theory, these principles could be used anywhere. Where we see them most commonly is in the
systems of engagement layer, where greater agility is essential. However, they could also be used
to improve the agility, scalability, and resilience of a system of record—or indeed anywhere else in
the architecture, as you will see as we discuss agile integration architecture in more depth.

Without question, microservices principles can offer significant benefits under the right
circumstances. However, choosing the right time to use these techniques is critical, and getting
the design of highly distributed components correct is not a trivial endeavor.
A comparison of SOA and microservice architecture

Microservices inevitably get compared to SOA in architectural discussions, not least because they share many words in common. However, as you will see, this comparison is misleading at best, since the terms apply to two very different scopes. Figure 4 demonstrates this difference in scope.

Figure 4. SOA is enterprise scoped, microservices architecture is application scoped

Service-oriented architecture is an enterprise-wide initiative to create re-usable, synchronously available services and APIs, such that new applications can be created more quickly, incorporating data from other systems. Microservices architecture, on the other hand, is an option for how you might choose to write an individual application in a way that makes that application more agile, scalable, and resilient.

It’s critical to recognize this difference in scope, since some of the core principles of each approach could be completely incompatible if applied at the same scope. For example:

• Re-use: In SOA, re-use of integrations is the primary goal, and at an enterprise level, striving for some level of re-use is essential. In microservices architecture, creating a microservices component that is re-used at runtime throughout an application results in dependencies that reduce agility and resilience. Microservices components generally prefer to re-use code by copy, and accept data duplication, to help improve decoupling between one another.

• Synchronous calls: The re-usable services in SOA are available across the enterprise using predominantly synchronous protocols such as RESTful APIs. However, within a microservice application, synchronous calls introduce real-time dependencies, resulting in a loss of resilience, and also latency, which impacts performance. Within a microservices application, interaction patterns based on asynchronous communication are preferred, such as event sourcing, where a publish/subscribe model is used to enable a microservices component to remain up to date on changes happening to the data in another component.

• Data duplication: A clear aim of providing services in an SOA is for all applications to synchronously get hold of, and make changes to, data directly at its primary source, which reduces the need to maintain complex data synchronization patterns. In microservices applications, each microservice ideally has local access to all the data it needs to ensure its independence from other microservices, and indeed from other applications—even if this means some duplication of data in other systems. Of course, this duplication adds complexity, so it needs to be balanced against the gains in agility and performance, but this is accepted as a reality of microservices design.

So, in summary, SOA has an enterprise scope and looks at how integration occurs between applications. Microservices architecture has an application scope, dealing with how the internals of an application are built. This is a relatively swift explanation of a much more complex debate, which is thoroughly explored in a separate article: http://ibm.biz/MicroservicesVsSoa

However, we have enough of the key concepts to now delve into the various aspects of agile integration architecture.
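The publish/subscribe interaction described above can be sketched in a few lines of code. The broker, topic name, and subscriber below are invented purely for illustration; in a real system the broker would be an event backbone such as Kafka rather than an in-process object.

```python
# Minimal in-process publish/subscribe sketch (illustrative only).
from collections import defaultdict

class Broker:
    """Routes published events to every subscriber of a topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The publisher has no knowledge of, or runtime dependency on,
        # the components consuming the event.
        for callback in self._subscribers[topic]:
            callback(event)

class StockCache:
    """A hypothetical component keeping a local copy of stock levels."""
    def __init__(self, broker):
        self.levels = {}
        broker.subscribe("stock.updated", self.on_stock_updated)

    def on_stock_updated(self, event):
        self.levels[event["sku"]] = event["quantity"]

broker = Broker()
cache = StockCache(broker)
broker.publish("stock.updated", {"sku": "A123", "quantity": 7})
print(cache.levels["A123"])  # the subscriber's local copy is now up to date
```

Because the publisher never calls the subscriber directly, either side can be changed, scaled, or restarted without the other being aware, which is exactly the decoupling property the pattern is chosen for.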

Chapter 3: The case for agile integration architecture

Let’s briefly explore why microservices concepts have become so popular in the application space. We can then quickly see how those principles can be applied to the modernization of integration architecture.

Microservices architecture

Microservices architecture is an alternative approach to structuring applications. Rather than an application being a large silo of code all running on the same server, an application is designed as a collection of smaller, completely independently running components. This enables the following benefits, which are also illustrated in Figure 5 below:

• Greater agility: They are small enough to be understood completely in isolation and changed independently.

• Elastic scalability: Their resource usage can be truly tied to the business model.

• Discrete resilience: With suitable decoupling, changes to one microservice do not affect others at runtime.

Figure 5. Comparison of siloed and microservices-based applications

Microservice components are often made from pure language runtimes such as Node.js or Java, but equally they can be made from any suitably lightweight runtime. The key requirements include that they have a simple dependency-free installation, file-system-based deployment, start/stop in seconds, and strong support for container-based infrastructure.

Microservices architecture enables developers to make better use of cloud-native infrastructure and manage components more ruthlessly, providing the resilience and scalability required by 24/7 online applications. It also improves ownership in line with DevOps practices, whereby a team can truly take responsibility for a whole microservice component throughout its lifecycle and hence make changes at a higher velocity. Microservices architectures lead to the primary benefits of greater agility, elastic scalability, and discrete resilience.

As with any new approach there are challenges too, some obvious, and some more subtle. Microservices are a radically different approach to building applications. Let’s have a brief look at some of the considerations:

• Greater overall complexity: Although the individual components are potentially simpler, and as such are easier to change and scale, the overall application is inevitably a collection of highly distributed individual parts.

• Learning curve on cloud-native infrastructure: To manage the increased number of components, new technologies and frameworks are required, including service discovery, workload orchestration, container management, logging frameworks, and more. Platforms are available to make this easier, but it is still a learning curve.

• Different design paradigms: The microservices application architecture requires fundamentally different approaches to design. For example, using eventual consistency rather than transactional interactions, or the subtleties of asynchronous communication to truly decouple components.

• DevOps maturity: Microservices require a mature delivery capability. Continuous integration, deployment, and fully automated tests are a must. The developers who write code must be responsible for it in production. Build and deployment chains need significant changes to provide the right separation of concerns for a microservices environment.

Microservices architecture is not the solution to every problem. Since there is an overhead of complexity with the microservices approach, it is critical to ensure the benefits outlined above outweigh the extra complexity. However, if applied judiciously, it can provide order-of-magnitude benefits that would be hard to achieve any other way.

Microservices architecture discussions are often heavily focused on alternate ways to build applications, but the core ideas behind it are relevant to all software components, including integration.

Agile integration architecture

If what we’ve learned from microservices architecture means it sometimes makes sense to build applications in a more granular, lightweight fashion, why shouldn’t we apply that to integration too? Integration is typically deployed in a very siloed and centralized fashion, such as the ESB pattern. What would it look like if we were to re-visit that in the light of microservices architecture? It is this alternative approach that we call “agile integration architecture”.

Agile integration architecture is defined as “a container-based, decentralized and microservices-aligned architecture for integration solutions”.

There are three related, but separate, aspects to agile integration architecture:

• Aspect 1: Fine-grained integration deployment. What might we gain by breaking out the integrations in the siloed ESB into separate runtimes?

• Aspect 2: Decentralized integration ownership. How should we adjust the organizational structure to better leverage a more fine-grained approach?

• Aspect 3: Cloud-native integration infrastructure. What further benefits could we gain from a fully cloud-native approach to integration?

Although these each have dedicated chapters, it’s worth taking the time to summarize them at a conceptual level here.

Aspect 1: Fine-grained integration deployment

The centralized deployment of integration hub or enterprise service bus (ESB) patterns, where all integrations are deployed to a single heavily nurtured (HA) pair of integration servers, has been shown to introduce a bottleneck for projects. Any deployment to the shared servers runs the risk of destabilizing existing critical interfaces, and no individual project can choose to upgrade the version of the integration middleware to gain access to new features.

We could break up the enterprise-wide ESB component into smaller, more manageable and dedicated pieces. Perhaps in some cases we can even get down to one runtime for each interface we expose.

These “fine-grained integration deployment” patterns provide specialized, right-sized containers, offering improved agility, scalability and resilience, and look very different to the centralized ESB patterns of the past. Figure 6 demonstrates in simple terms how a centralized ESB differs from fine-grained integration deployment.

Figure 6: Simplistic comparison of a centralized ESB to fine-grained integration deployment

Fine-grained integration deployment draws on the benefits of a microservices architecture we listed in the last section: agility, scalability and resilience:

• Agility: Different teams can work on integrations independently without deferring to a centralized group or infrastructure that can quickly become a bottleneck. Individual integration flows can be changed, rebuilt, and deployed independently of other flows, enabling safer application of changes and maximizing speed to production.

• Scalability: Individual flows can be scaled on their own, allowing you to take advantage of efficient elastic scaling of cloud infrastructures.

• Resilience: Isolated integration flows that are deployed in separate containers cannot affect one another by stealing shared resources, such as memory, connections, or CPU.

Breaking the single ESB runtime up into many separate runtimes, each containing just a few integrations, is explored in detail in “Chapter 4: Aspect 1: Fine-grained integration deployment”.

Aspect 2: Decentralized integration ownership

A significant challenge faced by service-oriented architecture was the way that it tended to force the creation of central integration teams, and infrastructure, to create the service layer. This created ongoing friction in the pace at which projects could run, since they always had the central integration team as a dependency. The central team knew their integration technology well, but often didn’t understand the applications they were integrating, so translating requirements could be slow and error prone. Many organizations would have preferred that the application teams own the creation of their own services, but the technology and infrastructure of the time didn’t enable that.

The move to fine-grained integration deployment opens a door such that ownership of the creation and maintenance of integrations can be distributed. It’s not unreasonable for business application teams to take on integration work, streamlining the implementation of new capabilities. This shift is discussed in more depth in “Chapter 5: Aspect 2: Decentralized integration ownership”.

Aspect 3: Cloud-native integration infrastructure

Integration runtimes have changed dramatically in recent years. So much so that these lightweight runtimes can be used in truly cloud-native ways. By this we are referring to their ability to hand off the burden of many of their previously proprietary mechanisms for cluster management, scaling, and availability to the cloud platform in which they are running.

This entails a lot more than just running them in a containerized environment. It means they have to be able to function as “cattle not pets,” making best use of orchestration capabilities such as Kubernetes and many other common cloud-standard frameworks. We expand considerably on the concepts in “Chapter 6: Aspect 3: Cloud-native integration infrastructure”.

How has the modern integration runtime changed to accommodate agile integration architecture?

Clearly, agile integration architecture requires that the integration topology be deployed very differently. A key aspect of that is a modern integration runtime that can be run in a container-based environment and is well suited to cloud-native deployment techniques. Modern integration runtimes are almost unrecognizable from their historical peers. Let’s have a look at some of those differences:

• Fast lightweight runtime: They run in containers such as Docker and are sufficiently lightweight that they can be started and stopped in seconds, and can be easily administered by orchestration frameworks such as Kubernetes.

• Dependency free: They no longer require databases or message queues, although obviously they are very adept at connecting to them if they need to.

• File-system-based installation: They can be installed simply by laying their binaries out on a file system and starting them up—ideal for the layered file systems of Docker images.

• DevOps tooling support: The runtime should be continuous integration and deployment-ready. Script and property-file-based install, build, deploy, and configuration enable “infrastructure as code” practices. Template scripts for standard build and deploy tools should be provided to accelerate inclusion into DevOps pipelines.

• API-first: The primary communication protocol should be RESTful APIs. Exposing integrations as RESTful APIs should be trivial and based upon common conventions such as the OpenAPI specification. Calling downstream RESTful APIs should be equally trivial, including discovery via definition files.

• Digital connectivity: In addition to the rich enterprise connectivity that has always been provided by integration runtimes, they must also connect to modern resources. For example, NoSQL databases (MongoDB, Cloudant, etc.) and messaging services such as Kafka. Furthermore, they need access to a rich catalogue of application-intelligent connectors for SaaS (software as a service) applications such as Salesforce.

• Continuous delivery: Continuous delivery is enabled by command-line interfaces and template scripts that mesh into standard DevOps pipeline tools. This further reduces the knowledge required to implement interfaces and increases the pace of delivery.

• Enhanced tooling: Enhanced tooling for integration means most interfaces can be built by configuration alone, often by individuals with no integration background. With the addition of templates for common integration patterns, integration best practices are burned into the tooling, further simplifying the tasks. Deep integration specialists are less often required, and some integration can potentially be taken on by application teams, as we will see in the next section on decentralized integration.

Modern integration runtimes are well suited to the three aspects of agile integration architecture: fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. Before we turn our attention to these aspects in more detail, we will take a more detailed look at the SOA pattern for those who may be less familiar with it, and explore where organizations have struggled to reach the potential they sought.
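The file-system-based installation characteristic above is what makes such runtimes fit naturally into layered container images. The following Dockerfile is a minimal sketch in which the base image, directory layout, port, and runtime name are all hypothetical rather than taken from any specific product:

```dockerfile
# Hypothetical lightweight integration runtime image.
# Installation is nothing more than laying binaries out on the file system.
FROM ubuntu:22.04

# Copy the runtime binaries and the integration flows it should serve;
# each COPY becomes its own cacheable image layer.
COPY runtime/ /opt/integration-runtime/
COPY flows/   /opt/integration-runtime/flows/

# No database, message queue, or other hard dependency is required.
EXPOSE 7800
ENTRYPOINT ["/opt/integration-runtime/bin/server", "--work-dir", "/opt/integration-runtime/flows"]
```

Because the flows are just files in a layer, rebuilding the image with a changed flow and rolling it out is an ordinary container deployment, with no product-specific packaging step.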

Section 2: Exploring agile integration architecture in detail

Now that you have been introduced to the concept of agile integration architecture, we are going to dive into greater detail on its three main aspects, looking at their characteristics and presenting a real-life scenario.

- Chapter 4: Aspect 1: Fine-grained integration deployment
Addresses the benefits an organization gains by breaking up the centralized ESB.

- Chapter 5: Aspect 2: Decentralized integration ownership
Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.

- Chapter 6: Aspect 3: Cloud-native integration infrastructure
Provides a description of how adopting key technologies and practices from the cloud-native application discipline can provide similar benefits to application integration.

Chapter 4: Aspect 1: Fine-grained integration deployment

If the large centralized ESB pattern containing all the integrations for the enterprise is reducing agility for all the reasons noted previously, then why not break it up into smaller pieces? This section explores why and how we might go about doing that.

Breaking up the centralized ESB

If it makes sense to build applications in a more granular fashion, why shouldn’t we apply this idea to integration, too? We could break up the enterprise-wide centralized ESB component into smaller, more manageable, dedicated components. Perhaps even down to one integration runtime for each interface we expose, although in many cases it would be sufficient to bunch the integrations as a handful per component.

Figure 7 shows the result of breaking up the ESB into separate, independently maintainable and
scalable components.

Fine-grained integration deployment allows you to make a change to an individual integration with complete confidence that you will not introduce any instability into the environment.

Figure 7: Breaking up the centralized ESB into independently maintainable and scalable pieces

The heavily centralized ESB pattern can be broken up in this way, and so can the older hub and spoke
pattern. This makes each individual integration easier to change independently, and improves agility,
scaling, and resilience.

This approach allows you to make a change to an individual integration with complete confidence that you will not introduce any instability into the environment in which the other integrations are running. You could choose to use a different version of the integration runtime, perhaps to take advantage of new features, without forcing a risky upgrade to all other integrations. You could scale up one integration completely independently of the others, making extremely efficient use of infrastructure, especially when using cloud-based models.

We typically call this pattern fine-grained integration deployment (a key aspect of agile integration architecture) to differentiate it from more purist microservices application architectures. We also want to mark a distinction from the ESB term, which is strongly associated with the more cumbersome centralized integration architecture.

There are of course considerations to be worked through with this approach, such as the increased complexity of more moving parts. Also, although the above could be achieved using virtual machine technology, it is likely that the long-term benefits would be greater if you were to use containers such as Docker, and orchestration mechanisms such as Kubernetes. Introducing new technologies to the integration team can add a learning curve. However, these are the same challenges that an enterprise would already be facing if it were exploring microservices architecture in other areas, so that expertise may already exist within the organization.

What characteristics does the integration runtime need?

To be able to be used for fine-grained deployment, what characteristics does a modern integration runtime need?

• Fast, light integration runtime: The actual runtime is slim, dispensing with hard dependencies on other components such as databases for configuration, and no longer fundamentally reliant on a specific message-queuing capability. The runtime itself can now be stopped and started in seconds, yet none of its rich functionality has been sacrificed. It is totally reasonable to consider deploying a small number of integrations on a runtime like this and then running them independently, rather than placing all integration on a single centralized topology. Installation is equally minimalist and straightforward, requiring little more than laying binaries out on a file system.

• Virtualization and containerization: The runtime should actively support containerization technologies such as Docker and container orchestration capabilities such as Kubernetes, enabling non-functional characteristics such as high availability and elastic scalability to be managed in the standardized ways used by other digital-generation runtimes, rather than relying on proprietary topologies and technology. This enables new runtimes to be introduced, administered, and scaled in well-known ways without requiring proprietary expertise.

• Stateless: The runtime needs to be able to run statelessly. In other words, runtimes should not be dependent on, or even aware of, one another. As such, they can be added to and taken away from a cluster freely, and new versions of interfaces can be deployed easily. This enables the container orchestration to manage scaling, rolling deployments, A/B testing, canary tests, and more, with no proprietary knowledge of the underlying integration runtime. This stateless aspect is essential if there are going to be more runtimes to manage in total.

• Cloud-first: It should be possible to immediately explore a deployment without the need to install any local infrastructure. Examples include providing a cloud-based managed service whereby integrations can be immediately deployed, with a low entry cost and an elastic cost model. Quick starts should be available for simple creation of deployment environments on major cloud vendors’ infrastructures.

This provides a taste of how different the integration runtimes of today are from those of the past. IBM App Connect Enterprise (formerly known as IBM Integration Bus) is a good example of such a runtime. Integration runtimes are not in themselves an ESB; ESB is just one of the patterns they can be used for. They are used in a variety of other architectural patterns too, and increasingly in fine-grained integration deployment.

Granularity

A glaring question then remains: how granular should the decomposition of the integration flows be? Although you could potentially separate each integration into a separate container, it is unlikely that such a purist approach would make sense. The real goal is simply to ensure that unrelated integrations are not housed together. That is, a middle ground with containers that group related integrations together (as shown in Figure 8) can be sufficient to gain many of the benefits that were described previously.

Figure 8: Related integrations grouped together can lead to many benefits.
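Because the runtime is stateless, a generic orchestrator can scale such a group of related integrations and roll out new versions without any product-specific knowledge. As one illustration, a Kubernetes Deployment for a single group might look like the sketch below; the group name, image, labels, and port are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-integrations      # one group of related integration flows
spec:
  replicas: 3                      # scale this group independently of others
  selector:
    matchLabels:
      app: customer-integrations
  strategy:
    type: RollingUpdate            # orchestrator-managed, zero-downtime deploys
  template:
    metadata:
      labels:
        app: customer-integrations
    spec:
      containers:
      - name: integration-runtime
        image: registry.example.com/integration-runtime:1.2.0
        ports:
        - containerPort: 7800
```

Each group of integrations gets its own Deployment like this one, so changing or rescaling one group never touches the others.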



You target the integrations that need the most independence and break them out on their own. On the flip side, keep together flows that, for example, share a common data model for cross-compatibility. In a situation where changes to one integration must result in changes to all related integrations, the benefits of separation may not be so relevant. For example, where any change to a shared data model must be performed on all related integrations, and they would all need to be regression tested anyway, having them as separate entities may be of only minimal value. However, if one of those related integrations has a very different scaling profile, there might be a case for breaking it out on its own. It’s clear that there will always be a mixture of concerns to consider when assessing granularity. The right level of granularity is to allow decomposition of the integration flows to the point where unrelated integrations are not housed together.

Conclusion on fine-grained integration deployment

Fine-grained deployment allows you to reap some of the benefits of microservices architecture in your integration layer, enabling greater agility because of infrastructurally decoupled components, elastic scaling of individual integrations, and an inherent improvement in resilience from the greater isolation.

Lessons learned: A real-life scenario

Let’s examine an organization where an agile methodology was adopted and a cloud had been chosen, but which still had a centralized team that maintained an enterprise-wide data model and ESB. This team realized that they struggled with even a simple change, such as adding a new element to the enterprise message model and the associated exposed endpoint.

The team that owned the model took requests from application development teams. Since it wasn’t reasonable for the modelling CoE (Center of Excellence) team to take requests constantly, they met once a week to talk about changes and determine if the changes would be agreed to. To reduce change frequency, the model was released once a week with whatever updates had been accepted by the CoE. After the model was changed, the ESB team would take action on any related changes. Because of the enterprise nature of the ESB, this would then again have to be coordinated with other builds, other application needs, and releases.

The problem

While this seemed like a reasonable approach, it created issues for the application development team. Adding one element to the model took, at best, two weeks. The application team had to submit the request, then attend the CoE meeting; then, if agreed to, that model would be released the following week. From there, the application development team would get the model, which would contain their change (and any other change any other team had submitted between their last version and the current version), and would then be able to start work implementing business code.

After some time, these two-week procedural delays began to add up. From this point we need to strongly consider whether the value of the highly governed enterprise message model is worth that investment, and whether the consistency gained through the CoE team is worth the delays. On the benefit side, the CoE team can create and maintain standards and keep a level of consistency; on the con side, that consistency incurs a penalty if we look at it through the lens of time to market.

The solution

The solution was to break the data model into bounded contexts based on business focus areas. Furthermore, the integrations were divided up into groups based on those bounded contexts too, each running on separate infrastructure. This allowed each data model and its associated integrations to evolve independently as required, yet still provided consistency across a now narrower bounded context. It is worth noting that although this improved autonomy with regard to data model changes, the integration team was still separate from the application teams, creating scheduling and requirements-handover latencies. In the next section, we will discuss the importance of exploring changes to the organizational boundaries too.

Chapter 5: Aspect 2: Decentralized integration ownership

We can take what we've done in "Aspect 1: Fine-grained integration deployment" a step further. If you have broken up the integrations into separate decoupled pieces, you may opt to distribute those pieces differently from an ownership and administration point of view as well.

The microservices approach encourages teams to gain increasing autonomy such that they can make changes confidently at a more rapid pace. When applied to integration, that means allowing the creation and maintenance of integration artifacts to be owned directly by the application teams rather than by a single separate centralized team. This distribution of ownership is often referred to under the broader topic of "decentralization", which is a common theme in microservices architecture.

It is extremely important to recognize that decentralization is a significant change for most organizations. For some, it may be too different to take on board, and they may have valid reasons to remain completely centrally organized. For large organizations, it is unlikely that it will happen consistently across all domains. It is much more likely that only specific pockets of the organization will move to this approach - where it suits them culturally and helps them meet their business objectives.

We'll discuss what effect that shift would have on an organization, and some of the pros and cons of decentralization.

Decentralizing integration ownership

In the strongly layered architecture described in "Chapter 3: The journey so far: SOA, ESBs and APIs", technology islands such as integration had their own dedicated, and often centralized, teams. Often referred to as the "ESB team" or the "SOA team", they owned the integration infrastructure, and the creation and maintenance of everything on it.

We could debate Conway's Law as to whether the architecture created the separate team or the other way around, but the more important point is that the technology restriction of needing a single integration infrastructure has been lifted. We can now break integrations out into separate decoupled (containerized) pieces, each carrying all the dependencies they need, as demonstrated in Figure 9 below.

Figure 9: Decentralizing integration to the application teams

You'll notice we've also shown the decentralization of the gateways, to denote that the administration of the APIs' exposure moves to the application teams as well.

Technologically, there may be little difference between this diagram and the fine-grained integration diagram in the previous chapter. All the same integrations are present; they're just in a different place on the diagram. What's changed is who owns the integration components. Could you have the application teams take on integration themselves? Could they own the creation and maintenance of the integrations that belong to their applications? This is feasible because not only have most integration runtimes become more lightweight, but they have also become significantly easier to use. You no longer need to be a deep integration specialist to use a good modern integration runtime. It's perfectly reasonable that an application developer could make good use of an integration runtime.

There are many potential advantages to this decentralized integration approach:

• Expertise: A common challenge for separate SOA teams was that they didn't understand the applications they were offering through services. The application teams know the data structures of their own applications better than anyone.

• Optimization: Fewer teams will be involved in the end-to-end implementation of a solution, significantly reducing the cross-team chatter, project delivery timeframe, and inevitable waterfall development that typically occurs in these cases.

• Empowerment: Governance teams were viewed as bottlenecks, or checkpoints that had to be passed. There were artificial delays added to document, review, then approve solutions.

The goal was to create consistency; the con is that creating that consistency took time. The fundamental question is: "Does the consistency justify the additional time?" In decentralization, the team is empowered to implement the governance policies that are appropriate to their scope.

Let's just reinforce the point we made in the introduction of this chapter. While decentralization of integration offers potentially unique benefits, especially in terms of overall agility, it is a significant departure from the way many organizations are structured today. The pros and cons need to be weighed carefully, and it may be that a blended approach, where only some parts of the organization take on this approach, is more achievable.

Does decentralized integration also mean decentralized infrastructure?

To re-iterate, decentralized integration is primarily an organizational change, not a technical one. But does decentralized integration imply an infrastructure change? Possibly, but not necessarily.

The move toward decentralized ownership of integrations and their exposure does not necessarily imply a decentralized infrastructure. While each application team clearly could have its own gateways and container orchestration platforms, this is not a given. The important thing is that they can work autonomously.

API management is very commonly implemented in this way: with a shared infrastructure (an HA pair of gateways and a single installation of the API management components), but with each application team directly administering their own APIs as if they had their own individual infrastructure. The same can be done with the integration runtimes, by having a centralized container orchestration platform on which they can be deployed, but giving application teams the ability to deploy their own containers independently of other teams.

Decentralized integration clearly increases project expertise, focus and team empowerment.

Benefits for cloud

It is worth noting that this decentralized approach is particularly powerful when moving to the cloud. Integration is already implemented in a cloud-friendly way and aligned with systems of record. Integrations relating to the application have been separated out from other unrelated integrations so they can move cleanly with the application. Furthermore, container-based infrastructures, if designed using cloud-ready principles and an infrastructure-as-code approach, are much more portable to cloud and make better use of cloud-based scaling and cost models. With the integration also owned by the application team, it can be effectively packaged as part of the application itself.

In short, decentralized integration significantly improves your cloud readiness.

We are now a very long way from the centralized ESB pattern (indeed, the term makes no sense in relation to this fully decentralized pattern), but we're still achieving the same intent of making application data and functions available for re-use by other applications across, and even beyond, the enterprise.

Traditional centralized technology-based organization

In Figure 10, we show how, in a traditional SOA architecture, people were aligned according to their technology stack.

Figure 10: Alignment of IT staff according to technology stack in an ESB environment.

A high-level organizational chart would look something like this:

• A front-end team, focused on the end user's experience and on creating UIs.

• An ESB team, focused on identifying existing assets that could be provided as enterprise assets. This team would also be focused on creating the services that would support the UIs from the front-end team.

• A back-end team, focused on the implementation of the enterprise assets surfaced through the ESB. There would be many teams here working on many different technologies. Some might provide SOAP interfaces created in Java, some would provide COBOL copybooks delivered over MQ, and yet others would create SOAP services exposed by the mainframe, and so on.

This is an organizational structure with an enterprise focus, which allows a company to rationalize its assets and enforce standards across a large variety of assets. The downside of this focus is that time to market for an individual project was compromised for the good of the enterprise.

A simple example of this would be a front-end team wanting to add a single new element to their screen. If that element doesn't exist on an existing SOAP service in the ESB, then the ESB team would have to get engaged. Then, predictably, this would also impact the back-end team, who would also have to make a change. Now, generally speaking, the code changes at each level were simple and straightforward, so that wasn't the problem.

The problem was allocating the time for developers and testers to work on it. The project managers would have to get involved to figure out who on their teams had capacity to add the new element, and how to schedule the push into the various environments. If we scale this out, we also have competing priorities: each project and each new element would have to be vetted and prioritized, and all of this is what took the time. So now we are in a situation where there is a lot of overhead, in terms of time, for a very simple and straightforward change.

The question is whether the benefits we get by doing governance and creating common interfaces are worth the price we pay in operational challenges. In the modern digital world of fast-paced innovation we must think of a new way to enforce standards while allowing teams to reduce their time to market.

Moving to a decentralized, business-focused team structure

We're trying to reduce the time between the business ask and production implementation, knowing that we may rethink and reconsider how we implement the governance processes that were once in place. Let's now consider the concept of microservices, given that we've broken our technical assets down into smaller pieces. If we don't consider reorganizing, we might actually make it worse! We'll introduce even more hand-offs as the lines of what is an application, and who owns what, begin to blur. We need to re-think how we align people to technical assets. Figure 11 gives a preview of what that new alignment might look like.

Instead of people being centrally aligned to the area of the architecture they work on, they've been decentralized and aligned to business domains. In the past, we had a front-end team, services teams, back-end teams and so on; now we have a number of business teams. For example, an Account team works on anything related to accounts, regardless of whether the accounts involve a REST API, a microservice, or a user interface.

Figure 11: Decentralized IT staff structures.

Big bangs generally lead to big disasters

The concept of "big bangs generally lead to big disasters" isn't only applicable to application code or applications. It's applicable to organizational structure changes as well. An organization's landscape will be a complex heterogeneous blend of new and old. It may have a "move to cloud" strategy, yet it will also contain stable heritage assets. The organizational structure will continue to reflect that mixture. Few large enterprises will have the luxury of shifting entirely to a decentralized organizational structure, nor would they be wise to do so.

For example, if there is a stable application and there is nothing major on the road map for that application, it wouldn't make sense to decompose that application into microservices. Just as that wouldn't make sense, it also would not make sense to reorganize the team working on that application. Decentralization need only occur where the autonomy it brings is required by the organization, to enable rapid innovation in a particular area.

The teams need to have cross-cutting skills since their goal is to deliver business results, not technology. To create that diverse skill set, it's natural to start by picking one person from the old ESB team, one person from the old front-end team, and another from the back-end team. It is very important to note that this does not need to be a big-bang re-org across the entire enterprise; it can be done application by application, and piece by piece.

We certainly do not anticipate reorganization at a company level in its entirety overnight. The point here is more that as the architecture evolves, so should the team structure working on those applications, and indeed the integration between them. If the architecture for an application is not changing, and is not foreseen to change, there is no need to reorganize the people working on that application.

Now let's consider what this change does to an individual and what they're concerned about. The first thing you'll notice about the next diagram is that it shows both old and new architectural styles together. This is the reality for most organizations. There will be many existing systems that are older, more resistant to change, yet critical to the business. Whilst some of those may be partially or even completely re-engineered, or replaced, many will remain for a long time to come. In addition, there is a new wave of applications being built for agility and innovation using architectures such as microservices. There will be new cloud-based software-as-a-service applications being added to the mix too.

If we look into the concerns and motivations of the people involved, they fall into two very different groups, illustrated in Figure 12.

Figure 12: Traditional developers versus agile teams. Traditional concerns include re-use, quality, stability, support, monitoring, governance, performance and fixed requirements, with questions such as "What's its track record?", "Is the vendor trustworthy?", "Will it serve me long term?", "What do the analysts think of it?" and "Could I get sacked for a risky choice?". Agile concerns include prioritizing project delivery first, agility, velocity, autonomy, freemium, cloud native, vendor agnostic, developer is king, rapid prototyping and a short learning curve, with questions such as "Can I start small?", "Can it help me today?", "What do my peers think of it?", "Does it have an active community?" and "Are my skills relevant to my peers?".

A developer of traditional applications cares about stability, generating code for re-use, and doing a large amount of up-front due diligence. The agile teams, on the other hand, have shifted to a delivery focus. Instead of thinking about the integrity of the enterprise architecture first and being willing to compromise on individual delivery timelines, they're now thinking about delivery first and are willing to compromise on consistency.

Agile teams are more concerned with project delivery than they are with enterprise architecture integrity.

Let's view these two conflicting priorities as two ends of a pendulum, with negatives at the extreme of each side. On one side we have analysis paralysis, where all we're doing is talking and thinking about what we should be doing; on the other side we have the wild west, where all we're doing is blindly writing code with no direction or thought for the longer-term picture. Neither side is correct, and both have grave consequences if allowed to slip too far to one extreme or the other. The question still remains: "If I've broken my teams into business domains and they're enabled and focused on delivery, how do I get some level of consistency across all the teams? How do I prevent duplicate effort? How do I gain some semblance of consistency and control while still enabling speed to production?"

Evolving the role of the Architect

The answer is to also consider the architecture role. In the SOA model, the architecture team would sit in an ivory tower and make decisions. In the new world, the architects have an evolved role: practicing architects. An example is depicted in Figure 13.

Figure 13: Practicing architects play a dual role as individual contributors and guild members.

Here we have many teams, and some of the members of those teams are playing a dual role. On one side they are expected to be an individual contributor on the team; on the other they sit on a committee (or guild) that rationalizes what everyone is working on. They are creating common best practices from their work on the ground. They are creating shared frameworks, and sharing their experiences so that other teams don't blunder into traps they've already encountered. In the SOA world, the goal was to stop duplication and enforce standards before development even started. In this model the teams are empowered, and the committee or guild's responsibility is to raise, address, and fix cross-cutting concerns at the time of application development.

If there is a downside to decentralization, it may be the question of how to govern the multitude of different ways that each application team might use the technology – essentially encouraging standard patterns of use and best practices. Autonomy can lead to divergence.

If every application team creates APIs in their own style and convention, it can become complex for consumers who want to re-use those APIs. With SOA, attempts were made to create rigid standards for every aspect of how the SOAP protocol would be used, which inevitably made them harder to understand and reduced adoption. With RESTful APIs, it is more common to see convergence on conventions rather than hard standards. Either way, the need is clear: even in decentralized environments, you still need to find ways to ensure an appropriate level of commonality across the enterprise. Of course, if you are already exploring a microservices-based approach elsewhere in your enterprise, then you will be familiar with the challenges of autonomy.

The practicing architect is now responsible for execution of the individual team mission as well as the related governance requirements that cut across the organization.

Therefore, the practicing architect is now responsible for knowing and understanding what the committee has agreed to, encouraging their team to follow the governance guidelines, bringing up cross-cutting concerns that their team has identified, and sharing what they're working on. This role also needs to be an individual contributor on one of the teams, so that they feel the pain, or benefit, of the decisions made by the committee.

Enforcing governance in a decentralized structure

With the concept of decentralization comes a natural skepticism over whether the committee or guild's influence will be persuasive enough to enforce the standards they've agreed to. Embedding our "practicing architect" into the team may not be enough.

Let's consider how the traditional governance cycle often occurs. It typically involves the application team working through complex standards documents, and having meetings with the governance board prior to the intended implementation of the application to establish agreement. Then the application team would proceed to development activities, normally beyond the eyes of the governance team. On or near completion, and close to the agreed production date, a governance review would occur. Inevitably the proposed project architecture and the actual resultant project architecture will be different, and at times radically different. Where the architecture review board had an objection, there would almost certainly not be time to resolve it. With the exception of extreme issues (such as a critical security flaw), the production date typically goes ahead, and the technical debt is added to an ever-growing backlog.

Clearly the shift we've discussed of placing practicing architects in the teams encourages alignment. However, the architect is now under project delivery pressure, which may mean they fall into the same trap as the teams originally did, sacrificing alignment to hit deadlines. What more can we do, via the practicing architect role, to encourage enforcement of standards?

The key ingredient for success in a modern agile development environment is automation: automated build pipelines, automated testing, automated deployment and more. The practicing architect needs to be actively involved in ways to automate the governance.
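As a small, hypothetical sketch of what automating governance can look like, a build-pipeline step might lint each team's API paths against a convention the guild has agreed. The naming rule, function name, and example paths below are invented for illustration, not taken from any particular organization's standard:

```python
import re

# Hypothetical convention agreed by the guild: paths are versioned and
# use lowercase kebab-case segments, e.g. /v1/customer-accounts.
PATH_RULE = re.compile(r"^/v\d+(/[a-z][a-z0-9-]*)+$")

def check_api_paths(paths):
    """Return the paths that violate the agreed convention.

    Intended to run as an automated pipeline step: a non-empty result
    fails the build, replacing a manual review meeting with an
    immediate, repeatable check.
    """
    return [p for p in paths if not PATH_RULE.match(p)]
```

For example, `check_api_paths(["/v1/customer-accounts", "/GetAccount"])` would flag only `"/GetAccount"`, so the violation surfaces at build time rather than in a governance review weeks later.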

This could be anything from automated code review, to templates for build pipelines, to standard Helm charts that ensure the target deployment topologies are homogeneous even though they are independent. In short, the focus is on enforcement of standards through frameworks, templates and automation, rather than through complex documents and review processes. While this idea of getting the technology to enforce the standards is far from new, the proliferation of open standards in the DevOps tool chain, and in cloud platforms in general, is making it much more achievable.

Let's start with an example: say that you have microservices components that issue HTTP requests. For every HTTP request, you would like to log, in a common format, how long the HTTP transaction took as well as the HTTP response code. If every microservice did this differently, there wouldn't be a unified way of looking at all traffic. Another role of the practicing architect is therefore to build helper artifacts that are then used by the microservices. In this way, instead of the governance process being a gate, it becomes an accelerator, with the architects embedded in the teams, working on code alongside them. The governance cycle is now done with the teams; instead of reviewing documents, the code is the document, and the checkpoint is to make sure that the common code is being used.

Another dimension to note is that not all teams are created equal. Some teams are cranking out code like a factory, others are thinking ahead to upcoming challenges, and some teams are a mix of the two. An advanced team that succeeds in finding a way to automate a particular governance challenge will be a much more successful evangelist for that mechanism than any attempt for it to be created by a separate governance team.

As we are discussing the practicing architect, it may seem that too much is being put on their shoulders. They are responsible for application delivery, they are responsible for being part of the committee discussed in the previous section, and now we are adding an additional element of writing common code that is to be used by other application development teams. Is it too much?

A common way to offload some of that work is to create a dedicated team, under the direction of the practicing architect, that writes and tests this code. The authoring of the code isn't a huge challenge, but the testing of that common code is. The reason for placing a high value on testing is the potential impact of breaking, or introducing bugs into, all the applications that use that code. For this reason, extra due diligence and care must be taken, justifying the investment in the additional resource allocation.

Clearly our aim should be to ensure that general developers in the application teams can focus on writing code that delivers business value. With the architects writing or overseeing common components which naturally enforce the governance concerns, the application teams can spend more of their time on value, and less in governance sessions. Governance based on complex documentation and heavy review procedures is rarely adhered to consistently, whereas inline, tooling-based standardization happens more naturally.
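The HTTP-logging helper artifact described above might be sketched as follows. The function and logger names are hypothetical; the only assumption about the response object is that it exposes a `status` attribute, as `urllib.request` responses do:

```python
import json
import logging
import time

# One shared logger name so every team's records land in the same stream.
logger = logging.getLogger("http.audit")

def log_http_call(method, url, send):
    """Run `send()` (any callable that performs the HTTP exchange and
    returns a response object with a `status` attribute) and emit one
    audit record in the agreed common format."""
    start = time.monotonic()
    response = send()
    record = {
        "method": method,
        "url": url,
        "status": response.status,
        "elapsed_ms": round((time.monotonic() - start) * 1000, 2),
    }
    logger.info(json.dumps(record))  # machine-parseable, uniform across teams
    return response, record
```

Because every microservice routes its calls through the one helper, duration and status code are logged identically everywhere, and the governance checkpoint reduces to verifying that the helper is used.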

How can we have multi-skilled developers?

The next, and very critical, person to consider is the developer. Developers are now expected and encouraged to be full-stack developers and to solve the business problem with whatever technology is required. This puts an incredible strain on each individual developer in terms of the skills that they must acquire. It's not possible for the developer to know the deep ins and outs of every aspect of each technology, so something has to give. As we'll see, what gives is the infrastructure learning curve – we are finding better and better ways to make infrastructural concerns look the same from one product to another.

In the pre-cloud days, developers had to learn multiple aspects of each technology, as categorized in Figure 14.

Figure 14: Required pre-cloud technology skills. For each technology in the stack, the developer had to master operations, deployment, build, artifact creation, security, installation and resource allocation.

Decentralization allows developers to focus on what their team is responsible for: delivering business results by creating artifacts.

Each column represents a technology, and each row represents an area that the developer had to know and care about, and understand the implications of their code on. They had to know, individually for each technology, how to install it, how much resource it would need allocated, and how to cater for high availability, scaling and security; how to create the artifacts, how to compile and build them, where to store them, how to deploy them, and how to monitor them at runtime. All of this was unique and specific to each technology. It is no wonder that we had technology-specific teams!

However, the common capabilities and frameworks of typical cloud platforms now attempt to take care of many of those concerns in a standardized way. They allow the developer to focus on what their team is responsible for: delivering business results by creating artifacts! Figure 15 shows how decentralization removes the "white noise".

Figure 15: Technology skills once common concerns are standardized by the cloud platform: only artifact creation remains technology-specific, while operations, deployment, build, security, installation and resource allocation are handled in a common way.

The grey area represents areas that still need to be addressed but are no longer at the front of the developer's mind. Standardized technology such as (Docker) containers, orchestration frameworks such as Kubernetes, and routing frameworks such as Istio enable management of runtimes in terms of scaling, high availability, deployment and so on. Furthermore, standardization in the way products present themselves, via command-line interfaces, APIs, and simple file-system-based install and deployment, means that standard tools can be used to install, build and deploy, too.

One day, in an ideal world, the only unique thing about using a technology will be the creation of the artifact, such as the code or, in the case of integration, the mediation flows and data maps. Everything else will come from the environment. We'll discuss this infrastructural change in more depth in the next chapter.

Conclusions on decentralized integration ownership

Of course, decentralization isn't right for every situation. It may work for some organizations, or for some parts of some organizations, but not for others. Application teams for older applications may not have the right skill sets to take on the integration work; it may be that integration specialists need to be seeded into their teams. This approach is a tool for potentially creating greater agility for change and scaling, but what if the application has been largely frozen for some time?

At the end of the day, some organizations will find it more manageable to retain a more centralized integration team. The approach should be applied where the benefits are needed most. That said, this style of decentralized integration is what many organizations, and indeed application teams, have always wanted to do, but they may have had to overcome certain technological barriers first.

The core concept is a focus on delivering business value and a shift from a focus on the enterprise to a focus on the developer. This concept has in part manifested itself in the movement from centralized teams to more business-specific ones, but also in more subtle changes such as the role of the practicing architect. It is also rooted in actual technology improvements that are taking concerns away from the developer and handling them uniformly through the facilities of the cloud platform.

As ever, we can refer right back to Conway's Law (circa 1967): if we're changing the way we architect systems and we want it to stick, we also need to change the organizational structure.

Lessons Learned

A real-life scenario

An organization that had committed to decentralization was working with a microservices architecture that had now been widely adopted, and many small, independent assets were created at a rapid pace. In addition, the infrastructure had migrated over to a Docker-based environment. The organization didn’t believe they needed to align their developers with specific technical assets.

The original thought was that any team could work on any technical component. If a feature required a team to add an element onto an existing screen, that team was empowered and had free rein to modify whatever assets were needed to accomplish the business goal. There was a level of coordination before the feature was worked on so that no two teams would be working on the same code at the same time. This avoided the need for merging of code.

In the beginning, for the first 4-5 releases, this worked out beautifully. Teams could work independently and could move quickly. However, over time problems started to arise.

The problem

The main problem was lack of end-state vision. Because each piece of work was taken independently, teams often did the minimum amount of work to accomplish the business objective. The main motivators for each team were risk avoidance, the drive to meet project deadlines, and a desire not to break any existing functionality. Since each team had little experience with the code they needed to change, they began making tactical decisions to lower risk.

Developers were afraid to break currently working functionality. As they began new work, they would work around code that was authored by another team. Therefore, all new code was appended to existing code. The microservices continued growing over time, which resulted in the microservices not being so micro.

This led to technical debt piling up. The technical debt was not apparent over the first few releases, but 5 or 6 releases in, it became a real problem. The next release required the investment of unravelling past tactical decisions. Over time, the re-hashing of previously made decisions outweighed the agility that this organization structure had originally produced.

The solution

The solution was to align teams to microservices components and create clear delineation of responsibilities. This needed to be done through a rational approach. The first step was to break down the entire solution into bounded contexts, then assign teams ownership over those bounded contexts. A bounded context is simply a business objective and a grouping of business functions. An individual team could own many microservices components, but those assets all had to be aligned to the same business objective. Clear lines of ownership and responsibility meant that the team thought more strategically about code modifications. Creating good regression tests now carried much more weight, since each team knew they would have to live with their past decisions.

Importantly, these new ownership lines also meant fewer handoffs between teams to accomplish a business objective. One team would own the business function from start to finish - they would modify the front-end code, the integration layer, and the back-end code, including the storage. This grouping of assets is clearly defined in microservices architecture, and that principle should also carry through to organization structures to reduce the handoffs between teams and increase operational efficiency.

Chapter 6: Aspect 3: Cloud native integration infrastructure

If we are to be truly effective in transitioning to an agile integration architecture, we will need to do more than simply break out the integrations into separate containers. We also need to apply a cloud native - “cattle not pets” - approach to the design and configuration of our integrations.

As a result of moving to a fully cloud native approach, integration then becomes just another option in the toolbox of lightweight runtimes available to people building microservices based applications. Instead of just using integration to connect applications together, it can now also be used within applications, where a component performs an integration centric task.

Cattle not pets

Let’s take a brief look at where that concept came from before we discuss how to apply it in the integration space.

In a time when servers took weeks to provision and minutes to start, it was fashionable to boast about how long you could keep your servers running without failure. Hardware was expensive, and the more applications you could pack onto a server, the lower your running costs were. High availability (HA) was handled by using pairs of servers, and scaling was vertical, by adding more cores to a machine. Each server was unique, precious, and treated, well, like a pet.

Times have changed. Hardware is virtualized, and with container technologies such as Docker, you can reduce the surrounding operating system to a minimum so that you can start an isolated process in seconds at most. Using cloud-based infrastructure, scaling can be horizontal, adding and removing servers or containers at will under a usage-based pricing model. With that freedom, you can now deploy thin slivers of application logic on minimalist runtimes into lightweight independent containers. Running significantly more than just a pair of containers is common and limits the effects of one container going down. By using container orchestration frameworks, such as Kubernetes, you can introduce or dispose of containers rapidly to scale workloads up and down. These containers are treated more like a herd of cattle.

Integration pets: The traditional approach

Let’s examine what the common “pets” model looks like. In the analogy, if you view a server (or a pair of servers that attempt to appear as a single unit) as indispensable, it is a pet. In the context of integration, this concept is similar to the centralized integration topologies that the traditional approach has used to solve enterprise application integration (EAI) and service-oriented architecture (SOA) use cases.

Table 1. Characteristics of pets

General characteristics of pets, and how they are applied to a centralized or traditional integration context:

Manually built: Integration hubs are often built only once, in the initial infrastructure stage. Scripts help with consistency across environments but are mostly run manually.

Managed: The hub and its components are directly and individually monitored during operation, with role-based access control to allow administrative access to different groups of users.

Hand fed: The hub is nurtured over time, for example by introducing new integration applications and changes to OS and software maintenance levels. As part of this process, new options and parameters are applied, changing the overall configuration of the hub. Thus, even if the server started out being based on a defined pattern, the running instance gradually becomes more bespoke with each change in comparison to the original installation.

Server pairs: Typically, pairs of nodes provide HA. Great care is taken to keep these pairs up and running and to back up the evolving configuration. Scalability is coarse-grained and achieved by creating more pairs or adding resources so that existing pairs can support more workloads.

Integration cattle: An alternative lightweight approach

Simplistically, this shift means breaking up the more centralized ESB runtime into multiple separate and highly decoupled runtimes. However, the change involves more than just breaking out the integrations into containers. A cattle-based approach must exhibit many, if not all, of the characteristics in Table 2.

Adopting such an approach then impacts the ways in which your DevOps teams will interact with the environment and with the solution overall. These will be consistent across any solution that exists in a container-based architecture, which will help create efficiencies as more solutions are moved to lightweight architectures.

Table 2. Characteristics of cattle

Characteristics of cattle, and how they are applied to an agile integration architecture context:

Elastic scalability: Integrations are scaled horizontally and allocated on-demand in a cloud-like infrastructure.

Disposable and re-creatable: Using lightweight container technology encourages changes to be made by redeploying amended images rather than by nurturing a running server.

Starts and stops in seconds: Integrations are run and deployed as more fine-grained entities and, therefore, take less time to start.

Minimal interdependencies: Unrelated integrations are not grouped. Functional and operational characteristics drive colocation and grouping.

Infrastructure as code: Resources and code are declared and deployed together.

• Maintenance: Integration servers are not administered live. If you want to make any adjustments - change an integration, add a new one, change property values, add product fixpacks, and so on - this is done by creating a new container image, starting up a new instance based on it, and shutting down the current container.

Why? Any live changes to a running server make it different from the image it was built from - they change its runtime state. This would mean that the container orchestration engine cannot re-create containers at will for failover and scaling.

• Monitoring: Monitoring isn’t done by connecting to a live running server. Instead, the servers report what’s going on inside them via logging, which is aggregated by the platform to provide a monitoring view.

Why? Direct monitoring techniques would not be able to keep up with the constantly changing number of containers, nor would it be appropriate to expect every container to accept monitoring requests alongside its “day job”. Note: there are some exceptions, such as a simple health check, which is used by the container orchestration platform to determine whether the server is functioning correctly and to replace it if required.

• Affinity: Integration servers cannot make any assumptions about how many other replicas are running or where they are. This means careful consideration needs to be paid to anything that implies any kind of affinity, or selective caching of any data.

Why? In a word, scalability. The container orchestration platform must be able to add or remove instances at will. If state is held for any reason, it will not be retained during orchestration.

There are plenty of additional considerations we could discuss, but the overall point is clear: we need to think very differently about how we design, build, and deploy if we are to reap the benefits of greater development agility, elastic scaling, and powerful resilience models.

What’s so different with cattle

How do we know if we’re doing it right? Are we really creating replaceable, scalable cattle, or do we still have heavily nurtured pets? There are many elements to what constitutes an environment made from cattle rather than pets. One important litmus test that we’ll discuss here revolves around the question “What is part of your build package for each new version of a component?”. Take a look at the two images in Figure 16.

An important question to ask when you release a new version is: what is the scope of the build package? If it is code, and only code, then you are treating your server like a pet. This implies that version upgrades and patches would be done at a separate time and through a separate mechanism, leaving you unable to guarantee the consistency of the delivered artifacts. It also implies that you couldn’t spin up a new server quickly enough to meet the demands of elastic scaling.

Figure 16: Pets versus Cattle

Let’s start by defining what is meant by the text in the diagram:

• Code: This is the code that you author and deploy as a unit. In a Java world this would be your JAR/EAR/WAR file. In a Node.js world, this would be the js files. In an IBM Integration Bus (IIB) world, this would be your BAR file.

• Fixed Configuration: These are the dependencies that your code relies on. If your code is making an HTTP call, this would be the HTTP package that you’re using. If you’re using a database connection, this would be the ODBC or JDBC classes.

• Environment Configuration: These are the endpoints that are expected to change environment by environment. For example, if you’re integrating with something over HTTP, this configuration would be the HTTP endpoint. If you’re connecting to a database, this would be the host, port, username and password.

• Runtime: This is what is running your code. It could be your Node runtime, Java JRE, Liberty server, IIB server, MQ server, etc. It is the runtime that interprets and runs your coded artifacts.

If the answer to the litmus test question was everything - code, fixed configuration, runtime and environment configuration - then you are more than likely treating your servers as cattle. This removes the chance that dev and production react differently due to some difference in server configuration, since the server configuration is packaged alongside the code: “infrastructure as code”.
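To make the environment configuration idea concrete, here is a small illustrative Python sketch (not from the booklet; the setting names are hypothetical). The image ships with defaults for every setting, and the platform overlays per-environment values at startup, so the same build artifact runs unchanged in dev and production:

```python
import os

# Hypothetical setting names; a real deployment defines its own.
DEFAULTS = {"HTTP_ENDPOINT": "http://localhost:8080", "DB_HOST": "localhost"}

def load_config(environ=None):
    """Overlay environment-provided values on the packaged defaults."""
    environ = os.environ if environ is None else environ
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}

# The platform injects production endpoints; the code and image never change.
print(load_config({"DB_HOST": "db.prod.internal"}))
# {'HTTP_ENDPOINT': 'http://localhost:8080', 'DB_HOST': 'db.prod.internal'}
```

Only the environment configuration varies per deployment; code, fixed configuration, and runtime travel together inside the image.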

Pros and cons

While we’re clearly encouraging you to consider the benefits of moving to a more cattle-like approach, it’s only fair to recognize that the more traditional pet-like approach also has benefits that might be more challenging to achieve with cattle. For a quick comparison, see Figure 17, which shows some of the characteristics that vary between cattle and pets.

Figure 17: Characteristics of Pets and Cattle (longevity, disposability, resource efficiency, elastic scalability, interdependencies, isolation, maintenance effort, agility, centralization, decomposition)

Integration scenarios vary in the characteristics that they need. With modern approaches to more lightweight runtimes and containers, you have the opportunity to stand up each integration in the way that is most suited to it. You do not need to assume that just because a cattle approach suits many integrations, it will suit all of them. For example, existing integrations that rarely if ever need to be changed, and that have predictable load, may not gain any immediate benefit from the cattle approach. Conversely, new integrations likely to undergo regular amendments, with as yet unknown loads, will benefit significantly. You can use both approaches, and even add hybrid options as required.

Application and integration handled by the same team

Once the application development group has taken on the integration, there’s an elephant in the room: at what point are they doing integration, as opposed to application development?

For good reason, integration teams were often told they should only do integration logic, not application logic. This was to avoid spreading business logic across different teams and components throughout the enterprise. This deep divide between teams doing “application” and “integration” constantly dogged SOA, resulting in a cascade of waterfall-style requirements between the teams that slowed projects down.

Now let’s be clear here: the fundamental premise of separating integration from application is still important, but we no longer need to go to the extremes of having it done by separate teams - that was just enforced on us by the technology of the time.

What if, as in the previous section on decentralization, we moved the integration responsibility into the application team, and that team happened to be building their application using a microservices architecture? One of the key benefits of microservices architecture is that you can use multiple different runtimes, each best suited to the job in hand. For example, one runtime might be focused on the user interface, perhaps based on Node.js and a number of UI libraries. Another runtime might be more focused on a particular need of the solution, such as a rules engine or machine learning. Of course, all applications need to get data in and out, so surely we would expect to also see an integration runtime too.

It is common to find microservices components in an application whose responsibilities are primarily focused around integration. For example, what if all a microservice component did was to provide an API that performed a few invocations to other systems, collated and merged the results, and responded to the caller? That sounds a lot like something an integration tool would be good at. A simple graphical flow - one that showed which systems you’re calling, allowed you to easily find where the data items are merged, and provided a visual representation of the mapping - would be much easier to maintain in the future than hundreds of lines of code.

Integration technology in a microservices architecture can be a high-productivity part of any application.

Let’s look at another example. There’s a resurgence of interest in messaging in the microservices world, through the popularity of patterns such as event-sourced applications and the use of eventual-consistency techniques. So, you’ll probably find plenty of microservice components that do little more than take messages from a queue or topic, do a little translation, and then push the result into a data store. However, they may require a surprisingly large number of lines of code to accomplish this. An integration runtime could perform that with easily configurable connectors and graphical data mapping, so you don’t have to understand the specifics of the messaging and data store interfaces, as depicted in Figure 18.
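As a minimal illustration (not from the booklet) of the “take a message, translate it, store it” component just described, the sketch below uses an in-memory queue and dictionary in place of a real message broker and data store; the field names are hypothetical:

```python
from queue import Queue

def translate(message):
    """A trivial translation step: rename a field and normalize case."""
    return {"customer_id": message["custId"], "name": message["name"].title()}

def drain(queue, store):
    """Consume every queued message, translate it, and push it to the store."""
    while not queue.empty():
        record = translate(queue.get())
        store[record["customer_id"]] = record

queue, store = Queue(), {}
queue.put({"custId": "c-001", "name": "ada lovelace"})
drain(queue, store)
print(store)
# {'c-001': {'customer_id': 'c-001', 'name': 'Ada Lovelace'}}
```

Even in this toy form, the broker-specific and store-specific plumbing would dominate a real implementation, which is exactly the boilerplate an integration runtime’s connectors and mapping tools absorb.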

As you saw in previous sections, the integration runtime is now a truly lightweight component that can be run in a cloud-native style. Therefore, it can easily be included within microservices applications, rather than just being used to integrate between them.

Figure 18: Using a lightweight integration runtime as a component within a microservices application (the diagram shows externally exposed services/APIs behind an external exposure gateway; engagement applications and microservice applications; an API gateway separating public and enterprise APIs; systems of record; lightweight language and integration runtimes; request/response and asynchronous integration; and microservice application boundaries)

When discussing this approach, an inevitable question is: am I introducing an ESB into a microservices application? It is an understandable concern, but it is incorrect, and it’s extremely important to tackle this concern head on. As you may recall from the earlier definitions, an integration runtime is not an ESB. That is just one of the architectural patterns the integration runtime can be a part of.

ESB is the heavily centralized, enterprise-scope architectural pattern discussed earlier in Chapter 3. Using a modern lightweight integration runtime to implement integration-related aspects of an application, deploying each integration independently in a separate component, is very different indeed from the centralized ESB pattern. So the answer is no: by using a lightweight integration runtime to containerize discrete integrations, you are most certainly not re-creating the centralized ESB pattern within your microservices application.
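To ground the earlier example of an integration-centric microservice - one that invokes a few systems, collates and merges the results, and responds to the caller - here is a minimal, illustrative Python sketch (not from the booklet). The two fetch functions are hypothetical stand-ins for calls to real back-end systems:

```python
def fetch_profile(customer_id):
    # Hypothetical stand-in for a call to a customer-profile system.
    return {"id": customer_id, "name": "Ada Lovelace"}

def fetch_orders(customer_id):
    # Hypothetical stand-in for a call to an order system.
    return [{"order": "o-1"}, {"order": "o-2"}]

def customer_summary(customer_id):
    """Collate and merge results from two back-end systems for the caller."""
    profile = fetch_profile(customer_id)
    orders = fetch_orders(customer_id)
    return {**profile, "order_count": len(orders)}

print(customer_summary("c-001"))
# {'id': 'c-001', 'name': 'Ada Lovelace', 'order_count': 2}
```

A graphical integration flow expresses the same invoke-merge-respond shape declaratively, which is where the maintainability argument above comes from.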

One of the key benefits of microservices architecture is that you are no longer restricted to one language or runtime, which means you can have a polyglot runtime - a collection of different runtimes, each suited to different purposes. You can introduce integration as just another of the runtime options for your microservices applications. Whenever you need to build a microservices component that’s integration centric, you would then expect to use an integration runtime.

Traditionally, integration runtimes have mostly been used for integration between separate applications - and they will certainly continue to perform that role - but here we are discussing their use as a component within an application.

In the past, it would have been difficult for application developers to take on integration, since the integration tooling wasn’t part of the application developer’s toolbox. Deep skills were often required in the integration product and in associated integration patterns. Today, with the advances in simplicity of integration runtimes and tooling, there is no longer a need for a separate dedicated team to implement and operate them. Integrations are vastly easier to create and maintain.

In a world where applications are now composed of many fine-grained components that can be based on a polyglot of different runtimes, we now have the opportunity to use the right runtime for each task at hand. Where integration-like requirements are present, we can choose to use an integration runtime.

Common infrastructure enabling multi-skilled development

What is it exactly that has made it possible for microservice application teams to work with multiple different languages and runtimes within their solution? Certainly, in part it comes down to the fact that languages have become more expressive - you can achieve more with fewer lines of code - and tooling has become easier to learn and more powerful. However, there’s another key reason that is directly related to what cloud-native brings to the table: the runtimes share a common infrastructure, not just at the operating system level, but in many other dimensions.

Historically, each runtime type came with its own proprietary mechanisms for high availability, scaling, deployment, monitoring and other system administration tasks. Figure 19 demonstrates the difference between traditional and cloud native infrastructures.

Figure 19: Traditional infrastructure with every capability tied to a specific runtime, and a cloud native infrastructure with almost all capabilities provided by the platform.

Modern lightweight runtimes are designed to leverage many if not all of those capabilities from the platform in which they sit. Cloud native platforms such as Kubernetes, combined with suitable runtime frameworks, enable a lightweight runtime to be made highly available, scaled, monitored and more in a single standardized way, rather than in a different way for each runtime.

Essentially, the team only needs to gain one set of infrastructure skills, and they can then look after the polyglot of runtimes in the application. This standardization extends into common source code repositories such as GitHub and build tools such as Jenkins. It also increases the consistency of deployment, as you are propagating pre-built images that include all dependencies out to the environments. Finally, it simplifies installation, which becomes simply layering files onto the file system.

Ideally, the only new skill you need to pick up to use another runtime is how to build its artifacts, whether that be writing code for a language runtime or building mediation flows for an integration engine. Everything else is done the same way across all runtimes.

Once again, this brings the freedom to choose the best runtime for the task at hand. Based on the information above, it is clear that if a microservices-based application has components that are performing integration-like work, introducing a lightweight integration runtime to the toolkit will aid productivity with a minimal learning curve.

Portability: Public, private, multicloud

One of the major benefits of using a cloud native architecture is portability. The goal of many organizations is to be able to run containers anywhere, and to be able to move freely between a private cloud, various vendors of public cloud, or indeed a combination of these.

Cloud native platforms must ensure compatibility with standards such as Open API, Docker and Kubernetes if this portability is to be a reality for consumers. Equally, runtimes must be designed to take full advantage of the standardized aspects of the platforms.

An example might be data security. Let’s assume a solution has sensitive data that must remain on-premises at this point in time. However, regulations and cloud capabilities may mature such that it could move off-premises at some point in the future. If you use cloud native principles to create your applications, then you have much greater freedom to run those containers anywhere in the future. Other examples might include development and test in one cloud environment and production in a different one, or using a different cloud vendor for a disaster recovery facility.

Whatever the reason, we are at a point where applications can be more portable than ever before, and this also applies to the integrations that enable us to leverage their data. Those integrations need to be deployable to any cloud infrastructure, and indeed to enable the secure and efficient spanning of multiple cloud boundaries.

Conclusion on cloud native integration infrastructure

When we decompose what a microservices application is actually composed of, we see there is a blend of both business logic and integration. There will always be a benefit to writing integration-specific microservices in a lightweight integration runtime and taking advantage of the productivity enhancements. If we have an integration runtime available that can behave just like any other lightweight runtime, truly playing to cloud-native principles, then that’s what we should be using when it comes to the many integration-centric tasks required in modern applications. It is an essential tool in the cloud-native toolbox.

Lessons Learned

A real-life scenario

An organization had adopted a microservices architecture with agile methodologies. On their roadmap, this organization was on pace to build out many microservices in a very short amount of time. This notion was perfectly aligned with the attributes of microservices architecture and did not indicate any reason for concern.

The team knew enough to plan for avoiding noisy neighbor scenarios, which would certainly have led to dependency clashes. To avoid such problems, they established the need to create a new runtime for each microservice. However, they did not choose to implement this on a cloud infrastructure. Instead, the team adopted VMs to provide this containment and required that each microservice run on its own VM.

The problem

The teams immediately came to a standstill, because the creation of each new service meant that they would have to create a unique VM, install a runtime on top of that VM, configure each one for that particular use case, and finally add code to that runtime. These steps would then have to be repeated and tested for each and every environment.

Development velocity came to a screeching halt, as onboarding new microservices took too much time. Developers were stuck waiting for the creation of the infrastructure to run each new microservice. Inevitably, this raised the notion of leveraging runtimes that were already created. This was the exact behavior the organization had set out to avoid!

The solution

The team then realized the need for containers. A necessary component to support a microservices architecture is a cloud environment. The team quickly realized that the isolation that containers provide solved the problem of version clashes, as well as isolating each individual container from the noisy neighbor scenario. The solution here was therefore straightforward - the team agreed on and adopted a cloud platform.

While this improved the situation, it didn’t entirely solve the problem. The team was still treating Docker containers like VMs. The container was started with the necessary running software and dependencies, but code came and went with each new version. The concept of packaging and treating Docker images differently than VMs was lost. To improve this state, the team picked the appropriate workload and started with stateless services. From here, they could treat Docker containers like cattle, enabling a container to be disposable. They also ensured that each new version of code resulted in a new Docker image, ensuring greater consistency between environments and a more technology-independent build chain. This provided the agility the team needed to keep up with the demands of a microservices architecture.

Section 3: Moving Forward with an Agile Integration Architecture

Now that you understand the concepts of an agile integration architecture, it is important that we examine next steps. While no two journeys are the same, there are some commonalities that can be explored which may help you along the path to making an agile integration architecture a reality.

- Chapter 7: What path should you take? Explores several ways agile integration architecture can be approached.

- Chapter 8: Agile integration architecture for the Integration Platform. Surveys the wider landscape of integration capabilities and relates agile integration architecture to other styles of integration as part of a holistic strategy.

Chapter 7: What path should you take?

So far, you have seen how the centralized ESB pattern is in some cases being replaced by one or more of the following new approaches:

• Fine-grained integration deployment splits up the centralized ESB pattern into more granular, manageable pieces to enable a much more agile, scalable, and resilient usage of integration runtimes.

• Decentralized integration ownership puts the creation and maintenance of integrations into the hands of application teams, reducing the number of teams and touchpoints involved in the creation and operation of end-to-end solutions.

• Cloud native integration infrastructure fully extends agile integration architecture principles into the cloud native space, treating the integration runtime as a true cloud native component.

Each of these aspects is an independent architectural or organizational decision that may be a good fit for your upcoming business solutions. Furthermore, although this booklet has described a likely sequence for how these approaches might be introduced, other sequences are perfectly valid.

Each aspect of agile integration architecture is an independent architectural decision, any one of which may be a benefit to your business.

For example, decentralization could precede the move to fully fine-grained integration deployment if an organization were to enable each application team to implement their own “separate ESB pattern”. Indeed, if we were being pedantic, this would really be an application service bus or a domain service bus. This would certainly be decentralized integration - application teams would take ownership of their own integrations - but it would not be fine-grained integration, because each application team would still have one large installation containing all the integrations for their application.

The reality is that you will probably see hybrid integration architectures that blend multiple approaches. For example, an organization might have already built a centralized ESB for integrations that are now relatively stable and would gain no immediate business benefit by refactoring. In parallel, they might start exploring fine-grained integration deployment for new integrations that are expected to change quite a bit in the near term.

Don’t worry…we haven’t returned to point-to-point

Comparing the point-to-point architectures we were trying to escape from in the early 2000s with the final fully decentralized architectures we’ve discussed, it might be tempting to conclude that we have come full circle, and are returning to point-to-point integration. The applications that require data now appear to go directly to the provider applications. Are we back where we started?

To solve this conundrum, you need to go back to what the perceived problem was with point-to-point integration in the first place: interfacing protocols were many and varied, and application platforms didn’t have the necessary technical integration capabilities out of the box. For each and every integration between two applications, you would have to write new, complex, integration-centric code for both the service consumer and the service provider.

Now compare that situation to the modern, decentralized integration pattern. The interface protocols in use have been simplified and rationalized such that many provider applications now offer RESTful APIs - or at least web services - and most consumers are well equipped to make requests based on those standards.

Where applications are unable to provide an interface over those protocols, powerful integration tools are available to the application teams to enable them to rapidly develop APIs/services using primarily simple configuration and minimal custom code. Along with wide-ranging connectivity capabilities to both old and new data sources and platforms, these integration tools also fulfill common integration needs such as data mapping, parsing/serialization, dynamic routing, resilience patterns, encryption/decryption, traffic management, security model switching, identity propagation, and much more - again, all primarily through simple configuration, which further reduces the need for complex custom code.

The icing on the cake is that, thanks to the maturity of API management tooling, you are now able to not only provide those interfaces to consumers, but also:

• make them easily discoverable by potential consumers

• enable secure, self-administered on-boarding of new consumers

• provide analytics in order to understand usage and dependencies

• promote them to externally facing so they can be used by third parties

• potentially even monetize APIs, treating them as a product that’s provided by your enterprise rather than just a technical interface
In this more standards-based, API-led integration, there is little burden on either side when a consuming application wants to make use of APIs offered by another provider application.

Of course, API management is only part of the picture. API management provides the standardized, secure, discoverable exposure of an API, but what if the application in question doesn't provide an API today? Or it does, but it's the wrong granularity, or it is overly complicated, or it has a complex security model? This is where application integration runtimes come into play. They provide the tools to perform deep connectivity, unpick complex protocols, and compose multiple requests to produce an API that is appropriate for exposure through an API management layer.

It's not point-to-point because this integration and surfacing of the API is only done once, on the provider side, for a given capability. It can then be re-used easily by multiple consumers, and its usage can be monitored and controlled in a standardized way.

Deployment options for fine-grained integration

As the organization considers shifting the architecture, there will be an inevitable question about whether to deploy the integration components on premises or on the cloud. Many organizations are choosing both, recognizing that there are scenarios that lend themselves more in one direction or the other. Increasingly, organizations will need to deploy integration technology in hybrid fashions and therefore need choice of deployment option and consistent functionality in all options.

Therefore, when it comes to deployment options, the integration technology must provide "choice with consistency". Consistency refers to having the same capabilities available regardless of how the platform is deployed. In this way, enterprise users have ultimate flexibility and avoid making trade-offs between "right architecture" and "best productivity".

Choice means that there are multiple deployment models that help satisfy organizational imperatives, which may include:

• Simplified administration and management
• Performance optimization
• Dynamic scalability/flexibility

Organizations should seek out options for a hosted service in the cloud (often referred to as an Enterprise iPaaS), an installable software image, or a prebuilt Docker image (as we have largely been discussing). Each of these deployment options has a value that aligns to the imperatives listed above. Your specific organizational goals will lead you to choose one of these options over the others. The three imperatives are expanded on here to help guide that decision making.

Simplified administration and management

One of the great benefits of managed software is that it lowers the level of expertise required for anyone to be successful. This is a key concern where enterprises are looking to push integration capabilities outside of their core IT operation. Many organizations are seeking simpler deployment, management and administration models, particularly when the workloads are not as aggressive, or where cost is a primary issue.
Where a single organization has integration needs across multiple solutions (i.e. most businesses), that business may in fact seek to satisfy both imperatives. In this situation, organizations may favor the managed service option. An environment can be provisioned within a multi-tenant cloud within minutes. The vendor maintains the health of the environment and the currency of the software, greatly reducing the time, energy and cost of traditional server installations.

Performance optimization

Maximizing performance is a multi-faceted requirement. Within real-time architectures, the primary consideration is typically reducing latency. In this scenario, we want the message (or service call) to execute with as little friction as possible. Collocating hardware has an advantage in reducing network hops and avoiding network congestion. Pinning key reference data in local caches provides a means of avoiding additional external calls, which themselves introduce communication time. Ensuring the service has a large enough pipe at any time to accept incoming requests also avoids wait times. A system that deals with such requirements effectively tends to cost more, but where the business solution is mission-critical, it may well be worth the time, effort and cost.

If performance optimization is the primary requirement, an organization will likely prefer an on-premises installation on dedicated hardware and network infrastructure. The integration platform should be installable in the hardware environments of your choice (X, P and Z hardware), whichever best fits the solution requirements.

Dynamic scalability/flexibility

Many organizations have spikes in processing that happen at various times in the year. For the retailer, these periods occur around Thanksgiving or Valentine's Day (or others, depending on the specific merchandise). For healthcare companies, there is a tendency to see larger workloads during open enrollment periods in November and December. However, other spikes in workload cannot be so neatly planned, and when the workload represents significant business opportunity for profit, the ability to scale up processing quickly is paramount to success. In this book, we have explored the container-based and microservices-aligned architecture, which is perfectly suited to helping organizations with this requirement. While other architecture choices do exist, the repeatability of the container-based model across many IT disciplines makes it increasingly attractive.

As we have discussed earlier in this book, the integration technology should be available as a container. This fine-grained deployment model removes single points of management and control so that the architecture can scale independently of other workloads in the environment. Following the principles of cloud-native applications, the technology is then a perfect fit for organizations pursuing such scalability and flexibility.
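To make the caching technique mentioned under "Performance optimization" concrete, the sketch below shows a minimal local reference-data cache with a time-to-live. It is purely illustrative: the `remote_lookup` function stands in for whatever external call the integration would otherwise make on every request, and is not any specific product API.

```python
import time

class ReferenceDataCache:
    """Local cache for slow-changing reference data (e.g. currency codes).

    Avoids a network round trip on every request by serving entries from
    memory until they are older than `ttl_seconds`.
    """

    def __init__(self, fetch_fn, ttl_seconds=300):
        self._fetch_fn = fetch_fn   # callable performing the real (remote) lookup
        self._ttl = ttl_seconds
        self._entries = {}          # key -> (value, time fetched)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is not None:
            value, fetched_at = entry
            if time.monotonic() - fetched_at < self._ttl:
                return value        # still fresh: no external call needed
        value = self._fetch_fn(key)  # stale or missing: refresh from the source
        self._entries[key] = (value, time.monotonic())
        return value

# Example: count how often the "remote" lookup is actually invoked.
calls = []
def remote_lookup(key):
    calls.append(key)
    return key.upper()

cache = ReferenceDataCache(remote_lookup, ttl_seconds=60)
cache.get("usd")
cache.get("usd")   # served from memory, no remote call
cache.get("eur")
```

Serving the second "usd" request from memory is exactly the saved round trip the text describes; a production cache would also need invalidation and memory bounds.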
Agile integration architecture and IBM

IBM has been leading innovation in the integration space for 20 years, is a market leader for each integration capability, and has been investing significantly in agile integration architecture. As such, the aspects that we've explored through the prior chapters are all areas that are supported by the IBM Cloud Integration Platform.

In the following chapter, we will provide a survey of the IBM Cloud Integration Platform so that you can understand the key capabilities it offers and some of the primary use cases that customers generally apply it to. We hope that material is useful in complementing your integration strategy.

While not covered further in this book, another technology which will be interesting to organizations who recognize the merits of this approach is IBM Cloud Private. IBM Cloud Private is a robust application platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image repository, a management console, and monitoring frameworks. IBM Cloud Private also includes a graphical user interface which provides a centralized location from which you can deploy, manage, monitor, and scale your applications. IBM Cloud Private fully supports the orchestration requirements of the approaches we have described in this book.

Chapter 8: Agile integration architecture for the Integration Platform

What is an integration platform?

Throughout this book, we have focused on the application integration features as deployed in an agile integration architecture. However, many enterprise problems can only be solved by applying several critical integration capabilities together. An integration platform (or what some analysts refer to as a "hybrid integration platform") brings together these capabilities so that organizations can build business solutions in a more efficient and consistent way.

Many industry specialists agree on the value of this integration platform. Gartner notes:

"The hybrid integration platform (HIP) is a framework of on-premises and cloud-based integration and governance capabilities that enables differently skilled personas (integration specialists and nonspecialists) [to] support a wide range of integration use cases.… Application leaders responsible for integration should leverage the HIP capabilities framework to modernize their integration strategies and infrastructure, so they can address the emerging use cases for digital business."³

³ Hype Cycle for Application Infrastructure and Integration, 2017, Elizabeth Golluscio.

One of the key things that Gartner notes is that the integration platform allows multiple people from across the organization to work in the user experience that best fits their needs. This means that business users can be productive in a simpler experience that guides them through solving straightforward problems, while IT specialists have expert levels of control to deal with the more complex enterprise scenarios. All of these users can then work together through reuse of the assets that have been shared, while preserving governance across the whole.

Satisfying the emerging use cases of digital transformation is as important as supporting the various user communities. The bulk of this chapter will explore these emerging use cases, but first we should further elaborate on the key capabilities that must be part of the integration platform.

The IBM Cloud Integration Platform

IBM Cloud Integration brings together the key set of integration capabilities into a coherent platform that is simple, fast and trusted. It allows you to easily build powerful integrations and APIs in minutes, provides leading performance
and scalability, and offers unmatched end-to-end capabilities with enterprise-grade security.

Within the IBM Cloud Integration Platform, we have coupled the six key integration specialties, each a best-of-breed feature in its own right. These are:

API Management
Exposes and manages business services as reusable APIs for select developer communities both internal and external to your organization. Organizations adopt an API strategy to accelerate how effectively they can share their unique data and services assets to then fuel new applications and new business opportunities.

Security Gateway
Extends connectivity and integration beyond the enterprise with DMZ-ready edge capabilities that protect APIs, the data they move, and the systems behind them.

Application Integration
Connects applications and data sources on-premises or in the cloud, in order to coordinate the exchange of business information so that data is available when and where needed.

Messaging
Ensures real-time information is available from anywhere at any time by providing reliable message delivery without message loss, duplication or complex recovery in the event of a system or network issue.

Data Integration
Accesses, cleanses and prepares data to create a consistent view of your business within a data warehouse or data lake for the purposes of analytics.

High Speed Transfer
Moves huge amounts of data between on-premises and cloud or cloud-to-cloud rapidly and predictably with enhanced levels of security. Facilitates how quickly organizations can adopt cloud platforms when data is very large.

Figure 20: The IBM Cloud Integration Platform (Premier Integration Experience spanning API Lifecycle, Application Integration, Messaging & Events, Data Integration, High Speed Transfer and Security Gateway; underpinned by Analytics, Security and Governance; deployable on cloud, hybrid or on premises)


Emerging use cases and the integration platform


Through thousands of implementations, we have observed that customer’s adoption of integration
capability is normally in pursuit of very common business objectives. The four listed in this chapter
are not the only relevant patterns, but are among the most pervasive across organizations of any
size. After we describe each use case, we’ll then also look at some of the key integration capabilities
that leading IT professionals apply to be successful.

Scenario 1: Unlock business data and assets as APIs


API Management is one of the fastest growing segments in the integration space. The reason for this is
based on the speed at which organizations can build new business opportunities through a robust API
strategy. The ability to socialize and get applications, services, or data into the marketplace is critical for
any company that wants to grow. One of the best ways to do this is by exposing services as APIs for
external consumption. Organizations do this to either encourage development and expand their
presence in an ecosystem, or to create new revenue opportunities by using APIs. Usage increases as
organizations grow their ecosystems and as their products or services integrate with more applications
and platforms. A properly designed self-service API Developer Portal allows internal developers and
partners to quickly gain access to underlying apps without sacrificing security. It also socializes
microservices and APIs across teams, reducing duplication of work.
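As a simple illustration of what "unlocking" a backend asset involves, the Python sketch below reshapes a hypothetical legacy order record into the kind of payload you would expose through an API. All of the field names and the `customer_lookup` helper are invented for the example; a real implementation would sit in an integration runtime in front of actual systems of record.

```python
# A hypothetical system of record returns terse, legacy-style records;
# the integration layer reshapes them into a consumer-friendly API payload.
def to_api_payload(order_record, customer_lookup):
    """Transform and enrich a raw order record for exposure via an API gateway.

    `order_record` and `customer_lookup` are illustrative stand-ins for calls
    into backend systems of record, not a real product interface.
    """
    customer = customer_lookup(order_record["CUSTNO"])   # enrich from a second system
    return {
        "orderId": order_record["ORDNO"].lstrip("0"),    # translate legacy key format
        "status": {"S": "shipped", "P": "pending"}[order_record["STAT"]],
        "customer": {"id": order_record["CUSTNO"], "name": customer["name"]},
    }

# Example usage with stub backends:
raw = {"ORDNO": "000451", "CUSTNO": "C77", "STAT": "S"}
payload = to_api_payload(raw, lambda cid: {"name": "Acme Ltd"})
# payload == {"orderId": "451", "status": "shipped",
#             "customer": {"id": "C77", "name": "Acme Ltd"}}
```

The value of doing this once, on the provider side, is that every consumer then sees the clean payload rather than the legacy format.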

Deciding to adopt an API-led approach is of course just the beginning of the story; you then need to actually implement the APIs. This comes in two parts:

• An outward-facing API management capability, providing a gateway to make the APIs safely and securely available to the outside world, and providing the self-administered developer portal to enable consumers to discover, explore and gain access to the APIs.

• An application integration runtime to enable access to data held deep in systems of record, transforming, translating and enriching the data to the point where it is fit to be exposed via the API gateway.

One of the primary drivers behind an API strategy is to encourage innovation by providing external parties with the opportunity to think creatively about how to leverage your data and build it into new business models. This is very different from traditional integration, where the required interfaces were often well known in advance and driven by specific projects. APIs are much more demand driven, and are constantly evolving as the ecosystem around them develops. Agile integration architecture enables us to react to this continuously iterating environment, allowing safer adjustment and introduction of individual integrations in isolation.

Also critical to the API economy is elastic scalability, as it is nearly impossible to know which of your APIs will become popular. The cloud-native infrastructure employed by agile integration architecture enables us to start small yet still scale on demand should a particular API start to gain traction.

Scenario 2: Increase business agility with a modern messaging and integration infrastructure

Many enterprises have long used messaging and integration at the heart of their critical business applications. As they shift their attention to the cloud, and especially to microservices, delivery of information by messaging becomes even more important. One of the key design points of microservices architecture is that microservices should each be highly independent and decoupled, and messaging is a key way to achieve that.

However, when it comes to delivering messages across application boundaries, organizations face some challenges. Where they would like to build new customer engagement experiences on cloud-hosted infrastructure, they are finding that tying these new systems into their existing on-premises back ends is challenging. This is further complicated as different parts of the organization start adopting IaaS on different cloud platforms.

While these cloud platforms may include messaging technology, IT teams are finding that the lower qualities of service provided by these platforms (typically "at least once" delivery) increase the burden on every new application to program to this new pattern in a consistent way. Finally, these new messaging platforms don't naturally bridge into the existing backend systems, so integrating them across the DMZ becomes a challenge of its own. Organizations need messaging and integration platforms adept at bridging across cloud and back-end systems reliably and securely.
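The "at least once" burden mentioned above can be illustrated with a short Python sketch: a shared queue decouples the producer from the consumer, and a consumer-side deduplication table absorbs redelivered messages. This is an in-process stand-in for a real broker, not how any particular messaging product is implemented.

```python
import queue

# A stand-in for a message broker: the producer and consumer share only the
# queue, not each other's interfaces -- this is the decoupling the text describes.
broker = queue.Queue()

def produce(msg_id, body):
    broker.put({"id": msg_id, "body": body})

def consume_all(handler):
    """Drain the queue, invoking `handler` at most once per message id.

    An "at least once" platform may redeliver a message; this consumer-side
    dedup table sketches the extra work each application must then take on.
    """
    seen = set()
    while not broker.empty():
        msg = broker.get()
        if msg["id"] in seen:
            continue                # duplicate redelivery: skip it
        seen.add(msg["id"])
        handler(msg["body"])

# Example: a duplicate of message 1 is delivered, but processed only once.
processed = []
produce(1, "order created")
produce(1, "order created")         # simulated redelivery
produce(2, "order shipped")
consume_all(processed.append)
# processed == ["order created", "order shipped"]
```

A platform providing once-and-only-once delivery takes this dedup responsibility away from every application, which is precisely the point being made.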
Organizations need messaging and integration platforms adept at bridging across the cloud and back-end systems in order to provide consistent solution development experiences and speed productivity. Modern messaging and integration middleware brings a new set of capabilities to overcome these challenges:

• Enhancement of the enterprise integration platform components to embrace cloud characteristics such as elasticity, security, scalability, and others.

• A multicloud strategy using connection and integration capabilities on external vendor cloud platforms through open standards, to use best-in-class capabilities and avoid vendor/platform lock-in.

The modern messaging offering must provide robust, scalable, secure, and highly available asynchronous messaging to allow applications, systems, and services to exchange data through a queue, providing guaranteed once-and-once-only delivery of messages and enabling the business to focus on the applications rather than the technical infrastructure. Ultimately, a high-quality distributed messaging capability allows the application to become portable to wherever that messaging capability can be deployed.

In addition, an integration runtime then simplifies how different applications and business processes interact with the messaging layer regardless of the application type (for example, off-the-shelf, custom-built, software as a service), location (private cloud, public cloud), protocol, or message format.

Messaging is all about decoupling: isolating components from one another to reduce dependencies and increase resilience. Fine-grained integration deployment further increases that resilience by ensuring that wherever messaging interactions require integration, they have their own dedicated containers performing that work, reducing regression testing and improving reliability. Agile integration architecture also simplifies migration to and between cloud platforms, since the integrations relevant to a particular application can be moved independently of the others. The integrations live with the application rather than in an inflexible centralized infrastructure.

Scenario 3: Transfer and synchronize your data and digital assets to the cloud

One of the most critical aspects of the customer experience is responsiveness and ease. We live in a "now world" where businesses and consumers expect instant access to the information they need. The technical difficulty of providing reliable and secure access to this data does not concern them. Regardless of the communication channel, distance, or device, they expect timely and reliable information and action whenever they interact with your organization.

This need creates difficulties for organizations on several fronts. An obvious one is the delivery of any size, number, or type of digital asset to anywhere. Today, data size, transfer distance, and network conditions still greatly impact the speed and reliability that customers will get versus what they expect. This dilemma has become chronic as more industries become data-driven and operations expand globally.
Another difficulty is a bit more behind the scenes. The amount of data created for and by all of us is growing exponentially in our hyper-connected world. Today, businesses are moving to a multicloud environment to gain maximum agility, efficiency, and scale, while lowering operating risk. To support big data processing in the cloud, organizations need a solution specifically designed to move large files and data sets to and from multiple cloud infrastructures quickly and securely. Shifting large data volumes between data centers and cloud infrastructure can be a primary roadblock to cloud adoption unless addressed through high-speed transfer technology.

IBM Cloud Integration provides a comprehensive data transfer and sync system that is hybrid and multicloud, addressing a flexible set of data transfer needs. This high-speed transfer technology makes it possible to securely transfer data up to 1000x faster than traditional tools, between any kind of storage, whether it's on premises, in the cloud, or moving from one cloud vendor to another, regardless of network latency or physical distance.

Some common situations for high-speed transfer are:

• Sending and syncing urgent data of any size between your enterprise's data centers anywhere around the globe

• Sending and syncing data to any major public cloud by using our presence in all public clouds to enable cloud migration at high speed

• Participating in larger solution patterns along with other integration technologies (such as messaging and application integration) to reduce latency and provide delivery consistency

Therefore, in a multicloud architecture, particularly where part of the solution requirement is to transfer video or other large files, the ability to distribute these capabilities across the topology is paramount to achieving good customer experiences. Organizations must then consider weaving high-speed transfer into API-, application- and messaging-led solutions. The elastically scalable infrastructure that underlies any one of these should then also account for variability in the scale-out requirements of this data transfer layer.
Scenario 4: Integrating SaaS

Businesses are rapidly adopting a new class of applications in the cloud to drive business transformation: software-as-a-service (SaaS) applications. These streamline and augment activities that were previously supported by more traditional on-premises applications. SaaS provides innovative capabilities, low costs to get started, and the ability to rapidly scale. It is for these reasons that apps like Salesforce, NetSuite, Workday, and others have become so popular.

To maximize the impact of their SaaS purchases, organizations can't afford for these applications to become isolated. By integrating their SaaS applications with other systems and data, organizations not only realize the full range of capabilities that the SaaS application offers, but are also able to augment their SaaS purchases with other apps and services to deliver richer outcomes that drive greater productivity and operational efficiency.

Integration of SaaS applications is typically provided through technology referred to as "integration platform as a service," also known as iPaaS. iPaaS solutions accelerate business transformation through adoption of SaaS apps via simple configuration-based approaches to integration. Integration platform as a service provides the full gamut of integration capability, with the ability to handle connectivity and integration to applications on-premises and in the cloud. The iPaaS experience is purpose-built to simplify and accelerate the activities for creating and running integrations in the cloud.

As part of the IBM Cloud Integration Platform, IBM App Connect provides a range of experiences that enable organizations to rapidly configure, deploy, and manage the integration of their SaaS applications with other systems across their business or enterprise. It offers users intuitive tools and a no-code, configuration-based approach, enabling them to quickly build integration "flows". These flows can address a broad set of integration requirements:

• event-based integrations – watch for business events across systems and then trigger downstream actions when those events occur

• data synchronization – ensures that data (for instance, customer data) is kept in sync across the multiple systems where it is stored and maintained

• integration services – exposes integration logic as a RESTful endpoint (API) so that it can be offered as part of any business application or process

• batch processing – extracts a set of information from an app, database, or other data store, transforms that information into a target format, and loads it wherever required

Many organizations are looking to complement this iPaaS capability with API Management in a few scenarios:

• Where the iPaaS is building new RESTful integration services, those APIs need to be managed, secured and governed in a manner that is consistent with other APIs developed in the enterprise.
• Some organizations have found that they


need to take an active role in managing the
workload they generate against their SaaS
app. This may be so that they can avoid exceeding
API limits imposed by those vendors, or incurring overage charges.
API Management can be inserted to gate
access to these SaaS apps, and prioritize
certain classes of enterprise workload.
Additionally, each project can be metered
and usage can be tracked. This would be
very useful for internal charge backs.

• Coupling the iPaaS and API Management


layer can provide a more consistent
abstraction layer when an organization
has a variety of SaaS apps that they need
to build against. Without a layer of
abstraction each team would have to go
through that learning curve to implement
with each SaaS provider.
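The workload-gating idea above is classically implemented with a rate limiter such as a token bucket. The sketch below is a generic illustration of that algorithm, not the mechanism any particular API Management product uses; the injected `now_fn` clock simply makes the behavior easy to demonstrate.

```python
class TokenBucket:
    """Generic rate limiter of the kind a gateway uses to gate SaaS calls.

    Tokens refill at `rate_per_sec`; a request is allowed only if a token is
    available, keeping total workload under a SaaS vendor's API limit.
    """

    def __init__(self, capacity, rate_per_sec, now_fn):
        self.capacity = capacity
        self.rate = rate_per_sec
        self.tokens = float(capacity)
        self.now_fn = now_fn           # injected clock, so the sketch is testable
        self.last = now_fn()

    def allow(self):
        now = self.now_fn()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # over the limit: reject (or queue) the call

# Example: 2 requests allowed immediately; the third at the same instant is gated.
clock = [0.0]
bucket = TokenBucket(capacity=2, rate_per_sec=2, now_fn=lambda: clock[0])
results = [bucket.allow(), bucket.allow(), bucket.allow()]
# results == [True, True, False]
clock[0] = 1.0                        # a "second" later, tokens have refilled
```

Prioritizing classes of workload, as the text suggests, would simply mean running separate buckets (or different refill rates) per consumer class.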

The IBM Cloud Integration Platform is itself


written using microservices architecture. This is
what enables us to bring new features to market
so quickly, and manage the multi-tenant load so
elastically. With the most recent release, we
extended these features such that you can build
integrations in the cloud that seamlessly hook
into any of your enterprise systems. This
provides you with a single product that has both
rich enterprise connectivity along with a huge
breadth of cloud application connectors,
enabling true any-to-any integration on a
lightweight architecture.
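Of the flow styles listed earlier for Scenario 4 (event-based, data synchronization, integration services, batch), the batch pattern is the easiest to sketch in a few lines. The Python below is a toy extract-transform-load loop with invented stand-ins for the source and target connectors; in an iPaaS these would be configured, not coded.

```python
def run_batch_flow(extract, transform, load):
    """Minimal batch-processing flow: pull records, reshape them, load the result.

    `extract`, `transform` and `load` are placeholders for the connectors a
    real iPaaS flow would configure rather than code.
    """
    loaded = 0
    for record in extract():
        load(transform(record))
        loaded += 1
    return loaded

# Example: move two contact records from a source "app" into a target "store",
# normalizing field names and email casing along the way.
source = [{"Name": "Ada", "Email": "ADA@EXAMPLE.COM"},
          {"Name": "Grace", "Email": "GRACE@EXAMPLE.COM"}]
target = []

count = run_batch_flow(
    extract=lambda: iter(source),
    transform=lambda r: {"name": r["Name"], "email": r["Email"].lower()},
    load=target.append,
)
# count == 2; target[0] == {"name": "Ada", "email": "ada@example.com"}
```

The event-based and data-synchronization styles follow the same extract/transform/load shape, differing mainly in what triggers the flow.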
Conclusions

Through this final chapter, hopefully you’ve gotten a broader


perspective of the various critical capabilities required as part of
an integration platform, a sense of the requirements for those
capabilities to work together, and an appreciation of how the agile
integration architecture can be adopted to enable greater agility,
scalability and resilience for the platform.

It is also our hope that you’ve gained an appreciation for how IBM
has continued to innovate so that our customers can benefit from
adopting modern integration technologies that assist them
ultimately in satisfying their digital transformation objectives.

Kim, Nick and Tony are very happy to entertain questions, receive
feedback, and advise on specifics that might not have been
covered in this work. If you’d like to reach out, please find our
contact information in the “About the Authors” section. Of course,
we are also happy to be working for IBM where we have a great
team of professionals who also stand at the ready. If you already
have friends at Big Blue, we’re sure they would also be happy to
get your call.
Appendix One: References

New material on this topic will be published/promoted on:
http://ibm.biz/AgileIntegArchLinks

A regularly updated collection of relevant links exists here:
http://ibm.biz/AgileIntegArchLinks

The book builds on the following source material:

• Moving to agile integration architecture
  http://ibm.biz/AgileIntegArchPaper

• The fate of the ESB
  http://ibm.biz/FateOfTheESBPaper

• Microservices, SOA, and APIs: Friends or enemies?
  http://ibm.biz/MicroservicesVsSoa

• Cattle not pets: Achieving lightweight integration with IIB
  http://ibm.biz/CattlePetsIIB

• The hybrid integration reference architecture
  http://ibm.biz/HybridIntRefArch
© Copyright IBM Corporation 2018

IBM Corporation
Software Group
Route 100
Somers, NY 10589

Produced in the United States of America


May 2018

IBM, the IBM logo, and ibm.com are trademarks of International


Business Machines Corp., registered in many jurisdictions
worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM
trademarks is available on the web at “Copyright and trademark
information” at www.ibm.com/legal/copytrade.shtml.

This document is current as of the initial date of publication and


may be changed by IBM at any time. Not all offerings are
available in every country in which IBM operates.

THE INFORMATION IN THIS DOCUMENT IS


PROVIDED “AS IS” WITHOUT ANY WARRANTY,
EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION
OF NON-INFRINGEMENT. IBM products are warranted
according to the terms and conditions of the agreements under
which they are provided.

Please Recycle

00000000-USEN-00
