IBM Agile Integration Architecture

Using lightweight integration runtimes to implement a container-based and microservices-aligned integration architecture
Authors

Kim is a technical strategist on IBM’s integration portfolio working as an architect providing guidance to the offering management team on current trends and challenges. He has spent the last couple of decades working in the field implementing integration and process related solutions.

After years of implementing integration solutions in a variety of technologies, Tony joined the IBM offering management team in 2008. He now leads the Application Integration team in working with customers as they adopt more agile models for building integration solutions and embrace cloud as part of their IT landscape.

Nick is a technical evangelist for IBM’s integration portfolio working as a technical specialist exploring current trends and building leading edge solutions. He has spent the last 5 years working in the field and guiding a series of teams through their microservices journey. Before that he spent 5+ years in various other roles such as a developer, an architect and an IBM DataPower specialist. Over the course of his career he’s been a user of Node, XSL, JSON, Docker, Solr, IBM API Connect, Kubernetes, Java, SOAP, XML, WAS, FileNet, MQ, C++, CastIron, IBM App Connect, and IBM Integration Bus.

Thank you: Sincere thanks go to the following people for their significant and detailed input and review of the material: Carsten Bornert, Andy Garratt, Alan Glickenhouse, Rob Nicholson, Brian Petrini, Claudio Tagliabue, and Ben Thompson.
Executive Summary

Agile integration architecture enables integration solutions to be built, managed and operated effectively and efficiently to achieve the goals of digital transformation. It includes three distinct aspects that we will explore in detail: a) fine-grained integration deployment, b) decentralized integration ownership, and c) cloud-native integration infrastructure.

Note that we have used the term “lightweight integration” in the past, but have moved to the more appropriate “agile integration architecture”.
How to navigate the book

The book is divided into three sections.

Section 1: The Impact of Digital Transformation on Integration

The rise of the digital economy, like most of the seismic technology shifts over the past several centuries, has fundamentally changed not only technology but business as well. The very concept of “digital economy” continues to evolve. Where once it was just a section of the economy built on digital technologies, it has evolved to become almost indistinguishable from the “traditional economy,” growing to include almost any new technology, such as mobile, the Internet of Things, cloud computing, and augmented intelligence.

At the heart of the digital economy is the basic need to connect disparate data no matter where it lives. This has led to the rise of application integration: the need to connect multiple applications and data to deliver the greatest insight to the people and systems who can act on it. In this section we will explore how the digital economy created and then altered our concept of application integration.
The impact of digital transformation has changed how organizations build solutions. Progressive IT shops have sought out, and indeed found, more agile ways to develop than were typical even just a few years ago.

Over the last two years we’ve seen a tremendous acceleration in the pace at which customers are establishing digital transformation initiatives. In fact, IDC estimates that digital transformation initiatives represent a $20 trillion market opportunity over the next 5 years. That is a staggering figure with respect to the impact across all industries and companies of all sizes. A primary focus of this digital transformation is to build new customer experiences through connected experiences across a network of applications that leverage data of all types.

However, bringing together these processes and information sources at the right time and within the right context has become increasingly complicated. Consider that many organizations have aggressively adopted SaaS business applications, which have spread their key data sources across a much broader landscape. Additionally, new data sources that are available from external data providers must be injected into business processes to create competitive differentiation.

“To drive new customer experiences organizations must tap into an ever-growing set of applications, processes and information sources – all of which significantly expand the enterprise’s need for and investment in integration capabilities.”
When we consider the agenda for building new customer experiences and focus on how data is
accessed and made available for the services and APIs that power these initiatives, we can clearly
recognize several significant benefits that application integration brings to the table.
1. Data disparity: One of the key strengths of integration tooling is the ability to access data from any system, in any sort of format, and build homogeneity. The application landscape is only growing more diverse as organizations adopt SaaS applications and build new solutions in the cloud, spreading their data further across a hybrid set of systems. Even in the world of APIs, there are variations in data formats and structures that must be addressed. Furthermore, every system has subtleties in the way it enables updates and surfaces events. The need for the organization to address information disparity is therefore growing at that same pace, and application integration must remain equipped to address the challenge of emerging formats.

2. Expert endpoints: Each system has its own peculiarities that must be understood and responded to. Modern integration includes smarts around complex protocols and data formats, but it goes much further than that. It also incorporates intelligence about the actual objects and business functions within the end systems. Application integration tooling is compassionate, understanding how to work with each system distinctly. This knowledge of the endpoint must include not only errors, but authentication protocols, load management, performance optimization, transactionality, idempotence, and much, much more. By including such features “in the box”, application integration yields tremendous gains in productivity over coding, and arguably a more consistent level of enterprise-class resiliency.
3. Innovation through data: Applications in a digital world owe much of their innovation to their opportunity to combine data that is beyond their boundaries and create new meaning from it. This is particularly visible in microservices architecture, where the ability of application integration technologies to intelligently draw multiple sources of data together is often a core business requirement. Whether composing multiple API calls together or interpreting event streams, the main task of many microservices components is essentially integration.

4. Enterprise-grade artifacts: Integration flows developed through application integration tooling inherit a tremendous amount of value from the runtime. Users can focus on building the business logic without having to worry about the surrounding infrastructure. The application integration runtime includes enterprise-grade features for error recovery, fault tolerance, log capture, performance analysis, message tracing, and transactional update and recovery. Additionally, in some tools the artifacts are built using open standards and consistent best practices without requirements for the IT team to be experts in those domains.

Each of these factors (data disparity, expert endpoints, innovation through data, and enterprise-grade artifacts) is causing a massive shift in how an integration architecture needs to be conceived, implemented and managed. The result is that organizations, and architects in particular, are reconsidering what integration means in the new digital age. Enter agile integration architecture: a container-based, decentralized and microservices-aligned approach for integration solutions that meets the demands of agility, scalability and resilience required by digital transformation.

The integration landscape is changing apace with enterprise and marketplace computing demands, but how did we get from SOA and ESBs to modern, containerized, agile integration architecture?
Before we dive into agile integration architecture, we first need to understand what came before in a little more detail. In this chapter we will briefly look at the challenges of SOA by taking a closer look at what the ESB pattern was, how it evolved, where APIs came onto the scene, and the relationship between all that and microservices architecture. Let’s start with SOA and the ESB and what went wrong.

The forming of the ESB pattern

As we started the millennium, we saw the beginnings of the first truly cross-platform protocol for interfaces. The internet, and with it HTTP, had become ubiquitous, XML was limping its way into existence off the back of HTML, and the SOAP protocols for providing synchronous web services were emerging.

From this series of events, service-oriented architecture was born. The core purpose of SOA was to expose data and functions buried in systems of record over well-formed, simple-to-use, synchronous interfaces, such as web services. Clearly, SOA was about more than just providing those services, and often involved some significant re-engineering to align the back-end systems with the business needs, but the end goal was a suite of well-defined, common, re-usable services collating disparate systems. This would enable new applications to be implemented without the burden of deep integration every time, as once the integration was done for the first time and exposed as a service, it could be re-used by the next application.

However, this simple integration was a one-sided equation. We might have been able to standardize these protocols and data formats, but the back-end systems of record were typically old and had antiquated protocols and data formats for their current interfaces. Figure 1 below shows where the breakdown typically occurred. Something was needed to mediate between the old system and the new cross-platform protocols.

[Figure 1: Engagement applications calling an enterprise API, with integration runtimes mediating between the new cross-platform protocols and the back-end systems.]
This synchronous exposure pattern via web services was what the enterprise service bus (ESB) term was introduced for. It’s all in the name: a centralized “bus” that could provide web “services” across the “enterprise”. We already had the technology (the integration runtime) to provide connectivity to the back-end systems, coming from the preceding hub-and-spoke pattern. These integration runtimes could simply be taught to offer integrations synchronously via SOAP/HTTP, and we’d have our ESB.

The ESB pattern often took the “E” in ESB very literally and implemented a single infrastructure for the whole enterprise, or at least one for each significant part of the enterprise. Tens or even hundreds of integrations might have been installed on a production server cluster, and if that was scaled up, they would be present on every clone within that cluster. Although this heavy centralization isn’t required by the ESB pattern itself, it was almost always present in the resultant topology. There were good reasons for this, at least initially: hardware and software costs were shared, provisioning of the servers only had to be performed once, and due to the relative complexity of the software, only one dedicated team of integration specialists needed to be skilled up to perform the development work.

What went wrong for the centralized ESB pattern?

While many large enterprises successfully implemented the ESB pattern, the term is often disparaged in the cloud-native space, and especially in relation to microservices architecture. It is seen as heavyweight and lacking in agility. What has happened to make the ESB pattern appear so outdated?

SOA turned out to be a little more complex than just the implementation of an ESB, for a host of reasons, not the least of which was the question of who would fund such an enterprise-wide program. Implementing the ESB pattern itself also turned out to be no small task.

The centralized ESB pattern had the potential to deliver significant savings in integration costs if interfaces could be re-used from one project to the next (the core benefit proposition of SOA). However, coordinating such a cross-enterprise initiative, and ensuring that it would get continued funding, and that the funding only applied to services that would be sufficiently re-usable to cover their creation costs, proved to be very difficult indeed. Standards and tooling were maturing at the same time as the ESB patterns were being implemented, so the implementation cost and time for providing a single service were unrealistically high.

“ESB patterns have had issues ensuring continued funding for cross-enterprise initiatives since those do not apply specifically within the context of a business initiative.”

Often, line-of-business teams that were expecting a greater pace of innovation in their new applications became increasingly frustrated with SOA, and by extension the ESB pattern. Some of the challenges of a centralized ESB pattern were:

• Deploying changes could potentially destabilize other unrelated interfaces running on the centralized ESB.

• Servers containing many integrations had to be kept running and patched live wherever possible.
• Topologies for high availability and disaster recovery were complex and expensive.

• For stability, servers typically ran many versions behind the current release of the software, reducing productivity.

• The integration specialist teams often didn’t know much about the applications they were trying to integrate with.

• Pooling of specialist integration skilled people resulted in more waterfall-style engagement with application teams.

• Service discovery was immature, so documentation became quickly outdated.

The result was that creation of services by this specialist SOA team became a bottleneck for projects rather than the enabler that it was intended to be. This typically gave the centralized ESB pattern, by association, a bad name.

Formally, as we’ve described, ESB is an architectural pattern that refers to the exposure of services. However, as mentioned above, the term is often over-simplified and applied to the integration engine that’s used to implement the pattern. This erroneously ties the static and aging centralized ESB pattern to integration engines that have changed radically over the intervening time. Integration engines of today are significantly more lightweight, easier to install and use, and can be deployed in more decentralized ways that would have been unimaginable at the time the ESB concept was born. As we will see, agile integration architecture enables us to overcome the limitations of the ESB pattern.

If you would like a deeper introduction into where the ESB pattern came from, and a detailed look at the benefits and the challenges that came with it, take a look at the source material for this section in the following article: http://ibm.biz/FateOfTheESBPaper

The API economy and bi-modal IT

External APIs have become an essential part of the online persona of many companies, and are at least as important as their websites and mobile applications. Let’s take a brief look at how that evolved from the maturing of internal SOA-based services.

SOAP-style RPC interfaces proved complex to understand and use, and simpler and more consistent RESTful services provided using JSON/HTTP became a popular mechanism. But the end goal was the same: to make functions and data available via standardized interfaces so that new applications could be built on top of them more quickly.

With the broadening usage of these service interfaces, both within and beyond the enterprise, more formal mechanisms for providing services were required. It quickly became clear that simply making something available over a web service interface, or latterly as a RESTful JSON/HTTP API, was only part of the story. That service needed to be easily discovered by potential consumers, who needed a path of least resistance for gaining access to it and learning how to use it. Additionally, the providers of the service or API needed to be able to place controls on its usage, such as traffic control and an appropriate security model. Figure 2 below demonstrates how the introduction of service/API gateways affects the scope of the ESB pattern.
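To see why RESTful JSON/HTTP displaced SOAP-style RPC for many consumers, compare the same hypothetical “get customer” request in both styles. This is a hand-written illustration using only the Python standard library; the service name, namespace and fields are all invented.

```python
import json
import xml.etree.ElementTree as ET

# SOAP-style RPC: the payload is wrapped in an envelope/body structure
# and requires XML namespace handling on both sides of the call.
soap_request = """\
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:cus="http://example.com/customer">
  <soapenv:Body>
    <cus:getCustomer>
      <cus:id>42</cus:id>
    </cus:getCustomer>
  </soapenv:Body>
</soapenv:Envelope>"""
body = ET.fromstring(soap_request).find(
    "{http://schemas.xmlsoap.org/soap/envelope/}Body")

# RESTful equivalent: the resource is identified by the URL, the operation
# by the HTTP verb, and the payload is plain JSON.
rest_request = ("GET", "/customers/42")
rest_response = json.dumps({"id": 42, "name": "Ada"})

print(rest_request, rest_response)
```

The consistency of the REST style (uniform verbs, URLs as resource identifiers, JSON payloads) is what made these interfaces a path of least resistance for consumers.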
[Figure 2: The introduction of an API gateway extends the scope of the ESB pattern, exposing request/response and asynchronous integrations from the integration runtime to consumers.]

External APIs must also take into account the capabilities of the devices used as consumers, and they play a significant role in the disruption of industries that is so common today. This realization caused the birth of what we now call the API Economy, and it is a well-covered topic on IBM’s “API Economy” blog.

The main takeaway here is that this progression exacerbated an already growing divide between the older traditional systems of record that still perform all the most critical transactions fundamental to the business, and what became known as the systems of engagement, where innovation occurred at a rapid pace, exploring new ways of interacting with external consumers. This resulted in bi-modal IT, where new decentralized, fast-moving areas of IT needed much greater agility in their development and led to the invention of new ways of building applications using, for example, microservices architecture.

The rise of lightweight runtimes

Sound familiar? It should. These were exactly the same challenges that application development teams were facing at the same time: bloated, complex application servers that contained too much interconnected and cross-dependent code, on a fragile cumbersome topology that was hard to replicate or scale. Ultimately, it was this common paradigm that led to the emergence of the principles of microservices architecture. As lightweight runtimes and application servers such as Node.js and IBM WAS Liberty were introduced (runtimes that started in seconds and had tiny footprints) it became easier to run them on smaller virtual machines, and then eventually within container technologies such as Docker.

Microservices architecture: A more agile and scalable way to build applications

If you take a closer look at microservices concepts, you will see that they have a much broader intent than simply breaking things up into smaller pieces. There are implications for architecture, process, organization, and more, all focused on enabling organizations to better use cloud-native technology advances to increase their pace of innovation.

However, focusing back on the core technological difference, these small independent microservices components can be changed in isolation to create greater agility, scaled individually to make better use of cloud-native infrastructure, and managed more ruthlessly to provide the resilience required by 24/7 online applications. Figure 3 below visualizes the microservices architecture we’ve just described.
Careful thought must be given to the shape and size of your microservices components. Add to that equally critical design choices around the extent to which you decouple them. You need to constantly balance practical reality with aspirations for microservices-related benefits. In short, your microservices-based application is only as agile and scalable as your design is good, and your methodology is mature.

[Figure 3: A microservices application: public and enterprise APIs exposed via exposure and API gateways, with lightweight language runtimes and integration runtimes inside each microservice application boundary, connecting to systems of record via request/response and asynchronous integration.]
In theory, these principles could be used anywhere. Where we see them most commonly is in the
systems of engagement layer, where greater agility is essential. However, they could also be used
to improve the agility, scalability, and resilience of a system of record—or indeed anywhere else in
the architecture, as you will see as we discuss agile integration architecture in more depth.
Without question, microservices principles can offer significant benefits under the right
circumstances. However, choosing the right time to use these techniques is critical, and getting
the design of highly distributed components correct is not a trivial endeavor.
Microservices architecture, on the other hand, is an option for how you might choose to write an
individual application in a way that makes that application more agile, scalable, and resilient.
Microservices architectures lead to the primary benefits of greater agility, elastic scalability, and discrete resilience.

Elastic scalability: Their resource usage can be truly tied to the business model.

Discrete resilience: With suitable decoupling, changes to one microservice do not affect others at runtime.

Microservices architecture enables developers to make better use of cloud-native infrastructure and manage components more ruthlessly, providing the resilience and scalability required by 24/7 online applications. It also improves ownership in line with DevOps practices, whereby a team can truly take responsibility for a whole microservice component throughout its lifecycle and hence make changes at a higher velocity.

Microservice components are often made from pure language runtimes such as Node.js or Java, but equally they can be made from any suitably lightweight runtime. The key requirements include that they have a simple dependency-free installation, file-system-based deploy, start/stop in seconds, and have strong support for container-based infrastructure.

As with any new approach there are challenges too, some obvious, and some more subtle. Microservices are a radically different approach to building applications. Let’s have a brief look at some of the considerations:

• Greater overall complexity: Although the individual components are potentially simpler, and as such they are easier to change and scale, the overall application is inevitably a collection of highly distributed individual parts.

• Learning curve on cloud-native infrastructure: To manage the increased number of components, new technologies and frameworks are required, including service discovery, workload orchestration, container management, logging frameworks and more. Platforms are available to make this easier, but it is still a learning curve.

• Different design paradigms: The microservices application architecture requires fundamentally different approaches to design. For example, using eventual consistency rather than transactional interactions, or the subtleties of asynchronous communication to truly decouple components.

• DevOps maturity: Microservices require a mature delivery capability. Continuous integration, deployment, and fully automated tests are a must. The developers who write code must be responsible for it in production. Build and deployment chains need significant changes to provide the right separation of concerns for a microservices environment.

Microservices architecture is not the solution to every problem. Since there is an overhead of complexity with the microservices approach, it is critical to ensure the benefits outlined above outweigh the extra complexity. However, if applied judiciously it can provide order-of-magnitude benefits that would be hard to achieve any other way.

Microservices architecture discussions are often heavily focused on alternate ways to build applications, but the core ideas behind it are relevant to all software components, including integration.
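The “different design paradigms” point above (eventual consistency and asynchronous communication) can be sketched in a few lines. Here an in-process queue stands in for a real message broker such as Kafka or MQ; the service and event names are invented for illustration.

```python
import queue
import threading

# A minimal sketch of asynchronous decoupling between two "microservices".
# The in-process queue is a stand-in for a durable topic on a message broker.
order_events = queue.Queue()

def order_service():
    # The producer publishes an event and moves on: it does not wait for,
    # or even know about, downstream consumers (eventual consistency).
    order_events.put({"event": "OrderPlaced", "order_id": 42})

def shipping_service(done):
    # The consumer reacts to events in its own time, keeping its own state.
    event = order_events.get()
    done["shipped"] = event["order_id"]

done = {}
consumer = threading.Thread(target=shipping_service, args=(done,))
consumer.start()
order_service()
consumer.join()
print(done)  # → {'shipped': 42}
```

The producer neither waits for nor knows about its consumers, so either side can be changed, scaled, or restarted independently; the trade-off is that overall state is only eventually consistent.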
Agile integration architecture

If what we’ve learned from microservices architecture means it sometimes makes sense to build applications in a more granular, lightweight fashion, why shouldn’t we apply that to integration too? Integration is typically deployed in a very siloed and centralized fashion, such as the ESB pattern. What would it look like if we were to re-visit that in the light of microservices architecture? It is this alternative approach that we call “agile integration architecture”.

Agile integration architecture is defined as “a container-based, decentralized and microservices-aligned architecture for integration solutions”.

There are three related, but separate aspects to agile integration architecture:

• Aspect 1: Fine-grained integration deployment. What might we gain by breaking out the integrations in the siloed ESB into separate runtimes?

• Aspect 2: Decentralized integration ownership. How should we adjust the organizational structure to better leverage a more fine-grained approach?

• Aspect 3: Cloud-native integration infrastructure. What further benefits could we gain by a fully cloud-native approach to integration?

Although these each have dedicated chapters, it’s worth taking the time to summarize them at a conceptual level here.

Aspect 1: Fine-grained integration deployment

The centralized deployment of integration hub or enterprise service bus (ESB) patterns, where all integrations are deployed to a single heavily nurtured (HA) pair of integration servers, has been shown to introduce a bottleneck for projects. Any deployment to the shared servers runs the risk of destabilizing existing critical interfaces. No individual project can choose to upgrade the version of the integration middleware to gain access to new features.

We could break up the enterprise-wide ESB component into smaller, more manageable and dedicated pieces. Perhaps in some cases we can even get down to one runtime for each interface we expose.
These “fine-grained integration deployment” patterns provide specialized, right-sized containers, offering improved agility, scalability and resilience, and look very different to the centralized ESB patterns of the past. Figure 6 demonstrates in simple terms how a centralized ESB differs from fine-grained integration deployment.

[Figure 6: Simplistic comparison of a centralized ESB to fine-grained integration deployment, showing consumers, integrations, and providers.]

Fine-grained integration deployment draws on the benefits of a microservices architecture we listed in the last section: agility, scalability and resilience.

Agility: Different teams can work on integrations independently without deferring to a centralized group or infrastructure that can quickly become a bottleneck. Individual integration flows can be changed, rebuilt, and deployed independently of other flows, enabling safer application of changes and maximizing speed to production.

Scalability: Individual flows can be scaled on their own, allowing you to take advantage of efficient elastic scaling of cloud infrastructures.

Resilience: Isolated integration flows that are deployed in separate containers cannot affect one another by stealing shared resources, such as memory, connections, or CPU.
Breaking the single ESB runtime up into many separate runtimes, each containing just a few integrations, is explored in detail in “Chapter 4: Aspect 1: Fine-grained integration deployment”. The move to fine-grained integration deployment opens a door such that ownership of the creation and maintenance of integrations can be distributed.

What do we need from our integration infrastructure to accommodate agile integration architecture?

Integration runtimes have changed dramatically in recent years, so much so that these lightweight runtimes can be used in truly cloud-native ways. By this we are referring to their ability to hand off the burden of many of their previously proprietary mechanisms for cluster management, scaling, and availability to the cloud platform in which they are running. This entails a lot more than just running them in a containerized environment. It means they have to be able to function as “cattle not pets,” making best use of the orchestration capabilities of Kubernetes and many other common cloud standard frameworks. We expand considerably on these concepts in “Chapter 6: Aspect 3: Cloud-native integration infrastructure”.

Clearly, agile integration architecture requires that the integration topology be deployed very differently. A key aspect of that is a modern integration runtime that can be run in a container-based environment and is well suited to cloud-native deployment techniques. Modern integration runtimes are almost unrecognizable from their historical peers. Let’s have a look at some of those differences:

• Fast lightweight runtime: They run in containers such as Docker and are sufficiently lightweight that they can be started and stopped in seconds and can be easily administered by orchestration frameworks such as Kubernetes.

• Dependency free: They no longer require databases or message queues, although obviously they are very adept at connecting to them if they need to. This also means there is less to load when starting them up, ideal for the layered file systems of Docker images.

• DevOps tooling support: The runtime should be continuous integration and deployment-ready. Script and property file-based install, build, deploy, and configuration enable “infrastructure as code” practices. Template scripts for standard build and deploy tools should be provided to accelerate inclusion into DevOps pipelines.

• API-first: The primary communication protocol should be RESTful APIs. Exposing integrations as RESTful APIs should be trivial and based upon common conventions such as the Open API specification. Calling downstream RESTful APIs should be equally trivial, including discovery via definition files.

• Digital connectivity: In addition to the rich enterprise connectivity that has always been provided by integration runtimes, they must also connect to modern resources.
For example, NoSQL databases (MongoDB, Cloudant, etc.) and messaging services such as Kafka. Furthermore, they need access to a rich catalogue of application intelligent connectors for SaaS (software as a service) applications such as Salesforce.

Modern integration runtimes are well suited to the three aspects of agile integration architecture: fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. Before we turn our attention to these aspects in more detail, we will take a more detailed look at the SOA pattern for those who may be less familiar with it, and explore where organizations have struggled to reach the potential they sought.
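The “API-first” and “cattle not pets” characteristics described above can be illustrated together: a lightweight runtime exposing one integration flow as a RESTful endpoint, plus a health endpoint of the kind an orchestrator such as Kubernetes would probe. This is a minimal Python standard-library sketch, not a real integration runtime; all paths and payloads are invented.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlowHandler(BaseHTTPRequestHandler):
    """One fine-grained 'integration flow' plus a health probe endpoint."""

    def do_GET(self):
        if self.path == "/health":
            payload = {"status": "UP"}           # liveness/readiness probe
        elif self.path == "/customers/42":
            payload = {"id": 42, "name": "Ada"}  # the exposed flow
        else:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), FlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as r:
    health = json.loads(r.read())
server.shutdown()
print(health)  # → {'status': 'UP'}
```

In a real deployment the orchestration framework, not the application, would decide when to restart or rescale the runtime based on such probes.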
- Chapter 5: Aspect 2: Decentralized integration ownership. Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.

- Chapter 6: Aspect 3: Cloud-native integration infrastructure. Provides a description of how adopting key technologies and practices from the cloud-native application discipline can provide similar benefits to application integration.
Figure 7 shows the result of breaking up the ESB into separate, independently maintainable and scalable components.

[Figure 7: Breaking up the centralized ESB into independently maintainable and scalable pieces: microservices with API gateways, lightweight language runtimes, and integration runtimes in front of systems of record.]

The heavily centralized ESB pattern can be broken up in this way, and so can the older hub-and-spoke pattern. This makes each individual integration easier to change independently, and improves agility, scaling, and resilience.
This approach allows you to make a change to an individual integration with complete confidence that you will not introduce any instability into the environment in which the other integrations are running. You could choose to use a different version of the integration runtime, perhaps to take advantage of new features, without forcing a risky upgrade to all other integrations. You could scale up one integration completely independently of the others, making extremely efficient use of infrastructure, especially when using cloud-based models.

There are of course considerations to be worked through with this approach, such as the increased complexity of more moving parts. Also, although the above could be achieved using virtual machine technology, it is likely that the long-term benefits would be greater if you were to use containers such as Docker, and orchestration mechanisms such as Kubernetes. Introducing new technologies to the integration team can add a learning curve. However, these are the same challenges that an enterprise would already be facing if it were exploring microservices architecture in other areas, so that expertise may already exist within the organization.

We typically call this pattern fine-grained integration deployment (a key aspect of agile integration architecture), to differentiate it from more purist microservices application architectures. We also want to mark a distinction from the ESB term, which is strongly associated with the more cumbersome centralized integration architecture.

What characteristics does the integration runtime need?

To be able to be used for fine-grained deployment, what characteristics does a modern integration runtime need?

• Fast, light integration runtime. The actual runtime is slim, dispensing with hard dependencies on other components such as databases for configuration, or being fundamentally reliant on a specific message queuing capability. The runtime itself can now be stopped and started in seconds, yet none of its rich functionality has been sacrificed. It is entirely reasonable to consider deploying a small number of integrations on a runtime like this and running them independently, rather than placing all integration on a single centralized topology. Installation is equally minimalist and straightforward, requiring little more than laying binaries out on a file system.

• Virtualization and containerization. The runtime should actively support containerization technologies such as Docker and container orchestration capabilities such as Kubernetes, enabling non-functional characteristics such as high availability and elastic scalability to be managed in the standardized ways used by other digital-generation runtimes, rather than relying on proprietary topologies and technology. This enables new runtimes to be introduced, administered, and scaled in well-known ways without requiring proprietary expertise.
• Stateless. The runtime needs to be able to run statelessly. In other words, runtimes should not be dependent on, or even aware of, one another. As such, they can be added to and taken away from a cluster freely, and new versions of interfaces can be deployed easily. This enables the container orchestration to manage scaling, rolling deployments, A/B testing, canary tests and more, with no proprietary knowledge of the underlying integration runtime. This stateless aspect is essential if there are going to be more runtimes to manage in total.

• Cloud-first. It should be possible to immediately explore a deployment without the need to install any local infrastructure. Examples include providing a cloud-based managed service whereby integrations can be immediately deployed, with a low entry cost and an elastic cost model. Quick starts should be available for simple creation of deployment environments on major cloud vendors' infrastructures.

This provides a taste of how different the integration runtimes of today are from those of the past. IBM App Connect Enterprise (formerly known as IBM Integration Bus) is a good example of such a runtime. Integration runtimes are not in themselves an ESB; ESB is just one of the patterns they can be used for. They are used in a variety of other architectural patterns too, and increasingly in fine-grained integration deployment.

Granularity

A glaring question then remains: how granular should the decomposition of the integration flows be? Although you could potentially separate each integration into a separate container, it is unlikely that such a purist approach would make sense. The real goal is simply to ensure that unrelated integrations are not housed together. That is, a middle ground, with containers that group related integrations together (as shown in Figure 8), can be sufficient to gain many of the benefits that were described previously.
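As a purely illustrative sketch of how such a containerized group of related integrations might be deployed (the image name, port, and probe path are invented for this example, not taken from any product documentation), a Kubernetes Deployment could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-integration           # one independently deployable group
spec:
  replicas: 3                        # scaled without touching other integrations
  selector:
    matchLabels:
      app: orders-integration
  template:
    metadata:
      labels:
        app: orders-integration
    spec:
      containers:
      - name: runtime
        image: example/integration-runtime:1.2   # hypothetical image
        livenessProbe:               # a simple health check endpoint
          httpGet:
            path: /healthz
            port: 7600
```

Because the runtime is stateless, the orchestrator is free to add, remove, or replace these replicas for rolling deployments, canary tests, and failover.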
You target the integrations that need the most independence and break them out on their own. On the flip side, keep together flows that, for example, share a common data model for cross-compatibility. In a situation where changes to one integration must result in changes to all related integrations, the benefits of separation may not be so relevant. For example, where any change to a shared data model must be performed on all related integrations, and they would all need to be regression tested anyway, having them as separate entities may only be of minimal value. However, if one of those related integrations has a very different scaling profile, there might be a case for breaking it out on its own. It's clear that there will always be a mixture of concerns to consider when assessing granularity.

Conclusion on fine-grained integration deployment

Fine-grained deployment allows you to reap some of the benefits of microservices architecture in your integration layer: greater agility from infrastructurally decoupled components, elastic scaling of individual integrations, and an inherent improvement in resilience from the greater isolation.
Lessons Learned
The microservices approach encourages teams to gain increasing autonomy such that they can make changes confidently at a more rapid pace. When applied to integration, that means allowing the creation and maintenance of integration artifacts to be owned directly by the application teams, rather than by a single separate centralized team. This distribution of ownership is often referred to under the broader topic of "decentralization", which is a common theme in microservices architecture.

It is extremely important to recognize that decentralization is a significant change for most organizations. For some, it may be too different to take on board and they may have valid reasons to remain completely centrally organized. For large organizations, it is unlikely it will happen consistently across all domains. It is much more likely that only specific pockets of the organization will move to this approach - where it suits them culturally and helps them meet their business objectives.

As discussed earlier in relation to SOA, ESBs and APIs, technology islands such as integration had their own dedicated, and often centralized, teams. Often referred to as the "ESB team" or the "SOA team", they owned the integration infrastructure, and the creation and maintenance of everything on it.

We could debate Conway's Law as to whether the architecture created the separate team or the other way around, but the more important point is that the technology restriction of needing a single integration infrastructure has been lifted.

We can now break integrations out into separate decoupled (containerized) pieces, each carrying all the dependencies they need, as demonstrated in Figure 9 below.
[Figure 9 labels: Microservice Applications, Engagement Applications]
The goal was to create consistency; the con is that creating that consistency took time. The fundamental question is: "does the consistency justify the additional time?" In decentralization, the team is empowered to implement the governance policies that are appropriate to their scope.

Let's just reinforce the point we made in the introduction of this chapter. While decentralization of integration offers potentially unique benefits, especially in terms of overall agility, it is a significant departure from the way many organizations are structured today. The pros and cons need to be weighed carefully, and it may be that a blended approach, where only some parts of the organization take on this approach, is more achievable.

Does decentralized integration also mean decentralized infrastructure?

To re-iterate, decentralized integration is primarily an organizational change, not a technical one. But does decentralized integration imply an infrastructure change? Possibly, but not necessarily. API management is very commonly implemented in this way: with a shared infrastructure (an HA pair of gateways and a single installation of the API management components), but with each application team directly administering their own APIs as if they had their own individual infrastructure. The same can be done with the integration runtimes, by having a centralized container orchestration platform on which they can be deployed, but giving application teams the ability to deploy their own containers independently of other teams.
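One way to realize this shared-but-decentralized model on Kubernetes (a hypothetical sketch; the team names are invented) is to give each application team its own namespace on a centrally operated cluster, with role-based access control granting each team deployment rights only within its namespace:

```yaml
# One shared cluster, one namespace per application team.
# Team names below are purely illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-orders
```

Each team can then deploy its own integration containers independently, while a platform team operates the cluster itself.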
[Figure labels: Microservice Applications, Engagement Applications, Public API Exposure Gateway, Enterprise API Gateway, API Gateway, Integration Runtime, Lightweight language runtime, Asynchronous integration, Microservice application boundary, Scope of the ESB pattern]

…application. Furthermore, container-based infrastructures, if designed using cloud-ready principles and an infrastructure-as-code approach, are much more portable to cloud and make better use of cloud-based scaling and cost models. With the integration also owned by the application team, it can be effectively packaged as part of the application itself. In short, decentralized integration significantly improves your cloud readiness.

…in relation to this fully decentralized pattern, but we're still achieving the same intent of making application data and functions available for re-use by other applications across, and even beyond, the enterprise.
…code or applications. It's applicable to organizational structure changes as well. An organization's landscape will be a complex heterogeneous blend of new and old. It may have a "move to cloud" strategy, yet it will also contain stable heritage assets. The organizational structure will continue to reflect that mixture. Few large enterprises will have the luxury of shifting entirely to a decentralized organizational structure, nor would they be wise to do so.
We certainly do not anticipate reorganization at a company level in its entirety overnight. The point here is more that as the architecture evolves, so should the team structure working on those applications, and indeed the integration between them. If the architecture for an application is not changing and is not foreseen to change, there is no need to reorganize…

If we look into the concerns and motivations of the people involved, they fall into two very different groups, illustrated in Figure 12.

[Figure 12 labels - Traditional integration: re-use, quality, stability, support, monitoring, fixed requirements. Agile teams: agility, velocity, autonomy, freemium, cloud native, vendor agnostic, short learning curve.]
Now let's consider what this change does to an individual and what they're concerned about. The first thing you'll notice about the next diagram is that it shows both old and new architectural styles together. This is the reality for most organizations. There will be many existing systems that are older, more resistant to change, yet critical to the business. Whilst some of those may be partially or even completely re-engineered, or replaced, many will remain for a long time to come. In addition, there is a new wave of applications being built for agility and innovation using architectures such as microservices. There will be new cloud-based software-as-a-service applications being added to the mix too.

[Figure 12 concerns: "Does it serve me long term?", "What do the analysts think of it?", "Does it have an active community?", "Are my skills relevant to my peers?", "Could I get sacked for a risky choice?"]

Figure 12: Traditional developers versus agile teams

A developer of traditional applications cares about stability, generating code for re-use, and doing a large amount of up-front due diligence. The agile teams, on the other hand, have shifted to a delivery focus. Now, instead of thinking about the integrity of the enterprise architecture first and being willing to compromise on individual delivery timelines, they're thinking about delivery first and are willing to compromise on consistency.

"Agile teams are more concerned with the project delivery than they are with the enterprise architecture integrity."
Let's view these two conflicting priorities as two ends of a pendulum. There are negatives at the extreme end on both sides. On one side we have analysis paralysis, where all we're doing is talking and thinking about what we should be doing; on the other side we have the wild-wild-west, where all we're doing is blindly writing code with no direction or thought towards the longer-term picture. Neither side is correct, and both have grave consequences if allowed to slip too far to one extreme or the other. The question still remains: "If I've broken my teams into business domains and they're enabled and focused on delivery, how do I get some level of consistency across all the teams? How do I prevent duplicate effort? How do I gain some semblance of consistency and control while still enabling speed to production?"

Evolving the role of the Architect

The answer is to also consider the architecture role. In the SOA model the architecture team would sit in an ivory tower and make decisions. In the new world, the architects have an evolved role: practicing architects. An example is depicted in Figure 13.

[Figure 13 labels: Microservice application, Guild(s), Microservice component]

Figure 13: Practicing architects play a dual role as individual contributors and guild members.

Here we have many teams, and some of the members of those teams are playing a dual role. On one side they are expected to be an individual contributor on the team, and on the other side they sit on a committee (or guild) that rationalizes what everyone is working on. They are creating common best practices from their work on the ground. They are creating shared frameworks, and sharing their experiences so that other teams don't blunder into traps they've already encountered. In the SOA world, the goal was to stop duplication and enforce standards before development even started. In this model the teams are empowered, and the committee or guild's responsibility is to raise, address, and fix cross-cutting concerns at the time of application development.

If there is a downside to decentralization, it may be the question of how to govern the multitude of different ways that each application team might use the technology - essentially encouraging standard patterns of use and best practices. Autonomy can lead to divergence.
If every application team creates APIs in their own style and convention, it can become complex for consumers who want to re-use those APIs. With SOA, attempts were made to create rigid standards for every aspect of how the SOAP protocol would be used, which inevitably made them harder to understand and reduced adoption. With RESTful APIs, it is more common to see convergence on conventions rather than hard standards. Either way, the need is clear: even in decentralized environments, you still need to find ways to ensure an appropriate level of commonality across the enterprise. Of course, if you are already exploring a microservices-based approach elsewhere in your enterprise, then you will be familiar with the challenges of autonomy.

"The practicing architect is now responsible for execution of the individual team mission as well as the related governance requirements that cut across the organization."

Therefore, the practicing architect is now responsible for knowing and understanding what the committee has agreed to, encouraging their team to follow the governance guidelines, bringing up cross-cutting concerns that their team has identified, and sharing what they're working on. This role also needs to be an individual contributor on one of the teams, so that they feel the pain, or benefit, of the decisions made by the committee.

Enforcing governance in a decentralized structure

With the concept of decentralization comes a natural skepticism over whether the committee or guild's influence will be persuasive enough to enforce the standards they've agreed to. Embedding our "practicing architect" into the team may not be enough. What more can we do, via the practicing architect role, to encourage enforcement of standards?

Let's consider how the traditional governance cycle often occurs. It typically involves the application team working through complex standards documents, and having meetings with the governance board prior to the intended implementation of the application to establish agreement. Then the application team would proceed to development activities, normally beyond the eyes of the governance team. On or near completion, and close to the agreed production date, a governance review would occur. Inevitably the proposed project architecture and the actual resultant project architecture will be different, and at times radically different. Where the architecture review board had an objection, there would almost certainly not be time to resolve it. With the exception of extreme issues (such as a critical security flaw), the production date typically goes ahead, and the technical debt is added to an ever-growing backlog.

Clearly the shift we've discussed of placing practicing architects in the teams encourages alignment. However, the architect is now under project delivery pressure, which may mean they fall into the same trap as the teams originally did, sacrificing alignment to hit deadlines.

The key ingredient for success in a modern agile development environment is automation: automated build pipelines, automated testing, automated deployment and more. The practicing architect needs to be actively involved in ways to automate the governance.
This could be anything from automated code review, to templates for build pipelines, to standard Helm charts that ensure the target deployment topologies are homogeneous even though they are independent. In short, the focus is on enforcement of standards through frameworks, templates and automation, rather than through complex documents and review processes. While this idea of getting the technology to enforce the standards is far from new, the proliferation of open standards in the DevOps tool chain and cloud platforms in general is making it much more achievable.

Let's start with an example: say that you have microservices components that issue HTTP requests. For every HTTP request, you would like to log, in a common format, how long that HTTP transaction took as well as the HTTP response code. If every microservice did this differently, there wouldn't be a unified way of looking at all traffic. Another role of the practicing architect is therefore to build helper artifacts that are then used by the microservices. In this way, instead of the governance process being a gate, it becomes an accelerator, through the architects being embedded in the teams and working on code alongside them. Now the governance cycle is done with the teams, and instead of reviewing documents, the code is the document, and the checkpoint is to make sure that the common code is being used.

Another dimension to note is that not all teams are created equal. Some teams are cranking out code like a factory, others are thinking ahead to upcoming challenges, and some teams are a mix of the two. An advanced team that succeeds in finding a way to automate a particular governance challenge will be a much more successful evangelist for that mechanism than any attempt for it to be created by a separate governance team.

As we are discussing the technical architect, it may seem that too much is being put on their shoulders. They are responsible for application delivery, they are responsible for being part of the committee discussed in the previous section, and now we are adding an additional element of writing common code that is to be used by other application development teams. Is it too much?

A common way to offload some of that work is to create a dedicated team, under the direction of the practicing architect, that writes and tests this code. The authoring of the code isn't a huge challenge, but the testing of that common code is. The reason for placing a high value on testing is the potential impact of breaking, or introducing bugs into, all the applications that use that code. For this reason, extra due diligence and care must be taken, justifying the investment in the additional resource allocation.

Clearly our aim should be to ensure that general developers in the application teams can focus on writing code that delivers business value. With the architects writing or overseeing common components which naturally enforce the governance concerns, the application teams can spend more of their time on value, and less in governance sessions. Governance based on complex documentation and heavy review procedures is rarely adhered to consistently, whereas with inline tooling, standardization happens more naturally.
Conclusions on decentralized integration ownership

Of course, decentralization isn't right for every situation. It may work for some organizations, or for some parts of some organizations, but not for others. Application teams for older applications may not have the right skill sets to take on the integration work; it may be that integration specialists need to be seeded into their teams. This approach is a tool for potentially creating greater agility for change and scaling, but what if the application has been largely frozen for some time?

This concept is also rooted in actual technology improvements that take concerns away from the developer and handle them uniformly through the facilities of the cloud platform. As ever, we can refer right back to Conway's Law (circa 1967): if we're changing the way we architect systems and we want it to stick, we also need to change the organizational structure.
Lessons Learned

An organization that committed to decentralization was working with a microservices architecture that had now been widely adopted, and many small, independent assets were created at a rapid pace. In addition to that, the infrastructure had migrated over to a Docker-based environment. The organization didn't believe they needed to align their developers with specific technical assets.

The original thought was that any team could work on any technical component. If a feature required a team to add an element onto an existing screen, that team was empowered and had free rein to modify whatever assets were needed to accomplish the business goal. There was a level of coordination before the feature was worked on so that no two teams would be working on the same code at the same time. This avoided the need for merging of code.

In the beginning, for the first 4-5 releases, this worked out beautifully. Teams could work independently and could move quickly. However, over time problems started to arise.

The main problem was lack of end-state vision. Because each piece of work was taken independently, teams often did the minimum amount of work to accomplish the business objective. The main motivators for each team were risk avoidance, the drive to meet project deadlines, and a desire not to break any existing functionality. Since each team had little experience with the code they needed to change, they began making tactical decisions to lower risk.

Developers were afraid to break currently working functionality. As they began new work, they would work around code that was authored by another team. Therefore, all new code was appended to existing code. The microservices continued growing and growing over time, which resulted in the microservices not being so micro.

This led to technical debt piling up. The technical debt was not apparent over the first few releases, but then, 5 or 6 releases in, it became a real problem. The next release required the investment of unravelling past tactical decisions. Over time the re-hashing of previously made decisions outweighed the agility that this organization structure had originally produced.

The solution was to align teams to microservices components, and create clear delineation of responsibilities. This needed to be done through a rational approach. The first step was to break down the entire solution into bounded contexts, then assign teams ownership over those bounded contexts. A bounded context is simply a business objective and a grouping of business functions. An individual team could own many microservices components; however, those assets all had to be aligned to the same business objective. Clear lines of ownership and responsibility meant that each team thought more strategically about code modifications. The gravity of creating good regression tests was now much more important, since each team knew they would have to live with their past decisions.

Importantly, another dimension of these new ownership lines was fewer handoffs between teams to accomplish a business objective. One team would own the business function from start to finish - they would modify the front-end code, the integration layer, and the back-end code, including the storage. This grouping of assets is clearly defined in microservices architecture, and that principle should also carry through to organization structures, to reduce the handoffs between teams and increase operational efficiency.
Manually built: Integration hubs are often built only once, in the initial infrastructure stage. Scripts help with consistency across environments, but are mostly run manually.

Managed: The hub and its components are directly and individually monitored during operation, with role-based access control to allow administrative access to different groups of users.

Server pairs: Typically, pairs of nodes provide HA. Great care is taken to keep these pairs up and running and to back up the evolving configuration. Scalability is coarse-grained, achieved by creating more pairs or by adding resources so that existing pairs can support more workloads.
• Maintenance: Integration servers are not administered live. If you want to make any adjustments - such as changing an integration, adding a new one, changing property values, or adding product fixpacks - this is done by creating a new container image, starting up a new instance based on it, and shutting down the current container.

Why? Any live change to a running server makes it different from the image it was built from - it changes its runtime state. The container orchestration engine would then be unable to re-create containers at will for failover and scaling.

• Monitoring: Monitoring isn't done by connecting to a live running server. Instead, the servers report what's going on inside them via logging, which is aggregated by the platform to provide a monitoring view.

Why? Direct monitoring techniques would not be able to keep up with the constantly changing number of containers, nor would it be appropriate to expect every container to accept monitoring requests alongside its "day job". Note: there are some exceptions, such as a simple health check…

• Affinity: Integration servers cannot make any assumptions about how many other replicas are running or where they are. This means careful consideration needs to be paid to anything that implies any kind of affinity, or selective caching of any data.

Why? In a word, scalability. The container orchestration platform must be able to add or remove instances at will. If state is held for any reason, it will not be retained during orchestration.

"Adopting a 'cattle approach' impacts the ways in which your DevOps teams will interact with the environment and the solution overall, creating increasing efficiencies."

There are plenty of additional considerations we could discuss, but the overall point is clear: we need to think very differently about how we design, build, and deploy if we are to reap the benefits of greater development agility, elastic scaling, and powerful resilience models.

What's so different with cattle?

How do we know if we're doing it right? Are we really creating replaceable, scalable cattle, or do we still have heavily nurtured pets? There are many elements to what constitutes an environment made from cattle rather than pets. One important litmus test that we'll discuss here revolves around the question "What is part of your build package for each new version of a component?" Take a look at the two images in Figure 16.
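The "change means a new image" maintenance principle can be illustrated with a hypothetical Dockerfile; the base image name and artifact path here are invented for the sketch, not product defaults:

```dockerfile
# Sketch only: base image and paths are assumptions.
# Any change to an integration is made by building a NEW image from a
# Dockerfile like this, rolling it out, and discarding the old containers,
# never by editing a running server.
FROM example/integration-runtime:1.2
# Bake the integration artifacts into the image at build time.
COPY flows/orders-sync.bar /integrations/
```

If the answer to "what is in your build package?" is an immutable image like this, you are closer to cattle; if it is a set of manual post-install steps, you are still nurturing pets.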
…the inevitable question is: "Am I introducing an ESB into a microservices application?" It is an understandable concern, but it is incorrect, and it's extremely important to tackle this concern head on. As you may recall from the earlier definitions, an integration runtime is not an ESB; that is just one of the architectural patterns the integration runtime can be a part of.
One of the key benefits of microservices architecture is that you are no longer restricted to one language or runtime, which means you can have a polyglot runtime - a collection of different runtimes, each suited to different purposes. You can introduce integration as just another of the runtime options for your microservices applications. Whenever you need to build a microservices component that's integration centric, you would then expect to use an integration runtime.

Traditionally, integration runtimes have been mostly used for integration between separate applications - and they will certainly continue to perform that role - but here we are discussing their use as a component within an application.

In the past, it would have been difficult for application developers to take on integration, since the integration tooling wasn't part of the application developer's toolbox. Deep skills were often required in the integration product and in associated integration patterns. Today, with the advances in the simplicity of integration runtimes and tooling, there is no longer a need for a separate dedicated team to implement and operate them. Integrations are vastly easier to create and maintain.

In a world where applications are now composed of many fine-grained components that can be based on a polyglot of different runtimes, we now have the opportunity to use the right runtime for each task at hand. Where integration-like requirements are present, we can choose to use an integration runtime.

Common infrastructure enabling multi-skilled development

What is it exactly that has made it possible for microservice application teams to work with multiple different languages and runtimes within their solution? Certainly, in part it comes down to the fact that languages have become more expressive - you can achieve more with fewer lines of code - and tooling has become easier to learn and more powerful. However, there's another key reason that is directly related to what cloud-native brings to the table. The runtimes share a common infrastructure, not just at the operating system level but in many other dimensions.

Historically, each runtime type came with its own proprietary mechanisms for high availability, scaling, deployment, monitoring and other system administration tasks. Figure 19 demonstrates the difference between traditional and cloud native infrastructures.

Figure 19: Traditional infrastructure with every capability tied to a specific runtime, and a cloud native infrastructure with almost all capabilities provided by the platform.
Modern lightweight runtimes are designed to leverage many if not all of those capabilities from the platform in which they sit. Cloud native platforms such as Kubernetes, combined with suitable runtime frameworks, enable a lightweight runtime to be made highly available, scaled, monitored and more in a single standardized way, rather than in a different way for each runtime.

Essentially, the team only needs to gain one set of infrastructure skills and they can then look after the polyglot of runtimes in the application. This standardization extends into common source code repositories such as GitHub and build tools such as Jenkins. It also increases the consistency of deployment, as you are propagating pre-built images that include all dependencies out to the environments. Finally, it simplifies install by simply layering files onto the file system.

Ideally, the only new skill you need to pick up to use another runtime is how to build its artifacts, whether that be writing code for a language runtime or building mediation flows for an integration engine. Everything else is done the same way across all runtimes.

Once again, this brings the freedom to choose the best runtime for the task at hand. Based on the information above, it is clear that if a microservices-based application has components that are performing integration-like work, introducing a lightweight integration runtime to the toolkit will aid productivity with a minimal learning curve.

Portability: Public, private, multicloud

One of the major benefits of using a cloud native architecture is portability. The goal of many organizations is to be able to run containers anywhere, and to be able to move freely between a private cloud, various vendors of public cloud, or indeed a combination of these.

Cloud native platforms must ensure compatibility with standards such as Open API, Docker and Kubernetes if this portability is to be a reality for consumers. Equally, runtimes must be designed to take full advantage of the standardized aspects of the platforms.

An example might be data security. Let's assume a solution has sensitive data that must remain on-premises at this point in time. However, regulations and cloud capabilities may mature such that it could move off-premises at some point in the future. If you use cloud native principles to create your applications, then you have much greater freedom to run those containers anywhere in the future. Other examples might include development and test in one cloud environment and production in a different one, or using a different cloud vendor for a disaster recovery facility.

Whatever the reason, we are at a point where applications can be more portable than ever before, and this also applies to the integrations that enable us to leverage their data. Those integrations need to be able to be deployed to any cloud infrastructure, and indeed enable the secure and efficient spanning of multiple cloud boundaries.
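The single standardized way in which a platform keeps heterogeneous runtimes available and monitored comes down to a small, uniform contract that each runtime exposes. One common touchpoint is an HTTP health endpoint that the platform probes to decide whether to restart a container. The sketch below is illustrative only: the /healthz path is a convention rather than a requirement of any platform, and a real integration runtime would of course use its own server stack.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Any runtime (Java, Node.js, an integration engine, ...) that exposes
    an endpoint like this can be health-checked by the platform in the
    same standardized way, e.g. via a Kubernetes liveness probe."""

    def do_GET(self):
        if self.path == "/healthz":
            # Report healthy; a real runtime would check its own state here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8080) -> HTTPServer:
    """Bind the health endpoint; port 0 picks an ephemeral port."""
    return HTTPServer(("127.0.0.1", port), HealthHandler)
```

Because every container answers the same kind of probe, the operations team configures restart and monitoring once, for all runtimes, rather than per product.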
Lessons Learned

A real-life scenario

An organization had adopted a microservices architecture with agile methodologies. On their roadmap, this organization was on pace to build out many microservices in a very short amount of time. This notion was perfectly aligned with the attributes of microservices architecture and did not indicate any reason for concern.

The team knew enough to plan for avoiding noisy neighbor scenarios, which would certainly lead to dependency clashes. To avoid such problems, they established the need to create a new runtime for each microservice. However, they did not choose to implement this on a cloud infrastructure. Instead, the team adopted VMs to provide this containment and required that each microservice run on its own VM.

The problem

The teams immediately came to a standstill, because the creation of each new service meant that they would have to create a unique VM, install a runtime on top of that VM, configure each one for that particular use case, and finally add code to that runtime. These steps would then have to be repeated and tested for each and every environment.

Development velocity came to a screeching halt as onboarding new microservices took too much time. Developers were stuck waiting for the creation of the infrastructure to run each new microservice. Inevitably, this raised the notion of leveraging runtimes that were already created. This was the exact behavior the organization had set out to avoid!

The solution

The team then realized the need for containers. A necessary component to support a microservices architecture is a cloud environment. The team quickly realized that the isolation containers provide solved the problem of version clashes as well as isolating each individual container from the noisy neighbor scenario. The solution here was therefore straightforward: the team agreed on and adopted a cloud platform.

While this improved the situation, it didn't entirely solve the problem. The team was still treating Docker containers like VMs. The container was started with the necessary running software and dependencies, but code came and went with each new version. The concept of packaging and treating Docker images differently than VMs was lost. To improve this state, the team picked the appropriate workload and started with stateless services. From here, they could treat Docker containers like cattle, enabling a container to be disposable. They also ensured that each new version of code resulted in a new Docker image, ensuring greater consistency between environments and a more technology-independent build chain. This provided the agility the team needed to keep up with the demands of a microservices architecture.
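One way to guarantee that each new version of code yields a new Docker image is to derive the image tag from the commit that produced the build, so the exact same image is promoted through every environment. The sketch below is a hypothetical helper a CI pipeline might use; the registry name, service name and command shapes are assumptions for illustration, not part of any product.

```python
def image_reference(registry: str, name: str, commit_sha: str) -> str:
    """Derive an immutable image reference from a git commit SHA, so
    every code version produces a distinct, traceable Docker image."""
    if len(commit_sha) < 12:
        raise ValueError("expected a full or abbreviated git commit SHA")
    return f"{registry}/{name}:{commit_sha[:12]}"

def build_and_push_commands(registry: str, name: str, commit_sha: str) -> list:
    """Commands a CI job might run. Any CI tool can invoke the same two
    steps, which keeps the build chain technology independent."""
    ref = image_reference(registry, name, commit_sha)
    return [
        ["docker", "build", "-t", ref, "."],  # bake code + dependencies in
        ["docker", "push", ref],              # publish the immutable image
    ]
```

Because the tag encodes the commit, an image is never rebuilt or mutated per environment: the artifact tested in staging is byte-for-byte the one that runs in production.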
For example, decentralization could precede the move to fully fine-grained integration deployment if an organization were to enable each application team to implement their own "separate ESB pattern". Indeed, if we were being pedantic, this would really be an application service bus or a domain service bus. This would certainly be decentralized integration—application teams would take ownership of their own integrations—but it would not be fine grained integration, because each application team would still have one large installation containing all the integrations for their application.

The reality is that you will probably see hybrid integration architectures that blend multiple approaches. For example, an organization might have already built a centralized ESB for integrations that are now relatively stable and would gain no immediate business benefit by refactoring. In parallel, they might start exploring fine-grained integration deployment for new integrations that are expected to change quite a bit in the near term.

Don't worry…we haven't returned to point-to-point

Comparing the point-to-point architectures we were trying to escape from in the early 2000s with the final fully decentralized architectures we've discussed, it might be tempting to conclude that we have come full circle, and are returning to point-to-point integration. The applications that require data now appear to go directly to the provider applications. Are we back where we started?

To solve this conundrum, you need to go back to what the perceived problem was with point-to-point integration in the first place: interfacing protocols were many and varied, and application platforms didn't have the necessary technical integration capabilities out of the box. For each and every integration between two applications, you would have to write new, complex, integration-centric code for both the service consumer and the service provider.

Now compare that situation to the modern, decentralized integration pattern. The interface protocols in use have been simplified and rationalized such that many provider applications now offer RESTful APIs—or at least web services—and most consumers are well equipped to make requests based on those standards.

Where applications are unable to provide an interface over those protocols, powerful integration tools are available to the application teams to enable them to rapidly develop APIs/services using primarily simple configuration and minimal custom code. Along with wide-ranging connectivity capabilities to both old and new data sources and platforms, these integration tools also fulfill common integration needs such as data mapping, parsing/serialization, dynamic routing, resilience patterns, encryption/decryption, traffic management, security model switching, identity propagation, and much more—again, all primarily through simple configuration, which further reduces the need for complex custom code.

The icing on the cake is that thanks to the maturity of API management tooling, you are now able to not only provide those interfaces to consumers, but also:

• make them easily discoverable by potential consumers
• enable secure, self-administered on-boarding of new consumers
• provide analytics in order to understand usage and dependencies
• promote them to externally facing so they can be used by third parties
• potentially even monetize APIs, treating them as a product that's provided by your enterprise rather than just a technical interface
Where a single organization has integration in multiple solutions (i.e., most businesses), that business may in fact seek to satisfy both imperatives.

In this situation, organizations may favor the managed service option. An environment can be provisioned within a multi-tenant cloud within minutes. The vendor maintains the health of the environment and currency of the software, greatly reducing the time, energy and cost of traditional server installations.

Performance optimization

Maximizing performance is a multi-faceted requirement. Within real-time architectures, the primary consideration is typically reducing latency. In this scenario, we want the message (or service call) to execute with as little friction as possible. Collocating hardware has an advantage in reducing network hops and avoiding network congestion. Pinning key reference data in local caches provides a means of avoiding additional external calls which themselves introduce communication time. Ensuring the service has a large enough pipe at any time to accept any incoming requests also avoids wait times. A system that deals with such requirements effectively tends to cost more, but where the business solution is mission-critical, it may well be worth the time, effort and cost.

If performance optimization is the primary requirement, an organization will likely prefer an on-premises installation on dedicated hardware and network infrastructure. The integration platform should be installable in the hardware environments of your choice (X, P and Z hardware) – whichever best fits the solution requirements.

Dynamic scalability/flexibility

Many organizations have spikes in processing that happen at various times in the year. For the retailer, these periods occur around Thanksgiving or Valentine's Day (or others depending on the specific merchandise). For healthcare companies, there is a tendency to see larger workloads during open enrollment periods in November and December. However, other spikes in workload cannot be so neatly planned, and when the workload represents significant business opportunity for profit, the ability to scale up processing quickly is paramount to success. In this book, we have explored the container-based and microservices-aligned architecture, which is perfectly suited to helping organizations with this requirement. While other architecture choices do exist, the repeatability of the container-based model across many IT disciplines makes this increasingly attractive.

As we have discussed earlier in this book, the integration technology should be available as a container. This fine-grained deployment model removes single points of management and control so that the architecture can scale independently of other workloads in the environment. Following the principles of cloud-native applications, the technology is then a perfect fit for organizations pursuing such scalability and flexibility.
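The performance discussion above mentions pinning key reference data in local caches to avoid repeated external calls on the latency-critical path. A minimal sketch of the idea using Python's standard library memoization; the lookup function and its data are hypothetical stand-ins for a slow call to a remote reference-data service:

```python
import functools
import time

@functools.lru_cache(maxsize=1024)
def country_for_currency(code: str) -> str:
    """First call pays the cost of the remote request; repeated calls
    for the same key are served from the local in-process cache."""
    return _remote_lookup(code)

def _remote_lookup(code: str) -> str:
    # Stand-in for a slow external call to a reference-data service.
    time.sleep(0.05)  # simulated network round trip
    return {"USD": "US", "EUR": "DE", "GBP": "GB"}.get(code, "??")
```

In a real deployment the cache would also need an invalidation or expiry policy, since reference data pinned locally can go stale; that trade-off is the price of removing the network hop.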
Deciding to adopt an API-led approach is of course just the beginning of the story; you then need to actually implement the APIs. This comes in two parts:

• An outward facing API management capability providing a gateway to make the APIs safely and securely available to the outside world, and providing the self-administered developer portal to enable consumers to discover, explore and gain access to the APIs.

Also critical to the API economy is elastic scalability, as it is nearly impossible to know which of your APIs will become popular. The cloud native infrastructure employed by agile integration architecture enables us to start small yet still scale on demand should a particular API start to gain traction.

Scenario 2: Increase business agility with a new approach to messaging

This is further complicated as different parts of the organization start adopting IaaS in different cloud platforms. While these cloud platforms may include messaging technology, IT teams are finding that the assumptions of the lower qualities of service provided by these platforms (typically "at least once delivery") increase the burden on every application to program to this new…
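The burden of at-least-once delivery mentioned above is typically carried by each consuming application as deduplication logic: since the platform may redeliver a message, the consumer must make reprocessing a no-op. A minimal sketch of the idea; the class name, message shape and ID field are illustrative assumptions, not a product API:

```python
class IdempotentConsumer:
    """Consumer-side deduplication for at-least-once delivery.
    Tracking processed message IDs turns redelivery into a no-op.
    (A production consumer would persist the seen-ID set durably.)"""

    def __init__(self, handler):
        self._handler = handler   # the business side effect to run once
        self._seen = set()        # IDs of messages already processed

    def on_message(self, message: dict) -> bool:
        """Process a message; return False if it was a duplicate."""
        msg_id = message["id"]
        if msg_id in self._seen:
            return False          # redelivered: skip the side effect
        self._handler(message["body"])
        self._seen.add(msg_id)
        return True
```

Every application written against an at-least-once platform repeats some variant of this logic, which is exactly the overhead a once-and-once-only messaging capability removes.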
Organizations need robust, scalable, secure, and highly available messaging and integration platforms adept at bridging across the cloud and back-end systems in order to provide consistent solution development experiences and speed productivity.

Modern messaging and integration middleware brings a new set of capabilities to overcome these challenges:

• Enhancement of the enterprise integration platform components to embrace cloud characteristics such as elasticity, security, scalability, and others.

• Multicloud strategy using connection and integration capabilities on external vendor cloud platforms through open standards to use best-in-class capabilities and avoid vendor/platform lock-in.

The modern messaging offering must provide asynchronous messaging to allow applications, systems, and services to exchange data through a queue, providing guaranteed once-and-once-only delivery of messages, enabling the business to focus on the applications rather than technical infrastructure. Ultimately, a high quality distributed messaging capability allows the application to become portable to wherever that messaging capability can be deployed.

In addition, an integration runtime then simplifies how different applications and business processes interact with the messaging layer regardless of the application type (for example, off-the-shelf, custom-built, software as a service), location (private cloud, public cloud), protocol, or message format.

Messaging is all about decoupling: isolating components from one another to reduce dependencies and increase resilience. Fine-grained integration deployment further increases that resilience by ensuring that wherever messaging interactions require integration, they have their own dedicated containers performing that work, reducing regression testing and improving reliability.

Agile integration architecture also simplifies migration to and between cloud platforms, since the integrations relevant to a particular application can be moved independently of the others. The integrations live with the application rather than in an inflexible centralized infrastructure.

Scenario 3: Transfer and Synchronize Your Data and Digital Assets to the Cloud

One of the most critical aspects of the customer experience is responsiveness and ease. We live in a "now world" where businesses and consumers expect instant access to the information they need. The technical difficulty of providing reliable and secure access to this data does not concern them. Regardless of the communication channel, distance, or device, they expect timely and reliable information and action whenever they interact with your organization.

This need creates difficulties for organizations on several fronts. An obvious one is the delivery of any size, number, or type of digital asset to anywhere. Today, data size, transfer distance, and network conditions still greatly impact the speed and reliability that customers will get versus what they expect. This dilemma has become chronic as more industries become data-driven and operations expand globally.
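The decoupling that messaging provides can be sketched with nothing more than Python's standard library: the producer puts data on a queue and needs no knowledge of who consumes it, or whether the consumer is even running at that moment. This is illustrative only; a real deployment would exchange data through a messaging provider rather than an in-process queue.

```python
import queue
import threading

def run_exchange() -> list:
    """Producer and consumer communicate only through the queue,
    never by calling each other directly."""
    q = queue.Queue()
    received = []

    def consumer():
        # Drains the queue independently of the producer's pace.
        while True:
            item = q.get()
            if item == "STOP":   # sentinel used only to end this demo
                break
            received.append(item)

    t = threading.Thread(target=consumer)
    t.start()
    for payload in ["event-1", "event-2", "event-3"]:
        q.put(payload)  # fire and forget: no direct coupling to the consumer
    q.put("STOP")
    t.join()
    return received
```

Because the only contract between the two components is the queue and the message format, either side can be redeployed, scaled, or moved to another platform without changing the other.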
Another difficulty is a bit more behind the scenes. The amount of data created for and by all of us is growing exponentially in our hyper-connected world. Today, businesses are moving to a multicloud environment to gain maximum agility, efficiency, and scale, while lowering operating risk. To support big data processing in the cloud, organizations need a solution specifically designed to move large files and data sets to and from multiple cloud infrastructures quickly and securely.

Shifting large data volumes between data centers and the cloud infrastructure can be a primary roadblock to…

IBM Cloud Integration provides a comprehensive data transfer and sync system that is hybrid and multicloud, addressing a flexible set of data transfer needs. This high-speed transfer technology makes it possible to securely transfer data up to 1000x faster than traditional tools, between any kind of storage, whether it's on premises, in the cloud, or moving from one cloud vendor to another, regardless of network latency or physical distance.

Some common situations for high speed transfer are:

• Sending and syncing urgent data of any size between your enterprise's data centers anywhere around the globe

• Sending and syncing data to any major public cloud by using our presence in all public clouds to enable cloud migration at high speed

Therefore, in a multicloud architecture, particularly where part of the solution requirements is to transfer video or other large files, the ability to distribute these capabilities across the topology is paramount to achieving good customer experiences. Organizations must then consider weaving high-speed transfer into API, application and messaging-led solutions. The elastically scalable infrastructure that underlies any one of these should then also account for variability in the scale-out requirements of this data transfer layer.
Conclusions
It is also our hope that you’ve gained an appreciation for how IBM
has continued to innovate so that our customers can benefit from
adopting modern integration technologies that assist them
ultimately in satisfying their digital transformation objectives.
Kim, Nick and Tony are very happy to entertain questions, receive
feedback, and advise on specifics that might not have been
covered in this work. If you’d like to reach out, please find our
contact information in the “About the Authors” section. Of course,
we are also happy to be working for IBM where we have a great
team of professionals who also stand at the ready. If you already
have friends at Big Blue, we’re sure they would also be happy to
get your call.
IBM Corporation
Software Group
Route 100
Somers, NY 10589