Hype Cycle For Application Architecture and Integration, 2021
Published 15 July 2021 - ID G00747497 - 82 min read
By Analyst(s): Eric Thoo, Massimo Pezzini
Initiatives: Software Engineering Technologies; Software Engineering Strategies
At the same time, as businesses transition through the COVID-19 pandemic and
associated economic changes, leaders responsible for modernizing application and
integration infrastructure are under pressure to reduce the cost and improve the
performance of their application portfolios. It is becoming critical to focus on improving
business process efficiency via integration and automation.
This Hype Cycle, along with Hype Cycle for Software Engineering, 2021, reflects the
position, rate of adoption and speed of maturation of innovative technologies and
practices that will affect the evolution of application and integration infrastructure. Many
of these innovations can have a short- to medium-term impact on application and
software engineering leaders’ strategies and tactics, but all collectively pave the way for
the composable business revolution.
Downloadable graphic: Hype Cycle for Application Architecture and Integration, 2021
■ iPaaS
■ IoT integration
Some innovations will take longer to achieve mainstream adoption relative to the ones
above, but have proven to deliver high or even transformational value, including:
Adoption of these innovations requires investment in skills and presents some risks
because of their intrinsic complexity or still-limited industry support. However, their market
penetration is growing due to many successful deployments and associated lessons
learned.
Other innovations in this Hype Cycle are either relatively mature but have a moderate
impact, or their low level of industry adoption dilutes their potentially high benefits.
Finally, a small number of innovations are still in the initial stages of their life cycle, so application and software engineering leaders should assess the risks and rewards associated with their adoption.
Maturity: Embryonic
Definition
Integrating new applications and services into an application portfolio is complex and
expensive. Gartner research shows that up to 65% of the cost of implementing a new ERP
or CRM system is attributable to integration. The technology to enable applications to
self-integrate exists in pockets, but no vendor has yet combined all the elements
successfully. As applications develop the ability to discover and connect to each other, the
amount of basic integration work will dramatically reduce.
Business Impact
Drivers
Obstacles
■ The lack of a clear market leader that is looking to push this technology forward as
the major application vendors look to protect their customer bases.
User Recommendations
■ Ask your major application vendors about the interoperability of applications within their portfolios. This is the area where self-integrating applications are most likely to emerge first.
Sample Vendors
Maturity: Emerging
Definition
An ISET’s goal is to overcome these limits by empowering units and individual business users to fulfill integration work in a self-service way, by providing them with a shared technology platform and a proper set of services.
■ An ISET enables decentralized units to reduce time to value and increase business
agility by performing integration work by themselves, while keeping their integration
costs under control and ensuring overarching integration governance.
Drivers
■ The services (training, consulting, support, mentoring and service desk) required to
take advantage of these shared technologies.
Obstacles
■ The still-limited industry experience, which makes it relatively difficult to find support
and skills to help the ISET set up and quickly climb the relevant organizational,
methodological and technical learning curves.
User Recommendations
■ Establish your ISET by taking into account that its size can vary from a few full-time employees to dozens (or more), depending on your organization’s nature (midsize or large) and the scale and complexity of your integration challenges.
■ Define what integration personas are in scope (and what are not) to ensure due
diligence for democratized use cases.
■ Establish KPIs to measure the ISET’s ability to help its constituents become more
innovative, creative and empowered via self-service integration.
■ Implement the ISET model in a stepwise approach, which makes it easier to justify
investments in terms of business or technical benefits.
Sample Vendors
Integration Teams for the Digital Era Must Support New Delivery Models
Maturity: Adolescent
Definition
Event broker platform as a service (ebPaaS) plays the role of the intermediary in event-
driven architecture (EDA), configuring the event topics, registering event publishers and
subscribers, facilitating capture and distribution of event notifications. Event brokers are
built on message-oriented middleware (MOM, also known as message brokers) that
delivers the essential publish-subscribe capability, then extended with additional EDA-
oriented mediation and governance capabilities.
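The publish-subscribe mechanics an event broker layers on top of MOM can be illustrated with a minimal sketch. This is an illustrative in-memory broker, not any vendor's API: topics are registered implicitly, and publishing fans an event notification out to every subscriber.

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory event broker: topics, publishers, subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Fan the event notification out to every registered subscriber.
        for callback in self._subscribers[topic]:
            callback(event)

broker = EventBroker()
received = []
# Two independent subscribers react to the same topic in parallel.
broker.subscribe("orders.created", received.append)
broker.subscribe("orders.created", lambda e: received.append({"audited": e["id"]}))
broker.publish("orders.created", {"id": 42, "total": 99.5})
print(received)
```

A real ebPaaS adds, on top of this core pattern, durable delivery, topic governance, and publisher/subscriber registration.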
Business Impact
Organizations that are aware of their relevant business ecosystem events are better
prepared to manage unexpected interruptions and capitalize on opportunities in business
moments. They are equipped for broadcasting notable events for simultaneous,
multitargeted response. Event broker services enable organizations’ versatility in
monitoring multiple sources of events and communicating to many responders in parallel,
with strong scalability, integrity and resilience.
Drivers
■ The migration of business applications to the cloud demands new platforms and
communication infrastructure, driving many organizations to evaluate and adopt
event broker services, paired with integration and API management offerings.
■ Most leading SaaS offerings support some event processing, increasing awareness
of benefits and opportunities of event-driven application design in a large number of
mainstream business and government organizations.
■ Open-source event brokers are easier to operate and scale, reducing the cost of early
experimentation with event-driven architecture and attracting more start-ups and
other advanced software engineering teams.
■ The increasing popularity of digital integration hubs and other data consolidation approaches benefits from the near-real-time data accuracy of consuming event streams instead of relying on database lookups.
Obstacles
■ ebPaaS offerings can become too expensive as more proprietary features are added to differentiate them from the competition.
■ Some software engineering teams use webhook and WebSocket tools to set up event notifications, delaying adoption of the full many-to-many EDA experience that an event broker provides.
User Recommendations
■ Pilot experimental projects using event brokers to gain insight and skills for the
upcoming more advanced projects. Even a basic pub/sub middleware service is
sufficient as a precursor for a full-featured event broker.
■ Give preference to ebPaaS vendors that demonstrate an understanding of the full life cycle of event broker functionality and responsibility.
■ Plan for coordinated use of an event broker and a stream analytics platform. The
technologies are different and are used in combination in most advanced event
broker use cases.
Sample Vendors
Amazon Web Services (AWS); Confluent; Google; IBM; Microsoft; Solace; TIBCO Software;
Vantiq
Innovation Insight for Event Thinking
Innovation Insight for Event Brokers
The 5 Steps Toward Pervasive Event-Driven Architecture
The Impact of Event-Driven IT on API Management
Applying Event-Driven Architecture to Modern Application Delivery Use Cases
Event-Driven Architecture
Analysis By: Yefim Natis, Paul Vincent
Maturity: Adolescent
EDA provides advanced opportunities for scale, extensibility and resilience in applications
through its asynchronous, intermediated, pub/sub design model. Monitoring business and
technical events in real time enables continuous analysis of context for advanced
intelligence in decision management. Organizations that are interested in digital business
innovation will inevitably discover event stream analytics and EDA as powerful
components of their application design.
Business Impact
EDA is a key enabling architecture pattern for a number of leading trends in digital
business. An event-aware organization is more responsive in its ecosystem, more
empathetic in its customer experience and more intelligent in its decision making than a
purely transaction-centric business. Competence in EDA accelerates transition to digital
business. Lacking event awareness, organizations may struggle to support the business with competitive speed, agility, continuous innovation and cost-efficiency.
■ Many major application vendors, including Salesforce and SAP, have upgraded EDA support in their applications and application platforms in recent years, enabling more intelligent monitoring of business processes.
■ IoT applications use EDA to monitor states of devices. As the use of IoT software
continues to increase, so does the adoption of EDA.
■ All cloud hyperscalers have added or upgraded their support for EDA by adding and
extending their messaging and event brokering services.
Obstacles
■ The lack of productivity and governance tools dedicated to EDA limits the design of EDA-based applications to more advanced engineering teams, and thus delays broader adoption.
■ The diversity of protocol and API formats and standards for event processing limits
adoption and increases implementation costs.
■ The design principles of EDA are less well-understood by most development teams,
in part because of the familiarity bias in favor of the common and ubiquitous
request/reply model, often implemented using REST APIs.
User Recommendations
■ Aim for a pragmatic mixed use of request-driven APIs following the SOA model and EDA, spanning application design, software life cycle and production management.
■ Adopt EDA gradually, as the industry develops required standards, best practices,
and improved productivity design and management tools.
■ Aim to establish EDA, along with SOA, as the common and complementary
architecture patterns, both considered for all application initiatives.
■ Manage and mediate event channels aggressively, and understand that the value they carry represents an in-motion view of key business processes and happenings.
Maturity: Adolescent
A digital integration hub (DIH) provides API-/event-based data access by aggregating and
replicating multiple system-of-record sources into a low-latency, high-throughput data
management layer that synchronizes with the systems of record via event-driven patterns.
A DIH enables scalable, 24/7 data access; reduces workloads on the systems of record;
and improves business agility. Organizations can reap additional value by leveraging a
DIH in analytics, data integration and composition scenarios.
Digital initiatives massively leverage APIs and, increasingly, events, to unlock core
business applications data and business logic. However, their success can be undermined
when traditional integration architectures face severe performance, scalability and
availability issues that often stem from an excessive workload generated in the systems
of record. A DIH is an increasingly popular alternative to these approaches as it is able to
fix these issues while delivering additional benefits.
Business Impact
■ Reduces the cost of running systems of record or limits the fees paid to SaaS
providers for API consumption
■ Improves business agility and favors composability by decoupling the API layer from
the systems of records
Drivers
DIH architectures are typically used to deliver an API platform featuring a data
management layer between the systems of record and the API service layer itself. In this
way, the inquiry workload generated by the API calls doesn’t hit the systems of record, which are impacted only when their data must be updated. Therefore, interest in and adoption of DIH-enabled API platforms is fast growing in:
■ The banking, insurance, retail, energy and utilities, higher education, transportation,
hospitality and telecom industry sectors. However, other industries (for example,
government and healthcare) are also showing interest in this architecture as they are
subject to the above-mentioned drivers.
■ Large and midsize organizations with limited skills, attracted by vendors addressing the opportunity by repackaging their technology portfolios in DIH-oriented value propositions or by coming to market with packaged DIH-enabled API platforms, at times focused on specific use cases.
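The workload decoupling behind these drivers can be sketched in a few lines. All class and field names below are hypothetical; a real DIH would synchronize through change-data-capture tooling and an event broker rather than a direct callback, but the principle is the same: inquiry traffic never reaches the system of record.

```python
class SystemOfRecord:
    """Stands in for a back-end application database (workload-sensitive)."""
    def __init__(self):
        self._rows = {}
        self.read_count = 0

    def write(self, key, value):
        self._rows[key] = value
        return {"key": key, "value": value}  # change event, CDC-style

    def read(self, key):
        self.read_count += 1
        return self._rows[key]

class DigitalIntegrationHub:
    """Low-latency layer that serves API reads and syncs via change events."""
    def __init__(self):
        self._replica = {}

    def on_change_event(self, event):
        # Event-driven synchronization keeps the replica current.
        self._replica[event["key"]] = event["value"]

    def api_get(self, key):
        # The inquiry workload is absorbed here, not by the system of record.
        return self._replica[key]

sor = SystemOfRecord()
dih = DigitalIntegrationHub()
dih.on_change_event(sor.write("customer:1", {"name": "Ada"}))
for _ in range(1000):  # heavy API inquiry traffic
    dih.api_get("customer:1")
print(sor.read_count)  # the system of record served no reads
```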
Obstacles
Although the emerging packaged DIHs will make implementation easier and faster, the
technical complexity in current DIH implementation limits its adoption to leading-edge
organizations with sufficiently advanced skills and financial resources.
■ Dealing with an architecture still not well known in the industry, which implies a
scarcity of know-how, experience and skills in turn leading to high costs
■ Assembling and managing the varied set of DIH building blocks (API gateways,
application platforms, integration platforms, event brokers, data management and
metadata management tools)
■ Keeping the data management layer in sync with the systems of record by
leveraging event-based integration tools (for example, change data capture)
User Recommendations
■ Enabling API “pull” and event “push” services to access data scattered across
multiple back-end systems
■ Embed DIH initiatives into the overall data hub strategy for governance and
integration to avoid ending up with yet another data silo.
Sample Vendors
Cinchy; Fincons Group; IBM; Informatica; Mia-Platform; Microsoft; Oracle; SAP; Sesam;
Software AG
Innovation Insight: Turbocharge Your API Platform With a Digital Integration Hub
Data Fabric
Analysis By: Ehtisham Zaidi, Robert Thanaraj, Mark Beyer
Definition
A data fabric is an emerging data management design for attaining flexible and reusable
data integration pipelines, services and semantics. A data fabric supports various
operational and analytics use cases delivered across multiple deployment and
orchestration platforms. Data fabrics support a combination of different data integration
styles and leverage active metadata, knowledge graphs, semantics and ML to automate
and enhance data integration design and delivery.
Business Impact
■ Adds semantic knowledge for context and meaning, and provides enriched data
models.
■ Evolves into a self-learning model that recognizes similar data content regardless of
form and structure, enabling broader connectivity to new assets.
■ Monitors data assets on allocated resources for optimization and cost control.
Drivers
■ A data fabric enables tracking, auditing, monitoring, reporting and evaluating data
use and utilization, and data analysis for content, values, veracity of data assets in a
business unit, department or organization. This results in a trusted asset capability.
■ Catalogs alone are insufficient in assisting with data self-service. Data fabrics
capitalize on machine learning to resolve what has been a primarily human labor
effort using metadata to provide recommendations for integration design and
delivery.
■ Organizations have found that one or two approaches to data acquisition and
integration are insufficient. Data fabrics provide capabilities to deliver integrated
data through a broad range of combined data delivery styles including bulk/batch
(ETL), data virtualization, message queues, use of APIs, microservices and more.
Obstacles
Data fabrics are just past the Peak of Inflated Expectations. The main challenges
surrounding broad adoption are:
■ The diversity of skills and platforms needed to build a data fabric presents both technical and cultural barriers. It requires a shift from data management based upon analysis, requirements and design to one of discovery, response and recommendation.
■ Proprietary metadata restrictions will hamper the data fabric, which is wholly dependent upon acquiring metadata from a wide variety of data management platforms. Without that metadata, the fabric requires analytic and machine learning capabilities to infer what is missing, and, while possible, this will be error-prone.
User Recommendations
Data and analytics leaders looking to modernize their data management with a data
fabric should:
■ Invest in an augmented data catalog that assists with creating a flexible data model.
Enrich the model through semantics and ontologies for the business to understand
and contribute to the catalog.
■ Ensure subject matter expert support by selecting enabling technologies that allow
them to enrich knowledge graphs with business semantics.
■ Combine different data integration styles into your strategy (bulk/batch, message,
virtualization, event, stream, replication and synchronization).
Sample Vendors
Top Trends in Data and Analytics for 2021: Data Fabric Is the Foundation
Hybrid Integration Platform (HIP)
Analysis By: Massimo Pezzini
Maturity: Adolescent
Definition
As organizations pursue digital and composable business initiatives, they find that the
integration challenges they must address are growing in complexity and quantity. Cloud
services, cloud data warehouses, ecosystems, mobile apps and Internet of Things (IoT)
devices are new endpoints that they must integrate with traditional applications and data
sources. The HIP helps software engineering leaders implement the integration and
governance capabilities needed to integrate all their IT assets.
Business Impact
Each organization’s HIP implementation will differ to reflect specific requirements. But in
all cases, it will alleviate integration challenges by:
Drivers
■ Support a differentiated set of use cases, including, but not limited to, application,
data, B2B, process, IoT, API and event integration; robotic process automation; and
digital integration hub
Although not all organizations need to address all these requirements, almost all
organizations will have to tackle some of them. Therefore, most midsize, large and global
organizations will have to deploy at least a subset of the capabilities defined in the HIP
framework.
Obstacles
User Recommendations
Sample Vendors
Boomi; IBM; Informatica; Jitterbit; MuleSoft; Oracle; SAP; SnapLogic; Software AG; TIBCO
Software
Maturity: Emerging
Definition
Business Impact
Obstacles
■ The experiences that the AI learns from come from a variety of integrators, not only specialists but also citizen integrators who may not follow proven techniques. Poor design practices that become popular through overuse will misdirect the recommendation engine.
User Recommendations
■ Target simpler scenarios where past experience can be used to train ML systems in integration.
Sample Vendors
Boomi; IBM; Informatica; Microsoft; Oracle; SAP; SnapLogic; TIBCO Software; Tray.io;
Workato
Maturity: Adolescent
Definition
Business Impact
■ Simplified access to the centralized data model via APIs, instead of connecting directly to application APIs.
Drivers
Obstacles
■ The vendor landscape consists mostly of small startups, with only a handful of large vendors providing this service.
User Recommendations
■ Recognize that this is currently still a relatively new market. The few vendors that do provide this capability often do so for relatively niche use cases. It may take several years before data hub iPaaS becomes general-purpose enough for most clients. Given that the data is stored within the data hub iPaaS, it brings extra challenges, such as security, resilience and compliance, that regular iPaaS vendors do not have to worry about.
Sample Vendors
Maturity: Adolescent
Definition
The notion of PIPs has been in the market for many years, often known as “accelerators”
or “recipes” for integration between applications. However, PIPs are now becoming
popular because of the prevalence of SaaS offerings. PIPs enable ad hoc and citizen
integrators, as well as integration specialists, to deliver integration, thus enabling the self-
service model of integration delivery.
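The "recipe" nature of a PIP can be sketched as a small, configurable mapping rather than custom code. The recipe structure, application names and field names below are hypothetical illustrations, not any vendor's actual format:

```python
# A PIP is essentially a prebuilt, parameterized integration flow.
pip_recipe = {
    "name": "netsuite-to-shopify-orders",   # hypothetical recipe name
    "trigger": {"app": "NetSuite", "event": "order.created"},
    "action": {"app": "Shopify", "operation": "create_order"},
    "field_map": {                           # configured, not coded
        "tranId": "order_number",
        "entity.email": "customer_email",
        "total": "total_price",
    },
}

def apply_recipe(recipe, source_record):
    """Translate a source record into the target app's payload."""
    def get(path, record):
        # Walk dotted paths like "entity.email" into nested records.
        for part in path.split("."):
            record = record[part]
        return record
    return {target: get(src, source_record)
            for src, target in recipe["field_map"].items()}

order = {"tranId": "SO-1001", "entity": {"email": "a@b.com"}, "total": 49.0}
print(apply_recipe(pip_recipe, order))
```

Because the integration logic lives in configuration, a citizen integrator can adjust the field map without touching code, which is what makes the self-service delivery model viable.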
Drivers
■ Recently, the total number of vendors providing PIPs as a part of their offerings has steadily increased, primarily driven by customers' adoption of SaaS offerings. Examples include SAP S/4HANA integration with Salesforce, NetSuite integration with Shopify and many more.
■ PIP offerings are especially attractive to midsize organizations and LOBs of large
organizations that have limited IT skills and cannot handle overly complex
integration requirements. Many of these organizations see PIPs as a way to rapidly
deliver integrations without investing massively in new skills.
Obstacles
■ Some of the PIPs provided by vendors pose a risk of vendor lock-in, which can negatively affect your integration strategy.
■ Lack of flexibility in a PIP can lead to rigid integrations. This is perfectly acceptable for nondifferentiating use cases, where you want to integrate in the same way as everyone else, but you may have to change your working practices to match the PIPs.
User Recommendations
Leverage PIPs when trying to automate the integration of common and undifferentiating
business processes to reduce implementation costs and accelerate time to value.
Software engineering leaders responsible for integration should:
■ Empower new integration personas, such as ad hoc and citizen integrators, to take
on responsibility for integration in part by providing them with approved PIPs that
they can customize and deploy themselves.
■ Test your ability to implement a PIP efficiently and effectively by performing a proof
of concept (POC) for each PIP identified for implementation.
Sample Vendors
Boomi; Celigo; Informatica; Jitterbit; MuleSoft; Oracle; SAP; SnapLogic; Workato; Zapier
Choose the Best Integration Tool for Your Needs Based on the Three Basic Patterns of Integration
Choosing Application Integration Platform Technology
Toolkit: RFP Templates for Application Integration Platforms
Service Mesh
Analysis By: Anne Thomas
Maturity: Adolescent
Definition
Business Impact
■ This type of middleware, along with other management and security middleware,
helps provide a stable environment that supports “Day 2” operations of
containerized workloads.
Drivers
■ As microservice deployments scale and grow more complex, DevOps teams need
better ways to track operations, anticipate problems and trace errors. Service mesh
automatically instruments the services and feeds logs to visualization dashboards.
■ Many managed container systems now include a service mesh, inspiring DevOps
teams to use it. The hyperscale cloud vendors provide a service mesh that is also
integrated with their other cloud-native services.
■ Independent vendors, such as Buoyant, HashiCorp and Kong provide service meshes
that support multiple environments.
Obstacles
■ Service mesh technology is immature and complex, and most development teams don’t need it. It can be useful when deploying microservices in Kubernetes, but it’s never required.
■ Users are confused by the overlap in functionality among service meshes, ingress
controllers, API gateways and other API proxies. Management and interoperability
among these technologies hasn’t yet been addressed by the vendor community.
■ Many people associate service mesh exclusively with Istio, even though it isn’t the
most mature product in the market and has a reputation for complexity.
User Recommendations
■ Delay adoption of service mesh until your teams start building applications that will
get value from a mesh, such as applications deployed in managed container
systems with a large number of service-to-service (east-west) interactions.
■ Favor the service meshes that come integrated with your managed container system
unless you have a requirement to support a federated model.
■ Reduce cultural issues and turf wars by assigning service mesh ownership to a
cross-functional PlatformOps team that solicits input and collaborates with
networking, security and development teams.
Sample Vendors
Amazon Web Services; Buoyant; Decipher Technology Studios; Envoy; F5; Google;
HashiCorp; Istio; Kong; Microsoft; Red Hat; Solo.io; Tetrate; VMware
Microservices
Analysis By: Anne Thomas
Maturity: Adolescent
Definition
Business Impact
■ Microservices allow teams to change one part of an application, without the delay
and cost of changing the entire application.
■ When applied well, the architecture increases the independence of different parts of
a large application, enabling multiple development teams to work autonomously and
on their own schedules.
Obstacles
■ Microservices architecture and its benefits are often misunderstood, and many
software engineering teams struggle to deliver outcomes that meet senior
management expectations. For example, microservices should not be shared, and
they will not save you money.
■ If you aren’t trying to implement or improve your continuous delivery practice, you
will almost certainly be disappointed with the microservices cost-benefit equation.
■ Microservices architecture is complex. Developers must acquire new skills and adopt
new design patterns and practices to achieve its benefits.
User Recommendations
■ Set clear expectations by defining business goals and objectives for microservices architecture adoption, based on a realistic cost-benefit analysis of the architecture.
■ Use microservices architecture as a tool to help you attain those goals. Don’t view
microservices as a destination.
■ Improve outcomes by creating guidelines for where and when software engineering
teams should and should not use microservices architecture.
10 Ways Your Microservices Adoption Will Fail — and How to Avoid Them
Serverless fPaaS
Analysis By: Anne Thomas
fPaaS can deliver significant savings for certain types of workloads via its consumption-
based micropricing model. The programming model also enables software engineers to
rapidly deploy and configure new functions with little or no assistance from operations
teams.
Business Impact
■ An fPaaS can offer significant cost savings and virtually unlimited scalability for
applications with highly variable capacity requirements.
Drivers
■ Potential cost savings — The micropricing model charges for small increments of
compute time per invocation, which can be advantageous for small, spiky
workloads. The model is less favorable for large, consistent workloads.
■ Rapid solution delivery — The serverless model reduces the amount of work
developers and operations teams need to do to build, deploy and configure
solutions.
■ Integration with hyperscale xPaaS — The hyperscale vendors make it easy to use
their fPaaS with their other cloud-native xPaaS offerings.
■ Broad use-case support — fPaaS can support a broad spectrum of application use cases, from basic websites to complex analytical processes.
■ Embedding within other xPaaS — Some xPaaS vendors embed an fPaaS in their
platforms to host code components, such as rules and workflow routines. Examples
include the InRule decision management platform and the Zoho low-code
application platform. These systems hide the complexity of the fPaaS programming
and operating model.
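The micropricing trade-off described above can be made concrete with some back-of-the-envelope arithmetic. The rates below are illustrative placeholders, not actual vendor prices; the point is the shape of the comparison, not the exact figures.

```python
def fpaas_monthly_cost(invocations, ms_per_invocation, gb_memory,
                       price_per_million=0.20, price_per_gb_second=0.0000167):
    """Consumption-based cost: a per-request fee plus GB-seconds of compute.
    The default rates are hypothetical, for illustration only."""
    request_cost = invocations / 1_000_000 * price_per_million
    compute_cost = (invocations * (ms_per_invocation / 1000)
                    * gb_memory * price_per_gb_second)
    return request_cost + compute_cost

always_on_instance = 35.0  # hypothetical flat monthly cost of a small VM

# A small, spiky workload vs. a large, consistent one.
spiky = fpaas_monthly_cost(invocations=200_000,
                           ms_per_invocation=120, gb_memory=0.5)
steady = fpaas_monthly_cost(invocations=300_000_000,
                            ms_per_invocation=120, gb_memory=0.5)

print(f"spiky workload:  ${spiky:.2f}/month")   # far below the flat VM cost
print(f"steady workload: ${steady:.2f}/month")  # micropricing exceeds it
```

Under these assumptions the spiky workload costs well under a dollar a month while the steady one costs hundreds, which is why the model favors small, variable workloads and penalizes consistently high invocation rates.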
Obstacles
■ Cost savings don’t always materialize — fPaaS pricing isn’t favorable for
applications with consistently high invocation rates. Also, fPaaS-based applications
often require other xPaaS, such as API management, data management and
notifications.
User Recommendations
■ Minimize vendor lock-in by ensuring that your software engineering teams don’t limit their skills and practices to the proprietary features of a single fPaaS.
■ Evaluate whether fPaaS is a good fit for your applications and your teams’
development skills. Consider aPaaS or managed container solutions as alternatives.
■ Identify use cases where fPaaS offers a strategic benefit from a cost or agility
perspective. Consider fPaaS for microservices deployments and for applications
with highly variable or unpredictable capacity requirements.
■ Use fPaaS if it’s an integral part of another solution, such as edge computing,
decision management or low code.
Sample Vendors
Amazon Web Services; Cloudflare; Google; InRule; Microsoft; Netlify; Red Hat; Vercel; Zoho
Citizen integrator tools are typically cloud-hosted services providing very intuitive, no-code integration process development tools. In this way, expert business users with minimal IT skills can handle relatively simple application, data and process integration tasks (or “automations”) by themselves. Citizen integrator tools also provide a rich set of packaged integration processes (PIPs) that business users can rapidly configure and run with no assistance from integration specialists.
Citizen integrator tools enable business users with minimal IT skills to perform self-service integration work, thus increasing the organization’s overall delivery capacity. However, their ungoverned proliferation can lead to security and compliance risks and duplicated costs.
Business Impact
Citizen integrator tools enable business users to automate tasks currently integrated via
slow and error-prone manual methods. Integration specialists or ad hoc integrators
(developers, SaaS administrators), also use these tools to quickly sort out simple tasks
instead of using more powerful, but expensive and complex tools. Therefore, citizen
integrator tools contribute to improving organizations’ efficiency, productivity, agility and
innovation by reducing the relevant integration costs.
■ Citizen integrator tools may help deliver business value faster, reduce integration costs and support tactical or strategic digital initiatives. These outcomes are achieved by enabling rapid, pervasive integration by a wide range of employees within (and potentially also outside) the organization. However, they are available in many forms, which address different markets and needs:
■ PIPs — At times called “recipes,” these are prepackaged and configurable sets of integration flows, available stand-alone (at times for free), as embedded capabilities in SaaS or as add-ons to integration platforms. As such, buyers are typically application owners or SaaS administrators.
■ Integration software as a service (iSaaS) — Cloud services that enable users to implement brand new PIPs and to deploy, run and customize existing ones. They are typically sold to individual business users or work teams.
■ Integration platform as a service (iPaaS) — These are targeted at professional integrators, but several iPaaS provide an iSaaS-like development environment and/or make available collections of configurable PIPs atop their platform.
■ iSaaS tools have achieved notable traction in the consumer and SMB markets,
thanks to their very low cost of entry, intuitive user experience, low skills demand and
their rich set of PIPs. However, they have failed to penetrate other segments due to
their lack of enterprise capabilities and services (for example, consulting).
■ Business users are increasingly technology savvy and often driven by time-to-market pressures, especially in the post-pandemic era, which requires fast reaction to sudden changes in the business environment. This will increasingly urge them to adopt cloud citizen integrator tools, rather than wait for their IT colleagues to methodically perform integration work for them. However, this will create a few challenges: if not framed in a proper governance model, citizen integrator tools adoption by business users will inevitably lead to security, compliance, management and governance issues.
■ Although some central IT departments will adopt a positive attitude and proactively
address these challenges, others will try to stop business users from leveraging
these tools to prevent these risks. In addition, excessive expectations for ultra-easy,
super-fast integration and the simplistic nature of some citizen integrator tools may
still lead to disappointment, thus hindering their more widespread adoption.
User Recommendations
■ Engage with business teams to understand their automation needs and identify to
what extent citizen integrator tools can improve their responsiveness and
productivity.
■ Approve, certify and support a set of citizen integrator tools that meet these needs
and make them available to internal users in a self-service way. This will help to
prevent the uncontrolled proliferation of similar tools and maintain a degree of
centralized governance and monitoring.
■ When selecting citizen integrator tools, beware that some tools are rather
simplistic and lowest-common-denominator in nature, and that PIPs provided by
SaaS vendors may have been designed for a professional IT developer audience.
■ Give preference to providers that can support both “professional” and citizen
integrator requirements when selecting an iPaaS.
Quick Answer: When to Use (or Not Use) Embedded Integration Features Provided by Your
SaaS Vendor
Definition
ESP is a key enabler of continuous intelligence and related real-time aspects of digital
business. ESP’s data-in-motion architecture is a radical departure from conventional data-
at-rest approaches that historically dominated computing. ESP products have progressed
from niche innovation to proven technology and now reach into the early majority of
users. ESP will reach the Plateau of Productivity within several years and eventually be
adopted by multiple departments within every large company.
Business Impact
Drivers
■ ESP products have become widely available, in part because open-source ESP
technology has made it less expensive for more vendors to offer ESP. More than 40
ESP platforms or cloud ESP services are available. All software megavendors offer
at least one ESP product and numerous small-to-midsize specialists also compete in
this market.
■ ESP products have matured into stable, well-rounded products with many thousands
of applications (overall) in reliable production.
Obstacles
■ ESP platforms are overkill for most applications that process low or moderate
volumes of streaming data (e.g., under 1,000 events per second) or that do not
require fast response times (i.e., responses in under a minute).
■ Many architects and software engineers are still unfamiliar with the design
techniques and products that enable ESP on data in motion. They are more familiar
with processing data at rest in databases and other data stores, so they use those
techniques by default unless business requirements force them to use ESP.
User Recommendations
■ Use ESP products that are optimized for stream data integration to ingest, filter,
enrich, transform and store event streams in a file or database for later use.
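The ingest-filter-enrich-transform pattern named in this recommendation can be illustrated with plain Python generators. This is a minimal data-in-motion sketch only, not tied to any particular ESP product; real ESP platforms add windowing, state management, scale-out and persistence, and all field names here are hypothetical:

```python
# Illustrative stream data integration sketch: ingest, filter and enrich
# an event stream with Python generators, processing data in motion.

def ingest(raw_events):
    for e in raw_events:           # ingest: read events as they arrive
        yield e

def keep_errors(events):
    for e in events:               # filter: keep only error-level events
        if e["level"] == "ERROR":
            yield e

def enrich(events, region_by_host):
    for e in events:               # enrich: join with reference data
        yield {**e, "region": region_by_host.get(e["host"], "unknown")}

pipeline_input = [
    {"host": "web1", "level": "INFO",  "msg": "ok"},
    {"host": "web2", "level": "ERROR", "msg": "timeout"},
]
result = list(enrich(keep_errors(ingest(pipeline_input)),
                     {"web2": "eu-west"}))
print(result)  # one enriched ERROR event survives the pipeline
```

Because generators pull events one at a time, each event flows through the whole pipeline as it arrives, rather than being landed in a data store first and queried later.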
Sample Vendors
Amazon; Confluent; Google; IBM; Informatica; Microsoft; Oracle; SAS; Software AG; TIBCO
Software
Adopt Stream Data Integration to Meet Your Real-Time Data Integration and Analytics
Requirements
Market Share Analysis: Event Stream Processing (ESP) Platforms, Worldwide, 2020
Maturity: Adolescent
Cloud native architecture is the set of application architecture principles and design
patterns that enables applications to fully utilize the agility, scalability, resiliency, elasticity,
on-demand and economies of scale benefits provided by cloud computing. Cloud native
applications are architected to be latency-aware, instrumented, failure-aware, event-driven,
secure, parallelizable, automated and resource-consumption-aware (LIFESPAR).
Many organizations are moving to cloud native architecture as they shift their application
workloads to cloud native application platforms. Cloud native principles and patterns
enable applications to operate efficiently in a dynamic environment and make the most of
cloud benefits. Organizations that simply “lift and shift” legacy applications to cloud
native platforms often find that the applications perform poorly, consume excessive
resources and aren’t able to fail and recover gracefully.
Business Impact
■ Cloud native architecture ensures that applications can take full advantage of a
cloud platform’s capabilities to deliver agility, scalability and resilience.
■ It enables DevOps teams to more effectively use cloud self-service and automation
capabilities to support continuous delivery of new features and capabilities.
■ It can also improve system performance and business continuity, and it can lower
costs by optimizing resource utilization.
Drivers
■ Organizations want to make the most of cloud computing to support their digital
business initiatives, but they can’t fully exploit cloud platform benefits without cloud
native architecture.
■ Software engineering teams are adopting cloud native architecture to support cloud
native DevOps practices, including self-service and automated provisioning,
blue/green deployments, and canary deployments. A basic set of rules known as the
“twelve-factor app” ensures that applications can support these practices.
■ Without proper education, architects and developers can apply the principles poorly
and deliver applications that fail to deliver the expected benefits. This leads to
developer frustration in adopting the new patterns and practices.
User Recommendations
■ Use the twelve-factor app rules and the LIFESPAR architecture principles to build
cloud native applications.
■ Apply cloud native design principles as you modernize legacy applications that you
plan to port to a cloud platform to ensure that they can tolerate ephemeral or
unreliable infrastructure. Otherwise, they are likely to experience stability and
reliability issues.
■ Select an application platform that matches your cloud native architecture maturity
and priorities. Recognize that low-code platforms enable rapid development of
cloud-ready applications, but they won’t provide you with the full flexibility to apply
LIFESPAR and twelve-factor principles.
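The "failure-aware" principle in LIFESPAR means treating transient infrastructure errors as normal events to recover from, not exceptions to crash on. A minimal sketch of one common tactic, retry with exponential backoff (illustrative only, not a production library):

```python
import time

def call_with_retry(fn, attempts=4, base_delay=0.01):
    """Failure-aware call: retry a flaky dependency with exponential backoff.

    Cloud native code assumes dependencies fail transiently (ephemeral
    infrastructure, network blips) and recovers instead of crashing.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                    # give up at the end
            time.sleep(base_delay * (2 ** attempt))      # back off: 1x, 2x, 4x...

# Simulate a dependency that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retry(flaky)
print(result)  # succeeds on the third attempt
```

In a real cloud native application this pattern is usually combined with timeouts and circuit breakers so that retries cannot amplify an outage.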
Sample Vendors
How to Help Software Engineering Teams Modernize Their Application Architecture Skills
MASA
Analysis By: Anne Thomas
Definition
MASA describes the foundation for modern business application architecture. It is a
technical architecture that enables composability and multiexperience applications,
supports agility and rapid delivery of new capabilities, and facilitates incremental
modernization of legacy applications while providing mechanisms that ensure security
and robust operations.
Business Impact
The initial impetus to shift to MASA was to enable existing applications to add support for
mobile experiences. But MASA enables many other critical application capabilities, such
as:
■ Multiple experiences for different types of devices and modalities, such as voice,
touch, wearables and immersive technologies.
■ Distinct, optimized experiences for the different personas that use an application.
Obstacles
■ The biggest obstacle to MASA is the extensive technical debt embedded in existing
application portfolios.
■ The architecture enables iterative modernization, but it will take years (perhaps
decades) to modernize the entire application portfolio.
User Recommendations
■ Ensure that development teams have competence in user experience design, service-
oriented architecture, API design and domain-driven design.
■ Encapsulate data and functionality in existing applications and expose them via
APIs to enable composition.
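Encapsulating existing functionality behind an API can start as a simple facade layer that presents a clean, stable contract over an awkward legacy interface. A hypothetical sketch, with all class and field names invented for illustration:

```python
# Hypothetical sketch: wrap a legacy module behind a clean service API so
# new experiences (mobile, web, voice) compose it without touching legacy
# internals. Names are illustrative, not from any real system.

class LegacyOrderSystem:
    """Stand-in for an existing application with a cryptic interface."""
    def FETCH_ORD_REC(self, ord_id):
        return {"ORD_ID": ord_id, "STAT": "S", "AMT_CENTS": 1999}

class OrderService:
    """API facade: a stable, consumer-friendly contract over the legacy app."""
    _STATUS = {"S": "shipped", "P": "pending"}

    def __init__(self, legacy):
        self._legacy = legacy

    def get_order(self, order_id: str) -> dict:
        rec = self._legacy.FETCH_ORD_REC(order_id)
        return {                     # translate to a clean, versionable shape
            "id": rec["ORD_ID"],
            "status": self._STATUS[rec["STAT"]],
            "amount": rec["AMT_CENTS"] / 100,
        }

svc = OrderService(LegacyOrderSystem())
order = svc.get_order("A-42")
print(order)
```

The facade becomes the composition point: the legacy internals can later be rewritten or replaced without breaking the consumers of the API.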
Adopt a Mesh App and Service Architecture to Power Your Digital Business
Full life cycle API management involves the planning, design, implementation, testing,
publication, operation, consumption, versioning and retirement of APIs. API management
tools enable API ecosystems by publishing APIs, operating them securely and collecting
analytics for monitoring and business value reporting. These capabilities are typically
packaged as a combination of a developer portal, an API gateway, and API design,
development and testing tools, as well as policy management and analytics.
APIs are widely used and accepted as the primary choice to connect systems, applications
and things to build modern composable software architectures. The use of APIs as digital
products monetized directly or indirectly is also on the rise. Advancing digital
transformation initiatives across the world have emphasized the need for creation,
management, operations and security of APIs and made full life cycle API management
an essential foundational capability every organization must have.
Business Impact
Full life cycle API management provides the framework and tools necessary to manage
and govern APIs that are foundational elements of multiexperience applications,
composable architectures and key enablers of digital transformations. It enables the
creation of API products, which may be directly or indirectly monetized, while its security
features serve to protect organizations from the business impact of API breaches.
■ APIs that package data, services and insights are increasingly being treated as
products that are monetized (directly or indirectly) and enable platform business
models. Full life cycle API management provides the tooling to treat APIs as
products.
■ Digital transformation drives increased use of APIs, which in turn increases the
demand for API management.
■ APIs provide the foundational elements required for growth acceleration and
business resilience.
■ Developer mind share for APIs is growing. Newer approaches to event-based APIs,
design innovations and modeling approaches such as GraphQL, are driving interest,
experimentation and growth in full life cycle API management.
User Recommendations
■ Use full life cycle API management to power your API strategy that addresses both
technical and business requirements for APIs. Select offerings that have the ability
to address needs well beyond the first year.
■ Choose a functionally broad API management solution that supports modern API
trends, including microservices, multigateway and multicloud architectures. Ensure
that the chosen solution covers the entire API life cycle, not just the runtime or
operational aspects.
■ Use full life cycle API management to enable governance of all APIs (not just APIs
you produce), including third-party (private or public) APIs you consume.
■ Question full life cycle API management vendors on their support for automation of
API validation and other capabilities, as well as their support for a modern, low-
footprint API gateway.
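The runtime side of the capabilities above centers on the API gateway, which enforces policies before a request ever reaches the backend. A toy sketch of two such policies, API key validation and rate limiting (illustrative only; real gateways also handle authentication, quotas, transformation and analytics, and the key values here are invented):

```python
import time

# Toy API gateway sketch: validate the API key, then apply a simple
# sliding-window rate limit, before "forwarding" the request.

VALID_KEYS = {"key-123"}       # hypothetical provisioned consumer keys
RATE_LIMIT = 2                 # max requests per window, per key
WINDOW_SECONDS = 60
_hits = {}                     # key -> timestamps of recent requests

def gateway(api_key, now=None):
    now = time.time() if now is None else now
    if api_key not in VALID_KEYS:
        return 401, "invalid API key"
    window = [t for t in _hits.get(api_key, []) if now - t < WINDOW_SECONDS]
    if len(window) >= RATE_LIMIT:
        return 429, "rate limit exceeded"
    _hits[api_key] = window + [now]
    return 200, "forwarded to backend"

r1 = gateway("bad-key", now=0)   # rejected: unknown key
r2 = gateway("key-123", now=1)   # allowed
r3 = gateway("key-123", now=2)   # allowed
r4 = gateway("key-123", now=3)   # rejected: over the limit
print(r1, r2, r3, r4)
```

Keeping such policies in the gateway, rather than in each backend, is what lets an API management platform apply governance uniformly across all published APIs.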
Definition
Data integration tools enable the design and implementation of data access and delivery
capabilities that allow independently designed data structures to be leveraged together.
Data integration tools have matured from supporting traditional bulk/batch integration
scenarios to now supporting a combination of modern delivery styles (such as data
virtualization and stream data integration). Data integration tools are now expected to
support hybrid and multicloud integration scenarios.
Data integration tool suites are expected to deliver simpler interfaces to support less-
skilled roles like citizen integrators. Growing requirements for automated data integration
also require support for data fabric architectures.
Business Impact
Integration tools that support data fabric designs will increase the productivity of data
engineering and data science teams.
Drivers
■ Ability to execute data integration in a hyperconnected infrastructure (irrespective of
structure and origins) and the ability to automate transformations through
embedded ML capabilities are the most important drivers for organizations investing
in modern data integration tools.
■ Activities for self-service data access and data preparation by skilled data engineers,
citizen integrators and other non-IT roles spur requirements for new data integration
tools.
■ While traditional data integration tools have now matured in technical
metadata ingestion and analysis to support data integration activities, data
integration vendors still have room to introduce capabilities that harness
and leverage “active” metadata. Organizations must therefore investigate and adopt
data integration tools that can not only work with all forms of metadata, but also
share it bidirectionally with other data management tools (e.g., data quality tools) to
support data fabric architectures for automation.
■ Dynamic data fabric designs bring together physical infrastructure design, semantic
tiers, prebuilt services, APIs, microservices and integration processes to connect to
reusable integrated data. Vendors will continue to add data integration functionality
or acquire technology in these areas.
Obstacles
■ Tightly integrated data integration tool suites in which all components share
metadata (both active and passive), design environment, administration and data
quality support remain an area for improvement in the data integration tools market.
■ The popularity of data preparation (and other self-service ingestion tools), with the
sole focus on analytics use cases demonstrated, will create some confusion in the
market, slowing the advance of data integration tool suites.
■ The demand for a seamless integration platform that spans and combines multiple
data delivery styles (batch with data virtualization, for example), multiple
deployment options (hybrid and multicloud) and multiple personas currently exceeds
the capabilities of most offerings.
■ Most existing data integration tools are limited in their ability to collect and analyze
all forms of metadata to provide actionable insights to data engineering teams to
support automation.
User Recommendations
■ Assess your data integration capability needs to identify gaps in critical skill sets,
tools, techniques and architecture needed to position data integration as a strategic
discipline at the core of your data management strategy.
■ Review current data integration tools to determine if you are leveraging the
capabilities they offer. These may include the ability to deploy core elements
(including connectivity, transformation and movement) in a range of different data
delivery styles driven by common metadata, modeling, design and administration
environments.
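In the simplest bulk/batch delivery style, allowing "independently designed data structures to be leveraged together" means mapping each source schema onto a common target schema before delivery. A toy Python sketch, with all source systems, field names and reference data invented for illustration:

```python
# Toy batch data integration sketch: two independently designed sources
# mapped into one common target schema. Real tools add connectivity,
# shared metadata, orchestration and data quality support.

crm_rows = [{"cust_name": "Acme", "country_code": "DE"}]        # source 1
erp_rows = [{"CUSTOMER": "Beta Ltd", "CTRY": "Germany"}]        # source 2

COUNTRY_NAMES = {"DE": "Germany"}   # reference data used in transformation

def from_crm(row):
    return {"customer": row["cust_name"],
            "country": COUNTRY_NAMES[row["country_code"]]}

def from_erp(row):
    return {"customer": row["CUSTOMER"], "country": row["CTRY"]}

# Extract from both sources, transform to the target schema, load (here: a list).
integrated = [from_crm(r) for r in crm_rows] + [from_erp(r) for r in erp_rows]
print(integrated)
```

The same schema mappings could equally drive a data virtualization or streaming delivery style; the point of a tool suite with shared metadata is that the mapping is defined once and reused across delivery styles.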
Sample Vendors
Denodo; Fivetran; HVR; IBM; Informatica; Matillion; Precisely (Syncsort); Qlik; Talend;
TIBCO
Position Your Product to Benefit From the Rise of Intercloud Data Integration
LCAP
Analysis By: Paul Vincent, Jason Wong, Yefim Natis
Definition
LCAPs are one of the most popular types of development tools supporting the low-code
paradigm. They support general web and mobile application development with high
productivity while reducing the need for deep developer skills, and are mostly cloud-based.
They are widely adopted for developer personas ranging from enterprise software
developers to citizen developers. Over 200 vendors support a wide variety of business use
cases and industry specializations for digital business automation.
Drivers
■ The requirement for LCAPs to enable competitive SaaS and complete applications
has driven them to evolve toward multifunction capabilities. LCAPs overlap with the
business process automation/iBPMS market for workflow use cases, and with the
MXDP market for user-interface-driven use cases.
■ Current LCAP market share is heavily biased toward some very large hyperscalers
and a few successful independent vendors. However, Gartner commonly speaks with
clients that have multiple LCAP offerings deployed across the enterprise.
■ LCAPs have been implemented by the main SaaS platform vendors whose market
dominance and deep pockets could diminish the opportunities for a large number of
small LCAP vendors. However, this really means that for most enterprises the
question is not whether to adopt LCAP, but which LCAP(s) will they focus on and
invest in.
■ LCAPs, like most low-code technologies, trade productivity for vendor lock-in (of
both applications and developer skills). Vendor cancellations (like Google App
Maker) do occur.
■ Licensing models vary across vendors, can be updated regularly by vendors and
may not scale for new use cases. This can lead to vendor disillusionment.
User Recommendations
■ Evaluate application lock-ins due to the lack of portability or standards for low-code
models. This technical debt will accumulate fast, and means that vendor
relationships (and contracts) need to be considered strategic. Architecture needs
should be considered — for example whether to use the built-in database for all use
cases.
■ Ensure developers have access to the tools that make them productive, whether
LCAP or others, and are governed accordingly. Different developers with different
skill sets will vary in their successful adoption of LCAP.
■ Assess LCAP vendors. The large number of vendors implies possible future market
instability, although to date there have been few cases of LCAP retirements.
Sample Vendors
IoT Integration
Analysis By: Benoit Lheureux
Definition
IoT integration refers to the integration strategies and technologies needed to assemble
end-to-end IoT-enabled business solutions. IoT-specific integration challenges include
integrating IoT devices, operational technology (OT), digital twins and multiple IoT
platforms. More traditional IoT project integration challenges include integrating IoT
applications and digital twins with enterprise applications, data, business processes,
SaaS applications, B2B ecosystem partners and mobile apps.
Every IoT project requires significant integration work — some unique to IoT projects — to
enable IoT devices, IoT applications and various existing business applications to work
well together. In a recent survey, a majority (71%) of companies reported that they made
moderate to major investments in their integration strategy to support IoT projects (see
Survey Analysis: Companies Recognize Integration as a Key Competency for Internet of
Things Projects).
■ All software engineering leaders (SWELs) and application leaders responsible for IoT
projects must address IoT integration, and to successfully deliver IoT products, they
will either have to train or hire software engineers with unique-to-IoT integration
skills.
■ Special integration skills and tools are often needed for IoT projects (e.g., for OT
integration).
■ Extraordinary IoT project technology heterogeneity — e.g., multiple types of IoT
devices from many OEMs, brand-new and decades-old products and equipment,
diverse IoT device data formats, and diverse application systems to be integrated.
■ A proliferating desire to ingest and analyze IoT data to support data-driven business
decisions.
■ IoT integration is a key challenge for IoT projects. A Gartner survey found that
companies can’t rely on a “one-size-fits-all” approach to IoT device integration, and
had to integrate their IoT projects with many different types of IT endpoints (see
Survey Analysis: Companies Recognize Integration as a Key Competency for Internet
of Things Projects).
■ To fully realize the benefits of IoT, companies will eventually need to integrate new
IoT technologies with legacy (i.e., pre-IoT) business applications and software using
new, enhanced workflow (see How Can Organizations Integrate IoT Digital Twins
and Enterprise Applications?).
■ Complex, distributed IoT projects often involve a mix of IoT devices, IoT platforms,
business applications, mobile apps, cloud services and (often) external business
partners. Such complex IT projects are needed to enable new IoT-enabled outcomes
— e.g., self-diagnosing and self-repairing assets and equipment, “lights-out-
manufacturing,” or product-as-a-service.
■ Performance and scalability — that is, potentially large numbers of IoT devices,
products and equipment with high API throughputs and large volumes of time series
data must be integrated.
■ Few engineers have IoT software development skills, and even fewer have IoT
integration skills.
■ TSPs investing in IoT products (e.g., IoT platforms) tend to focus more on IoT data,
applications and analytics, rather than on integration, which creates integration
functionality gaps.
■ A functionality gap in many general-purpose integration tools (e.g., ESB, iPaaS) for
many of the IoT-specific integration needs of IoT projects (e.g., IoT devices, OT
equipment or LOB applications such as MES). While many integration tools support
modern IoT device protocols (e.g., APIs, MQTT and OPC-UA), most cannot connect to
older, “brownfield” OT equipment.
User Recommendations
■ Clearly identify what IoT integration functionality is needed for IoT projects (see
Survey Analysis: Companies Recognize Integration as a Key Competency for Internet
of Things Projects).
■ Avoid simplistic approaches to IoT integration (e.g., “APIs = integration”) that cannot,
alone, address all your needs (e.g., does not also address functionality such as IoT
data translation, OT integration).
■ Confirm the availability of required IoT integration capabilities for any IoT product or
service (see Critical Capabilities for Industrial IoT Platforms).
■ Modernize your B2B integration strategy (either via EDI or APIs — see Use APIs to
Modernize EDI for B2B Ecosystem Integration) to enable IoT project integration with
business partners.
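The "IoT data translation" functionality mentioned above typically means normalizing telemetry from heterogeneous devices (different vendors, units and field names) into one canonical reading. A minimal sketch, with all vendor and field names hypothetical:

```python
# Illustrative IoT data translation sketch: normalize telemetry payloads
# from two hypothetical device vendors into one canonical schema, so
# downstream analytics and applications see a single format.

def normalize(vendor, payload):
    if vendor == "vendor_a":        # reports Celsius under its own field names
        return {"device": payload["id"], "temp_c": payload["temperature_c"]}
    if vendor == "vendor_b":        # legacy device reporting Fahrenheit
        return {"device": payload["dev"],
                "temp_c": round((payload["tempF"] - 32) * 5 / 9, 2)}
    raise ValueError(f"unknown vendor: {vendor}")

readings = [
    normalize("vendor_a", {"id": "a1", "temperature_c": 21.5}),
    normalize("vendor_b", {"dev": "b7", "tempF": 70.7}),
]
print(readings)  # both devices end up in the same canonical shape
```

In a real deployment this translation layer usually sits at the edge or in the IoT platform, so that each new device type only requires adding one mapping rather than changing every consuming application.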
Sample Vendors
Alleantia; Dell Boomi; Informatica; Microsoft; Reekoh; Salesforce (MuleSoft); Sky Republic;
SnapLogic; Software AG; Solace
Use the IoT Platform Solution Reference Model to Help Design Your End-to-End IoT
Business Solutions
Definition
Organizations’ accelerating shift to the cloud is boosting the iPaaS market (up 38% in
2020) and has made it the biggest segment ($3.7 billion) of the integration platform
technology market. Its functional breadth makes it the natural alternative to classic
integration software (ESB, ETL and B2B gateway software) for large organizations. But,
unlike the classic software, iPaaS also attracts midsize organizations and lines of
business, due to its ease of access, versatility and low initial cost.
Business Impact
Drivers
■ The main goal of iPaaS providers now is to maximize opportunities to upsell and
cross-sell to their vast installed base. Therefore, they are evolving their offerings into
enterprise-class suites that address a wide range of hybrid, multicloud scenarios.
Hence, large and global organizations now position iPaaS as a strategic option to
complement, but increasingly also to replace, classic integration platform software,
which drives more widespread adoption.
■ A growing number of SaaS providers “embed” in their applications their own iPaaS,
or one from a third party, which they typically extend with a rich portfolio of
packaged integration processes (PIPs). This makes embedded iPaaS offerings
attractive to organizations that need to quickly address SaaS application integration.
■ Providers will keep investing to improve developers’ productivity, reduce time to value
and shorten the learning curve. The goal is to further expand their potential audience,
to include business users. Hence, providers’ R&D efforts will focus on using AI,
machine learning and natural language processing to assist development and
operation, enrich PIP portfolios, and enable CI/CD and DevOps to entice professional
developers.
Obstacles
■ The market’s extreme fragmentation (over 150 providers and counting). This makes
it hard for user organizations to select the best-fit iPaaS for their needs, could
generate a proliferation of diverse, stand-alone and embedded iPaaS offerings, and
risks fragmenting service providers’ investments in skills building.
■ The API rhetoric of seamless “plug and play” integration, the confusion among less
technically savvy users about the differences between iPaaS, RPA and API
management platforms, and the growing trend for code-based integration
encouraged by OSS integration frameworks. These factors could reduce iPaaS’s
appeal, at least to large organizations.
User Recommendations
■ An integration platform for midsize organizations moving to the cloud and for
“greenfield” integration initiatives.
■ A potential replacement for classic integration platforms that are obsolete or cannot
support their changing requirements.
Sample Vendors
Boomi; IBM; Informatica; Jitterbit; Microsoft; MuleSoft; Oracle; SAP; TIBCO Software;
Workato
Hype Cycle for Application and Integration Infrastructure, 2019 - 1 August 2019
Hype Cycle for Application and Integration Infrastructure, 2018 - 26 July 2018
Hype Cycle for Application Infrastructure and Integration, 2017 - 7 August 2017
Create Your Own Hype Cycle With Gartner’s Hype Cycle Builder
[Priority Matrix (column layout not recoverable from extraction): high-benefit
innovations listed include Data Integration Tools, Data Hub iPaaS, Digital Integration
Hub, Full Life Cycle API Management, Event Broker PaaS, Event-Driven Architecture,
Integration Strategy Empowerment Team, HIP, IoT Integration, LCAP, iPaaS, MASA,
Microservices and Packaged Integration Processes.]
Phase Definition
Peak of Inflated Expectations During this phase of overenthusiasm and unrealistic projections, a flurry of
well-publicized activity by technology leaders results in some successes, but
more failures, as the innovation is pushed to its limits. The only enterprises
making money are conference organizers and content publishers.
Trough of Disillusionment Because the innovation does not live up to its overinflated expectations, it
rapidly becomes unfashionable. Media interest wanes, except for a few
cautionary tales.
Slope of Enlightenment Focused experimentation and solid hard work by an increasingly diverse
range of organizations lead to a true understanding of the innovation’s
applicability, risks and benefits. Commercial off-the-shelf methodologies and
tools ease the development process.
Plateau of Productivity The real-world benefits of the innovation are demonstrated and accepted.
Tools and methodologies are increasingly stable as they enter their second
and third generations. Growing numbers of organizations feel comfortable
with the reduced level of risk; the rapid growth phase of adoption begins.
Approximately 20% of the technology’s target audience has adopted or is
adopting the technology as it enters this phase.
Years to Mainstream Adoption The time required for the innovation to reach the Plateau of Productivity.
Transformational Enables new ways of doing business across industries that will result in
major shifts in industry dynamics
High Enables new ways of performing horizontal or vertical processes that will
result in significantly increased revenue or cost savings for an enterprise
Low Slightly improves processes (for example, improved user experience) that will
be difficult to translate into increased revenue or cost savings