Enterprise Architecture Blueprint For Utilities

A brief description of the Utilities companies enterprise architecture driven by "Utilities 4.0" Concept.

Uploaded by

Dariusz Kurowski

ENTERPRISE ARCHITECTURE BLUEPRINT FOR UTILITIES – PART 1:
Technology, Dev Tales, News
“The Good, The Bad and the Ugly”
A few days ago, I was co-hosting a webinar on “Piecing together the Enterprise Architecture Puzzle for Utilities”. During our virtual roundtable, our guests, including Klaus Wagner (Enterprise Architect, Netze-BW / EnBW) and Rinse Veltmann (Director Solutions, Energyworx), discussed an enterprise architecture blueprint for the Utility 4.0 future and highlighted specific related topics including:
• Challenges and shortcomings of the “Accidental IT” that is quite often the as-is architecture for many utilities.
• Strategies such as “Make vs Buy”, “Cloud vs. Data Center”,
“Best-of-Breed vs. 1-Stop-Shop” or “Pull IT vs. Push IT”.
• Principles and concepts like Event Driven Architecture (EDA),
API-driven development, domain driven design (DDD), cloud
native, microservices and data mesh.
Our lively debate, in which the panelists shared their views on good and bad architecture, raised several questions: What characterizes a good or bad architecture? What does “good” and “bad” mean? Do “good”, “bad” and even “ugly” mean the same for all of us? And can we measure it? See it? Feel it? Or otherwise sense it?
So let me start by sharing some of the typical indicators and
symptoms of a bad architecture from my point of view:
• Monoliths determine the processes, routines or services, meaning the utility has to adapt to the systems’ capabilities.
• Siloed applications offer no or minimal standardized
integration capabilities and make entirely digitized processes
impossible.
• Core systems are heavily customized and therefore expensive
to update or upgrade.
• Custom-built or developed solutions serving “commodity” functionality (e.g. CRM, ERP, etc.).
• Critical solutions need special knowledge or expertise to operate, manage or adapt, creating vendor lock-in.
• Black-box applications nobody (internally or externally) wants to touch anymore, as no one knows how they work.
• Adaptations (e.g. some new fields in an API) take several months, are costly and therefore slow down innovation.
• Applications provide overlapping or competing functionality, leading to inconsistent decisions, processes or data.
• Systems have been thickened, meaning new requirements are always realized in the same application by the same vendor, even if the functionality would have belonged in the domain of another.
• Underlying infrastructure (e.g. storage, messaging, integration) cannot be scaled elastically or requires substantial investments to meet the upcoming requirements of handling big energy data in near real time.
• The IT landscape is built on a huge variety of (partly aging) technologies, different (maybe antiquated) architecture styles or (old-fashioned) proprietary integrations, creating enormous operational complexity and fragility.
• Each solution or system uses different mechanisms to manage security or data privacy and its own logging & tracing framework.
READ MORE IN OUR E-ZINE
At Greenbird, we work to simplify the complexity of big data integration for utilities to kickstart their digital transformation. We have put together an e-zine with articles which speak to these questions. Download the digital magazine to get more information on enterprise architecture, the build vs buy debate, understanding the digital integration journey and how to simplify the IT/OT relationship.

Working with utilities, we sometimes meet an IT landscape that has evolved (or even mutated) over many years with a lack of architectural management or strategy. That is what I call an “accidental IT” or “accidental architecture”.
A system of siloed systems:
• Defined by business units’ specific needs and preferences for a given solution or vendor,
e.g. a cloud and web-based CRM for customer service, but an on-premises, classical client-server GIS implementation
• Often centered around some big monolithic applications,
e.g. CC&B (Customer Care & Billing)
• With many manual, batch-style or file-based integrations,
e.g. file-based meter reading exports from HES to MDM and from MDM to CC&B,
e.g. manual synchronization of metering point info in MDM and GIS
• With unclear or random “separation of concerns”,
e.g. VEE for profile-based meters done in the CC&B and VEE for smart meters done in MDM,
e.g. asset management for meters / metering points done in CC&B, asset management for grid infrastructure done in GIS
There are many more symptoms and examples of “accidental IT” or “The Bad and the Ugly”. But it is more important to think about how to fix and handle the shortcomings.
ENTERPRISE ARCHITECTURE BLUEPRINT FOR UTILITIES – PART 2:
Technology, Dev Tales, News
“Bad Choices make Good Stories.”
There is a famous saying: “Bad choices make good stories”. What might be true in life seems wrong for utilities’ IT or enterprise architecture.
Part 1 of our Enterprise Architecture Blueprint for Utilities – “The
Good, the Bad and the Ugly”, discusses how bad decisions lead
to huge technical debt, an Accidental IT or bad architecture. In
part 2 of our EA series, I will focus on the good choices that
make good stories and identify KPIs for a good architecture.
Good Architecture
Whenever and wherever I discuss architecture with peers,
clients, or partners in the sector, it only takes a few seconds
before we philosophize about loose coupling, ease of integration,
flexibility, adaptability, interoperability, elastic scalability, or cost
efficiency. It always reminds me of the good old times when almost every tender had a must-have requirement: “The solution shall have a good user interface”. I am sure you remember those endless discussions about what is “good”. So, let us try to be more concrete by defining what we mean by good architecture.
If you were the pilot or captain of an “enterprise architecture” airplane or vessel – instead of flying blind or navigating in the dark – what numbers, figures, instruments or KPIs would you need to monitor in your cockpit?
Cost to Serve: Instead of Total-Cost-of-Ownership (TCO), I’d
propose Cost-to-Serve as KPI number one. Marc Andreessen
famously said that “Software is eating the world.” As utilities move more and more into analytics and a data-driven future, and as a new generation of consumers expects an entirely digitized, on-demand user experience, IT and data science have transitioned from being a supporting function to being core business for energy companies. Instead of reducing the IT budget or cost, utilities will even increase their TCO. Therefore, the KPI to monitor should be the Cost to Serve for all grid and business operations.
Speed of Delivery: Speed is the new black. The future utility will
have to adapt quickly to changing market regulations, changes in
the energy system, new requirements and opportunities. Some of
our clients have told us stories where changing a few fields in an interface or adding a new service to an API took 6 months or
even more. Speed of delivery would be my second KPI. I would
even go as far as to monitor the average cost per change, too.
Media / Technology Breaks in the Value Chain: Everybody talks
about Digital Transformation or the Digital Utility. Everybody
talks about the importance of automating and digitizing business
processes and the user experience / interaction. Many utilities struggle with media discontinuities that lead to broken processes with multiple manual steps in the value chain. The number of these manual steps or “media breaks” (commonly known in German as Medienbrüche), where data must be entered several times or manually transferred between different systems or technologies, would be another one of my KPIs.
Dirty Data Cleansing Jobs: Bad or dirty data today is already a
huge challenge for many utilities. Moving fast into the new
digital, analytics-driven future leveraging the power of AI and
Machine Learning requires a high degree of data quality. As an
initial KPI, I’d count the number of business or case management issues caused by dirty data that the organization has to handle and fix.
End of Life Applications or Technologies: Many utilities have built
a huge technical debt over the last decades. Many applications
are running on aging technologies or antique platforms. Several
applications have already reached their “End of Life” and vendors
continue to announce they will discontinue support for those
applications within the next two to three years. Monitoring those “End of Life” and “Soon-End-of-Life” systems makes total sense to me.
“It’s not Possible”: If I were an executive or manager of a utility, I’d count the number of “It’s not possible” responses I receive from the IT department on my requests, ideas or requirements. Examples: it’s not possible due to missing integrations or interfaces; it’s not possible because it’s hardcoded somewhere deep in the code; it’s not possible as the data is not correct; we cannot handle the data volumes anyway; it’s not possible because the person who built the system has retired and is traveling around the world. Monitoring the number of “It’s not possible” instances might be a good indicator to track your architecture’s evolution.
I could continue my list of KPIs with
• Number of security issues / breaches
• Capacity of resources with the required competency / skill set to operate and manage the IT landscape
• Amount of overlapping functionality in different applications
• Copies or replicas of data
It would be great to have a discussion on the EA Cockpit for utilities, and possibly to create a set of standard KPIs and benchmarks for the industry.
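As a toy illustration, the KPIs above could be gathered into a single cockpit object. The field names and the scoring formula below are purely hypothetical, a sketch of what an “EA cockpit” data structure might look like, not an industry benchmark:

```python
from dataclasses import dataclass

@dataclass
class EACockpit:
    """Hypothetical EA cockpit tracking the KPIs proposed above."""
    cost_to_serve_eur: float = 0.0     # Cost to Serve for grid & business operations
    avg_delivery_days: float = 0.0     # Speed of Delivery: average days per change
    media_breaks: int = 0              # manual steps in the value chain
    dirty_data_incidents: int = 0      # issues caused by bad data
    end_of_life_systems: int = 0       # EoL / soon-EoL applications
    not_possible_responses: int = 0    # "It's not possible" count

    def health_score(self) -> float:
        """Toy scoring: fewer breaks, incidents and EoL systems give a higher score."""
        penalties = (self.media_breaks + self.dirty_data_incidents
                     + self.end_of_life_systems + self.not_possible_responses)
        return max(0.0, 100.0 - penalties - self.avg_delivery_days)

cockpit = EACockpit(avg_delivery_days=30, media_breaks=12,
                    dirty_data_incidents=8, end_of_life_systems=3,
                    not_possible_responses=5)
print(round(cockpit.health_score(), 1))  # prints 42.0
```

A real cockpit would of course pull these figures from CMDB, ticketing and finance systems rather than hold them as literals.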
ENTERPRISE ARCHITECTURE BLUEPRINT FOR UTILITIES – PART 3:
Technology, Dev Tales, News

“I am a Man of Principle”
“I am a man of fixed and unbending principles, the first of which
is to be flexible at all times” – a quote from former US Senator
Everett McKinley Dirksen. Most probably he was thinking about politics in his role as senator, but the statement could equally have come from an enterprise architect, too.
In part 1 – “The Good, the Bad and the Ugly” of our Enterprise
Architecture Blueprint, I focused on technical debt utilities have
built up over the years due to an “Accidental Architecture”. In
part 2 – “Bad choices make good stories,” I identified some KPIs
for a good architecture. And now, in this post, it’s time to move
on and talk about architecture principles.
Architecture Principles
According to the Open Group Architecture Framework better
known as TOGAF, principles are general rules and guidelines,
intended to be enduring and seldom amended, that inform and
support the way in which an organization sets about fulfilling its
mission. TOGAF details: Architecture Principles are a set of
principles that relate to architecture work. They reflect a level of
consensus across the enterprise, and embody the spirit and
thinking of existing enterprise principles. Architecture Principles
govern the architecture process, affecting the development,
maintenance, and use of the Enterprise Architecture.
TOGAF lays out a set of 21 principles, from which some experts would prioritize eight as the ones you need to know. As a man of principles, let me introduce some of the most important ones that, at least in my opinion, are applicable to the future digital utility.
Interoperability and Integration-friendly: The digital utility of the
future requires entirely digitized value chains and automated
processes. To achieve this, all applications and systems should
be built with integration and interoperability in mind and expose
documented interfaces using modern integration methods.
Standardization of Data and Meta Data: The Common Information Model (CIM) is emerging as a de facto standard for the power industry and for transmission and distribution system operators. In addition, we see more and more vendors leaning towards CIM as their de facto standard. Implementing a unified information model inspired by CIM creates measurable benefits for utilities.
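As a hedged sketch of what such a unified model can buy you, the snippet below maps a hypothetical vendor payload onto a CIM-inspired reading. The class and field names are illustrative only, not the actual IEC 61968-9 schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MeterReading:
    """CIM-inspired unified reading (illustrative fields, not the full CIM schema)."""
    usage_point_mrid: str   # CIM-style master resource identifier
    reading_type: str       # e.g. active energy import
    timestamp: datetime
    value_kwh: float

def from_vendor_a(payload: dict) -> MeterReading:
    """Adapter for a hypothetical vendor format using epoch seconds and Wh values."""
    return MeterReading(
        usage_point_mrid=payload["meter_id"],
        reading_type="active-energy-import",
        timestamp=datetime.fromtimestamp(payload["ts"], tz=timezone.utc),
        value_kwh=payload["wh"] / 1000.0,   # normalize Wh to kWh
    )

reading = from_vendor_a({"meter_id": "UP-001", "ts": 1700000000, "wh": 1530})
print(reading.value_kwh)  # prints 1.53
```

The point is that every downstream consumer sees one model, while vendor quirks stay isolated in small adapters.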
Loose coupling: Utilities should get rid of technical debt as fast as possible. To do so, dependencies on specific applications, vendors or proprietary platform technologies must be reduced or minimized. Loose coupling between systems and services increases exchangeability and flexibility.
Modularity: The future utility must be able to quickly adapt to
new opportunities, regulations, or market changes. As many
utilities still operate huge monolithic applications, they have to compromise, often waiting several months for minor changes or investing hundreds of thousands of euros in alterations, additional functionality, or extra fields in a user interface. A modular application landscape creates the foundation for future success.
Event driven architecture: As more and more smart meters and
sensors are deployed in the electrical network and at the grid
edge, utilities must be able to handle a tsunami of big energy
data in real time. The traditional process-driven integration approach will fall short: orchestrations get too complex; error and exception handling gets extremely complicated; performance and message throughput will lag. Integration will be a nightmare for development, operations, and management alike. Implementing the concept of an event-driven architecture is key to building a resilient and reactive integration system: a system designed and built to scale as needs change and demands require.
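A minimal sketch of the publish/subscribe decoupling behind an event-driven architecture: an in-memory stand-in for a real broker such as Kafka, with invented topic names and payloads. Producers and consumers never know about each other, which is the whole point:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory event bus illustrating publish/subscribe decoupling."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The producer never calls consumers directly, so consumers can be
        # added or replaced without touching the producing system.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("meter.reading", received.append)   # e.g. an MDM consumer
bus.subscribe("meter.reading", lambda e: None)    # e.g. an analytics consumer
bus.publish("meter.reading", {"meter": "M-42", "kwh": 3.2})
print(len(received))  # prints 1
```

In production this role is played by a durable, partitioned broker, but the coupling model is the same.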
API driven development: The entire industry is moving towards a
distributed energy system forcing utilities to operate a
distributed and modular IT and data architecture. API driven or
API-first development fosters a modern IT landscape with
holistically digitized and automated integrations.
Cloud native and containerized applications: Many utilities are on
their cloud journey, most having set off with a single-cloud strategy. However, as every cloud provider (Microsoft Azure, Google Cloud Platform, Amazon AWS, or others) has its own strengths and drawbacks, utilities are recognizing the benefits of embracing a multi-cloud strategy with cloud-native and containerized applications: leveraging multiple cloud or computing services (public cloud, private cloud), but in an integrated and distributed cloud-mesh architecture.
Data Mesh: The future utility is data and analytics driven. Many
energy companies have recognized the value of data and
identified the potential benefits of implementing a data lake. To avoid the drawbacks typically associated with an “Enterprise Data Warehouse” approach (sizeable investments of money and resources to build and manage a centralized, monolithic data warehouse platform), utilities should instead focus on implementing a distributed data lake, also known as a data mesh.
I am sure there are several other useful architecture principles. So, I encourage getting a discussion started to establish a set of best-practice architecture principles for utilities.
ENTERPRISE ARCHITECTURE BLUEPRINT FOR UTILITIES – PART 4:
Technology, Dev Tales, News

“Speed is the new currency of business”
“Speed is the new currency of business,” said Marc Benioff,
Founder and CEO of Salesforce, during a panel discussion on
innovation at the World Economic Forum in Davos in 2016. We
could now start a huge discussion to question the meaning,
purpose, and true impact of an event such as Davos. Let us
rather stick to our field of expertise where we can influence and
make a difference: Enterprise Architecture for Utilities.
The first three posts of my Enterprise Architecture Blueprint
series focused on technical debt, accidental IT, KPIs for a good
architecture and proposed some important architecture
principles. In this post I will move on to discussing IT strategy,
starting with the concept of a “Pace Layered Architecture”.
Pace Layered Architecture
With all ongoing changes shaking up the energy industry, I think
we all can agree that utilities must modernize their IT to keep up
with the pace of disruption and innovation in the market. In an
ideal world, utilities would just get rid of all legacy systems, throw out all the old-fashioned monolithic applications and start from scratch with a greenfield approach to building a modern, containerized, cloud-native, API-driven IT landscape. Yeah right. Reality bites.
Utilities operate critical infrastructure and provide critical services to people and society as a whole. They must, at least from an IT perspective, balance security, resilience, reliability and stability with innovation, agility, and speed of delivery.
In 2012, research and analyst company, Gartner, coined “Pace
Layered Architecture” as a concept for a multi-speed IT.
Gartner defines its approach as follows: A pace-layered
application strategy is a methodology for categorizing, selecting,
managing, and governing applications to support business
change, differentiation, and innovation.

Source: Gartner

For that, Gartner introduces different layers of software applications, where each layer shares common characteristics related to speed of adaptation and change:
Systems of Record: Packaged applications, typically Commercial Off-The-Shelf (COTS) products or legacy homegrown systems, supporting your organization’s “commodity” business capabilities, handling core transaction processing, or managing your reference or master data.
As your core processes and capabilities do not change that often, are common or “commodity” to most utilities, and in addition are often subject to regulatory requirements, the pace of change is quite low.
Systems of Differentiation: Applications that support or enable
the unique business processes for your organization or utility-
specific capabilities.
Those business processes do not change daily. But still, you want to streamline, digitize, automate, or optimize processes or business functions occasionally. You might introduce or leverage new technologies, too. So the pace of change is neither the highest nor the lowest.
Systems of Innovation: New solutions or services that are built
on demand to address new business requirements or
opportunities. With a lean startup mindset, solutions in this layer
usually start with an MVP (Minimum Viable Product) or PoV
(Proof of Value) that is iteratively improved based on users’
feedback towards a production ready service.
This layer is the fast track or fast line for innovation with a high
rate of change, high pace of delivery and short project lifecycles.
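The three layers above can be sketched as a simple portfolio classification. The release cadences and example applications below are my own assumptions for illustration, not Gartner’s definitions:

```python
from enum import Enum

class PaceLayer(Enum):
    """Gartner's three pace layers, each with an illustrative release cadence."""
    SYSTEM_OF_RECORD = "yearly"
    SYSTEM_OF_DIFFERENTIATION = "quarterly"
    SYSTEM_OF_INNOVATION = "weekly"

# Hypothetical classification of a utility's application portfolio.
portfolio = {
    "CC&B billing": PaceLayer.SYSTEM_OF_RECORD,
    "Outage notification portal": PaceLayer.SYSTEM_OF_DIFFERENTIATION,
    "EV-charging pilot": PaceLayer.SYSTEM_OF_INNOVATION,
}

for app, layer in portfolio.items():
    # Governance, funding and release processes follow the layer, not the app.
    print(f"{app}: govern at a {layer.value} cadence")
```

The value of making the classification explicit is that change management, budgeting and tooling can then be chosen per layer instead of one-size-fits-all.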

Source: Gartner

The concept of a pace layered architecture looks like a perfect answer to utilities’ challenge of balancing resilience, reliability and stability on the one side and the required agility on the other side.
Considering all the enormous changes in the industry with new
players, changing regulations, disruptive analytics services,
emerging technologies, etc., I’d personally prefer or propose a
slight modification to establish a pace layered architecture for
the digital utility.

The “Pace layered Architecture for Utility 4.0” adds some specific
layers with defined characteristics:
System of Connectivity: A modern data integration layer based
on the concept of an Event Driven Architecture (EDA) and
reactive microservices handling all data flows and data exchange
between all the Systems of Record and between the layers to
establish the entirely digitized operational and business
processes that define the Utility 4.0.
System of Intelligence: A big energy data management and
analytics infrastructure layer used to deliver the cognitive
capabilities for the Utility 4.0: Smart grid edge analytics to create
insight and transparency in the network, intelligent grid
operations, asset health, predictive maintenance, demand / load
forecasting, EV charging optimizations, identification of
abnormalities and more, are just some examples of the cognitive
use cases.
System of Engagement: A digital user experience layer providing the services and digital excellence that digital-native consumers expect and that utilities’ employees need to handle the huge amounts of data.
I feel fortunate to have been given the opportunity to implement
this architecture model as the foundation for a big
transformation program at one of the “Big Three” utilities in
Germany.
Introducing the “System of Connectivity” highlighted the value of
a modern data management approach and set the spotlight on
the importance of the “dirty job of data integration”. Placing the
“System of Intelligence” at the center of your architecture fosters the cognitive capabilities the future utility needs to achieve its sustainable development goals and business objectives. Plus, the “System of Engagement” improves the digital user experience for the end users, meaning for you and me as consumers or prosumers. But maybe even more importantly, as more and more data is ingested in real time to drive faster insights and decision-making, this applies to utility employees too.
BUILDING BLOCKS OF AN ADVANCED METERING INFRASTRUCTURE – PART 1:
The Headend System
A recent study of AMI professionals at leading DSOs revealed
that regardless of the market set up or regulatory position, all
utilities are currently investigating the business case for
deploying modern smart meters to modernize the low voltage
grid and integrate the last mile of the smart grid. Driving the
rollout of next generation smart meters to improve customer
service and support advanced analytics in the smart grid at
minimal expenditure forms the backbone of Advanced Metering
Infrastructure in today’s context. But what makes up this
infrastructure? And which building blocks do utilities need to
understand and consider?
As industry specialists in utility data integration and enterprise
architecture, especially when it comes to Smart Metering, we
advise and consult utilities globally and support application and
system vendors implementing projects impacting millions of
consumers. As a result of this experience, we see a variety of
approaches to central architectural decisions that utilities are
taking. In this article we explore key factors and provide our
perspectives related to infrastructure cornerstones in the
advanced smart metering value chain, namely the headend
system. So what are the different types of headend systems on
the market today, and what constitutes a good and future-oriented integration architecture to support these systems?
The role of a Headend System
The role of the headend system or systems (HES) in the Smart Metering architecture is typically two-fold. The main objective of the HES is to acquire meter data automatically, avoiding any human intervention, and to monitor parameters acquired from meters. In this role, the HES manages the connectivity and schedules the collection of data from the metering infrastructure, including both the meter devices and the communication. However, the HES also enables secure access to meters for configuration, software updates and ad-hoc requests.

Figure 1: The traditional Smart Metering architecture


To avoid working in multiple operational systems, you would
ideally want a HES that is able to expose most of its services
through an application programming interface (API), meaning that
most functions can be executed from a central system like Meter
Data Management (MDM) or Customer Information System (CIS).
As an example, an authorized operator should in theory be able
to remotely disconnect a Smart Meter through the user interface
of the HES. However, in most cases this disconnection use case
would be initiated by the system that manages all customer
information and interactions (CIS).
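A small sketch of this split of responsibilities, with invented class and method names: the CIS owns the use case and only calls the disconnect service that the HES exposes through its API:

```python
class HeadendSystem:
    """Hypothetical HES exposing its remote-disconnect service through an API."""
    def __init__(self):
        # Connection state per meter; a real HES would command the meter network.
        self.connected = {"M-100": True, "M-101": True}

    def remote_disconnect(self, meter_id: str, operator: str) -> bool:
        if meter_id not in self.connected:
            return False
        self.connected[meter_id] = False
        return True

class CustomerInformationSystem:
    """The CIS initiates the use case; the HES only executes it."""
    def __init__(self, hes: HeadendSystem):
        self.hes = hes

    def handle_move_out(self, meter_id: str) -> str:
        # Business logic (customer move-out) lives here, not in the HES.
        ok = self.hes.remote_disconnect(meter_id, operator="cis-batch")
        return "disconnected" if ok else "failed"

cis = CustomerInformationSystem(HeadendSystem())
print(cis.handle_move_out("M-100"))  # prints disconnected
```

In a real landscape the call would go over a documented HES API (REST, SOAP or message-based) rather than an in-process method call.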
Traditional vs. universal HES
Headend systems come in different shapes and formats. The
most common form is one delivered by the meter manufacturer
itself. Understandably, meter manufacturers offer a HES to operate their meters’ functionality, as they would not be able to deliver smart meters without having an interface to operate their devices (i.e. a HES).
Next to meter manufacturers providing their own HES, a number of “universal” HES providers have entered the market over the last fifteen years. They originated from a market supporting Automated Meter Reading (AMR) for larger commercial and industrial (C&I) customers since the 1990s. AMR use
cases were typically limited to reading out meter registers,
hence, universal data collection solutions offered adapters or
connectors to vendor-specific protocols of the various meters.
While this was common practice for C&I metering, a universal
HES business case for Smart Metering, with its bi-directional
communication and advanced functionality requirements, seems
less clear. From a technical point of view, there are key
challenges that need to be overcome:
• Multiple communication technologies (radio, PLC, GSM) and
patterns (mesh networks, concentrator-based networks or
point-to-point communications) require different solutions
and management capabilities from a HES. In addition to
functional use cases, a HES is also expected to handle
communication management and monitoring. Furthermore,
performant and scalable data collection engines need to take
into account the communication pattern in a given meter
network:
A large proportion of point-to-point connections requires strong multi-threaded collection capabilities, with the ability to open, process and close many communication channels simultaneously. In contrast, concentrator-based solutions need to focus on transferring and processing large amounts of data at speed.
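The two collection patterns can be contrasted roughly as follows. The functions are simulated stand-ins, not a real collection engine; only the shape of the workload differs:

```python
from concurrent.futures import ThreadPoolExecutor

def read_meter_p2p(meter_id: str) -> float:
    """Simulated point-to-point read: one short-lived channel per meter."""
    return 1.0  # placeholder register value

def collect_p2p(meter_ids: list[str]) -> dict[str, float]:
    # Many parallel, short-lived channels: multi-threaded collection.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(zip(meter_ids, pool.map(read_meter_p2p, meter_ids)))

def collect_concentrator(bulk_payload: list[dict]) -> dict[str, float]:
    # One large transfer from the concentrator: bulk processing, few channels.
    return {row["meter"]: row["kwh"] for row in bulk_payload}

print(len(collect_p2p([f"M-{i}" for i in range(20)])))       # prints 20
print(collect_concentrator([{"meter": "M-1", "kwh": 2.5}]))  # prints {'M-1': 2.5}
```

A HES tuned for one pattern (channel concurrency) is therefore not automatically good at the other (bulk throughput), which is exactly the design tension described above.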
• Missing standardization: Even if some progress has been
made during the last 15 years with respect to standardization
of the meter protocols, there are still many different
protocols and so-called standards. While that was one of the
initial triggers for universal HES, it is also one of its biggest
challenges:
keeping track of development changes in a multitude of meter
protocol adapters (and making that commercially viable).
• Meters now come with a great variation of functionalities. While requirements vary from country to country, most utilities tend to request a long list of functionalities, anticipating what may be needed during the lifetime of a smart meter, which in most cases is longer than 10 years. Implementing this large number of functionalities for an equally large variety of meter types within one solution poses challenges for universal headend solution suppliers.
There are also a number of key legal and commercial criteria
that should be considered.
• When investing in smart meters, you want them to work. The responsibility for that lies with the meter vendor. To prove and warrant a functioning infrastructure, the meter vendor will always need to utilize a HES, and will most likely want to use their own HES for that purpose. In other words, if a customer wants to utilize a universal HES, the burden of proof for the functioning of the smart meters will most likely shift, making it harder for the customer to prove non-compliance or missing functionality within the delivery.
• Most meter protocols consist, in general terms, of a public part and a private part. While daily meter reads or even meter disconnection are relatively easy to implement for a universal HES, more proprietary functions such as meter configuration, firmware management and functions supporting failure analytics are a challenge, and in most cases close to impossible to implement, due to both complexity and IPR-related issues.
HES Southbound Integration
From an architectural perspective, the universal headend looks
promising at first. Instead of one integration between MDMS and
each of the potentially many HES, only a single integration needs
to be created and managed.

Figure 2: Smart Metering architecture with universal HES


However, the complexity in the architecture described above is
moved from the Northbound side of the HES, to the Southbound
side, where a universal HES needs to manage multiple complex
integrations with often non-standardized protocols and
communication technologies. Furthermore, the HES itself is
transformed from a straightforward protocol converter &
configuration/collection engine for a single technology towards a
complex monolithic application with potentially significant
operational and maintenance cost.
HES Northbound integration
On the Northbound integration side of the HES, the landscape
has been changing in recent years. Integration between MDMS
and HES was traditionally handled as point-to-point (p2p)
integration between MDM and HES (See picture above). Many of
the larger MDM suppliers developed “out-of-the-box” available
connectors to specific HES from various vendors to support this
need.
Similar to the issue of operation and maintenance costs that
come with universal HES, the Northbound p2p integration also
has the potential to inflict large operational and maintenance
costs. The fact that the selected MDM vendor needs to
accommodate all changes in its p2p integrations towards one or
more HES can lead to the following issues:
• Vendor lock / dependency on MDM vendor;
• MDM vendors potentially lacking competence on enterprise
integration;
• Introducing tight coupling between systems where loose
coupling would be logical and achievable; and
• Non-compliance with regards to Smart Grid use cases that
require additional flexibility
A manifestation of the problem caused by p2p integrations
comes with new use cases in the Smart Grid area that require
close-to-real-time data transfer from meters to solutions in the
operational technology (OT) area. For example, “last gasp” messages, which should be passed on to an ADMS solution as fast as possible to support rapid power restoration, may need to be routed on a prioritized data flow rather than going through the MDM first. This in turn creates the need for additional integrations towards the OT area that cannot be handled by the MDM vendor.
Figure 3: Growing complexity with p2p integrations and new use cases
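A minimal sketch of such a prioritized flow, assuming a hypothetical content-based router inside the integration layer (the event fields and destination names are invented):

```python
def route(event: dict) -> list[str]:
    """Hypothetical content-based router: last-gasp alarms bypass the MDM
    and go straight to the ADMS on a prioritized flow."""
    if event.get("type") == "last-gasp":
        return ["adms"]   # fast power-restoration path
    return ["mdm"]        # regular meter data path

print(route({"type": "last-gasp", "meter": "M-7"}))  # prints ['adms']
print(route({"type": "reading", "kwh": 1.2}))        # prints ['mdm']
```

With a p2p design this routing decision has no natural home, which is why the new OT use cases tend to force an integration layer into the architecture.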
Changing paradigm on HES northbound

During the last few years, our customer projects, ongoing tenders and talks with partners supporting both HES and MDM solutions around the world have indicated that the market is currently undergoing a paradigm shift with regards to HES Northbound integration. This paradigm shift is mainly based on two developments:
1. The growing acceptance of IEC 61968-9 (CIM) as the de facto standard and data model for all integration between HES and MDM solutions;
2. The increased competence of customers around enterprise integration, introducing concepts of loose coupling and event-driven architecture (EDA), even in the smart metering value chain.
The growing acceptance and implementation of CIM empowers
customers to be less dependent on the software suppliers’
services and makes it easier for third-party tools to take over
the responsibility for MDM-HES integrations.
In addition, growing competence and experience in the area of
enterprise integration makes customers realize that p2p
integrations in general are not sustainable and that the concept
of loose coupling and event-driven architecture is significantly
cheaper and more effective in the long run.
Another important factor is based on many lessons learned from
earlier AMI roll-outs where p2p integrations effectively prevented
quick and reliable error analysis and resolution, based on the
simple fact that no over-arching monitoring of all involved
integration flows was available, leading to an endless “finger-
pointing” game between involved vendors.
How to make the right architectural choices regarding headend
systems (HES)
All these factors have led us to recommend that utilities (and software vendors) implement a data integration layer, even on the HES Northbound side, between HES and MDM. Ideally, this integration layer supports CIM, loose coupling and event-driven architecture out-of-the-box. It should also support real-time streaming of data and be highly resilient.
When it comes to the role and integration of HES within the
Smart Metering value chain, the following overall highly effective
and resilient architecture is recommended for utilities:
Figure 4: Resilient and effective HES architecture
In this article we focused only on the HES and its integration into the Smart Meter value chain. We demonstrated that universal HES may not be as promising as they seem, and we explained why we think that Northbound HES integration should go via an adequate integration bus.
In our next article we will describe the role of the Meter Data
Management System (MDM) and its integration into the Smart
Meter value chain. We will also give an outlook on the future of
MDM solutions as the core element of the Smart Meter value
chain based on different likely scenarios.
