Dynamics 365 Implementation Guide v1-2
Success by Design
Microsoft Dynamics 365
Success by Design Implementation Guide
All rights reserved. No part of this book may be reproduced or used in any manner
without permission of the copyright owner.
Azure Cosmos DB, Bing, BizTalk, Excel, GitHub, HoloLens, LinkedIn, Microsoft 365,
Microsoft Access, Microsoft Azure, Microsoft Dynamics 365, Microsoft Exchange Server,
Microsoft Teams, Office 365, Outlook, Power Automate, Power BI, Power Platform,
PowerApps, SharePoint, SQL Server, Visio, and Visual Studio are trademarks of Microsoft
Corporation and its affiliated companies. All other trademarks are property of their
respective owners.
The materials contained in this book are provided “as-is” and Microsoft disclaims all
liability arising from your or your organization’s use of the book.
Information contained in this book may be updated from time to time, and you should
refer to the latest version for the most accurate description of Microsoft’s FastTrack for
Dynamics 365 program.
Case studies used in this book are fictional and for illustrative purposes only.
This book is dedicated to Matthew Bogan, our colleague and friend. The way you think
and the way you approach everything you touch embodies a growth mindset—that
passion to learn and bring your best every day to make a bigger difference in the world.
Thank you for being a shining example.
What’s inside
Strategy / 1
1 Introduction to Implementation Guide / 2
2 Success by Design overview / 12
3 Implement cloud solutions / 27
4 Drive app value / 61
5 Implementation strategy / 87

Implement / 229
10 Data management / 230
11 Application lifecycle management / 266
12 Security / 291
13 Business intelligence, reporting, and analytics / 331
14 Testing strategy / 360
15 Extend your solution / 394
16 Integrate with other solutions / 426
17 A performing solution, beyond infrastructure / 474

Prepare / 513
18 Prepare for go live / 514
19 Training strategy / 553

Operate / 593
20 Service the solution / 594
This Implementation Guide comes at a pivotal time for the world, and
certainly for us at Microsoft. The pandemic has affected almost every
aspect of everyone’s lives, while also bringing about tremendous
change to business models and processes across every industry.
James Phillips
President, Digital Transformation Platform
Guide 1
Introduction to Implementation Guide
“Our success is dependent on our customers’
success, and we need to obsess about them—
listening and then innovating to meet their
unmet and unarticulated needs. No customer of
ours cares about our organizational boundaries,
and we need to operate as One Microsoft to
deliver the best solutions for them.”
– Satya Nadella, Chief Executive Officer of Microsoft
Overview
Microsoft believes that every business is in the
business of creating great customer experiences.
Fig.: The predictive era: Power Platform and Azure, with data ingestion as the planet-scale foundation.
For companies that “grew up” in the predictive era, that paradigm is
second nature. But for mature enterprises, the path can be more diffi-
cult, as it requires transformation of existing capabilities and processes.
For Microsoft, this isn’t just theory or prediction. We have been pursu-
ing our own transformation and have faced many of these challenges
ourselves. At the heart of our transformation is the way we build prod-
ucts. Instead of the multiyear product cycle of traditional, on-premises
products, Dynamics 365 is delivered as a service that’s continuously
refined and always up to date. It also allows us to capture rich teleme-
try on bugs and usage to inform ongoing feature development.
Beyond its influence on the way we run our own business, this digital
transformation paradigm has affected how we think about success for
our customers. Microsoft’s vision has always been about democratizing
technology for every person and organization.

Microsoft’s data-first cloud strategy
The opportunity contained in the shift from reactive to predictive—
knowing that the car is on its way to failure before it fails—is hard to
overstate. For Microsoft and for our customers, every product, ser-
vice, and business process is ripe for reinvention. But the reinvention
requires a wholesale reimagination—a literal turning upside down—of
the systems that power the interconnected fabric of products, services,
and applications.
In this chapter, we discuss what Microsoft got right with Dynamics 365, further
explain our data-first strategy in terms of Dynamics 365, and provide advice to
customers embarking on an implementation of Dynamics 365.
Microsoft’s data-first Dynamics 365 strategy

Dynamics 365 is a portfolio of business applications that meets organizations
where they are—and invites them to digitally transform.

If you’re reading this book, then your business likely has already invested
in Dynamics 365. However, understanding Microsoft’s future vision for
Dynamics 365 will help you take full advantage of your investment.

Building on Microsoft’s cloud strategy (which includes Dynamics 365),
several other factors contribute to Microsoft’s data-first strategy to put
Dynamics 365 into a category of one:
▪ Dynamics 365 provides front-office and back-office cloud applications
that are consumable in a composable manner, which means that our
customers can maximize their investment without having to do an
extensive rip and replace of what were historically behemoth customer
relationship management (CRM) and enterprise resource planning
(ERP) processes.
▪ Dynamics 365 is built upon the low-code Power Platform to
enable pro and citizen developers to build solutions, automate
processes, and generate insights using Power Automate, robotic
process automation (RPA), and Power BI—which no other vendor
can offer with its core platform.
▪ All of these are natively built on Azure, the cloud with an unmatched
level of security, trust, compliance, and global availability.
▪ Add to this the assets of Office 365 (such as Microsoft Teams,
LinkedIn, and Bing), which can be used to enrich the business
application experience.
Advice to customers
embarking on an
implementation of
Dynamics 365
Microsoft’s ideas on what customers should keep doing—whether
they are embarking on an implementation of Dynamics 365 or any
other business application—are detailed later in this book, but they
include many of the traditional activities, such as business-process
mapping.

More pointedly, Microsoft’s view is that being agile does not mean
engaging in sprint cycles that still require a 9-month to 18-month team
effort to deploy part of the application to end users.

The ability to start quickly in the app allows … want to harness it
so that they can disrupt the market—are the business application
projects that Microsoft wants to be a part of.

In making an investment in business applications, organizations are
called upon to clearly identify the value they intend to drive toward.
Will the technology get you there? Will it advance your business model,
allow you to leapfrog over the competition, or something else? Does the
software or the implementation partner have the vision to get you there?

An investment in business applications requires a level of organizational
maturity to think about what you and your business are after. It also
requires you to think beyond just the implementation of technology.
One of the primary goals of this book is to democratize the practice of
Success by Design by making it available to the entire community of
Dynamics 365 implementers.

Success by Design is available to partners, systems integrators, independent
software vendors (ISVs), and customers as a means to better architect, build,
test, and deploy Dynamics 365 solutions.

For our customers, Microsoft recognizes that Success by Design doesn’t
guarantee implementation outcomes, but we’re confident that it will
help you achieve your project’s goals while enabling the desired digital
transformation for your organization.

For our partners, Microsoft is confident that Success by Design, coupled
with your implementation methodology and skilled resources, will
increase your team’s effectiveness in delivering successful Dynamics 365
projects to your customers.
Microsoft believes that customer success is the precursor to every
Dynamics 365 product development decision.

For the first time in the product’s history, such questions have led to
the successful transition of 100 percent of Microsoft Dynamics 365
online customers to one version, to a dependable and safe deployment
process, to a steady and coherent feature release cadence (including
many overdue feature deprecations), and to a reliable and performant
platform that delivers customer value. But product is only one half of the
customer success equation.

Microsoft has put equal emphasis on the need to provide you the
prescriptive guidance and recommended practices to ensure a smooth
implementation project and a properly designed and built Dynamics 365
solution that successfully transforms business operations. This push has
also resulted in the transformation of Microsoft’s FastTrack for Dynamics
365 service, which makes certain that our community of Dynamics 365
implementers, customers and partners, has access to Success by Design.

To this end, this chapter focuses on the what and the why of Success by
Design, as well as how project teams can use it to accelerate customer
success throughout the implementation of Dynamics 365.
Success by Design
objectives
Success by Design is prescriptive guidance (approaches and recom-
mended practices) for successfully architecting, building, testing, and
deploying Dynamics 365 solutions. Success by Design is informed by
the experience of Microsoft’s FastTrack program, which has helped our
customers and partners deliver thousands of Microsoft’s most complex
Dynamics 365 cloud deployments.

For interested customers, Microsoft recommends that project leaders
team up with their implementation partner to enable Success by Design
within their project. In addition to the Success by Design guidance
available in this book, it’s highly recommended that project teams ready
themselves by enrolling in the Success by Design training on Microsoft Learn.

Success by Design reviews are exercises in reflection, discovery (observation),
and alignment (matching to known patterns) that project teams
can use to assess whether their implementation project is following
recommended patterns and practices. Reviews also allow project teams
to identify (and address) issues and risks that may derail the project.
Success by Design phases

Success by Design maps the Dynamics 365 implementation lifecycle into
four methodology-agnostic phases: Initiate, Implement, Prepare, and
Operate (Figure 2-1). In this section and the following sections, we outline
the Success by Design phases, their relationship to Success by Design
reviews, and the desired outputs and outcomes.

¹ Acknowledging the possibility that feature deprecations or other changes
to product roadmap in compliance with Microsoft policy may occur.
By the Prepare phase, the solution has been built and tested and the
project team is preparing for the final round of user acceptance testing
(UAT) and training. Additionally, all necessary customer approvals have
been granted, information security reviews completed, the cutover plan
defined (including go/no-go criteria), mock go-lives scheduled, the sup-
port model ready, and the deployment runbook completed with tasks,
owners, durations, and dependencies defined. At this point, the project
team uses the Success by Design Go-live Readiness Review to identify
any remaining gaps or issues.
In the Operate phase, the customer solution is live. The goal of this phase
is stabilization and a shift in focus towards functionality and enhance-
ments that are earmarked for the next phase of the project.
Now let’s turn to Success by Design reviews, which are also sometimes
referred to as Success by Design workshops.
Each review raises questions that serve as points of reflection that project
teams can use to generate important discussion, assess risk, and confirm
that best practices are being followed.
The primary review outputs fall into two related categories: findings and
recommendations.
A global corporate travel company is implementing Dynamics 365
Customer Service to drive its call center transformation. As the project
team digs into the business’s requirements, they learn that the Dynamics
365 solution must account for integration with multiple legacy systems
(many with high transaction volumes). Additionally, the requirements
point to a business-mandated customization that the project team
agreed could not be achieved using out-of-the-box functionality.
In preparation for the Solution Blueprint Review, the project team parses
these and other details, including confirming that solution performance
testing was purposely left out of the project scope on the assumption
that Microsoft’s Dynamics 365 cloud service should be performant on
its own. (Although it’s true that Microsoft is responsible for delivering a
reliable and performant cloud service to its customers, our experience
is that solution design and the resulting configurations, customizations,
and ISVs to achieve that design may play a role in impacting overall
Dynamics 365 solution performance.)
Conclusion
Success by Design equips project teams with a model for technical and
project governance that invites questions and reflection, which leads to
critical understanding of risks that might otherwise go unnoticed until
too late in the project.
Considering the pace of cloud solutions and the investment that organi-
zations make in Dynamics 365 software and implementation costs, even
the most experienced customers and partners with the best methodolo-
gies will benefit by incorporating Success by Design into their projects.
Case study
An inside look at the
evolution of Success by Design
When our Solution Architects got their start with Dynamics 365, they
found themselves hooked on the challenge of effectively aligning an
organization’s business processes and requirements with the (software)
product in hopes of delivering solutions that resulted in actual value
for users.
The content of this case study is based on interviews conducted with
Dynamics 365 FastTrack Solution Architects. The goal is to provide readers
with an inside look at Success by Design. We hope you enjoy this insider view.

Because of this, FastTrack Solution Architects also know the challenge
of working with people, understanding the business, shaping software
to meet the needs of the business, and delivering a solution that is
accepted by users. They know that projects don’t always go well and
often fail.
It’s for exactly this reason that the FastTrack for Dynamics 365 team
and Success by Design were created.
When the FastTrack for Dynamics 365 team was formed, these goals
were initially achieved without the use of a framework. In other words,
FastTrack Solution Architects used their experience and connection to
the product team, but this approach contained too many variables to
be consistently reliable.
As this chapter highlights, the FastTrack for Dynamics 365 team doesn’t …

The more project teams invest in Success by Design, the more they will
get out of it.
Whatever model an organization chooses, we find that the key to
success isn’t what implementation looks like, but rather how leaders,
architects, and entire companies approach the digital transformation
journey. Success in the cloud isn’t just about the technology or the
features available; it’s about the organizational mindset. For a successful
digital transformation, your organization must prepare for changes
that span the entire enterprise to include organizational structure,
processes, people, culture, and leadership. It’s as much a leadership
and social exercise as it is a technical exercise.

First, let’s understand the “why” behind the pervasive adoption of SaaS
business applications by organizations across the globe. Traditional
approaches to business applications mostly involve the IT department
building a custom application or using an off-the-shelf product aug-
mented with customizations to meet the specific business requirements.
The typical approach includes phases to gather requirements, build,
go live, operate, and then follow the change management process to
request changes or add new features. The IT department has full control
of the application, change process, releases, updates, infrastructure, and
everything needed to keep the solution running for years.
The changes we’re talking about aren’t triggered by the cloud or tech-
nology. These are thrust upon us by the changing business landscape.
Adopting a cloud mindset, then, is about transforming your processes,
people, culture, and leadership in such a way that your organization can
embrace change quickly and successfully. We believe the fundamental
organizational characteristics that will determine success in this environment
come down to focusing on delivering business value through a
secure cloud technology platform. This includes the ability to harness the
data that is captured to generate insights and actions. This platform will
also rely on automation to quickly react to changing needs.

“Every company is a technology company, no matter what product or
service it provides. Today, no company can make, deliver, or market its
product efficiently without technology.”
Automate the routine tasks so you can focus on creative, more
challenging work.

For example, take a point-of-sale application that helps you address
your customer by first name and provides them personalized, relevant
offers based on their loyalty, or recommends products based on their
purchase history and artificial intelligence (AI). This will be better
received by your employees and customers than an application that
can barely determine if a product is in stock or not. Simply offering the
best deal or best-quality product isn’t enough in today’s hypercompetitive
environment. To win, you need to differentiate between your
customers, respond to their unique behaviors, and react to fluctuating
market demands.
Businesses likely already have the data to do this, but may lack the
technology to turn data into practical insight. For example, Dynamics
365 Commerce taps your systems to gather customer intelligence from
a myriad of sources (such as social media activity, online browsing hab-
its, and in-store purchases) and presents it so that you can shape your
service accordingly. You can then manage product recommendations,
unique promotions, and tailored communications, and distribute them
across all channels via the platform.
The “If it ain’t broke, don’t fix it” mentality common in large organiza-
tions, where business applications can remain unchanged for up to 10
years, puts the enterprise at risk. Unless your business is a monopoly
and can sustain itself without investing in technology, the hard reality
is most businesses today won’t survive without strategic plans for
digital transformation.
Business applications should no longer be built and deployed with an
expectation that they will remain unchanged for long periods of time.
Instead, the core design principle for business applications should be
ease of change. You should take advantage of the
wonders of automation to deliver changes as often as daily and weekly,
at the whim of business. This continuous development philosophy is
core to a Dynamics 365 SaaS application with weekly maintenance
updates and multiple waves of new features delivered every year.
Realizing the value of these cloud applications, therefore, is about
adopting the latest best-in-class capabilities, continuously delivered.
With this new mindset, SaaS services aren’t just used to meet business
requirements, they’re helping drive business value and adoption.
These scenarios all help illustrate how a transition to the cloud can play
an instrumental role in refocusing technology to deliver business value.
Driven by data
This disruptive innovation we’re seeing across several industries is
fueled by data. Almost everything around us generates data: appli-
ances, machines in factories, cars, apps, websites, social media (Figure
3-1). What differentiates one company from another is the ability to
harness this data and successfully interpret the information to generate
meaningful signals that can be used to improve products, processes,
and customer experiences.
Overall, the intelligence and insights you generate from your data
will be proportional to the quality and structure of your data. You
can explore this topic more in Chapter 10, “Data management,” and
Chapter 13, “Business intelligence, reporting, and analytics.”
Think platform
An organization can take several approaches towards digital trans-
formation. In many cases, you start with a single application being
deployed to a SaaS cloud. A key responsibility of IT decision-makers
and enterprise architects is to deliver a cloud platform for their or-
ganization’s digital transformation. Individual applications and their
app feature sets are important, but you should also look at the larger
picture of what the cloud platform offers and what its future roadmap
looks like. This will help you understand if the platform can meet the
short-term and long-term objectives of your digital transformation.
You’re investing in the platform, not just the application.

Thinking about the platform versus a single app offers clear benefits.
You avoid reinventing the wheel with each additional application,
and you instead deliver a foundation approved by governing bodies,
with clear patterns and practices. This approach limits risk and brings
a built-in structure that enables reuse. Platform thinking doesn’t
necessarily mean that other cloud platforms or legacy applications have
to be rebuilt and replaced. Instead, you can incorporate them as part
of an all-encompassing platform for your organization, with well-de-
fined processes and patterns that enable integration and flow of data
between applications. Bringing this “systems thinking” to deliver a
platform for business applications can drastically reduce the amount
of time lost getting security and design approvals for individual apps,
thereby improving your agility and time to value.
The biggest selling point of the cloud is the ability to stay current and
deploy software quickly. You can’t rely on teams to manually test and
deploy each and every change to achieve that vision.
Traditionally, a time lag existed between coding, testing, and deployment.
A bug in production meant repeating the lengthy cycle. The fear of
breaking code meant that teams tended to delay updates for as long
as possible. With automation, you deploy fast and potentially fail fast.
This is where the culture of fail fast comes in. Automated processes
help companies empower their teams to take risks. You fail but quickly
release a fix. Over time, failures decrease and success rates improve,
instilling confidence in teams to deploy and innovate, and deliver-
ing value to the business faster. At the core of the cloud mindset is
Fig.
3-2
Plan
Monitor Code
DevOps
Operate Build
processes
Deploy Test
Release
The idea is to enable development, IT, and quality and security teams
to collaborate to produce more reliable and quality products with the
ability to innovate and respond more quickly to the changing needs of
business. Implementing DevOps inherently depends on automation:
automating the build process to enable continuous deployment,
regression tests to improve speed and reliability, and administrative
actions like user and license management.
If a change includes a bug, the build fails, and the cycle repeats. The
automation allows changes to happen quickly without unnecessary
overhead, so the team can focus on the business logic and not the
infrastructure. Therefore, with continuous deployment (CD) we always
have the most up-to-date working software.
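The gate described above (build and test, and only release when every stage passes) can be sketched in a few lines of Python. This is an illustrative simulation of pipeline logic, not any specific CI product's API; the stage names and commands are hypothetical:

```python
import subprocess
from dataclasses import dataclass


@dataclass
class BuildResult:
    passed: bool
    log: str


def run_stage(command: list[str]) -> BuildResult:
    """Run one pipeline stage (build, test, ...) and capture its outcome."""
    proc = subprocess.run(command, capture_output=True, text=True)
    return BuildResult(passed=(proc.returncode == 0), log=proc.stdout + proc.stderr)


def pipeline(stages: dict[str, list[str]]) -> str:
    """Run stages in order; stop at the first failure so a buggy change
    never reaches the release stage, and the cycle repeats after a fix."""
    for name, command in stages.items():
        result = run_stage(command)
        if not result.passed:
            return f"failed at {name}"
    return "released"
```

In a real pipeline the stages would invoke your build tooling, automated regression tests, and solution checks; the value is that the release step is unreachable unless everything before it succeeds.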
But this isn’t just a technological change. DevOps requires close collab-
oration between development, testing, and deployment teams. This is a
cultural and mindset change in which the whole team trusts each other,
feels empowered, and is collectively responsible for product quality.
Cloud implementation considerations

Implementing solutions from cloud SaaS products can reduce management
overhead and allow you to deliver business value more quickly by
focusing on the application layer. Fundamental application considerations,
however, still apply to SaaS cloud applications. Security, scalability,
performance, data isolation, limits, and capacity are still critical, but
could have a different approach when compared to an on-premises
implementation.
Security readiness
The responsibility for security in SaaS cloud applications is shared by
both the service provider and the customer. That means your existing
security policies might not be suitable to meet the security require-
ment in the cloud. The SaaS service provider processes customer data
and is responsible for aspects such as securing the backend infrastruc-
ture and data encryption in transit and at rest. As a data controller, the
customer is still responsible for securing access to environments and
application data.
Fig.: ALM pipeline maturity: an initial build pipeline instantiates a pristine
development environment daily; a build pipeline automates manual steps
(no more need to upload to solution checker and manually export the
solution, unpack, and push to the repo); an automated release pipeline
removes the remaining manual steps, so weekly, daily, or hourly releases
become the new standard.
Scalability
The scalability of the SaaS platform is a key consideration for business
applications. Being able to scale out and scale up to support seasonal
workloads or spikes in user activity will impact the overall user experi-
ence (both staff and customers) and effectiveness of business processes.
Performance also depends on workload, solution design, and customizations. SaaS services are inherently scalable, have
vast compute power, and are available from multiple locations around
the world, but that doesn’t necessarily guarantee performance if the
solution is poorly designed, or if users are accessing the service from
environments with high network latency.
One option for businesses looking for more reliability and security is
to use a private connection. Cloud providers offer dedicated channels,
for example Azure ExpressRoute. These connections can offer more
reliability, consistent latencies, and higher security than typical
connections over the internet.

You can test latency using the Azure Latency Test and Azure Speed
Test 2.0 for each datacenter.
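As a rough complement to those tools, connection latency can also be sampled directly from the environment your users sit in. The sketch below times TCP connection setup to an endpoint; the hostnames in the commented usage are placeholders, not real service addresses:

```python
import socket
import statistics
import time


def probe_tcp_latency(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Measure TCP connect round-trip times (in milliseconds) to a host."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings


def summarize(timings: list[float]) -> dict:
    """Reduce raw samples to the numbers worth comparing across regions."""
    return {
        "min_ms": min(timings),
        "median_ms": statistics.median(timings),
        "max_ms": max(timings),
    }


# Example: compare candidate datacenter endpoints (hostnames are placeholders).
# for host in ["service-westeurope.example.com", "service-eastus.example.com"]:
#     print(host, summarize(probe_tcp_latency(host)))
```

TCP connect time understates full request latency (no TLS handshake or payload), but it is usually enough to compare regions or spot a poor network path.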
Private connections offer reliability, consistent latencies, and higher security.

The other aspect of sharing resources and infrastructure in the cloud
is the possibility of monopolizing resources or becoming a noisy
neighbor. While rare, these cases are more often caused by poorly
developed runaway code than a legitimate business use case. Automated
monitoring and protection throttles are built into the Dynamics 365
platform to prevent such situations, so it’s important to understand
and comply with these service boundaries and throttles when designing
for cloud platforms.
Service protection
and a capacity-based model
Service protection and limits are used in cloud services to ensure
consistent availability and performance for users. The thresholds don’t
impact the normal user operations; they’re designed to protect from
random and unexpected surges in request volumes that threaten the
end user experience, and the availability and performance characteris-
tics of the platform or solution. It’s crucial to understand the protection
limits and design for them with the appropriate patterns, especially
around high-volume workloads like integrations and batch processing.
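A common pattern for operating within such limits is to honor the service's throttling signal and retry with backoff rather than hammering the endpoint. The sketch below assumes a REST endpoint that signals throttling with HTTP 429 and an optional Retry-After header (the convention Dataverse uses for service protection limits); treat the exact status codes and headers of any given service as something to confirm in its documentation:

```python
import time
import urllib.error
import urllib.request


def throttle_delay(retry_after_header, attempt: int) -> float:
    """Prefer the server's Retry-After hint; otherwise back off exponentially."""
    if retry_after_header is not None:
        return float(retry_after_header)
    return float(2 ** attempt)


def call_with_backoff(url: str, max_attempts: int = 5) -> bytes:
    """Call an endpoint, waiting out HTTP 429 throttling responses."""
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts:
                raise  # not throttling, or retries exhausted
            time.sleep(throttle_delay(err.headers.get("Retry-After"), attempt))
    raise RuntimeError("unreachable")
```

For batch and integration workloads, the same idea extends to spreading requests over time and parallelizing conservatively so that routine volume stays below the protection thresholds in the first place.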
The cloud provides us the scalability to deal with large workloads, but
a shared public cloud doesn’t mean you have unlimited capacity or
computing power. In many cases, these capacity limits are enforced
via licensing. You have to embrace this capacity-based model of cloud
infrastructure and plan to operate within the entitlements, taking
into account usage profile, data volumes, and integration patterns.
Understanding these protections and service boundaries that apply
to a specific service helps bring clarity on available resources. It also
drives the right behavior when it comes to design and development so
the solution is optimized to operate within the allocated capacity, or
additional capacity is procured to meet the requirements in advance.

Plan for the daily peak and the monthly maximum order transaction
volumes expected to ensure that the service is licensed to satisfactorily
support the peak as well as the total maximum expected volumes. Also,
plan and prioritize for integrations and Open Data Protocol (OData)
requests based on the volumes, so they’re honored and not throttled
due to request limits. These checks and balances prevent overutilizing
resources, preserve the system’s responsiveness, and ensure consistent
availability and performance for environments running Dynamics 365 apps.
These implementations are called cloud and edge, where cloud is the
system of intelligence and edge is the system of record.
For example, Dynamics 365 Commerce customers use the cloud and
edge model, in which the store operations run on-premises (edge)
and the main instance handles centralized back office functions like
finance, data management (such as products and customers), and
analytics (cloud).
Using a SaaS cloud platform is an ongoing partnership between
customer and service provider.

Even when using supported extension techniques, you need to adhere
to best practices. A good example is that while the platform might
allow you to run synchronous code for up to two minutes when creating
a record, running to the time limit would block a user’s UI thread
for two minutes when they save a record. Is that acceptable? Similarly,
you could use custom code to limit access to records and implement
a custom security requirement, but certain access mechanisms could
bypass your custom code, leading to exposure. The point is to carefully
design your customizations and assess their impact, particularly when
those customizations affect the end user’s experience or deviate from
the security control constructs provided by the platform.
A SaaS platform continuously updates its capabilities so you get an evergreen, always up-to-date solution.
Future-proofing
One of the key principles we have established in this chapter and through-
out this book is getting comfortable with change. SaaS enables you to
adopt new features to maintain a competitive edge. These features and
capabilities are built on top of an existing baseline of features and tables.
Although repurposing or using a custom table may not be unsupported,
deviations can severely impact your ability to take advantage of future
capabilities or can affect the interoperability of your solution.
Chapter 15, “Extend your solution,” provides a deeper dive into making
a solution work for you.
Center of Excellence

You might have a partner and specialized development team assist
with the initial deployment, but once in “business as usual” operations
mode, does your team have the skillset to manage the platform, main-
tain the custom components, and assess and implement appropriate
new features? Not having the correct level of expertise and guidance
onboard after you go live can inhibit solution advancements and
undermine your ability to take full advantage of the platform. Creating
a Center of Excellence (CoE) within the organization and investing in
developing the skills to operate on the platform is extremely valuable.
Organizations that embrace organic application development by end
users also need a well-established CoE. This center can administer,
nurture, govern, and guide users to adopt the citizen development
approach to business applications, empowering everyone to create
innovative apps with high standards by using a consistent application
lifecycle management (ALM) process.
What to monitor?
Different personas in your company will want to monitor different aspects of the system. End users may be more concerned with responsiveness, admins may be looking at changes, and the business may be
looking at KPIs like the time taken to place an order.
Evergreen cloud

Dynamics 365 has two release waves per year in which several incremental enhancements and new capabilities are made available to customers. Adopting these mandatory changes and new features, many of which are included in existing license plans, is a fundamental aspect of operating in the cloud.
Stay engaged

Implementing the system is one thing, but getting continuous return on investment (ROI) from your cloud solution requires engagement. Companies who stay passionate and curious about the solution and adopt the latest features are the ones who enjoy the most success. We urge all customers to maintain engagement through conferences and events throughout the year, and stay connected by using community groups. Several official and unofficial communities have been formed by customers, partners, and Microsoft engineering. You can choose from application-specific forums like the Microsoft Dynamics Community Help Wiki, Yammer groups, blogs, events, and how-to videos where you can discuss ideas, learn features, learn roadmaps, and ask questions.

Chapter 20, “Service the solution,” delves further into the update wave approach.
Conclusion
Embracing SaaS applications to run your business can significantly
accelerate your digital transformation, but it’s also important to rec-
ognize that organizational cloud maturity will play a significant role in
your strategy’s long-term success.
Adopt a cloud mindset
▪ Look to transform business processes by using SaaS-based applications and capabilities that provide value to the business process immediately and provide a path for continuous improvement.
▪ Avoid implementing cloud-based processes that are designed based on the recreation of legacy on-premises user experiences.
▪ Have a shared understanding at every level, from the executive sponsor to the developers, of the business impact being delivered.
▪ Ensure the organization has done its due diligence organizing the data estate and understands the impact of the new solution on the data estate.
▪ Gain approval for the cloud platform for use with the appropriate application data category from information security and compliance.
▪ Ensure the respective teams understand all administrative, operational, support, and monitoring aspects of the platform, and the organization policies, processes, and patterns conform to the SaaS cloud.
▪ Implement DevOps and CI/CD pipelines to support automation for build, testing, and deployment.
▪ Design the solution according to service boundaries and available licensing capacity, and ensure the controls to further expand and scale the solution are understood.

Upgrade from on-premises to the cloud
▪ When migrating on-premises applications to the cloud, ensure the data model, design, and data quality make certain that the application becomes a stepping stone and doesn’t stifle cloud adoption by carrying over poor design and data.

Customize and extend
▪ Understand the expected value of each application extension and its quantifiable impact on aspects of the business process such as data-driven decision making, efficiency, automation, and ease of adoption.
▪ Create guidelines for when to customize and extend out-of-the-box apps, and only adopt documented extension techniques.
▪ Avoid deviations from and repurposing of the out-of-the-box tables and data models (Common Data Model) because this inhibits future adoption of new features and capabilities.

Operation
▪ Ensure that the necessary expertise is available via the partner (system integrator) or internal teams post-deployment to support and evolve the solution.
▪ Engage with the community to keep up with the latest innovation and help influence the product direction using channels, events, blogs, and other mediums.
One of its key businesses is a chain of retail stores that serves busy
guests across Australia.
Given the high volume of traffic that the retail chain processes and the
diversity of its retail store locations, the company was seeking a hybrid
solution with reliable, flexible connectivity.
Scalability and speed are critical for their retail business, and they
needed an infrastructure design that optimizes for both.
“We serve millions of guests each month,” says the company’s store
systems specialist. “To achieve speed of service, we know that we need
A cloud engineer at the company adds, “We have stores in rural areas
where they don’t have high internet speed connections. We wanted
to have something that is both in-store and in the cloud that would
synchronize when it comes to failovers and redundancy, so that we can
have the best of both worlds.”
One of the team’s other top considerations was how easy the infra-
structure would be to manage, so they factored in what the company
was already using across its retail business.
The hybrid deployment model using Azure Stack Edge and Dynamics
365 Modern Point of Sale provides store operations redundancy in case
of network outage.
As the company continues to roll out its new infrastructure, it’s piloting
a solution to recognize vehicles’ number plates based on real-time
CCTV feeds. One benefit of this is loss prevention, in situations where a
number plate is tied to a known offender in a published database.
They also plan to use the in-store cameras to do machine learning and
AI on Azure Stack Edge virtual machines to predict stock levels and
alert the staff, all locally, without the cloud.
Approach to digital
transformation
Throughout this book, we discuss several concepts related to Success
by Design, including how to initiate, implement, and deploy a project,
but the scope of Success by Design isn’t limited to a successful go live
or operational excellence. The long-term goal of Success by Design is to
create the right foundation for an organization to evolve their business
application landscape and expand their digital transformation footprint.
Opportunities could arise to take the transformation further and deliver transformational value to the business, rather than looking at it as just a tactical upgrade project run by IT with little involvement from the business.
The business model represents the “why” and what it takes to deliver the value to customers; how we go about doing it is the business process definition.

Triggers

Disruptions and inflections in the business model create big opportunities for transformation. In a world that’s changing at a pace greater than ever, businesses have to reinvent themselves more often, improve customer experiences, and attract and retain talent. The opportunities for impact are plentiful (for a deeper discussion, see Chapter 1, “Introduction to Implementation Guide”).
Business process
The next stage of the process is discovery, which involves key business
stakeholders. With a holistic view of the evolving business model and
transformation goals in place, you can identify the related business
processes and applications that need to change. The discovery exer-
cise should be focused on creating clarity around how the business
processes and applications identified need to evolve to meet the
corresponding transformation goals.
Change streams

So far, this chapter has discussed the approach to digital transformation, how to develop a transformation plan for your application, and the process involved. This has traditionally been a top-down approach, but change can come from several streams.
Business
As we embark on the journey of transformation and start building the
business application based on the goals set by the business, it’s also
important to appreciate that during design and implementation, you
may need to accommodate further adjustments and refinements.
This support for agility and change needs to be fundamentally built
into the program. This is where some of the iterative methodologies
could be beneficial, enabling us to respond to the change without it
turning into a disruption.
Those leading transformational change often find that all aspects of change are not well defined in the early stages, so iterating to adopt changes quickly and early is key, but so is flexibility as clarity reveals the resulting business model. When transformation occurs in an industry, what the industry will look like following the transformation is often not known. Companies must be able to adapt quickly in the middle of the project to incorporate these inevitable changes.

Embrace the mindset of getting comfortable with frequent and inevitable change.
User
A key stakeholder in your transformation plan is the business user, who
interacts with the application often. If the process being implemented
in the application doesn’t meet the needs of the user, or the applica-
tion doesn’t consider working patterns and end user usage, it could
lead to poor adoption and lack of engagement. Incorporating user
feedback continuously throughout the process using well-defined and
frequent touchpoints is key to achieving your transformation goals.
Product

SaaS application providers that are competing to build and deliver business capabilities for customers are helping advance business applications in a way that was unimaginable just a few years ago. Applications that were just forms over data (mostly passive data capture systems used to track, view, and report on known data) have evolved into applications that can automatically capture data, learn from the data, deliver insights, and guide users to the next best action. This makes it extremely important for businesses to watch for new capabilities being made available and adopt them to accelerate their transformation and differentiate themselves in the industry. The key value proposition of SaaS is also the ability to tap into the enhancements and features that are based on broad market research. Activating these features can accelerate your transformation with minimal effort and without any development cost.
External
External economic, social, and political drivers can disrupt your
transformation plan. The COVID-19 pandemic is an example of how
supporting remote work and creating online collaboration channels
became a top priority for most customers in 2020. This required coordinated changes from infrastructure, to network, to the device and
application layer across the IT landscape of organizations. Although it’s
difficult to foresee and plan for external events, the iterative approach
to delivering continuous value in smaller batches allows you to pause
and pivot as needed. For example, consider an organization on a long-
term transformation program that releases a major capability to users
once a year versus an organization that has adopted the DevOps cul-
ture of delivering value with regular bi-monthly or monthly updates.
The latter company can realize value from investments much sooner
and is better positioned to pivot when change demands it.
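To make the last point concrete, here is a toy calculation (all numbers are invented for illustration, not taken from the guide) comparing how much cumulative value is realized over two years when the same total functionality ships once a year versus in small monthly increments:

```python
# Toy model: each release starts delivering value the month it ships,
# so earlier releases accrue value for longer within the horizon.

def cumulative_value_months(interval_months, horizon_months, value_per_release):
    total = 0.0
    for month in range(interval_months, horizon_months + 1, interval_months):
        total += value_per_release * (horizon_months - month)
    return total

# Same total functionality over 24 months, packaged differently:
yearly = cumulative_value_months(12, 24, value_per_release=1.0)       # 2 big releases
monthly = cumulative_value_months(1, 24, value_per_release=1.0 / 12)  # 24 small releases

print(f"yearly releases:  {yearly:.1f} value-months")
print(f"monthly releases: {monthly:.1f} value-months")
```

Under these invented assumptions the monthly cadence realizes roughly twice the value-months (23 versus 12), which is the sense in which the organization with regular updates realizes value from investments much sooner.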
A common approach is to deliver the essentials and basics in Phase 1 of a solution and then deliver enhancements, with the most innovative, differentiating features in later phases of the program. But if the most valuable parts of the transformation are planned for much later in the lifecycle (Figure 4-2), you risk stakeholders losing interest or poor user perception of the system.

Fig. 4-2: A phasing plan that defers the innovative, differentiating capabilities (such as predictive forecasting, sequences, premium autocapture, assistance cards and studio, and conversion intelligence) to the later phases, with only essential elements such as contact management, standard cards, and opportunity management in the early phases.
You should plan the phases so you can pick up elements from various
quadrants and deliver a holistic solution that has the key process
elements, integrations, and reporting that delivers value (Figure 4-3).
Look for quick wins: the latest SaaS features that you can easily adopt
with little effort to bring innovative and differentiator elements into
the earlier phases.
Fig. 4-3: Phases planned to pick up elements from all four quadrants (essential, enhanced, innovative, differentiator): core elements such as contact management, standard cards, opportunity and advanced pipeline management, autocapture, and quotes alongside features such as forecasting, sequences, premium autocapture, assistance cards and studio, conversion intelligence, and predictive forecasting.
Minimum viable product strategy
The concept of MVP, which enables you to quickly build, measure,
learn from feedback, and pivot if necessary, has been well established
in the startup world.
A good MVP has the following characteristics (Figure 4-4):
▪ Delivers on the most important of the digital transformation goals.
▪ Is production-ready and uses supported customization techniques (this is different from a proof of concept or prototype designed to prove feasibility or get specific usability feedback).
▪ Goes beyond the essentials and delivers on top of the existing capability of the incumbent system.
▪ Focuses effort on delivering maximum business value in the shortest amount of time (you could compromise on some experiences that aren’t directly aligned to business outcomes).
▪ Is built in a way that allows for quick changes based on feedback, making it possible to pivot. This translates to a greater reliance on low-code configuration versus highly customized, professionally developed, code-heavy experiences.
▪ Allows you to test and learn the latest SaaS product features that could be activated (this requires some level of change management but almost no development effort).

“MVP is not the smallest product imaginable, though; it is the fastest way to get through the build-measure-learn feedback loop with minimum amount of effort.” – Eric Ries, author of The Lean Startup

Fig. 4-4: Characteristics of a good MVP: delivers on goals for transformation; production-ready with supported customization techniques (not a proof of concept); goes beyond the essentials and delivers on the existing system; maximum value to business in the shortest time with minimal compromises; creates the foundation and supports building blocks for success; tests hypotheses and delivers on expectations; tests, learns, and activates; built to pivot and change after feedback.
Although the MVP approach naturally works with new greenfield implementations, you may have existing apps that are being replaced or migrated to another platform. Your scope should consider the existing functionality to ensure parity, but shouldn’t try to mimic or replicate the experiences of the outgoing application. Focus your MVP strategy on delivering the most value to the business sooner without limiting the scope to essentials (Figure 4-5). An MVP with a very broad scope that takes years to deliver may cease to be an MVP.

Fig. 4-5: The build-measure-learn feedback loop (ideas to code, code to product, product to data, data back to ideas) alongside the hierarchy of product qualities: functional, reliable, accessible, and emotional.
Drive expansion

All the approaches and techniques we have discussed so far in this chapter can help you create an effective digital transformation roadmap for your business application. They’re relevant to the Initiate phase, but you can apply these methods iteratively or revisit them for further expansion. This section focuses on expanding the usage and scope of business applications in different areas (Figure 4-6) to drive even greater business impact, while considering additional impact on the following:
▪ Security and compliance
▪ Application lifecycle management (ALM)
▪ Administration and governance
▪ User readiness and change management
▪ Environment strategy
▪ Limits and capacity
▪ Data and integrations

Fig. 4-6: Areas of expansion around users and feature adoption: new workloads, incremental changes, satellite applications, application standardization, integrations, mobile, aggregation, and pivot.
Security and compliance: Potentially high impact based on regional regulations.
ALM: If the solution remains the same, impact on ALM is minimal.
Admin and governance: Based on the environment strategy and the creation of additional production environments, you could see medium impact.
User readiness and change management: Full-fledged user change management and readiness effort is required.
Environment strategy: Creation of a new production environment impacts the environment strategy.
Limits and capacity: Although each user usually comes with their own capacity and integration API, capacity and storage could be impacted.
Data and integrations: Usually, data migration can be complex on a live system, and in case of additional environments, integrations might have to be replicated across environments.
Mobile
Ensuring support on multiple devices, at times with offline capabilities,
could be critical to adoption. In some cases, this needs to be a part
of the initial release (such as in warehousing systems in which goods
must be scanned when picked up), whereas in others it could be a
quick follow-up. Most out-of-the-box capabilities work as is on mobile
apps, with additional optimizations to support the mobile experience.
Our example sales mobile application offers the core sales application
features with additional targeted mobile experiences for field sellers
(Figure 4-8).
Security and compliance: Potentially high impact based on the regulations and need for mobile device management or mobile application management features.
ALM: If the solution remains the same, impact on ALM is minimal.
Admin and governance: Additional device policies could have an impact.
User readiness and change management: User readiness effort is required to ensure adoption on mobile.
Environment strategy: Should not have any significant impact.
Limits and capacity: It should not have any significant impact on user capacity; account for any variations for mobile consumption.
Data and integrations: Custom embedded integrations might require work to make them responsive for mobile.
Incremental changes
Incremental changes are primarily driven through the feedback and
change requests from users and the business. These changes help ensure
that the application meets the expectation of ongoing enhancement
and continues to build on top of the initial MVP to maintain relevance.
It’s still important to look at these improvement requests through the
business value and transformation lens (Figure 4-9).
Fig. 4-9 Incremental changes

Security and compliance: Changes within the approved SaaS service boundaries and data loss prevention policies should have limited security and compliance implications.
ALM: No major impacts if the solution doesn’t have new PaaS components.
Admin and governance: No expected impact as long as the data boundaries don’t change.
User readiness and change management: User readiness effort is required.
Environment strategy: Usually remains unchanged.
Limits and capacity: Additional integration and app workloads could have some impact.
Data and integrations: Assuming no integration changes, impact is minimal.
New workloads
Dynamics 365 provides several business applications that are built on the same scalable Power Platform. As end users adopt your application, you will find additional opportunities to onboard related workloads that could help eliminate data silos by enabling stronger collaboration and data sharing. For example, when you’re onboarding a marketing workload in addition to sales, the sales team can get visibility into the nurture programs their customers are part of, extend invites to marketing events, and so on. Similarly,
the accounts payable team could expand and move into a centralized
Security and compliance: This could be medium impact based on the app; core Dynamics 365 apps might have been already approved.
ALM: App-specific ALM requirements can impact your release and build pipelines.
Admin and governance: Some apps could require additional admin tasks.
User readiness and change management: User readiness effort is required, but existing users will benefit from a familiar UI.
Environment strategy: New apps on existing environments won’t have a major impact.
Limits and capacity: Additional apps can impact storage and tenant API capacity.
Data and integrations: Data and integration needs for the new app can have an impact.
Satellite applications
When deployed, your business application covers core use cases and
scenarios, some of which may only be relevant to a specific team,
region, or role. In such scenarios, you can deliver such targeted capa-
bilities via a satellite application. Satellite apps are a good opportunity
for user-developed apps and automations while using the data model
of the core application. You could also use these applications for in-
cubation before incorporating it in the core app. Regarding areas of
impact (Figure 4-12), it’s important to have strong governance around
the data model and creation of additional data stores, which can lead
to data fragmentation.
Integrations
Integrations can play a key role in eliminating data duplication and im-
proving data quality. Most importantly, they reduce the need for users
to navigate multiple applications, which prevents context switching.
The core integrations should be part of the initial phases and not be
left to future expansion phases. Even the out-of-the-box integrations
with existing productivity applications like Microsoft 365 can positively
impact adoption and make the information seamlessly available across
apps while enabling stronger collaboration. However, an abundance of
Fig. 4-12 Satellite applications

Security and compliance: If the data flows in accordance with data loss prevention policies, impact should be low.
ALM: Managing the lifecycle of satellite apps requires changes to existing ALM processes.
Admin and governance: As the satellite apps grow organically, the appropriate governance policies need to be in place to manage them.
User readiness and change management: User readiness effort is low for citizen-developed community apps, but might require readiness effort if they’re adopted more broadly.
Environment strategy: Depending on the ALM and citizen development strategy, the environment strategy is impacted.
Limits and capacity: Potential impact to capacity; you may also need licenses to support the makers and consumers if they’re not already covered by existing licenses.
Data and integrations: You should avoid fragmenting the data with additional apps using their own data stores, but it may be required in some cases.
Fig. 4-13 Integrations

Security and compliance: Depending on the data flows, additional security approvals may be needed for the integration pattern.
ALM: ALM process could be impacted depending on complexity.
Admin and governance: Additional monitoring and integration telemetry is needed to support reconciliation of failures.
User readiness and change management: Depending on frontend versus backend integration, the impact and readiness effort could vary.
Environment strategy: You may need to have stubs or downstream integration environments.
Limits and capacity: Potential impact to API consumption.
Data and integrations: Expect impact on data movement, and make efforts to ensure integrations are built to standard.
The cutover approach for old applications is critical; make sure to avoid
parallel usage leading to fragmentation of data and other aftereffects
(Figure 4-14).
Fig. 4-14 Aggregation

Security and compliance: Level of impact depends on data classification.
ALM: Existing ALM processes are changed.
Admin and governance: Admin processes are consolidated.
User readiness and change management: User readiness is required.
Environment strategy: This will have impact on environment strategy, requiring a potential merge.
Limits and capacity: Additional users and increased data volumes impact capacity.
Data and integrations: High impact with respect to data alignment and integration consolidation.
Application standardization
This model of growth is driven by creating a generic application that isn’t targeted at a specific process but at a host of similar processes. The application could even provide a framework that enables the business to onboard themselves. This can achieve hyperscale, in which you can broadly “t-shirt size” hundreds of processes or applications that serve more than one business process.
Fig. 4-15 Application standardization

Security and compliance: Once approved for appropriate data classification, more processes shouldn’t have an impact.
ALM: Existing ALM processes shouldn’t change for a specific template.
Admin and governance: There could be process-specific configurations required for each new workload on the template.
User readiness and change management: User readiness is required for each new process.
Environment strategy: Should have minimal impact.
Limits and capacity: Additional users and increased data volumes impact capacity.
Data and integrations: There could be an increase in the data generated by each process.
Pivot
Pivots aren’t necessarily expansion, but can trigger major change to an
application, which could fundamentally change and further broaden
its scope (Figure 4-16). For example, a targeted Power App used by a
small team could go viral within an organization, and you must pivot
to make it an organization-wide application.
Fig. 4-16 Pivot

Security and compliance: Application will need to undergo further scrutiny.
ALM: Existing ALM processes will potentially change.
Admin and governance: Admin and governance scope will need to be reevaluated.
User readiness and change management: User readiness might be required.
Environment strategy: Potential changes to the overall environment strategy.
Limits and capacity: Additional users and increased data volumes will impact capacity.
Data and integrations: Expect high impact to data and integration based on the pivot.
Going live with your MVP is the first step to driving value. You will iter-
atively refresh your transformation roadmap, revisit and refresh your
program goals, adopt new product capabilities, and drive meaningful
expansion while staying true to the core program value and transfor-
mation goals. Following this approach can establish your organization
as a leader in your industry.
The holistic value of SaaS is much broader; it goes beyond just the
technology and features of your application. You’re investing not only
in the technology but also in the research and innovation that drives
the product changes. Simplifying experiences and improving interop-
erability further reduces costs and drives business innovation.
Change stream
▪ Account for the various change streams that will impact the program, and ensure budget and timelines are adjusted to accommodate them.
In this chapter, we explore common industry methodologies and look at common deployment strategies and high-level views on change management strategy.
This helps deliver a solution aligned with business needs and allows for change management and higher user adoption.
The KPIs mentioned above are not exhaustive; customers have specific KPIs that cater to their business scenarios. It is important to acknowledge that each KPI the customer identifies and prioritizes has a direct impact on the functionality that would be deployed on the Dynamics 365 Business Applications solution. Consequently, having a definitive list of metrics to refer back to enables prioritization of the right use cases and allows customer stakeholders to gauge project success in a tangible way.
The key roles required for all projects can often be grouped into two
types of team structures.
▪ Steering committee The group of senior stakeholders with the authority to ensure that the project team stays in alignment with KPIs. As they monitor progress, they can help make decisions that have an impact on overall strategy, including budget, costs, and expected business functionality. Steering committees usually consist of senior management members and group heads whose business groups are part of the identified rollouts.
▪ Core implementation team This is the team doing the actual
execution of the project. For any Dynamics 365 implementation
project, core implementation teams should include project
The project may require additional roles depending on the scope and
methodology. Some roles may need to join temporarily during the
project lifecycle to meet specific needs for that period.
A common decision that most teams face is finding the right balance between delivering a high-value solution faster by using off-the-shelf capabilities versus extending the product capabilities to implement the business requirements and needs. Extending the Dynamics 365 applications not only requires initial development costs but also maintainability and supportability costs in the long run. This is an area where implementation teams should carefully revisit the cost impact.
In the cloud world with rapid availability of new features and capabil-
ities, there is a need to rethink the investments required for extending
the application. Refer to Chapter 15, “Extend your solution,” for more
details on assessing this impact.
Choose a methodology
Before we discuss choosing a methodology for your Microsoft
Dynamics 365 project, we need to understand why a methodology is
important for Dynamics 365 projects.
Waterfall model

A delivery execution model based on a sequential, steadily flowing (like a waterfall) series of phases, from conception, initiation, analysis, design, development, testing, and deployment through operations and maintenance.
Agile model

Agile development is not a methodology in itself, but an umbrella term that describes several Agile methodologies. Some well-known industry methodologies are Scrum, XP, DSDM, and Sure Step for Agile (for Business Applications).
▪ Scrum methodology A delivery execution methodology based
on Agile principles where requirements are in flux or unknown
prior to beginning a sprint. Sprints involve requirements analysis,
design, build, test, and user review. Sprints are a fixed duration
(usually 30 days), as illustrated in Figure 5-2.
Fig. 5-2: The Scrum cycle: sprints of fixed duration (usually 30 days) with a daily (24-hour) cycle.
Agile
An Agile implementation is an iterative approach that uses a number of iterations or sprints.

In Microsoft Dynamics 365 Business Applications projects, the majority of requirements are delivered by the packaged software. There are specific challenges and tasks beyond the out-of-the-box capabilities that need to be managed and mitigated as part of the project methodology, with the user stories contained in the form of a backlog. Using those stories, you carve out plans, and within them a number of iterations or sprints of development and testing are executed. Within each sprint, you have a number of user stories outlining the requirements.

Success by Design Framework is methodology agnostic and is aligned to ensure proactive guidance and predictable outcomes irrespective of chosen methodology. For more information, refer to Chapter 2, “Success by Design overview.”
The idea is to release software faster, take early and regular feedback,
adapt, and release again. This cycle continues until all the requirements
are met or the backlog is cleared.
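The backlog-to-sprints carving described above can be sketched as a simple greedy assignment (a hypothetical illustration; the story names and point values are invented, and real sprint planning also weighs priority and dependencies, not just capacity):

```python
# Illustrative sketch: carving a product backlog of user stories into
# fixed-capacity sprints. Stories are (name, story_points) pairs.

def plan_sprints(backlog, capacity):
    """Greedily assign user stories to sprints without exceeding
    the per-sprint capacity in story points."""
    sprints, current, used = [], [], 0
    for name, points in backlog:
        if used + points > capacity and current:
            sprints.append(current)   # close the full sprint
            current, used = [], 0
        current.append(name)
        used += points
    if current:
        sprints.append(current)
    return sprints

backlog = [
    ("Capture leads", 5),
    ("Qualify leads", 3),
    ("Quote generation", 8),
    ("Order handoff", 5),
    ("Mobile approvals", 3),
]

for i, sprint in enumerate(plan_sprints(backlog, capacity=10), start=1):
    print(f"Sprint {i}: {sprint}")
```

With a capacity of 10 points per sprint, these five stories land in three sprints; as user review feedback arrives, the remaining backlog is simply re-carved the same way.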
Fig. 5-3: An Agile execution flow for a Dynamics 365 project, organized by workstream (program management, analysis, design, training, development, deployment, operation, quality and testing, infrastructure, integration, and data). Preparation activities (conduct solutions overview, gather user training and business requirements, conduct detailed business process analysis, conduct fit gap analysis, set up DEV and other non-production environments, establish integration strategy, and gather data migration requirements) feed a requirements and configuration solution backlog and a custom coding backlog. Sprints then run a daily cycle of sprint planning meetings, script creation for testing, sprint configuration and development of interfaces, daily builds, sprint testing, technical previews, and sprint post mortems, followed by UAT, building the production environment, finalizing the solution specification, solution testing, final data migration to production, go live, and transition to support.
The phases, milestones, and deliverables are clearly defined, and you
only move to the next phase when a prior phase is completed.
[Figure: hybrid methodology — analysis (process and functional requirements, technology requirements, fit-gap analysis) occurs once per project; design (solution design, functional design, technical design, integration and interface design, environment specification) occurs once per release; development and iteration testing run once per iteration, with multiple cycles per release; end-to-end testing, performance testing, UAT (with UAT scripts), solution testing, data migration, production environment preparation, training plan, and transition to support occur once per release]
With this approach, we can manage initial activities, like initiation and
solution modeling, and final activities like system integration testing,
user acceptance testing, and release to production, in a noniterative
way. Then the build activities, such as requirement detailing, design,
development, and testing, are completed with an iterative approach.
This helps provide early visibility into the solution and allows the team
to take corrective actions early in the overall cycle. This approach
combines the best of the Waterfall and Agile approaches and works
well for implementations of any size.
The idea is to get early feedback and validation from the business team
on the business requirements scope and understanding. This approach
allows the project to showcase working solutions faster than other
methodologies and achieves a higher rate of implementation success.
Define deployment
strategy
A deployment and rollout strategy describes how the company is
going to deploy the application to their end users. The strategy chosen
is based on business objectives, risk propensity, budget, resources, and
time availability, and its complexity varies depending on each
implementation’s unique scenarios. There are several key factors to consider
prior to choosing any of the approaches detailed below.
▪ What is the MVP (minimum viable product) needed for faster value
to customers, with continued enhancements planned for later?
Refer to Chapter 4, “Drive app value,” to define an MVP strategy.
▪ Is this a multi-country rollout or single-country rollout?
▪ Is there a need to consider pilot rollouts prior to wider team rollouts?
▪ What are the localization requirements, especially in scenarios of
multi-org or multi-region rollouts?
▪ Do we need to phase out a legacy application by a certain key
timeline or can it be run in parallel?
Big-bang
In the big-bang approach, as the name suggests, the final finished
software solution is deployed to production and the old system is re-
tired. All users start using the new system on the go-live date. The term
“big-bang” generally describes the approach of taking a large scope of
work live for the entire user base of your organization simultaneously.
The risks of this approach are that the entire project could be rushed,
important details could get ignored, and the business processes
transformed may not be in the wider interest of the organization. In an
aggressive big-bang rollout, the risks are magnified due to the
combination of a large scope and a shortened rollout period.
Change management can also suffer as people are less inclined to use
the system that may not be solving the business problems it was meant
to solve.
With such large changes happening, there is a potential for issues to arise
post go-live and falling back on the older system is very hard, if even
possible. It is critical to ensure proper contingency planning for business
continuity, and support mechanisms to deal with post go-live issues.
Large project scopes are at much greater risk with the big-bang
approach: the delivery of finished software to production takes a longer
timeline, and there is more to organize during the transition. If the
final rollout runs into challenges, the wider user community is impacted
due to the resulting delays.

If the Waterfall methodology is used for the implementation, the end
users don’t have a feel for the real system when it is finally rolled out.
However, if a hybrid methodology is used, it is possible to involve end
users from the beginning, keeping them updated on what is landing at
go-live and keeping any surprises in check. For large and complex
implementations, big-bang projects can require many months or even
years to complete.

Fig. 5-6

Pros
▪ Shorter, condensed deployment times, minimizing organization disruption
▪ Lower costs from a quick rollout without loss of overall work, but with an added need to account for resource costs to manage the complexity
▪ Same go-live date for all users
▪ Reduced investment in temporary interfaces

Cons
▪ May not accommodate rapidly changing market scenarios and product capabilities, leaving a lack of solution alignment at deployment and causing a negative impact for end users
▪ Daily task delays from users learning to adapt to the new application without any prior experience
▪ Transitioning back to legacy systems can be costly and difficult, if even possible
▪ Any failures during the rollout have a high impact on the maximum number of users

Parallel

▪ In this approach, both the legacy and the new systems are run until the
new solution is fully proven.
▪ There is much more effort required from the users as they double-key
information.
▪ This rollout may be preferred by projects where the business risk is
very high and cannot be easily mitigated by testing. At the same
time, we need to be mindful of making sure the team is not cutting
corners on testing and training efforts.
▪ This rollout is less and less popular due to the extra efforts and
costs involved in keeping two systems.
Pros
▪ Less risk
▪ Users take their time to gradually plan to migrate to the new system

Cons
▪ High efforts and high costs
▪ The need for data duplication can get complicated. We recommend having a clear exit plan that is time or workload based, for example, all existing cases closed in the old system.
Define a change
management strategy
Adoption and change management planning is a critical aspect of
implementation strategy. Change is never easy for any organization
and requires thoughtful planning and a structured approach to
achieve high adoption and usage. A change management strategy
with a clear action plan ensures smooth rollouts and alignment with
stakeholder and end user expectations. In the absence of a planned,
structured approach, users lack the preparation, support, and skills
they need to succeed in change. PROSCI’s framework, which is followed
at Microsoft, describes the three sequential steps that are followed in
change planning.
Preparing for change
The first phase in PROSCI’s methodology helps change management
and project teams prepare for designing their change management
plans. It answers these questions:
▪ How much change management does this project need?
▪ Who is impacted by this initiative and in what ways?
▪ Who are the sponsors we need to be involved with to make this
initiative successful?
Managing change
The second phase focuses on creating plans that integrate with the
project plan. These change management plans articulate the steps that
you can take to support the users being impacted by the project.
▪ Communication plan Communications are a critical part of the
change process. This plan articulates key messages that need to go
to various impacted audiences. It also accounts for who sends the
messages and when, ensuring employees are hearing messages
about the change from the people who have credibility with them at
the right time. Communication is an understated yet very significant
aspect that keeps the various teams connected throughout the entire
journey. Some of the key communication aspects to consider are:
▫ Providing a sneak peek into what is in it for end users
▫ Launch and availability communications
▫ Project sponsor vision and direction communication
▫ Communication frequency and approach
▫ End user feedback incorporation
▫ Status reporting, including steering committee reports
▪ Sponsor roadmap The sponsor roadmap outlines the actions
needed from the project’s primary sponsor and the coalition of
sponsors across the business. In order to help executives be active
and visible sponsors of the change, it identifies specific areas that
require active and visible engagement from the various leadership
teams, what communications they should send, and which peers
across the coalition they need to align with to support the change.
▪ Training plan Training is a required part of most changes and is
critical to help people build the knowledge and ability they need
to work in a new way. The training plan identifies the scope, the
intended audience, and the timeframe for when training should
be delivered.
Reinforcing change
Equally critical but often overlooked, this third phase helps you create
specific action plans for ensuring that the change is sustained. In this
phase, project and change management teams develop measures and
mechanisms to measure how well the change is taking hold, gauge
user adoption, identify alignment with KPIs, and correct gaps.
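As a minimal illustration of such a measure, user adoption can be tracked as the share of targeted users actively working in the new system. This is a hypothetical sketch; the figures and the KPI threshold are invented:

```python
# Hypothetical sketch: measuring adoption in the "reinforcing change" phase
# as the share of targeted users actively working in the new system.
# The figures and the 85% KPI threshold are invented for illustration.

def adoption_rate(active_users: int, targeted_users: int) -> float:
    """Return the fraction of targeted users actively using the new system."""
    if targeted_users <= 0:
        raise ValueError("targeted_users must be positive")
    return active_users / targeted_users

rate = adoption_rate(active_users=420, targeted_users=500)
print(f"Adoption: {rate:.0%}")
if rate < 0.85:  # agreed KPI threshold (invented)
    print("Gap identified: plan corrective reinforcement actions")
```

In practice, the mechanism matters less than the habit: agree on the measure, collect it regularly, and act on the gaps it reveals.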
Conclusion
When a team defines an implementation strategy, they set clear
expectations and guidelines for the team and wider business on how
the project goals are going to be attained. This helps define a clear
translation of business goals into a structured implementation plan.
Guide 6
Solution architecture design pillars

You must be sure about the solution before you start building, because failure is not an option.
Introduction
You can’t have a solution without first having a vision. When you
know the answer you’re looking for, only then can you find a solution
to get you there.
But it’s not always easy to know and articulate what you want, let
alone identify the elements that are essential for creating a blueprint of
your solution.
Solution architecture
design pillars
Most of us know what “architecture” means in the building industry:
it includes the job of the architect, the scope of the work, and what’s
ultimately delivered. The architect starts with the project requirements
and the budget, and then works on a blueprint of the design. Once the
physical labor begins, the foundation is laid, the house is built, and the
interior is completed. Many years of maintaining and improving the
building may follow. But why and when do you need an architect to
design a solution for your organization?
Let’s take the simple case of building a new house for a dog. In most
cases, this is a single-person job with a general vision of the final
product. Building a doghouse is cheap, and the risk of failure is low, so a
trial-and-error approach is one option.
Building blocks of
solution architecture design
The journey starts with a Vision. The vision details are articulated in
a Business strategy. To build the solution, Processes, People, and Data
need to be joined in a Solution architecture (Figure 6-1). A solution
strategy outlines how those elements can be addressed by using the
technology of Dynamics 365 apps. Project and change methodologies,
supported by governance and control methods, define the workflows
and the delivery.
Solution architecture design incorporates the business vision and its
implementation into a blueprint. The first version is created as part of
the pre-sales process and forms the initial high-level understanding of
what you plan to build.

Fig. 6-1 [diagram: Vision and Business strategy flow into Processes, People, and Data, joined in a Solution architecture enabled by the technology of Dynamics 365 apps and delivered through methodology]
Vision

A vision is the desire to achieve something, to change the present and
improve the future. When an organization decides to go through a
transformation, it’s usually because one or more people had the ability
to predict what was coming and expressed a desire for change, or a
change was forced by industry disruption or regulations. Such a change
could be in the form of a mission statement or a business case listing
its objectives, which might include:
▪ Modernizing legacy enterprise business applications.
▪ Finding unified and faster ways of working.
▪ Earning higher profits.
▪ Achieving better service or product quality.
▪ Improving the user experience.
▪ Empowering users to drive greater value by building apps.
As the vision comes together, the plan for achieving it can start
taking shape.
Business strategy
Every vision serves a purpose, as does every organization, and any
solution needs to be aligned with this purpose. A business strategy
supports your vision by answering fundamental questions, such as:
▪ Why are you making this change, and what are the anticipated
benefits? What is the business value sought by the solution?
▪ Where do you imagine the organization will be in five, 10, or 20 years?
▪ What business capabilities can your organization offer with the
new solution? What business processes can you run? What
information and data would you like to record and report on, in line
with your organization’s services or products?
▪ Which clients, customers, or people inside the organization will be
served by the new solution, and who will be affected by it?
▪ Would you like to improve your current line of business or are you
open to a new industry?
▪ When do you plan to have the vision materialized? What is the
high-level timeline? And do you intend to deliver the solution at
once or grow in stages?
▪ Where are the regions, geographically and in terms of business
functions, to which the solution will apply? Will you apply it to all
or just some of them?
▪ Who is going to plan, design, and deliver the solution? Do you
have a preferred partner, or do you need to select a vendor? Who
will support and maintain the solution post go-live?
▪ How will you incorporate technology into your solution? (This is
the first step of solution design and the link to your solution strategy,
as well as the business case for IT transformation.)

Why should you define your processes?
It creates consistency by allowing for consolidation and unification of processes across locations, teams, and systems.
It reduces complexity by standardizing and rationalizing previously complicated and cumbersome procedures.
It eliminates guesswork and potential ambiguity by streamlining operations and improving communications.
It promotes productivity by eliminating inefficiencies and establishing one workflow for all users.
It guarantees quality by maximizing attention to detail and ensuring work is done in a pre-defined, optimized way each time.
It boosts morale by helping team members take pride in mastering the process, refining their skills, and avoiding mistakes and missed deadlines.
It encourages continuous improvement by giving team members who follow the processes a chance to give input on how to improve them.
It increases user acceptance by providing a higher-quality experience at each launch.

Processes

Process architecture is a commonly understood, shared view of all
business processes that an organization uses to deliver a product or
service. It represents how an organization operates, in a structured order.
Complete, accurate, and well-organized process architecture is the first
and most important pillar for a sound solution design. It confirms end-
to-end business understanding, provides a structure for understanding
the scope, and serves as the basis for testing and training. It also forms
a map for requirements gathering. The processes are constructed at
different levels to reflect specific areas, functions, locations, and teams.
Process architecture is often referred to as a target operating model, a
process library, or a catalog.

Any transformation starts with defining your processes, which is also a
good time to revisit and improve them.

Dependent activities

While we’re not going to take a deep dive into the business process
management life cycle and capability maturity, following the Success
by Design framework can help you identify important elements of your
solution design.

Process architecture includes multiple dependent activities:
▪ Scope management Defining the scope of the solution is the
first step of the design. Ideally, you have a baseline process taxonomy
with at least three levels of the process architecture. Start by
mapping requirements against it and marking which processes are
in scope. You can add to and take away processes from your initial
process library.
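As a rough illustration of the scope-management idea above, a baseline process taxonomy with in-scope marking can be modeled as a small data structure. This is a hypothetical sketch; all process names are invented:

```python
# Hypothetical sketch of a three-level process taxonomy with in-scope
# marking, as described under "Scope management." Process names are invented.

taxonomy = {
    "Order to cash": {
        "Manage sales orders": ["Create order", "Confirm order"],
        "Invoice customers": ["Generate invoice", "Post invoice"],
    },
    "Procure to pay": {
        "Manage purchase orders": ["Create PO", "Receive goods"],
    },
}

# Level-three processes the project has marked as in scope (illustrative).
in_scope = {"Create order", "Confirm order", "Generate invoice"}

def scoped_processes(taxonomy, in_scope):
    """Yield (level1, level2, level3) tuples for processes marked in scope."""
    for l1, subs in taxonomy.items():
        for l2, leaves in subs.items():
            for l3 in leaves:
                if l3 in in_scope:
                    yield (l1, l2, l3)

for path in scoped_processes(taxonomy, in_scope):
    print(" > ".join(path))
```

Mapping requirements against such a structure makes it easy to add or remove processes from the initial process library as scope decisions are made.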
Key deliverables related to processes

[Figure: examples of linear process flows — product prototyping (conduct competitive analysis, put ideas into ideation pipeline, test against strategic imperatives, select ideas for conceptual design, conduct ongoing research) and inbound logistics (receiving, put away, count, wave processing, picking, packing, planning)]
▪ Business analysis Initially, it’s likely that business users don’t
know or understand the application, and the tech partner doesn’t
understand the business, so a business analysis helps to join them
together. A business analyst gathers requirements and links them
to processes; conversely, a good process structure drives the
requirements gathering by asking all necessary questions.
▪ Requirements management Every functional and nonfunctional
requirement needs to be linked to at least one process. If an
applicable process doesn’t exist, you must slot it into the relevant
section of the process architecture. Azure DevOps is a good tool to
use for this success measure, as shown in Figure 6-3.
▫ As part of the process, data goes in as input exchanged
between people and applications, and data goes out as output in
the form of documents, analysis, and reports (Figure 6-4).

Fig. 6-4 [diagram: solution design at the center of people, processes, and data]

▪ Test management Once the solution is designed and developed,
the process architecture establishes a baseline for testing.
When you test every process, you ensure that every requirement is
covered. For more information, refer to Chapter 14, “Testing strategy.”
▪ Training Process architecture also defines and drives your training
content. You can use your process flows and guides as a first draft
for your training materials. Learn more in Chapter 7, “Process-
focused solution,” and Chapter 2, “Success by Design overview.”
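The rule that every requirement must be linked to at least one process can be expressed as a simple validation. This is an illustrative sketch only; the requirement IDs, titles, and process names are invented, and in practice this linkage would be tracked in a tool such as Azure DevOps:

```python
# Illustrative check (names invented): every functional and nonfunctional
# requirement should be linked to at least one process. Requirements with an
# empty process list must be slotted into the process architecture.

requirements = {
    "REQ-001": {"title": "Record customer payments", "processes": ["Accounts receivable"]},
    "REQ-002": {"title": "Mobile barcode scanning", "processes": []},
}

def unlinked_requirements(requirements):
    """Return IDs of requirements not linked to any process."""
    return [rid for rid, req in requirements.items() if not req["processes"]]

print(unlinked_requirements(requirements))
```

Running such a check regularly surfaces requirements that have no home in the process architecture before they become design gaps.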
Key deliverables related to people
▪ Organizational structure
▪ Cross-functional process flows and maps
▪ List of business users or personas
▪ Dynamics 365 security roles list
▪ Mapping between personas and security roles

People

Even though not every process or activity is performed by a person,
there’s always a step or output that involves people, whether internal
(employees and contractors) or external (customers and vendors). This
is the key reason why and how the solution design is influenced by the
people pillar. People shape the solution design in many ways, including:
▪ Geographical location
▪ Time zones
▪ Languages
▪ Customs
▪ Internal and external hierarchies
▪ Organizational structure
Key deliverables related to data
▪ Data governance strategy
▪ Data architecture
▪ Data migration and integration strategy
▪ Data quality strategy

Data

The third pillar of solution design is data. A system with inaccurate,
misleading, or partial data will lead to a failed implementation, so you
must understand your data and how it fits into the overall solution.
With the right data, you can identify actionable and analytical
information that improves business intelligence. Think about:
▪ Master data This is static information, such as details about
customers and products, that you use and exchange in your business
activities.
▪ Transactional records This refers to recorded business information,
such as customer payments and expense reports, that you
create, store, and report as part of your business process.
▪ Documents This refers to documentation, such as sales orders
and customer invoices, that you create as part of your business
process.
▪ Reports This is organized or filtered information, such as a trial
balance or aged debt, serving as input or output for your business
activities.
Solution strategy

Your solution strategy is a consolidated view and approach that defines
your overall solution. A solution blueprint is a living document with
several review points (Figure 6-5) during a project’s lifespan to help
you identify and take necessary actions, mitigate risks, and resolve
issues as they arise. In the Success by Design framework, the blueprint
is considered essential for a project’s success, and provides a view of
the overall solution architecture and dependent technologies.

Fig. 6-5 [timeline: weeks 1–52 from kickoff to go live, spanning the Initiate, Implement, Prepare, and Operate phases]
The Solution Blueprint Review workshop is a mandatory part of the
solution design experience, and helps you keep track of your solution
design’s progress to ensure that your vision and objectives are still
viable. It also allows solution architects and the implementation team
to review and gain an understanding of your:
▪ Program strategy
▪ Test strategy
▪ Business process strategy
▪ Application strategy
▪ Data strategy
▪ Integration strategy
▪ Intelligence strategy
▪ Security strategy
▪ Application lifecycle management (ALM) strategy
▪ Environment and capacity strategy
For more information and a list of activities in each Success by Design
phase, refer to Chapter 2, “Success by Design overview.”

Capturing these details helps you understand the project, and validates
your solution design via the complete and well-organized processes,
data, and people enabled by Dynamics 365.
Technology
While technology does not drive all the processes, it provides the
backbone of products and services required to fulfill your business
strategy. Your processes dictate which technology to use and when,
and that technology will bring your digital transformation to life. For
example, the Dynamics 365 Sales app provides value-added details
about your organization’s lead-to-opportunity pipeline. Other examples
of technology that’s under the hood to support users include:
▪ Infrastructure as a service (IaaS)
▪ Software as a service (SaaS)
▪ Integration tools
▪ Business intelligence tools
▪ Azure AI
▪ Azure Machine Learning
▪ Data connectors
▪ Customer portals
▪ Mobility solutions
The technology and system selection is usually performed during the
sales process as part of the first high-level fit-gap assessment.
It is easy to say that technology can be used to solve any business
process scenario, but technology is simply what turns the gears for your
processes, people, and data. It’s important to include all stakeholders
in the design decisions, because processes don’t always align with the
technology. This is what we call the “gap” in a requirement, and it’s
where you might decide to purchase an add-on solution or develop an
extension to meet the needs of the process (Figure 6-6). Think through
the different options to find the right balance for your solution, but be
sure to challenge the business requirements before customizing the
system. You might be surprised to find out that what seemed like a
“must-have” customization is more of a “nice-to-have” feature.

Fig. 6-6 [flow: identify potential solutions using fit-gap analysis → conduct the gap solution design workshop to review proposed solutions]

After the fit-gap analysis is completed, the gap solution design workshop
helps to discuss and further assess the identified potential solutions to
meet the requirements. This workshop provides structure to help analyze
the gap solutions in terms of risk, ease of use, long-term support
considerations, and product roadmap alignment, with the purpose of
selecting the solution that best fits the process and requirements.

For more on this topic, refer to Chapter 15, “Extend your solution.”
For more information about the Success by Design framework, refer to
Chapter 2, “Success by Design overview.”
Methodologies
A methodology is a collection of methods used to achieve predictable
outcomes. Good methodology also demonstrates why things need
to be done in a particular order and fashion. The Success by Design
framework is a higher-level abstraction through which a range of
concepts, models, techniques, and methodologies can be clarified.
It can bend around any methodology, including The Open Group
Architecture Framework (TOGAF), the Microsoft Solutions Framework
(MSF), and the Zachman Framework.
▪ Project management
▪ Change management
▪ Governance and control
For more detailed information about
project management approaches, review
Chapter 5, “Implementation strategy.”
Project management
There are multiple project management methodologies and approaches,
and methodology for enterprise resource planning (ERP) is a topic that
can generate more heat than light. Partners often rely on their own
branded methodology to reassure customers about their governance.
Project management approaches can be grouped into three core categories:
▪ Waterfall
▪ Agile
▪ Hybrid
Change management
Project management (the “how”) and change management (the “who”)
are both tools that support your project’s benefits realization. Without
a change management plan in place, your organization’s objectives
are at risk. To drive adoption with your end users and accelerate value
realization, apply a vigorous change management approach, which is
most effective when launched at the beginning of a project and
integrated into your project activities.
or possibly even stakeholder management, but such a limited view of
governance may cause an organization to ignore core elements
required for success (Figure 6-7).

Fig. 6-7
Governance domains
Project management
▪ Weekly status reports
▪ Project plan and schedule
▪ Resource management
▪ Financial management
▪ Risk management
▪ Change management
Artifacts: project status meetings and reports, ADO (project tools), governance framework

Stakeholder engagement
▪ Executive steering committee
▪ Communication and promotion
▪ Escalation of unresolved issues
▪ Solution decision making and collaboration
▪ Business alignment and tradeoffs
Artifacts: project change management, project status report, governance framework

Solution management
▪ Oversight of functional and nonfunctional attributes
▪ Data, integration, infrastructure, and performance
▪ Customizations and process adaptation
▪ Security and compliance
Artifacts: architecture board, solution framework

Risk and issues management
▪ Objective risk assessment and mitigation planning
▪ Issue identification, ownership, and resolution
▪ Prioritization and severity assessment
▪ Risk response management
Artifacts: project status meetings and report, ADO (project tools), governance framework

Change control
▪ Identification and prioritization of requirements and solution attributes
▪ Tracking and reporting via ADO
▪ Organization and process changes
Artifacts: statement of work, project artifacts, organizational change management

Organizational change and communication
▪ Intentional and customized communication to stakeholders
▪ Periodic effectiveness check
▪ Solution adoption and operational transformation
Artifacts: communication plan, stakeholder engagement management, organizational change management
▪ All project leaders, including the project manager, solution architect,
and technical architect, closely communicate and align
toward the common goal of solution deployment.
▪ The technical architect leads and orchestrates the technical
architecture across all application components to ensure optimal
performance.
▪ The solution architect guides users toward the new vision and
transformation.
▪ The change manager ensures all internal and external communications
are in place.

Read more in Chapter 5, “Implementation strategy,” and Chapter 8, “Project governance.”
Conclusion
As we’ve explained in this chapter, solution architecture follows a
systematic approach that identifies your desired solution and the building
blocks needed to construct it. Solution architecture design takes
your business vision and breaks it into logical sections that become a
blueprint for building your solution. Here is a list of the key steps for
solution architecture design:
▪ Define your vision.
▪ Create a business strategy.
▪ Outline your business processes.
▪ Determine how people are connected in and around your
organization.
▪ Know your data.
▪ Develop a solution strategy using technology and tools.
▪ Apply project management, change management, and governance
and control methodologies.
Guide 7
Process-focused solution
Introduction
Fast changing technologies and markets demand a
rapid rhythm of delivering products and services.
Business applications improve inefficient processes by automating,
optimizing, and standardizing. The result: a more scalable business solution.
When defining your business model, assess the state of your processes.
Some organizations may want to use new technology to optimize
existing processes through automation. Others are moving to e-commerce
or outsourcing parts of the operation, which requires new business
processes. Still others might be moving to lean manufacturing.

[Figure: the solution takes shape from defining the scope of the implementation, identifying opportunities for optimization, and defining your requirements]
and organization layers. It is common for senior managers to be
surprised by how processes actually work on the ground. It is essential
that the right team is involved in mapping business processes to the new
technology. This team should be guided by the business stakeholder
who owns the process in the business operation. This individual is
charged with making the right decisions about structure and investment
in the process to achieve their goals. The stakeholder is supported by
one or more subject matter experts (SME) who are familiar with how the
business process operates in the real world. These experts can provide
depth around the various scenarios under which a process is executed.
Their knowledge, coupled with a mindset open to new ways of working,
helps drive the right conversation on process mapping.
When reviewing new process diagrams, the team should consider the
following:
▪ Do the business SMEs recognize their business in the processes?
▪ Do they provide adequate coverage of the business activities in scope?
▪ Do they show the key interconnections between the processes?
▪ Do they show meaningful end-to-end business flows?
▪ Will they represent a good level at which the project team can
communicate with business stakeholders on risks, progress, decisions,
and actions?
▪ Do they illustrate processes sufficiently to identify opportunities
for improvement?
▪ Are they sufficiently well defined to be used in the next stage,
where the processes are applied to drive the design in the
Dynamics 365 system so that it reflects the improvements and the
project goals and vision?
These process diagrams help define the baseline for the organization's business processes.
The key to getting the processes mapped well and quickly is to ensure
the following:
▪ The right people are in the room to direct the mapping.
▪ Any mapping software or tools help facilitate the rapid definition
of the process and do not slow things down.
▪ The level to which the process is described is directly proportional
to its importance to the business. For example, once the end-to-end
scope of a widely used and very standard finance process is
defined to a certain level, it should not be further broken down
into step-by-step processes.
▪ The process mapping effort is interactive, not a series of docu-
ments that go through long handoffs and approvals.
▪ The tools for the mapping can be anything from sticky notes, to
Microsoft Visio, to specialized process drawing software. The process
maps are visual and not hidden in wordy Microsoft Excel lists.
This is a vital step that needs to be executed early. Plan so that processes
are defined as fast as possible using the simplest tools in the workshop,
like sticky notes, pens, and whiteboards. The drawings and comments
can then be transferred offline to more sophisticated applications.

Remember that doing this mapping at the start of the project implementation
is important. It could become a fundamental part of helping
the analysis of the opportunities for process improvement and creating
the right baseline for the project.

Mapping business processes is not a one-time effort that occurs only
during the initial implementation. Rather, it should be a continuous
focus to ensure the processes are aligned to the evolving needs of
a business and are making the best use of the latest capabilities in
the product.
Defining the scope of the implementation

A business application implementation is essentially the delivery of a
new capability for end-to-end business transactions using the application.
Processes are the foundation for the definition of the solution
scope. Processes embody many of the properties that make for a good
scope definition.
▪ Business processes are well understood by the customer as they
are expressed in the languages they know best.
Fit to standard and fit gap analysis

Start your implementation project with business processes. The
advantages of starting with this approach using the process catalog
are as follows.
▪ It promotes the reduction of customizations by supporting the de-
livery of the underlying business needs through configuration rather
than going directly into fit gap analysis with detailed requirements
that may be influenced by the existing or legacy system.
▪ It helps reduce the risk of missed requirements as the evaluation
of the fit with the new system is based on the richer and broader
context of business processes. As these business processes are the
natural language of business users, their evaluation is more com-
prehensive, meaningful, and effective compared to working with a
list of requirements.
▪ The process catalog can direct the fit-to-standard assessment by
working iteratively through the processes, starting with the high-
er-level core processes, and then working with the more detailed
sub processes and variants. This also helps the business users more
clearly see how well their underlying business requirements are
being met within the system.
▪ The project is more likely to adopt modern recommended stan-
dard processes embedded in the system.
▪ It creates higher-quality solutions as the processes are tested
by Microsoft and are more likely to be market-tested by others,
whereas custom developments and variants, especially complex
ones based on the legacy system, will need to be specifically
validated by the customer.
▪ The standard solution allows for more agility in adopting related
technologies and, by keeping to standards where possible, makes it
easier to add the custom extensions that deliver real value.
▪ It enables a faster delivery of the new solution; there is no need to
wait longer for a custom solution to be developed.
▪ Standard processes are more easily supported by internal and
external support teams, including Microsoft Dynamics Support.
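The structured, hierarchical process catalog described above can be illustrated with a small sketch. This is purely an illustrative model with hypothetical process names; it is not a Dynamics 365 API or artifact:

```python
# Hypothetical sketch of a hierarchical process catalog used to drive
# fit-to-standard workshops: start with end-to-end processes, then work
# down through subprocesses and variants.

from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    level: int                      # 1 = end-to-end, 2 = subprocess, 3 = variant
    children: list = field(default_factory=list)

    def walk(self):
        """Yield this process and all of its descendants."""
        yield self
        for child in self.children:
            yield from child.walk()

# Example catalog entry (illustrative names only)
order_to_cash = Process("Order to cash", 1, [
    Process("Create order", 2, [Process("Create order (web channel)", 3)]),
    Process("Add products to order", 2),
    Process("Price order", 2),
])

# Iterate through the catalog in workshop order: core processes first
for process in sorted(order_to_cash.walk(), key=lambda p: p.level):
    print("  " * (process.level - 1) + process.name)
```

Working top-down through such a catalog mirrors the recommended fit-to-standard iteration: higher-level core processes first, then the more detailed subprocesses and variants.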
Gap analysis
As discussed in the previous section, adopting a process-centric
solution within Dynamics 365 has clear benefits. However, there may
be specialized functions that are not part of the standard solution, as
shown in Figure 7-3. Those are identified with the fit gap analysis. After
the configurations are set, you can look for gaps in the processes
and decide whether to customize.
Fig. 7-3: A Dynamics 365 process flow with an extension (Start → Process step 1 → Process step 2 → Process step 3 with extension → Process step 4 → End).
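The fit gap decision described above can be expressed as a small classification sketch: each process step is either a fit (delivered through configuration) or a gap to be closed by an extension or an ISV solution. The step names and fields here are hypothetical, not product data:

```python
# Illustrative fit gap classification over process steps: check each step
# against standard configuration first; only the remainder is a gap to be
# closed by an extension or a third-party (ISV) solution.

steps = [
    {"step": "Create order",   "standard_fit": True},
    {"step": "Price order",    "standard_fit": True},
    {"step": "Custom rebate",  "standard_fit": False, "close_with": "extension"},
    {"step": "Freight rating", "standard_fit": False, "close_with": "ISV"},
]

fits = [s["step"] for s in steps if s["standard_fit"]]
gaps = {s["step"]: s["close_with"] for s in steps if not s["standard_fit"]}

print("Fit to standard:", fits)
print("Gaps:", gaps)
```

Keeping the classification tied to the process steps (rather than a free-floating requirements list) preserves the business context when deciding whether each gap justifies a customization.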
Third-party solutions
An alternative to extending is to buy an existing solution from a
third-party vendor, also known as an independent software vendor
(ISV). This option is more common when there is a significant gap
and developing the functionality in-house would be complex and costly.
Sometimes you don’t have enough resources, budget, or expertise to
develop and maintain such a significant solution.
Fig. 7-4: A process flow where process 4 is fulfilled in an external app or ISV solution (Start → Process step 1 → Process step 2 → Process step 3 with extension → Process 4 in external app or ISV → Process step 5 → End).
Business process mapping helps draw the as-is processes to understand
how the business is running right now, and the to-be processes
to show how they will work in the future. This also emphasizes the
importance of business process mapping early in the project.

After determining what you keep or build, create the list of requirements,
aligning them to the business processes for an accurate picture.
You can use tools equipped for the traceability of the requirements.
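A minimal sketch of such requirement-to-process traceability follows; the identifiers and fields are invented for illustration and do not come from any specific tool:

```python
# Illustrative requirements traceability: each requirement is linked to the
# business process it supports, so coverage can be checked in both directions.

requirements = [
    {"id": "REQ-001", "text": "Order must be priced before confirmation",
     "process": "Price order"},
    {"id": "REQ-002", "text": "Orders can hold multiple products",
     "process": "Add products to order"},
]
processes = ["Create order", "Add products to order", "Price order"]

# Processes with no requirement traced to them deserve a second look
covered = {r["process"] for r in requirements}
untraced = [p for p in processes if p not in covered]
print("Processes without traced requirements:", untraced)
```

Running a check like this after each requirements workshop surfaces the processes that are in scope but have no stated requirements, which usually signals a missed conversation rather than a genuinely empty process.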
Process-centric implementation lifecycle

At go-live, a business application such as Dynamics 365 sees system
users performing tasks and activities as part of a business process.
The users are not executing on gaps, fits, or isolated requirements.
It is important to keep this in mind when thinking of the full project
lifecycle. The ultimate test of project success is when the system
is in operation; the design, build, and testing actions are intermediate
steps towards that goal. It is recommended to drive these phases with
the end-to-end business process as the framework upon which all the
related activities are planned, performed, and measured.
Design
When creating the solution blueprint, the vision of the overall solution
architecture gives solidity and certainty to the project scope. This is the
transition from scope as business processes to scope as designs within
the system. As it is expressed in business language, it helps to ensure
that business users can be usefully engaged in the review and approval
of the scope and proposed solution. There are, of course, other views
of the solution architecture in terms of data flow, systems landscape,
and integrations.
With the high-level process defined, you can start breaking down the
end-to-end business processes into meaningful subprocesses, as shown
in Figure 7-6.
Fig. 7-6: An end-to-end process (Account → Contact → Quote → Order → Invoice, with product and pricelist data via dual-write) broken down into subprocesses (Start → Create order → Add products to order → Price order).
The project can also use the process-focused solution to conduct reviews
of the solution with the wider business to confirm that the project is
delivering on expectations. These reviews are much harder to do well
without the supporting process definition. That’s because without the
processes in focus, the project cannot speak the language of the busi-
ness. Many organizations are expecting some degree of business process
transformation as part of the implementation of Dynamics 365.
Reviews of the solution in which the new processes are first explained
in business process terms help ground the subsequent discussion
of the current state of the solution in clear, well-understood business
language. This significantly improves the quality and usefulness of the
business review by minimizing the technical implementation jargon
and instead concentrating on the “business of business.”
Testing
A project that has taken a process-centric view reaps the benefits during
testing. Other than the necessarily narrow focus of unit testing and some
non-functional tests, almost all other test types should be rooted in
some form of process definition. When there is an existing set of process
definitions and the associated designs, and a system with the designs
implemented, there are many advantages to the testing methods.
See Chapter 14, “Testing strategy,” for more details on the overall
approach to testing.
Training
Training to use business systems like Dynamics 365 applications is
fundamentally learning how to conduct day-to-day business processes
using the system. Most of the functional roles in the system interact
with multiple functions and processes. Rarely do roles exist in a vacuum
of their own process, but instead interact with other processes. Having
business process flows defined and available in the process catalog,
including flows across the seams of system roles, helps both to collate
process-based training materials and to guide the training.
During the design process, if the roles are mapped against the process-
es as part of security design, they can be directly used for testing and
for generating role-based training.
Even where some of the system roles may be very specifically restricted
to a specialized function in the system, there is often a need to under-
stand the upstream and downstream processes to perform a function
well, and to understand any implications of any delays or changes in
the flow.
The process-based training not only helps prior to go-live, it also can
be used with new hires and when users change roles. It allows those
new to the business to absorb the business process simultaneously
while understanding how that process is performed within the system.
Roles can easily be defined in a business process and tied to the correct
security roles for testing and training, as shown in Figure 7-7.
Fig. 7-7: A subprocess flow with the Sales manager role assigned (Start → Create order → Add products to order → Price order).
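The role-to-process mapping shown in Figure 7-7 can be sketched as a simple lookup that drives both role-based training and testing coverage. The role names and steps below are hypothetical:

```python
# Illustrative mapping of security roles to the process steps they perform,
# reused to generate role-based training (and test) coverage.

role_map = {
    "Sales manager": ["Create order", "Add products to order", "Price order"],
    "Sales clerk":   ["Create order", "Add products to order"],
}

def training_plan(role: str) -> list[str]:
    """Return the process steps a role should be trained and tested on."""
    return role_map.get(role, [])

print(training_plan("Sales clerk"))
```

Because the same mapping feeds security design, testing, and training, a change to a process step only has to be recorded once to keep all three aligned.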
Support
The process catalog created in the project provides more than the
framework for implementation. It can also help with supporting the
solution. Functional issues raised to support usually need to be repro-
duced by the support team to enable the team to investigate the root
causes. A robust process-based definition of the flow allows support
to recreate the steps defined by the agreed standard process flow. The
visual depiction and description of the process allows the support team
to place themselves in the business process.
This ability to place yourself in the business process helps reduce the
number of back-and-forth cycles of communication and the overall
time taken to understand the issue in business terms. It increases the
confidence of the user that their reporting of the issue has been under-
stood. This reduces anxiety and improves user sentiment of the system.
Business processes focus
▪ Ensure the business process view of the organization is at the core of the definition of the project.
▪ Clearly articulate the key business processes that are in scope and the respective personas so they are understood by all involved parties in the implementation.
▪ Ensure business model analysis, process engineering, and standardization strategies are considered part of the project definition and deliver a strong process baseline before implementation starts.
▪ Collect the business processes in a structured and hierarchical process catalog during the requirements phase.
▪ Ensure the business process definition is complete and considers all activities and subprocesses.
▪ Take advantage of the latest SaaS technology to drive efficiency and effectiveness for the process optimization.
▪ Ensure future readiness when mapping your business process to the solution by incorporating configurability by design.

Fit gap analysis
▪ Adopt a fit-to-standard approach and align to the philosophy of adopting wherever possible and adapting only where justified.

Process-centric solution
▪ Use business processes for each phase of the project to deliver better outcomes (all phase activities are better planned, performed, and measured).
The design phase was similarly based on writing and reviewing com-
plex design documents and was running late. The focus tended to be
on the “gaps” identified, and as these were not always placed in con-
text, the discussion with business users was not always productive and
was often at cross purposes. As further delays accrued and the business
users were becoming even less confident about the proposed solution,
the stakeholders decided to review the project direction and the rea-
sons for the continuous delays and general dissatisfaction.
The delivery of the processes was distributed into sprints of four weeks
each, so each sprint delivered a meaningful part of the storyboard for each
of the workstreams, with an emphasis on delivery of end-to-end processes
as rapidly as possible. A high-level plan was constructed based on the
sequence, dependencies, and estimated effort related to process delivery.
The project successfully went live, and the customer continued to adopt
a process-centric view throughout the remainder of their go-lives in
other countries. The implementation partner decided to adopt this
process-centric approach as their new standard implementation approach
for their other projects because they could clearly see the benefits.
We need to ensure that we're creating a project governance model
that is fit for the world today and tomorrow. This includes considering
new governance disciplines and reviewing the classic governance
model for effectiveness. We also need to consider that many partners,
system integrators, independent software vendors (ISVs), and customers
may have their own approach and governance standards that they
may have used previously. We explore each of these areas in more
detail in this chapter.
Why is project
governance important?
All system implementation projects, including Dynamics 365 applications,
need good project governance to succeed. However, business
application projects have specific needs and challenges, and aren't the
easiest of projects to implement. Business applications directly impact
and are directly impacted by the processes and the people in the business.
For a business application implementation to be successful, it's not
sufficient to have good software and good technical skills; the project
also needs to have good governance processes that include the business.

Fig. 8-2: Typical governance
The list is long, and you may have seen other issues, but most of these
issues result in project delays, quality issues, and impact on budgets.
However, the root cause of the issues tends to lie in gaps in the defi-
nition of the governance model, or in the effectiveness of operating
the project governance processes. Even after the project goes live and
meets most of the business requirements, if the project delivery isn’t
smooth, it can create stakeholder dissatisfaction and a lack of confi-
dence in the project.
Next, we explore the various areas that you should think about as part
of establishing your project governance model.
Project governance areas

Project governance is a wide topic and can encompass multiple different
practices and disciplines. Various methodologies exist in the
market, including those that are customized by partners and customers.
Irrespective of the methodology, you should consider some
fundamental principles when designing your governance, or when
reviewing and revising it.

Project goals
Well-defined project goals are essential for steering a project and for
defining some of the key conditions of satisfaction for the stakeholders.
Often, the project goals are described in the project charter or as
part of the project kickoff. In any case, it's worth shining a spotlight on
them as part of the Initiate phase of the project and regularly throughout
the project. We recommend taking deliberate actions to reexamine
the goals in the context of your latest understanding of the project.
It’s essential to have all the stakeholders pull the project in the same di-
rection; conflicts at the level of misaligned goals are extremely hard for
the project team to solve. For example, if Finance leadership is looking
to improve compliance by adding more checks and approvals in a
purchase process, and Procurement leadership is looking for a faster,
It’s essential to have less bureaucratic and more streamlined purchasing process, unless the
all the stakeholders goals are balanced, the project delivery will falter. The end result will
pull the project in probably disappoint both stakeholders. Another common example is
when IT leadership has a goal of a single platform or instance for mul-
the same direction— tiple business units, but the business unit leadership has no goals to
conflicts at the level create common business processes. This mismatch can remain hidden
of misaligned goals and undermine the feasibility and efficiency of a single platform.
159
goals from the business, but also ownership of the successful delivery
of the project goals.
Are the project goals well understood by all the project members?
Some projects rely on a single kick-off meeting to communicate the
goals. However, many goals would benefit from more in-depth dis-
cussion (especially with project members from outside the business) to
better explain the underlying business reasons. Consider how you can
reinforce this communication not just during the initial induction of a
new project member, but also throughout the project lifecycle.
Once a project starts the Implementation phase, the necessary attention
needed for the day-to-day management and delivery can
sometimes overshadow the importance of the strategic project goals.
This is one of the key findings from project post go-live reviews: the
original aims of the project faded into the background as the project
progressed.

In many cases, projects are driven by the fit gap list, which takes a very
narrow view of the business expectations. Consider specifically reviewing
the initial scope of the project and initial solution blueprint (and project
plan) with the business stakeholders to assess how well the goals are
mapped to the project deliverables and take any corrective actions.

Is the governance process structured to formally monitor
progress against the goals?
This is an often-neglected area; sometimes projects only seriously
review the goals or success criteria as part of the final go-live assess-
ment, which is too late. Try to avoid this by defining specific structured
processes to ensure the assessment is ongoing. For example, look at
how the solution design blueprint helps meet the business process
objectives and how the data migration and reporting strategy meets
the information access objectives. Monitor to confirm that the project
priorities are aligned with the project goals and aren’t being diverted
by other considerations or day-to-day challenges.
Project organization

Projects implementing business applications tend to have common
structures and roles (such as project manager, solution architect,
subject matter expert, functional consultant, and technical consultant)
that are recognizable across different projects. The difference in the
effectiveness of project teams comes from the way in which the project
organization functions in practice compared to the theoretical constructs
(Figure 8-3).

Fig. 8-3: Project organization
Projects in which the senior stakeholders are more passive and just
occasionally asking “How is it going?” or “Let me know if you need
something” tend to have poor outcomes.
How well are the team roles aligned to the solution complexity
and system design requirements?
You should conduct an honest analysis of the experience and ability
of the resources in key roles when compared to the complexity of the
design and its constraints. Be wary of job titles that don’t match the
experience and authority that normally accompany such titles.
This is particularly important for the key roles of lead solution architect,
lead technical architect, and project manager.
This honest review is especially recommended for the roles that are
likely to be the most constrained, so that mitigation plans can be
initiated in a timely manner.
During the implementation, you should regularly assess how the day-
to-day working of the control, communication, and feedback functions
are being helped or hindered by the project team organization (Figure
8-4). How well does the project organization structure facilitate or
constrict the undiluted and timely flow of direction, guidance, and
decisions from the business to the project workstreams? In the other
direction, does the structure enable or hinder the flow of accurate,
actionable, and timely feedback from the workstreams to the project
leadership and business stakeholders?

Fig. 8-4

You should also confirm that the project organization allows all roles to

Project approach
When talking about project approach, one of the dangers is that it
can be assumed to be synonymous with the project implementation
methodology. This can then leave a vacuum in the processes that need
to be defined outside of the implementation methodology. This can
be especially true if the implementation partner is providing a limited,
technical implementation methodology. If you're the customer, you
should consider the wider set of processes required to define your
project scope, manage your resources, and manage the changes,
tasks, and processes. Prioritize areas that aren't directly covered by the
partner but are necessary for every business to perform in support of
a business application implementation. You should explicitly identify
areas that would be the responsibility of the customer to manage.
Then confirm that you have adequate definition of the approach and
the right level of governance planned for each of these areas.
there is no overview solution design blueprint.
that the development process is building on, or extending, the
standard business process flows designed in the system
▪ Dynamics 365 applications are cloud-based, SaaS business applications
with a regular update rhythm that needs to be recognized
as part of the new way of working
All of these factors (and more) mean that the implementation meth-
odology needs to be directly relevant to the nature and needs of a
Dynamics 365 business application project.
Classic structures

Most, if not all, business application projects have the common, classic
governance structures in place. This section doesn't give a general
overview of historically well-known disciplines around these common
governance areas; instead we look at how to assess the true effectiveness
of these processes and controls in practice. The mere presence of
these classic governance structures, such as steering groups, program
boards, or risk registers, can sometimes deceive projects into thinking
that they have adequate active governance. Let's dig deeper into these
areas to explore how we can get better insight into their function and
evaluate their effectiveness.
Steering groups
Most projects have some form of steering group in which the proj-
ect sponsor, senior business stakeholders, and project leaders meet
regularly to discuss and review the project. A lesson learned from
participating in multiple such steering groups across multiple projects
is that the effectiveness of these meetings varies hugely. Common
factors that can impact the effectiveness of a steering group are when
the purpose of steering group meetings is diluted or unclear, and
when project status reporting isn't easily understood, accurate, or
actionable (Figure 8-5).
Risk register
Most projects have some form of a risk register. When used well, this
register is a good way to communicate risks and help teams focus their
attention on removing barriers to success. The following are examples
of ineffective use of risk registers:
▪ Risks that are of little practical value to the project are being used
to shirk responsibility or provide cover against blame.
▪ Risks remain on the register for a long time with just new com-
ments and updates being added weekly at each risk review
meeting, but with no resolution. This implies that either the risk
isn’t getting the attention it deserves or it’s difficult to resolve and
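One way to catch the long-lived-risk pattern described above is a simple aging check over the register. This is an illustrative sketch with hypothetical fields and thresholds, not a feature of any particular tool:

```python
# Illustrative risk register hygiene check: flag risks that have stayed open
# past a review threshold so they get escalation rather than another weekly
# comment at the risk review meeting.

from datetime import date

risks = [
    {"id": "R-01", "title": "Key SME availability",
     "opened": date(2025, 1, 10), "status": "open"},
    {"id": "R-02", "title": "Data quality in legacy system",
     "opened": date(2025, 3, 28), "status": "open"},
]

def stale_risks(register, today, max_age_days=30):
    """Open risks older than the threshold need escalation, not more comments."""
    return [r["id"] for r in register
            if r["status"] == "open" and (today - r["opened"]).days > max_age_days]

print(stale_risks(risks, today=date(2025, 4, 1)))
```

Reviewing the flagged risks separately keeps the weekly review focused on risks that can still be actively mitigated.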
Stage gates
Stage gates or milestone-driven planning and reviews are a common
feature of the majority of business application projects, including more
agile projects. These milestones are regarded as important checkpoints
spread throughout the project timeline, which the project can only
pass through if they have met certain criteria. The reality of many
projects is that the checkpoints don't always perform their intended
function. There may be several reasons for this, and projects should
examine if they are suffering from the following limitations:
▪ Criteria for the milestone are unclear You should strive to
create very explicit criteria for entering and exiting a milestone. If
the criteria are difficult to measure, it's difficult to take unequivocal
decisions based on data.
▪ Exit and entry criteria aren't respected Projects are typically
under great pressure to get past one milestone and move into the
next phase, either because of resource usability or commercial
payment triggers, or because the project optimistically believes
that it will somehow catch up with the overdue tasks that didn't
meet the milestone criteria. Sometimes a project is also pressured
to move to the next phase because it implies progress. This self-delusion
can create additional risks in the project; uncontrolled
disturbances in project deliverables can delay the ultimate goal of
a successful go live. The debt incurred by skipping the criteria has
to be paid back at some point; the later it’s resolved, the higher
the cost. When considering allowing a milestone to pass when it
hasn’t met all its criteria, project leadership should strongly consid-
er the nature and size of the incomplete tasks, precisely how these
tasks will be addressed, and what impact the lagging tasks will have on
dependent tasks. This allows for a managed transition between
milestones with the risks fully understood and a clear and realistic
action plan for the stragglers.
▪ Milestones don’t reflect true dependency From a project
lifecycle perspective, a milestone should have the primary purpose
of confirming that the project has satisfactorily completed the
precursor tasks so that it’s safe, efficient, and meaningful to start
the tasks of the next stage. There may be other perspectives, such
as commercial ones to trigger stage payments or just periodic
points for the project to review progress or cost burndown, but
we’re focusing on a project lifecycle point of view. Some common
examples of stage gate criteria are as follows:
▫ To-be business processes defined An approval step from
the business to confirm the business process scope helps
ensure that prior to spending a lot of effort on detailed
requirements and design, the process scope has been well
established and agreed upon.
▫ Solution blueprint defined This helps ensure that the
requirements have been analyzed and understood sufficiently
well to be able to define an overall, high-level design that is
confirmed as feasible. Additionally, the key design interrela-
tionships and dependencies are established before working on
the individual detailed designs.
▫ Formal user acceptance testing start agreed Starting
formal UAT with business users tends to be the final, full formal
test phase before go live, with the expectation that go live is
imminent and achievable. Prior to starting this phase, it makes
sense to validate the readiness of all the elements to ensure
that this test phase can complete within the allocated time
period and meet the necessary quality bar.
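The idea of explicit, measurable entry and exit criteria can be sketched as a simple gate evaluation; the gate name and criteria below are hypothetical examples in the spirit of the list above:

```python
# Illustrative stage gate check: a milestone passes only when all of its
# explicit, measurable criteria are met; otherwise the incomplete items are
# surfaced so leadership can decide with the risk fully visible.

gate_criteria = {
    "Start UAT": {
        "To-be processes approved": True,
        "Solution blueprint signed off": True,
        "Test environment ready with migrated data": False,
    },
}

def evaluate_gate(gate: str) -> tuple[bool, list[str]]:
    """Return (passed, incomplete criteria) for the named gate."""
    criteria = gate_criteria[gate]
    incomplete = [name for name, met in criteria.items() if not met]
    return (not incomplete, incomplete)

passed, incomplete = evaluate_gate("Start UAT")
print("Gate passed:", passed, "| incomplete:", incomplete)
```

If leadership still chooses to pass the gate, the `incomplete` list becomes the explicit, tracked debt rather than an unstated assumption.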
The design review board should also have representatives, not just
from the project, but also from the wider business and IT, to help
ensure the business case for the design is sound and the impact on
related systems (integrations) and overall enterprise architecture are
considered. Another characteristic of a successful design review board
is that they have minimum bureaucracy and their focus is on helping
the project to move as securely and as fast as possible.
For the best project velocity, you should set an expectation that the
reviews will be interactive and expected to be resolved within a single
iteration. This requires the design review board to be pragmatic and
well prepared. It also requires the project to have the discipline to
provide sufficiently detailed proposed design information.

Key project areas

Project governance is often thought to be confined to the classic areas
we discussed in the previous section, but in Dynamics 365 business applications,
we should consider the key activities that also need to have
governance processes embedded. We see many projects where the
strategies for the more specialist project delivery areas have their own
processes and disciplines that don't always work in concert with the
project approach or to the benefit of the overall project. These areas
are often managed and driven by specialists with their own processes
and internal goals.
Test strategy
A well-defined test strategy is a key enabler for project success. A high
level of governance is required to ensure that the right test strategy is
created and implemented, because multiple cross-workstream parties,
business departments, IT departments, and disciplines are involved.
When evaluating the suitability of the test strategy, in addition to the
technical angles, consider how well some governance areas are covered:
▪ Does the right level of governance exist to ensure that the testing
strategy is mapped to, and proportionate with, the project scope?
Data migration
For data migration, examine the related governance coverage to
ensure that this process is well understood at a business level:
▪ Do you have the right level of business ownership and oversight
on the types and quality of data being selected for migration?
▪ Is a data stewardship process in place to make sure that the
cleansed and migrated data is kept clean during the project and in
operational use?
▪ Will the data from the existing source systems be faithfully
transformed so that it’s still meaningful and fit for purpose in
Dynamics 365?
▪ Will the data strategy and plan provide the right type, quan-
tity, and quality of data to the project at the right time for key
milestones such as testing, training, mock cutovers, and other
data-dependent tasks throughout the project lifecycle?
▪ Does the data migration strategy adequately reflect the go live cu-
tover strategy, so the mock cutovers or migration dry runs closely
reflect the process for data migration at go live?
Integration

Make sure that the business impact on either side of the integration is
understood and managed. Confirm that the data exchange contracts
are defined with the business needs in mind. Examine if the security
and technology implications for the non-Dynamics 365 systems are
properly accounted for.
Cutover
The cutover from the previous system to the new Dynamics 365 system
is a time-critical and multi-faceted process. It requires coordination
from multiple teams for the related tasks to all come together for a go
live. You almost certainly need to include business teams that aren’t
directly part of the project. Therefore, cutover needs to be shepherd-
ed with a deep understanding of the impact on the wider business.
Preparation for the cutover needs to start early in the project, and the
governance layer ensures that the cutover is a business-driven process
and not a purely technical data migration process. For example, early
definition and agreement on the related business calendar sets the
right milestones for the data migration to work with.
The legacy systems shutdown window for the final cutover is typically short, perhaps over a weekend. For some cutover migrations, that window may be too short to complete all the cutover activities, including data migration. In such cases, the project team may perform the data migration as a staggered, incremental migration, starting with slow-moving primary data and frozen transactional data. This leaves a smaller, more manageable remainder to address during the shutdown.
This needs careful governance, and the strategy needs to be decided
early because the data migration engine needs to be able to deliver in-
cremental data loads. You should also carefully consider what business
activities you may need to throttle or perform differently to reduce the
number and complexity of changes between the first migration and
the final cutover. The changes need to be meticulously recorded and
registered (automatically or manually) so that they can be reflected in
the final cutover.
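As a sketch of the incremental approach, the final cutover can select only records changed since the initial bulk load. The record layout and timestamp field below are hypothetical, for illustration only, not a product API:

```python
from datetime import datetime, timezone

# Hypothetical sketch: select only records created or modified after the
# initial bulk migration, so the final cutover moves a small delta.
def delta_since(records, baseline):
    """Return records changed after the baseline migration timestamp."""
    return [r for r in records if r["modified_on"] > baseline]

baseline = datetime(2023, 5, 1, tzinfo=timezone.utc)
records = [
    {"id": "C-001", "modified_on": datetime(2023, 4, 20, tzinfo=timezone.utc)},
    {"id": "C-002", "modified_on": datetime(2023, 5, 3, tzinfo=timezone.utc)},
]

to_migrate = delta_since(records, baseline)
print([r["id"] for r in to_migrate])  # ['C-002'] — only the post-baseline change
```

In practice the change tracking would come from the source system's audit or timestamp columns, as the text notes, recorded automatically or manually.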
Training strategy
Most projects have plans to train at least the project team members
and business users. All too often though, training is seen as a lower pri-
ority. If any project delays put pressure on timelines or budget, training
can be one of the first areas to be compromised.
This trade-off between respecting the go live date and completing the
full training plan can be easier for the implementation team to rational-
ize because the team is aiming for a go live and the risk of poor training
can be seen (without sufficient evidence) as manageable. The worst
consequences of poor user training are felt in the operational phase.
Project plan

Project plan analysis is where the outcomes of a lot of governance topics become visible. The effects of good governance are felt on a project by noting that the project plan is resilient to the smaller changes and unknowns that are a reality for all business application projects. Poor governance, on the other hand, manifests itself as continuously missed targets, unreliable delivery, and repeated re-baselining (pushing back) of the project plan milestones. The key is for the project planning process to have its own governance mechanisms to avoid poor practices, detect risks early, and provide the agility and insights to fix and adjust quickly.
Projects should institute thresholds for how out of date a project plan is allowed to become. A project plan that is inaccurate in terms of the estimated effort (and remaining effort), duration, and dependencies will promote the wrong decisions, allow risks to remain hidden, and ultimately give an inaccurate picture of the project status.
Does the project plan give a clear indication of what tasks are critical?
Business application projects have many moving parts and many spe-
cialist areas. The project must be able to accurately derive the critical
path so that project leadership can focus their attention on those tasks.
For agile projects, agile crews prioritizing the backlog and managing
the critical path within short-duration sprints and iterations can pro-
vide the necessary insight.
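The critical path the text describes can be derived mechanically from task durations and dependencies with a longest-path calculation. The task names and durations in this sketch are invented for illustration:

```python
from functools import lru_cache

tasks = {            # task: (duration_in_days, prerequisites)
    "design":    (10, []),
    "build":     (15, ["design"]),
    "data_mig":  (12, ["design"]),
    "test":      (8,  ["build", "data_mig"]),
    "cutover":   (3,  ["test"]),
}

@lru_cache(maxsize=None)
def finish(task):
    """Earliest finish day of a task, given its prerequisites."""
    dur, prereqs = tasks[task]
    return dur + max((finish(p) for p in prereqs), default=0)

def critical_path(end):
    """Walk back from the end task along the latest-finishing prerequisite."""
    path = [end]
    while tasks[end][1]:
        end = max(tasks[end][1], key=finish)
        path.append(end)
    return list(reversed(path))

print(critical_path("cutover"))  # ['design', 'build', 'test', 'cutover']
print(finish("cutover"))         # 36 (days of elapsed duration)
```

Any slip on a critical-path task (here, build) delays the whole plan, which is why project leadership should focus attention there.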
Status reporting
A well-constructed project plan facilitates accurate project status reporting. However, status reporting needs to be deliberately designed into the project plan, with the right dependencies, estimated effort, and milestones, so that it can be easily extracted from the plan. This means the project should be deliberate about the level of detail at which tasks are scheduled, so that dependencies and milestones support accurate reporting. Because
status reporting often has milestone-level status as a key indicator, the
meaning of a milestone’s completion must be explicitly defined so the
implications are clear to the intended audience.
Some projects, especially ones using more agile practices, may use alternative or additional analysis and presentation methods, such as a backlog burndown, remaining cost to complete, or earned value. These principles still apply: they all rely on having accurate underlying data on actual versus expected progress to date and the expected remaining effort.
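For illustration, earned-value style metrics like those mentioned above can be computed from per-task budgeted cost and percent complete. The metric definitions are standard, but the task data in this sketch is invented:

```python
def earned_value(tasks):
    """Each task: (budgeted_cost, planned_pct_complete, actual_pct_complete)."""
    pv = sum(cost * planned for cost, planned, _ in tasks)   # planned value
    ev = sum(cost * actual for cost, _, actual in tasks)     # earned value
    spi = ev / pv                                            # schedule performance index
    return pv, ev, spi

tasks = [
    (100, 1.0, 1.0),   # done, on plan
    (200, 0.5, 0.25),  # behind plan
]

pv, ev, spi = earned_value(tasks)
print(pv, ev, round(spi, 2))  # 200.0 150.0 0.75
```

An SPI below 1.0 signals that the project is earning value more slowly than planned, which should trigger the kind of early corrective action the chapter advocates.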
[Figure: example milestone status report showing, for two reporting periods, which processes are not complete and not on track versus complete, in repair, or testing, across the Finance, HR, Operations, and Logistics workstreams]
Status reports can also include status from other project controls such
as risks and issues, sentiment surveys, or resource availability in prior
months. These aren’t directly derived from a project plan and can
provide a useful supplementary view. However, it's important to ensure that actionable conclusions are extracted from these alternative views; these conclusions should generate tasks that can be measured, planned, and tracked.

The project status should not be the only source for this feedback loop; consider all the other controls and procedures in the project that can generate actionable information, and establish a systematic mechanism to diligently listen and extract the actions. For example, updates from daily stand-ups, design reviews, sprint playbacks, or even informal risk discussions can help provide useful feedback.
Conclusion
The primary intention at the start of this chapter was to provide an
overview of the importance of good project governance and provide
guidance on how to assess the effectiveness of your existing or pro-
posed governance model.
We discussed how defining a project organization model that has engaged leadership and facilitates rapid and accurate communication makes the project more aware of its own reality. Similarly, project organizations that enable the right accountability, naturally encourage cross-collaboration, and reflect the complexity and effort of the project are much more likely to have good velocity and be successful.
You can use the key objectives of good project governance to judge
the overall suitability of a given governance model. Ideally, you should
test a proposed governance model against these objectives during the
Initiate phase of the project and throughout, but it’s never too late.
Checklist

Project organization structure

▪ Align business streams with functional workstreams for better efficiency and structure, and make sure the business stream is directly involved in the project.
▪ Secure strong executive sponsorship and active engagement from senior business stakeholders within each business stream.
▪ Ensure cross-team collaboration in which members of each workstream are involved in other workstreams to avoid working in silos.
▪ Plan and budget the project resources in proportion to the effort and complexity of the project.
▪ Define accountability and responsibility at the project leadership level. Each stream identifies an owner with autonomy who is empowered to make decisions.

Classic governance

▪ Establish an effective steering group that understands enough of the project details to actively steer the project based on meaningful, actionable, and evidence-based information.
▪ Create a risk register with meaningful and actionable risks that align with project priorities and are actively addressed.
▪ Implement stage gate or milestone-driven planning and reviews as checkpoints to better track and communicate implications of the project status.
▪ Implement design review boards to ensure new designs and changes operate within the boundaries of the solution blueprint and don't adversely affect other designs.
Introduction
Defining your environment strategy is one of the
most important steps in the implementation of your
business application.
Environment-related decisions affect every aspect of the application, from
application lifecycle management (ALM) to deployment and compliance.
Tenant strategy
Understanding tenants is fundamental to defining an environment strategy.

[Figure 9-2: a Microsoft 365 tenant containing Dynamics 365 and Power Platform environments (such as a test Operations environment and a production Sales app environment) alongside other services, such as SharePoint sites]
Let’s examine some of the pros and cons for a global multitenant setup.
▪ Service-level admin actions, such as ability to copy an environ-
ment, may not be available across tenants, which can make testing
or troubleshooting difficult.
▪ The build automation and continuous integration and continuous
delivery (CI/CD) pipelines that span multiple tenants can be more
complicated and might require manual intervention, especially
when managing connections to the service.
▪ You may have to purchase a significant number of licenses for con-
ducting testing. With load testing, for example, you can’t reliably
simulate a load from 1,000 users using five test-user accounts.
▪ If you’re using capacity add-ons, you will have to make duplicate
purchases for each tenant to develop and test your solutions.
▪ Some integrations with other services can't be done across tenants, which means potentially purchasing licenses for other Microsoft services for each tenant.
Compliance
Security and compliance are critical considerations for an environment strategy. Each organization needs to ensure that data is stored and processed in accordance with local or regional laws.

At the outset of the project, your organization must determine your environment's compliance requirements. These can vary widely depending on the industry, regulations, business type, and user base, and will need to be approved by your internal security and compliance teams.
Application design
The environment strategy can affect the application design. Conversely,
the needs of an application can drive the environment requirements,
and it’s not uncommon for IT systems within an organization to reflect
the actual structure of the organization. Depending on your organiza-
tion’s strategy for data isolation, collaboration, and security between
different departments, you could choose to have a single shared envi-
ronment or create isolated environments. For example, a bank might
allow data sharing and collaboration between commercial and business
banking divisions while isolating the personal banking division, which
reflects the bank’s application design and environment strategy.
Creating separate environments without understanding the underlying data and integration dependencies could lead to unnecessary complexity and fragmentation.
The data store for an application and the supporting business pro-
cess plays a key role in the environment decision. If multiple apps for
different departments can benefit from leveraging each other’s data,
a single environment with integrations can improve consistency and
collaboration. The user experience can be tailored via dedicated apps
for different personas and secure data access using the security model.
Performance
Microsoft cloud services provide a high degree of scalability and performance. However, because of factors such as network latency, firewalls, network traffic monitoring, organizational proxies, and routing by internet service providers (ISPs), globally distributed users can experience higher latencies when accessing the cloud. This is why we recommend creating a latency analysis matrix (Figure 9-4) for solutions that have a globally distributed user base.
[Figure 9-4: latency analysis matrix with columns for user location, device, network, environment location, latency, and bandwidth]
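A latency analysis matrix like Figure 9-4 can be kept as simple structured data and queried for problem locations. The locations, measurements, and threshold in this sketch are invented sample values:

```python
# Each row mirrors the Figure 9-4 columns:
# user location, device, network, environment location, latency (ms), bandwidth (Mbps)
rows = [
    ("London",  "laptop", "corp VPN", "West Europe",  45, 100),
    ("Sydney",  "tablet", "home ISP", "West Europe", 280,  50),
    ("Chicago", "laptop", "office",   "West Europe", 110, 200),
]

# Flag user locations likely to have a degraded experience.
# The threshold is arbitrary; pick one based on your own UX testing.
THRESHOLD_MS = 150
flagged = [r[0] for r in rows if r[4] > THRESHOLD_MS]
print(flagged)  # ['Sydney']
```

Locations that exceed the threshold are candidates for a closer environment region or network remediation.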
Scalability
Scalability of the SaaS platform is a critical consideration for business
applications. Traditionally, scaling up in on-premises deployments was
about adding more servers or more CPU, memory, or storage capacity
to existing servers. In a cloud world with elastic scale and microservice
architecture, the server could be replaced by an environment and the
compute and data transfer units by the API capacity. (This is just an analogy; it's not a one-to-one mapping where one environment corresponds to a server in the SaaS infrastructure.)
Vertical scalability
Organizations commonly operate single environments supporting
thousands of users, and each user has a defined API entitlement based on the license type assigned. The environment's storage grows as more users and data are added.
Horizontal scalability
With horizontal scalability, organizations can have several separate en-
vironments, with any number of Power Automate flows on the tenant.
There are no native sync capabilities between environments, and you
still need to take license entitlements into consideration, especially
when it comes to tenant-wide storage and API entitlement.
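As a back-of-the-envelope sketch, the tenant-wide API entitlement mentioned above can be estimated from license counts. The per-license request numbers below are placeholders for illustration, not actual Microsoft entitlement figures:

```python
# Hypothetical per-license daily API request entitlements (placeholders).
license_requests_per_day = {"full_user": 40_000, "team_member": 6_000}

# Licenses assigned across the tenant (sample numbers).
assigned = {"full_user": 500, "team_member": 1_500}

# Aggregate daily entitlement to compare against observed API consumption.
total = sum(license_requests_per_day[l] * n for l, n in assigned.items())
print(total)  # 29000000
```

Comparing a projection like this against observed API consumption per environment helps decide whether horizontal scaling or capacity add-ons are needed.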
Maintainability
Maintainability measures the ease and speed of maintaining a solution,
including service updates, bug fixes, and rolling out change requests
and new functionality.
Invest in test automation so you can run regression tests and quickly identify any dependencies that could cause issues.

The effort to maintain the solution is directly proportional to the number of environments involved. For example, testing a release wave or analyzing the impact of deprecations is easier when there is just one production environment with the Dynamics 365 Sales app and the Dynamics 365 Customer Service app, compared to a solution that uses different production environments for the Sales and Customer Service apps.
ALM
ALM includes the tools and processes that manage the solution’s
lifecycle and can affect the long-term success of a solution. When
following the ALM of a solution, consider the entire lifespan of the
solution, along with maintainability and future-proofing. Changes to
your environment strategy will directly affect the ALM (and vice versa),
but it’s important to be clear that environments are not the repository
of your code and customizations.
Types of environments
Production environments are meant to support the business. By default, production environments are more protected against operations that can cause disruption, such as copy and restore operations. Sandbox environments are intended for development, testing, and training, and can be copied, reset, or deleted more freely.
Purposes of environments
▪ Development One or more development environments are usu-
ally required, depending on the customization requirements and
time available. Development environments should be set up with
proper DevOps to allow for smooth CI/CD. This topic is covered in
more detail in Chapter 11, “Application lifecycle management.”
▪ Quality assurance (QA) Allows for solution testing from both
a functionality and deployment perspective before the solutions
are given to the business teams in a user acceptance testing (UAT)
environment. Only managed solutions should be deployed here.
Depending on project requirements, there can be multiple testing
environments, including regression testing, performance testing,
and data-migration testing.
▪ UAT Generally the first environment that business users have access to. It allows them to perform user acceptance testing before signing off on deployment to the production environment for go live.
▪ Training Utilized to deliver training. It allows business users to practice in an environment that mirrors production without affecting live data.
Citizen development
One of the key value propositions of the Power Platform, the underlying
no-code/low-code platform that powers Dynamics 365 Customer
Engagement apps, is that it enables people who aren’t professional de-
velopers to build apps and create solutions to solve their own problems.
Central IT still has governance responsibility and creates guidelines to secure data with DLP policies. But business users should be able to build apps and automate their work while accessing data in a secure way. This organic growth of applications by business users facilitates digital transformation at a massive scale. Thousands of organizations have adopted this "citizen developer" philosophy to quickly roll out hundreds of apps across the organization, creating a community of engaged employees who are helping realize the vision of an agile business that can evolve to meet customer needs in days or weeks instead of months or years.

When planning your environment strategy, consider any future phases and rollouts of the solution, as well as changing requirements.
The environment strategy that enables this kind of organic growth will
be different from a traditional IT-developed solution. It’s crucial to de-
liver an agile and automated process where business users can request
environments to enable the maker experience and connect to data in
a secure way while conforming to the standards for data modeling and
application design set by the organization.
Future-proofing
Future-proofing is the process of developing methods to minimize
any potential negative effects in the future. It can also be referred to as
resilience of the solution to future events.
There isn’t a standard answer or blanket approach that will work for
all apps within your organization. The best approach for you is the
one that facilitates collaboration and cross-pollination of information
between apps, while also reducing data silos and meeting your organi-
zation’s security, compliance, and data protection requirements.
Multiple-app environment
In a multiple-app environment scenario (Figure 9-5), a single production environment is used for multiple apps. For example, the production environment might have the Dynamics 365 Marketing and Sales apps to enable collaboration between the marketing and sales teams, and facilitate a quick transfer of qualified leads to sales representatives. Similarly, having the Sales and Customer Service apps in the same environment gives the sales team insights into customer support experiences.

Let's examine some of the pros and cons for the multiple-app deployment model.
Per-app environment
In a per-app deployment model (Figure 9-6), every application gets
its own production environment, with a separate set of environments
to support the underlying ALM and release process. There is com-
plete isolation of the data, schema, and security model.

Let's examine some of the pros and cons for the per-app deployment model.
Global single environment
A global single environment is just one environment deploying an
app that’s accessed by users across the globe. All the data, processes,
code, customizations reside in a single environment. Based on the
app’s security requirements and regional data-sharing policies, your
organization will have to manage access via security roles. This is the
most common approach, as it enables strong global collaboration and
a unified view of data and processes across different regions, business
units, and legal entities.
Your organization might have to adjust the security model for each
region to meet local regulations and accommodate cultural differences
around sharing and collaboration.
Let's examine some of the pros and cons of a global single environment.

Global multiple environments

For some global deployments, multiple environments may be more suitable because of the implications (such as latency) associated with the connection infrastructure, which can significantly affect the user experience.
Distributing environments to provide users with more local access
can reduce or overcome network-related issues, as the access occurs
over shorter network connections. For example, with Dynamics 365 Commerce, customers can deploy region-specific Commerce Cloud Scale Units to provide better latency for store and e-commerce users, along with elastic scale and easier ALM processes.
This model makes it easier for local operations to run and maintain
their solutions, but it also increases the costs, and may lead to siloed
work and deployment.
Let's examine some of the pros and cons of a global multiple-environment setup.
[Figure: hub-and-spoke environment model, with a central hub environment containing the hub solution, and spoke environments (Spoke A, Spoke B, and Spoke C) each containing the hub solution plus their own hub-and-spoke solution]
Environment lifecycle scenarios

Environments transition through various states before they are decommissioned. Some of these transitions are natural to support ALM, but there could be business, compliance, and technical reasons that trigger these transitions. As described earlier, environment-related changes are complex, and they require significant planning and (usually) downtime.
Creation
An organization might create a new environment for several reasons.
The purpose of the environment, and considerations such as the envi-
ronment type, country or region, apps required, access security groups,
URL and agreed-upon naming conventions, and tiers and service
levels, should be well defined and understood before its creation.
Environment transition
An organization might have multiple types of environments, each
targeting specific use cases, including trial environments, default
environments, sandboxes, and production environments. Be aware
of possible transitions between types, as there could be limitations
and implications to the service. For instance, changing a production
environment to a sandbox might change the data-retention policy and
applicable service-level agreements (SLAs).
Copy
Most often used in troubleshooting scenarios, environment copy lets
you create a copy of an existing environment with all its data or only
the customizations and configuration. Be aware of the effect on storage capacity and of compliance requirements, which could restrict copying of
customer data. Sometimes copy is also used to support ALM instead of
using the source control. This pattern should be avoided, and environ-
ments should never be used as a repository for code and customizations.
Geo-migration
Geo-migrations could be triggered by changes in regulations, or simply by a need to move an environment to a lower-latency region for a better user experience. This would normally involve
creating a copy of an environment and then physically moving it to
a different region. It almost certainly will involve downtime for users,
and requires approval from the compliance and security teams. It also
could lead to change in service URLs and IP ranges, affecting integra-
tions and network security.
Tenant-to-tenant move
Any change in tenant strategy might trigger a request to move an en-
vironment from one customer tenant to another. This is not a common
request and will require several steps before the actual move. Most
importantly, users must be correctly mapped between tenants and the
record ownerships must be restored. It might require a regional migra-
tion first and could involve several hours of downtime.
Merged environments
Merging multiple environments into a single environment takes
substantial planning and could be extraordinarily complex. Merging
involves the data model and the processes in the app, followed by the
actual data with necessary transformations and deduplication, then the
security model and, finally, the integrations.
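As an illustration of the deduplication step in a merge, records from the two environments might be matched on a simple natural key. Real merges need far richer matching rules than this hypothetical sketch:

```python
# Hypothetical sketch: merge record sets from two environments, keeping
# the first record seen for each normalized natural key.
def dedupe(records, key=("name", "email")):
    seen, merged = set(), []
    for r in records:
        k = tuple(r[f].strip().lower() for f in key)  # normalize the key fields
        if k not in seen:
            seen.add(k)
            merged.append(r)
    return merged

env_a = [{"name": "Contoso", "email": "info@contoso.com"}]
env_b = [{"name": "contoso ", "email": "INFO@contoso.com"},   # duplicate of env_a record
         {"name": "Fabrikam", "email": "hello@fabrikam.com"}]

print(len(dedupe(env_a + env_b)))  # 2 unique accounts after the merge
```

A "first record wins" rule is only one policy; a real merge also needs field-level survivorship rules and a decision about which environment is authoritative.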
Split environments
Splitting an environment could be required if an organization
transitions to a multiple-environment strategy. This could involve pro-
visioning new environments and migrating all previous customizations,
along with the relevant data subset. Alternatively, an organization
could create the split by copying the environment and moving it to
another region.
Administration mode
Administration mode or maintenance mode can be used for mainte-
nance when only selected users can access the environment. Before
you do this, assess the systems that will be affected by putting an
environment in administration or maintenance mode, such as a pub-
lic-facing portal that connects to the environment.
Deletion
Deleting an environment removes all the data and customizations, as
well as the option to restore and recover. The process of decommis-
sioning or deleting an environment needs gated approvals.
A Center of Excellence (CoE) helps enforce security and policies, but also fosters creativity and innovation across the organization. It empowers users to digitize their business processes while maintaining the necessary level of oversight.
Now we’d like to offer some guidance specific to the finance and op-
erations apps like Dynamics 365 Finance, Dynamics 365 Supply Chain
Management, and Dynamics 365 Commerce. A team implementing a
project with these apps requires environments to develop, test, train,
and configure before production. These nonproduction environments
come in a range of tiers with pre-defined sizing, topologies, and costs.
Understanding these basics is key to charting out a well-designed
environment plan.
[Table: example environment plan showing environment, quantity, and type; for example, Development (1, development), Build (1, n/a), and Training (2 or 3, sandbox)]
You should also think about the build environment. Source code developed in the development environment is synced to Azure DevOps. The build process uses the build environment, which is a tier-1 environment, to produce code packages that can be applied to sandbox and production environments.
When it comes to more complex projects, you may need more envi-
ronments. For example, you may wish to conduct testing in parallel if
multiple teams are developing and testing parallel workstreams.
General recommendations
▪ Plan for environments early in the project and revisit the plan at
regular intervals.
▪ Define a consistent naming standard for your environments. For
example, a gold environment should have “gold” in the name.
▪ Have a regular schedule to deploy updates and import fresh data
(if needed).
▪ Keep all environments in the same region if your business is in
one region. For example, avoid having a test environment in one
geographical location and production in another.
▪ Deploy environments by using an unnamed account, such as
dynadmin@your_organization_name.com. Assign the environ-
ments to an owner who will be responsible for their status and
maintenance. We strongly recommend using the same dedicated
admin account on all environments.
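A naming standard like the one recommended above is easiest to enforce when it can be checked mechanically. The validator and pattern below are a hypothetical example, not a Microsoft convention:

```python
import re

# Hypothetical convention: <org>-<app>-<purpose>, all lowercase,
# where <purpose> is one of a fixed set of environment roles.
PATTERN = re.compile(
    r"^(?P<org>[a-z0-9]+)-(?P<app>[a-z0-9]+)-"
    r"(?P<purpose>dev|build|test|uat|train|gold|prod)$"
)

def is_valid_name(name):
    """Return True when the environment name matches the convention."""
    return PATTERN.match(name) is not None

print(is_valid_name("contoso-sales-gold"))   # True
print(is_valid_name("Gold Environment #2"))  # False
```

A check like this could run as part of an environment provisioning request process, rejecting names that don't identify the organization, app, and purpose.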
▪ Plan the number of test environments needed throughout the project. A lack of environments may prevent concurrent testing activities and delay the project.

Training environments
▪ Make sure all users have access with appropriate roles and permissions, which should be the same roles and permissions they will have in production.
Gold environment
Make sure no transactions happen in a gold environment. There should
be a process to bring tested configurations into the gold environment.
Data-migration environments
▪ Assess whether you need a dedicated environment for data
migration, which is a disruptive task that can’t generally coexist
with other types of test activities. Multiple data-migration envi-
ronments may be needed to avoid conflicts if multiple parties are
migrating data concurrently.
▪ Account for data-migration performance in environment plan-
ning. Depending on the size of the data-migration task, it may be
necessary to use a tier 3 or higher environment to perform da-
ta-migration testing. (You can also use an elevated cloud-hosted
environment.)
Pre-production environments
▪ Assess whether there is a need for a separate environment to test
code or configuration changes before they’re applied to production.
▪ If there will be continuing development of the solution after you
go live, you may need a separate pre-production environment to
support concurrent hotfix and hotfix test activities. (This environ-
ment should have the same code base and data as production, so
a like-for-like comparison can be performed for any new changes
before they’re applied to production.)
Performance-testing environments
▪ Plan a specific environment for performance testing, or you won't be able to run performance tests without disrupting other test activities.
Developer environments
Ensure each developer has an independent development environment.
Production environment
Raise production environment requests through support, as you don’t
have direct access to this environment.
Product-specific
guidance: Customer
Engagement
Now we’re going to focus on product-specific resources that apply to
Power Platform and customer-engagement apps.
[Figure 9-9: example business tenant with multiple development environments]
Microsoft unified CRM and ERP within Dynamics 365 and the Power
Platform brands to make it easier to create apps and share data across
all Dynamics 365 applications. The combination also creates a set of
purpose-built intelligent apps to connect front-office and back-office
functions through shared data. Rich analytical capabilities provide orga-
nizations with deep insights into each functional area of their business.
Conclusion
As we have seen throughout this chapter, defining an environment strat-
egy for your organization is a critical step when planning for deployment.
Decisions made about environment strategies are hard to change and
could be very costly later in the program lifecycle. The wrong environment
strategy can create unnecessary data fragmentation and increase the
complexity of your solutions. Early environment planning aligned to
your organization’s digital roadmap is fundamental to success.
Checklist

Organization environment and tenant strategy

▪ Define environment and tenant strategies and obtain agreement for them at the program level with all key stakeholders, including business, IT, security, and compliance.
▪ Create a strategy that considers the future growth of the solution.
▪ Create an environment strategy that can support the ALM processes and necessary automations.
▪ Create an environment strategy that considers the short- and long-term impact on licensing, compliance, application design, performance, scalability, maintainability, and ALM of the solution.
▪ Create a strategy that considers the potential need for citizen development scenarios or collaborative development with both IT and business users.
▪ Create an environment planning matrix with key considerations of the pros and cons to help plan and visualize the impact.
▪ Assess the impact on the overall data estate. Avoid excessive fragmentation and promote reuse of existing integrations and data flows.

Global deployment

▪ Account for global deployment scenarios; additional coordination and agreement may be required to meet regional requirements and compliance needs.
▪ Assess the latency to choose the optimal location; global deployments are prone to network latency-related performance issues.

Governance and control

▪ Establish governance processes for provisioning, monitoring, managing the lifecycle, and decommissioning the environments early on.
▪ Ensure the different security personas involved in the environment management are understood and appropriately assigned.
▪ Use the CoE Starter Kit to make necessary adjustments to citizen development use cases as needed.
The management team was not clear that the environments, services,
and default capacities included in the SaaS subscription license, mainly
the production and sandbox environments and hosted build auto-
mation services, were the minimum needed to run the solution on an
ongoing basis, but were insufficient on their own to complete all the
implementation activities required for the moderately complex rollout.
Implementation Guide: Success by Design: Guide 10, Data management

Deploy faster and more efficiently.
Introduction
Data surrounds us every day, like a blanket. Whether
you are posting on your social media, scheduling
a doctor’s appointment, or shopping online, the
information collected is one of the most valuable
assets of any business.
With the right data, organizations can make informed decisions, im-
prove customer engagement, and gather real-time information about
products in the field.
This chapter aims to break down the various functions within data management that collectively ensure information is accessible, accurate, and relevant for the application’s end users. We focus on the most common discussion points surrounding the lifecycle of data within a project.

In this chapter, we discuss the many ways data plays a part in defining a solution. Data plays a vital role in the success of every deployment.

You learn about:
▪ Data governance
▪ Data architecture
▪ Data modeling
▪ Data migration
▪ Data integration
▪ Data storage
▪ Data quality

Regardless of your role, take the time to consider what is important to each person interacting with the data. For example, users of a system focus on their data quality, ability to search, data relevance, and performance, while architects and administrators are focused on security, licensing, storage costs, archival, and scalability.
Data stewardship

A data steward is a role within an organization responsible for the management and oversight of its data assets, with the goal of providing that data to end users in a usable, safe, and trusted way. By using established data governance processes, a data steward ensures the fitness of data elements, both content and metadata. This is a specialist role that incorporates the processes, policies, guidelines, and responsibilities for administering an organization’s entire data estate in compliance with policy and regulatory obligations.
Data quality

Data quality cannot be treated as a secondary concern; it sits at the front of data governance policies. Data should be high quality and fit for whatever its intended use is. Driving data’s accuracy and completeness across the organization is typically managed by dedicated teams who may use a variety of tools to scrub the content for accuracy. Although these tools increasingly aid the process, this is still typically a human responsibility. The users who are most familiar with their data should typically be the gatekeepers for cleansing, including standardization and adherence to the policies outlined by the data steward.
For example, one retailer wanted to better target its customers through email campaigns, which in the past had failed to deliver expected results because incorrect contact information was being captured. While defining this use case, the company also set up data governance principles that define what level of data quality is required for email campaigns. They were challenged to define “good” data to satisfy the use case. This simple use case uncovered other issues: the company found that online customers who buy as guests could enter any value in the email field, with no validation. This led to data stewards and LOB process owners setting up new validation processes.
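The guest-checkout email problem lends itself to a simple automated check. The sketch below is illustrative only; the pattern, function names, and sample records are our own, not from the retailer’s system, and a real pipeline might add MX-record checks or a verification service:

```python
import re

# Conservative syntactic check; deliberately simple for illustration.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_plausible_email(value: str) -> bool:
    """Return True if the value looks like a deliverable email address."""
    return bool(EMAIL_PATTERN.match(value.strip()))

def audit_contacts(contacts: list[dict]) -> list[dict]:
    """Flag guest-checkout records whose email would fail a campaign send."""
    return [c for c in contacts if not is_plausible_email(c.get("email", ""))]
```

A rule like this catches the “any value in the email field” records before they poison a campaign, and gives data stewards a concrete list to remediate.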
Without data governance in place, organizations struggle to control the corporate assets needed to ensure data quality. During the requirements gathering stage of your implementation, start paying particular attention to the availability of data required for your solution. Early discussion and identification goes a long way in defining your use cases.

For an example of a proper use case, see “A Show-Don’t-Tell Approach to Data Governance.”
Data architecture

After data governance is understood within the organization, the next step is to look at data architecture and the different types of enterprise data.
The key point to data architecture is to create a holistic view of the data
repositories, their relationships with each other, and ownership. Failure to
implement data architecture best practices often leads to misalignment
issues, such as a lack of cohesion between business and technical teams.
When you prepare your data architecture, use the questions below to guide you.
▪ How do you use your data?
▪ Where do you store your data?
▪ How do you manage your data and integrate it across your lines of business?
▪ What are the privacy and security aspects of the data?

Before you can start working with data, it is important to know what typical enterprise data is made up of, as illustrated in Figure 10-1.
Fig. 10-1 Typical enterprise data landscape (diagram): Dynamics 365 at the center, exchanging data with corporate line-of-business applications, telephony interactions, master data (accounts, employees, products), Exchange server-sync email, knowledge base articles, SharePoint document libraries, and a corporate accounting application for financials.
Master data

Master data is your source of common business data and represents a single point of reference across the organization.
Configuration data
Configuration data is data about the different setups needed to prepare your production environment for operational use. Besides importing master data like customers and products, you need to import the configuration data. For example:
▪ Currencies
▪ Tax codes
▪ Modes of payment
▪ Address (countries, states, postal codes, etc.)
Transactional data
This type of data is generally high in volume due to the nature of its use.
Transactional data typically refers to events or activity related to the
master data tables. The information is either created automatically or
recorded by a user. The actual information could be a statement in fact
(like in banking), or it could be a user interpretation, like the sentiment of
a customer during a recent sales call. Here are a few other examples:
▪ Communication history
▪ Banking transactions
▪ IoT transactions
▪ ERP transactions (purchase orders, receipts, production orders, etc.)
Inferred data
Inferred data is information not collected by the business or users.
Typically, this information is automatically generated based on other
external factors, which adds a level of uncertainty. For example:
▪ Social media posts
▪ Credit score
▪ Segmentation
Data modeling

With a base understanding of data governance and data architecture, we can now focus our attention on how we plan to store the information we collect. Data modeling should always be completed before any configuration begins. Let us start with a basic understanding of data modeling, recommended practices, and how it relates to a project.

According to the DMBoK 2, “The process of discovering, analyzing, representing and communicating data requirements in a precise form
is called the data model.” While data architecture is at an elevated level
and involves a data architect looking at the business requirements
broadly, data modeling is at a lower level and deals with defining and
designing the data flows in precise detail.
Data architecture and data modeling need to work together when de-
signing a solution for a specific business problem. For example, if your
organization wants to implement an e-commerce solution, you cannot
do that unless somebody knows and has defined the existing data ar-
chitecture and data models, different integrations in play, existing data
imports and exports, how customer and sales data is currently flowing,
what kind of design patterns can be supported, and which platform is a
better fit into the existing architecture.
Forecasting capacity

The capacity your environment consumes should be part of the overall environment strategy. Every environment should be assessed for storage requirements based on usage. For example, the development environment does not require as much storage as a production environment to operate.

First, you need to calculate the approximate volume and size of the data to properly come up with an estimate. You can gather this detail from your existing data sources. You should come up with a size, either in megabytes (MB) or gigabytes (GB), needed in production.

Based on the estimated size for production, you then need to allocate a percentage for each environment. Figure 10-4 provides an example.

Fig. 10-4

Environment | Use | % of production
Production | Contains all data required for release | 100
Training | Contains a sampling of data for training | 15
QA | Contains a sampling of data for testing | 15
SIT | Contains a sampling of data for testing | 15
DEV | Contains limited data only for development needs | 5

The best practice is to build a data storage forecast for a minimum of three years, including an average increased annual volume. We recommend that when designing and configuring your different environments, you discuss the forecast with your system integrator and Microsoft.
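The forecasting arithmetic can be sketched as follows. The production size, growth rate, and allocation percentages below are hypothetical; substitute your own estimates and the percentages you settle on in your version of Figure 10-4:

```python
# Hypothetical figures; replace with estimates from your own data sources.
PRODUCTION_GB = 200          # estimated production database size today
ANNUAL_GROWTH = 0.20         # expected yearly data growth rate
ALLOCATION = {               # per-environment share of production size
    "Production": 1.00,
    "Training": 0.15,
    "QA": 0.15,
    "SIT": 0.15,
    "DEV": 0.05,
}

def forecast(years: int = 3) -> dict[str, float]:
    """Project per-environment storage (GB) over the forecast horizon."""
    future_prod = PRODUCTION_GB * (1 + ANNUAL_GROWTH) ** years
    return {env: round(future_prod * pct, 1) for env, pct in ALLOCATION.items()}
```

Running `forecast(3)` gives the three-year picture the text recommends as a minimum planning horizon.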
Configuration data and data migration

Once you have a good understanding of your business requirements and project scope, and have set up proper data governance, data architecture, and data models, you need to start the preparations for importing data so that your solution is ready for go live(s).

There are two types of data we are talking about: configuration data and migrated data.
Managing these two data imports is often the most challenging part
of Dynamics 365 implementations. We often see customers underes-
timating the efforts required and projects suffering as issues surface
later. In this section, we talk about plans for configurations, plans for
data migration, different environments, identifying data sources, ETL
(Extract, Transform, Load) or ELT (Extract, Load, Transform) processes,
and finally, staffing the right resources who can manage these efforts.
Since the Dynamics 365 offerings are highly configurable, you need
tight control over the different configurations required to be set. With
so many development and testing environments and different team
members, there is a high chance of incorrect configurations being set,
leading to errors in processes. In the case of Dynamics 365 Finance,
Supply Chain Management, and Commerce, we recommend setting
up a dedicated golden configuration environment to capture and
maintain the correct configuration data. A gold environment can
help ease the pain in tracking, storing, updating, and controlling the
configuration data. Gold environment settings should be kept pristine,
have tight access control, and no one should be allowed to create any
transactions. You can use the golden configuration database to restore
other environments for testing, data migration, and even initial go live.
This plan also represents the functional scope for the solution. As described in Chapter 7, “Process-focused solution,” you start with your process definition and scope for the implementation. This process requires functionality to be enabled in the application, and this functionality requires configuration and master data, and eventually some transactional data in connection with the first use of the application, like opening balances. The configuration plan represents this scope in terms of system enablement and the correlation between all of these elements.
For example, flagging the setups that only need to be imported once
as they are shared across business units or legal entities versus set-
ups that need to be imported for each phase as they are not shared.
A configuration plan can help you be organized and consider the
requirements for all the subsequent business units or legal entities.
Another example, in the case of Dynamics 365 Finance, Supply Chain
Management, and Commerce, can be whether you want to use data
sharing features to copy the configurations across companies instead
of manually importing those configurations into each company.
The configuration plan also captures the configurations and master data required to be loaded.
This plan also helps the team be prepared for cutover and calculates
the required downtime window to import all necessary data. The
team should test the configuration plan a number of times, note the
time it takes for all the configurations, and confirm that the total time
required is within the cutover window.
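The cutover-window check described above can be sketched as a simple calculation. The step names, durations, and contingency buffer below are invented for illustration; the real numbers come from timing your own rehearsals:

```python
from datetime import timedelta

# Durations measured during cutover rehearsals (hypothetical values).
rehearsed_steps = {
    "import currencies and tax codes": timedelta(minutes=20),
    "import master data": timedelta(hours=3),
    "import opening balances": timedelta(hours=2, minutes=30),
    "validation and sign-off": timedelta(hours=1),
}

def fits_window(steps: dict, window: timedelta, buffer: float = 0.25) -> bool:
    """Check total rehearsed time, plus a contingency buffer, against the window."""
    total = sum(steps.values(), timedelta())
    return total * (1 + buffer) <= window
```

Repeating the rehearsal and re-running the check as durations firm up gives the team early warning when the cutover window is at risk.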
Migration planning
When deploying a solution, data is particularly important. Different
departments within the organization can find it challenging to support
certain activities when there is no data in the system.
When building a plan, keep in mind that data migration activities can be disruptive and should not coexist with other testing activities, so it is advised to procure a dedicated high-tier data migration environment.
Data sources for migration commonly include:
▪ SQL database
▪ Third-party database
▪ External web services
▪ Microsoft Access
▪ Flat files/Excel
Environments

A commonly missed factor is the sizing of the import and staging databases required for running migration tooling and cleansing data. You need to make sure the environments used are sized appropriately to handle the volumes of data in scope. It is recommended practice to have all databases and environments running in the same location and region.
During environment planning, the appropriate environments should
be sourced for data migration.
Data mapping
The process of data mapping can start once the solutions for data
modeling have been defined. You should be organized to keep the
process systematic and simple.
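A data mapping can start as something as simple as a source-to-target dictionary. In this sketch the legacy column names are hypothetical and the Dataverse-style target names are illustrative; the point is keeping the mapping systematic and surfacing gaps explicitly:

```python
# Hypothetical mapping from a legacy CRM export to Dynamics 365 columns.
FIELD_MAP = {
    "cust_name": "name",
    "cust_phone": "telephone1",
    "cust_mail": "emailaddress1",
}

def map_record(legacy: dict) -> dict:
    """Translate one legacy row; unmapped source fields are reported, not dropped silently."""
    mapped = {target: legacy[src] for src, target in FIELD_MAP.items() if src in legacy}
    unmapped = set(legacy) - set(FIELD_MAP)
    if unmapped:
        mapped["_unmapped"] = sorted(unmapped)  # surface gaps for the mapping workbook
    return mapped
```

Recording unmapped fields rather than silently discarding them keeps the mapping workbook honest as source extracts evolve.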
Roles and responsibilities

Customers and partners should staff the project team with the right resources who understand data and the tools in Dynamics 365.

Fig. 10-7 Roles and responsibilities (table).
Data integration

Data integration is done to bring data into the system or out to other systems. Typically, this happens through an event or in a batch on a schedule. For example, a transmission when a record is created or updated is event driven, while a nightly scheduled transmission of data is a batch.
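The two trigger styles, event driven versus scheduled batch, can be sketched in miniature. This is an in-memory illustration of the pattern only, not an integration framework; all names are our own:

```python
from typing import Callable

class IntegrationBus:
    """Toy dispatcher contrasting event-driven and batch delivery."""

    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable]] = {}
        self.outbox: list[dict] = []

    def on(self, event: str, handler: Callable) -> None:
        """Event driven: register a handler to run as soon as a record changes."""
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, record: dict) -> None:
        for handler in self.handlers.get(event, []):
            handler(record)

    def queue(self, record: dict) -> None:
        """Batch: accumulate changes for the nightly scheduled run."""
        self.outbox.append(record)

    def run_batch(self) -> int:
        """Deliver everything queued since the last run; return the count."""
        sent, self.outbox = len(self.outbox), []
        return sent
```

The design trade-off is the one the text describes: events give low latency per record, while batches amortize overhead across many records on a schedule.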
Refer to Chapter 16, “Integrate with other solutions” for more details.
Data quality
Once data is migrated and integrated to your solution, it is critical that
information remains accurate, complete, reliable, and, most importantly,
up to date.
You can use various techniques to manage the quality of the data,
including data profiling, data cleansing, and data validation. The most
important aspect of data quality to understand is that data quality is
the responsibility of everybody in an organization. We see many cus-
tomers who overestimate the quality of their data and underestimate
the effort it takes to get it into shape. Keeping data quality top notch
requires following all the principles we highlighted in this chapter and
equally strong leadership to drive the habit of managing data on an
ongoing basis.
The benefits of data do not just stop there. Having the right data
means you can start leveraging ML and AI today to predict what your
customers need tomorrow.
Product-specific guidance
Up to this point in the chapter, our data related guidance has applied
to Dynamics 365 Finance, Supply Chain Management, Commerce, as
well as Customer Engagement application projects. While both appli-
cations live in the Dynamics 365 ecosystem and customers frequently
adopt both systems, often simultaneously, there are differences
between the two. This can mean differences in how each application
should manage its data.
Customer Engagement
This section includes a number of recommendations and resources
provided in Customer Engagement to help manage modeling, storage,
migration, and archival.
Data modeling
Data modeling is a science, and there are data modeling professionals
and established standards for data modeling. To be effective with
Dynamics 365 data modeling, you do not have to be a professional
data modeler or use any special tools. Popular tools like Microsoft Visio can be used to quickly create a basic entity relationship diagram (ERD) that visualizes the relationships and flow of data between tables. In this section, we discuss some general best practices for data modeling for Dynamics 365 deployments.
Start with what you need now but design the data model in a way that
supports what you are going to be doing in the future. For example,
if you know that down the road you need to store additional details
about sales territories, using a text field for territory now makes it more
difficult to implement than if you use the territory entity relationship.
Plan for what is coming.
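The territory example can be illustrated with two alternative shapes for the data model. This is a language-neutral sketch using Python dataclasses, not Dataverse syntax; the attribute names are hypothetical:

```python
from dataclasses import dataclass

# Free-text design: territory details cannot be added later without rework,
# and values drift ("West", "west", "W. Region").
@dataclass
class AccountV1:
    name: str
    territory: str

# Relational design: territory is its own table, so attributes can grow
# without touching the account table.
@dataclass
class Territory:
    name: str
    manager: str = ""
    sales_target: float = 0.0

@dataclass
class AccountV2:
    name: str
    territory: Territory
```

With the relational shape, adding "manager" or "sales_target" later is a change to one table; with the free-text shape it means migrating every account record.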
Data storage
This section provides product-specific guidance you can use while
implementing or maintaining your solution.
Storage capacity

Storage capacity is a standard calculation within the Power Platform that is easily managed by the system administrator. The Power Platform admin center is the tool you should use to maintain visibility of storage and consumption. Within the Power Platform admin center, go to Resources > Capacity > Dataverse for more details about your capacity entitlements, as shown in Figure 10-8.
Fig. 10-8
Storage segmentation
To better understand how capacity is calculated within Customer
Engagement, the following provides the breakout based on storage
type and database tables.
Dataverse database: All database tables are counted for your data-
base except for logs and files below.
Dataverse files: The following tables store data in file and database
storage:
▪ Attachment
▪ AnnotationBase
▪ Any custom or out-of-the-box entity that has fields of datatype file
or image (full size)
▪ Any table that is used by one or more installed Insights applications
and ends in “-analytics”
▪ AsyncOperation
▪ Solution
▪ WebResourceBase
▪ RibbonClientMetadataBase
Data migration
This section provides product-specific guidance you can use while
implementing or maintaining your solution.
Several tools are available for importing data into your Dynamics 365 solution. The following is not an exhaustive list, but it includes some of the most common options.
Power Apps Excel add-in This add-in can be used to open entities
directly in Excel and create and update records. Records are updated
or created directly in Dataverse. Not all entities support updates from
Excel, and lookup fields must be manually edited to correctly match.
Legacy Dynamics 365 data import utility You can import data to
Dynamics 365 entities from CSV, .xls, .xml, and .zip. While the Dataverse
API Get Data option is recommended for most flat file imports, the
legacy data import option has several unique capabilities that might
be useful in some cases.
▪ Legacy data import can be used to create new entities and fields
and option set values in Dynamics 365. While this is convenient,
the best practice is to add these items from a solution rather than
from data import.
▪ Legacy data import can import multiple files at the same time when multiple files are added to a zip file.
▪ Legacy data import can resolve lookup fields using values not
contained in the primary or alternate keys.
Extract, transform, and load (ETL) software For more complex migrations, such as migrating an entire legacy CRM environment’s data, manual import from flat files is not efficient and can be error prone. Commercially available ETL tools like SSIS or Azure Data Factory, or a number of third-party ISVs, offer services and tools that can be used to create a migration data transformation specification (DTS) that can migrate legacy data to Dynamics 365. This approach can have the following benefits:
▪ Reusability of migration, allowing the data migration developer to
test and refine the migration multiple times before go live.
▪ Delta sync loads when moving the data from an in-production
system to a new system. Users still use the legacy system until the
switchover to Dynamics 365 happens. If the data is loaded into
production several days before go live, there is additional data cre-
ated in the legacy system after the initial load is finished. ETL tools
allow the data loaded to be filtered to only include data changed
since the last load, ensuring that the migrated data is up to date
when the users start using the new system.
▪ Consistency with data integration. Sometimes data is migrated in
an initial load, and then updated via an ongoing integration. In
these cases, it is optimal to use the same tooling that you use for
the ongoing integration to also migrate the initial data load, and
in those cases the same DTS and field mappings may be used for
the data migration as well.
▪ More complex data transformation. When moving data via flat
files, you can choose what data is mapped to the new system, but
if you wish to change or transform the data, it must be done man-
ually in the source flat files. With an ETL based migration, you have
flexibility to transform the data during the migration. For example, if there are five different types of accounts in the source data but you wish to consolidate them to three, ETL tools allow for this transformation in the data translation specification.
▪ Updates and upserts. The flat file and Excel imports support
updating records that match record keys, but sometimes your
matching logic is much more complex. Say you want to both
insert and update records (upsert). This is complex and tedious
to do with the out-of-the-box data import options, as you don’t
always have the record IDs. ETL tools allow the data migration
developer to define record matching logic and update, insert, or
upsert records based on any matching criteria. This is also helpful
for making sure that duplicate records are not being created in the
target.
▪ More flexibility in mapping lookup fields. Lookups to other entities can be challenging for data imports, especially fields like “customer” that are polymorphic, or fields like “owner” that have special properties. ETL tools give the data migration developer more control over how such legacy values are resolved.
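The upsert benefit described above can be sketched as matching on a business key rather than a record ID. This is an in-memory illustration only; `accountnumber` is used as an example key, and a real ETL tool would apply the same logic against the target system’s API:

```python
def upsert(target: dict, incoming: list[dict], key: str = "accountnumber") -> dict:
    """Insert-or-update by a business key, instead of relying on record IDs
    that the legacy system cannot supply. Returns counts of each action."""
    stats = {"inserted": 0, "updated": 0}
    for row in incoming:
        match = row[key]
        if match in target:
            target[match].update(row)   # existing record: update in place
            stats["updated"] += 1
        else:
            target[match] = dict(row)   # no match: create a new record
            stats["inserted"] += 1
    return stats
```

Because matching happens on a key the business controls, repeated runs converge on the same result instead of creating duplicates, which is exactly the property the flat-file import options make hard to achieve.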
Power Automate and Azure Logic Apps These can be used to import data and provide connectors to over 300 services. These options provide many of the same benefits as an ETL tool, and since Power Automate includes connectors to many leading services and application platforms, it can be a great option to migrate data from those services. That said, these options are not really meant to perform large data migration jobs; they are best considered for low-volume or low-rate delta migrations that are not too demanding on throughput.
As mentioned earlier, over time the volume of your data grows, as does the cost. Implementing a data archival and retention strategy allows regular shifting of data from one storage location to another. As data grows, the underlying SQL database comes under strain, which impacts performance and degrades the user experience. Even though Dynamics 365 is a SaaS application and you do not manage SQL Server yourself, it is wise to have periodic data maintenance routines in place to archive and delete unwanted data to keep the database nimble.
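A retention rule of the kind such a maintenance routine might apply can be sketched as follows. The three-year threshold and record shape are hypothetical; real policies come from your legal and business requirements:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=3 * 365)  # hypothetical retention policy

def partition_for_archive(rows: list[dict], now: datetime) -> tuple[list, list]:
    """Split records into (keep, archive): closed records older than the
    retention threshold are candidates for moving to cheaper storage."""
    keep, archive = [], []
    for row in rows:
        closed = row["closed_on"]
        if closed is not None and now - closed > RETENTION:
            archive.append(row)
        else:
            keep.append(row)   # open records, or recently closed ones, stay
    return keep, archive
```

Running a routine like this on a schedule keeps the operational database lean while preserving old records in an archive store for compliance.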
Dynamics 365 Finance, Supply Chain Management, and Commerce have a number of data management tool sets and data maintenance cleanup schedules, which we describe in more detail here.

Best practices for handling PII (personally identifiable information) data:
▪ Avoid storing unnecessary PII data in Dynamics 365 Finance and Operations apps if possible.
▪ Identify, tag, and classify PII data that you need to store.
▪ Dynamics 365 Finance, Supply Chain Management, and Commerce use Azure SQL database, which allows data encryption at rest and in transport.
▪ Use X++ APIs/patterns to encrypt and decrypt data at the column level for added security.
▪ Build edge applications/integrations to help store data and mitigate data residency and PII requirements.

Data management toolsets

Dynamics 365 Finance, Supply Chain Management, and Commerce have a rich toolset and processes available to support customers’ data movement and migration requirements. Customers can use the features in the app and in LCS to combine different approaches. Figure 10-10 highlights the options: data management import/export (copy legal entity, templates); database operations (backup, restore, refresh, and point-in-time restore; production to sandbox, sandbox to production, and Tier 1 to sandbox); and the data sharing framework.
Database operations
Though you can use data entities and data packages to move small configurations, this may not always be practical. You may often find it handy to move entire databases.
Data sharing framework

Cross-company sharing is a mechanism for sharing reference and group data among companies in a Dynamics 365 Finance, Supply Chain Management, and Commerce deployment.

This framework was introduced to allow sharing setups and master data across multiple legal entities. This facilitates master data management when you are dealing with multiple legal entities and want to designate one legal entity as the master for some setup and parameter data. For example, tax codes may not change from company to company, so you can set them up in one legal entity and use the cross-company data sharing framework and its policies to replicate the data across the rest of the legal entities.

References
▪ Database movement operations home page
▪ Submit service requests to the Dynamics 365 Service Engineering team
▪ Data Management/Data Warehousing information, news and tips, SearchDataManagement (techtarget.com)
▪ Insights-Driven Businesses Set The Pace For Global Growth (forrester.com)
▪ DMBoK, Data Management Body of Knowledge (dama.org)
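The tax-code example can be sketched as a simple replication step. This in-memory model is illustrative only; the real framework is configured through data sharing policies in the product, not code, and the entity and table names here are hypothetical:

```python
# Hypothetical in-memory model: the designated master's tax codes
# are replicated to the other legal entities.
legal_entities = {
    "HQ": {"tax_codes": {"VAT20": 0.20, "VAT5": 0.05}},
    "UK": {"tax_codes": {}},
    "DE": {"tax_codes": {}},
}

def replicate(master: str, table: str, entities: dict) -> None:
    """Copy shared reference data from the master entity to all others."""
    source = entities[master][table]
    for name, entity in entities.items():
        if name != master:
            entity[table] = dict(source)  # each entity gets its own copy

replicate("HQ", "tax_codes", legal_entities)
```

The design point is the same as the framework's: maintain shared reference data once, in one designated master, and let policy-driven replication keep the other legal entities consistent.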
Checklist

Data governance and architecture

▪ Establish data governance principles to ensure data quality throughout the business processes lifecycle, focusing on the data’s availability, usability, integrity, security, and compliance.
▪ Appoint a data steward to ensure data governance principles are applied.
▪ Define proper use cases and make data available to support the business processes.

Configuration data and data migration

▪ Create, maintain, update, and test a configuration plan throughout the project lifetime. It accounts for all the required configuration data you import to support go live.
▪ Ensure the data migration analyst, data migration architect, and data steward create a plan for data migration that includes identifying data sources, data mapping, environments, ETL, testing, and cutover planning.
▪ Focus on maximizing the data throughput during migration by following Customer Engagement apps best practices.
▪ Optimize for network latency by staging in the cloud and batching requests.
The system was hard to maintain and fraught with issues as customer
data was divided in a number of applications and databases.
The first phase of the project targeted the UK business and involved
moving over 150 of their business processes to Dynamics 365.
The company now has a unified view of their customers, which helps
them provide better customer service and allows marketing efforts to
be more targeted.
Introduction
Generally, everything has a life and its own lifecycle.
Your solution goes through the same process, starting
with conception, moving through implementation,
then continuous operation, and finally to transition.
A solid application lifecycle management (ALM) strategy brings a
successful solution to customers with complete visibility, less manual
interaction with automation, improved delivery, and future planning.
With ALM, you have defined processes and practices, a structured team, and tools at your disposal. ALM is the management of the lifecycle (from conception to operation) of your solution.

In this chapter, we talk about ALM in the context of the overall solution lifecycle. Your solution may use one or more of the Dynamics 365 Business Applications such as Finance, Supply Chain Management, Sales, Field Service, or Commerce.
Implementation Guide: Success by Design: Guide 11, Application lifecycle management

What is ALM?

ALM is managing the end-to-end lifecycle of your solution, starting from procuring the Dynamics 365 license, to mapping your business requirements in your application, designing your solution, extending your custom requirements, validating and testing the solution considering business requirements, deploying the solution to business, and maintaining it through the lifetime of your solution (Figure 11-1).
Fig. 11-1 The application lifecycle: planning, requirements, design, configuration, development, testing, deployment, and maintenance.
It’s very important that the implementation team has the appropri-
ate ALM tooling. ALM tools (such as Microsoft Azure DevOps) are
required to manage all aspects of the solution, including application
governance, requirement management, configuration, application
development, testing, deployment, and support. The ALM tool should
be well connected with all team members as well as all processes. For
example, when a developer checks in the code, they should mark the
changes with a particular work item, which connects the development
work item with the business requirement work item to the requirement
validation work item.
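The check-in linkage described above commonly relies on referencing work item IDs in commit messages, for example the `#<id>` convention used by tools such as Azure DevOps. A small sketch for extracting such references follows; the convention, IDs, and function name are illustrative:

```python
import re

# Matches work item references of the form "#1234" in a commit message.
WORK_ITEM = re.compile(r"#(\d+)")

def linked_work_items(commit_message: str) -> list[int]:
    """Extract the work item IDs a commit message references, in order."""
    return [int(m) for m in WORK_ITEM.findall(commit_message)]
```

A build pipeline can run a check like this to reject commits that do not reference any work item, keeping the chain from business requirement to development to validation intact.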
Why have an ALM strategy?
From the time you conceptualize your Dynamics 365 solution, you
start the application lifecycle: from the project Initiate phase, to the
Implement phase, Prepare phase, and finally the Operate phase
(Figure 11-2).
During the lifecycle of the Dynamics 365 solution, you identify partner
teams, require project management, gather and map business process-
es, develop new processes, perform testing, deploy code, and finally
maintain the solution in production.
Fig. 11-2 The project phases: Initiate, Implement, Prepare, and Operate.
Think about taking a trip. You set dates, book flights and hotels, and plan
places to visit. All that planning will likely result in a great vacation.
In the same way, a well-planned ALM will lead to a solution that grows
your business. With ALM recommended practices, you’re set for success
in your solution implementation. You gain visibility into several areas:
▪ The current work items (requirement, development, testing)
▪ The work items completed, in progress, or planned
▪ The teams that worked, are working, or will work on specific tasks
▪ Issues and risks associated with solution implementation
▪ Development best practices
▪ Code history or code version control
▪ Build and release automation
▪ A testing plan, test cases, and test results against requirements
Your ALM may not be perfect right from the start. But it’s a foundation.
You can refine your ALM practices over time.
Like a poorly planned trip, if you don’t have effective ALM, you can
expect a negative impact on your implementation, solution quality,
and business satisfaction.
With effective ALM and well-defined practices, you can keep your
solution healthy and up to date with the latest application releases.
During implementation
While you’re implementing the solution, you go through multiple
phases: Initiate, Implement, and Prepare. ALM applies to all these
aspects in your implementation:
▪ Project management
▪ Business process management
▪ Application configuration
▪ Development
▪ Testing
▪ Bug tracking
▪ Ideas, issues, risks, and documents
▪ Release management
After implementation
When you’re live in production, you’re in the Operate phase. ALM
continues with the following aspects:
▪ Continuous updates
▪ Independent software vendor (ISV) updates
▪ Maintenance and support
▪ New features
▪ Next phase
Project management
Efficient ALM requires having documented processes, templates, and
tools for all project management activities, such as project planning,
cost management, team management, ideas for improvements or
solutions, issues, and risk management. A project manager performs
their responsibilities efficiently when these areas are defined and
documented. For example, you should have a template for the project
plan that the project manager can apply in the Implement phase.
Business process management
ALM tools also help teams manage business processes efficiently. Teams can track business requirements for business processes and connect them with configuration and development.
For example, a functional team member can gather and define business
requirements in a given template and track them in Azure DevOps. They
can have it reviewed by assigning a work item to business users and then
store the business requirement document in a repository.
Application configuration
ALM processes and tools should also include managing application
configurations. Before configuring an application, the business should
define and review business processes and requirements. Functional team members should have defined processes and tools to perform fit-gap analysis and to define the configuration required in the application.
Development
Having an efficient development lifecycle is one of the mandatory
aspects of ALM. In general, the development lifecycle consists of the following: Design, Develop, Build, Test, and Deploy. Continuous integration and continuous deployment (CI/CD) is a recommended practice that enables delivering code changes more frequently and reliably.
After the business requirements are defined and identified, the devel-
opment team should get involved to work on any gaps in the solution.
The development team analyzes the requirements, reviews the
functional design, prepares the technical design, and gets the technical
design reviewed. The development team should use version control,
create development tasks, link check-ins with work items, and prepare
a unit-testing document.
Fig. 11-3: Developers A and B each check in their code as solutions A and B; the combined solution is tested in a sandbox, and after successful testing it is deployed to production.
Testing
Testing is an integral part of ALM. Under ALM, test management
processes and tools should be defined with templates to help manage
test preparation, implementation, and reporting. It should also include
what tools to use for these steps. Chapter 14, “Testing strategy,” pro-
vides more information about test management.
Chapter 20, “Service the solution,” and Chapter 21, “Transition to sup-
port,” cover maintaining the solution and the support process in detail.
Team responsibility
Various team members are involved in an implementation, with separate roles and responsibilities. Multiple teams, such as the customer, partner, and ISV teams, are involved and must work together.
Azure DevOps
Microsoft recommends using Azure DevOps as the tool for managing
or maintaining your ALM practices and processes. For some areas of
Dynamics 365 (such as Finance and Operations apps), Azure DevOps is
the only version control tool.
A standard lifecycle is available for each work item, and you can build
your own custom states and rules for each type of work item.
You can also use Azure DevOps for project management, test management, bug tracking, release management, and many more aspects of your implementation. Take some time to learn more about DevOps tools on Azure.
Microsoft Dynamics Lifecycle Services (LCS) provides tools that support the application lifecycle, including:
▪ Management of Microsoft support
▪ Customization analysis
▪ Subscription estimator
▪ Issue search
▪ Continuous updates
▪ Service requests to Dynamics Service Engineering (DSE)
▪ Environment monitoring and diagnostics
▪ Asset library
The goal of LCS is to deliver the right information, at the right time, to
the right people, and to help ensure repeatable, predictable success
with each rollout of an implementation, update, or upgrade.
You can use the Business process modeler (BPM) tool to define, view, and edit the Finance and
Operations apps out-of-box business processes (in the form of librar-
ies), which you can use in future implementations. The tool helps you
review your processes, track the progress of your project, and sync your
business processes and requirements to Azure DevOps.
Every business is different in some ways; this tool helps you align your
business processes with your industry-specific business processes and
best practices. BPM libraries provide a foundation for your business
process, and you can add, remove, and edit your processes according
to your solution requirements.
Development
In Finance and Operations apps, Microsoft Visual Studio is used as the
development environment. The development lifecycle includes the
following steps (Figure 11-4):
▪ Each developer uses their own development environment
▪ Developers write source code and check in their code to
Azure DevOps
▪ Developers also sync code from Azure DevOps to get the source
code from other developers
▪ Source code can be merged to respective branches depending on
the branching strategy
▪ The build takes the source code from Azure DevOps, uses the build
definition, and creates a deployable package
▪ The build pipeline also pushes the deployable package to the LCS
asset library
▪ Azure release pipelines work with Visual Studio to simplify deploy-
ing packages to UAT
▪ When UAT is complete, the deployable package is marked as a
release candidate to deploy to production
▪ The Dynamics Service Engineering (DSE) team deploys it to production using a service request from LCS

Fig. 11-4: Source code moves from developer virtual machines to the build environment, which produces an application deployable package; the package is deployed for user acceptance testing and, as a release candidate, to production.
Version control
The primary purpose of version control is storing and maintaining source code for customizations, as well as ISV solutions. You develop against local, XML-based files (not the online database), which are stored in version control tools such as Azure DevOps. The following are recommendations for version control branching:
▪ Consider using a minimum branching option
▪ Consider the following recommended branching strategy:
▫ Development Developer check-in and testing with development data (Trunk/Dev)
▫ Test Deploying to the tier 2+ and testing with current production data (Trunk/Main)
▫ Release or Release XX Retesting in the tier 2+ and deploying to production (Trunk/Release) or v-next (Trunk/Release XX)
▪ Run the Customization Analysis Report (CAR) tool to check best practices and resolve errors and warnings
▪ Use the shelve command or suspend changes to keep work safe
▪ Request a code review to ensure code quality
▪ Check in code when a feature is complete, and include changes from one feature in each changeset
▪ Merge each feature in a separate changeset
▪ Don't check in code directly to the test or release branches
▪ Don't check in changes for more than one feature in a single changeset
▪ Don't mark deployable packages from the development and test branches as release candidates
▪ Don't merge untested features into the release branch

Figure 11-5 illustrates a recommended branching strategy: development and code upgrade branches feed the test (main) branch, from which release branches are created (arrows show the direction when creating the branch).

For more information, take a look at our document on how to develop and customize your home page.
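The branch topology above can be sketched with version control commands. Finance and Operations projects typically use TFVC in Azure DevOps; this sketch uses git only to illustrate the flow, and the branch, file, and work item names are hypothetical:

```shell
# Sketch of the Trunk/Dev -> Trunk/Main -> Trunk/Release topology.
# Finance and Operations projects use TFVC in Azure DevOps; git is used
# here only to illustrate the flow, and all names are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b Main                      # the test (main) branch
git config user.email "dev@example.com"
git config user.name "Developer"
echo "<AxClass/>" > MyClass.xml          # X++ metadata lives in XML files
git add MyClass.xml
git commit -q -m "Baseline"
git branch Dev                           # development branch, created from Main
git branch Release                       # release branch, created from Main
git checkout -q Dev
echo "<Method/>" >> MyClass.xml
git commit -q -am "Add feature X AB#42"  # one feature per changeset
git checkout -q Main
git merge -q --no-ff Dev -m "Merge feature X to Main"   # merge features separately
git branch --format='%(refname:short)' | sort           # Dev, Main, Release
```

The key habits carry over regardless of the version control system: one feature per changeset, merge each feature separately, and never commit directly to the test or release branches.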
Build automation
Speed is essential for rapid implementation, and build automation is
the key to achieving this.
The build process is mandatory for any code to run. This process
involves compiling the source code and producing binary files (assem-
blies). A database sync also requires a build first because the schema is
retrieved from the assemblies (and not the XML files).
The Azure DevOps build system provides the following triggers for builds:
▪ Scheduled builds, such as nightly at 6 PM
▪ Continuous integration, such as:
▫ Starting a build as soon as code is checked in
▫ Gated check-in
▪ Manual builds (on demand)
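For git-based repositories, the scheduled and continuous integration triggers might be expressed in an azure-pipelines.yml definition like the following sketch (branch names are illustrative; verify the schema against your Azure DevOps version):

```yaml
# Scheduled build: nightly at 6 PM (18:00 UTC)
schedules:
  - cron: "0 18 * * *"
    displayName: Nightly build
    branches:
      include:
        - Trunk/Main
    always: true

# Continuous integration: start a build as soon as code is checked in
trigger:
  branches:
    include:
      - Trunk/Dev

# Manual (on-demand) builds are queued from the Pipelines UI;
# gated check-in is configured on the build definition for TFVC repos.
```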
Automated testing

As part of development ALM, testing automation should be in place to achieve fast-moving code from development to deployment. In Finance and Operations apps, you can integrate testing and validation in two different ways:
▪ Unit and component level testing using the SysTest framework
▪ Automated testing using Task recorder and the Regression suite automation tool (RSAT)

To keep up with innovation and constant changes in your solution, it's critical to invest in and build continuous validation. Achieving this requires different components (Figure 11-6): author tests (unit tests, Task recorder tests, component tests), integrate tests (test module setup, integration with the build system), and run tests.

RSAT significantly reduces the time and cost of UAT for Finance and Operations apps. RSAT lets functional super users record business tasks by using Task recorder and converting the recordings into a suite of automated tests, without having to write source code.
For more information about automated testing, we offer tutorials on testing and validations, using the Regression suite automation tool, and acceptance test library resources.

Deployment

Let's review the key concepts for Finance and Operations apps deployment:
▪ Deployable package A unit of deployment that can be applied
in an environment
▪ Deployment runbook A series of steps that are generated to
apply the deployable package to the target environment
▪ AX Installer Creates a runbook that enables installing a package
In the past, upgrading to the latest version was a hefty task. Once you have gone live with Finance and Operations apps, the days when customers need large upgrade implementations are gone.
With One Version, all customers benefit from being on the same version, with access to the latest capabilities available, and no one is stuck on less capable older versions.

To learn more about One Version or continuous updates, see One Version service updates overview - Finance & Operations | Dynamics 365 | Microsoft Docs. You can find many of your questions answered in One Version service updates FAQ - Finance & Operations | Dynamics 365 | Microsoft Docs.
Solutions
Solutions are the mechanism for implementing Dev ALM in Power
Platform apps. They’re the vehicle that distributes components across
the different environments.
Use connection references and environment variables to define information about a connector or to store parameter keys and values. This streamlines the process of migrating solutions between environments.

You should have a good understanding of the solution concepts in the Power Platform—the solution is the baseline for the components across their lifecycle. For more information, refer to our overview of solution concepts, and to our guides on how to:
▪ Organize your customizations with solutions
▪ Create and deploy solutions
▪ Use tools to migrate your solutions between environments
▪ Support team development
Tools
Several tools are available to help automate the process of managing
and shipping solutions, which can help increase efficiency, reduce
manual labor, and reduce human error when working with solutions
throughout their lifecycle.
▪ Version control Azure DevOps and GitHub are popular examples of version control applications.
▪ The Configuration Migration tool This tool enables you to
move configuration and reference data across environments.
Configuration and reference data is data that is stored as records
in Dataverse but that supports various aspects of the application
such as USD, portal configuration, and lookup values.
▪ Package deployer The package deployer lets administrators or
developers deploy comprehensive packages of relevant assets to
Dataverse environments. Packages can consist of not only solution
files, but also flat files, custom code, and HTML files. You can automate the execution of the package deployer with the DevOps build tools task Deploy package.
▪ Solution packager This tool can unpack or pack a compressed
solution file into multiple XML files and other files or vice versa,
so they can be easily managed by a source control system or be
deployed to an environment from source control. The execution of
this can be automated with DevOps build tools tasks.
▪ The Power Apps CLI The Power Apps CLI is a simple, single-stop
developer command-line interface that empowers developers
and app makers to create code components. You can also use this
interface to automate the deployment of portal configurations.
▪ PowerShell cmdlets The PowerShell cmdlets for administrators,
app makers, and developers allow automation of monitoring,
management, and quality assurance tasks that are possible
through the Power Apps admin center user interface.
▪ Test Studio Use Test Studio to ensure the quality of your
canvas apps.
▪ Easy Repro Use Easy Repro or Playwright to generate test scripts that can be used to regression test changes to your model-driven apps.
▪ Performance Insights Use performance insights to analyze runtime user data and provide recommendations.

Check out our additional resources for:
▪ Microsoft Power Platform Build Tools for Azure DevOps
▪ Power Apps build tools for Azure DevOps
▪ Key concepts for new Azure Pipelines users
The implementation team should gather the required processes, tools, and teams before project kickoff, such as:
▪ Project plan template
▪ Business requirement document template
▪ Functional and technical spec templates
▪ Test plan and test suite template
▪ Defined tools such as DevOps template
▪ Defined roles required for partner and customer teams
Workshop scope
FastTrack ALM workshops are designed for implementers who want to
make sure that their development approach meets the requirements
of the implementation and is aligned with typical best practices. The
workshop could cover several topics:
▪ Development work management Focuses on high-level, day-to-day developer activities. It reviews whether the team has development guidelines and best practices, and how it manages work items such as requirements, tasks, and bugs.
▪ Code management Reviews your version control, branching,
code merging, and code reviews strategy. Branching can be simple
or complex depending on the project and phases. It also looks at
how customization is checked in, such as gated check-in.
▪ Build management Looks at your build strategies, such as manual build or automated build. It mainly reviews your build plan for build definitions, different variables, and build triggers for your implementation. It also reviews whether you're using a build environment or Azure-hosted builds.

An ALM workshop provides guidance and best practices.
▪ Release management Assesses your deployment plan for
Operations apps into implementation, any findings and recommendations could cause
significant rework.
Implement application lifecycle management in Finance and Operations apps - Learn | Microsoft Docs
Checklist
Case study

Global transport systems company finds ALM strategy to be a cornerstone of implementation
A global transport systems company embarked on a multiple-phase,
multiple-region implementation of Dynamics 365 Sales, Customer
Service, and Field Service with an aggressive schedule for achieving a
minimum viable product (MVP) go live.
environments too late in the design phase, after unmanaged
solutions were already built and deployed to these environments
during Phase 0.
▪ Merging the Sales, Customer Service, and Field Service solutions
into a single “workstream” solution during the build phase of the
implementation lifecycle.
▪ Resolving errors too quickly during solution deployments because
of pressure from project leadership, instead of taking the time
to understand each problem’s root cause. (Such pressure led to
developers customizing directly within the test environment.)
As the project team prepared for the go live, the seemingly indepen-
dent decisions made during the initial phases resulted in deployment
issues that eventually stalled the go live. Investigations by Microsoft
Support confirmed that there was no ALM strategy in place, and iden-
tified the key issues:
▪ Unmanaged solutions caused solution-layering issues for testing and
production, affecting the sanctity and integrity of the environments
▪ A prototype solution employed for production purposes introduced
quality issues
▪ Failure to use ALM practices such as DevOps caused traceability
issues and prompted developers to build functionality that wasn’t
aligned with customer requirements
▪ Suboptimal code was implemented because tools such as solution
checker weren’t used to enforce code quality
▪ Testing without traceability was insufficient, and buggy code was
deployed to other environments
As the old saying goes, “By failing to prepare, you are preparing to fail.”
The MVP go-live date was delayed by 12 weeks and Microsoft worked
alongside the project team to determine the root cause of each issue.
The team eventually acknowledged that a series of seemingly unre-
lated decisions affected the MVP go live, and they sent every team
member working in a technical role to an ALM refresher training. The
project team also confirmed plans that should have been solidified at
the beginning of the project, including an environment strategy and a
solution management and ALM approach.
After the refresher training, the project team started with a “crawl to
walk” approach. During the “crawl” stage, they implemented mandatory
ALM practices with these governance elements:
▪ Cleaning up their production and test environments, and moving
from unmanaged to managed solutions
▪ Implementing a build process to deploy builds from development
to test to production
▪ Establishing a governance process to restrict developer access to
production and test environments
▪ Adding a bug-triaging process that allowed the development team
to troubleshoot and fix issues in development environments and
use the build process to deploy fixes in higher-tier environments
▪ Mandating the generation of solution checker reports
Guide 12

Security
“Businesses and users are going
to embrace technology only if
they can trust it.”
– Satya Nadella, Chief Executive Officer of Microsoft
Introduction
In this chapter, we look at the fundamental
security principles applicable to Microsoft
Dynamics 365 implementations.
Next, we discuss in more detail how some of these principles apply
differently to Dynamics 365 Customer Engagement and Dynamics 365
Finance and Operations applications. We then address the importance
of making security a priority from day one, with specific
examples from each product that build upon the concepts we’ve
discussed. Finally, we look at how to avoid common mistakes by
examining some key anti-patterns.
Security overview
Security is the protection of IT systems and networks from theft or
damage to their hardware, software, or data and disruption of the
service. Dynamics 365 is a software as a service (SaaS) offering from
Microsoft. In a SaaS service, data and applications are hosted with a
provider (in this case, Microsoft) and accessed over the internet. In this
deployment model, the customer maintains ownership of the data,
but shares application control with the provider. Therefore, security,
compliance, privacy, and data protection are shared responsibilities
between provider and customer.
Microsoft takes its commitment seriously to safeguard customers' data, to protect their right to make decisions about that data, and to be transparent about what happens to that data. On our mission to empower everyone to achieve more, we partner with organizations, empowering them to achieve their vision on a trusted platform. The Microsoft Trusted Cloud was built on the foundational principles of security, privacy, compliance, and transparency, and these four key principles guide the way we do business in the cloud (Figure 12-2). We apply these principles to your data as follows:
▪ Security Implement strong security measures to safeguard your data
▪ Privacy Protect your right to control and make decisions about your data to help keep it private
▪ Compliance Manage your data in compliance with the law and help you meet your compliance needs
▪ Transparency Be transparent about our enterprise cloud services and explain what we do with your data in clear, plain language
Compliance

Every organization must comply with the legal and regulatory standards of the industry and region they operate in, and many are also subject to additional contractual requirements and corporate policies. Figure 12-3 lists some standard compliance goals and their implementation in Dynamics 365.

Fig. 12-3: Compliance goals include defining and documenting standard operating procedures that meet multiple certification requirements, and running the service in a compliant fashion and collecting evidence.

Microsoft is responsible for the platform, including the services it offers.

Customer responsibility

As a customer, you're responsible for the environment after the service has been provisioned. You must identify which controls apply to your business and understand how to implement and configure them to manage security and compliance within the applicable regulatory requirements of your nation, region, and industry. Refer to Microsoft compliance offerings for more information about regulatory compliance standards and Microsoft products.
Privacy

You are the owner of your data; we don't mine your data for advertising. If you ever choose to end the service, you can take your data with you. Figure 12-4 lists some standard privacy goals and their implementation in Dynamics 365.

Fig. 12-4: Privacy goals: you own your data, you know where your data is located, and you control your customer data. Implementation detail: all data is classified.

How we use your data

Your data is your business, and you can access, modify, or delete it at any time. Microsoft will not use your data without your agreement, and when we have your agreement, we use your data to provide only the services you have chosen. We only process your data based on your agreement and in accordance with the strict policies and procedures that we have contractually agreed to. We don't share your data with advertiser-supported services.
Security
Security is a shared responsibility in a SaaS deployment. This means
that some aspects of security are shared by both the customer and the
provider, other aspects are the responsibility of the customer, and others
are the responsibility of the provider. For Dynamics 365 deployments,
Microsoft as the cloud provider is responsible for security aspects includ-
ing physical datacenter security, the operating system, network controls,
and providing a secure application framework (Figure 12-5).
Fig. 12-5: Microsoft protects you at every layer. The Intelligent Security Graph connects industry partners, certifications, antivirus, and network protections with the Cyber Defense Operations Center (malware protection center, cyber hunting teams, security response center, digital crimes unit) and services across SaaS, PaaS, and IaaS, including conditional access, cloud app security, event management, rights management, Key Vault, Security Center, active protection service, Windows Update, Microsoft 365 advanced threat protection, SmartScreen, and advanced threat analytics.
The following is a list of core security controls available in Dynamics 365:
▪ Security Development Lifecycle
▪ Datacenter security
▪ Data segregation
▪ Encryption
▪ Secure identity
▪ Authorization
▪ Auditing and monitoring

Security goals: safeguard data using state-of-the-art security technology and processes of the industry; use the same identity platform as Microsoft 365, so users have the same username and password for all; and control who can access what data.

The Security Development Lifecycle includes core security training; establishing security and design requirements; creating quality gates and bug bars; performing attack surface analysis and reduction; using approved tools; deprecating unsafe functions; performing dynamic analysis and fuzz testing; conducting a final security review; and creating and implementing an incident response plan.
Datacenters are protected with state-of-the-art physical security (Fig. 12-8: barriers, fencing, perimeter, and building).
Azure has a defense system against DDoS attacks on its platform services.
It uses standard detection and mitigation techniques, and is designed
to withstand attacks generated from outside and inside the platform.
Data segregation
Dynamics 365 runs on Azure, so it’s inherently a multi-tenant service,
meaning that multiple customers’ deployments and virtual machines
are stored on the same physical hardware. Azure uses logical isolation
to segregate each customer’s data from others. This provides the scale
and economic benefits of multi-tenant services while rigorously pre-
venting customers from accessing one another’s data.
Data protection

Data at rest is encrypted with SQL Transparent Data Encryption (TDE) using a key hierarchy:
▪ Windows OS level The Data Protection API (DPAPI) encrypts the service master key (SMK)
▪ SQL instance level The SMK encrypts the database master key (DMK) for the master DB
▪ Master DB level The database master key and a certificate; the database master key can be changed without having to change the actual database encryption
▪ Content DB level The certificate encrypts the database encryption key (DEK) in the content (organization) DB

Capabilities:
▪ SQL TDE performs real-time I/O encryption and encryption of the data and log files to provide data encryption at rest, encrypting the entire SQL database
▪ All data transmitted between servers within the datacenter for mirroring is encrypted as well
▪ Microsoft manages the keys and handles management of encryption

Availability: available now for all Dynamics 365 online environments.

Encryption in transit covers data in transit between a user and the service, data in transit between datacenters, and end-to-end encryption of communications between users.
Authentication: Users
Authentication is the process of proving an identity. The Microsoft identity
platform uses the OpenID Connect protocol for handling authentication.
By default, only authenticated users can access Dynamics 365.
Azure Active Directory provides identity for Microsoft 365 and Dynamics 365 and can federate with other directory services; options include multifactor authentication and ExpressRoute.
Fig. 12-12: Conditional access evaluates conditions (user/group, cloud application, device state, location (IP range), client application, and sign-in risk) and applies actions to cloud and on-premises applications: allow access, enforce MFA per user/per application, or block access.
Authorization
Authorization is the control of access to the Dynamics 365 applications.
Customer responsibility
As a customer, you’re responsible for:
▪ Account and identity management
▪ Creating and configuring conditional access policies
▪ Creating and assigning security roles
▪ Enabling and configuring auditing and monitoring
▪ Authentication and security of components of the solutions other
than Dynamics 365
Transparency

Microsoft is transparent about where your data is located. You know where your data is stored, who can access it, and under what conditions. Dynamics 365 customers can specify the Azure datacenter region where their customer data will be stored. Microsoft may replicate customer data to other regions available within the same geography for data durability, except in specific scenarios, such as the following:
▪ Azure AD, which may store AD data globally
▪ Azure multifactor authentication, which may store MFA data globally
▪ Customer data collected during the onboarding process by the Microsoft 365 admin center

Fig. 12-14: Transparency goals: choose where your data is stored, and be transparent about how we respond to government requests for your data.
Security features
We use three main categories of security features to provide appro-
priate end-user access (Figure 12-15): fundamental security controls,
additional security controls, and manual sharing. Most of the security
requirements should be addressed using fundamental security con-
trols; other options should be used to manage the exceptions and
edge scenarios.
Record ownership
Dataverse supports two types of record ownership:
▪ Organization owned When a record is assigned to
Organization, everyone in the environment can access the record
▪ User or Team owned If not assigned to the organization, a record
is assigned to Business Unit, Child Business Unit, Team, or User
Some out-of-the-box tables are exceptions to these two types; for example, the system user record is owned by a business unit.
Additional controls (hierarchy security, field-level security, access teams, and table relationship behavior) handle exceptions to the fundamental security controls more easily.
Business units
Business units are a security modeling building block that helps in
managing users and the data they can access. The name “business
unit” can be misleading because the term doesn’t necessarily have any
direct relationship with an organization’s operating business units. In
Dataverse, business units provide a framework to define the organiza-
tional structure of users, teams, and records. Business units group users
and teams by organizational hierarchy and can work in conjunction
with security roles to grant or restrict access to data.
The real power of business units comes from their hierarchical nature.
Users can be given access to records just in their business unit, or their
business unit and the business units under their unit. For example, the
hierarchical nature of business units can allow you to limit access to re-
cords at the site, district, region, and corporate levels. Business units are
useful to segment data into ownership containers for access control.
Security roles
A privilege is permission to perform an action in Dynamics 365. A
security role is a set of privileges that defines a set of actions that can
be performed by a user. Some privileges apply in general (such as the
ability to use the export to a Microsoft Excel feature) and some to a
specific table (such as the ability to read all accounts).
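For illustration, a security role can also be granted programmatically by associating the role with a user record through the Dataverse Web API. The helper below is a hedged sketch that only constructs the request; the `systemuserroles_association` navigation property name and the GUIDs are assumptions to verify against your environment:

```python
# Sketch: build the Dataverse Web API "associate" request that grants
# a security role to a user. Org URL and GUIDs are placeholders.

def build_add_role_request(org_url, user_id, role_id):
    """Return (method, url, body) for a POST that associates a role
    with a system user via a collection-valued navigation property."""
    url = (f"{org_url}/api/data/v9.2/systemusers({user_id})"
           f"/systemuserroles_association/$ref")
    # The body references the role to associate by its full OData URL.
    body = {"@odata.id": f"{org_url}/api/data/v9.2/roles({role_id})"}
    return "POST", url, body

method, url, body = build_add_role_request(
    "https://contoso.crm.dynamics.com",  # placeholder org
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
)
```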
Field-level security
You can use field-level security to restrict access to high business im-
pact fields to specific users or teams. For example, you can enable only
certain users to read or update the credit score of a business customer.
Hierarchy security
You can use a hierarchy security model for accessing data from a user
or position hierarchy perspective. With this additional security, you
gain more granular access to records, for example by allowing man-
agers to access the records owned by their reports for approval or to
perform tasks on reports’ behalf.
Audit
Auditing helps you comply with internal policies, government regula-
tions, and consumer demands for better control over confidential data.
Organizations audit various aspects of their business systems to verify
that system and data access controls operate effectively and identify
suspicious or non-compliant activity.
Dataverse audit
Audit logs are provided to ensure the data integrity of the system and
to meet certain security and compliance requirements.
Don't enable auditing for all tables and columns. Do your due dili-
gence to determine which tables and fields are required for auditing.
Excessive auditing can affect performance and consume large volumes
of log storage.
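This selective approach can be scripted. As a hedged sketch (the payload shape follows the Dataverse metadata API's boolean managed property convention; verify the property names against the current Web API reference before use), the following builds update payloads only for a deliberately reviewed list of tables:

```python
# Sketch: build metadata-update payloads that enable auditing only for
# tables identified as requiring it, rather than for everything.
# Table names are examples; the payload shape is an assumption to
# verify against the Dataverse metadata Web API reference.

TABLES_TO_AUDIT = ["account", "contact"]  # deliberate, reviewed list

def audit_payload(enabled=True):
    """IsAuditEnabled is a boolean managed property on table metadata."""
    return {
        "IsAuditEnabled": {
            "Value": enabled,
            "CanBeChanged": True,
            "ManagedPropertyLogicalName": "canmodifyauditsettings",
        }
    }

# Map each metadata resource path to the payload that would be sent.
requests_to_send = {
    f"EntityDefinitions(LogicalName='{t}')": audit_payload()
    for t in TABLES_TO_AUDIT
}
```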
Authentication
For users to access any Power Apps portal, they must exist in Dataverse
as contacts. This applies to both internal and external users. Power
Apps portals support Azure AD, Azure B2C, ADFS, and other third-party
providers such as LinkedIn, Facebook, and Google. You can find
authentication configuration details and the complete list of identity
providers at Get started with configuring your portal authentication -
Power Apps | Microsoft Docs.
Sign-up
There are two common ways to control sign-ups for Power Apps portals:
▪ Open registration is the least restrictive option to sign up for the
portal. In this configuration, the portal allows a user account to be
registered by providing a user identity, and a new contact is created
in Dataverse on sign-up.
▪ The alternative option is by invitation. The invitation feature of
portals allows you to invite contacts to your portal through auto-
mated emails created in your Microsoft Dataverse. The people
you invite receive an email, fully customizable by you, with a link
to your portal and an invitation code.
Keep the following considerations in mind:
▪ Azure B2C is the preferred authentication provider for portal
access. It separates authentication from authorization.
▪ Azure B2C supports third-party authentication providers such as
LinkedIn, Facebook, Google, and many more with custom policies.
Use Azure B2C as a bridge to other identity providers, as it will
support more options and Microsoft won't be duplicating these
investments in the portal.
▪ Local authentication is deprecated but not removed yet. It cannot
be extended to other systems like Azure B2C, and Microsoft is not
investing in new local authentication features. Its use is limited
and short-lived.
▪ For B2B scenarios, consider guest users with Azure AD
authentication. See What is B2B collaboration in Azure Active
Directory? | Microsoft Docs.
Authorization
Authorization is a control to provide access to data and web pages in the
Power Apps portal. Authorization is managed through web roles.
Web Roles
Web roles allow portal users to perform special actions and access
protected Dataverse contents. It’s similar to security roles in Dynamics
CE apps. A contact can have multiple web roles.
A Power Apps portal website can have multiple web roles, but can
only have one default role for authenticated users and one for anon-
ymous users. Web roles control the following:
▪ Dataverse table permissions allow access to individual records in
Dataverse tables. They let you set a scope for access, such as
global, contact level, account level, or parental level, and control
granular privileges on a record such as read, write, delete, append,
and append to.
▪ Page permissions allow access to portal webpages. For example,
you can allow pages to be available anonymously for public ac-
cess, or restrict access to users who have specific roles.
▪ Website access permissions allow portal users to manage front
side editing of some portal contents, such as content snippets and
weblink sets.
Best Practice
Use Portal Checker in your portal deployments. It lists forms, entity
lists, and OData feeds exposed anonymously. Make sure no data is
exposed to anonymous users by mistake.
Role-based security
In Finance and Operations apps, role-based security is aligned with
the structure of the business. Users are assigned to security roles based
on their responsibilities in the organization and their participation in
business processes. Because you can set up rules for automatic role
assignment, the administrator doesn’t have to be involved every time
a user’s responsibilities change. After business managers set up secu-
rity roles and rules, they can control day-to-day user access based on
business data. A user who is assigned to a security role has access to
the set of duties that are associated with that role, which is comprised
of various granular privileges. A user who isn’t assigned to any role has
no privileges. Privileges are composed of permissions and represent
access to tasks, such as cancelling payments and processing deposits.
Duties are composed of privileges and represent parts of a business
process, such as maintaining bank transactions. Roles are assigned to
users in either all legal entities or within specific legal entities to grant
access to Finance and Operations application functionality.
Fig.
12-18 The Finance and Operations authorization model: users are assigned security roles; roles are composed of duties and privileges; and privileges grant access to user interface elements, tables and fields, SSRS reports, and service operations in the database.
Segregation of duties
Some organizations may require some tasks to be separated and per-
formed by different users. For example, you might not want the same
person to maintain vendor information, acknowledge the receipt of
goods, and process payments to the vendor. This concept is sometimes
referred to as separation of function, and the standard security roles
included in Finance and Operations applications have been created
with this separation in mind. To cite the previous example, different
security role assignments are required in Dynamics 365 Finance to
maintain vendor information, acknowledge the receipt of goods, and
process payments to the vendor.
Security reports
Finance and Operations applications provide a set of rich security
reports to help you understand the set of security roles running in your
environment and the set of users assigned to each role. In addition to
the security reports, developers can generate a workbook containing
all user security privileges for all roles.
System log
System administrators can use the User log page to keep an audit
log of users who have logged on to the system. Knowing who has
logged in can help protect your organization’s data. The user logging
capability allows administrators to identify roles that provide access to
sensitive data. The sensitive data identifier enhances the user logging
experience by letting your organization produce audit logs that show
who in your system has access to sensitive data. This capability is help-
ful for organizations that might have multiple roles that have varying
degrees of access to certain data. It can also be helpful for organiza-
tions that want a detailed level of auditing to track users who have had
access to data that has been identified as sensitive data.
Make security
a day one priority
Security should be at the forefront when you start a Dynamics 365
project, not an afterthought. Negligence of security requirements can
lead to significant legal, financial, and business risks. It can also impact
the overall scalability and performance of the solution. In this section,
we discuss some of the security impacts on Dynamics 365.
Impact on performance
Security design can impact system performance. Dynamics 365 has mul-
tiple access control mechanisms, all with their own pros and cons. The
ideal security design is a combination of one or more of these controls
based on your requirements and the granularity, simplicity, and scalabili-
ty they provide. The wrong choice can lead to poor performance.
Sharing can also trigger cascading rules that can be expensive if not set
appropriately. Sharing should be for edge cases and exception scenarios.
You can get more information on this topic from the Scalable Security
Modeling with Microsoft Dynamics CRM whitepaper. It’s an older
document, but still relevant for Dynamics 365 security.
The volume of data will impact performance in the long run, so you
should have a well-defined archival strategy. Archived data can be
made available through other channels.
Unless carefully designed and tested, policy queries can have a signif-
icant performance impact, so follow some simple but important
guidelines when developing an extensible data security (XDS) policy.
As a best practice, avoid numerous joins, consider adding lookup
tables to improve performance, carefully tune queries for optimal
performance, and carefully and thoroughly test any XDS policies
when using this feature.
Governance takes time to understand, roll out, and adopt. The earlier
you begin the process, the easier compliance becomes.
Customer Engagement examples
The activity logging data is retained between 90 days and 1 year based
on the Microsoft 365 license type. For compliance reasons, if you need
to keep these records for longer, you should move this data to an
external store.
Impact on rollout
In many projects, security design is created late in the implementation
process. It can lead to a lot of issues, ranging from system design to in-
adequate testing. Keep the following recommended practices in mind:
▪ Test the system using the right security roles from the very beginning.
▪ Use the correct security profile during test cycles. Don’t provide an
administrator role to everyone.
▪ When training people, make sure they have access to perform all the
required actions. They shouldn't have access to more information
than required to complete those actions.
▪ Validate security before and after data migration occurs.
You can use Tabular Data Stream (TDS) endpoints with Customer
Engagement for direct query scenarios, which honor the Dynamics 365
security context.
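For example, the TDS endpoint accepts standard SQL Server clients over port 5558 with Azure AD authentication, and queries run under the caller's Dynamics 365 security context. A minimal sketch of composing the connection settings (the host and ODBC driver name are placeholder assumptions; verify the endpoint is enabled for your environment):

```python
# Sketch: connection settings for the read-only Dataverse TDS endpoint.
# The host is a placeholder; the endpoint listens on port 5558 and
# requires Azure AD authentication, honoring Dataverse security roles.

def tds_connection_string(org_host, driver="ODBC Driver 17 for SQL Server"):
    """Compose an ODBC-style connection string for the TDS endpoint."""
    return (
        f"Driver={{{driver}}};"
        f"Server={org_host},5558;"              # TDS endpoint port
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;"
    )

conn_str = tds_connection_string("contoso.crm.dynamics.com")
```

The endpoint is read-only, so it suits direct-query reporting (for example, from Power BI) rather than data modification.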
Organizational changes
What happens if a user moves to another team or business unit? Do
you need to need to reassign the records owned by the user? What
should be the process?
Do you reassign all records or just active record? What is the effect
Reassigning a large volume of records can also take a long time and affect
the system’s performance. Learn more about assigning or sharing rows.
Maintenance overhead
Plan how to assign security roles to new users. Are you going to
automate the process? For example, you can use Azure AD group teams
in Dynamics 365 to assign roles to new team members. This can make
it very easy to assign licenses and security roles to new team members.
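As a sketch of the Azure AD group team approach (the field names follow the Dataverse team table, the option values are assumptions to verify against current documentation, and the group object ID is a placeholder), the following builds the payload for creating a group team whose members inherit the team's security roles:

```python
# Sketch: payload to create a Dataverse "AAD Security Group" team.
# Members of the mapped Azure AD group inherit the security roles
# assigned to this team, so onboarding reduces to Azure AD group
# membership. Option values and the group ID are assumptions.

def build_group_team_payload(name, aad_group_object_id,
                             business_unit_ref=None):
    payload = {
        "name": name,
        "teamtype": 2,  # assumed: 2 = AAD Security Group team
        "azureactivedirectoryobjectid": aad_group_object_id,
        "membershiptype": 0,  # assumed: 0 = members and guests
    }
    if business_unit_ref:
        # e.g. "/businessunits(<guid>)" to place the team in a unit
        payload["businessunitid@odata.bind"] = business_unit_ref
    return payload

payload = build_group_team_payload(
    "Sales Team (AAD)",
    "33333333-3333-3333-3333-333333333333",  # placeholder group ID
)
```

Security roles are then assigned once to the team, not to each user.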
Security anti-patterns
An anti-pattern is a frequently implemented, ineffective response to a
problem. Several security anti-patterns should be avoided for scaling,
performance, and security reasons. We shared a few in the previous
section, such as using Organization owned entities for reference data
tables in Customer Engagement, or indiscriminately logging transac-
tional tables in Finance and Operations apps. In this section, we discuss
a few more anti-patterns to avoid.
Conclusion
In this chapter, we introduced how the Trusted Cloud is built on the
foundational principles of security, privacy, compliance, and trans-
parency, and outlined basic concepts of information security as they
apply to Dynamics 365. We then took a closer, product-specific look at
how these security concepts are applied to Customer Engagement and
Finance and Operations applications. With that as a foundation, we
discussed why security should be a day one priority and which
anti-patterns to avoid.
Further reading
▪ Compliance: An Introduction to Cloud Computing for Legal and
Compliance Professionals; Achieving Trust and Compliance in the Cloud
▪ Privacy: Privacy at Microsoft
▪ Security architecture
Checklist
Identity and access
Establish an identity management strategy covering user access,
service accounts, and application users, along with federation
requirements for SSO and conditional access policies.
Establish the administrative access policies targeting different admin
roles on the platform, such as service admin and global admin.
Ensure the security model is optimized to perform and provides the
foundation for further expansion and scale by following the security
model best practices.
Have a process to map changes in the organization structure to the
security model in Dynamics 365. This needs to be done cautiously in a
serial way due to the downstream cascading effect.
Implementation Guide: Success by Design: Guide 13, Business intelligence, reporting, and analytics 332
a fundamental change occurring across industries and organizations:
Data now comes from everywhere and everything.” As an essential
part of a business solution, data represents a sizable portion of each
user’s engagement. The solution processes and analyzes the data to
produce information, which allows an organization to make informed
decisions, and determines actions that can come from it.
amounts of data—and how to generate user value with it—especially
when that data becomes siloed by the systems that generate or collect
it. Data silos make the goal of having a 360-degree view of each user
even more challenging. Successful organizations are able to digitally
connect every facet of their business. Data from one system can
be used to optimize the outcomes or processes within another. By
establishing a digital feedback loop (Figure 13-1) that puts data, AI,
and intelligent systems and experiences at the core, organizations can
transform, become resilient, and unlock new values for users.
Fig.
13-1 The digital feedback loop: with data and AI at the core of intelligent systems and experiences, organizations engage customers, empower people, optimize operations, and transform products.
Evolution of business
intelligence, reporting,
and analytics
Practices for gathering, analyzing, and acting on data have evolved
significantly over time. Traditional methods of standardizing and
generating static reports no longer give businesses the agility to adapt
to change. New technology and secure, highly reliable cloud services—
which are poised to meet the needs of organizations that must quickly
manage increasing amounts of data—have given rise to a new era of
digital intelligence reporting.
Traditional reporting
The early generations of business intelligence solutions were typically
centralized in IT departments. Most organizations had multiple data
repositories in different formats and locations that were later com-
bined into a single repository using extract, transform, and load (ETL)
tools, or they would generate reports within siloed sources and merge
them to provide a holistic view of the business.
Once the data was unified, the next steps were deduplication and
standardization, so the data could be structured and prepared for re-
porting. Business users who lacked the expertise to perform these tasks
would have to rely on the IT team or specialized vendors. Eventually,
the business would receive static intelligence reports, but the entire
process could take days or even weeks, depending on the complexity
of the data and maturity of the processes in place. Data would then
undergo further manipulation, when required, and be shared across
different channels, which could result in the creation of multiple ver-
sions that would be difficult to track.
had to be fixed in the central repository. The process to produce the re-
port would need to be executed again—and repeated multiple times if
the same fix wasn’t applied at the source. This method of reporting was
referred to as “closing activities,” because it typically occurred at the
end of a month, quarter, or year, which meant that organizations were
slow to respond to opportunities because they waited until the end of
a given period to analyze their data and make decisions.
Self-service reporting
The evolution to a more agile approach favored self-service capabilities
to empower users. More user-friendly solutions reduced the IT depen-
dency and focused on providing business users with access to data and
visualization tools so they could do their own reporting and analytics.
This accelerated the speed of data analysis and helped companies
make data-driven decisions in competitive environments. However, in
this model, reporting was unmanaged—increasing the number of ver-
sions of reports, as well as different representations of the data—which
sometimes prevented organizations from using a standardized method
to analyze data and inhibited effective decision-making.
The new era of business
intelligence solutions
With data now coming from everywhere and everything, organizations
must be prepared to convert that data into business knowledge so
users can make informed decisions and trigger actions. Many organi-
zations employ highly skilled workers, such as data scientists, who are
responsible for analyzing the data and producing the necessary in-
sights that can affect the business. This approach is expensive and adds
dependency on specialized roles to perform tasks that usually can’t be
completed by typical business users.
Reporting
and analytics strategy
To be a market leader, an organization’s products and services must
evolve continuously and exceed customer expectations. The informa-
tion used to improve a product or service is based on data that comes
to the surface via reporting. Reporting requirements can be as simple
as determining the status of an order or as complex as a cash-flow
analysis for a global organization.
Your analytics strategy can help transform data collected from different
processes and systems into knowledge to help you stay competitive in
the market.
With a centralized data and business intelligence strategy, the IT de-
partment is usually responsible for data, ETL processes, and reporting
solutions. In a self-service approach, the IT team implements a data
analytics platform that enables users without statistical, analytical, or
data-handling expertise to access and use data. Teams should under-
stand their organization's goals and strategy before implementing an
enterprise data platform.
Every organization needs people who understand how to analyze and
use data. Training and communication are key to ensuring that business
users know how to use data and insights to optimize their work, drive
efficiency, and make informed decisions.
reporting—and should not be an afterthought.
Organizations must also identify any critical reports that require data
mash-up with other sources, such as front-office apps or transportation
systems, to develop an appropriate data integration and solution strat-
egy. Data volume may help determine how reports will be designed
and shared with users.
configure different document formats instead of writing code, making
the processes for creating and adjusting formats for electronic docu-
ments faster and easier. It works with the TEXT, XML, Microsoft Word
document, and OPENXML worksheet formats, and an extension inter-
face provides support for additional formats.
Financial reporting
Finance and business professionals use financial reporting to create,
maintain, deploy, and view financial statements. Financial reporting
capabilities in Dynamics 365 Finance and Operations apps support
currency translation and financial dimensions to efficiently design
financial reports.
sales data and team performance. Sales representatives and managers
can use an out-of-the-box sales pipeline chart in Dynamics 365 Sales to
visualize the revenue for an opportunity based on each pipeline phase.
Dynamics 365 delivers rich and interactive reports that are seamlessly
integrated into application workspaces. By using infographics and vi-
suals supported by Microsoft Power BI, analytical workspaces let users
explore the data by selecting or touching elements on the page. They
also can identify cause and effect, perform simple what-if operations,
and discover hidden trends—all without leaving the workspace. Power
BI workspaces complement operational views with analytical insights
based on near-real-time information. Users also can customize embed-
ded reports.
Reporting categories
Reporting needs for an organization can be classified into two catego-
ries: operational reporting and business reporting.
Operational reporting
Examples of operational reporting include orders received in a day,
delayed orders, and inventory adjustments in a warehouse. This kind
of reporting supports the detailed, day-to-day activities of the organi-
zation. It is typically limited to a short duration, uses real-time granular
data, and supports quick decision-making to improve efficiency. It also
helps organizations identify their issues and achievements, as well as
future strategies and actions that may affect the business. Operational
reports can empower businesses to determine their next steps for
improving organizational performance. Organizations can fulfill oper-
ational reporting requirements using elements such as native controls,
SSRS reports, dashboards, and business documents.
Business reporting
Business reporting refers to reports detailing operating expenses and
financial key performance indicators (KPIs) to business stakeholders
so they can understand the organization’s overall health and make
more informed decisions. This kind of reporting delivers a view of
current performance to enable the stakeholders to identify growth
opportunities and areas for improvement, and track business perfor-
mance against the planned goals for the organization.
To define your data strategy, start with the business questions you
need to answer to meet your organization's goal of using the data to
make more effective decisions that improve the business outcome.
With customers now having access to more information and channels,
an organization's data strategy should reflect the customer journey.
From a data management perspective, all channels and checkpoints
across the customer journey should be represented.
A common risk for businesses is the quality of their data. If data is weak,
unstructured, unsecured, or difficult to use or access, additional work
will be required to move, scrub, protect, and improve the quality of
the data. Poor data quality also can lead to lost opportunities, failure
to consistently attract customers, increased time and expense for
cleaning data or processing corrections, and inaccurate or inconsistent
KPI measurements.
Modernize your data estate
Your data estate modernization strategy should map the current data
estate, goals, business processes, and regulatory requirements to aid
gap analysis of the existing and desired future state. It should identify
key analytics and metrics to improve the business and effectively align
business intelligence investments with business goals.
Cloud-based solutions offer modern capabilities based on machine
learning and AI to analyze portions of data and automate other pro-
cesses, such as identifying and solving data quality issues. In terms of
analysis, prebuilt models can serve the most common use cases, elimi-
nating the need to build custom models.
Some organizations emphasize collecting data more than analyzing it
to drive insights. It's also important to identify gaps and blockers for
business objectives and outcomes, and not focus solely on the data,
structure, analytics, tools, or technologies.
Empower people
To do their jobs more efficiently, employees need tools and resources,
as well as timely access to information. Utilizing AI to further automate
business processes contributes to better and faster results, which then
empowers your people (Figure 13-2) to make the best decisions and
deliver value to customers.
Engage customers
Modern applications are already capable of delivering insights by using
AI and data analytics to optimize business processes and shed light on
customer activity. For example, Dynamics 365 can provide a customer
service agent with insights into a customer’s interests and purchasing
behavior information in real time, allowing the agent to make sugges-
tions tailored to the customer. These types of insights help businesses
intelligently engage customers (Figure 13-2) to provide a superior
customer service experience.
Fig.
13-2 Gartner predictions for data and analytics (source: "Predicts 2020: Data and Analytics Strategies – Invest, Influence and Impact." Gartner.com. Gartner, Inc., December 6, 2019):
▪ Optimize operations By 2022, 70 percent of organizations will rigorously track data quality levels via metrics, increasing data quality by 60 percent and significantly reducing operational risks and costs.
▪ Optimize operations By 2023, 60 percent of organizations will utilize packaged AI to automate processes in multiple functional areas.
▪ Optimize operations Cloud-based AI will have increased five times from 2019, making AI one of the top cloud services.
▪ Optimize operations 30 percent of organizations will harness the collective intelligence of their analytics communities, outperforming competitors who rely solely on centralized or self-service analytics.
▪ Engage customers Organizations will use customer-engagement analytics to optimize their sales processes.
▪ Transform products (and services) More than 50 percent of equipment manufacturers will offer outcome-based service contracts that rely on IoT-based connectivity (up from less than 15 percent in 2019).
▪ Engage customers Digital adoption solutions will be white-labeled in 50 percent of customer-facing software as a service (SaaS) applications, increasing customer satisfaction and loyalty.
Optimize operations
Cloud-based AI usage is an increasingly common investment area for
most organizations—and not just for customer-facing technology.
For example, Dynamics 365 for Field Service can use AI to anticipate
hardware failures on a manufacturing floor and automatically dispatch
technicians before the malfunction. Organizations that optimize their
operations (Figure 13-2) with augmented analytics, AI, and embedded
intelligence will be more competitive in the marketplace.
Components of the
modern data estate
The Dynamics 365 platform can be an essential part of your modern
data estate architecture. Your business data is securely stored within
Dynamics 365 as a function of your day-to-day operations. In addition,
Dynamics 365 can export data to or ingest data from various sources
to be used in reporting, workflow, applications, or in any other way
that is required by your business.
You can also generate insights and analytics from data created
and managed inside each Dynamics 365 application. Apps using
embedded intelligence, such as the audience insights capability inside
the Dynamics 365 Customer Insights app, enrich your data and allow
more informed decision-making.
insights from your data. Normalizing the data sets up your organization
to better identify opportunities and puts your data in a format that could
be utilized for future projects. Dynamics 365 uses Microsoft Dataverse,
which is structured according to the Common Data Model (CDM), to
store and secure app data. The CDM structure is defined in an extensible
schema, as shown in Figure 13-4. This allows organizations to build or
extend apps by
using Power Apps and Dataverse directly against their business data.
Fig.
13-4 The CDM schema defines core entities (such as account, activity, appointment, contact, currency, email, goal, letter, note, owner, and organization) and customer relationship management (CRM) entities across sales (such as competitor, discount, invoice, lead, marketing list, opportunity, order, order product, phone call, quote, and social activity), service (such as case, entitlement, resource, schedule group, service, and task), marketing (such as campaign, event, marketing email, and marketing page), and solutions. Each entity defines attributes; the account entity, for example (a business that represents a customer or potential customer), includes accountNumber, accountRatingCode, createdOn, creditLimit, openDeals, openRevenue, and territoryId, and can be extended with attributes such as hotelGroup. Working with industry and subject matter experts, the CDM is expanding to include additional business enterprises and concepts.
Data unification components
Services and applications that ingest data from multiple sources serve
a vital role in a modern data estate. Aggregation from data stores and
services provides users with business-critical information supplied in
dashboards and reports. The resulting data and events can also be
used to trigger workflows and act as inputs to the apps running on the
Dataverse platform. Many data unification components are built into
Dynamics 365 applications—and organizations can design their own
integrations to fit their business needs.
Fig.
13-5 Data unification: data from channels and sources such as campaigns, email, social, mobile, in-person interactions, geolocation, and behavioral signals is conflated and enriched, feeding AI and machine learning, bots, LinkedIn, and Power Apps.
Services provides text, speech, image, and video analysis, and enriches
data via Microsoft Graph.
Dataverse applications
Building an app typically involves accessing data from more than one
source. Although it can sometimes be done at the application level,
there are cases where integrating this data into a common store creates
an easier app-building experience—and a single set of logic to maintain
and operate over the data. Dataverse allows data to be integrated from
multiple sources into a single store, which can then be used in Power
Apps, Power Automate, Power BI, and Power Virtual Agents, along with
data that’s already available from Dynamics 365.
Azure Data Lake Storage Gen2 is the foundation for building enterprise
data lakes on Azure and provides low-cost, tiered storage with high
availability and disaster recovery capabilities.
Azure Synapse Analytics is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.

Fig. 13-6: Microsoft Power Platform, the low-code platform that spans Office 365, Azure, Dynamics 365, and standalone applications.

Data Export Service
The Data Export Service replicates data from the Dataverse database to an external Azure SQL Database or to SQL Server on Azure virtual machines. This service intelligently syncs all data initially, and thereafter syncs the data on a continuous basis as delta changes occur in the system, enabling several analytics and reporting scenarios on top of Azure data and analytics services.
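Conceptually, the full-then-delta pattern the Data Export Service automates looks like the sketch below. This is a hypothetical illustration only; field names such as `modifiedon` are used for clarity, and the real service manages change tracking for you:

```python
# Conceptual sketch of a full-then-delta sync, loosely mirroring what a
# replication service automates. Not the actual service implementation.

def full_sync(source):
    """Initial sync: copy every record and remember the high-water mark."""
    target = {r["id"]: dict(r) for r in source}
    watermark = max((r["modifiedon"] for r in source), default=0)
    return target, watermark

def delta_sync(source, target, watermark):
    """Subsequent syncs: apply only records changed since the watermark."""
    for r in source:
        if r["modifiedon"] > watermark:
            target[r["id"]] = dict(r)
            watermark = r["modifiedon"]
    return watermark

source = [{"id": 1, "name": "Acme", "modifiedon": 100},
          {"id": 2, "name": "Blue Yonder", "modifiedon": 110}]
target, wm = full_sync(source)          # initial copy of all rows

source[0] = {"id": 1, "name": "Acme Ltd", "modifiedon": 120}  # delta change
wm = delta_sync(source, target, wm)     # only the changed row is re-copied
```

The design choice to copy only deltas after the initial load is what makes continuous replication cheap enough to keep an external reporting database near real time.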
Power Platform
The Power Platform (Figure 13-6) enables organizations to analyze, act on, and automate the data to digitally transform their businesses. The Power Platform today comprises four products: Power BI, Power Apps, Power Automate, and Power Virtual Agents.
Power BI
Power BI is part of the Power Platform but also stands on its own, bridging the gap between data and decision-making. Power BI lets business analysts, IT professionals, and data scientists collaborate seamlessly, providing a single version of data truth that delivers insights across an organization.
Power BI helps you analyze your entire data estate within the Dynamics
365 or Azure platforms, or external sources. Power BI can connect
individually with siloed sources to provide reporting and analytics, or
it can connect with data stores within or outside of Dynamics 365. As
data can come from multiple sources, organizations should analyze
how Power BI will connect with those sources as a part of their data
estate pre-planning.
Microsoft Azure
With Dynamics 365 at the center of the data estate, Azure provides an
ideal platform (Figure 13-7) for hosting services for business work-
loads, services, and applications that can easily interact with Dynamics
365. Built-in services in Dynamics 365 let you export data as needed
or on a schedule. Power BI can aggregate information from Dynamics 365 and Azure sources into an integrated view, and Power Apps can access both Dynamics 365 and Azure sources for low-code, custom applications designed for business use.

Fig. 13-7: The Azure platform hosting services for business workloads around Dynamics 365.
Azure Stack
Azure Stack is a portfolio of products that allows you to use embedded
intelligence to extend Azure services and capabilities to your environ-
ment of choice—from the datacenter to edge locations and remote
offices. You can also use it to build and deploy hybrid and edge comput-
ing applications, and run them consistently across location boundaries.
Fig. 13-8: A modern data estate on Azure. A hot path (Azure Event Hubs) feeds real-time analytics, while a cold path (Azure Data Factory, with scheduled or event-triggered data integration) loads nonstructured and semi-structured data (images, video, audio, free text) into Azure Data Lake Storage Gen2 for history and trend analytics, with fast loads into the data warehouse via PolyBase/ParquetDirect. Azure Databricks integrates big data scenarios with the traditional data warehouse, Azure Cognitive Services and Azure Machine Learning add intelligence, Azure Cosmos DB serves applications, and Power BI Premium delivers analytics over an enterprise-grade semantic model.
Data store
Azure Blob Storage offers massively scalable object storage for any
type of unstructured data—such as images, videos, audio, and doc-
uments—while Azure Data Lake Storage eliminates data silos with a
single and secured storage platform.
The Azure Machine Learning service gives developers and data sci-
entists a range of productive experiences to build, train, and deploy
machine learning models faster.
Azure Cognitive Services enables you to embed the ability to see, hear, speak, search, understand, and accelerate decision-making in your apps.
Synergy

Getting maximum value from your data requires a modern data estate based on a data strategy that includes infrastructure, processes, and people.

Your data can flow inside a cloud solution or via synergy with other components and platforms to provide an infrastructure and processes for analyzing data and producing actionable outcomes.

People are a big part of the process, and the data journey will often start and finish with them. The insights and actionable outcomes will allow them to make better decisions, for themselves and the business.

Due to budget, time, or skills constraints, some organizations decide to deliver a business solution as a first step, with a plan to improve insights and analytics capabilities at some point in the future. Insights and analytics should be addressed in the early phases of a business solution, even if it doesn't yet include all scenarios. Leaving these vital elements until later can affect the business and the user experience, eventually reducing adoption, increasing costs, and giving a negative perception of the value the solution has to offer.
Conclusion
In this chapter, we discussed how organizations are becoming more
competitive, expanding their global reach to attract customers, and
using business intelligence solutions to make the most of their data.
While seeking data on how customers interact with their products and
services, organizations are acting on that data to give customers access
to more content, new purchasing channels, and brand options. By
deeply understanding customer behavior, organizations can engage
customers proactively and make predictions about their future behavior.
Checklist

Reporting and analytics strategy

Align reporting and analytics strategy with the overall implementation project plan.

Map out the organizational data estate to develop a holistic view of different data sources, the type of data they hold, and the schema used.

Define your analytics strategy and the tools to support it. Ensure the approach meets the current and future reporting requirements while considering the data volumes and different sources of data.

Align the reporting and analytics to the broader master data management strategy.

Use customer data platform offerings such as Customer Insights to unify the customer data from various siloed data sources.

Use modern data and BI platforms such as Azure Synapse Analytics and Power BI to build enterprise data solutions.

Focus on not just delivering a report but aligning reporting with business processes to add value to the organization.
Case study
Premier yacht brokerage cruises
into smarter marketing and
boosts sales by 70 percent with
Dynamics 365
To differentiate itself from the competition, a large company that sup-
plies luxury vessels decided to invest in reporting and intelligence. In
an industry where relationship-building efforts must be incredibly pre-
cise and personal, any insight into a customer’s mindset is invaluable.
The company used Dynamics 365 to nurture leads toward a sale, and used Power BI reports to generate insights that helped them identify the best prospects and create winning proposals.
The company is also using Dynamics 365 and Power BI to uncover mar-
ket trends, as augmented analytics help the company learn about their
customers in ways that were not possible before—and build authentic
relationships that will last long after the initial sale.
Guide 14: Testing strategy
Overview
During the implementation of a solution, one of the
fundamental objectives is to verify that the solution
meets the business needs and ensures that the
customer can operate their business successfully.
The solution needs to be tested by performing multiple business execu-
tion rehearsals before the operation begins and the system is taken live.
To stay on track, testing needs to focus on validating the business processes within the scope of the solution. Microsoft Dynamics 365 business applications are rich with features that can easily distract testers during testing, so it is important to focus on the features that add value to the business process. By focusing only on those features and the requirements needed for them, we can get to the starting line of initiating operations for the solution being built.
Defining the testing strategy

Fig. 14-1: Test strategy.

After observing thousands of Dynamics 365 deployments, we can see that most customers do a reasonable level of testing before go live.
Fig. 14-2: Scope of the testing, defined by the coverage of the test and the deepness of the test across processes and test cases.

The scope of the testing is defined early in the project after defining the solution, and it is refined as the solution blueprint takes shape. What changes throughout the project is the type of testing that comes into focus as you move through the implementation lifecycle and the ability to actually test the solution.

Test cases

During the implementation, the number of test cases in the testing scope keeps increasing as you progress in building the solution. After go live, the scope is focused on maintaining the quality as you update the solution, but testing can increase if you continue to expand.
Writing a good test case requires a series of tools and tasks that help
you validate the outcome. Dynamics 365 provides some of the tools to
build test cases faster, for example, the task recorder in the Finance and
Operations apps where you can dynamically capture the steps needed
to execute a task in the app.
Test cases should reflect the actual business execution in the system
and represent how the end user ultimately operates the system, as
illustrated in Figure 14-3.
Test cases should be composed of, at minimum, the following:
▪ The process and requirements that the test case is covering.
▪ The prerequisite, or entry, criteria to execute the test, which can be dependent on other tests or necessary data to produce the expected outcome.
▪ The reproduction steps.

Azure DevOps is a great tool for documenting test cases and connecting them to the solution development lifecycle. Having this artifact in Azure DevOps allows for tracking any bug discovered during testing and helps to trigger the fix and plan for the next testing cycle, while also determining the coverage of your test; for example, process, user story, requirement, design, etc. This is depicted in Figure 14-4.

Finally, when you design your test cases, design for what you expect to happen based on the process objective but also for what the process should not do. Plan for tests that can break the process.
Fig. 14-3: Components of the test case: what (linked business process requirements) + get ready (prerequisites) + how (reproduction steps) + quality check (expected outcome).
Fig. 14-4: Example test case.
Description: Dealership requests parts for recreational vehicles out of warranty
Test case ID: PC-20
Test steps: 1. Create sales order header. 2. Check credit line in order. 3. Create order line. 4. Validate and post sales order.
Test data: Customer: CU050 - Great Adventures Dealership; Part: P001 - All weather speakers; Qty: 4 pieces
Expected results: Sales order cannot be posted
Actual results: Sales order is posted
Pass/Fail: Fail
Tester notes: Credit check is not stopping the order from being posted. Customer set up with wrong credit limit. Data quality issue.
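A failing manual case like PC-20 can later be automated. The sketch below is a hypothetical illustration (the `can_post_sales_order` function and its credit rule are assumptions for this example, not Dynamics 365 code) of how the expected result "sales order cannot be posted" becomes an assertion:

```python
import unittest

# Hypothetical credit-check logic standing in for the real posting
# validation; the rule and field names are assumptions for illustration.
def can_post_sales_order(credit_limit, open_balance, order_total):
    """A sales order may only post if it fits within the remaining credit."""
    return open_balance + order_total <= credit_limit

class TestCreditCheck(unittest.TestCase):
    def test_order_over_credit_limit_cannot_post(self):
        # Mirrors test case PC-20: the order must NOT post when the
        # customer's remaining credit cannot cover it.
        self.assertFalse(can_post_sales_order(
            credit_limit=1000, open_balance=900, order_total=400))

    def test_order_within_credit_limit_posts(self):
        self.assertTrue(can_post_sales_order(
            credit_limit=1000, open_balance=100, order_total=400))
```

Checks like these can run with a test runner (for example, `python -m unittest`) on every build, turning a documented expected result into a repeatable quality gate.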
During the planning stage of the testing, you need to be able to an-
swer and provide clarity for the following questions. This helps design
and define the frequency of the testing based on the necessary itera-
tions or test cycles to secure a high-quality solution.
▪ When do you need to start testing?
▪ How do you control the versions of the solution to be tested based
on the progressive readiness of it?
▪ How do you document the outcome of the testing?
▪ Who participates in the testing?
▪ Where does the testing happen in terms of environments and type?
▪ How is the ownership of the outcome defined?
▪ How deep are you going to test?
▪ What types of testing do you execute based on the project needs
and solution complexity?
This helps to plan the quality control portions of the solution and must
happen at the start of the Initiate phase of the project.
The plan for testing must be documented and signed off on by the
business team prior to its execution. This is important because it leads
into other types of planning, like environment planning, that deter-
mines where the test is done.
The test plan describes the solution, the schedule of testing, and the resources needed, and it contains the scope of the testing.
Fig. 14-5: Components of the test plan.

Always create a test plan for your project, regardless of whether your implementation is simple or complex, or whether you are implementing a standard solution or extending it. Always create the test plan at implementation and obtain sign-off on it from business stakeholders.

The test plan brings transparency and helps keep your goals and objectives focused on securing a quality solution. It contains important information that guides the team on how to conduct the testing.
The successful outcome of the test cycle confirms the readiness of the
solution to meet the requirements of the business. There are different
terms used to describe this testing event, for example conference room
pilots, testing iterations, testing cycle, etc. The important message here
is that the testing event is comprehensive.
Align your solution version to the testing cycles and coordinate the readiness of all the dependent parties to provide their own artifacts. Missing one necessary artifact can cause delays in the implementation, introducing risk.

Each component of the solution version is very important, and all of them need to come together as part of the solution. This is especially important with data. The earlier you bring migrated data into the mix, the better. One of the most common causes of errors during the first days of operation is poorly migrated data.
Ownership definition
Having clear expectations of who takes care of the testing outcome is key. Testing can result in bugs, but it can also uncover configuration issues, gaps, or new opportunities where improvements can be applied in future cycles or solution releases. Based on the type of outcome, we need to define who takes ownership of the fix and what test plan is needed to account for it.
Use Azure DevOps to build dashboards that can help to report progress and quality of the testing.

Having a model to document the outcome of a test case and test cycle allows the implementation team to validate the performance of the test cycle. Preparing to document and report the outcome also highlights the progress and quality of the testing.

The business team's involvement during testing is critical; they own the solution, so plan for enough time to allow for proper testing by the key business users. One common pattern of failure is poor involvement from this group of testers.
Test types
In the previous section, we covered the different roles and environment types that may be needed for testing, and we mentioned the need to consider the test type. Let's now combine those concepts so you can see how to use different test types to implement Dynamics 365 under the Success by Design framework.
Fig. 14-6: Testing throughout the solution lifecycle: unit test, functional test, end-to-end test, performance test, security test, regression test, interface test, and mock go-live, leading up to go live.
Unit test
This test type focuses on individual function, code, and configuration testing. It is done by developers and is the lowest-level component of the testing. In this test type, the developer verifies the requirements, validates and improves the code design, and finds and fixes defects.

This testing happens in a development environment and is expected to be done mainly during the implementation, or during any bug fixing. It is a fast, iterative process. This test type is required if you are customizing or extending your solution and is one of the first opportunities to introduce a quality check in the solution. At this point you should validate the performance behavior of the individual components in addition to keeping security in scope. Being diligent from the start with this test type can save you important time during implementation by avoiding rework or bug fixes for issues that should be detected during this test.

See it in action
Let us assume we are building a solution for a customer that ships goods and uses a specialized transportation system that is part of the final solution. For this example, we use unit testing to test specific validations during a sales order confirmation and the interface with the transportation system using Power Automate. The unit tests are written and executed for each validation requirement and interface in this scenario.
Functional test

Functional tests can be either manual or automated. They are done by functional consultants, customer SMEs, or testers. The purpose of functional tests is to verify the configurations of the solution or any custom code being released by the developers. The primary objective is to validate the design as per the requirements. This is done in a test or developer environment during the Implement phase of the Success by Design framework. At this point, testing automation can also be introduced.

This test type is the first test to be done by consultants and customer SMEs. Consultants still need to do it, as it is important that the first line of testing is done by the consultants prior to customer testing, at least to verify stability of the feature.

See it in action
Following the previous example regarding the sales order validation, the functional testing of the interface with the transportation system focuses on the configuration and setup, like customer data, products, pricing, warehouse setup, and the dependencies with payment processing and other module-related configuration and parameters.

The data used to test beyond this point needs to keep evolving continuously using customer data and should reflect the reality of the operation.
Process tests

The solution continues to be built, with unit testing being done by developers and functional testing by consultants and customer SMEs. The work from the development team is validated, and bugs and corrections are completed while the test is mapped to the process. This is the point at which running connected test cases is ideal, where we raise the bar in our testing approach by connecting a collection of test cases that all belong to our process.
End-to-end tests
After validating all the individual processes, it is time to connect all of
them and increase their complexity with new process variations. This is
the first test cycle that looks like a complete operation.
The test is done by functional consultants who are preparing the cycle
and guiding the team, but the overall cycle is mainly executed by
customer SMEs and testers.
The main objective of this test type is to validate all full business processes in scope, and it needs to be done in an integrated test environment, since it now connects to other systems that interact with the solution. It is iterative and the prerequisite to being able to execute your user acceptance test (UAT).

This test is key to validating that the entire solution works in conjunction with other systems that are part of the business, and testing is done with role-based access control enabled so it validates a real end-to-end scenario.

See it in action
Plan for this test by making sure you include real customer data and migrated data.
Performance tests
Successful testing is not complete until we not only make sure we
can run the business on top of the solution, but also that we can do it
at scale. We are implementing Dynamics both for today and for the
future, and we need a solution that lasts and performs.
We put special emphasis on this test type since there are many misconceptions about whether it needs to be executed. Our experience has shown that performance is one of the most common reasons for escalation, since teams often miss this test when it is needed.
The objective of this test is to ensure the solution performs while focus-
ing on critical processes that require scaling with load and growth. Not
all the processes are involved in this test.
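At its core, a performance test runs a critical operation under concurrent load and compares measured latency and throughput against agreed targets. The sketch below illustrates the idea; the operation and thresholds are hypothetical placeholders, and real load tests use dedicated tooling and realistic data volumes:

```python
# Minimal sketch of load-testing a critical operation: execute it many
# times concurrently, record each call's latency, and compare against a
# target. The operation below is a stand-in for a real process.
import time
from concurrent.futures import ThreadPoolExecutor

def create_order():
    """Placeholder for a critical process, e.g. posting a sales order."""
    time.sleep(0.001)  # simulate 1 ms of work
    return True

def run_load(operation, users=8, requests=200):
    """Execute 'requests' calls across 'users' workers, timing each one."""
    latencies = []
    def timed_call(_):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(timed_call, range(requests)))
    return latencies

latencies = run_load(create_order)
worst = max(latencies)
# A real performance test would compare these numbers against agreed
# targets, for example: every call must complete within 2 seconds.
```

The important design point is that the load profile (number of concurrent users, request volume) should come from the expected growth of the business, not from what the test environment happens to tolerate.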
User acceptance test

The business users must be trained prior to UAT, not just on the solution but also on how the processes work once the solution is deployed to production; otherwise, this group causes false error reports because of the lack of training.

See it in action
The business team now brings in a select group of people who run the operation. This group consists of order processors, the accounting team, the accounts receivable team, shippers, and others. The selected group of people runs a real-life operation simulation. All the processes are executed in parallel and in sync between teams. For the prospect-to-cash process, they involve a selection of different end users connected to this process to run the test. The team tests all the variations of the processes in scope.

At the end of a successful test, the end users connect the new solution to the reality of their daily tasks and confirm the readiness of the solution, but also their own readiness at being familiar with the new system after being trained. The new solution fulfills the business operation need.

The final iteration of this test type requires business sign-off on acceptance. We describe the importance of this step in later sections of this chapter.
Regression test

A regression test happens when you have a change in the code, or any configuration or new solution pattern, that can impact different processes. This test type is done by testers, developers, and end users.

Because this quality gate is necessary, there are tools available to help you automate it. You can find links to more information about these tools at the end of the chapter in the "References" section.
There are different techniques you can follow to run your regression test.
You can opt to test almost 100 percent of the processes, which can
provide you comprehensive coverage, but it is expensive to run and
maintain especially if there is no automation.
You can prioritize the test based on business impact. This technique ensures you have coverage of the processes that are mission critical for the operation. It is also a more affordable approach; the only risk is that you cannot guarantee a solution 100 percent free of regression.
Another technique is to focus on the areas that you expect to be impacted by the change. This technique targets where the change is happening and requires less effort, but it has the limitation of not being able to confirm that you are free of regressions, and the impact may not be visible on the direct change itself but instead downstream on other processes.
You can also combine all the previous techniques, making sure you
test critical business processes, and you can target more testing on the
features being changed.
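The combined technique can be sketched as a simple selection rule: always include mission-critical test cases, and add any case that touches a changed feature. The case IDs, feature tags, and priorities below are hypothetical:

```python
# Sketch of risk-based regression test selection combining the techniques
# above: always run mission-critical cases, plus any case covering a
# changed feature. All names here are illustrative assumptions.

def select_regression_suite(test_cases, changed_features):
    """Pick cases that are critical or that touch a changed feature."""
    return [c["id"] for c in test_cases
            if c["priority"] == "critical"
            or c["feature"] in changed_features]

test_cases = [
    {"id": "TC-01", "feature": "order-to-cash", "priority": "critical"},
    {"id": "TC-02", "feature": "picking", "priority": "medium"},
    {"id": "TC-03", "feature": "reporting", "priority": "low"},
]

# A configuration change impacts the picking process, so the suite is
# the critical process plus the impacted area.
suite = select_regression_suite(test_cases, changed_features={"picking"})
```

Tagging test cases with the processes and features they cover (for example, in Azure DevOps) is what makes this kind of targeted selection possible.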
Every time you test successfully, you put money in a trust. Every time you have a failed test or fail to properly test, you deduct money from that trust, leaving a debt in the solution's completeness. At the end, you need to reach the goal with the trust full of money.
Again, automation is key, and you should plan for testing automation. Your solution is alive and ever evolving, so change is a constant, coming not just from the application but from the customer's business needs.

Start building your automated testing progressively, focusing first on key business processes and then expanding further over time. Do it as early as possible during the implementation process, and always test after a change and prior to production deployment.

Regression testing is important, but it can be costly, especially if you do not automate in the long term. Automation brings the benefit of testing faster, having better testing coverage, and providing repeatability of the test. Automation takes as much planning as was needed for the rest of the testing.

See it in action
Microsoft has released new functionality that enriches the order processing feature in Dynamics 365 Finance and Operations apps. When the new update becomes available, teams can execute regression testing of the key business processes connected to this new feature. Testing the order and warehouse processes following a new configuration change is done because the change can impact the picking process. Once the update is done in the test environment, the team runs an automated test to confirm the solution produces the expected outcomes.

Finally, keep in mind that finding bugs during regression testing could be due to a change in the solution design, a standard product update, or the resolution of previous bugs redefining how the solution works, which can require recreating the test case.
Mock cutover

This is a special test since it is executed in a special environment: the production environment.

This test brings important value because it helps to validate aspects like connectivity, stability of access by users and data, integration endpoint configuration, device connectivity, security, and network; many of the test cases in your processes may be environment dependent.
See it in action
Continuing with the previous examples in this chapter, the team has tested the entire solution and is ready to move forward. The team wants to confirm the cutover plan in terms of sequence of activities, timing, and production environment stability. They execute the cutover plan by rehearsing and documenting so they can adjust it. After executing the mock cutover, they found that the time to load data and its sequence required adjustment due to other conflicting cutover tasks. The team adjusted that sequence and confirmed the readiness to execute the updated cutover plan, knowing it works without surprises.

During this test, you are able to validate the estimated times for environment preparation for production across all the planned tasks during go live. It is a confirmation that all the planned activities run smoothly once you do the actual final execution.

Once the testing is completed, you roll back your data to the point where no transactions created by the testing remain, allowing you to finish the environment preparation and start running your system for real. If the mock cutover fails, there is a risk of delaying the go live.
We recommend you always plan for a mock cutover test type.
Always test under the umbrella of the processes; in the end, the processes are the language of the business, and the testing is proof that the solution "can speak" that language.
Executing testing
In a previous section, we focused on the importance of defining a test-
ing strategy. Now we are going to explore the minimum components
you need to keep in mind to execute the testing.
During the communication effort you share the test plan for the test cycle.
You need to communicate the following with the team during testing.
▪ Scope Before you start testing, you describe the scope and purpose of the testing cycle, and what processes, requirements, and test cases are included in that test cycle. Every new testing cycle requires detail on how the iteration has been designed in terms of incremental testing and what is expected to be tested. The scope of the test cycle is aligned to the solution version being used.
▪ Schedule The time expected to be used to run the test cycle.
▪ Roles Who is testing and how are the test cases distributed?
How do they report test case execution so dependent teams can
continue the testing? Who is the orchestrator of the test? Who
resolves questions to test cases per area?
▪ Resolution process One of the objectives to test is to identify
any defect in the solution. The communication plan needs to
specify how those bugs are reported but also how to document
the feedback from the tester.
▪ Progress How is the progress recorded and communicated so
everybody can see the current state of the testing cycle?
▪ Resources The communication needs to specify where the testing happens and how to access the apps. It determines any additional equipment needed for testing, for example printers, barcode scanners, network requirements, etc.
▪ Test sequence Especially for process, end-to-end, and user acceptance test types, you need to define and align how the different teams interact.
▪ Test objectives Here you explain the purpose of the testing cycle.

Part of a successful test cycle is to set the expectations with the participants, so everyone keeps the focus on the objective. Consider the test cycle like a mini go live for the scope in place for the cycle.
For the next test cycle, we start over, as even test cases that passed during this cycle need to be re-tested in the next one.

Sign-off by the business team on the overall pass of the final test cycle is a required step prior to executing cutover preparation for go live. This creates accountability for the acceptance of the solution by the business. Any non-blocking bugs need to be documented, and they should be low risk enough that the solution can go live without fixing them.

Solution acceptance and operation
Finally, the last test cycle serves the main objective of why you started implementing the solution: to confirm the readiness to run the business using the new solution.
This confirms the readiness of the solution but also the readiness of the
production environment if you run a mock cutover test. It is important
that for any final test the business team signs off and confirms they can
operate the business with the new solution.
Accepting the solution does not mean it is 100 percent free of bugs. You need to assess the value of bringing a fix. At this point, if the issue is very low risk for the operation, it is often better to go live with known nonblocking bugs and fix them later than to introduce risk by making an unnecessarily rushed fix and not having time to re-test properly.
From here you move to the maintenance mode of the solution if you are implementing only one phase, or you continue adding new workloads or expanding the solution. Keep the discipline for testing and scale using the tools for automation. Testing is a continuous practice; the difference is the frequency, scope, and tools used to test.
Testing during implementation is the step that builds the trust for the business to run their operation. Always test, and do so as early as possible. It is never too early to start testing.

References
▪ Performance testing approach
▪ Regression suite automation tool
▪ Customer Engagement apps
▪ FastTrack automated testing in a day workshop offering
▪ EasyRepro in 60 Seconds
Checklist
▪ Ensure the Software Development Lifecycle (SDLC) includes unit
testing by the developer in a development environment, with focus
on function, code, and configuration of all extensions and integrations.
▪ Ensure functional testing is performed by functional consultants or
SMEs in a test or developer environment, with the primary objective
of validating the design against the requirements.
▪ Ensure process testing is performed by SMEs in a test environment,
running multiple processes consecutively, focusing on whether the
solution allows business operations and monitoring for unintended
outcomes.
▪ Consider automation for functional, process, end-to-end,
performance, and regression testing.
▪ Establish a test communication plan with clear scope and objectives,
specifying the schedule, sequence, roles involved, and issue
resolution process.
▪ Perform regression testing in a test environment (or a development
environment for updates) throughout the Implement and Prepare
phases, as well as post-go live, to ensure the solution performs as
expected after a change.
Go live
▪ Plan a mock cutover in the new production environment to validate
the estimated times for all the required tasks during go live, from
environment preparation to production.
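As an illustration of the automation item above, here is a minimal sketch of how a regression suite can be organized as named, self-verifying checks. This is plain Python for illustration only; real Dynamics 365 projects would typically use the Regression suite automation tool or EasyRepro rather than a hand-rolled harness, and the business rules below are invented examples.

```python
# Minimal regression harness: each check is a named function that
# returns nothing on success and raises AssertionError on failure.
def check_order_total():
    lines = [(2, 10.0), (3, 5.0)]          # (quantity, unit price)
    total = sum(q * p for q, p in lines)
    assert total == 35.0

def check_discount_never_negative():
    def apply_discount(price, pct):
        return max(price * (1 - pct), 0.0)
    # A discount over 100% must floor at zero, not go negative.
    assert apply_discount(100.0, 1.5) == 0.0

REGRESSION_SUITE = [check_order_total, check_discount_never_negative]

def run_suite(suite):
    # Run every check and collect a pass/fail report instead of
    # stopping at the first failure.
    results = {}
    for check in suite:
        try:
            check()
            results[check.__name__] = "pass"
        except AssertionError:
            results[check.__name__] = "fail"
    return results

outcome = run_suite(REGRESSION_SUITE)
print(outcome)
# → {'check_order_total': 'pass', 'check_discount_never_negative': 'pass'}
```

The value of this shape is that the same suite can be re-run unchanged after every update, which is exactly what the regression checklist item asks for.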
The team included the most common test types except for perfor-
mance testing, under the assumption that the first rollout would not
require it, since the extension of the system was low and the customer
would be implementing just the manufacturing business unit. The next
rollouts would include the servicing business, including the warranty
management of recreational vehicles, plus other operations. The
servicing operations would continue to be executed in a third-party
solution for now, and the dealership network would be using this
system to order parts and honor warranties.
Early on, during the first testing cycle, the team tested the first wave
of the solution, where there were no integrations, so the test cycle
completed with satisfactory performance.
As the team got ready for their second wave of testing, they started
to introduce some of the integrations as part of the scope for the test
cycle, but those integrations were emulated with dummy data and the
team introduced some migrated data. Integrations worked for their
purpose and the volume of migrated data was small.
When the team combined testing with a larger volume of users and
integrations running at volumes similar to what they expected in
production, they hit the first major blocker. The solution was not
keeping up with the needs of the test cycle, and they were not even in
production yet. The team's first reaction was to blame underperforming
infrastructure, which raised the concerns of the business stakeholders
upon learning the test cycle outcome.
The team was ready to prepare for UAT and decided to continue,
expecting that this would not be an issue in production because of its
higher specs. They assumed the production environment would solve
this performance challenge, so they completed UAT and moved toward
production. The customer signed off, and preparation moved to the
next stage to get ready for go live.
The big day came, and all the departments switched to the new system
in production. The first day was normal, and everything was working
great. The team decided to turn on integrations for the servicing
solution on the second day. When the second day came, they were
ready to interconnect with the service department, and integrations
started to flow into Dynamics 365. This is when they had their first
business stopper: there was a sudden decrease in performance, users
were not able to transact in Dynamics 365, service departments from
the dealerships were not able to connect effectively, shipments started
to slow down, and the shop floor was having a hard time trying to
keep inventory moving to production.
For the next phases, this customer included performance testing. They
dedicated an environment to stress test the solution against their own
business needs, and they executed these tests earlier, with additional
scenarios and a UAT that included people testing and integrations
running in parallel. They were able to have a second go live, which
included their accessories manufacturing business, and it was a breeze
in comparison.
Introduction
Business solutions offer a rich set of capabilities
to help drive business value.
Still, in some situations, you need to extend the solution and adjust
off-the-shelf functionality to accommodate organization- or industry-
specific business processes. These adjustments can change how a feature
works or bring additional capabilities to meet specific requirements.
While Dynamics 365 apps natively offer rich capabilities, they also
offer powerful options to customize and extend them. Extending the
solution can open even more opportunities to drive business value. It
is important to note, however, that extending should not compromise
the fundamental advantages of an evergreen cloud solution, such as
usability, accessibility, performance, security, and continuous updates.
These are key to success and adoption.
Defining your
extensibility strategy
Complex business requirements lead to highly advanced solutions
with customizations and extensions to applications. Advanced
implementations bring an increased risk that the user experience
suffers because of the introduction of performance, stability,
maintainability, supportability, and other issues.
Implementation Guide: Success by Design: Guide 15, Extend your solution 395
advantages, including out-of-the-box security, native integrations with
other cloud solutions, cost savings, and improved agility. The ever-
green cloud approach also enables the continued ability to evolve with
the latest modern features. In addition, it is often easier to use an off-
the-shelf solution with embedded modern processes and further extend
it. This tailors the solution to meet business needs, rather than having to
build a custom solution from scratch, with all the associated hassles and
expense, to meet the precise requirements of your organization.
What is extending?
Even though Dynamics 365 apps cover a wide variety of business
processes and industries, there is often some degree of specialization
or customization needed. We refer to this as extending the solution.
Extending can vary from minor changes to the user interface of a
particular feature to more complex scenarios like adding heavy
calculations after certain events. The depth of these extensions has
important implications for how much the off-the-shelf product needs
to change to meet specific business requirements.
Organizations may also depend on a third party to maintain and evolve
it, and in some scenarios an unresponsive third party can block the
organization from adopting new functionality.
Legacy solutions may have taken years to develop and evolve and may
not use the latest and greatest functionality available on the market. As
an example, Dynamics 365 apps natively use artificial intelligence and
machine learning to provide insights that help users make the best and
most informed decisions.
Leveraging ISV solutions
Leveraging independent software vendor (ISV) solutions from the
app marketplace instead of extending the solution to achieve the
same results may save development cost and time, as well as testing
and maintenance resources. ISVs typically support and maintain the
solution at scale for multiple organizations. Their experience can be an
advantage for organizations that require additional functionalities that
are already provided by ISVs.
Extensibility scenarios
Extending off-the-shelf solutions occurs when functionality is changed
to fulfill an organization’s requirements.
App configurations
Configurations are the out-of-the-box controls that allow makers and
admins to tailor the app to the needs of a user. These setting changes
are low effort, requiring no support from professional developers. They
are a powerful way to make the application your own, for example,
changing a theme to reflect business branding.
App settings are the safest and the least impactful way of tailoring the
solution to your needs and should be the preferred approach before
exploring another extensibility technique.
Low-code/no-code customizations
A differentiator for Dynamics 365 apps and the latest generation SaaS
products is the powerful customization capabilities made available
through “what you see is what you get” (WYSIWYG) designers and de-
scriptive expression-based languages. This paradigm helps significantly
reduce the implementation effort and enables businesses to get more
involved with design and configuration.
Low-code/no-code customizations can address organization
requirements that are specific to an industry or unique business
processes, including specific functionality focused on specific roles or
personas. This allows personalization that streamlines the user
experience so a user can focus on what is most important.
Extending into PaaS
In some scenarios, organizations leverage PaaS components to extend
solutions, which adds powerful capabilities that help address complex
requirements. Dynamics 365 apps have a core design philosophy that
allows our SaaS applications to be extended by leveraging the underly-
ing Azure PaaS capabilities. This is referred to as the no-cliffs extension
approach. The approach enables businesses to start with SaaS, and
then for the most complex long-tail scenarios, extend into the Azure
PaaS. Doing so alleviates the fear of being limited by the platform.
This no-cliffs extension provides the best of both worlds. The SaaS appli-
cation provides the off-the-shelf functionalities as well as the options and
methods to extend them. The PaaS extensions further enrich the solution
architecture by providing rich and powerful mechanisms that scale and
allow heavy processing of operations outside of the business solution.
Fig. 15-1: IoT Intelligence Add-in for Dynamics 365 Finance and
Operations apps, connecting Azure IoT Central and connected
technology through data ingestion into workspaces, alerts, and
automation.
Connected Field Service brings Azure IoT PaaS on top of Dynamics 365
Field Service, as shown in Figure 15-2.
Fig. 15-2: Connected Field Service on Dynamics 365 Field Service.
Azure Functions is an event-driven serverless compute platform that can
also solve complex orchestration problems. By using it, you can move
some of the heavy computing processes away from Dynamics 365 apps.
Fig. 15-3
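The offloading idea can be illustrated with a small, platform-neutral sketch: the application only enqueues an event, and a background worker (standing in for an event-driven function) performs the heavy computation. The order data and rollup logic are invented for the example; this is not Azure Functions SDK code.

```python
import queue
import threading

# Simulated "heavy" computation we do NOT want to run inside the
# business application's interactive request path.
def heavy_rollup(order_lines):
    return sum(qty * price for qty, price in order_lines)

work = queue.Queue()
results = {}

def worker():
    # Stand-in for an event-driven function: it picks up messages
    # and processes them outside the user's session.
    while True:
        item = work.get()
        if item is None:
            break
        order_id, lines = item
        results[order_id] = heavy_rollup(lines)
        work.task_done()

t = threading.Thread(target=worker)
t.start()

# The app only enqueues an event and returns immediately.
work.put(("SO-1001", [(2, 10.0), (1, 5.5)]))
work.put(("SO-1002", [(3, 7.0)]))

work.join()     # wait for background processing (for the demo only)
work.put(None)  # signal the worker to stop
t.join()

print(results)  # {'SO-1001': 25.5, 'SO-1002': 21.0}
```

The design point is the decoupling: the enqueue call is cheap and constant-time, while the expensive work scales independently in the background, which is the same motivation for moving heavy processing into serverless compute.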
Considerations
Every piece of an extension should focus on bringing efficiency or
value to the organization. Inherent costs exist when implementing
changes, spanning building, testing, maintaining, and supporting the
solution. These should be taken into consideration when planning
to extend a solution.
In this section, we delve into the key considerations and impacts exten-
sions can have on a solution.
While the main purpose of extending may not be to improve the user
experience, it should not negatively impact the user experience and
how the solution behaves across different devices and platforms. In
addition, extending should not negatively impact the responsiveness
and performance of the solution.
negative impact, and security breaches are not to be taken lightly.
The same happens with compliance, for example, General Data
Protection Regulation (GDPR) policies. Compliance functionality im-
plemented natively also needs to be inherited in any customizations or
extensions. Failing to do so may have consequences for organizations
that do not comply with regulations.
GDPR is just one set of regulations. Regulation of privacy and data use
exists in many different forms across several markets. While there is a
great deal of overlap in terminology, privacy and security are not identical.
Security is about preventing unauthorized access to any data, while priva-
cy is about ensuring, by design, proper acquisition, use, storage, and deletion
of data defined as private under local, regional, and global regulations.
While security is natively built into Dynamics 365 apps and highlighted
as best practices, privacy requirements tend to have a higher probabili-
ty of being overlooked by developers when extending native apps.
Performance
Although cloud solutions provide a high degree of scalability and
performance, when extending a solution, it is important not to com-
promise its performance.
When extending the user interface and/or the business logic, addi-
tional processing is added to create, retrieve, update, or even delete data.
This additional processing may have an impact on the user experience,
depending on the amount of extended functionality added.
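A simplified sketch of why this happens: every handler registered on an event such as "save" runs inside the user's operation, so each extension adds latency to that path. The handler registry below is illustrative only, not the platform's actual eventing model.

```python
# A base "save" operation plus registered extension handlers.
handlers = []

def on_save(handler):
    # Decorator that registers an extension on the save event.
    handlers.append(handler)
    return handler

@on_save
def validate_credit(record):
    record["credit_checked"] = True

@on_save
def stamp_audit(record):
    record["audited"] = True

def save(record):
    # Every registered extension runs inside the save path, so
    # each one adds latency that the end user experiences.
    for handler in handlers:
        handler(record)
    record["saved"] = True
    return record

result = save({"id": 1})
print(result)
# → {'id': 1, 'credit_checked': True, 'audited': True, 'saved': True}
```

The more handlers accumulate on a hot path, the slower every save becomes, which is why each extension on a frequently used event deserves scrutiny.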
the solution is extended, as those extensions can highly impact them.
Scalability
The scalability of a business solution is also a key consideration in
determining how you extend it. While the cloud platform includes
scalable servers and microservices, other aspects of the platform need
to be considered to determine the impact on your business solution
architecture.
When extending the solution, it is important to understand how usage
grows over time and the impact of the design on parameters like:
▪ How much storage is required by the extensions and how does it
grow over time?
▪ How many additional application programming interface (API)
calls do the features require?
▪ What are the limits on workflows, code execution, or API calls?
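Questions like these can be turned into rough capacity estimates early in design. The sketch below is purely illustrative; the limit and usage numbers are made-up placeholders, not actual Dynamics 365 service protection limits.

```python
# Hypothetical capacity estimate for an extension that syncs data
# through API calls. Every number below is an illustrative
# placeholder, not a real platform limit.
DAILY_API_LIMIT_PER_USER = 40_000   # assumed daily allowance per user

calls_per_sync = 3                  # e.g. create + update + audit write
syncs_per_user_per_day = 500
users = 20

daily_calls = calls_per_sync * syncs_per_user_per_day * users
per_user = daily_calls / users
headroom = DAILY_API_LIMIT_PER_USER - per_user

print(daily_calls, per_user, headroom)
# 1500.0 calls per user per day, leaving 38500.0 of headroom
```

Re-running this kind of estimate as user counts and sync frequency grow makes it obvious when an extension is on a collision course with a platform limit, long before a test cycle reveals it.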
This means that all extended functionality is added on top of ALM
practices, which also increases complexity. As an example, if the
extended features require different configurations, or are only ap-
plicable to specific business units, countries, or regions, there needs to
be a separation in the ALM process, which may simply mean they are
shipped in different packages (or solutions).
Maintainability, supportability,
and future-proofing
In a cloud world where updates are continuous, and new features are
released with frequency, it is important to think about the impact and
costs of maintaining extended functionalities.
When extending a solution, additional maintenance requirements are
added to the business solution, so it is important to understand and stay
aware of deprecations and the roadmap. This helps avoid building something
that might soon be offered natively by the solution, or being forced to re-
place parts of the extended solution because of deprecated functionality.
Supportability
Extending a business solution can also complicate the support re-
quirements of the solution. Normally, the first line of support is at the
organization itself or a vendor. Support resources must have expertise
on the business solution and extensions built on top of off-the-shelf
capabilities. This requires a specialization of resources to support the
extended business solution.
It is good practice to take the following perspectives into account. First,
practice sound developer craftsmanship. This applies to basically any lan-
guage or technology and dictates that when APIs and extension points
are created, they are well thought through, robust, and well defined,
and that the extension is made in a way that allows other extensions
to use the same extension point or API side by side with ours.
Product-specific guidance
In the following sections, we will look at Dynamics 365 Customer
Engagement apps and Finance and Operations apps individually.
You can find a list of additional relevant links at the end of the chapter.
Let’s first look at what we can extend without changing the application
components in Figure 15-4.
Fig. 15-4
Extension example #1
Requirement: Create a custom workspace with tiles and lists from
multiple forms and queries across the manufacturing module.
Tools and components: Use the personalization feature and "add to
workspace" functionality in the UI to put together the desired workspace
and components. No change to code or application components is needed.
Skills: Requires user-level experience with the relevant module,
navigation, and personalization.
Extension example #2
Requirement: Add a table to hold a list of food allergens. Add the
allergen field to the sales line record for the user to indicate that a
product contains the specific allergen.
Tools and components: Use Visual Studio in a developer environment
to add the extended data types, the tables and fields, and the extensions
to the sales line table and the Sales Order form. This change requires
new and changed application components and must follow software
development lifecycle (SDLC), build, and deployment guidelines.
Skills: Requires entry-level professional developer experience in
Dynamics 365 Finance and Operations apps and familiarity with Visual
Studio, build procedures, and best practices.
Extension example #3
Requirement: Add code to automatically change the status of the new
production order to "started" when the user firms a planned production
order.
Tools and components: Use Visual Studio in a developer environment
to extend the X++ business logic; add appropriate event handlers,
classes, methods, and X++ code to catch the event that the planned
order is firmed, and execute the correct code pattern to bring the new
production order to started status. This change requires new and
changed application components and X++ code, and must follow SDLC,
build, and deployment guidelines. Scalability and performance when
large numbers of planned orders are firmed should be considered,
using appropriate developer tools.
Skills: Requires medium- to expert-level professional developer
experience in Dynamics 365 Finance and Operations apps and familiarity
with Visual Studio, build procedures, best practices, and, ideally,
frameworks like SysOperations Framework, multithreading, and
performance-checking tools and patterns.
Before building an extension to the standard product, it is a best practice
to consider whether one of the options for noninvasive personalization or
configuration can meet the required functionality. Finance and Operations
apps offer wide options for personalizing the UI, creating user-specific
experiences, and automating processes without having to do any
programming at all.
As the table below shows, many options are available to find alter-
native approaches to extending the application. The list below is not
exhaustive. Additional low-code and no-code options are mentioned
in Chapter 16, "Integrate with other solutions." The decision of whether
to extend comes down to user efficiency and value for your customers.
Fig. 15-5
Restricted personalization — The application remembers the last
settings for sorting, column width, and criteria values in queries and
dialogs. While this is hardly considered a way of extending, it does give
the user the opportunity to store selections in dialogs, expansion of
relevant FastTabs, and column widths so that more columns are visible
on a given form. Personalizations can be shared by users or managed
centrally by an admin from the personalization page.
Personalization of forms/UI — Personalization bar in forms and
workspaces. Personalization allows users or admins to add or hide
fields or sections, change labels, change the order of columns, and edit
the tab order by skipping fields when pressing Tab. Personalizations
can be shared by users or managed centrally by an admin from the
personalization page.
Saved views — Saved views combine personalization of the UI for a
form with filtering and sorting of the form data source. Saved views is a
powerful tool that allows the user to quickly switch between tailored
views of columns, filtering, and sorting on the same screen depending
on the specific task at hand. For example, a buyer in a pharmaceutical
company may need a simple view of the purchase order screen for
nonregulated materials and another when purchasing regulated
materials used for manufacturing. Saved views can be shared by users
or managed centrally by an admin from the personalization page. May
require the Saved views feature to be turned on.
Custom workspaces — Users can use personalization and the "add to
workspace" button to add tiles, views, or links to an existing or custom
workspace. This functionality allows users to tailor their experience to
their needs. Workspaces provide glanceable information about the most
important measures, actionable items, and relevant links to other pages.
Custom and personalized workspaces can be shared by users or
managed centrally by an admin from the personalization page.
Custom fields — Users with certain security roles can add up to 20
custom fields to tables. Finance and Operations apps have a rich set of
features that apply across a wide set of industries, but some organizations
require additional fields on certain tables; for example, the item or
customer master or the sales order header. This feature allows the user
to create these fields. Note that these fields are specific to the
environment they are created in and cannot be referenced by developer
tools. Custom fields, and the personalized forms that show them, can be
shared by users or managed centrally by an admin from the
personalization page. Deleting a custom field is irreversible and results
in the loss of the data in the custom column.
Fig. 15-5 (continued)
Grid capabilities — The grid on forms in the system has some extended
features that may eliminate the need for an extension. The grid offers
the following capabilities:
▪ Calculating totals for columns in the grid footer
▪ Pasting from Excel
▪ Calculating math expressions. For example, if the user enters 3*8
in a numeric field and presses Tab, the system calculates and enters
the result of 24
The admin can enable the New Grid Control feature from feature
management. Note that there are certain limitations; see the reference
at the bottom of the section.
Embedded canvas apps — The user can add a canvas app to a form or
workspace, embedded into the UI or as a menu item that can pull up
the app from the menu. The ability to embed a canvas Power App
enables citizen developers to use low-code/no-code options for
interacting with data in Dataverse directly from the UI in the Finance
and Operations apps. It is important to note that if the app must
interact with Finance and Operations apps data, the integration to
Dataverse must be in place and the app must support the required
actions. See more about UI integration for Finance and Operations
apps in Chapter 16, "Integrate with other solutions."
Mobile workspaces — Users can view, edit, and act on business data
on an app for iPhone and Android, even if they have intermittent
network connectivity. IT admins can build and publish mobile
workspaces that have been tailored to their organization. The app uses
existing code assets. IT admins can easily design mobile workspaces by
using the point-and-click workspace designer that is included with the
web client. Simple actions can be done from the mobile workspaces;
most more advanced actions require extension.
Excel integration — Users can extract or edit data on most forms in the
system by clicking the Office icon. The Excel integration allows for a
wide variety of scenarios for entering, presenting, and interacting with
data in ways that are not possible from the UI. In addition to the
export-to and open-in Excel options, the user can create workbooks
and templates for specific purposes. Excel has many features for
presentation and offers data manipulation capabilities for larger
datasets that users cannot handle in the system UI. With great power
comes great responsibility: while it is easy to change a whole column
of data and publish it into the system, it is equally easy to make a
mistake.
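As an aside, the grid's math-expression behavior described above (entering 3*8 yields 24) can be mimicked with a small, safe arithmetic evaluator built on Python's ast module. This is only an illustration of the idea, not the product's implementation.

```python
import ast
import operator

# Map supported AST operator nodes to their arithmetic functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def evaluate(expression):
    # Parse the expression and walk only a small whitelist of node
    # types, so arbitrary code can never be executed.
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

print(evaluate("3*8"))        # 24
print(evaluate("10/4 + 1"))   # 3.5
```

Whitelisting node types rather than calling eval() is the important design choice: the field accepts arithmetic but nothing else.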
Figure 15-6 is an overview of the components, the considerations for
extensions, and the related characteristics.
Fig. 15-6
User interface — Graphical editor in Visual Studio with preview. Forms
must adhere to patterns. The editor in Visual Studio has a rich modeler
that helps the developer apply the right structure to the screens. This
ensures performance when loading the form, adaptability across
different resolutions and screen sizes, and consistency across the UI.
Avoid deviating from the predefined patterns and creating monster
all-in-one-screen style forms; they are often a result of the designer
trying to replicate a legacy system experience.
Data model and queries — Metadata editor in Visual Studio for tables,
fields, extended data types, enums, queries, and views. Follow best
practices and frameworks. Apply indexes and define delete actions.
Normalize. Use the effective date framework when applicable. Keep
performance in mind. Use field lists when performance is a risk. Avoid
creating redundancy or replicating poor modeling from legacy apps.
Business logic — X++ editor in Visual Studio. Adjust compiler settings
to alert about best practice deviations, with the goal of zero deviations.
Use code patterns and frameworks. Practice good developer citizenship.
Write clean, easily readable code. Run the CAR report and use the
compatibility checker. Unit test the code. Avoid ignoring best practices,
writing long methods, or over-commenting the code.
Reporting — SSRS report editor in Visual Studio. SSRS reports are good
for certain precision designs and tabular lists. See Chapter 13, "Business
intelligence, reporting, and analytics," for more information. Avoid
reaching for the SSRS report option if there is a better way.
Data entities — Metadata editor in Visual Studio. The out-of-the-box
data entities are general-purpose entities built to support a wide variety
of features surrounding a business entity. In scenarios where a
high-volume, low-latency interface is required, it is recommended to
build custom data entities with the targeted and specific features needed
to support high-volume interfaces in the implementation. Avoid creating
data source proliferation. See Chapter 16, "Integrate with other
solutions," for more information.
Development architecture
The architecture of the development environment, as shown in Figure
15-7, includes the software development kit (SDK), which consists of
Visual Studio development tools and other components. Source con-
trol through Azure DevOps allows multi-developer scenarios, where
each developer uses a separate development environment. Deployable
packages are compiled and packaged in a build environment or a build
service and uploaded to Dynamics Lifecycle Services (LCS) for deploy-
ment to nonproduction environments. Deployment to the production
environment happens after proper testing has been done in the UAT
environment and users/stakeholders have signed off as described in
the SDLC.
Fig. 15-7: Architecture of the development environment, showing the
Visual Studio tools (project system, X++ code editor, designers,
application explorer) working against the design-time meta-model and
metadata API (file system), the local Application Object Server service
with best practice integration hosted by Internet Information Services,
the business database and model store, and the build producing a
deployable package (metadata, binaries, and data) for the cloud
instance.
By bringing data from Finance and Operations apps into Dataverse, it
is possible to leverage the Power Platform and use low-code/no-code
developer models on Finance and Operations apps data, either as
embedded Power Apps, Power BI reports, or dashboards, in standalone
apps, or integrated with other Dynamics 365 apps, without making
changes through extensions to the Finance and Operations app and
data layer.
Fig.
15-8
Best practice checks for X++ and Developers should strive for zero deviations. The You can stop developers from
Best practice application components are built best practices are made to ensure and updatable, checking in code with best
check into Visual Studio. They can be performing and user friendly solution. practice deviations.
errors, warnings, or informational.
The compatibility checker tool The compatibility checker tool is available as one Not all breaking changes can
Compatibility can detect metadata breaking of the dev tools. You can use it to ensure that your be detected by the tool. See the
report changes against a specified solutions are backward-compatible with earlier Compatibility checker docs page
baseline release or update. releases before you install or push updates. for specifics.
The users can take a trace of You can use the trace parser to consume traces and The trace parser tool can be
runtime execution directly from analyze performance in your deployment. The trace found in the PerfSDK folder in
Traces and
the UI. The trace parser can be parser can find and diagnose various types of errors. your development environments.
trace parser used to read the trace. You can also use the tool to visualize execution of X++
methods, as well as the execution call tree.
Performance timer is a tool in To open the Performance timer, open your web page The tool itself has a performance
the web client that can help you with the added parameter debug=develop. You can impact.
to determine why your system's see counters for client time and server time, and the
Performance performance might act slow. total time. Additionally, you can see a set of performance
timer counters, a list of expensive server calls, how many
SQL queries were triggered by this individual call and
which SQL query was the most expensive.
Implementation Guide: Success by Design: Guide 15, Extend your solution 414
Fig.
(continued)
15-8
Under Environment Monitoring The logs provide for example: The tools under Environment
in LCS there a comprehensive Monitoring are very effective at
▪ Activity Monitoring: A visualization of the activity
collection of tools and information diagnosing potential or growing
that has happened in the environment for given
LCS Logs that you can use to analyze and
timeline in terms of user load, interaction and
performance issues. Keeping
diagnose your cloud environment. an eye on these metrics can
activity.
help pinpoint problems with
▪ Information about slow queries, deadlocks, etc. extensions.
The CAR report is an advanced The CAR report can be run by command line in a A clean CAR report is a
Customization best practice check tool. development environment. The output is an Excel requirement for the go-live
Analysis Report workbook with recommendations issues readiness review prior to
(CAR Report) and warnings. enabling production.
Understand and avoid breaking changes
A breaking change is a change that will break all or some of the existing consumers of your code. Breaking changes are, for example, changes to the data model and extended data types, changes to access modifiers on classes and methods, and many others. This is especially important in heavily extended solutions, if you are building a basis for a global rollout, if you are making an ISV solution, or if you have multiple developers sharing common custom APIs or constructs, but it should always be considered. Although the application is massive, we tend to extend only the same relatively small subset of elements, so it is more likely than you might think that other developers use your components or code.
Log extensibility requests early
If you find a need for an extension point that is currently not available, log the request early. This is done via an extensibility request in LCS. Extensibility requests are logged to a backlog; Microsoft engineers prioritize all requests and then work on them in priority order. Note that Microsoft does not guarantee that all requests will be fulfilled; requests that are intrusive by nature will not be supported, as they would prevent a seamless upgrade. Fulfilled extensibility requests follow the same cadence as platform updates.
Proper unit testing
Sometimes developers put a lot of effort into building an extension, but little effort into unit testing it before determining whether it is ready to deliver. Developers are the first line of defense against bugs, performance issues, and semantic issues that may exist in the specification, and it is a lot easier and more cost effective for the developer to find and fix a problem before it is checked in, built, and deployed. Simply walk through the intended functionality from perspectives such as:
▪ Does it do what I expect?
▪ Will this code scale with high volumes?
▪ Can I do things I am not supposed to?
▪ Could the user accidentally do something unwanted?
▪ Does the requirement make sense, or does it force me to break best practices or patterns?
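These perspectives translate directly into unit tests. The following is a minimal sketch in Python, for illustration only: the `apply_discount` function and its rules are invented for this example, not part of any Dynamics 365 API. In an X++ extension, the same checks would typically be expressed with the SysTest framework.

```python
import unittest

def apply_discount(order_total, discount_pct):
    """Hypothetical extension logic: apply a percentage discount to an order total."""
    if not 0 <= discount_pct <= 100:
        # Guard against input the user should not be able to send.
        raise ValueError("discount_pct must be between 0 and 100")
    return round(order_total * (1 - discount_pct / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_does_what_i_expect(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_cannot_do_what_i_am_not_supposed_to(self):
        # A 150% discount must be rejected, not silently applied.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

    def test_scales_with_high_volumes(self):
        # A crude volume check: the logic must cope with many orders in one run.
        totals = [apply_discount(total, 5) for total in range(100_000)]
        self.assertEqual(len(totals), 100_000)

# Run with a standard test runner, e.g. `python -m unittest`.
```

Each test maps to one of the questions above: correctness, guarding against unsupported actions, and behavior under volume.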
For more on these topics, see the “Reference links” section later in
this chapter.
Additional components
It is important to note that the extended product architecture contains several additional components. Along with that are multiple avenues and tiers for approaching a requirement for extension, depending on the specific area and nature of the requirement.
Reference links
▪ Extensibility home page - Finance & Operations
▪ Application stack and server architecture - Finance & Operations
▪ Microsoft Power Platform integration with Finance and Operations
▪ Microsoft AppSource – destination for business apps
▪ Commerce for IT pros and developers - Commerce
▪ Write extensible code - Finance & Operations
▪ Breaking changes - Finance & Operations
▪ Extensibility requests - Finance & Operations
▪ Grid capabilities - Finance & Operations
▪ Mobile app home page - Finance & Operations
▪ Take traces by using Trace parser - Finance & Operations
▪ Testing and validations - Finance & Operations
Customer Engagement apps
The Power Platform, shown in Figure 15-9, provides the ability to use configurations and low-code/no-code customizations, and still allows developers to programmatically extend first-party customer engagement apps like Dynamics 365 Sales. Custom business applications can also be created. You can even mix the two approaches to provide applications adjusted to specific organizational needs.
Fig. 15-9 Microsoft Power Platform: the low-code platform that spans Office 365, Azure, Dynamics 365, and standalone applications
Power Apps democratizes the custom business app building experience by enabling non-professional developers to build feature-rich, custom business apps without writing code.
Both types of apps are built to be responsive, adjusting to and being accessible from different devices. Customer engagement apps, such as Dynamics 365 Sales or Dynamics 365 Customer Service, use Dataverse to store and secure the data they use. This enables organizations to build or extend apps by using Power Apps and Dataverse directly against the business data.
Data
The entity designer and option set designer determine what data the app is based on, and allow changes to the data model by adding additional tables and fields, as well as relationships and components that use predetermined options for users to select.
Logic
The business process flow designer, workflow designer, process designer, and business rule designer determine the business processes, rules, and automation of the app.
Visualizations
These determine what type of data visualization and reporting the
app includes, like charts, dashboards, and reports based on SQL Server
Reporting Services.
You can create a range of apps with Power Apps, using either canvas or model-driven apps, to solve business problems and infuse digital transformation into manual and outdated processes.
Solution analyzers
Solution checker
The solution checker can perform a rich static analysis of the solutions against a set of best practice rules to quickly identify patterns that may cause problems. After the check is completed, a detailed report is generated that lists the issues identified, the components and code affected, and links to documentation that describes how to resolve each issue.
Portal checker
The portal checker is a self-service diagnostic tool that can identify
common issues with a portal by looking at various configuration
parameters and providing suggestions on how to fix them.
Conclusion
Cloud-based solutions offer ready-to-use application solutions that can be delivered easily, reducing the time required for an organization to start taking advantage of them. SaaS solutions typically reside in cloud environments that are scalable and offer native integrations with other SaaS offerings. These solutions benefit from continuous updates that add innovation and new capabilities multiple times per year. This reduces costs and effort and eliminates the downtime associated with upgrades in a traditional on-premises model.
This means that organizations can start using the solution as-is. Still, in some scenarios, additional requirements are needed to add value to the business or to empower users and drive adoption of the new solution. Thus, modern SaaS solutions also provide rich capabilities to extend them further, ranging from simple configurations using a low-code/no-code approach to extensions built with custom code by professional developers.
References
▪ Business Apps | Microsoft Power Apps
▪ Build Apps – Canvas Apps or Model-driven Apps | Microsoft Power Apps
▪ Microsoft AppSource – destination for business apps

…is how performance is affected. During the Solution Performance implementation workshop, the FastTrack solution architect works along with the implementation partner and the customer to review how the extensions can impact the overall performance of the solution. Typical risks and issues identified relate to overextending forms, synchronous events that impact the user experience, and impact on capacity, such as service protection limits, resources, and allocation limits.
Checklist

Review if any potential ISVs were considered before deciding to extend the solution. The AppSource marketplace contains ISV-managed solutions that may replace the need to create a custom solution.

Ensure code and customizations follow only the documented supported techniques, and don't use deprecated features and techniques.
Case study
The power of making it your own
A wealth management services company that delivers personalized services needed to standardize and simplify processes, drive efficiencies, expose more data, and increase visibility and interoperability for their users.

The off-the-shelf functionality gave users the ability to achieve a single view of the customer lifecycle across all products. Through customizations and specific extensions of the solution, the company added special wealth management services and customized the user experience to better serve their customers.
Guide 16
Integrate with other solutions
Introduction

Together, Dynamics 365 apps provide a rich and mature feature set across a wide variety of industries. However, there are situations in which you might want to go beyond the boundaries of the application suite and extend processes or incorporate data assets from other sources. In this chapter, we examine how to integrate Dynamics 365 apps with other solutions, or with each other. In line with the design pillars, we look at how the choice of integration platform should fit into the architectural landscape.

We also look at how to choose an integration design that offers users the capabilities they desire in the short and long term.

And finally, before diving into the product specifics, we walk through some of the common challenges people face when integrating systems.

This chapter covers the following topics:
▪ Defining business goals
▪ Choosing a platform
▪ Choosing a design
▪ Product-specific guidance
Implementation Guide: Success by Design: Guide 16, Integrate with other solutions 427
Defining business goals

To ensure that your integration work aligns with the overall direction of the business, it's important to match each requirement of cross-system processes against the overall goals of the project and the business. To accomplish this, begin your integration work by defining goals that map to the business perspective.
Multi-phased implementation
You might be implementing Dynamics 365 apps in a planned multi-phased approach, starting with a geographic location, a single division, or a single Dynamics 365 app; Finance, for example. In this scenario, some level of integration with the parts of the legacy solution that will be implemented in future phases could be necessary.
Regulatory requirements
Regulatory, government, and industry data exchange or reporting are often standard practice or required by law in some industries and regions. Such data is often reported electronically, but more complex exchanges of data could require a more traditional integration approach.
Financial consolidation
Perhaps your organization is a subsidiary of a larger operation that requires data to be consolidated and reported in a corporate parent entity. This often requires extracts of data to be transformed and loaded into the corporate consolidation system. In some cases, it's the other way around: your organization might expect integration of consolidation and other data from your subsidiaries into your new system.
Many more scenarios are not described here. Some might even be
combinations of several of the examples.
different departments is key. If the sales department depends on information from accounting, integration can help the data flow at the time the sales department needs it. Whether the data arrives in real time or in batches, you can schedule it to arrive at a certain time. The right pattern of communication will help satisfy the different business units.
▪ Reduced human errors An error in the data, like a typo, an incorrect number, or a button pressed at the wrong time, could significantly affect your process or, even worse, degrade the customer experience.
▪ Increased productivity Users become more focused on the processes that add value, since they won't need to reach for or search other systems to retrieve the information they need.
▪ Process optimization Integration simplifies your processes so that you spend less time executing tasks, thereby adding more value to your business.
▪ Increased security By automating processes and data movement through integrations, organizations implement better controls for data access and reduce the need for users to work directly with sensitive data.
▪ Regulatory compliance Automated integrations could help meet some of the controls needed to satisfy regulatory requirements like HIPAA (Health Insurance Portability and Accountability Act) or FDA regulations in the US.
▪ Reduced cost of operations Integrations might help reduce repetitive manual activities, human errors, and training activities, which could lead to an overall decrease in the cost of operations.
▪ Avoiding data silos Integrations break down data silos and improve the value of the organization's data. With AI, machine learning, and IoT usage on the rise, this data might help drive better insights, data science, and predictive analytics in your company.

For supplemental information, read Integration design for Dynamics 365 solutions on Microsoft Learn.
Integration planning
To properly incorporate business goals into the entire project lifecycle,
we recommend you plan the integration architecture in the initial
stages of the implementation project.
underestimate volume or complexity. To prevent this, we highly
recommend you create a blueprint of the design before starting any
specification work and do some calculations of loads and transaction flow.
Conceptualizing

Creating a blueprint and thoroughly planning the implementation activities will also help you formulate the testing and performance testing of the solution later in the implementation. That's why integration architecture is a key part of the Success by Design solution blueprint.

Success by Design highly recommends that you approach integration work the same way as an extension project, by following a defined software development lifecycle (SDLC) that incorporates business stakeholders and collects their buy-in. The SDLC should include requirements, design, development, and testing, as well as performance testing, deployment, and application lifecycle management (ALM). The work to define requirements should be business driven and process focused. For more information, refer to Chapter 7, "Process-focused solution," and Chapter 11, "Application lifecycle management."

To create a blueprint for integration, you can leverage several types of diagrams, which we describe here.
▪ Application integration diagram The application integration diagram is often a high-level representation of solution architecture. An example is shown in Figure 16-1. Many styles exist, but in its basic form it should provide an overview of which systems in the solution need to be integrated and, ideally, what data is exchanged and the direction of the data flow. Once the overview is established, details can be added about the specific interface touchpoints, frequency, and volume information, and perhaps a link to ALM information, such as a functional design document (FDD) number or link.

Fig. 16-1 Example application integration diagram
Fig. 16-2 Example cross-system process flow (process start, user action, trigger event, message, and response)
▪ Process documentation, mapping, and volume calculations Once the high-level overviews are defined, we recommend you document the cross-system processes in more detail, for example by defining user stories, before starting the system-to-system mapping. Mapping can be done as part of a functional or technical design document. (An example of mapping is shown in Figure 16-3.) The design document, along with the process definition, should be detailed enough to ensure that…

Fig. 16-3 Example field mapping between System 1 (Sales) and System 2 (Operations):
AccountNumber → Customer Account
Name → OrganizationName
Description → SalesMemo
Telephone1 → PrimaryContactPhone
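A field mapping like the one in Figure 16-3 can be captured directly in code as part of the transformation step. The sketch below is a generic illustration, not a product API; the field names are taken from the figure, and the `map_record` helper is hypothetical.

```python
# System-to-system field mapping from Figure 16-3 (Sales -> Operations).
FIELD_MAP = {
    "AccountNumber": "Customer Account",
    "Name": "OrganizationName",
    "Description": "SalesMemo",
    "Telephone1": "PrimaryContactPhone",
}

def map_record(sales_record: dict) -> dict:
    """Translate a Sales record into the Operations schema, skipping unmapped fields."""
    return {
        target: sales_record[source]
        for source, target in FIELD_MAP.items()
        if source in sales_record
    }

sales_record = {"AccountNumber": "C-1001", "Name": "Contoso", "Telephone1": "555-0100"}
print(map_record(sales_record))
# {'Customer Account': 'C-1001', 'OrganizationName': 'Contoso', 'PrimaryContactPhone': '555-0100'}
```

Keeping the mapping in one declarative table makes it easy to keep the design document and the implementation in sync.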
Choosing a platform

In this section we discuss what you should consider when choosing a platform and, if needed, a middleware solution to achieve the previously mentioned business goals.

Data storage needs have increased exponentially in recent years, and this trend will continue. The amount of data that's generated and transferred today is practically unimaginable. In addition to the new data that's being captured or generated, historical data from legacy systems might need to be preserved. The platform you choose will need to incorporate all of these realities, and must be able to reliably handle the storage and transfer of vast amounts of data.
Fig. 16-4 Simple cloud and simple hybrid topologies (System 1 and System 2 split across cloud and on-premises)
Fig. 16-5 Primarily on-premises and hybrid topologies (Systems 1–6 and storage/data warehouse connected through middleware across cloud and on-premises)
using different technologies, and it seems System 4 is acting as a hub. No middleware platform is applied, and the strategy is likely somewhat loosely defined. The perceived cost of applying an integration platform and middleware is driving the use of point-to-point integration when new systems are introduced.

▪ Primarily cloud In the scenario shown on the right side of Figure 16-6, the organization has a higher level of cloud maturity, and the platform and middleware are platform-as-a-service (PaaS) cloud services. The organization has integrated the few on-premises components that are necessary, and additional line-of-business systems are all in the cloud.

When implementing a system that's primarily cloud, compare the cost of implementing a clear strategy and using a platform and middleware such as Azure and Power Platform with the cost of operating and maintaining the existing integrations.

Fig. 16-6 Hybrid (no middleware) and primarily cloud topologies
Be sure to continue using the cloud-based strategy and the toolset
provided by Azure and Power Platform to integrate to on-premises
components when necessary.
Middleware

Integration middleware is software or services that enable communication and data management for distributed applications. Middleware often provides messaging services on technologies such as SOAP, REST, and JSON. Some middleware offers queues, transaction management, monitoring, and error logging and handling. Different middleware platforms can support on-premises, cloud-based, or hybrid scenarios. The following are characteristics you should consider when choosing a middleware platform.

Middleware provides specialized capabilities to enable communication, transformation, connectivity, orchestration, and other messaging-related functionality. Dynamics 365 Finance and Operations apps provide integration capabilities to support interfaces, but they are not designed to replace middleware.
Key characteristics
When deciding whether to integrate to an existing system, you should
consider several important characteristics of the solution, the system,
and its context.
▪ Scalability and performance The planned platform, middleware, and supporting architecture must be able to handle your organization's expected persistent and peak transaction volumes in the present and short term, and scale well in the long term.
▪ Security Authentication defines how each system confirms a
user’s identity, and you should consider how that will work across
systems and with middleware. Authorization specifies how each
system grants or denies access to endpoints, business logic, and
data. It’s important to ensure that an integration platform and
middleware are compatible with system security and fit into the
overall security architecture.
▪ Reliable messaging Middleware typically provides messaging services. It's important that the messaging platform supports the architecture of the integrated systems and provides a reliable mechanism or technology to ensure that messaging across systems is accurately sent, acknowledged, received, and confirmed. This is especially important in situations in which a system or part of the supporting architecture is unavailable. Error-handling concepts related to messaging, such as idempotency and transient versus persistent errors, are discussed in the "Mind the errors" section later in the chapter.
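To make these messaging concepts concrete, here is a minimal, product-agnostic sketch of an idempotent consumer: each message carries a unique ID, duplicates are skipped, transient errors are retried, and persistent errors are parked in a dead-letter list. All names and the storage choices are illustrative; a real implementation would use durable storage and the middleware's own dead-letter facility.

```python
processed_ids = set()   # in production this would be durable, shared storage
dead_letter = []        # stand-in for the middleware's dead-letter queue

class TransientError(Exception):
    """Temporary failure (e.g., a timeout); safe to retry."""

def handle(message, apply_change, max_retries=3):
    msg_id = message["id"]
    if msg_id in processed_ids:
        return "duplicate-skipped"           # idempotency: re-delivery has no second effect
    for _ in range(max_retries):
        try:
            apply_change(message["body"])
            processed_ids.add(msg_id)
            return "processed"
        except TransientError:
            continue                         # transient: retry within this delivery
        except Exception:
            dead_letter.append(message)      # persistent: park for investigation
            return "dead-lettered"
    dead_letter.append(message)              # retries exhausted
    return "dead-lettered"
```

Calling `handle` twice with the same message ID applies the change once and reports the second delivery as a duplicate, which is the behavior reliable messaging depends on.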
▪ HA and DR The middleware platform must support the expected level of stability and availability across the interface, in line with the cross-system requirements; for example, high availability (HA) for mission-critical scenarios. Requirements for disaster recovery (DR) are another consideration for the platform and middleware. For example, if the integrated systems or the supporting infrastructure experience an outage, it's important to consider what might happen to shared or synchronized data when the outage is resolved or if either system has to roll back to the latest backup.
▪ Monitoring and alerting The platform, middleware, and
supporting infrastructure must support the requirements for
monitoring activity and alerting users and administrators if there’s
a problem. In addition to ensuring that the platform can manage
adequate levels and venues of alerts and notifications, it’s equally
important to consider who in your organization is responsible
for responding to those alerts and notifications and whether the
necessary workload is feasible.
▪ Diagnostics and audit logs It’s important to also consider the
requirements for monitoring cross-system activity and for audit
trails and logging. You should consider whether the planned
platform and middleware support that.
▪ Extensibility and maintenance It’s important to consider the
impact of additional business requirements in the long term.
Considerations include the following: the level of effort required to
extend the integration beyond the first implementation; whether
the integration requires expertise in multiple technologies and
changes to multiple systems or whether there is a single low-code/
no-code point of extensibility; how and how often updates are ap-
plied; and what level of technical and functional support is available.
▪ ALM To keep costs as low as possible, consider how you can ensure effective lifecycle management, version control, and documentation of the integrated solution. You should verify that all components and parts of the solution can use ALM tools such as Azure DevOps. For more information about ALM, refer to Chapter 11, "Application lifecycle management."

Dynamics 365 apps and Power Platform allow users to design, configure, and customize an application to meet a customer's business needs. In doing so, it's important to consider performance and scalability. Ensuring a performant system is a shared responsibility among customers, partners, ISV providers, and Microsoft.
Data consistency across multiple systems is important, and for that to occur you need to ensure data quality. Keeping in mind the saying that "if garbage comes in, garbage comes out," it's important to verify that your data flows correctly and in a timely manner across all the integrated systems.

The following section discusses the preferred platform to use to integrate with Dynamics 365 apps on Azure.

Fig. 16-7 Microsoft Power Platform: the low-code platform that spans Office 365, Azure, Dynamics 365, and standalone applications
Works with any kind of data
▪ Dataverse is integrated into data-centric tools and services such as Microsoft Excel, Outlook, Power BI, Power Query, and AI Builder that are traditionally used by knowledge workers and integration engineers.
▪ Dataverse has built-in analytics, reporting, and AI that you can use to provide insights into your organization and support decision making. You can obtain analytics and reporting data using low-code/no-code options or by using the full capability set of Azure Data Factory, Power BI, and Azure Databricks. For more information about analytics and reporting, refer to Chapter 13, "Business intelligence, reporting, and analytics."

A data warehouse provides capabilities to store, process, aggregate, analyze, data mine, and report on both current and historical data sets, typically with data aggregated from a varied set of cloud and line-of-business (LOB) applications. A data warehouse has a specific architecture optimized for analytical functions. An Enterprise Resource Planning (ERP) solution such as the Finance and Operations apps maintains a highly normalized data set optimized for transactional processing and business functionality. It's important to select the appropriate data warehouse platform to meet your organization's data warehouse and analytical needs. For more information on reporting and analytics solutions, refer to Chapter 13, "Business intelligence, reporting, and analytics."

Let's take a look now at how you can use Power Platform and Dataverse with Dynamics 365 apps.
Choosing a design

Many factors influence the choice of patterns for your integration. Integration scenarios can be grouped roughly into three kinds: UI, data, and process integration. While there is some degree of gray area and overlap between them, there are also some distinct characteristics and expected behaviors.

UI integration

In UI integration, the primary point of integration is centered around an action that's performed on the UI. The integration might or might not trigger business logic or cause anything to be written to the system. UI integration creates a seamless user experience even though the data and process might exist in separate systems, as shown in the example in Figure 16-8.
The following are additional examples of UI integration:
▪ Commodity price ticker; an email view
▪ On-hand inventory
▪ Embedded Power BI dashboard showing the current day's production quality issues in a manufacturing plant

A key characteristic of UI integration design is that it's embedded. The benefit of UI integration is that a user can retrieve and provide real-time information from multiple sources without switching between systems, thereby saving on training, user licenses, and, more importantly, time when interacting with customers.

Data integration

Data integration, shown in Figure 16-9, is integration between systems that takes place at the data layer, where data is exchanged or shared between systems. Data integration is different from process integration in that both systems work with a representation of the same data, whereas in process integration the process starts in one system and continues in the other system. Examples include:
▪ Exchange rate updates
▪ Postal code and city list synchronization
▪ Accounts and customer data synchronization

Fig. 16-9 Data integration between the process/business logic layers of two systems
In the example in Figure 16-10, orders are synchronized between a Sales system and an ERP system. When the order is created in Sales, the ERP system sends regular updates of the order data to Sales, which enables users to find accurate order status information.

Data integration is also a useful integration type because multiple features within Dynamics 365 apps enable administrators or users to configure many of these integration points out of the box; for example, Microsoft Teams, journal uploads for expenses or bank account statements, exchange rate synchronization, and features for regulatory reporting and extracts.

Another type of data integration, shown in Figure 16-11, is when two systems share the data layer, so updates to one system are instantly reflected in the other system. This is possible with cross-app integration with Dynamics 365 apps and Dataverse.

Fig. 16-10 Order synchronization between System 1 (Sales) and System 2 (Enterprise Resource Planning): create order, ship order, update status, look up status, invoice order, order paid
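At its core, a batch-style status flow like the one in Figure 16-10 is an upsert keyed on a shared order identifier. The sketch below is illustrative only; the function name and the in-memory dictionaries are stand-ins for what would be service endpoints in a real integration.

```python
def sync_order_status(erp_orders, sales_orders):
    """Push the latest ERP status into the Sales copy of each order, keyed by order ID."""
    updated = 0
    for order_id, erp_order in erp_orders.items():
        sales_order = sales_orders.get(order_id)
        if sales_order is None:
            continue  # order not yet replicated to Sales; pick it up next cycle
        if sales_order["status"] != erp_order["status"]:
            sales_order["status"] = erp_order["status"]
            updated += 1
    return updated

erp = {"SO-1": {"status": "Shipped"}, "SO-2": {"status": "Invoiced"}}
sales = {"SO-1": {"status": "Created"}, "SO-2": {"status": "Invoiced"}}
print(sync_order_status(erp, sales))  # 1 — only SO-1 changed status
```

Running the same sync again changes nothing, which is what makes a scheduled batch safe to repeat.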
applications are built and deployed separately.
Process integration

Process integration refers to when a business process is designed to span multiple systems. There are many varieties of process integration, such as a plan-to-produce workflow in which production forecasting or block scheduling occurs in a production scheduling system, and the rest of the manufacturing and supply chain management process (production, order management, and fulfillment) and the billing process occur in an ERP system. Figure 16-12 shows this type of integration.

In this scenario, the business process spans two systems. The user in the production scheduling system creates and maintains the production schedule, thereby initiating the business logic to procure, produce, and reserve the order in the ERP system for later picking and shipping. User interaction might be required in the ERP system, but the point is that the event of creating the order in the production scheduling system triggers certain business logic in the ERP system, and the rest of the business process steps are handled either automatically or by the appropriate users in the ERP system.

Fig. 16-12 Process integration between System 1 (Production scheduling) and System 2 (Enterprise Resource Planning): plan production orders, firm production order, create order, procure raw materials, consume raw materials, update production order, report finished, pick, pack, and ship order
often requires batched, scheduled, and sometimes near real-time integration.

A key characteristic of process integration design is that it's event driven. The benefits of process integration are accuracy, efficient timing of information and activities in the organization, and a reduction of manual errors.

In the example shown in Figure 16-12, without the integration, orders would have to be created manually twice, increasing time spent and risking errors such as typos and missed updates.

Power Automate

Power Automate provides low-code/no-code solutions to help you automate workflows and build integration between various applications. Power Automate automates repetitive and manual tasks and seamlessly integrates Dynamics 365 apps inside and outside Power Platform.
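The event-driven nature of process integration can be sketched as a simple publish/subscribe shape: firming the production order in one system raises an event, and a handler in the other system carries the process forward. Everything below is a generic illustration; the event name, payload, and handlers are invented for this example.

```python
subscribers = {}

def subscribe(event_name, handler):
    """Register a handler to run whenever the named event is published."""
    subscribers.setdefault(event_name, []).append(handler)

def publish(event_name, payload):
    """Deliver the event payload to every registered handler."""
    for handler in subscribers.get(event_name, []):
        handler(payload)

erp_orders = []

# ERP side: react to the scheduling system firming a production order.
subscribe("production_order_firmed",
          lambda order: erp_orders.append({"order_id": order["order_id"],
                                           "state": "created"}))

# Scheduling side: firming the order publishes the event that drives the ERP logic.
publish("production_order_firmed", {"order_id": "PO-42"})
print(erp_orders)  # [{'order_id': 'PO-42', 'state': 'created'}]
```

The scheduling side never calls the ERP logic directly; the event is the only coupling between the two systems, which is what lets each side evolve independently.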
requirements from simple scenarios to enterprise architecture scenarios.
Challenges in integration type it handles. We recommend that you also consider what type of
actions you need the integration to perform, such as the following:
Product-specific guidance ▪ Data types and formats What types of data are you sending?
Transactional, text, HTML?
▪ Data availability When do you want the data to be ready, from
source to target? Is it needed in real time, or do you just need to
collect all the data at the end of the day and send it in a scheduled
batch to its target?
▪ Service protection and throttling When you use certain integration patterns, service protection might be built in, for example, by limiting the number of records or the maximum system resource utilization used for interface activity. Sometimes the provider places this limitation on you, but in other cases, if you're offering integration points to other systems, you want to impose a limitation or throttling to ensure that the general performance of your system is not impacted by traffic from external requests.
▪ Transformation of data Do you want to convert or aggre-
gate transactional data into analytical data? Are you changing
.txt to HTML?
▪ Triggers What action will trigger sending data from the source
to the target?
▪ Trigger actions Do you want specific actions to be automated
after the data arrives at the target?
▪ Process error handling What type of monitoring will you put in place to detect any issues with the interfaces? What type of alerts do you want for the patterns that you'll be using?
▪ Flow direction Is the flow bidirectional or unidirectional?
▪ UI The UI determines how the data is going to be displayed once
it arrives at the target system. The UI determines the user experi-
ence; how the user will interact with and handle the information.
▪ Scalability How will the interface pattern handle the expected
transaction volumes in the present, short term, and long term?
What would happen if you exceeded those expectations?
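The service protection and throttling consideration above can be sketched as a client that honors a throttling signal instead of hammering the service. This is a hedged sketch: the 429 status code, Retry-After header, and simulated responses are illustrative stand-ins for whatever protection signal a given provider actually uses.

```python
import time

def call_with_throttle_respect(send, max_attempts=5, wait=time.sleep):
    """Call an integration endpoint while honoring throttling.

    `send` is any callable returning (status_code, headers, body).
    On a throttled response, back off for the advertised interval and
    retry, up to a predefined attempt limit.
    """
    for attempt in range(1, max_attempts + 1):
        status, headers, body = send()
        if status != 429:  # not throttled: hand the result back
            return status, body
        if attempt == max_attempts:
            raise RuntimeError("still throttled after %d attempts" % max_attempts)
        wait(float(headers.get("Retry-After", "1")))  # back off as instructed

# Simulated service that throttles the first two calls, then succeeds.
responses = iter([
    (429, {"Retry-After": "0"}, None),
    (429, {"Retry-After": "0"}, None),
    (200, {}, {"recordsProcessed": 10}),
])
status, body = call_with_throttle_respect(lambda: next(responses))
```

Injecting `send` and `wait` keeps the sketch testable and makes the backoff policy explicit rather than buried in a network call.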
Pattern directions
Let’s take a closer look at some common patterns for individual integrations and the pros and cons for each. The following overview is generalized; for more information, refer to the “Product-specific guidance” section in this chapter.
Asynchronous
▪ Mechanism: Data is exchanged unattended on a periodic schedule or as a trickle feed using messaging patterns.
▪ Trigger: Scheduled and user initiated. Can wait for off hours or idle time.
▪ Considerations: Pros: Loose coupling of systems makes the solution robust. Load balancing over time and resources. Can be very close to real time. Timely error handling.
▪ Use when: Most recommended integration patterns and technologies supported are asynchronous, although they can be near real time.

Push
▪ Mechanism: One system puts (pushes) data into another. Information flows from the originator to the receiver.
▪ Trigger: Originating system user or system event.
▪ Considerations: Pros: If technical expertise lies within the pushing system, the custom effort lies here. Good for reactive scenarios. Cons: Pushing system might not have visibility into availability and load and idle times in the receiving system.
▪ Use when: For reactive scenarios. Receiving system provides a turnkey API and organization’s developer skillset is with the originating system.
Pull
▪ Mechanism: Receiving system requests data from the originator, a subtle but significant difference from the Push pattern.
▪ Trigger: Receiving system request based on schedule.
▪ Considerations: Pros: If technical expertise lies within the pulling system, the custom effort lies here. Good for proactive scenarios. Clear visibility into availability and load and idle times in the receiving system. Cons: Originating system might not have the APIs needed to pull from.
▪ Use when: For proactive scenarios. We might not have the option to add triggers or events in the originating system. Originating system provides a turnkey API and organization’s developer skillset is with the receiving system.

One-way sync
▪ Mechanism: Data from one system is synchronized to another by one or more trigger events.
▪ Trigger: Data state, system, or user event.
▪ Considerations: Pros: Establishes a clear system of record. Simple conflict resolution. Cons: Sometimes the receiving system or non system of record doesn’t have built-in capability to make the data read only, which can confuse users.
▪ Use when: One system is owner or system of record and other systems consume that data. Consider scenarios in which the originating table is a subset of a similar table in the target.

Bidirectional sync
▪ Mechanism: Data from two or more systems are synchronized.
▪ Trigger: Data state, system, or user event.
▪ Considerations: Pros: Data is kept in sync across applications. Acquired divisions on multiple platforms can continue to use their existing systems. Users can use their system to make changes. Cons: Complex conflict resolution. Redundant data is replicated for each system. Synchronized data might be a subset of data in systems. The rest must be automatically given values or manually updated later.
▪ Use when: Scenarios in which there isn’t a clear system of record. Data from best-of-breed systems should be available in Dataverse for Power Apps and other tools and services.

Aggregation
▪ Mechanism: Data from a specialized system is integrated to another system on an aggregated level for processing or reporting.
▪ Trigger: Any.
▪ Considerations: Pros: Detailed data is kept in the system where it’s used. Aggregation can derive a small dataset from a large one, thus limiting traffic across platforms. Cons: Users often expect to be able to drill down to the detailed level. While this could be done with embedding, it does require additional integration complexity or users operating in two systems.
▪ Use when: Aggregates are needed for calculating or processing, for example, on-hand inventory by warehouse, revenue by invoice header, or revenue by customer by day, or operational data for posting in a finance system.
Embedding
▪ Mechanism: Information from one system is seamlessly integrated into the UI of another system.
▪ Trigger: User event.
▪ Considerations: Pros: Simple because the data remains in the originating system. Cons: Difficult to use the data for calculations for processing.
▪ Use when: A mix of information from first-party applications (for example, Bing, Power BI, and Exchange), third-party components, canvas apps, or other information embedded in the UI of an application.

Batching
▪ Mechanism: Batching is the practice of gathering and transporting a set of messages or records in a batch to limit chatter and overhead.
▪ Trigger: Any.
▪ Considerations: Pros: Great for use with messaging services and other asynchronous integration patterns. Fewer individual packages and less message traffic. Cons: Data freshness is lower. Load in the receiving system can be affected if business logic is executed on message arrival.
▪ Use when: Whenever it isn’t necessary to transmit individual records.
or dataset. Some messaging platforms allow for prioritizing messages
based on type, origin, destination, or payload.
(Figure 16-13 illustrates typical capabilities of a messaging platform: monitoring and alerts, orchestration and routing, and data logging.)
a duplicate or an out-of-order message. Services such as Azure
Service Bus support this capability and allow you to implement it
without the need for additional code.
▪ Service-level agreement (SLA) or end-to-end cycle latency
Consider whether there are minimum requirements for how fresh
the data in the receiving system must be. For example, orders
taken on an e-commerce site should reach the warehouse in less
than five minutes.
▪ Triggers and actions in the end-to-end process It’s also important to
look at the cross-system business process in the bigger picture:
▫ What happens before the integration steps?
▫ Is there potential risk of introducing semantic errors earlier in
the process?
▫ What event triggers the integration?
▫ Does the integration trigger action in the receiving system?
▫ What business process steps happen after the integration steps
are done?
▫ If an error is identified, logged, and notified, how is the problem
corrected? How often might errors occur, how long might it
take to correct them, and who is responsible for making the
corrections?
▪ Batching Batching messages or datasets enables less frequent
communication, or “chatter.” But it also typically makes messages
and payloads bigger.
▫ Do the requirements support a batched approach in which
datasets are consolidated into each message, or does the
integration require individual records or rows?
▫ If a batched approach is suitable, how many rows and records
are in each message?
▫ Are there service protection volume limits to consider?
▪ Topology and architecture Topology and architecture are
important considerations for the short and long term as your
organization grows internationally or as you migrate systems to
the cloud and need to address new challenges and considerations.
▫ Are there integration components that will span the cloud and
on-premises boundary? Does the messaging service handle that?
▫ Where in the world is the data located, and does the inherent
latency between geographic regions support the requirements?
▫ Does any data exist in countries or regions of the world that
have local privacy policies and regulations?
▫ Are there service protection limitations in the technology? Protocol
or platform dependencies in any of the integrated systems?
▫ Does the data exchanged require transformation, reformatting,
or remapping as part of the integration?
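The batching trade-off discussed above (fewer, bigger messages versus many individual records) can be sketched in a few lines. The record shape and batch size here are illustrative assumptions.

```python
def batch(records, batch_size):
    """Group individual records into fixed-size batches, trading message
    chatter for bigger payloads per message."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# Ten single-record messages become three batched messages (4 + 4 + 2).
orders = [{"id": n} for n in range(10)]
messages = list(batch(orders, batch_size=4))
```

Service protection volume limits would cap `batch_size` in practice, which is exactly the question the checklist above asks you to answer.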
The more integration touchpoints there are, the greater the potential
for errors. Therefore, error handling also needs to be planned in ac-
cordance with the integration scenario. For example, in the scenario
of a synchronous integration pattern, an error might require a com-
plete rollback of the entire process, whereas in an asynchronous data
integration scenario, it might be acceptable to fix a data issue just by
notifying the administrator. (See Figure 16-14.)

(Figure 16-14 illustrates a synchronous integration in which System 1 requests a quote price from the Enterprise Resource Planning system, and an error interrupts the returned price.)

Let’s now discuss error management for two key patterns of synchronous and asynchronous integration.
the application will resume after a few retries. Note that retry limits should be predefined to avoid a situation in which all resources are blocked. Once the retry limit has been crossed, the entire process will need to be rolled back, and appropriate error messages should be logged.
Transient errors such as network timeouts get fixed after a few retries.
However, persistent errors require intervention to be fixed.
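The transient-versus-persistent distinction can be sketched as a retry wrapper with a predefined limit. The exception type, retry count, and flaky step below are illustrative, not part of any Dynamics 365 SDK.

```python
class TransientError(Exception):
    """An error expected to clear on its own, such as a network timeout."""

def run_with_retries(step, max_retries=3):
    """Retry a step on transient errors only, up to a predefined limit so
    resources aren't blocked forever; persistent errors surface at once."""
    for attempt in range(1, max_retries + 1):
        try:
            return step()
        except TransientError:
            if attempt == max_retries:
                # Retry limit crossed: the caller should roll back and log.
                raise

attempts = []

def flaky_step():
    # Simulates a step that times out twice, then succeeds.
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("timeout")
    return "ok"

result = run_with_retries(flaky_step)
```

Catching only `TransientError` is the key design choice: a persistent error (bad credentials, schema mismatch) propagates immediately so an operator can intervene.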
The following are some of the most common error scenarios in any
integration between Dynamics 365 apps and other applications:
▪ System becomes unavailable
▪ Authorization and authentication errors
▪ Errors caused by platform limits
▪ Errors caused by service protection limits applied to ensure
service levels
▫ API limits (limits allowed based on the user license type)
▫ Throttling
▪ Process errors
▪ Runtime exceptions
Consider using tools such as Azure Monitor to collect, analyze, and act on telemetry, helping to maximize your applications’ availability.
▪ Ease of troubleshooting and resolving errors
▪ Minimize business impacts when errors occur
▪ Increase user satisfaction
Challenges in integration

The diverse applications used in organizations make up the IT backbone of those companies. Organizations use applications that might be a combination of on-premises, cloud, and third parties. These applications need to communicate with each other for different business needs. Integration challenges during a project implementation can cause delays or cost increases. A key focus of any IT department is to ensure that these integrations enable business productivity and not become a blocker to business growth.
Business
Each integration scenario has a direct impact on the application you’re
integrating with and on its users. Any downstream applications might also
be indirectly impacted. For integration to be implemented in a successful
manner, you should address the following during the Initiate stage:
▪ Identification of application owners and stakeholders
Application owners need to identify downstream applications that
might be impacted in an integration. However, a common scenar-
io is to bring in these owners after the planning is complete. This
often results in misaligned timelines and scope and in turn creates
project delays and poor quality. Integration design needs to take
every impacted application into consideration.
▪ Alignment between business owners Business stakeholders
have different needs for their organization’s various applications.
Unless there is a collective understanding of integration scenarios
and approaches, requirements might be mismatched among the
various owners. This in turn often results in delayed timelines, cost
overruns, and a lack of accountability. System integrators should
consider the following:
▫ Identify the key owners and bring them together to walk
through the scenarios.
▫ Differentiate between process, data, and UI integration to
simplify and streamline the integration scope.
▫ Outline the impact on business groups affected by the
integration.
▫ Highlight issues and risks in the absence of following a consis-
tent approach.
apps often depend completely on a well-defined cross-system
business process, and if the specific process details and broader
requirements are not properly defined, the integration project
might take much longer or go through several iterations of trial
and error. For more information, refer to Chapter 7, “Process-
focused solution.”
Technology
Most enterprises have legacy applications with traditional, on-premises
architecture. The move to cloud applications requires consideration of
the patterns they support and the best practices when planning for inte-
gration. A low-code/no-code pattern should now be at the forefront of
any integration architecture. Engaging in conversations about not just the
current setup but also about future planning for performance, extensi-
bility, and maintenance plays a key role in choosing the right technology.
When choosing the appropriate technology, consider the following:
▪ Does one size truly fit all? Many enterprises have an enterprise
architecture approach that might or might not be aligned with
the modern approaches for cloud applications. Prior to investing
in a specific approach, evaluate whether the existing architecture
aligns with cloud patterns. Sometimes, a generic approach is
taken; this can result in inefficiencies in integration, unscalable
architecture, and poor user experience and adoption. Therefore,
it’s crucial to consider design paradigms such as the following:
▫ Definition of the integration approach based on multiple
parameters
▫ Benefit of a proof of concept to determine the pros and cons of
one approach over another
▫ Synchronous versus asynchronous integration
▫ Process, UI, and data integration
▫ Single record or batch
▫ Frequency and direction of the integration
▫ Message reliability and speed
▫ Data volume
▫ Time expectations (some scenarios require a batch integration
to be completed during a specific time window)
▫ Error management and retries
▪ Will sensitive data be exposed? System integrators must un-
derstand IT and customer concerns around security, especially in
the context of integrating on-premises applications with Dynamics
365 apps. Categorizing security concerns as follows aids in identi-
fying who and what is required to help address them:
▫ Access control
▫ Data protection
▫ Compliance and regulatory requirements
▫ Transparency
For more information, refer to Chapter 12, “Security.”
▪ Storage costs and platform limits To ensure service quality
and availability, Dynamics 365 apps and Power Platform enforce
entitlement limits. These limits help protect service quality and
performance from interference by noisy behavior that can create
disruptions. System integrators must incorporate these limits as
part of the overall architecture. If these aren’t planned for, the
service will be throttled, resulting in failure and errors within the
integration layer. Storage costs are also often ignored. Although
this might not have an impact initially, over time, it can result
in increased storage costs and therefore should be planned for
appropriately.
▪ Connectivity Network latency can become a constraint, espe-
cially in heavy data-load scenarios. System integrators must ensure
that they design payloads accordingly for the efficient use of
network resources without compromising performance.
▪ Anti-patterns Implementation architects should follow the
best practices for Dynamics 365 apps. Sometimes these architects
don’t take cloud patterns sufficiently into account in integration
scenarios with on-premises applications, resulting in poor perfor-
mance. The behaviors leading to such situations are referred to as
anti-patterns. Consider the following common anti-patterns:
▫ Are there repeated connections between on-premises compo-
nents and Dynamics 365 apps that impact performance? If so,
consider sending data in batches.
▫ Is there latency between a customer’s on-premises applications
and a Dynamics 365 datacenter? If so, consider using a cloud
service such as Power Automate, Azure Functions, or Azure SQL
to reduce the latency impact.
▫ Is a lot of data being synchronized with Dynamics 365 apps
for reporting purposes? Keep in mind that the database under
Dynamics 365 apps isn’t intended to be a data warehouse for
all of the organization’s data assets. Consider using a dedicated
datastore for reporting purposes.
▪ Proprietary technology Customers might be using third-party
technology within their IT landscape that doesn’t provide interface
details or adequate support to enable integration easily. Often
such issues are identified either toward the end of design or during
the development stage. This causes delays in the project timeline,
burdening the customer with time constraints to mitigate such risks.
System integrators must highlight such dependencies in the plan-
ning stage to ensure adequate support or an alternate approach.
▪ Readiness With the increasing pace of transformations in the technology world, architects sometimes choose an approach due more to its familiarity than its applicability. Customers and system integrators must evaluate whether to request additional resources specialized in the specific technology who will be a better fit for their current and future needs.
Project governance
The initial stage of a project should include a defined project gover-
nance model. Integration between on-premises and Dynamics 365
apps can range from simple to complex, and the lack of well-defined
project governance areas results in gaps and issues in the smooth im-
plementation of a project. Following are common project governance
concerns specifically for the integration components:
▪ Has the impact of the integrations been identified for the end user,
process, and reporting? This might require planning for change
management activities, including communication and training.
▪ Making a solution performant should be at the forefront of any
design decisions made by the implementation team. This applies
equally to the integration layer and the application layer. Is perfor-
mance testing planned and does it cover integration components?
Performance testing is another activity that tends to be considered
optional. However, architects and project managers must consider
embedding this in their Dynamics 365 apps implementations. This
will help identify any performance bottlenecks prior to deploy-
ment for end users.
▪ Are development and test environments available for all appli-
cations for thorough system integration testing? Is a plan for
stub-based testing during the unit testing phase required?
Asking these questions during the initial stages of the project enables
both the implementation partner and customer to proactively identify
and plan for any dependencies and risks.
Choosing a design
Data entities
In Finance and Operations apps, a data entity encapsulates a business
concept, for example, a customer or sales order line, in a format that
makes development and integration easier. It’s a denormalized view in
which each row contains all the data from a main table and its related
tables instead of the complex view of the normalized data model
behind it.
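The denormalized view that a data entity provides can be illustrated with a small sketch: separate normalized tables are flattened into one row. The table and field names below are invented for illustration, not actual Finance and Operations entities.

```python
# Two normalized "tables": a main customer table and a related address
# table, keyed by customer id. A real schema would be far wider.
customers = {1: {"name": "Contoso"}}
addresses = {1: {"city": "Oslo", "country": "NO"}}

def customer_entity(customer_id):
    # One denormalized row carrying data from the main table and its
    # related table, hiding the normalized model behind it.
    return {
        "CustomerId": customer_id,
        "Name": customers[customer_id]["name"],
        "City": addresses[customer_id]["city"],
        "Country": addresses[customer_id]["country"],
    }

entity_row = customer_entity(1)
```

A consumer of `entity_row` never needs to know how many underlying tables were joined, which is the point of the entity abstraction.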
Said another way, data entities provide conceptual abstraction and encapsulation, a denormalized view of what’s in the underlying table schema to represent key data concepts and functionality. There are more than 2,000 out-of-the-box data entities. Data entities help encompass all the table-related information and make the integration, import, and export of data possible without the need to pay attention to the complex normalization or business logic behind the scenes. For more information, see the “Data entities overview.”

The out-of-the-box data entities are general purpose entities built to support a wide variety of features surrounding a business entity. For implementations in which a high-volume, low-latency interface is required, it’s recommended to build custom data entities with the necessary targeted and specific features. You can replicate an out-of-the-box data entity and remove any fields and logic around features not used by your organization to create a highly performant interface.

Finance and Operations apps integration patterns

The following is a list of integration patterns with pattern descriptions, pros and cons, and use cases.
Custom Services—SOAP, REST, and JSON
▪ Mechanism: A developer can create external web services by extending the application with X++. Endpoints are deployed for SOAP, REST, and JSON.
▪ Trigger: User actions and system events.
▪ Considerations: Pros: Easy for developers to add and expose service endpoints to use with integration platforms. The payload is low compared to other patterns. Cons: Requires ALM and SDLC for coding and deployment of extensions into Finance and Operations apps.
▪ Use when: Invoking an action or update, for example, invoicing a sales order or returning a specific value. We recommend using REST custom services in general because REST is optimized for the web and preferred for high-volume integrations, as there’s reduced overhead compared to other stacks such as OData.
Consuming web services
▪ Mechanism: A developer can consume external web services by adding a reference in X++.
▪ Trigger: Scheduled and user initiated. Can wait for off hours or idle time.
▪ Considerations: Pros: Easy for developers to add and expose service endpoints to use with integration platforms. The payload is low. Cons: Requires ALM and SDLC for coding and deployment of references into Finance and Operations apps. Risk of hardcoding connection strings and credentials in related code. Maintenance workloads and risks when the service is updated.
▪ Use when: Consuming services from other SaaS platforms or products, for example, commodity tickers and lookups of real-time values. The recommended pattern is to use Power Automate instead when possible.

Data Management Framework REST Package API (asynchronous, batched, cloud, or on-premises)
▪ Mechanism: The REST API helps integrate by using data packages.
▪ Trigger: Originating system users or system events.
▪ Considerations: Pros: On-premises and large-volume support. The only interface that supports change tracking. Cons: Supports only data packages.
▪ Use when: Large-volume integrations. Scheduling and transformations happen outside Finance and Operations apps.

Recurring Data Management integration REST API (asynchronous, cloud, or on-premises)
▪ Mechanism: With the Data Management REST API, you can schedule the recurring integration of files and packages. Supports SOAP and REST.
▪ Trigger: Receiving system requests based on the schedule.
▪ Considerations: Pros: On-premises and large-volume support. Supports files and packages. Supports recurrence scheduled in Finance and Operations apps and transformations (XSLT) if the file is in XML. Cons: None.
▪ Use when: Large-volume integrations of files and packages. Scheduling and transformations happen inside Finance and Operations apps.

Electronic Reporting
▪ Mechanism: A tool that configures formats for incoming and outgoing electronic documents in accordance with the legal requirements of countries or regions.
▪ Trigger: Data state, system, or user events.
▪ Considerations: Pros: Data extracts and imports are configurable in Finance and Operations apps. It supports several local government formats out of the box. It can be scheduled for recurrence. Cons: Comprehensive configuration when used for messaging purposes.
▪ Use when: Electronic reporting to regulatory authorities and similar entities.

Excel and Office integration
▪ Mechanism: Microsoft Office integration capabilities enable user productivity.
▪ Trigger: Data state, system, or user events.
▪ Considerations: Pros: Out-of-the-box integration (export and edit) on almost every screen in the product. Cons: Performance decreases with the size of the dataset.
▪ Use when: Extracts for ad hoc reporting or calculations. Fast editing of column values and entry of data from manual sources.
Business events
▪ Mechanism: Business events provide a mechanism that lets external systems, Power Automate, and Azure messaging services receive notifications from Finance and Operations apps.
▪ Trigger: User or system events.
▪ Considerations: Pros: Provides events that can be captured by Power Automate, Logic Apps, and Azure Event Grid. Cons: Extensions are needed to add custom events.
▪ Use when: To integrate with Azure Event Grid, Power Automate, or Logic Apps. To notify of events inherently driven by a single data entity, for example, an update of a document or a pass or fail of a quality order.

Embedded Power Apps (UI)
▪ Mechanism: Finance and Operations apps support integration with Power Apps. Canvas apps can be embedded into the Finance and Operations apps UI to augment the product’s functionality seamlessly with Dataverse.
▪ Trigger: Users.
▪ Considerations: Pros: Seamlessly integrates information from Dataverse without integrating the backend. Opens opportunities for low-code/no-code options directly into the Finance and Operations apps UI without the need for updates and compatibility. Cons: Power Apps and related artifacts aren’t deployed with the build and must be configured in the environment directly. Separate ALM stories for Power Apps.
▪ Use when: Whenever the required data exists in Dataverse and is loosely coupled with Finance and Operations apps. Using an app embedded in the UI provides a seamless experience for users.

Embedded Power BI (UI)
▪ Mechanism: Seamlessly integrates Power BI reports, dashboards, and visuals with information from Dataverse or any other source, without integrating the backend.
▪ Trigger: Users.
▪ Considerations: Pros: By using graphics and visuals supported by Power BI to present data from any source, workspaces can provide highly visual and interactive experiences for users without leaving the Finance and Operations apps UI.
▪ Use when: Whenever reports, dashboards, or visuals exist in Power BI.

IFrame (UI)
▪ Mechanism: The Website Host control enables developers to embed third-party apps directly into Finance and Operations apps inside an IFrame.
▪ Trigger: Users.
▪ Considerations: Pros: Seamlessly integrates UI from other systems or apps without integrating the backend. Information goes directly into the Finance and Operations apps UI without the need for updates or compatibility. Cons: External systems might have separate lifecycles, and updates to those might affect user experience with little notice.
▪ Use when: Information from loosely coupled systems can be displayed within the Finance and Operations apps UI. The experience is enhanced if the external system supports single sign-on (SSO) and deep linking.
Priority-based throttling
Service protection is important for ensuring system responsiveness,
availability, and performance. In Finance and Operations apps, service
protection is enforced by throttling. Throttling affects OData and
custom service pattern integrations only. Administrators can configure
priorities for external services (registered applications) directly in the
application so that lower priority integrations are throttled before
high-priority integrations.
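The priority-based throttling idea can be sketched as admitting requests by priority when capacity is exhausted. The application names, priority numbers, and capacity value are illustrative assumptions, not actual Finance and Operations configuration.

```python
# Lower number = higher priority, mirroring the idea that administrators
# assign priorities to registered external applications.
priorities = {"warehouse-sync": 1, "ad-hoc-report": 3}

def admit(pending, capacity):
    """Admit up to `capacity` requests, highest priority first; the rest
    are throttled until capacity frees up."""
    ranked = sorted(pending, key=lambda app: priorities[app])
    return ranked[:capacity], ranked[capacity:]

# With capacity for one call, the low-priority report is throttled first.
admitted, throttled = admit(["ad-hoc-report", "warehouse-sync"], capacity=1)
```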
Customer Engagement
In this section we talk about frameworks and platforms to use when
integrating from Customer Engagement.
IFrames
IFrame is a popular approach commonly used for hosting external
URL-based applications. Consider using the Restrict cross-frame script-
ing options to ensure security.
Canvas apps
Canvas apps is a cloud service that enables citizen developers to easily
build business apps without the need to write any code. These apps can use
connectors from other cloud services and be embedded in model-
driven apps to present data from other applications on the user interface.
HTML web resources
A precursor to Power Apps component framework, HTML web re-
sources were commonly used to render data in a visual form, providing
designers with more flexibility. They can be used to pull data into
external applications using the available endpoints.
Virtual tables
Virtual tables pull data on demand from external data sources. This approach is implemented as tables within the Dataverse layer but doesn’t replicate the data, because the data is pulled in real time, on demand. For more information, read about the limitations of virtual tables.
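The on-demand, no-replication behavior of a virtual table can be sketched as a table object whose reads always go to the external source. The dict below stands in for a real external system, and the names are invented.

```python
# The external system's data; nothing from it is copied locally.
external_source = {42: {"name": "Pump", "on_hand": 7}}
fetch_count = 0

class VirtualTable:
    """Looks like a local table, but every row access fetches from the
    external source in real time instead of reading a replicated copy."""
    def __getitem__(self, key):
        global fetch_count
        fetch_count += 1                   # every read goes to the source
        return dict(external_source[key])  # returned copy, never stored

products = VirtualTable()
row = products[42]
```

Because reads hit the source each time, latency and source availability become part of every query, which is why the documented limitations matter.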
Webhooks
Commonly used for near real-time integration scenarios, webhooks
can be invoked to call an external application upon the trigger of a
server event. When choosing between the webhooks model and the
Azure Service Bus integration, consider the following:
▪ Azure Service Bus works for high-scale processing and provides a
full queueing mechanism if Dataverse pushes many events.
▪ Webhooks enable synchronous and asynchronous steps, whereas
Azure Service Bus enables only asynchronous steps.
▪ Both webhooks and Azure Service Bus can be invoked from Power
Automate or plug-ins.
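To illustrate the near real-time webhook idea, here is a minimal, self-contained receiver sketch: a server event in the originating system invokes an HTTP endpoint, which acknowledges quickly and defers heavy work. The endpoint path, payload shape, and 202 acknowledgment are illustrative assumptions, not a Dataverse contract.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # events delivered by the originating system

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the notification body and record it for later processing.
        length = int(self.headers.get("Content-Length", 0))
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(202)  # accepted; process asynchronously
        self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), WebhookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the server event invoking the registered webhook endpoint.
body = json.dumps({"entity": "account", "operation": "update"}).encode()
resp = urlopen(Request("http://127.0.0.1:%d/" % server.server_address[1],
                       data=body,
                       headers={"Content-Type": "application/json"}))
server.shutdown()
```

Acknowledging fast and queueing the real work is what keeps a webhook receiver from becoming the bottleneck that Azure Service Bus would otherwise absorb.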
Azure Functions
Azure Functions uses serverless architecture and can be used to extend
business logic, including calling external applications. It runs in Azure
and operates at scale. Azure Functions can be called through Power
Automate and Azure Logic Apps. For more information, read about
Azure Functions and Using Azure Functions in Power Apps.
Virtual tables
The virtual table option enables you to connect Dataverse to Finance
and Operations apps entities as virtual tables that offer the same full
CRUD (create, retrieve [or read], update, delete) capabilities as the
entity endpoint in the app. A benefit is that we can access the data in
Finance and Operations apps in a secure and consistent way that looks
and behaves the same as any other table or construct in Dataverse. We
can also use Power Automate to connect to almost anything.
Another benefit is that the data doesn’t have to live in both the
transactional database underneath Finance and Operations apps and
Dataverse but is still seamlessly available as tables and rows in Dataverse.

Dual-write

The second option is dual-write integration, shown in Figure 16-16, which connects Dynamics 365 Finance and Operations apps with model-driven apps in Dynamics 365. Dual-write also provides synchronous, real-time integration between Finance and Operations apps and applications in Dataverse. The benefit of using the dual-write approach is that the two applications share the same dataset, so changes in one system are seamlessly and instantly reflected in the other one.
Another benefit is that in some cases the functionality is out of the
box and configurable with minimal effort, but it’s also extendable by a
developer. Additionally, the full set of capabilities of Power Platform is
available for the shared datasets.
For more information about the mapping concepts and setup, read
about dual-write.
The supported scenarios are as follows:
▪ Prospect to cash integration The Prospect to cash integration
is a process and data integration between Dynamics 365 Sales
and Finance and Operations apps. The process integration enables
users to perform sales and marketing activities in Sales, and
other users handle order fulfillment and billing in Finance and
Operations apps. For more information, read Prospect to cash.
▪ Field Service integration Field Service integration adds
integration points between Dynamics 365 Field Service and
Finance and Operations apps to enable process integration
for work orders and projects. For more information, read the
Integration with Microsoft Dynamics 365 Field Service overview.
▪ Project Service Automation Similarly, the configurable integrations
provide integration between Finance and Operations apps
and Dynamics 365 Project Service Automation by synchronizing
project essentials such as contracts, tasks, milestones, time, fees,
expense forecasts, and actuals. For more information, see the
Project Service Automation overview. Note that Dynamics 365
configurable integrations (also known as Data Integrator) are a
viable and supported solution; however, Microsoft's current
investment focuses on the dual-write integration technology,
which should be preferred where applicable.
All of the cross-app integration options are viable, but Microsoft's
investment is focused primarily on dual-write and virtual tables.
These are the preferred options.
Conclusion
Implementing Dynamics 365 apps is often a building block in an
organization's larger solution landscape. In that case, the organization
can benefit from automation to gain streamlined and effective
cross-system processes and avoid manual errors.

References
REST vs CRUD: Explaining REST & CRUD Operations
Messaging Integration Patterns
Open Data Protocol (OData)
Consume external web services
Checklist

Define business goals
▪ Document and define goals and expected benefits of integrations
being implemented in a business-centric way.
▪ Align the planned integration's purpose with short- and long-term
organization goals.
▪ Ensure the overview of the integration architecture, systems, and
integration points is clear and understandable.
▪ Ensure that stakeholders have a shared understanding of the purpose
and scope of the integrations that are being implemented.

Choose a platform
▪ Ensure the organization understands the concept of cloud versus
on-premises platforms and the boundary between them.
▪ Plan to use either an integration middleware or messaging service.
▪ Ensure the integration architecture, platform, or middleware supports
the expectations for monitoring, audit, notifications, and alerts.
▪ Ensure the integration architecture supports the expected level of
security, availability, and disaster recovery.
▪ Ensure all components of the integration architecture support ALM
and version control.

Choose a design
▪ Align the designs of each integration with the overall
integration architecture.
▪ Clearly state the options and benefits of each of the following:
UI, data, process integration, and Dataverse.

Choose a pattern
▪ Design integrations to favor robust, asynchronous
messaging-based patterns.
▪ Align patterns used for each integration with expectations for
volumes, frequency, and service protection limitations.
▪ Set realistic estimates of the operating costs for services, platforms,
and storage involved and be aware of how scaling affects them in
the future.

Project governance
▪ Plan each integration for user and performance testing under
realistic loads, as well as the end-to-end process leading up to the
integration, across the system boundary, and after the point
of integration.
▪ Plan for testing the end-to-end process patterns used for each
integration in line with recommendations for volumes, frequency,
and service protection limitations.
▪ Have change management activities related to integrations that
reflect and support overall business goals.
▪ Complete the impact analysis on upstream and downstream processes.
Case study
Public sector infrastructure
organization learns how to
choose the right solution
for integration
A public sector infrastructure service organization was implementing
the Dynamics 365 Customer Service app and required integration with
a public-facing website, several on-premises apps, and Office 365 ser-
vices such as SharePoint Online. One of the implementation objectives
was to gather insights from the data that was scattered across different
business applications.
The organization was working with an IT vendor that had several years
of experience building complex integrations using technologies such
as IBM MQ and Microsoft BizTalk.
The organization chose Azure API Management to abstract their APIs
and implement a secure integration layer.
As the team started building and testing the initial components, they
identified some challenges due to the architecture:
▪ They experienced slow performance with batch-integration sce-
narios because they called the services as they would have in a
point-to-point integration.
▪ They couldn’t use standard functionalities that would have been
available with out-of-the-box approaches such as SharePoint
Online integration with Power Platform.
▪ For an aggregated view, they decided to replicate all data into
the Dynamics 365 Customer Service app, which led to additional
storage costs.
▪ They encountered throttling and API limits issues, which prevented
successful completion of the integration.
▪ They sent that data as batch data to the Customer Service app to
reduce the number of individual connections.
▪ They considered API and throttling limits and built retries as part
of the design.
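A retry built into the design typically looks like exponential backoff with jitter. The sketch below is generic rather than Dataverse-specific (the Throttled exception is a stand-in); a real Dataverse client should honor the Retry-After value returned with a 429 response instead of only a computed delay:

```python
import random
import time

class Throttled(Exception):
    """Stand-in for a service-protection (HTTP 429) throttling signal."""

def call_with_retry(operation, max_attempts=4, base_delay_s=0.5,
                    sleep=time.sleep):
    """Retry an integration call with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Throttled:
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface the error
            # Back off exponentially; jitter avoids synchronized retries
            # from many clients hitting the limit at the same moment.
            sleep(base_delay_s * (2 ** attempt) * (1 + random.random()))
```

Passing `sleep` as a parameter keeps the backoff testable; in production the default `time.sleep` is used.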
17
Guide
A performing solution, beyond infrastructure
Patience is such a waste of time.
Introduction
When a new Dynamics 365 implementation is
delivered, users typically expect an improvement in
system performance.
Good performance is often assumed as a given, a default experience. The
reality is that although Dynamics 365 products are scalable and powerful,
various factors are involved in achieving a high-performance solution.
These include defining and agreeing to performance metrics, testing
throughout the phases of a project, and taking design and build
considerations into account, particularly for customizations and
integrations.

The following are performance factors to consider when
designing solutions:
• Solution performance is critical for user adoption, customer
experience, and project success and necessary to enable businesses
to achieve their goals.
• Clear goals and realistic expectations are vital to developing a
solution that performs well.
• Scalable solutions begin with the correct use of the right software.
• Customizations increase the risk of performance issues, but these
risks can be mitigated with the right mindset and planning.
• Performance testing needs to be realistic to be meaningful.
• Performance issues are complex to resolve and therefore are better
avoided than fixed.
• Ensuring performance is an iterative process that involves
incremental improvements throughout the solution lifecycle.

In this chapter, we explore various aspects of performance and how
they should be addressed throughout the stages of implementation. We
discuss why performance is directly related to the success of a project
and why it must be prioritized early. We also discuss how to align
expectations with business users to enable meaningful performance
discussions so that challenges are identified and translated into
design requirements.

Finally, we cover performance testing strategies to protect
organizations from common performance risks and anti-patterns as well
as how to approach resolving any issues that do occur.
Why performance matters
Performance can pose challenges as projects approach go live. These
challenges are typically due to a failure to properly prioritize perfor-
mance in earlier phases. Let’s back up a bit and consider first why good
performance is important.
People expect superior response times and a smooth experience from the
organizations they choose to provide products and services. If customers
don’t get this, they tend to look elsewhere. Let’s go back to our street
design analogy: because of poor street design, people might decide not to
live in or visit that city. Residents might even elect different leadership because
of their poor experience. The takeaway is that system performance affects
people in numerous ways and can have serious consequences.
Often, though, customers interact with systems indirectly through an
agent within an organization, for example, when a customer calls a call
center to discuss a faulty product or delayed order shipment. These
types of interactions can be challenging for a business to resolve in
a positive way and are only made more challenging if an agent can-
not access information in a timely manner. This illustrates again that
system performance is key to ensure a positive customer experience,
regardless of whether the customer interacts with the system directly
or indirectly.
System success
As businesses evolve and optimize, the ambition remains to achieve
more with less. The need to sell more products or service more cus-
tomers is always accompanied by the need to reduce the expenditure
of money, effort, people, and time, at all levels of the organization.
This constant need to achieve more with less creates pressure on em-
ployees to work in the most efficient way possible. Time spent waiting
for a system to perform an action is wasted time, and employees who
rely on these systems to do their jobs quickly realize this.
User adoption
User adoption is a critical factor in the success of any software project.
Any business case (and projected return on investment) depends on
the system being used as intended. Poor performance directly drives
user dissatisfaction and can make user adoption incredibly challenging.
Users are keen to adopt systems that increase their productivity, which
essentially means minimizing wasted time. Business users achieve their
goals when solution performance is optimal and as expected. A poorly
performing system wastes users’ time and therefore reduces productivity.
When users feel that a system wastes their time, they’ll likely find
more efficient ways of working on their own. These
workarounds ultimately serve the interests of specific users rather than
the business and often result in data stored off-system (for example,
in spreadsheets). This might eventually lead to a situation in which the
system no longer serves its purpose, doesn’t deliver a return on invest-
ment, and ultimately fails.
System reputation
Even before go live, performance can help or hinder user adoption.
During the development phase, the application typically is presented
to a set of key users in different areas of the business to collect feed-
back. These users then talk to colleagues about the implementation.
In this way, the reputation of the application spreads throughout the
business long before most users touch the system. Keep in mind that
performance impressions tend to spread quickly. For example, if a
demonstration goes well, a wave of excitement might flow throughout
the company. This positivity can help increase user adoption because
of the anticipated improvement in productivity.
Design solutions that accommodate users who have a variety of hardware,
increased network latency, and a range of network quality.
We look at more examples later in the chapter, but for now it’s import-
ant to be clear that most performance issues are best solved by correct
implementation decisions and not by adding hardware. Moreover, it’s
crucial to acknowledge that performance is not guaranteed simply
because the software is running in the cloud. It’s still the responsibility
of the project team to deliver a well-performing solution.
Fig. 17-1: Performance responsibilities

Customer
▪ Describe clearly the business goals and predict transactional volumes
▪ Complete the usage profile based on the licenses purchased
▪ Whenever applicable, optimize the business operations to maximize
the efficiency of the solution
▪ Plan the performance testing and allocate the relevant resources

Partner
▪ Understand and document the customer processes
▪ Capture the performance non-functional requirements
▪ Build a solution that is optimized/fit-for-purpose and that doesn’t
disrupt the scalability of the Dynamics 365 product
▪ Provide the expertise to educate the customer on performance testing
▪ Support the customer for the performance testing execution

Microsoft
▪ Ensure the core platform components have sufficient resources to
deliver acceptable performance, when used reasonably
▪ Deliver solutions to performance issues within Microsoft’s scope
(e.g. index, product hotfix, infrastructure)
▪ Support investigations into performance issues outside of Microsoft’s
scope (e.g. customization fixes)
Prioritize performance
Given that performance is so important to users, customers, and ulti-
mately the success of the overall system, let’s look at how performance
relates to project delivery.
Anything added to the core Dynamics 365 app has the potential to
negatively impact performance, so the impact needs to be understood
and managed to ensure performance remains acceptable.
Data strategy
Is the volume of data stored likely to cause performance issues for users?
Integration strategy
▪ Are real-time integrations feasible given the performance
expectations of the users?
▪ Can overnight batch integrations complete within a given timeframe?
Data modeling
Do we need to denormalize for performance reasons?
Security modeling
▪ Will our security model work at scale?
▪ Are there bottlenecks?
Environment strategy
▪ Is a performance test environment available?
▪ Is our performance test environment representative of
production?
▪ Have we budgeted for performance testing?
▪ Is the solution designed keeping future expansions in mind?
These aren’t the types of questions the delivery team should be asking
when a performance issue surfaces, especially when approaching the
go-live deadline. A successful project will have answers to these ques-
tions early on. At the least, the delivery team should identify risks and
seek possible resolutions in the early stages of delivery. This might lead
to proof-of-concept work to test performance.
Let’s put this into the context of our street design scenario and consid-
er the questions that need to be asked and answered to maximize the
design. For instance, how many residents currently live in the city? How
much traffic can we expect to be on the roads? What’s the projected
population growth and how long will the design support it? Will traffic
lights be installed? If so, how many and how will that affect traffic?
What’s the speed limit and are there risks associated with that limit?
Each of these questions helps determine the best street design before
we even start the project.
Resources
Considering performance from the early stages also ensures that the
correct expectations are set in terms of time, money, effort, and peo-
ple. For example, for performance testing, a dedicated performance
test environment is needed as well as the people to do the testing.
Business stakeholders might need additional time with the delivery
team to understand, agree with, and document performance require-
ments. It might even be necessary to allocate more development time
and resources for code optimization.
across the development teams during all phases of the project.

Fig. 17-2: Identify and fix performance issues in the early phases of
the project; do not wait to fix issues in the later phases.
User confidence
Performance is always linked to perception. It’s important to be aware
of user feedback during the implementation of the project because
it can affect users’ engagement during testing and ultimately help or
hinder user adoption at launch. Projects that prioritize performance
early on tend to present better-performing solutions during imple-
mentation. This early planning helps reassure users that they’ll receive
a solution that enables them to become more productive, and this
leads to better engagement and adoption overall.
Many implementations don’t prioritize performance. There are several
reasons for this. In some cases, people lack experience with the specific
products. Performance issues can be complex, and it’s often difficult to
foresee them without extensive knowledge of the platforms and asso-
ciated patterns and anti-patterns gained over years of experience.
Establish requirements
Acceptable performance is the goal of every project, but the definition
of “acceptable” is often vague, if defined at all. To successfully deliver
acceptable performance, we need to be clear on what that means and
then be able to track progress against our performance goals.
Why do we need performance
requirements?
If you ask a user for their performance requirements, you’ll likely get a
response that the system needs to be fast and not much else. However,
“fast” is an ambiguous term; people have different expectations of
what that looks like, which makes understanding whether you’ve
achieved acceptable performance nearly impossible.
This approach is the same for other system requirements. It’s vital that
performance be considered like other requirements gathered in the
initial stages of implementation.
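One way to make "fast" measurable is to agree on percentile targets, for example "the account form loads in under 2 seconds for 95 percent of attempts" (the numbers here are illustrative, not recommendations). A minimal nearest-rank percentile helper for evaluating such a target against measured durations:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of measured durations.

    A target such as "p95 form load <= 2.0 s" is unambiguous, unlike
    "the system must be fast".
    """
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest-rank definition: the smallest value with at least p% of
    # samples at or below it.
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]
```

A requirement phrased this way can be checked automatically after every performance test run, turning the goal into a pass/fail gate.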
Fig. 17-3: Examples of performance-critical user needs:
“I need to load this record quickly for a customer on the phone;
otherwise, I might lose the sale.”
“I need to be able to load the products into the delivery truck to meet
my shipment schedule.”
“This overnight process needs to happen within the time window;
otherwise, the users won’t be able to work.”

Decisions made without input from users about what is and isn’t
acceptable risk rejection in the later stages of the project, which is
what we want to avoid.

This doesn’t mean users are solely responsible for deciding
performance requirements. It’s important to be clear that performance
comes with an associated cost, and business stakeholders need to
assess requests coming from users within the context of the wider
project. Aggressive performance requirements might be achievable,
but they require additional development, testing effort, and people.

With this in mind, it’s important to understand the underlying need
for each performance requirement and for business stakeholders to
be prepared to consider a compromise where it makes sense to do so.
Performance for the sake of performance is expensive and unnecessary.
Communicate this to users and take a pragmatic approach to focus on
what specific performance requirements are important.

Identify performance-critical tasks

Users will use the system under development to perform their own
set of tasks. Some of these tasks will be more performance critical
than others. For example, consider a call center agent attempting to
access customer data at the start of a telephone call with a customer.
It’s critical that the agent is able to access the data, and every second
counts. Time spent waiting for a form to load is a waste of time for the
agent and the customer.

Spend time with users to understand the activities for which
performance plays a critical role. Agree on these areas with project
stakeholders and then focus performance-related work on these
activities to maximize the value of the efforts. Consider performance
testing for each system area, including the following:
▪ Functional processes
▪ Background operations (for example, batch and workflows)
▪ Integrations
▪ Data migration
▪ Reporting and analytics
Anticipate growth
When discussing performance requirements, consider the roadmap of
the business as well as design requirements for an increase in demand
on the system. Data and user volumes play an important part in how a
system performs, so it’s important to anticipate any growth expected
in the near future and design for that rather than focus on the current
requirements. Along with future growth, also plan for seasonality in the
system load, for example, during the end of the year.
Document requirements
It’s crucial that performance requirements be documented (the same
as for other system requirements). Documenting requirements pro-
vides visibility to all parties about the expectations of the software and
provides clear goals for the implementation team. Additionally, any
performance risks identified during discussions with users should be
documented in the project risk register to ensure they’re tracked and
mitigated as much as possible.
Assess feasibility
The implementation team should review performance requirements
along with other system requirements as early as possible to assess the
impact to the organization. Some performance requirements might
have a significant impact on design decisions, so you want to be aware
of these as soon as possible.
aligned with the intentions of the software or are due to poor solution
design. Poor decision making during the design phases of the project
can open the door to performance challenges further into the project.
Looking at this from the perspective of our street design discussion, there
are different ways to tackle a situation like traffic. We could design our
roads to be 10 lanes; that would handle a lot of traffic. But this creates
other complications. Do we have land that will support that infrastructure?
What are the relative costs associated with that design? How easy will it
be for residents to cross the street? How long will a light need to be red
for them to cross it? Will the street need additional lighting?
For example, the xRM concept, which became popular around the
release of Dynamics CRM 2011, spawned many systems to manage any
type of relationship in the CRM product. The ease of development for
basic data access, including a built-in user interface and security model,
combined with its rich extensibility features, made it a popular choice
to begin system development. Although this proved successful in
many situations, many projects ran into trouble because they used the
product in a way unsuited to its strengths. Dynamics 365 products are
designed for specific use within a business context. They’re designed
and optimized for users to access master and transactional business
data, not for keeping high-volume historical transactions.
enhancements compared to older approaches.
Many areas of this book discuss what to consider when making system
design decisions. Often the consequence of making the wrong decisions
during these stages is performance issues. It’s important that the use
of Dynamics 365 products is in line with their intended purpose.
Data migrations and integrations
Many performance issues related to data migration and integration
happen due to unreasonable expectations of the platform, specifically
around data volumes. A good data strategy often works around this by
ensuring that Dynamics 365 products contain only the data required
for users to complete their day-to-day operations.
Prepare for implementation
In this section, we discuss the design decisions the team should make
before proceeding with the implementation.
Environment planning
Chapter 9, “Environment strategy,” discusses environment planning in
detail. From a performance perspective, consider the following as the
team moves towards implementation:
▪ Performance testing typically occupies an environment for a signifi-
cant amount of time, so a separate environment is usually advisable.
▪ Latency adds overhead to every operation. Minimize overhead as
much as possible by ensuring that applications are located as close
to each other as possible.
▪ Choose a performance test environment that’s representative of
the production environment whenever possible. For example, for
Finance and Operations apps, the implementation team should
recommend the appropriate environment tier based on expected
load and volume.
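The latency point can be made tangible with back-of-envelope arithmetic; the numbers below are purely illustrative:

```python
def operation_time_ms(round_trips: int, latency_ms: float,
                      server_work_ms: float) -> float:
    """Network latency is paid on every round trip, so the distance
    between components multiplies the user's wait, it doesn't just
    add to it once."""
    return round_trips * latency_ms + server_work_ms

# A form load making 20 calls: co-located (5 ms per trip) versus
# cross-region (120 ms per trip), with the same 300 ms of server work.
colocated = operation_time_ms(20, 5, 300)   # 400 ms total
remote = operation_time_ms(20, 120, 300)    # 2700 ms total
```

The same server-side work takes almost seven times longer for the user in the cross-region case, which is why environments and integrated applications should be located as close to each other as possible.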
Customization and
performance
The extensibility of Dynamics 365 applications provides the powerful
ability to tailor software to meet individual business needs, but it
also introduces risk: performance issues commonly involve product
customizations and integrations, particularly those involving code. This
section discusses these issues and provides guidance on how to avoid
common problems.
Bad design leads to bad code
Performance issues related to poorly performing code often aren’t
due to the code itself but rather to a flaw in the design decisions. This
can result in situations in which developers are unable to write code
to perform an operation efficiently, even when carefully following
best practices, because of the constraints placed on the solution by its
design.
Retrofitted performance
Sometimes developers focus on getting code working correctly before
working quickly. Although this can be a reasonable approach, the pres-
sure of deadlines often means that the optimization planned for a later
date doesn’t get done, leading to technical debt and a need for re-
work. A better approach is to be clear on any performance constraints
from the beginning and then implement the solution accordingly.
Connected components
Performance issues can occur even when developers write fast,
efficient code. This is because in larger projects involving several devel-
opers, the isolated nature of development might result in performance
issues not being discovered until components are put together as part
of a larger process. This risk should be identified during design activi-
ties, and it can be mitigated using a code review process during which
team members can consider the impact of the implemented pieces of
code running together.
Requirement evolution
A change in functional requirements can be another reason for cus-
tomization-related performance problems. A developer decides how
to implement code based on the given requirement at a point in time.
A change in requirements might invalidate some or all of these
decisions and cause the implementation to become unsuitable.

Some of the content in this section discusses technical concepts. For
those less interested in the technical aspects, note that the details
aren’t essential to understand. Also note that free tools are available
for less technical people who want to learn about these concepts,
including the Customer Engagement apps Solution Checker and the
Finance and Operations apps Customization Analysis Report (CAR).

Common mistakes

Code can become suboptimal from a performance standpoint for
a number of reasons, and specific guidance is beyond the scope of
this chapter. However, the following factors are often involved in
performance challenges, so we recommend that you understand and
avoid them during implementation.
Chatty code
One of the most common causes of code-related performance issues is
excessive round trips. Whether between the client and server or between
the application and database, an excessive number of requests for an
operation can really slow it down. Every request carries latency and pro-
cessing time overhead, and it’s important to keep these to a minimum.
Round trip issues are sometimes due to poor control of variables; a process
might be so complex that it’s easier for a developer to retrieve the same
data multiple times than structure the code to minimize the calls. We
recommend that developers avoid this practice and that the implemen-
tation team identifies such problems during the code review process.
Round trip issues are also sometimes due to queries being executed
within a loop, or worse, a nested loop. The code structure works for
the developer, but the parameters of the loop can add a significant
number of calls into the process, which in turn results in a significant
performance issue.

Fig. 17-4: How nested loops multiply round trips (illustrative timings)
Loop iterations    Nested iterations    Total calls    Total time
10                 10                   100            5 seconds
50                 50                   2,500          2 minutes, 5 seconds

It’s common for developers to write logic that iterates across
collections of data that are dynamic in size, due to the data-driven
nature of code within Dynamics 365 implementations. It’s also common
for developers to work with low volumes of data within development
environments. However, that means that these types of issues often
aren’t identified until the latter project stages, which include
meaningful data volumes. These issues can be avoided by prioritizing
minimal data retrieval during code design, retrieving each piece of
data only once, and identifying any deviations from these steps as part
of a code review process.
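The difference between querying inside a loop and retrieving data once can be illustrated with stand-in stubs; this is a model of round trips, not the actual Dataverse API:

```python
def fetch_price(product_id, stats):
    """Stub standing in for one server round trip."""
    stats["round_trips"] += 1
    return 10.0

def fetch_prices_bulk(product_ids, stats):
    """Stub standing in for a single batched query."""
    stats["round_trips"] += 1
    return {p: 10.0 for p in product_ids}

def order_total_chatty(lines, stats):
    # Anti-pattern: one query per line item, executed inside the loop.
    return sum(fetch_price(line, stats) for line in lines)

def order_total_batched(lines, stats):
    # Retrieve each piece of data once, outside the loop.
    prices = fetch_prices_bulk(set(lines), stats)
    return sum(prices[line] for line in lines)
```

Both functions compute the same total, but for 100 order lines the chatty version pays for 100 round trips while the batched version pays for one, which is exactly the kind of deviation a code review should catch.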
The same applies for rows. Be mindful of the amount of data being
retrieved because it takes time to query, download, and process the
volume of records even before the code is able to interact with them.
Unintended execution
Performance issues sometimes happen because customizations are
executed accidentally or multiple times in error, for example, duplicate
plug-in steps or duplicate method calls. Project teams should ensure
that developers are aware of exactly how their customizations are trig-
gered and mindful of circumstances that might inadvertently trigger
their code. The recurrence of background processes or batch jobs should
be set according to business needs.
Performance testing
approach
The project team and users need to be confident that requirements
identified earlier in the project are achieved when the system is
implemented. A performance testing strategy ensures that system
performance is measurable and provides a clear indication of whether
performance is acceptable.
Realistic
Being realistic means understanding the quantity and personas of
users using the system at a given time and defining day-in-the-life
activity profiles for the personas to understand the actions they’ll
perform. If the performance test incorrectly assumes that all the users
will run the most complex processes in the system concurrently, the
projected demand placed on the system will be far higher than reality.
Strive to model the users in an accurate way to get the most meaning-
ful results from the test.
functionality and testing as a user would use the system. Functional
testing is often achieved by invoking specific logic and measuring the
outcome, for example, an API call, whereas user testing is testing that
models the actual behavior of a user. Testers often create a suite of
functional tests pointed at a web service, execute them with a large
number of concurrent users, and call that a performance test; but this
provides misleading results. Typically, this approach pushes certain
functionality much more than the usage expected once the features
are live (creating false performance issues), and other functionality can
be bypassed entirely (for example, the user interface). The result is that
genuine performance issues can slip through testing unidentified.
Keep in mind that user interaction with the application is fairly slow.
For a given process, users don’t typically click buttons as quickly as they
can; they take time to think between actions, and this can be incorpo-
rated into a performance test as a think-time variable. This can vary
from user to user, but an average figure is sufficient to model behavior.
The key point here is to develop a performance test that represents a
number of users working concurrently and place a realistic amount of
demand on the system.
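Think time can be folded into a simple back-of-the-envelope load model. The figures below (500 users, 1-second responses, 14 seconds average think time) are assumed for illustration, not product guidance.

```python
# With think time between actions, N concurrent users generate far fewer
# requests per second than N users clicking as fast as they can.

def requests_per_second(users, response_time_s, think_time_s):
    # Each user completes one action every (response + think) seconds.
    return users / (response_time_s + think_time_s)

# 500 users, 1-second responses, 14 seconds average think time (assumed).
realistic = requests_per_second(500, 1.0, 14.0)
# Same users with no think time — the "functional tests in a loop" approach.
unrealistic = requests_per_second(500, 1.0, 0.0)

print(round(realistic, 1))    # ~33.3 requests/second
print(round(unrealistic, 1))  # 500.0 requests/second — a 15x overload
```

A test built on the second figure would report false performance issues while telling you little about real-world behavior.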
Isolated
A dedicated performance testing environment is generally recom-
mended for two main reasons.
Performance test results are meaningful only if testers are aware of the
activities occurring in the system during the test. A performance test is
worthless if an unaccounted-for process places additional demand on
the system during test execution.
Business data
Key to good performance testing is using business data such as setups,
configurations, masters, and transactions. It’s recommended to use
the configurations and data to be migrated that will ultimately go live
in production. Additionally, all the data preparation activities must be
ready in advance, for example, data from 100,000 customers or sales
orders should be available via an import file.
Functionally correct
A performance test result is meaningful only if the system functions
correctly. It’s tempting to focus on the performance metrics of
successful tests. However, if errors occur, they should be corrected and
the test should be executed again before any analysis is performed on
the results. Deviations in behavior between test runs can significantly
skew a performance test result and make any comparison meaningless.
Document results
The output of performance testing activities should be documented
clearly so that interpretation is straightforward. The results should
be mappable to performance testing criteria and enable the team to
quickly assess whether the requirements were achieved or whether
there are gaps between requirements and results. For example, page
load times can be captured for certain activities and compared to
acceptable page load time requirements agreed to by the business.
It should also be possible to identify when the performance test was
executed and against which version of code if applicable.
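As a minimal sketch of mapping results to criteria, the following compares measured page-load times against agreed thresholds; all numbers and activity names are invented for illustration.

```python
# Map each measured page-load time to the business-approved threshold so
# the team can see at a glance whether requirements were achieved.

agreed_thresholds_s = {"Open case form": 2.0, "Search customers": 3.0}
measured_s = {"Open case form": 1.7, "Search customers": 4.2}

results = []
for activity, limit in agreed_thresholds_s.items():
    actual = measured_s[activity]
    status = "PASS" if actual <= limit else "GAP"
    results.append((activity, actual, limit, status))
    print(f"{activity}: {actual:.1f}s (target {limit:.1f}s) -> {status}")

# Record run metadata alongside results, e.g. test date and code version.
```

Capturing the run date and code version with each result set makes comparisons between test runs meaningful.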
▪ Guidelines and workflow for investigating and fixing performance
issues
▪ Risks such as delays and budget related to performance testing
To benchmark what’s feasible for the solution when all network conditions
are favorable, ensure that the performance testing environment is located
in the same Azure region in which you plan to deploy production for
business users. As mentioned earlier, latency adds overhead to every oper-
ation, and minimizing testing latency can help identify potential location
network issues if the test results vary with the location testing.
Address performance issues
Analyzing performance problems is complex and situation specific and is
beyond the scope of this book. However, let’s discuss some broad consid-
erations that might help you structure an approach to resolving issues.
Deal in facts
When faced with a performance problem, implementation teams
often begin to frantically point fingers at certain parts of the system
and develop ideas to find the magical quick fix to solve the problem.
Unfortunately, this approach often causes more problems than it
solves because teams make significant changes based on instinct, often
degrading performance further or causing regression bugs as a result.
Expectations
Keep in mind that performance issues are rarely due to a single issue.
Suboptimal performance is most commonly the result of a number of
factors working together, so a single fix is typically unrealistic. Generally,
performance tuning is an iterative process of applying incremental improvements.
Knowledge is power
Performance issues can be complex and difficult to resolve, so it’s vital
that the implementation team has sufficient knowledge to be able to
ask the right questions and analyze issues meaningfully. The imple-
mentation team is often able to assist with performance issues, but
issues can surface after go live and expiration of any warranty period.
It’s therefore crucial to transfer knowledge to allow business as usual
(BAU) teams to resolve performance issues.
first assess whether the issue occurs in a specific customization. Many
Dynamics 365 customization options can be enabled and disabled,
and strategically disabling customizations often helps identify whether
the issue is customization related, and if so, which customization is the
source of the issue.
Low-hanging fruit
It’s advisable to identify smaller opportunities for performance gains,
rather than consider reworking large and complex areas of the system.
For example, for a poorly performing piece of code, there are usually
several options for optimizations with varying risk of causing issues and
varying performance gains. In some situations, it might be advisable to
make a number of low-risk tweaks; in other situations, it might be better
to make a more complex change and manage the risk. Which approach
to take is entirely situation dependent, but it’s important that the team
agrees on the approach and that the changes are managed carefully.
Workshop strategy
FastTrack runs a solution performance workshop focused on solution
design that covers the impact of additional configuration and cus-
tomization on the overall performance and end-user experience. The
workshop emphasizes the importance of performance prioritization,
goals, and testing during the stages of a project.
Workshop scope
The workshop includes the following topics that address how to
incorporate performance activities into the overall delivery plan and
allocate sufficient performance expert resources to the project:
▪ Data volumes Projected workload and integration volumes to
ensure expectations are within limits and aligned with intended
product usage
▪ Geolocation strategy Physical locations of users and servers to
identify any network-related challenges
▪ Key business scenarios Main areas of the business for which
performance is particularly important
▪ Extension performance Planned customizations to understand
how implementation is aligned with recommended practices
▪ User experience performance Modifications to the user expe-
rience in conjunction with best practices
▪ Performance testing Performance-related goals and testing
strategy to ensure performance is measured
Timing
We recommend that you conduct the performance testing strategy
workshop before solution design or as soon after as the team is able
to provide detailed information about performance requirements and
the performance testing strategy. Scheduling a workshop later in the
implementation is risky because any findings and recommendations
from the workshop could cause significant rework.
Product-specific guidance
Finance and Operations apps
Following are recommendations for achieving optimal performance in
your Finance and Operations apps solutions:
▪ Use Tier-2 or higher environments based on business objectives.
Don’t use a Tier-1 environment.
▪ Keep the solution up to date with hotfixes, platform updates, and
quality updates.
▪ Identify and maintain a log of performance-related risks.
▪ Use DMF to import and export large volumes. Don’t use OData for
large volumes because it isn’t natively built to handle large payloads.
▪ Use set-based data entities and parallelism to import and export
large volumes.
▪ Build your own data entities to avoid potential standard entity
performance issues. Standard entities contain fields and tables that
you might not need for your implementation.
▪ Configure a batch framework including batch groups, priority, and
threads.
▪ Define batch groups and assign a batch server to each batch
group to balance batch load across AOS servers.
▪ Design heavy batch processes to run in parallel processing.
▪ Prefer non-continuous number sequences with pre-allocation.
▪ Run cleanup routines and jobs regularly.
▪ Avoid record-by-record operations; use set-based operations such
as insert_recordset and update_recordset where applicable.
▪ Be aware of the implications of security on performance. An ad-
ministrator role, for instance, will have better performance than a
user with limited access. Execute performance tests with users who
have appropriately configured security settings.
▪ Use the Optimization advisor workspace to identify business
processes to be optimized.
▪ Be aware of priority-based throttling, which introduces service protection settings that prevent the over-utilization of resources to preserve the system’s responsiveness and ensure consistent availability and performance for environments running Finance and Operations apps.
▫ Configure priorities for the OData and custom service-based
integrations, depending on your business-critical need for
these integrations.
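The set-based recommendation in the list above can be illustrated with generic SQL, here run through Python’s `sqlite3` for a self-contained sketch. This is not X++; `insert_recordset` and `update_recordset` are the actual Finance and Operations constructs, and the table below is invented.

```python
# A single set-based UPDATE replaces a row-by-row loop, cutting the work
# from N separate statements to one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_line (id INTEGER PRIMARY KEY, qty INTEGER, status TEXT)")
conn.executemany("INSERT INTO sales_line (qty, status) VALUES (?, ?)",
                 [(i, "open") for i in range(1000)])

# Record-by-record anti-pattern: 1,000 separate UPDATE statements.
for (row_id,) in conn.execute("SELECT id FROM sales_line").fetchall():
    conn.execute("UPDATE sales_line SET status = 'posted' WHERE id = ?", (row_id,))

# Set-based equivalent (analogous to update_recordset): one statement.
conn.execute("UPDATE sales_line SET status = 'open'")

count = conn.execute("SELECT COUNT(*) FROM sales_line WHERE status = 'open'").fetchone()[0]
print(count)  # all 1,000 rows updated in a single round trip
```

On a remote database, the per-row loop also pays network latency on every statement, which is why the difference grows with volume.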
Performance tools
▪ Trace Parser
▫ Diagnoses performance issues and various errors
▫ Visualizes execution of X++ methods as well as the execution
call tree
▪ Lifecycle Services Environment Monitoring
▫ Monitors server health metrics
▫ Monitors performance by using the SQL insights dashboard
▪ Query Store
▫ Reviews expensive SQL queries during defined intervals
▫ Analyzes the index used in queries
▪ PerfSDK and Visual Studio load testing
▫ Simulates single-user and multi-user loads
▫ Performs comprehensive performance benchmarking
▪ Performance timer
▫ Helps determine why a system is slow
▪ Optimization advisor
▫ Suggests best practices for module configuration
▫ Identifies obsolete or incorrect business data
▪ Display the minimum number of fields required in the forms.
▪ Design forms and pages to display the most important information
at the top.
▪ Minimize the number of controls in the command bar or ribbon.
▪ Use collapsed tabs to defer loading content.
▪ Avoid unsupported customizations such as direct Document
Object Model (DOM) manipulation.
Conclusion
This chapter discussed why performance is expected and critical for
user adoption, the customer experience, and project success. We noted
that although Dynamics 365 projects are built to perform well at scale,
their flexibility means it’s crucial that implementation teams consider
performance as an iterative process throughout the solution lifecycle.
Good solution performance starts with good design. This means using
Dynamics 365 products for their intended purposes and fully considering
the performance implications of design decisions early on in the process.
Many performance issues occur due to poor customization, configuration,
and design choices, so it’s important to proactively work to avoid this
by clarifying requirements and implementing best practices.
how the system will perform in the real world.
References
Product-specific guidance (Finance and Operations):
▪ Monitoring and diagnostics tools in Lifecycle Services (LCS)
▪ Performance troubleshooting using tools in Lifecycle Services (LCS)
▪ Work with performance and monitoring tools in Finance and Operations apps
▪ Query cookbook
▪ Priority-based throttling

Product-specific guidance (Customer Engagement apps):
▪ Performance tuning and optimization
▪ Optimize model-driven app form performance
▪ Tips and best practices to improve canvas app performance
▪ Introducing Monitor to debug apps and improve performance
▪ Performance considerations with PowerApps
Checklist
Performance focus
Establish that the responsibility to deliver a performant solution on the SaaS platform is shared by the cloud service provider and the implementor who is customizing and extending the out-of-the-box application.
Case study
Corporate travel company
learns how testing is critical
to measure and optimize
performance
A global corporate travel company was implementing Dynamics 365
Customer Service to drive a call center transformation. Several pro-
cess flows required integration with legacy systems, many with high
transaction volumes, and there was a business-mandated customization
that the project team agreed couldn’t be achieved using out-of-the-box
functionality. This complex solution was projected to support 4,000 users
at scale and needed careful planning to ensure optimal performance.
While still in the design stage, the project team decided to use the
best practices from the Success by Design framework and their own
experiences to highlight the risks of leaving performance testing out
of scope. The project team pointed out potential negative outcomes if
performance wasn’t considered:
▪ Users wouldn’t adopt the system if its performance affected user
productivity.
▪ Costs would be higher if reactive measures were required to ad-
dress performance issues later.
▪ Reinforced and incorporated basic patterns (such as retrieving
only the required attributes) across their code components.
Prepare
18 Prepare for go live
19 Training strategy
Guide 18
Prepare for go live
Implementation Guide: Success by Design: Guide 18, Prepare for go live 514
The final countdown for
the start of a journey.
Introduction
Go live is the process through which a new solution
becomes operational. It’s a critical milestone during
the deployment of a new business solution.
This is the stage in which all the parts of the project come together and
are tested and validated to ensure everything works as expected, not
just on their own, but as a whole.
And finally, the cutover, the last sprint!

Go-live readiness includes all the key activities identified in the initial
stages of the project and refined throughout the project to ensure a
smooth go live. These activities encompass securing all the resources required to get the solution ready for production and ensuring that end users are trained, engaged, and part of the process, to drive adoption of and stickiness to the new solution.
When going live with a new solution, there’s often a transition period,
also known as cutover. The cutover process involves several steps that
need to be planned, executed, and monitored to ensure the completion
of all the essential activities critical to the transition to the new solution.
Go-live readiness
All the tasks and efforts undertaken during an implementation project
are preparation for the biggest milestone of the project: go live. In
the Success by Design framework phases, completion of these tasks is
when the project reaches the Prepare phase.
At this point, if you followed all the previous guidance and recommend-
ed practices, the solution should have sufficient maturity for going live.
You should perform an evaluation on the preparedness of the people,
processes, and systems for the solution to ensure no critical details have
been overlooked for go live. While you’ll never be in a position of zero
risk for go live, the go-live readiness review is a qualitative method to
determine how fully prepared the new solution is to run your business.
When the go-live strategy is aligned with best practices, requirements
for going live have been successfully completed (including testing, code,
and configurations), and there’s a concise and agreed-to plan for the
next actions required for the transition to the new system (cutover), then
the project is ready to go live. Figure 18-1 shows the Success by Design
implementation phases and when go live occurs.
Fig. 18-1
To get ready to go live, you need to verify completion of the most
important project tasks. Once confirmed, you can perform the cutover
activities. The next sections discuss these topics in detail.
The review uncovers potential risks and issues that could imperil go live
and provides a set of recommendations and actions to mitigate them.
Start early to prepare for the go-live review. Schedule time to complete
the go-live checklist and conduct the review, accounting for time to
mitigate possible risks and issues, especially go-live blockers. Keep
in mind that time is needed to prepare the production environment.
Starting the review just a few days away from the go-live date risks a
delay. However, starting the review too early can also be detrimental
to the process because there might not be sufficient information about
the project to complete the go-live review with reliable results to
properly identify potential risks of the solution.
Keep the review objectives in mind
When executing the review, make sure you understand the responses
on the checklist. If something isn’t clear, ask questions and validate
the responses.
Identify all the possible risks and issues and get the recommendations
on each key area. All issues and risks must have a mitigation plan.
Identify workarounds for any go-live blockers. Follow up on mitiga-
tions of issues identified for critical activities.
Go-live checklist:
▪ Solution Scope to be released
▪ Acceptance of the solution
▪ SIT completion
▪ UAT completion
▪ Performance testing completion
▪ Data migration readiness and validation
▪ Confirm external dependencies
▪ Change management for initial operations readiness
▪ Operational support readiness

In the next section we discuss in depth the main topics in the checklist. Refer to the “Product-specific guidance” section for specific product requirements.

The Solution Scope to be released

What you check
The Solution Scope conceptualizes the final solution that will be ready for go live. It includes all the requirements designed and developed during the Implement phase, ensuring their alignment with the business needs to be able to operate once live.

Consider the following during the Solution Scope review:
▪ Business processes supported by the solution
▪ Different solutions and applications used
▪ Whether the solution contains customizations and independent
software vendor (ISV) solutions
▪ Type and volume of integrations
▪ Number of go lives or rollouts
▪ Number of users at go live and in future rollouts
▪ Solution Scope signed off on by the business
Expected outcome
Solution Scope is aligned with the solution that’s going to be live. It has
been communicated, shared with stakeholders, and signed off on by
the business, agreeing that the expected scope is covered.
Mitigation plan
Compare the Solution Scope with the solution delivered for go
live, share the Solution Scope and comparison results with the key
stakeholders, and determine whether the solution is complete or if
functionalities are missing. If any are missing, prioritize them and
assign level of risks, owners, and expected completion date.
don’t come up after end users get their hands on the new system.

As described in Chapter 14, “Testing strategy,” testing accurately gauges the readiness of the solution. Testing measures a solution’s quality and effectiveness because testing simulates how the solution will operate in real life. As a result, testing builds a solid foundation on which to determine whether the solution is acceptable, enabling the business to make a go/no-go decision. The business is required to sign off on the solution when there’s objective and rigorous testing to prove that the solution fulfills the business vision.

Expected outcome
Unit testing through end-to-end testing must be completed success-
fully and the results signed off on by the business. By doing so, the
business states that the solution as built meets their end-to-end pro-
cess needs and that those processes are ready to be executed on the
new solution.
in production. There’s a high risk of issues such as rollbacks, slow
performance, unstable integrations, security holes, last-minute devel-
opments, and platform churns.
Mitigation plan
Following the best practices in Chapter 14, “Testing strategy,” you’ll
ensure the integrity and effectiveness of the solution, minimize any
go-live blockers, and avoid rushing into fixing unexpected and critical
bugs close to go live. You’ll complete the testing and ensure the integ-
rity and effectiveness of the solution.
SIT completion

Why you check it
SIT validates that the behavior of the solution, including integrations between different applications and with external systems, works properly. During SIT it’s important not only to test and verify the interactions between all parts of the solution but also to plan and prepare

Risks of not reaching expected outcome
Don’t wait until UAT to perform SIT because this could cause significant
failures during UAT. If you execute SIT during or after UAT and discover
that the incorrect integration pattern was used and you need to rede-
sign your integrations, this will put your go live at risk.
Going live without fully testing the integrations might result in un-
steady and ineffective system connectivity, performance issues, flawed
and weak interfaces, and data flow issues such as data inconsistency
and data not being available on time.
Mitigation plan
Complete SIT with volumes that are close to actual peak volumes
and get sign-off from the business that the integration testing strategy
works for the go-live. For guidance on which integration pattern to
choose, see Chapter 16, “Integrate with other solutions.”
Expected outcome
UAT should include the following requirements:
▪ Test cases should cover the entire scope of the requirements,
happy path and edge scenarios.
▪ Use migrated data for testing because it validates how real data
will behave in real life.
▪ Perform UAT with the correct security roles assigned to users; don’t
use a general security role or assign the System Admin role to all users.
▪ The solution must comply with any company-specific, country- or
region-specific, and industry-specific regulatory requirements.
Document all features and testing results.
▪ UAT must be conducted in an environment in a Microsoft apps
subscription to ensure that the environment’s properties approxi-
mate your production environment as much as possible.
▪ Obtain approval and sign-off from the customer.
Mitigation plan
Typically, UAT is considered complete when all the cases are covered, there are no blocking issues, and any high-impact bugs have been identified. Establish a mitigation plan to address open items identified during UAT, with an owner and an estimated time of completion.
Without a mitigation plan, there’s a substantial risk of delaying the
go-live date.
Performance testing completion

What you check
Validate that performance testing is successfully completed and signed off on by the business.

Why you check it
Overlooking performance testing might result in performance issues post–go live. Implementations often fail because performance testing is conducted late in the project or not until the production environment is ready, when it’s used as a testing environment. A production environment shouldn’t be used as a testing environment: it’s easier to fix performance issues in a test environment than in a live one, and fixing issues in production may require downtime.

Any open issues critical for production need to be addressed before going live.
justify skipping performance testing. However, performance is not only
about infrastructure, as discussed in Chapter 17, “A performing solution,
beyond infrastructure.” Therefore, we highly recommend that you
conduct performance testing before going live in a test environment. A
production environment cannot be used to conduct core testing; it’s used
only for mock cutover, cutover, final validations, and live operations.
If performance checks are done just during UAT, that might not be a
good representation of the actual usage post–go live. For example, if
during UAT there’s good coverage of roles and personas, regional ac-
cess, devices, and environments but those are for a small percentage of
users, UAT can validate the solution readiness in terms of functional and
even end-to-end performance of each business process, but it doesn’t
represent the full load and concurrency of actual usage post–go live.
Therefore, consider defining a separate performance testing strategy
that can simulate stress on the solution and concurrency usage.
Mitigation plan
Execute performance testing in parallel with UAT. Establish a mitigation plan to address issues and assign owners and expected completion dates. Identify the root cause of issues so as not to replicate them in the production environment.
Expected outcome
Data migration is tested several times prior to execution during the cu-
tover to validate the strategy and processes, identify data corruption or
duplicated data issues and address them, and make sure performance
is measured, validated, and fits within the cutover time window.
All scripts and processes planned for the cutover migration are tested
and signed off on by the business.
Any migration issues and risks are logged and a mitigation plan estab-
lished. This information helps inform the go/no-go decision.
If incorrect data flows into your new system, it could also be reflected in communications with your customers through the system once you are live.
Mitigation plan
Execute multiple dry runs of the data migration and refine the migration plan. For more recommended practices to prepare your data migration, review Chapter 10, “Data management.”
Confirm external dependencies
What you check
Verify that external dependencies such as ISVs and third-party systems and services are aligned with the timelines and scope for go live.

Why you check it
External dependencies are outside the direct control of the project team. This means it’s even more important for these dependencies to be accounted for when building the project plan and managing the schedule. All external dependencies should be documented and monitored.
Mitigation plan
It’s good practice to hold regular meetings to review the status of dependencies, because problems with them can cause delays and other project issues.
true value of the business solution is realized when end users actively adopt and engage with the processes implemented.

It’s natural to be resistant to change even if it’s positive and can significantly improve one’s experience. It’s important to have a change management plan that prepares users for changes. There are several resources and strategies available to achieve successful change and adoption that include ways to communicate, reduce resistance, and allow feedback.

Expected outcome
Our goal here is not to explore the plethora of change and adoption strategies but to highlight the key principles and activities that are critical before the rollout of a new business solution.

It’s important to assess early in the project, and continually reassess, the changes in an organization to ensure there’ll be a smooth adoption of the system.

The following are key elements to drive change management close to go live.
Training For go live to be successful, it’s vital that users know how
to use the system on day one of operations. Therefore, successful user
training is key to go live.
Ensuring that users are trained helps them achieve the desired results from
the new solution and leads to higher productivity and adoption. Failure to
establish a training strategy can negatively impact usage and adoption.
feedback before the new solution went into production.
Your program must have an effective way to engage the end users of
the solution to help drive adoption and also eliminate the risk of build-
ing a solution that doesn’t necessarily meet user requirements.
Business sponsorship The business sponsor, or executive sponsor,
is a trusted and influential leader who enables cultural changes and
plays an essential role in championing transformation throughout the
organization by communicating the value of the new solution.
Mitigation plan
Reassess the change management plan throughout the implementa-
tion project.
Operational support readiness

What you check
Verify that there's a monitoring and maintenance plan for the solution
in the production environment, as well as for transitioning the plan to
support teams. Before go live, there must be a plan for the transition
to the solution's support teams.

Expected outcome
Support teams can be from the partner or customer organization, and
they need to have the necessary resources, tools, and access to subject
matter experts, business analysts, process specialists, and change
management champions. Support teams should also be trained on the
solution; this ensures that tickets that come through the help desk are
properly addressed.

Notify stakeholders when all parties agree that the system is ready to
go into production and send them a list of end users who will use the
new system.
For discussions of continuous deployment plans, refer to Chapter 20,
“Service the solution,” and Chapter 21, “Transition to support.”
There’s also a risk that the new solution could go live in a system ver-
sion that’s out of service. In such a scenario, if an issue is uncovered in
production, you’ll need to update the solution to the latest version to
be able to apply the hotfix. In addition, automatic updates that install
unexpectedly might affect the deployed solution and cause outages,
A support plan unavailability, and blockings.
enables support
teams to be more Mitigation plan
proactive and As discussed earlier in this section, it’s important before go live to plan
the transition to the teams who will support the solution. A support
preventive rather plan enables support teams to be more proactive and preventive
than reactive. rather than reactive.
Production environment
readiness
It’s critical to prepare the production environment before go live. There
are different approaches to achieve this, depending on the applications
in the scope of the deployment.
Some activities can be completed ahead of time, especially configurations
that take time to complete. For example, setting up an integration with
an email engine such as Outlook might depend on the size of a user
mailbox, and the bigger the mailbox, the longer the process takes
to complete.
Fig. 18-3: Rehearse, Communicate, Go/No-go
Cutover strategy
The cutover strategy begins in an early stage of project planning, is
refined throughout the project, and is completed before the cutover
plan is created.
The cutover strategy ensures alignment of the cutover plan with orga-
nizational objectives and includes the following aspects:
▪ Timing, market conditions, and other environmental aspects
necessary to go live with the new business solution
▪ Organizational readiness requirements in terms of available com-
petencies and resources
▪ Setting up the communication plan
▪ Defining roles and responsibilities
Time your go live
It's challenging to establish a realistic and achievable go-live date
because this date is set early in the implementation. It's always
important to have some buffer time when planning the milestone dates,
but this is especially true for the go-live date.
To set the go-live date, consider the time required to complete testing
and resolve any issues that might arise, in addition to time for the
preparation of the production environment and the cutover activities.
Preferably, plan to go live when there’s a slower flow of activities in the
business. For example, in the case of a seasonal business, like for some
retailers, choose the season when sales are slow to lessen any negative
impact on the business. Also consider avoiding going live during any
busier times, for example, month or quarter end. In addition, take
holidays into account; they might mean low activity for some businesses
and for others might be the busiest times of the year. It's also
important to make sure that all team members and users are fully
available during that time.
The stakeholders should verify that all necessary resources are avail-
able not only for the requested duration of the cutover plan activities
but also to support post-go live.
Communications plan
The communications plan is an essential part of the cutover plan. This
plan identifies all the stakeholders involved in the cutover. The plan
should include at least one communication channel and appropriate
distribution lists, depending on the type of communication required.

It's important to identify and document the different communications
required for go live, who is responsible for triggering the communications,
and the recipients list. Having a communication plan as part of the
cutover plan enables better visibility into who the stakeholders are,
at what point in time they should receive communications, and who
the points of contact are.

Effective communication helps avoid uncertainty and provides visibility
to stakeholders about project status and the results of each activity,
which is important for a successful cutover.
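As a simple illustration, the mapping from each cutover communication to who triggers it and who receives it can be kept as structured data. The events, roles, and recipient lists below are hypothetical examples, not a prescribed template.

```python
# Illustrative communications plan: each cutover communication mapped to
# who triggers it and who receives it. Events and roles are hypothetical.
communications = [
    {"event": "Cutover started", "trigger": "Cutover lead", "recipients": ["Project team", "IT Ops"]},
    {"event": "Data migration complete", "trigger": "Data lead", "recipients": ["Project team", "Business sponsor"]},
    {"event": "Go/no-go decision", "trigger": "Steering committee", "recipients": ["All stakeholders"]},
    {"event": "System live", "trigger": "Project manager", "recipients": ["All stakeholders", "End users"]},
]

def recipients_for(event_name: str) -> list:
    """Look up the distribution list for a given communication event."""
    for c in communications:
        if c["event"] == event_name:
            return c["recipients"]
    return []

print(recipients_for("Go/no-go decision"))  # → ['All stakeholders']
```

Keeping this mapping explicit makes it easy to verify, during rehearsals, that every milestone has an owner and a distribution list before cutover day.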
Cutover plan
The cutover process is a critical and complex step that must be planned
and practiced in advance. The center of the cutover process is the cutover
plan, which lists in detail every step that needs to be executed and moni-
tored to prepare the production environment for business processes to be
executed once the cutover is complete. Cutover activities include system
configuration, data migration, data validation, and decommissioning of
legacy systems when applicable. These steps must be precisely orches-
trated to minimize disruption and ensure a successful deployment.
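To make the orchestration concrete, a cutover plan can be captured as structured data so that owners, durations, and verification steps are explicit and the total cutover window can be computed. The task names, owners, and durations below are illustrative assumptions, not part of any Success by Design template.

```python
# Illustrative sketch of a cutover plan as structured data.
# Task names, owners, and durations are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CutoverTask:
    name: str
    owner: str
    duration_hours: float
    verification: str  # how completion of the task is confirmed

plan = [
    CutoverTask("Freeze legacy system", "IT Ops", 1.0, "Confirmation email sent"),
    CutoverTask("Migrate master data", "Data team", 6.0, "Record counts match source"),
    CutoverTask("Load opening balances", "Finance", 4.0, "Trial balance reconciles"),
    CutoverTask("Smoke test core processes", "Test lead", 2.0, "Checklist signed off"),
    CutoverTask("Notify stakeholders of go live", "PM", 0.5, "Communication plan executed"),
]

# Summing per-task durations gives the minimum cutover window, which mock
# cutovers should then validate against reality.
total = sum(t.duration_hours for t in plan)
print(f"Planned cutover window: {total} hours")
for t in plan:
    print(f"- {t.name} ({t.owner}, {t.duration_hours}h): verify via '{t.verification}'")
```

Because each task carries its own verification step, the same structure doubles as the checklist used during mock cutovers to confirm that timings and sequences hold up.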
verification steps, and any additional documentation associated with
each task. To arrive at the correct timings for the cutover plan, activities
must be practiced and tested multiple times. It’s recommended to
perform a “mock cutover,” or dress rehearsal, simulating the activi-
ties of the real cutover in sandbox environments. Depending on the
complexity of the solution, it might be necessary to conduct multiple
rounds of mock cutover. The goal is that no matter how complex the
cutover procedures are, the project team has high confidence that final
execution will work flawlessly.
The implementation team should rehearse all the cutover plan steps to
ensure that the cutover strategy and communication plan are ready. By
doing so, the predictability of the outcome increases, backup plans can
be created to avoid dependency issues, the duration of each activity is
validated, and any identified issues with the plan can be corrected. Figure
18-4 is an example of a cutover plan and shows the list of activities, the
different elements for good traceability, and execution of the cutover.
Fig. 18-4 Cutover plan example (go-live date: 8/1/2021; today: 6/28/2021)
The cutover plan should also specify whether there are workarounds or
additional plans that might prevent any issues from delaying go live,
for example, if a blocking issue was identified during UAT but there’s
a workaround that can be implemented until a permanent fix can be
applied. The workarounds should take into consideration the impact to
end users. After all, users judge a solution by how well they can use it,
regardless of whether there’s a workaround. This is another reason why
all key stakeholders need to participate not only to define the success
criteria but also to make the final go/no-go decision.
Cutover execution
The cutover execution is the series of activities carried out in adherence
to the cutover plan. Cutover execution includes the following steps:
▪ Communicate activities, milestones, and results to stakeholders in
accordance with the communication plan.
▪ Ensure that all activities are executed and completed, and that any
unforeseen issues are addressed (or can be addressed after go live)
and have been communicated, acknowledged by stakeholders, and
documented.
▪ Execute the activities.
▪ Monitor the execution of the cutover activities.
If you execute a mock cutover, it’s crucial to validate the duration time
of the different tasks and their sequences so that you can achieve a re-
alistic plan for the actual cutover. This validation also helps you identify
the need for mitigation plans and whether you need to run additional
mock cutovers.
Successful go live
and transition to support
Once the cutover execution is completed, the new solution is released
to end users. At this point, the solution becomes operational, and
it’s crucial to ensure a smooth handover to the operations teams,
following the operational support plans discussed in the “Operational
support readiness” section in this chapter. For additional information
on support, visit Chapter 21, “Transition to support.”
Product-specific
guidance
Finance and Operations Apps
Operations must follow the go-live readiness review process to go live.
This review process acts as the quality gate for releasing the produc-
tion environment, which will be deployed only if the go-live readiness
review has been successfully completed.
Implementation Guide: Success by Design: Guide 18, Prepare for go live 542
▪ LCS also contains tools, such as environment monitoring, to monitor,
diagnose, and analyze the health of the environment and troubleshoot
issues if they occur.
▪ Continuous updates Ensure that your environments comply with
the One Version policy. This policy specifies that an environment
must have a version no older than four monthly service updates
from the latest available one. Set your Update Settings in LCS for
continuous updates.
▪ Review the targeted release schedule to verify that the version of
your environments hasn't reached the end of service. Deploying
production on a version that's out of service isn't allowed.
▪ Regression testing is needed when updating or deploying fixes or
making changes to the solution. The Microsoft Regression Suite
Automation Tool tests regressions due to continuous updates.
Upgrades
For Operations, upgrades from Dynamics AX 2012 are common. It’s
important to perform an assessment for go live with the specific re-
quirements when upgrading your solution, such as the following:
Cutover
Figure 18-5 shows the ideal cutover sequence for bringing data
into the production environment and validating the data. This works
for Operations.
Fig. 18-5 Cutover sequence (from the configurations environment to the production environment)
▪ Configurations complete: Configured manually or with data entities/packages.
▪ Move configurations to production: Back up and restore the database in sandbox and production.
▪ Master data in production complete: Additional data, like master data, is added on top of restored configurations manually, with data packages, or with integrations.
▪ Opening balances in production complete: Load opening balances with data entities to reflect a real operation scenario or as a final load.
▪ Mock go live complete: Simulate real-life operations for a small period to confirm solution stability in production.
▪ Cutover complete: After running the mock go live, restore the production database to any previous reusable milestone.
▪ Live: From this point, no more database movements are allowed, just data increments using data entities or manual entry.
performance, and integration performance under the realistic produc-
tion load, should also be validated.
Several tools are available to aid with building testing automation. One
such tool is Easy Repro, an open-source library intended to facilitate
automated UI testing.
Capacity management
Managing the capacity of your solution is important, especially post-go
live, when migration activities are complete and users are actively using
the system. Storage usage and growth need to be validated; using storage
above the limits has an impact on some features, such as creating
on-demand backups and restoring backups.
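A simple way to keep storage growth visible is to project current usage against the subscribed limit and flag when headroom gets short. The figures and threshold below are hypothetical; actual limits come from your subscription.

```python
# Illustrative capacity check: project storage growth and flag when usage
# approaches the subscribed limit. All figures are hypothetical.
def months_until_limit(current_gb: float, growth_gb_per_month: float, limit_gb: float) -> float:
    """Return how many months remain before storage reaches the limit."""
    if growth_gb_per_month <= 0:
        return float("inf")  # no growth means no projected exhaustion
    return max(0.0, (limit_gb - current_gb) / growth_gb_per_month)

remaining = months_until_limit(current_gb=80.0, growth_gb_per_month=5.0, limit_gb=100.0)
print(f"Approximately {remaining:.1f} months of headroom")  # → Approximately 4.0 months of headroom
if remaining < 3:
    print("Warning: plan additional capacity or a data archival strategy")
```

Running a projection like this on a schedule, rather than waiting for a hard limit, gives the operations team time to purchase capacity or archive data before features are affected.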
Data migration
You can use several tools to migrate data, including out-of-the-box
tools, ISV solutions, and custom-built utilities. It's important to follow
best practices and include them as part of your migration strategy.
Various factors can impact data migration activities, for example, the
service protection API limits that ensure consistency, availability, and
performance for everyone. Keep these in mind when estimating the
throughput and performing testing and the final data migration activities.
Request limits and allocations can also have an impact and should be
taken into consideration.
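When a client exceeds the service protection limits, the platform rejects the request and indicates how long to wait before retrying (for Dataverse, a 429 response with a Retry-After value). A migration loop can honor that signal rather than hammering the service. The `send_batch` callable and the `ThrottledError` shape below are hypothetical stand-ins for a real API client, used only to sketch the pattern.

```python
# Hedged sketch of throttling-aware migration: retry when the service
# signals that protection limits were hit. `send_batch` and the
# ThrottledError shape are hypothetical stand-ins for a real API client.
import time

class ThrottledError(Exception):
    def __init__(self, retry_after_seconds: float):
        self.retry_after_seconds = retry_after_seconds

def migrate_with_backoff(batches, send_batch, max_retries: int = 5):
    """Send each batch, pausing for the server-indicated delay on throttling."""
    sent = 0
    for batch in batches:
        for attempt in range(max_retries):
            try:
                send_batch(batch)
                sent += 1
                break
            except ThrottledError as e:
                # Honor the server's retry hint instead of retrying immediately.
                time.sleep(e.retry_after_seconds)
        else:
            raise RuntimeError("Batch failed after repeated throttling")
    return sent

# Simulated client: throttles the first call, then succeeds.
calls = {"n": 0}
def fake_send(batch):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ThrottledError(retry_after_seconds=0.01)

print(migrate_with_backoff([["rec1"], ["rec2"]], fake_send))  # → 2
```

Building the retry behavior into the migration tooling, and measuring effective throughput with it enabled, gives a realistic basis for estimating how long the final data migration will take.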
Production environment readiness
When using Customer Engagement apps, the production environments
can be initiated at any point in time. It’s important that apps in production
are the same version as those in the development and test environments;
otherwise, issues might occur that impact the release process.
Several tools help you prepare for go live, including the following:
▪ Power Apps checker helps validate model-driven apps as well as
canvas apps and should be used throughout the implementation.
▪ Microsoft Dataverse analytics enables access to metrics that help
you monitor the solution, especially during the operations phase.
Continuous updates
In the words of Mo Osborne, Corporate Vice President and Chief
Operating Officer of the Business Applications Group at Microsoft, “To
enable businesses everywhere to accelerate their digital transformation,
we are continuously enhancing Dynamics 365 with new capabilities.”
Service updates are delivered in two major releases per year, offering
new capabilities and functionalities. These updates are backward com-
patible so that apps and customizations continue to work post-update.
This chapter isn't intended to explain how go-live readiness and the
cutover plan should be built; there are other resources that cover
these topics. The aim of this chapter was to provide a summarized
explanation of the key activities for go-live readiness and their purpose,
along with the important takeaways, best practices, tips and tricks, and
common pitfalls that can happen during this phase and their impact on
how the new solution is received and adopted by end users.
Planning and preparing are key to avoiding delays that can impact
the deployment execution. All resources must be available in time for
activities to be executed and completed. Environments must be created,
configured, and operational on time.
Data migration can be complex. Realistic goals and good throughput are
important to ensure that there are no delays that might impact the go
live date and that all tasks are executed according to the data migration
plan. Ensure that the recommended practices to improve throughput are
in place. Validate that throttling limits aren’t being reached.
Support during cutover, go live, and post-go live is essential for all
activities to run in the expected time. It’s crucial to have resources
available to mitigate any issues so that tasks can complete on time
or with as slight a delay as possible. Even after a successful rollout, a
poor support strategy will impact the solution's success over time. It's
important to create a list of responsibilities for first, second, and third
lines of support as well as an escalation path and to transition knowl-
edge from the implementation team to the support team. Ensure that
there’s coverage outside traditional working hours and on weekends
for critical situations.
Set up appropriate alerts so that the operations team can take necessary
actions and notify users. A strong monitoring and communication plan
has a major impact on adoption and confidence in the solution. All
critical application components should have a logging mechanism with a
well-defined process to monitor the system and alert administrators.

References
▪ GitHub - Microsoft/EasyRepro: Automated UI testing API for Dynamics 365
▪ Use solution checker to validate your model-driven apps in Power Apps
Checklist
Case study
The gift:
go live in a cloud solution
A toy company recently embarked on an implementation project and
migrated all their processes from on-premises to Dynamics 365. They
had integrations with Dynamics 365 Customer Engagement, Microsoft
Power Apps, and Power BI. The company plans to implement Dynamics
365 Commerce and Warehouse Management mobile app in Microsoft
Dynamics 365 Supply Chain Management in future rollouts, but they
needed to move all financial data and their CRM system to the cloud in
time for the end of the year, their busiest time.
At the end of October, the team was excited and a little nervous
about the approaching go live. The impact of changing from a system
in place for many years would be huge. The team was used to their
manual and inefficient processes but were ready to move to a system
that would automate and improve processes and increase revenue
growth and profitability along with scalability. Change management
was critical for success.
The implementation team needed to make sure that everything was ready
for going live. With go live four weeks away, it was time to start review-
ing their readiness for the implementation. SIT was completed, and UAT
and performance testing were almost complete. The system seemed
performant; although not optimal, it was considered ready for production.
The team was ready to start the go-live review, to switch to the Prepare
phase. They prepared the go-live checklist and asked themselves the
following: Everything was thoroughly tested, but was the system really
ready? Were the users ready? Was all the infrastructure ready? Were all
the systems up to date? Was the mock cutover planned? Were the ISVs
working well?
Concerns came up over risks and issues identified during the as-
sessment. For instance, they realized that the mail service and other
services were in another tenant, so they needed to perform a tenant
move to enable single sign-on for their users. In addition, Microsoft
was planning an automated version release by their go-live date, so
they needed to update their environments to the latest version. This
would necessitate conducting another smoke test and verifying that
the ISVs worked correctly with the version. They missed dates because
they were focused on UAT and making fixes and addressing new re-
quirements identified during the testing phase.
Would they need to change the go-live date that was so close to the
holiday season?
What could they have done better? They couldn’t have changed the
date because they needed to be ready for the holidays, but they could
have planned the go-live date earlier so that they had more time for
the ramp-up and to address any delays or issues. They could also have
had an earlier assessment review, with UAT almost complete. The
timelines were tight, and they had the choice to go live with what they
had, which was risky because that might mean stopping operations in
the middle of the busiest season, or to move the go-live date to January,
since December wasn't an option. A third option was to hire more
resources and work more hours to try to deliver on time, which was
risky because the ramp-up would be during November. A big-bang go
live wasn't optimal for this situation.
The team learned that it’s important to start the readiness review on
time and to schedule sufficient time and resources. Additionally, it’s
crucial to have a solid support plan for production. The importance of
the Prepare phase also shouldn’t be underestimated; plan with enough
time to mitigate any issues and risks.
19
Training strategy
“There is no saturation
point in education.”
– Thomas J. Watson, founder of IBM Corporation
Introduction
At its core, the goal of a training strategy and training
is to ensure that all necessary users of your system
are educated on the new application. That way their
knowledge of how to complete their work results in
successful user adoption following go live.
Training is not the only factor in meaningful user education and
adoption; empowering users to perform their necessary tasks in the
system correctly and efficiently should be the ultimate "why" in the
development of a training strategy. As part of user adoption,
organizations should strive to train in a way that gives users confidence
in the application and inspires a sense of delight when using the system.

In this chapter, we cover several high-level objectives, as well as
examples of more organization-specific objectives, that should be
included in your organization's training strategy. A proper training
strategy should center around the creation and execution of a
comprehensive training plan. Furthermore, the training plan should
align to the broader training strategy of your Microsoft Dynamics 365
implementation.

We discuss how to create an appropriate scope for your organization's
training as part of this plan, as well as how to confirm the different
groups of users that need to be trained.

In this chapter, we define a training strategy and determine at a high
level what components you need to embark on a methodical approach
to a successful training execution for your Dynamics 365 implementation.
For a successful training strategy, consider these main areas:
• Training objectives
• Training plan
• Scope
• Audience
• Training schedule
• Training material
• Delivery approach
• Assumptions, dependencies, and risks
• Training as an ongoing process
• Training best practices
Each of these areas is covered in detail in this chapter.
Training objectives
One of the first things that your organization should consider when
beginning to develop a strategy surrounding training users, and a
plan behind it, is the objectives. Defining the objectives of a successful
training strategy is key, and it can help shape the crafting of a training
plan as well as lead to a more successful rollout of training itself.
What not to do
As a counterpoint to the recommended strategies, let us spend a little
time discussing the training objectives that an unprepared organiza-
tion might create.
and user adoption perspective. Thus, a key objective should be that all
users receive proper training on this specific job function.
Fig. 19-1 Training plan elements

We recommend you put together a training plan that, at a high level,
consists of at least the following elements (as illustrated in Figure 19-1):
▪ Training objectives
▪ Scope
▪ Audience
▪ Training schedule and resource availability (project planning)
▪ Delivery approach
▪ Validation of training success/objectives
▪ Assumptions/dependencies
▪ Risks
▪ Training environment management
▪ Training materials
▪ Training as an ongoing process
▪ Training resources
Scope
A crucial element of ensuring that your organization’s project has a
proper training strategy is defining an accurate scope for the training.
Here are different areas that cover the breadth of scope that you will
need to build into your training plan:
▪ Business processes completed by end users, derived from your
user stories and functional requirements
▫ Example: Users must be able to create and process an invoice
from end to end in Finance and Operations apps.
▪ Any nonfunctional requirements that require training users for the
requirement to be met
▫ Example: Users should have the ability to provide feedback
directly from the application.
▪ Any administrative tasks that users might need to accomplish
while in the application, separate from their business-driving work
▫ Example: Users must be able to configure and use the
Advanced Find tool inside Dynamics 365.
▫ Example: Users should be able to upload templates using the
Dynamics 365 data management tool.
There could be other areas of your application that should fall into the
scope of your training. If there is a task or process that your users need
to accomplish while working directly with your Dynamics 365 appli-
cation, it should probably fall into the scope of your training, as you
should assume there are users who need to be trained on that topic.
An example of this would be training project team members during
Once you have the full set of business processes, requirements, and
tasks, you can use them to create a true scope, consisting of actionable
training areas, for your training. There might not be a one-to-one re-
lationship between them, as there may be overlap between processes
as well as additional training topics that need to be added. This is our
guidance on how to avoid overlap and how to add more training topics.
The next step in setting the scope for your training is to categorize
and prioritize the list of training areas that you have created. As an
example, some business processes in your new application might
be different from the “as is” processes in your organization’s current
application. This is important for a few reasons:
training effectiveness. As part of your scope of training, and combined
with your training materials and delivery, you will want to separate
system training on the new processes and application from topics that
veer more into organizational change management.
Change: Field technicians are moving from a pen-and-paper-based work order management system to a mobile application.
Training area: Using the new mobile application to complete a work order.
Priority: 4
Approach: Because of the impact this change has, multiple live training sessions will be held for technicians to ensure proper education.
The following section identifies the core groups of people who should
be trained on your Dynamics 365 application. Note that each group
of users can be broken into subgroups. Breaking down these users
into personas is important. If personas are incomplete or incorrect,
your organization might not complete proper training on the correct
business processes or, worse, it might not complete training at all for
specific user groups.
As part of your training plan, be sure to identify each business role that
Trainers
Trainers are a special category of users that require training. They
should be knowledgeable in all areas of the system and are responsible
for developing training material and training other users. “Train the
trainer” is a common approach to onboarding this type of user to your
application. Depending on the size of your organization, you might
Super users
Super users are an elite group of (usually) end users who act as cham-
pions of the application. This group is key to driving excitement and
adoption of the new system. These users are often involved in early
feedback cycles with the project team. As subject matter experts (SMEs),
they become educated on the application early on so that by the time
end-user training begins, your super users are experts in various business
processes. As part of your training delivery plan, your organization can
use super users strategically (by having them attend training sessions
and serving as application advocates) to aid your trainers in their job
functions. Additionally, these super users can act as a “first line” to help
answer employee questions, keeping your support desk from being
overwhelmed and at the same time keeping morale high.
training these users so that other members of your organization are
aware that training is ongoing. You can also notify these stakeholders
of training that other roles are expected to undergo, which can help
from a buy-in perspective.
Training scope
Once you have defined the distinct groups (and subgroups) that
should receive training, you should work on matching the different
training areas defined in the Scope section of this chapter with the
groups of users defined earlier. Each group of users will require a spe-
cific set of training (based on training areas). We recommend creating
a matrix in your training plan, with training role against training subject
area, that you can refer to when creating training materials, planning
training delivery, etc.
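Such a matrix can be sketched as a simple mapping from roles to training areas, which makes it easy to answer both "what does this role need?" and "who needs this topic?" when scheduling sessions. The role names and topics below are hypothetical examples, not a prescribed list.

```python
# Illustrative training matrix: roles mapped to the training areas they need.
# Role names and topics are hypothetical examples, not a prescribed list.
training_matrix = {
    "End user": {"Core business processes", "Navigation basics"},
    "Super user": {"Core business processes", "Navigation basics", "Advanced Find"},
    "Support desk": {"Navigation basics", "Advanced Find", "Administration"},
    "Administrator": {"Administration", "Data management tool"},
}

def topics_for(role: str) -> set:
    """All training areas assigned to a role."""
    return training_matrix.get(role, set())

def roles_needing(topic: str) -> list:
    """All roles that require a given topic, e.g. to size a training session."""
    return sorted(r for r, topics in training_matrix.items() if topic in topics)

print(roles_needing("Navigation basics"))  # → ['End user', 'Super user', 'Support desk']
```

Reviewing the matrix row by row helps keep assignments comprehensive but not superfluous: every role gets the topics it needs and nothing more.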
Some users might require only basic training on Dynamics 365
administration (to assist other users during training and to implement
training tools), while administrators and support desk personnel might
require advanced training on the same topic. You should structure your
assignment of training topics to users in a way that is comprehensive
but not superfluous.
Training schedule
Successful integration of a training strategy into the main project
effort requires coordination with project management resources. Your
organization’s training strategy does not exist independent of other
ongoing efforts in the project, and it is important that key milestones,
resource requirements, dates, and tasks are acknowledged and ac-
counted for as part of your training plan and are incorporated as tasks
or dependencies in your overall project plan.
Trainees
▪ Who is necessary to take part in training activities, and when will
these activities happen?
▪ Are there any resource dependencies (e.g., Resource A is required
before Resource B)?
▪ How will training, and attendees, be staggered so that there is no
gap in coverage for day-to-day business functions?

Training plan
▪ When will the training plan be completed?
▪ Who will review it, and what will be the plan for updating it if
necessary?

Training execution and feedback
▪ When, and where, will training occur?
▪ Will training have an impact on other project efforts?
▪ If there are subsequent trainings, how will feedback be evaluated
and what is the cycle for processing this feedback?

As you can see, many questions related to a training strategy have an
impact on, or are reliant upon, activities that occur separate from
traditional training activities. In Figure 19-4, we examine one question
from this list to underscore the importance of asking and getting answers.

Fig. 19-4
Question: Are any project-related activities dependent on training?
Answer: We often recommend conducting training prior to your user
acceptance testing (UAT). The major benefits of this order are:
• During UAT, users are already aware of how to navigate through the
system and accomplish tasks. As a result, there should be fewer items
logged as bugs that should instead be listed as training issues.
Training materials
There are multiple consumable types of content for Dynamics 365
applications that your organization can create. Depending on your
project, you could create multiple types for use in different trainings.
Here are a few common examples of the types of training materials
that an organization could create.
Videos
Trainers can create videos that explain key training areas in detail and
walk through the application, providing guidance on how users can
navigate the system and complete their job tasks. Videos can be more
helpful than documents for many users, since they “show” rather than
“tell” certain features of the application. Videos can also be recorded
in small chunks that represent discrete processes in an application, as
opposed to creating one longer video that walks a user through an
end-to-end flow. This benefits trainers from a maintenance perspective; instead of having to rerecord the entire video when changes are made to your application (and create rework around having to record training for concepts that have not changed), they can focus on updating specific videos around new features and functionality.
Microsoft Learn
Microsoft has published several labs via the Microsoft Learn portal
that provide a wealth of knowledge on Dynamics 365 applications, as
well as other applications on which your organization might require
its users to be educated. Microsoft Learn allows the user to follow any
number of predetermined paths specifically designed for different user
personas. Individuals can also create their own knowledge path that
is specifically tailored to their learning requirements. Users can learn
on their own schedule and can browse videos related to topics when
necessary and engage with other community members.
Guided help
Dynamics 365 includes the ability to create custom help panes and
guided tasks, out of the box, to assist your users in walking through
the application and completing basic tasks in the system. Guided help
is easy to set up and implement and does not require an additional
solution on top of your application. Additionally, Dynamics 365 applications can install the Guides application, which allows for visual or holographic displays to show step-by-step instructions on tasks that your users need to perform.
Delivery approach
As a subset of your training plan, your organization should create a
training delivery document that contains specifics about the actual
execution of training itself. While many of these topics are covered in
the overarching training plan, it is important to have a specific plan
for training delivery to ensure that your organization is prepared to
execute training.
We will now discuss the topics that your training delivery document
should cover.
Live training (in person or virtual)
▪ When to use it: The content is critical for business function and you want strict control over delivery. Content contains non-functional requirements, or business processes that require specific devices.
▪ Benefits: Better interaction between trainer and participants as well as between participants. Immediate feedback. Collaboration in business processes is significantly easier (e.g., front office and back office need to work together to complete an order).
▪ Challenges: Scheduling challenges. In-person variants of this type of training require a physical room with computers, since Dynamics 365 is a web-based application. In-person variants of this type of training are limited to the number of people per room, per session.

Interactive web-based training
▪ When to use it: Content is moderate in difficulty. Business processes are best learned through repetition and ongoing training, since users' access to the application is necessary.
▪ Benefits: Web-based content can be built once and consumed by an unlimited number of users. People can train on their own. No scheduling requirements and no physical location needed.
▪ Challenges: Trainings are less effective than in-person, since there is less interaction and the end user faces more distractions. Trainings are not usually real-time, meaning any questions or challenges that users face during training might not be answered quickly. Web-based training, when built with digital adoption tools, can be costly and also require technical maintenance.

Self-paced training
▪ When to use it: Content can be easily explained in a user manual or video series. Content is best consumed as written reference material that can be used as needed.
▪ Benefits: Written and video content can be updated quickly as future releases and updates occur on your Dynamics 365 application. Training content is easiest to create.
▪ Challenges: This is the least interactive form of training, meaning users who rely on interaction either with a trainer or with a web-based application might struggle to learn when having to read/watch content.
We believe that it is never too early to begin training super users and
trainers. Since the main goal of training these users is to help them
become familiar with your application early in your project, you should
start conducting training with them (whether formal or informal)
during development and continue it as an ongoing process. These
early training sessions should familiarize super users and project team
members with basic application functions and serve as a backbone for
future train-the-trainer sessions.
In the sample project plan seen in Figure 19-6, end-user training takes place over the course of a single week, in a single location. If this organization were international and multilingual, most of the items in the plan would become more complex. To illustrate the increased complexity, we consider what end-user training might look like in a multinational organization in Figure 19-7.

Fig. 19-7: A sample multinational training schedule for April. Trainers work to develop and translate material into English, French, and Spanish; training material (including translations) is validated; end-user training #1 is conducted in the US, France, and Spain, followed by environment cleanup; end-user training #2 is conducted in the US, France, and Spain, followed by environment cleanup; finally, end-user training for field technicians is conducted in France.
Your training plan should not stay the same in every iteration; it should be a work in progress that is updated frequently. Once a training session is completed, there is still work to be done. Many organizations require multiple training sessions, and all of them present opportunities to revisit, evaluate, and improve future training sessions.
You can measure the effectiveness of your training in several different ways. The first is by assessing your users' learning at the end of, or shortly following, training, prioritizing in particular the processes that they will use most frequently in their jobs. It is
much more important that sales users understand how to create and
update an account record in Dynamics 365 without assistance if it is
a process they will need to do every day. Such prioritization should
be emphasized during the creation of any assessment method you
choose. Earlier in the chapter, we discussed how creating solid training
objectives would help you assess the effectiveness of your training. We
can now take those training objectives and use them to form the basis
for how we will assess our users, so long as the training objectives we
created contain the most critical topics that we are training users on.
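To make that prioritization concrete, an assessment can weight each process by how often users will perform it. The sketch below is a hypothetical illustration; the process names and weights are ours, not from this guide:

```python
# Weight each assessed process by how often users will perform it,
# so a readiness score emphasizes everyday tasks over rare ones.

# Hypothetical processes: weight = expected uses per week (illustrative).
PROCESS_WEIGHTS = {
    "create_account": 25,       # daily task for sales users
    "update_account": 25,
    "run_quarterly_report": 1,  # rarely performed task
}

def readiness_score(results: dict) -> float:
    """results maps process name -> fraction of assessment items passed (0..1).
    Returns a weighted readiness score between 0 and 1."""
    total = sum(PROCESS_WEIGHTS.values())
    return sum(PROCESS_WEIGHTS[p] * results.get(p, 0.0) for p in PROCESS_WEIGHTS) / total

# A user who masters the everyday processes scores high even if
# they struggled with the rarely used report.
score = readiness_score({"create_account": 1.0, "update_account": 0.9, "run_quarterly_report": 0.0})
```

The point of the weighting is simply that failing a once-a-quarter process should not mask mastery of the tasks users perform every day.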
This could be used to measure your organization's readiness for the new application. If certain business metrics must be met regardless of the application used, make sure these metrics are met within the Dynamics 365 application. As an example, if your call center has standards for how long a process should take your users to complete, whether 10 minutes or 2 minutes, confirm those standards can still be met in the new system.

Similarly, after going live, you can use help desk statistics to determine the longer-term effectiveness of training. If users are submitting help desk tickets, or requesting assistance, on specific areas of your application, it is a good idea to review training or training documents related to those features to see if they should be updated. Again, if necessary, additional user training might be needed.

If a bug is critical enough to block entire business scenarios and prevent trainees from accomplishing any tasks, it might be necessary to fix it directly in the training environment. However, we recommend this only in situations where the bug prevents any workarounds or other sessions that could be taught. Fixing the bug directly in a "live" environment could cause additional, unforeseen issues to occur, which would further lower users' confidence in the application. Furthermore, any bug fixes that happen in this instance would also have to be fixed in the lowest instance where development is occurring, regression testing would have to be completed, and any potential conflicts would have to be resolved.

Another common approach to collecting feedback on training is to conduct pilot training sessions with a limited audience. You can invite super users to attend these pilot training courses and gather feedback to improve. A side benefit of conducting these pilot trainings is that it helps super users in their process of becoming experts in your application.
Assumptions,
dependencies, and risks
As discussed in the "Training schedule" section of this chapter, successful training relies on many external factors that should be defined in your training plan. These factors can be assumptions, which are conditions you expect to hold true for the plan to succeed.
Your training plan and project plan should also include dependencies,
which can be represented in a process workflow. They effectively show
that certain tasks or activities must occur before others as part of your
training strategy. Figure 19-8 depicts a sample process workflow.
We recommend making sure each training participant has a user account with the same level of access as their forthcoming production user account. We do not recommend giving system administrator access to training users. This is a common mistake we see organizations make during training. They offer a variety of reasons for doing this, including the desire to not run into access issues during training and the security model having not been finalized.

Fig. 19-8: Create training documentation on lead/opportunity management. Create draft of document by SME → Review 1 of document → Update document based on feedback from SME → Review 2 of document by BPO → Finalize document based on feedback from BPO.
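Ordering constraints like those in Figure 19-8 can also be validated programmatically before they land in a project plan. A minimal sketch using Python's standard library; the task names mirror the figure, but the approach itself is our illustration, not part of this guide:

```python
from graphlib import TopologicalSorter

# Each task lists the tasks that must finish before it can start,
# mirroring the document-creation flow in Figure 19-8.
training_tasks = {
    "Create draft of document by SME": set(),
    "Review 1 of document": {"Create draft of document by SME"},
    "Update document based on feedback from SME": {"Review 1 of document"},
    "Review 2 of document by BPO": {"Update document based on feedback from SME"},
    "Finalize document based on feedback from BPO": {"Review 2 of document by BPO"},
}

# static_order() raises CycleError if the plan contains a circular dependency,
# which is exactly the kind of planning error you want to catch early.
order = list(TopologicalSorter(training_tasks).static_order())
```

Running the sorter yields a valid execution order and, as a side effect, proves the dependency graph contains no cycles.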
Internal change
Internal change, for the purposes of this section, refers to change that, at
a transactional level, is unique to the organization experiencing it. Each
of these types of change could require training material to be updated
and subsequently conducted on the same or different groups of users.
Personnel change
It’s important that User turnover is inevitable, either when employees leave an orga-
nization and new ones take their place, or when employees change
new users are job roles within the company. All new users are expected to quickly
provided support learn their job roles and how to navigate and complete their work in
during and after the application; to do this, these users need to be trained. Note that
present during the experienced users who have been using the application for a longer
period, but they might receive fewer training resources than users who
application’s initial were present during the application’s initial rollout.
rollout.
Your organization’s training material for users who joined after go live
will be similar to that used by your end users before go live, but again,
Application change
Many Dynamics 365 projects have adopted a phased rollout approach,
in which features of each application are released to production over
time, as opposed to a “big bang” approach, in which a single deploy-
ment to production prior to go live has all included functionality. While
the rest of this chapter covers the development and distribution of
training regarding the first release of application functionality, future
phases of the project must include updates to training material and
new training sessions.
Your organization might want to treat each phase of a rollout as a distinct project from a training perspective, wherein each future phase contains a distinct, more condensed training plan and training delivery schedule to ensure proper user training is conducted at every stage. These future phases will focus heavily on making updates to training material based on new business processes, and training users, both new and experienced, on the application.

Phase two training might include users who are already familiar with the application, having used it after the phase one go-live, as well as employees who have never used or been trained on your system. Any new and updated training material should reflect the difference in experience and skill level of these user groups. It should not be so simple as to be irrelevant to the experienced user, nor so complex as to confuse the new user.
External change
In other chapters, we discuss Microsoft Dynamics 365 as a software as
a service (SaaS) that provides continual updates to organizations, both
from a technical standpoint and a business standpoint. It is important
that your organization prepares for these updates as part of its training
strategy and that they are acknowledged in the training plan and ex-
ecution. Note that we allow customers to activate updates to software
at their own pace.
The authors of this book have experience with Microsoft Dynamics 365 implementations of all sizes and complexities. We have created and reviewed training plans, participated in and hosted training sessions, and evaluated the outcomes of training on end-user adoption. From most of these projects, common themes have emerged. We share the most important recommendations here as best practices, things to keep in mind during your organization's training process. These best practices are by no means comprehensive, nor are they set in stone, but they are meant to guide a project team to a more successful training outcome.

Training needs
▪ Keep an eye on release notes to leverage new features.
▪ Work with your implementation partner and your internal teams on your heatmap for Dynamics 365 capabilities. Have a regular review of future projects, incorporating new features delivered.
▪ Join the D365UG User Group for Dynamics 365 to learn about member-driven education, networking, and events.
▪ Create a Yammer group or Microsoft Teams channel to continue conversations on best practices and learnings.
▪ Host engagement events like "town halls" or "Lunch and Learns" to continue training after go live and drive end-user engagement.
▪ Share success stories about how people are using your application in innovative and impactful ways.
It is also important that your training material does not contain too
much duplicate content. While it is certainly critical that both the web-
based and live content cover the same features (especially if users are
not consuming multiple types of content), too much overlap can result
in training fatigue, and overall apathy regarding training. An organi-
zation would certainly want to avoid a situation in which users skipped
valuable content because they considered it repetitive.
For example, say you have a junior salesperson in their 20s and a senior salesperson in their 60s attending the same introductory training. Your application includes an embedded LinkedIn widget on your contact entity. Trainers should not assume that both users (or, more divisively, only one) are familiar with LinkedIn and its capabilities. A good rule of thumb is to assume all training participants arrive with the same level of background information and context of the application, plus any technology knowledge required to use it effectively.

It's important to not lose sight of the end goal: successful user education and meaningful adoption of your Dynamics 365 application.

Accessibility concerns
Your organization must consider accessibility while creating training material. All Office 365 products have an accessibility checker that can help you spot potential issues in your content.
Identify ongoing challenges
As discussed earlier, feedback is an important part of any improvement
cycle. In addition to identifying areas of improvement for future train-
ings and training material, we recommend that the people conducting
training or collecting feedback from the training identify key areas
where users are struggling with new functionality. This information can
be used to create more focused training material that can be distribut-
ed to users, or to create webinars or other sessions specifically focused
on these problem areas.
Product-specific
guidance
Up to this point in the chapter, our training guidance has applied to
Finance and Operations apps as well as Customer Engagement apps.
While both live in the Microsoft Dynamics 365 ecosystem and custom-
ers frequently adopt both systems (often simultaneously), there are
differences between the two, which can mean differences in how each
project should train its users. In this section, we highlight some of these
differences and how to apply them to your training material.
Help on docs.microsoft.com
The Dynamics 365 documentation on Microsoft’s docs.microsoft.com
site is the primary source for product documentation for the previously
listed apps. This site offers the following features:
▪ Access to the most up-to-date content The site gives
Microsoft a faster and more flexible way to create, deliver, and
update product documentation. Therefore, you have easy access
to the latest technical information.
▪ Content written by experts Content on the site is open to contributions from the community as well as Microsoft experts.
In-product Help
In the Dynamics 365 client, new users can enter the Help system to
read articles that are pulled from the Docs site’s Dynamics 365 area
and task guides from the business process modeler (BPM) in Lifecycle
Services (LCS). The help is contextualized to the form that the user is in.
For example, if a user is in a sales orders screen and wants to know how to create sales orders, the Help system will show the Docs articles and task guides related to sales orders (see Figure 19-10).
References
Training plan template
TechTalk Series: Training Plans and Content for your Finance & Operations Project (Microsoft Dynamics blog)
▪ Have a process to update the training plan incrementally as necessary to reflect scope, changes, risks, and dependencies to ensure adoption and engagement.
▪ Consider what to include regarding system process and business process, so that the training provides the best possible foundation to end users.
▪ Define a process for feedback and continuous improvements to training materials.
▪ Identify a process to provide for continuous training in alignment with Microsoft updates and changes to the solution as well as changes in roles and responsibilities.
In the training plan, the company included a mix of high-level and de-
tailed training objectives. The high-level objectives included these goals:
▪ To not allow access to the live system without a solid training program
▪ To prepare a core team of trainers to help support the initiative
▪ To continue receiving feedback and improving the training approach
▪ To develop the training materials early and schedule classes early
▪ To prepare all application users to efficiently use the application
(Dynamics 365 Finance and Dynamics 365 Supply Chain
Management or Dynamics 365 Customer Engagement), as well
as address any key business process changes required in their
job function
The team understood that for training to be successful and for mean-
ingful user adoption to be achieved, they needed to begin planning
early and set up a strong team to support it.
Formal feedback was recorded after the trainings and Microsoft Teams
channels were created for employees to continue providing feedback
to the team. Users were encouraged to share knowledge, ask ques-
tions, and suggest improvements to the training materials. The team
was also able to collect feedback and create metrics using help desk
tickets, which helped them identify areas of the application that users
found particularly challenging.
In the first few months, an evaluation of key performance indicators (KPIs) showed that the organization was on track to meet all the detailed objectives set by the team.
20
Implementation Guide: Success by Design: Guide 20, Service the solution
Guide
Service the solution
Continuing the business applications journey.
Introduction
Your solution has been deployed to production.
Users have been trained and onboarded. A support
process is in place.
Most implementation projects focus on building and deploying the
solution, but perhaps not nearly as much emphasis is put on educating
the owner of the new system in keeping it optimal and healthy.
For successful project governance, consider these main areas: monitoring service health, service updates, and environment maintenance.

Let's take a scenario where you rent an apartment in a large building complex. The owner of the building is responsible for cleanliness and upkeep of the building and common areas. They make sure to have proper plumbing and electricity in each apartment. They may add or improve amenities such as an exercise gym or a garden in the yard. But it is each renter's responsibility to keep their apartment in good condition, and to contact the building manager so that a repair person can be dispatched to help when something needs fixing. In a similar way, Microsoft, the owner of the software as a service (SaaS) system, will be responsible for the platform, the building and property in our scenario. The customer is responsible for the upkeep of the health of their solution, the apartment.
Monitor service health

A key principle for a successful onboarding experience to Dynamics 365 is knowing the health of your environments at all times. Your team must be able to troubleshoot issues right away.
Microsoft highly recommends monitoring service health and provides
you the tools to do so. That way, you know when irregularities are
found and action is needed.
Performance monitoring
Microsoft recommends that Dynamics 365 project teams consider add-
ing performance testing to the project’s test cycle. Performance testing
is an effective way to gauge the impact that your customizations may
have on the baseline Dynamics 365 solution.
▪ The network traffic can vary throughout the day depending on an organization's usage patterns
▪ For remote workers, reliance on individual internet connections could cause different outcomes for each user

Ultimately, the responsiveness experienced by end users is caused by a mix of multiple factors that aren't limited to the performance of the Dynamics 365 application itself.

Chapter 17, "A performing solution, beyond infrastructure," covers the importance of having a performance strategy that includes elements such as defining performance requirements, establishing baseline metrics, and ongoing performance testing.
As this data is collected, monitor the performance of the solution and set up notifications to warn you when performance of any aspect of the solution varies from a defined range.

A poorly performing solution leads to low user adoption, but you can stay ahead of this through proper monitoring and alerts.

For Customer Engagement apps, Dataverse analytics is helpful. With it you can gauge and monitor performance from within the Power Platform Admin Tool. You can also use Azure Application Insights to monitor applications like Dynamics 365 for custom telemetry needs.
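As a rough illustration of alerting when a metric "varies from a defined range," the sketch below compares recent samples against a baseline. The metric, baseline, and tolerance values are hypothetical; in practice they would come from tools such as Dataverse analytics or Application Insights:

```python
from statistics import mean

def check_metric(samples, baseline, tolerance=0.25):
    """Return an alert string when the average of recent samples drifts
    more than `tolerance` (25% by default) above the baseline; else None."""
    avg = mean(samples)
    if avg > baseline * (1 + tolerance):
        return f"ALERT: average {avg:.0f} ms exceeds baseline {baseline} ms"
    return None

# Hypothetical page-load samples in milliseconds against an 800 ms baseline.
alert = check_metric([900, 1100, 1300], baseline=800)
```

The same shape works for any numeric metric you collect; the important design choice is agreeing on the baseline and the tolerated band before go live, so an alert is unambiguous when it fires.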
Depending on your needs, you can add more data to suit your business
size and expected growth.
Users and integrations aren’t the only cause of storage growth. Logs
from system jobs, indexes created to help performance, and additional
application data added from new modules also contribute to storage
growth. Another scenario that impacts storage allocation is in the copy
and restore operations of an environment. Depending on the type of
copy, the size of the database can be very different. As an administrator, you need to be mindful of who can create new instances and what their true needs are to minimize impact on the storage allocation as these copies are being restored.

Administrators should monitor the volume of storage that is currently used as well as its growth rate. This information will help you budget for any additional storage needs or look into data archiving and deletion to free up space. Scheduling and cleaning up data from time to time will help as well. This is covered in the "Environment maintenance" section of this chapter.

Refer to Chapter 10, "Data management," for details on storage entitlements, segmentation, and impact to allocations when backing up and restoring instances. For information on storage capacity for Finance and Operations apps, see the Dynamics 365 Licensing Guide. Note that Dataverse storage capacity entitlements and usage changed in 2019.
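Budgeting for storage follows from two numbers administrators can track over time: current usage and monthly growth. A hypothetical projection sketch, with illustrative figures only:

```python
def months_until_full(capacity_gb, used_gb, growth_gb_per_month):
    """Estimate how many whole months remain before the storage
    allocation is exhausted, assuming linear growth."""
    if growth_gb_per_month <= 0:
        return None  # no growth: no projected exhaustion date
    return int((capacity_gb - used_gb) // growth_gb_per_month)

# Example: 100 GB allocated, 64 GB used, growing 3 GB per month.
remaining = months_until_full(100, 64, 3)
```

A projection like this turns a raw usage report into a decision point: buy more capacity, or schedule archiving and cleanup before the headroom runs out.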
even prevent API calls from being run. Error logging shows you when these limits are exceeded.
“Who has access to my customer data?” may seem like a simple ques-
tion, but as the number of users and system usage grows, it can be
daunting to keep track. You need to be able to identify who is access-
ing the system and what they’re doing with the data.
Microsoft recommends that you have a proper auditing strategy to capture the information needed to track user actions so that you can satisfy these types of requests. Most common areas are covered by Dynamics 365. Because auditing takes up more storage and potentially impacts performance, administrators need to turn some of these capabilities on where they may not be on by default.

Refer to Chapter 12, "Security," for details on security strategy and planning, and see the security center overview and the compliance center overview for more details. The Microsoft 365 security center provides the ability to search through Dataverse activity logging for Customer Engagement apps. Your organization may be subject to rules such as the General Data Protection Regulation (GDPR) that give users specific rights to their personal data. You may need to respond to data subject requests (DSRs) to delete a user's personal data.

User access and resource usage

For any application or solution, it's important to understand your organization's resource usage patterns. Business sponsors want to know who is using (and not using) the system and the frequency of business processes and use cases that are being run.
Turning to notifications and application logs to proactively look for entries is a good way to find trouble spots and address them before they impact users.

You can use tools such as Azure Application Insights for Dynamics 365 and other applications and services that are part of the overall IT solution for your organization. Application Insights lets you collect telemetry both in and outside of Dynamics 365.

For example, if a process is initiated by a user in Dynamics 365 that calls on an integrated, but external, system, Application Insights can still detect performance and exceptions at different stages of the execution pipeline. You can see these issues whether they occur at the user interface level, in Dynamics 365, or in the external system. This empowers administrators who monitor alerts to react quickly the moment the exceptions surface. They also have immediate access to information on the source of the issue.

Monitoring and Diagnostic tools in LCS allow administrators to monitor and query logs for issues detected in the system for Finance and Operations apps. Trace logging in Dataverse provides plugin error information for Customer Engagement apps and the Power Platform. You can also use Microsoft 365 service health to identify service issues, and administrators can be notified via email or through the mobile app.
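The pattern of capturing timing and exceptions at each stage of an execution pipeline can be sketched with nothing but the Python standard library. This illustrates the idea only; it is not the Application Insights SDK, and the stage names are hypothetical:

```python
import time
from contextlib import contextmanager

TELEMETRY = []  # collected stage records; a real system would export these

@contextmanager
def track_stage(name):
    """Time one pipeline stage and record its outcome, re-raising failures
    so callers still see them while the stage that failed is recorded."""
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:
        error = repr(exc)
        raise
    finally:
        TELEMETRY.append({
            "stage": name,
            "ms": (time.perf_counter() - start) * 1000,
            "error": error,
        })

# Wrap each hop (UI, Dynamics 365, a hypothetical external system) in a stage.
with track_stage("dynamics-plugin"):
    pass  # stand-in for work inside Dynamics 365

try:
    with track_stage("external-system"):
        raise TimeoutError("integration timed out")  # simulated failure
except TimeoutError:
    pass  # the failure is recorded with its stage before re-raising
```

Because every record carries the stage name, an administrator reading the collected telemetry can see not just that something failed, but exactly which hop in the pipeline surfaced the exception.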
If issues are caught early, before the solution performs worse than optimal, the task will be much more manageable.
Key alerts you can receive from Microsoft include the service updates to
your Dynamics 365 solution. In the next section, we discuss when and
how Dynamics 365 is updated and what you can do as an administrator
to take advantage of the predictable nature of the service updates.
One Version promises to bring predictable release management to these areas. Benefits include:
▪ Opportunities to advance your knowledge
▪ Faster access to new features
▪ Ease in sharing ideas and collaborating with the greater community because everyone is on the same version

With One Version, Dynamics 365 addresses the tough challenges faced by customers and partners and reduces common rollout concerns (Figure 20-1). The solution automatically enhances the predictability of product updates. For example, One Version reduces the effort and capacity needed to test the update. It makes it easier to manage business change and facilitate appropriate adoption.

Fig. 20-1: Common rollout concerns: planning for updates; scheduling effectively to avoid conflict with the project plan; raising awareness of feature changes; handling unsupported techniques; staying aware of what's coming; encouraging adoption; understanding effort and capacity for testing; aligning the team with the implementation plan.

Although One Version greatly reduces the impact of deployments, the solution owner and the implementation team are still responsible for
making sure certain tasks are addressed. These include planning for the
update, assigning ownership of the tasks, raising awareness of coming
changes, and supporting adoption. We cover these items throughout
this section.
Release readiness
System administrators and others who have signed up for service
notifications are alerted about upcoming releases, minor updates, and
bug fixes. Being properly prepared is the key to successfully managing
your solution. This isn’t just up to the Dynamics 365 administrator. You
need tight coordination between systems administrators, the Dynamics
365 project team that works on the platform, and the business user
groups and subject matter experts (SMEs) who are the end users and
champions of the system.
▪ Public preview Some features are made available as public previews, which aren't intended for production use. For planning purposes, these public previews can give you a good idea of what's coming up in the short-term roadmap. Not every feature is available through a public preview.
▪ Early access You can apply these production-ready, fully sup-
ported updates and features to your environment prior to the GA
date. See more details in the following section about the process
of opting in early.
▪ General availability This is the date that these features are
deployed to your environment if the administrator hasn’t opted in
for early access.
Each release wave includes features and functionalities that you can
enable for different types of users:
▪ Users, automatically These features include changes to the user
experience for users and are enabled automatically.
▪ Administrators, makers, or analysts, automatically These
features are meant to be used by administrators, makers, or busi-
ness analysts. They’re enabled automatically.
▪ Users by administrators, makers, or analysts These features
must be enabled or configured by the administrators, makers, or
business analysts to be available for their users.
If you choose to opt in to early access updates, you get early access to features that are typically mandatory changes enabled automatically for users. Each feature in the release notes indicates which category it falls under.
Implementation Guide: Success by Design: Guide 20, Service the solution 606
Deprecated features continue to work and are fully supported until they're officially removed. After removal, the feature or capability no longer works. The deprecation notes provide information on what features are being removed, when this will happen, why it's happening, and what actions you can take to address the impact. Just like when you're getting ready for new features, organizations must plan and prepare well before the removal of features to avoid negative impact from the deprecation.

Refer to the Message center for detailed information on notifications, email preferences, and recipients for service updates.

For Finance and Operations apps, refer to the One Version service updates overview for details on release planning, release notes, deprecations, and release cadence.
Opt in for early access
At this point, you have received the release notification and reviewed the release notes to do an impact assessment. The next step is to work with the project team to test the release with your solution. Some features available through our public preview program may be of interest for your organization.

For Customer Engagement apps and the Power Platform, refer to the release notes, deprecation announcements, and release cadence.
Another important area to cover for early access is to work with the business sponsors to help them understand the impact to the end users. As part of the release, some updates may impact user experience, such as user interface (UI) navigation changes. Even small differences can have a meaningful impact. Imagine users in a large call center scenario, in which every additional second on a call with a customer can impact their service goals. In such a case, business managers want to make sure that the user group receives proper communication and takes time to provide training if necessary.
After this due diligence, your organization can schedule and apply the new release to the production environment. You should time this task for when it's most convenient to the users and administrators. Your organization can determine the best time to apply the release, when there will be the least amount of disruption to end users, other technical dependencies, and impact to other projects. Don't wait for the release to be automatically applied to your environment.
Fig. 20-2: Safe deployment practice for Finance and Operations.
Update cadence

Fig. 20-3: Update cadence (eight updates delivered per year, January through December).

Customers are required to take a minimum of two service updates per year, with a maximum of eight service updates per year (Figure 20-3). You can choose to pause up to three consecutive updates at a time to accommodate your project schedule.

Pausing a service update can apply to the designated user acceptance testing (UAT) sandbox, the production environment, or both. If the pause window ends and the customer hasn't self-updated to a supported service update, Microsoft automatically applies the latest update based on the configuration selection available in LCS.

System updates follow these guidelines:
▪ Updates are backward-compatible
▪ Updates are cumulative
▪ Customers can configure the update window
▪ Quality updates containing hotfixes are only released for the current version (N) or previous version (N-1)
▪ System updates contain new features that you can selectively choose to enable
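Two of these servicing rules lend themselves to simple checks: a pause plan may skip at most three consecutive updates, and quality updates (hotfixes) apply only to the current (N) or previous (N-1) service update. The sketch below is illustrative only; the function names and schedule representation are invented for this example, not part of any Microsoft tooling.

```python
# Illustrative checks for two Finance and Operations servicing rules:
# at most three consecutive paused updates, and hotfixes only for N or N-1.
def pause_plan_valid(paused_flags):
    """paused_flags[i] is True when update i in the yearly schedule is paused."""
    run = longest = 0
    for paused in paused_flags:
        run = run + 1 if paused else 0  # count consecutive pauses
        longest = max(longest, run)
    return longest <= 3

def hotfix_eligible(environment_update, current_update):
    """Quality updates apply only to service update N or N-1."""
    return 0 <= current_update - environment_update <= 1
```

For example, pausing three updates, taking one, then pausing again is valid, while pausing four in a row is not.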
Release readiness
We strongly recommend that you plan ahead and work the updates into
your internal project release cadence. To do so, take the following steps:
▪ Step 1: Plan Have a good understanding of the release schedule and a plan to work this into your application lifecycle management (ALM) strategy. Because you can pause updates up to three months, you can set aside plenty of time for testing, impact analysis, and developing a deployment plan. Use the impact analysis report from LCS to identify areas of change that may affect your solution and help determine the level of effort needed to remediate any impact.
▪ Step 2: Test We recommend using a non-production environment such as your UAT instance to opt in early and apply the release. You can configure service updates through LCS and specify how and when you receive service updates from Microsoft to your environments. As part of the configuration, define the update environment (production) and an alternate sandbox (UAT). Use the Regression Suite Automation Tool (RSAT) to perform regression testing to identify any issues. Work any fixes into your ALM cycle and deployment plans.
▪ Step 3: Deploy After you define the environment and schedule for the service updates through LCS, the back-end tools automatically update the system.
Fig. 20-4: Safe deployment practice for Customer Engagement. The Customer Engagement team's first release progresses through deployment stations (Station 1 through Station 6), each covering specific regions (for example, Station 2: JPN, SAM, CAN, IND; Station 3: APJ, GBR, OCE; Station 4: EUR). Early stations provide extensive integration testing and validation, the solution checker, production quality, and an early view of the weekly release for select customers before the standard release and servicing.
geographies. Therefore, as the deployment cycle of a new release commences, instances in different stations can be on different versions of the solution. As your project team is developing new features that are consumed into your ALM deployment process, set up a version check for Dynamics 365 and make sure that there is a match so that your project doesn't encounter incompatibility issues. When the version of the source environments matches the destination, you can safely deploy your solutions.

Check the latest release of station mapping and their corresponding regions. Release updates to Station 1 through Station 6 follow the dark hours defined for each geography.
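As a sketch of such a version check, the Dataverse Web API exposes a RetrieveVersion function that returns an environment's version string. The environment URLs and token handling below are placeholders, and the "match" rule (same major.minor, destination build not older than the source) is an assumption to adapt to your own deployment policy.

```python
# Sketch: compare Dynamics 365 (Dataverse) versions between a source and a
# destination environment before deploying a solution.
import json
import urllib.request

def fetch_version(env_url, token):
    """Call the Dataverse Web API RetrieveVersion function and return the
    version string, e.g. '9.2.24045.172' (needs a valid OAuth bearer token)."""
    req = urllib.request.Request(
        f"{env_url}/api/data/v9.2/RetrieveVersion()",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["Version"]

def versions_match(source_version, destination_version, parts=2):
    """Assumed policy: environments are compatible for solution deployment when
    the leading parts (major.minor) agree and the destination build is not
    older than the source build."""
    src = [int(p) for p in source_version.split(".")]
    dst = [int(p) for p in destination_version.split(".")]
    return src[:parts] == dst[:parts] and dst >= src
```

A gate like `versions_match(fetch_version(src, token), fetch_version(dst, token))` could then block a pipeline step until the destination station has caught up.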
Update cadence

Customers receive two major updates per year, in the April and October GA releases (Figure 20-5). You can get early access and opt in months before the GA dates. These updates apply to both Power Platform and Dynamics 365 apps. We encourage you to opt in early to test and apply the release. The releases are production-ready and fully supported even when applying prior to the GA date. Activation for major updates is automatic through safe deployment processes for the region where the Dynamics 365 instance resides, on the deployment dates specified for the region.

Dynamics 365 apps have a different cadence from the major releases. For example, Dynamics 365 Marketing and Dynamics 365 Portals have monthly updates. Apps from ISVs on AppSource, Microsoft's app marketplace, may also have a different cadence. You should consult with the ISVs for any third-party apps you're using.
Fig. 20-5: Feature release. The October and April public previews precede the October and April releases. For each wave, administrators can opt in for access to the latest features (often weekly) and for prep time with the latest release before it's automatically applied; each wave delivers its set of features (for example, F1–F6 in one wave, F7–F12 in the next). Continuous updates that can include feature code with no UI impact ship often weekly, and administrators can opt in to experience all UI features coming in the next scheduled update (preview).
Release readiness
We strongly recommend that organizations work updates into their
internal project release cadence. To do so, take the following steps:
▪ Step 1: Opt in for early access Before you apply the changes to existing production or non-production environments (which may disrupt users and developers), we recommend you create a new instance. You can't revert to the previous version, so all testing should be done on a new instance. Take a copy of the test environment that has your latest solution and data and create a new instance from it. Enable early access to apply the new release capabilities. After you opt in, some features are turned on by default; others may require an administrator to explicitly configure them. The details are documented in the release notes.
▪ Step 2: Test Run regression testing to make sure that the solution continues to function as expected. Early opt-in features are production-ready and fully supported, so if you encounter any errors, you can submit a service request to report and get help with issues. Another important aspect of enabling the early access capabilities is that some UI and navigation changes may impact users. Work with your user group to do a side-by-side comparison between the current and opt-in versions. Use the release notes to identify the areas where UI or navigation changes have been made. Document the change with screenshots to be shared with the users. Depending on the significance of the changes, it may warrant some level of end user training as well as communications out to the group.
▪ Step 3: Deploy When the testing is complete, you can turn the early access features on in other non-production instances. This ensures that new features and customizations by the project team are developed on the upcoming early release solution. You may keep one or more instances without the early access features turned on to support production bug fixes or anything that needs to be deployed prior to enabling the early access features in the production environment. When it's time to deploy your solutions to production, you enable the early access features on the production instance. You should notify business users of the new features (within the product as well as anything custom built by the project team) and any changes to how end users will navigate through the system. Timing the deployment is important. Microsoft doesn't recommend opting in at the same time you have a new project release in the production environment. If you encounter deployment issues, it's easier to troubleshoot when you're not deploying multiple solutions to the environment.

Refer to the Dataverse storage capacity guidance to understand the impact on your storage allocation for your tenant when creating new instances from a backup.
Environment maintenance

Protecting your solution and providing continuous availability of service is your primary goal as the system administrator. In a cloud environment, these maintenance jobs are automated, but it's critical for an organization to have a strategy so that these routine jobs are appropriately configured and scheduled. In some cases, you may need to perform these tasks manually but still in alignment with your overall planning and strategy.
Dynamics 365 environment management

Dynamics 365 provides point-in-time restore (PITR) capabilities for databases. This means that all databases are backed up automatically by the system and retained for a set number of days. In the event of accidental corruption or deletion, administrators can choose to restore the database from any of the backups taken. The automated backup system and PITR provide a zero-admin way to protect databases.

Although the backup and recovery operation is dependable, it could also be time-consuming depending on the size of the backup. In a Customer Engagement apps scenario when solution imports fail, it's often better to fix the import issue instead of restoring from a backup. Fixing the import should take significantly less time than restoring.

If your organization requires a proactive approach to manually take backups (such as before a deployment of a new release of your solution), the administrator may be called on to assist. You should perform these tasks in line with your organization's environment strategy.
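Because restore points are only retained for a set number of days, a support runbook can sanity-check a requested restore timestamp before anyone acts on it. The sketch below assumes a 28-day retention period purely for illustration; confirm your environment's actual retention before relying on it.

```python
# Sketch: validate a requested point-in-time restore (PITR) against an
# assumed retention window before escalating it to an administrator.
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=28)  # assumed retention period, for illustration

def restore_point_available(requested: datetime, now: Optional[datetime] = None) -> bool:
    """A restore point is usable only if it lies inside the retention
    window and is not in the future."""
    now = now or datetime.now(timezone.utc)
    return now - RETENTION <= requested <= now
```

A request from three weeks ago would pass this check, while one from two months ago (or a timestamp in the future) would be rejected before wasting anyone's time.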
Data management
Data is central to all applications. It drives business decisions through
analytics and artificial intelligence. It also reveals crucial information
about the overall health of the system and what administrators need
to do for maintenance. The theme of this chapter is to be proactive
in planning and strategizing around the upkeep of your system. Data
maintenance is no different. In this section, we discuss the service
aspects of data management. To explore the broader topic, refer to
Chapter 10, “Data management.”
The rate of growth can fluctuate depending on the number of users or even during certain times of the year if your business practice has special circumstances that may impact record creation. Monitoring storage growth and using historical trends will help estimate data growth. This information can help you determine how often the data archiving and removal process should take place.

You can find details on storage allocation and purchasing additional storage for Finance and Operations apps in the Dynamics 365 Licensing Guide. Read about the storage capacity model for Dataverse and how to check storage growth.
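One simple way to use those historical trends is a linear projection: average the growth across recent samples and extrapolate to estimate when capacity runs out. The sample figures and function name below are illustrative only, not drawn from any real tenant.

```python
# Sketch: estimate months until storage is exhausted from monthly usage
# samples, to help schedule archiving and cleanup runs.
def months_until_full(samples_gb, capacity_gb):
    """Average the monthly growth across samples and extrapolate linearly.
    Returns None when there is no usable upward trend."""
    if len(samples_gb) < 2:
        return None
    growth_per_month = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)
    if growth_per_month <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return (capacity_gb - samples_gb[-1]) / growth_per_month
```

For example, usage of 100, 110, and 120 GB over three months against a 180 GB allocation projects roughly six months of headroom, suggesting an archiving cycle well inside that window.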
What data can be removed?
Planning starts with identifying the types of data that need to be stored over a particular timeframe. Besides cost, there are performance
implications of having a large dataset. Building a data removal and
retention strategy will help determine what to do when data is no
longer useful.
Transactional data may help you make key business decisions, assist
customers, and use AI to determine the next best action. But after
years of inactivity, the data may be taking up storage space and not
providing any value.
You may have logs, notifications, and other system records that you can delete with no impact to the business. You may also have other transactional data that can be deleted. Because Dynamics 365 applications have heavy parent-child relationships between records, pay careful attention to how records are deleted and any impact to related records. Look for triggers that run extension code or workflows when a record is modified or deleted. A delete operation could potentially write a new entry in the audit log to record the transaction. You must account for all these things when planning for bulk deletion.

Review the cleanup routines for Finance and Operations to delete historical logs and notifications. You should only run these cleanup routines after the business has completed a detailed analysis and confirmed that the data is no longer required. You can also free up storage space for Customer Engagement apps.
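Those parent-child relationships imply an ordering constraint: delete dependent records before the records they reference. The sketch below derives a child-first order from a hypothetical relationship map; the table names are illustrative, not a real Dataverse schema.

```python
# Sketch: derive a child-first deletion order from a map of
# parent table -> child tables, so dependent records are removed before
# the records they reference.
def deletion_order(children_of):
    """Post-order traversal: a table is emitted only after all its children."""
    order, seen = [], set()

    def visit(table):
        if table in seen:
            return
        seen.add(table)
        for child in children_of.get(table, []):
            visit(child)
        order.append(table)  # parent lands after all of its children

    for table in children_of:
        visit(table)
    return order
```

With a map like `{"account": ["contact", "opportunity"], "contact": ["activity"]}`, activities come out before contacts, which come out before accounts, so a bulk-delete job can process the list front to back without orphaning or cascade surprises.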
Archive and retention strategy
What happens if the database is growing too large, and you’re seeing
the impact of it, but deleting records isn’t an option? You may need
to retain the data due to organizational, compliance, or regulatory
requirements. In this case, you can archive the data instead.
For example, let’s look at integrating Dynamics 365 with external sys-
tems. It wasn’t very long ago that you needed custom code modules in
order to efficiently pass data to and from Dynamics 365. But with tools
like Power Automate and Azure Logic Apps, you can build very power-
ful integrations through configuration with little to no code.
Reporting is another good example. The out-of-the-box experience
for reporting had limitations in Dynamics 365—you could only build
reports with data stored in Dynamics 365. Now, with tools like Power BI
and its capabilities to build reports from data in and outside of Dynamics
365, you have much more flexibility to quickly build out and embed
powerful reports. Also, some advanced Azure and Microsoft 365 services
coexist with Dynamics 365, and are going through the same type of
evolution. Everything from business functionality to ALM build and
deploy tools are constantly seeing incremental improvements.
The only way to truly understand these benefits and how to best apply
them in your solution for the most return on your IT investment is by
continuing your education. Gain an understanding of where the busi-
ness application domain is headed. Strive to understand what is in the
Dynamics 365 roadmap, why it’s included, and how it can make your
solution and your organization more efficient and robust.
Conclusion
In summary, take steps to be aware of your solution performance, be
proactive in taking action, and be prepared with a solid strategy for
maintaining your environments. Keep up on new trends and tools that
can help improve your solution and your organization.
It all starts with visibility into the health of the system through proper
monitoring. Having the right telemetry and managing notifications
from the system as well as Microsoft will help you to prioritize and act
to address maintenance needs.
Case study
Fruit company learns the
importance of servicing
the solution
An agricultural business that grows fruit implemented Finance and
Operations apps as soon as the cloud version of Dynamics 365 became
available. The company has been a global leader distributing fruit
across different regions, and warehouse operations and transportation
are part of the company’s core business.
Dynamics 365 has been evolving since its initial release, when the appli-
cation and the platform were released as separate components. By the
time continuous updates and a single, unified version became the norm
for Dynamics 365, the fruit producer’s operations in the cloud were
mature in taking updates under this modality. The company was ready
to adopt the modernized update pattern and take advantage of the
continuous innovations from Microsoft. They also wanted to fulfill one of
their expected returns on investment (ROIs) by entrusting Microsoft to
bring new functionality, instead of developing it on their own.
The fruit company’s solution involved standard Dynamics 365 apps,
some extensions, and an ISV. As the direct implementer of the entire
solution, the ISV created a strategic partnership with the company and
provided IT outsourcing services.
But the company noticed that while the ISV kept the solution up to date with the Dynamics 365 releases, the ISV always provided an update of their solution using Microsoft's last update ring, or missed it entirely. Because of this, the company had to apply Microsoft's updates late in the release cycle or, in some cases, not at all.
Then Microsoft notified the fruit company about new functionality for
advanced warehouse operations in an upcoming release of the Supply
Chain Management app. Because of the expansion of the company’s
operations and the complexity of their warehouse management, these
new features were crucial to increasing productivity and saving time
and resources while managing the expansion.
To adopt this new functionality, the company had to test the features
in advance to align their processes. They realized that being late on up-
dating standard features wasn’t workable, and they wanted to optimize
their ALM and validate their solution in early update rings. To be ready
for their peak season, completion of testing was time-sensitive, and the company would only have enough time to adopt new functionality and do proper regression testing if all solution components were in place and ready. So, they asked their ISV to provide software earlier
than the general availability of their new version.
The challenge came when the ISV wasn't aligned to that timeline. The ISV's usual practice of adopting Dynamics 365 releases when they became generally available, in the last ring, meant the fruit company was frequently one version behind on Finance and Operations apps releases.
After conversations with the fruit company, the ISV agreed to align
with Microsoft’s update rings and give the company, and their other
customers, an opportunity to test their entire solution when the stan-
dard release was available.
Guide 21
Transition to support
Implementation Guide: Success by Design: Guide 21, Transition to support 625
Poor service is always more
expensive than good service.
Introduction
When introducing a new system into an organization,
we need to think through the various areas that will
influence how the project transitions from project
mode to support mode.
In this section, we discuss how to construct a strategy to help you
prepare, define, and operate a support model.
Fig. 21-1: Support scope (enterprise architecture, business and IT policies, Dynamics 365-specific considerations, business processes, business continuity, and system update cycles), support models, and support operations.

Organizations that spend the necessary time and energy to construct a strategy that explicitly addresses how to create a fit-for-purpose support organization for their Dynamics 365 application tend to have better user satisfaction, better adoption of the system, and therefore higher-quality outcomes for the business.

If this is the first Dynamics 365 business application for your company, you may not have experience in setting up the support organization, support infrastructure, and support procedures for this application.

Support models
The scope definition influences the decisions that need to be made to identify the best support model and the constitution of the resulting support organization. This helps define the "who" of the support model.

Support operations
Finally, we discuss the distinct support requirements that emerge from the transition and hypercare project phases.

Enterprise architecture
In most organizations, the Dynamics 365 system is embedded within the wider enterprise system landscape. The Dynamics 365 application has connections to multiple systems and to underlying and coexisting technologies. When considering the strategy for defining the support model, the focus is often solely on the Dynamics 365 application architecture. It's worth accounting for the changes, constraints, and procedures of the wider systems that interact with Dynamics 365 applications. You should make a distinction between new systems being introduced into the enterprise architecture and those that may be influenced by the new systems.
During the project, the new Dynamics 365 system will probably be implemented within a special sandbox (test environment) and not necessarily be subject to all the influences and rules that the production system is subject to. This also applies to the third-party test systems that are part of the middleware or integrations. For example, production environments have more limited access to the Dynamics SQL database, and the process by which custom code is promoted to production or how the Microsoft system updates are applied isn't the same. You should specifically examine the implications of the production system environment on support operations and not rely solely on the experiences of the test environment.
The impact of, and on, the surrounding architecture can be difficult
for the customer project team working on the Dynamics 365 business
application to fully identify. In almost all cases, you need the enterprise
architects from IT and the business to be involved in identifying the
changes. Some changes may also impact the roles of individuals formally
part of a support organization and those in peripheral organizations.
Business and IT policies

With any new system, you need to think about how it will operate within the wider organization's policies and standards. When Dynamics 365 is introduced into the enterprise, it needs to follow the applicable policies already in place and may require new operating policies. In either case, you need to review the existing policies and standards to determine which policies to add, which to review, and which to apply to the support model (Figure 21-2).
Fig. 21-2: Policy areas to review include group and company policies and standards, Dynamics 365 application-level policies, data classification and retention, group and company-level security and access management, and regulatory compliance.

In many cases, the policies apply not only to the creation of the support model and its scope of responsibility, but also to how it operates as an organization and how it addresses the lifecycle of a support request.

Group and company policies and standards
When evaluating group-level policies or standards, you may have to review the impact of business policies and procedures on various levels:
▪ Contracting with and managing third-party vendors The support model needs to include some type of contract with technology partners and with Microsoft
You could set up some of these policies within the Dynamics 365 application, such as new vendor approval policies, customer credit limits, purchase order approval thresholds, and travel and expense policies. The support team may need to help provide information on compliance or enforce compliance as part of their duties.
The Dynamics 365 support team needs to work with these other
enterprise IT and business teams to define the rules and procedures
for managing some of the security topics that may impact Dynamics
365 applications:
▪ Azure Active Directory groups and identities
▪ Single sign-on (SSO)
▪ Multifactor authentication
▪ Mobile device authentication and management
▪ Authentication and management for custom applications working
on Microsoft Dataverse (such as Power Platform apps), which
requires an understanding of Dataverse security concepts
▪ Application access for third parties (such as vendors and customers)
▪ Application access for third-party partner support organizations
(such as technology partners and Microsoft)
▪ Service account and administrator account use and management
▪ Password renewals and security certificate rotations
▪ Secure and encrypted communications within the enterprise and outside the enterprise (such as those involved with integrations with internal systems, or with external web services or banking systems)

Identify any enterprise-level policies that intersect with application-level requirements.

The Microsoft Trust Center can help your organization consider overall security and managing compliance in the cloud. Chapter 12, "Security," provides a more detailed discussion of security for Dynamics 365 business applications.
Data classification and retention
Consider how the organization’s data classification and retention pol-
icies reflect on and need to be expanded to include the new Dynamics
365 application:
▪ How is the support team expected to enable and enforce these
policies?
▪ What is the impact on the backup, restore, and archive process?
▪ What is the impact on creating and managing database copies?
▪ Do any data classification properties flow between systems, or do
they need to be recreated or audited by the support team?
All of these different areas of business and IT policies shape the nature,
size, and scope of the support organization. Early examination of these
factors will help the team be effective from the start.
Dynamics 365-specific considerations

In this section, we examine some topics specific to Dynamics 365 business applications that we should consider when designing the support model. These can be broadly grouped by operational and maintenance topics and business process topics.
Typically, you need to apply these tasks and considerations for the
following environments:
▪ Dynamics 365 application support environments, which are recent
copies of the production system
▪ Test environment for testing the next versions of the application
software
▪ Development environments
▪ Any related data migration, training, or integration environments
▪ Test environments for integrated systems
The level of skill and effort required to manage integrations depends on their complexity, criticality, and robustness.

Because Dynamics 365 applications are cloud and SaaS-based, many maintenance tasks and responsibilities that are common to on-premises solutions are now managed by Microsoft. In general, the burden of infrastructure provision and maintenance for production systems is reduced, which leaves more time to focus on managing and improving business process performance.

Define the system maintenance requirements and what is within the scope of responsibilities for the support teams. Typically, these are in the following areas:
▪ Servicing the non-production environments, which can include:
▫ Requesting and configuring new Dynamics 365 environments
▫ Requesting and configuring database copies and restores
between environments
▫ Managing any customer-managed, cloud-hosted environments
▫ Performing specifically requested backups
▪ Managing system operations, which can include:
▫ Assigning users to security roles
▫ Reviewing and running periodic system cleanup jobs
▫ Managing system administration messages and notifications
▫ Batch calendar management
▫ System update calendar
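To illustrate the first of those operations items, assigning a user to a security role in Dataverse is an "associate" request in the Web API. The helper below only constructs the request; the environment URL and GUIDs are placeholders, and authentication (an OAuth bearer token on the actual POST) is omitted.

```python
# Sketch: build the Dataverse Web API request that associates a security
# role with a user (systemuserroles_association). Values are placeholders.
import json

def associate_role_request(env_url, user_id, role_id):
    """Return (url, body) for a POST that adds the user to the role."""
    url = (f"{env_url}/api/data/v9.2/systemusers({user_id})"
           f"/systemuserroles_association/$ref")
    body = json.dumps({"@odata.id": f"{env_url}/api/data/v9.2/roles({role_id})"})
    return url, body
```

Scripting this kind of request (rather than clicking through the admin UI) makes routine role assignments repeatable and auditable, which fits the proactive operations strategy this section describes.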
Performance management
Performance management for a Dynamics 365 business application is
a mix of tasks and responsibilities for Microsoft and for the customer. In
this section, we consider the implications on the support model.
The support team needs to have some way to proactively monitor and
respond to any questions from the users on performance. Therefore,
the support team needs to be able to do the following:
▪ Understand the impact of the reported performance issue on
the business
Business processes

As a business application, supporting the users in the day-to-day processes is a key function of a Dynamics 365 support organization. The support organization is expected to provide coverage across all the key business processes, or at a minimum, know where to direct questions and issues that were agreed as being outside their support scope.

Based on the definition of the key business processes in scope, consider the following for each process:
▪ What is the level of expertise needed in the business process?
▪ What are the likely support tasks and queries related to this process?
▪ What is the type of usage, frequency, and volume?
These questions and more will help shape the operating model for the
support team.
Business continuity

Many organizations need to have a business continuity strategy and exercise regular checks of business continuity in case of a system-down disaster. This may be required by external regulations or by internal policies. In any case, the support organization is probably expected to play a major role.

Depending on the size and complexity of the system landscape and the types of disaster scenarios being exercised, this may need a significant amount of preparation, coordination, and timely communication between multiple parties.
help mitigate to reduce the impact on the business. You may also need
to apply specific setups (such as IP allowlists) to the secondary site.
System update cycles
Microsoft Dynamics 365 application updates are one of the changes
that the support team probably needs to manage. From the perspective
of defining the scope of responsibilities for the support organization,
you must understand what is involved and plan for the updates.
Creating a calendar for the Microsoft updates helps the support team
plan for the resources and effort associated with the following:
▪ Microsoft Dynamics 365 updates
▪ ISV updates
▪ Updates for custom code
▪ Associated testing, including any regression testing and release
management
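As one illustration, such a calendar can be kept as simple structured data that the support team queries when planning effort. The event categories, dates, and helper below are hypothetical, not part of any Dynamics 365 tooling; a minimal sketch in Python:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UpdateEvent:
    when: date
    source: str        # "Microsoft", "ISV", or "Custom" -- illustrative categories
    description: str
    # Every update carries associated testing and release work.
    tasks: list = field(default_factory=lambda: ["regression testing", "release management"])

# A hypothetical calendar; dates and entries are made up for illustration.
calendar = [
    UpdateEvent(date(2024, 4, 1), "Microsoft", "Dynamics 365 service update"),
    UpdateEvent(date(2024, 4, 15), "ISV", "Tax engine hotfix"),
    UpdateEvent(date(2024, 5, 2), "Custom", "Custom code fix wave"),
]

def upcoming(events, today):
    """Events on or after today, soonest first, so effort can be planned ahead."""
    return sorted((e for e in events if e.when >= today), key=lambda e: e.when)
```

A sketch like this makes it easy to see, for any planning horizon, which update sources demand testing and release-management capacity at the same time.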
First level
Many customers have a concept of identifying and nominating super
users within a business unit. Typically, the super users gained their
deeper knowledge of the system and processes in their previous role as
subject matter experts (SMEs) during the project cycle.
▪ Triage and communicate issues in a way that makes it easier for
the helpdesk or resolving authority to understand the issue, repli-
cate it, and fix it rapidly
▪ Collate best practices, FAQs, and issues, and provide the core team
better data on areas of priority for them to tackle
▪ Provide early warning on issues that have the potential to escalate,
and help with better adoption by the business
Fig. 21-3: Support escalation levels by support model. In each model,
level 1 is super users per business unit or function and level 5 is
Microsoft support or ISV support; level 3 is the Dynamics 365 CoE or
Dynamics 365 project core team (internal and mixed models) or the
partner (outsourced model).
Second level
Almost all customers have some internal helpdesk function for IT
systems, and many also have them for business applications. The size
and nature of the helpdesk varies depending on the model adopted.
For a fully outsourced model, the internal helpdesk registers the issue
and assigns it to the resolving authority (internal or external). This fully
outsourced model is tougher to deliver for highly customized business
systems than for less customized ones. Determining the correct
resolving authority can be tricky in business systems: even assuming
the super users have eliminated a business process or training issue
as the cause, you may have many different system layers, such as
an internal network, infrastructure software managed internally or by
Microsoft, standard Dynamics 365, custom code, ISV code, or integration
software at either end.
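To make that triage step repeatable, some teams codify the layer-to-authority mapping. The mapping below is purely illustrative: the layer names follow the examples in the text, and the authorities are assumptions rather than an official escalation matrix.

```python
# Hypothetical mapping from the diagnosed system layer to a resolving
# authority; adapt both sides to your own support model.
RESOLVING_AUTHORITY = {
    "internal network": "Internal IT",
    "infrastructure": "Internal IT or Microsoft",
    "standard Dynamics 365": "Microsoft support",
    "custom code": "Dynamics 365 CoE",
    "ISV code": "ISV support",
    "integration software": "Integration team",
}

def route(layer: str) -> str:
    """Pick a resolving authority for a diagnosed layer.

    When the layer has not been identified yet, default to the internal
    Dynamics 365 team for triage, mirroring the fully internal model.
    """
    return RESOLVING_AUTHORITY.get(layer, "Internal Dynamics 365 team (triage)")
```

Even as a simple lookup, writing the mapping down forces the scoping conversation this chapter recommends: every layer must have a named owner before go live.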
In the fully internal model, the internal helpdesk also routes the issue
to a small internal Dynamics 365 team that triages the issue and helps
determine the best resolving authority. This team is also responsible for
ensuring the issue is driven to resolution, regardless of the number of
parties (internal or external) that may need to be involved. The differ-
ence is that the next resolving authority may be the internal Dynamics
365 Center of Excellence (CoE).
Third level
In a fully outsourced model, the partner is responsible for triage and
resolution. In the mixed model, the most common scenario is for the
internal Dynamics 365 support team to determine if they can resolve
the issue; if not, they work with the Dynamics 365 CoE and the partner
to ensure it’s well understood and correctly prioritized.
In the fully internal and mixed model, the Dynamics 365 CoE includes
functional and technical experts (including developers) who can
resolve issues with custom code. If the issue is seen to be with stan-
dard Dynamics 365 or an ISV, the CoE logs the issue and works with
Microsoft or the ISV to get it resolved.
Fourth level
In the fully outsourced model, the partner manages the process and
the customer is involved as necessary. Most customers tend to have
some parts of the solution written or managed by their internal IT team
(such as integrations), so you still need a definition of how the partner
should work with the customer.
In the fully internal model, the Dynamics 365 CoE takes on the diagno-
sis and resolution if it’s within their control. They only involve external
parties if the issue lies with standard Dynamics 365, platform hosting,
or an ISV.
In the mixed model, the internal Dynamics 365 CoE or core team
typically fixes the simpler issues, but involves a partner for issues that
require deeper Dynamics 365 knowledge or for complex issues.
When the issue is resolved, the method to get the fix back into produc-
tion also requires a clear definition of the expected standards, testing
regimen, and deployment process. Even in a mostly outsourced model,
the partner drives this to a pre-production environment and the inter-
nal team deploys to production. Most customers don’t give partners
admin access to production environments.
Fifth level
Registering the issue with Microsoft support tends to be the common
escalation point, after the determination is made that the most likely
root cause is the standard Dynamics 365 software or service.
▪ Dynamics 365 application system update management (code
branching, any software development, unit testing new changes, etc.)
Responsibilities may also overlap with those for business process support,
with the following distinctions for the expected areas of understanding:
▪ Gaining and maintaining a sufficient understanding of the relative
criticality of the various functions within a business process
▪ Gaining and maintaining a deep understanding of the underlying
technical processes involved in the system functions (especially
integrations, custom code, and critical processes)
support team may also be expected to keep track of new features and
developments (positive and otherwise) that may impact the system.
Often, the planning for the break/fix part of a support team’s orga-
nization is well thought through, but the planning for other areas of
responsibility may be neglected. For example, many support team
members may also need to be involved in helping with the lifecycle of
the next system update. This may include the following duties:
▪ Helping with assessing the impact of proposed changes on the
current system
▪ Providing insights into areas that need reinforcement based on
statistical analysis of the areas with the most issues
▪ Testing the changes
▪ Updating documentation to reflect process and system changes
▪ Training in any new processes
▪ Sending communications of the system update
▪ Managing the next release through the development, test, and
production environments
▪ Refreshing the support environments with the latest production
environment
This section concentrated on the tasks and activities that your support
organization may need to cover. For details on operational consider-
ations, refer to Chapter 20, “Service the solution.”
The following is a typical set of roles (depending on the size and com-
plexity of the implementation, sometimes multiple roles may be filled
by a single individual):
▪ Business super user(s) As we discussed earlier, super users serve
as first-line support with some form of part-time role. The level of
formal recognition of the role varies from customer to customer.
▪ Business process expert(s) Also called SMEs or functional
leads, these experts in the support team usually have a full-time
or near-full-time role. A lead (technical architect) usually has
oversight on the full solution architecture across all the business
process workstreams.
▪ Technical architect Also called system architect or solution
architect, this role within the support team ensures the enterprise
and Dynamics 365 technical standards are met. They have over-
sight across all the technical tasks and duties mentioned earlier in
this chapter. The technical experts, developer, and release manag-
er roles often report to the technical architect.
▪ Technical expert(s) This role has expertise in the technical
servicing and support duties that we discussed earlier.
▪ Developer(s) This role is responsible for bug fixes to custom
code and writing code for new custom functionality.
▪ Release manager This role is responsible for confirming that a
release has been appropriately validated for release to production.
They’re also often responsible for other servicing and support tasks.
▪ Support manager This manager is accountable for the support
services for Dynamics 365 applications (and often other related
systems).
▪ Business and IT stakeholders You should consider these roles
significant because their sponsorship, direction, prioritization, and
decisions are key parts of establishing the right support organiza-
tion, defining fit-for-purpose processes for support, and meeting
business expectations.
Fig. 21-4: Support organization structure (figure). Executive sponsors
provide executive guidance, set priorities, authorize budgets, clear
roadblocks, and serve as a secondary escalation point.
by analyzing the scope of the requirements for the support organiza-
tion (as discussed earlier in this chapter) as well as the size, complexity,
functional spread, and geographical (time zone, language, local data
regulations) distribution of the implementation.
Figure: Implementation roles and responsibilities (customer leading,
supported by the implementation partner). Infrastructure: infrastructure
management and deployment; datacenter networking, power, and cooling.
User/data: security, identity configuration, and management.
Application: define and test business processes; develop and test
customizations; monitor sandbox environments.
Consider the full set of tasks and activities in scope for the support or-
ganization (as discussed earlier) and map these to the various roles and
resolving authorities over the full lifecycle of a support job or request
so that no gaps appear in the flow. You can use this to make sure that
the specific responsibilities of all the roles and resolving authorities can
be mapped to agreements and service-level expectations.
For internal parties, this may mean defining budget and resource splits
and less formal service delivery agreements. For third parties, that may
mean formal contracts with formal SLAs. Mismatched expectations
between the customer and a third-party support provider in the mid-
dle of dealing with high-priority issues are not uncommon. A contract
created based on a deeper understanding of the expected tasks and
service expectations is much more likely to avoid misaligned expecta-
tions and provide a faster path through the process.
Transition
Support organizations for business applications are rarely fully formed
on the first day of go live. You normally have a transition period from
project mode to support mode. However, the transition period often
starts later than it ideally should, and the quality of the transition is
often not sufficient to provide the level of service that is expected.
If, however, the existing SMEs supporting the legacy system are
expected to support the new system, consider formalizing their in-
volvement in the project during the Implement phase, from the early
build steps and especially in shadowing the business functional leaders
at critical points in design and software playbacks.
You can apply a similar approach to the technical roles: they can be
involved in shadowing the project team at critical points and be given
project tasks to complete (under the supervision of the project team).
Requirements management
When the application is in production, it doesn't mean that the system
solution can now be considered static for the next few years. The
solution will need to deal with new requirements for several reasons:
▪ Some projects implement a minimum viable product (MVP) on
their first go live, with the goal to incrementally add the
lower-priority requirements over time
▪ Some projects have planned multiple rollouts that may impact the
already live areas
▪ Some changes to Dynamics 365 are driven by changes in connected
systems in the enterprise
▪ Businesses need to react to the changing world around them
▪ In a modern cloud SaaS world, incremental functional and
technical updates from Microsoft and ISVs help keep the customer's
solution secure and updated
Change management
However, support teams may need to accommodate other sources of
change in the production system, such as the following:
▪ Core data changes (new customers, suppliers, items as part of
normal business operations)
▪ System parameter changes (such as in response to the growing
volume of data)
▪ Configuration changes (such as in response to regulatory changes)
▪ Approval process changes in the system (such as in response to
internal policy changes)
▪ Bug fixes
Some of these changes can be made within the agreed change control
process enforced and facilitated by the system itself. In other cases,
proposed changes need to be reviewed, prioritized, and approved by
business or IT stakeholders and any other relevant parties.
Hypercare
The role of the support team during this period is critical, and it's
important that the expectations are clearly set out and agreed with all parties:
▪ Define a clear exit strategy for the end of the hypercare period
based on meeting explicit criteria. For example, criteria might
include no open P1 issues without an acceptable action plan, key
business processes are operating at an acceptable efficiency rate,
over 90 percent SLA targets being met for P1 and P2 support re-
quests, or the support team can resolve over 95 percent of support
requests without using resources reserved for hypercare only.
▪ Decide on an end date for the hypercare period and have an
action plan in place to meet the exit criteria.
▪ Define expectations and an explicit plan for the project team (and
partner resources) during the hypercare period so you can get reli-
able help from the team. Otherwise, you may find that the project
team is committed to working on the next phase of the project
and will resist being drawn into support issues.
▪ Make sure the support team gets the necessary documentation
and top-up training from the project team and implementation
partner during the hypercare period.
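Exit criteria work best when they are measurable. The sketch below assumes the illustrative thresholds from the list above (no open P1 issues without an action plan, over 90 percent SLA attainment for P1 and P2 requests, over 95 percent of requests resolved without hypercare-only resources); the function name, signature, and numbers are hypothetical, and a real project would agree on its own criteria.

```python
def hypercare_exit_ready(open_p1_without_plan: int,
                         p1_p2_sla_met_pct: float,
                         resolved_without_hypercare_pct: float) -> bool:
    """True when all example exit criteria are met.

    Thresholds mirror the illustrative criteria in the text; real projects
    would add their own checks, such as process-efficiency measures.
    """
    return (open_p1_without_plan == 0
            and p1_p2_sla_met_pct > 90.0
            and resolved_without_hypercare_pct > 95.0)
```

Reviewing such a check weekly against real ticket data gives the steering group an objective basis for ending hypercare on the planned date.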
Conclusion
During operational use of the Dynamics 365 system, the support
operations are expected to function efficiently and evolve alongside
the expanding use of the system. For that to happen smoothly, the
preparation and definition of the support operating model are essen-
tial precursors.
Use the system update cycles section, including managing new
requirements and changes, to define the means by which the Dynamics 365
solution can stay updated and evolve with the additional and
improved features. This should deliver a detailed definition of the tasks,
the roles responsible, and the standards expected to continuously
improve the business value from the Dynamics 365 system. This is im-
portant to ensure that the support organization can keep the current
operational system functioning optimally while also working in parallel
on the next update.
Finally, transition guidelines can help you make sure that the transition
from project to support happens incrementally and through practical
experience of the system during implementation. We also encourage
you to validate the readiness of your support processes and organiza-
tion prior to go live.
Operating considerations
The nature and details of the support tasks are influenced by the spe-
cifics of the business application because they are so embedded in the
All of these topics have a bearing on the operational patterns that the
team needs to support.
Establishing a formal escalation procedure with defined criteria will also
help prevent frustration on slow-moving cases and help mitigate the
impact of those that escalate issues beyond their true business impact.
Well-defined scope and support operations will yield an efficient
support model that can evolve as needed.
Tools and access
Most support organizations have some type of internal helpdesk
software that is used to manage support requests across all IT
services. Most continue using their existing helpdesk for Dynamics 365
applications but need to consider new internal and external resolving
authorities. Third parties such as partner support organizations and
ISVs have their own helpdesk systems. When the issue is diagnosed as
likely to be in standard Dynamics 365, the recommended method of
registering the issue with Microsoft is to use the in-application process
(Support in Dynamics 365 Finance and Operations apps, or Power
Platform support for Dynamics 365 CE). Tracing a support request across
multiple technologies and especially across third-party organizations
requires some deliberate planning and possible configuration so that
the internal helpdesk has a reference from start to finish of any ticket.
For example, if the policy only allows for partners to view anonymized
customer data, having an automated way to copy the production
system that includes anonymization will help reduce delays when
troubleshooting or testing.
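One possible shape for such an anonymization step, sketched in Python: the field list, helper names, and record format are assumptions for illustration, not a Dynamics 365 API. Deterministic hashing keeps related records consistent across the copy while hiding the real values.

```python
import hashlib

# Illustrative set of fields treated as personally identifiable;
# not an official Dynamics 365 schema.
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(value: str) -> str:
    # Deterministic masking: the same input always yields the same token,
    # so relationships between records survive the copy.
    return "anon-" + hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def anonymize_records(records):
    """Return a masked copy of exported records, leaving non-PII fields intact."""
    return [
        {k: (pseudonymize(v) if k in PII_FIELDS and isinstance(v, str) else v)
         for k, v in rec.items()}
        for rec in records
    ]

# Hypothetical exported record for illustration.
customers = [{"id": 1, "name": "Contoso Contact",
              "email": "a@contoso.com", "credit_limit": 5000}]
masked = anonymize_records(customers)
```

Automating a step like this in the environment-copy pipeline means a partner-accessible environment can be refreshed on demand without a manual scrubbing exercise.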
Avoid delays by defining access policies for partner resources.
Checklist
Support operational considerations
Establish a transition strategy that drives the support team to
participate in the emerging solution and improve their support
readiness over the project lifecycle.
During the Prepare phase, when some members of the support team
were asked to assist with the testing, the team didn’t feel ready to par-
ticipate until they had been through the training, which was scheduled
after the testing. The project SMEs didn’t feel they could conduct the
internal training because they also had very little hands-on experience
in the working system.
When UAT was complete, the project team had other unplanned
activities that were considered to be higher priority, and so the training
and handover to the support team was reduced to two days.
The go live went ahead on schedule, but the support team struggled
to adequately support the business. The initial assumptions that the
support team would just pick up the new system support were found
to be mistaken, because the underlying processes had also undergone
significant changes. The support team hadn’t been informed about
all the business process decisions and weren’t familiar with the new
control and approval requirements configured in the Dynamics 365
application. The shortened training and handover were conducted
without reference to the significantly changed business processes or to
the new enterprise architecture, so didn’t provide the necessary context
to support a production system.
The support operating model had been revised, in theory, from the
distributed, system-level team structure to be more business process
and enterprise IT-based, but it wasn't exercised during the Implement phase.
In the previous support operating model, all queries and issues came
directly (and with little structure) to an individual support team member,
and the full lifecycle of the issue was performed with little external visi-
bility. In the enterprise-level support, even with the first limited rollout,
the support team was receiving large numbers of administrative queries,
which reduced time to address the more business-critical tickets.
The project team had planned to start working full-time on the next
phase of the project after only two weeks of hypercare. The budget
hours for the implementation partner hypercare support were planned
to last four to six weeks, but were exhausted within two weeks or so
because the support team needed far more of their time. Many SLAs
were missed in the early weeks, and the business was forced to formally
extend the hypercare period for both the project team and the imple-
mentation partner because trying to get the timely attention of the
implementation teams was difficult given that they had other planned
priorities for the next project phase.
After the P1 and P2 support tickets were resolved and the volume of
new tickets was down to a manageable level, the post-go live
retrospective review made the following key recommendations to be
enacted as soon as possible, to improve the support outcomes during
the next rollout phase:
▪ The scope of the activities, tasks, and actions expected from the
support team should be explicitly defined, in the context of the
new system, processes, people, and architecture.
▪ The support operating model needs to have a more formal
This story exemplifies the adage “Poor service is always more expensive
than good service.”
Faisal Mohamood
General Manager, FastTrack for Dynamics 365
Acknowledgments
This book celebrates Microsoft’s desire to share the collective thinking
and experience of our FastTrack for Dynamics 365 team, which is
currently made up of more than 140 solution architects who serve
Microsoft Dynamics 365 customers and partners around the world.
For being the kind of leader who purposely carves out time in your
busy schedule to think about, write about, and share your vision for
Dynamics 365 and the Microsoft Power Platform, thank you, James
Phillips. We hope that the way you’ve inspired us comes through on
every page of this book.
For your tireless efforts in making Dynamics 365 a better product and
for making us better solution architects, thank you, Muhammad Alam.
We’re also grateful for the time you gave us to round out the content
in Chapter 1.
Thank you to the book's authors and reviewers. You kept your promise
to write the truth into each chapter, and to make sure the message
they contain is valuable to our readers.
To the entire Steyer Content team, for your professional and highly
competent effort in advancing the book's content and design: thank you.
Finally, a special thanks and debt of appreciation is due to all our
current and former Dynamics 365 customers and partners, from whom
we learn and by whom we're inspired with each encounter. We marvel
at what you do.
"No one who achieves success does so without acknowledging the help
of others. The wise and the confident acknowledge this help with gratitude."
—Alfred North Whitehead
—The FastTrack for Dynamics 365 Team
Appendix
Test strategy — Test plan: Plan the testing strategy. Define test
activities such as unit testing, functional, user experience (UX),
integration, system integration testing (SIT), performance, and time
and motion study. Roles: Test lead, Test consultant, Functional
consultant, Solution architect, Customer IT team, Customer business
consultants.
Data strategy — Data migration plan and execution strategy: Define
data that needs to be migrated to Dynamics 365 apps, along with data
volumes, approach, and tools to be used. Define a high-level plan of
data migration activities, such as data extraction, transformation,
and load; migration runs; initial and incremental migrations; data
validations and data verification; and production migration. Roles:
Customer IT architect, Solution architect, Functional consultant,
Data migration lead/architect.
Security strategy — Security strategy: Outline the strategy, including
scope, key requirements, approach, and technology and tools. Roles:
Customer information security group (ISG) team, Solution architect,
Identity SME.
Security strategy — Federation, SSO: Define the needs for single
sign-on and federation with other identity providers. Roles: Customer
infrastructure SME, Customer IT architect.
Security strategy — Security roles: Define all the roles required and
assign these roles to business process activities so as to map them to
business roles/personas.
Security strategy (continued) — Azure Active Directory (AAD) access
management: Define the requirements for Active Directory or Azure
Active Directory, identity integration needs, and cloud identities.
Roles: Customer information security group (ISG) team, Solution
architect, Identity SME.
Security strategy (continued) — Information security: Elicit security,
privacy, and compliance requirements; regulatory needs; and audit
events. Roles: Customer infrastructure SME, Customer IT architect.
ALM strategy (continued) — Release process: Define the processes for
release management of the solution. Roles: DevOps consultant, Customer
IT architect, Solution architect.
ALM strategy (continued) — ISV ALM: Define the process for managing
the DevOps process for other components and solutions (such as
Azure/non-Azure components) that make up the solution. Roles:
Technical lead, Customer IT team.
Program strategy — Project scope and requirements list signoff:
Following all related activities, the project scope and RTM are to be
signed off on. Roles: Customer PMO, Customer business stakeholders.
Program strategy — Cutover strategy and plan: Complete this
deliverable by the start of the build. Roles: Project manager,
Solution architect.
Program strategy — Solution design signoff: Once design is
documented, it needs to be signed off on. All changes that are
required follow review, approval, and version control. Roles:
Functional consultant/architect.
Test strategy — Develop unit test cases: Build unit test cases for the
defined functionality to ensure TDD (test-driven development)
practices are adhered to. These unit test cases should be executed as
part of CI/CD pipelines. Roles: Test lead, Test consultant(s), DevOps
consultant, Technical consultant(s), Performance test consultant.
Evaluate migration in Dynamics vs. data warehouse storage.
Security strategy (continued) — Security roles (F&O): When a custom
role needs to be created, a good practice is to not modify the
standard roles, because modifying them will impact the continuous
updates from Microsoft. Apply strict change management control. Roles:
Technical lead, Solution architect, Functional consultant, Customer IT
architect, Customer information security group (ISG).
Testing strategy — UAT test cases: Refine UAT test cases used for
validating the solution. Roles: Customer business users (UAT test users).
Testing strategy — UAT test report: Prepare and present the UAT test
execution report that baselines the UAT signoffs. Roles: Test lead,
Functional consultant, Test consultant(s).
Testing strategy (continued) — Performance benchmark testing results:
These results help establish the confidence to go live for the
anticipated concurrent loads and transactions. Roles: Performance test
lead, Solution architect, Customer IT architect.
Data strategy — Data migration execution results/signoff: The team
completes the last cutover migration of the required data following
the shutdown of the incumbent systems, before opening Dynamics 365
for go live. Roles: Tech lead, Technical consultant(s), Solution
architect, Customer IT architect.
ALM strategy — Escrow: Escrow all the project artifacts collected
during the Initiate, Implement, and Prepare phases. Roles: Tech lead,
Technical consultant(s), Solution architect, Customer IT architect.
ALM strategy — Knowledge transition documents: Register the documents
and maintain all pre- and post-deployment activities that the support
team should consider.
User adoption/usage — User adoption document (user interviews,
feedback loops): Document the solution usage patterns and user
interviews, along with strategies to improve the adoption of the
system. Roles: Customer business users, Customer IT architect, PMO.