Data Science for Supply Chain Whitepaper
INTRODUCTION
It seems obvious that Data is one of the foundations of Supply Chain
Management: as the basic component of the information flow, Data is part
of the very definition of Supply Chains: global networks used to deliver
goods and services through engineered flows of information, physical
goods and cash (source: APICS).
From Master Data driving ERP software to real-time IoT streams enabling fine-grained tracking and optimization,
Data is omnipresent in Supply Chain processes. However, this abundant information is not always leveraged to
actually improve the overall flow. Because it is often locked away in specialized software, siloed between
departments of the same company or simply not made accessible to business practitioners, the promise of a
data-driven Supply Chain falls short of expectations.
At IRIS by Argon & Co, we help our clients build high-performing global Supply Chains by leveraging the
power and benefits of Data Science techniques and tools, combined with a transformational approach to
processes and organization.
To tackle this challenge, one approach has proved effective: the agile development and deployment of high-
value, targeted use-cases to support Supply Chain operations. In formats as varied as dashboards, custom
business planning tools, AI models, low-code apps and automated workflows, these solutions fill the functional
gaps left by broader, wider-scope software.
In this kind of architecture, legacy software such as ERPs, APS, WMS or TMS still serves as the backbone
of Supply Chain Management processes. The relevance of these systems for managing both master data and
transactional data under Supply Chain Management best practices is not under debate, with decades of
experience baked into each solution’s functionalities.
Now, to achieve the required agility over time, these monolithic solutions need to be augmented with targeted
business tools. Benefits arise when such tools:
• Effectively improve the efficiency of Supply Chain processes (example use-case: improved forecast
accuracy thanks to a custom forecasting algorithm).
• Or provide a competitive advantage (e.g. a new vendor-managed inventory service enabled by shortage
prediction at SKU level).
• Can be quickly developed and deployed (within a few weeks or months).
• Are continuously updated to fit the latest requirements dictated by the Supply Chain reality.
We think of Data Science as the sum of components to be combined to serve this objective: Data
Visualization tools, Data Platforms, advanced analytics, AI / Machine Learning algorithms, etc.
Conceptually, data-driven solutions for Supply Chains fall into four distinct categories, in increasing
order of maturity (or complexity):
• Descriptive: e.g. reporting on stock situations across all DCs and Stores, at a glance.
• Diagnostic: e.g. linking a delay on a particular raw material to the customer orders that will be affected
down the chain.
• Predictive: e.g. probabilistic forecasting of likely demand quantities for the short term.
• Prescriptive: e.g. recommending an additional purchase order to a secondary supplier, because of
(i) a probable increase in demand, (ii) a predicted delay from the primary supplier and (iii) a low global stock
level for the corresponding finished goods.
In the examples above, it is clear that the most mature use-cases combine intelligence from the
lower-level applications. However, even the less mature use-cases often show a clear ROI on their own, before
serving as a stepping stone towards more advanced applications.
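To make the prescriptive example above more concrete, here is a minimal, purely illustrative Python sketch of how such a recommendation rule could combine the three signals; the field names, thresholds and decision logic are assumptions for illustration, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class SkuSignals:
    """Signals assumed to come from lower-level descriptive and predictive use-cases."""
    forecast_p90_units: float       # upper-quantile short-term demand forecast (predictive)
    predicted_delay_days: float     # predicted delay of the primary supplier (predictive)
    stock_on_hand_units: float      # current global stock for the finished good (descriptive)
    daily_consumption_units: float  # average daily consumption (descriptive)

def recommend_secondary_po(s: SkuSignals,
                           delay_threshold_days: float = 5.0,
                           coverage_threshold_days: float = 10.0) -> dict:
    """Rule-of-thumb prescriptive logic: suggest a purchase order to a secondary
    supplier when demand is likely to rise, the primary supplier is predicted
    to be late, and stock coverage is low."""
    coverage_days = s.stock_on_hand_units / max(s.daily_consumption_units, 1e-6)
    demand_rising = s.forecast_p90_units > s.daily_consumption_units * coverage_threshold_days
    primary_late = s.predicted_delay_days > delay_threshold_days
    stock_low = coverage_days < coverage_threshold_days

    if demand_rising and primary_late and stock_low:
        quantity = max(s.forecast_p90_units - s.stock_on_hand_units, 0.0)
        return {"action": "place_secondary_po", "quantity": quantity}
    return {"action": "no_action", "quantity": 0.0}
```

In a real deployment, the thresholds would be tuned with planners and the inputs would be fed by the underlying predictive models rather than hard-coded values.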
More practically, the following figure illustrates a series of Supply Chain use-cases with Data Science
techniques at their heart (this list being neither exhaustive, nor dedicated to a particular industry), which we
have encountered and deployed in our clients’ organizations.
Figure: Selection of Data Science use-cases for the extended Supply Chain
More often than not, we observe a significant gap in organizations between the day-to-day Data Management
in place and the requirements of this approach. The Enterprise Data Platform presents itself as a set of
concepts and tools working together to bridge this gap, delivering and sustaining business use-cases. Below is
our vision of such a platform: combining technological components, supporting teams and the related data
governance, and acting as the “single source of truth” for all business tools and use-cases.
Both Data teams and Supply Chain teams share converging aspirations for the Enterprise Data Platform:
• Building a single-source-of-truth for business domains, exposing high-quality data models. For instance,
the table object “sales forecast” should be built with the demand planners’ expertise and then made
available to the rest of Supply Chain stakeholders, as well as Purchasing, Marketing, Merchandising,
Product Development…
• Connecting to many data sources, getting the data out of the systems of record, and making it available
for a variety of uses. For instance, enabling end-to-end visibility on a global Supply Chain by ingesting
data from all the country-level ERP systems.
• Building an abstraction layer between the legacy IT and the analytical tools, thereby decoupling software and
IT projects from the business side. As an example, an APS tool can be deployed without having to fully
re-design your demand and supply planning KPIs.
However, there is even more significant value to be captured from offering Supply Chain teams highly
accessible data (curated or not), transversal to the company’s activities, combined with modern and powerful
tools enabling flexible data exploration, analysis and modelling. Under the concept of “self-service analytics”
for business users, we identify several such tools:
• Data Visualization software (Power BI, Tableau, Qlikview, Looker…)
• Collaborative SQL editors coupled with notebooks and/or spreadsheets (Databricks, Azure Data Studio,
PopSQL…)
• Or all-in-one platforms that can also support AI model deployment (Dataiku, Alteryx, DataRobot…)
Data teams are expected to support the Supply Chain teams on the road to ever wider adoption of such
tools, given the multiplier effect this will have on the generation of new use-cases and on value delivery for
the organization. As such, Data Feature teams are responsible for administering the platform architecture
itself, monitoring data quality in the platform (freshness, completeness, validity, coherence), offering training
resources to self-service users and building high-value data products.
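As an illustration of what such monitoring can look like in practice, below is a minimal sketch of freshness, completeness, validity and coherence checks on a tabular extract, written with pandas; the column names (snapshot_date, sku, forecast_qty) and thresholds are assumptions for the example, not a prescribed design.

```python
import pandas as pd

def check_data_quality(df: pd.DataFrame, max_age_days: int = 2) -> dict:
    """Basic data-quality indicators for a 'sales forecast' style table.
    Column names are illustrative."""
    results = {}

    # Freshness: the most recent snapshot should not be older than max_age_days
    latest = pd.to_datetime(df["snapshot_date"]).max()
    results["freshness_ok"] = (pd.Timestamp.today() - latest).days <= max_age_days

    # Completeness: share of non-null values in key columns
    results["completeness"] = 1.0 - df[["sku", "forecast_qty"]].isna().mean().mean()

    # Validity: forecast quantities should be non-negative
    results["validity_ok"] = bool((df["forecast_qty"].dropna() >= 0).all())

    # Coherence: no duplicate (snapshot_date, sku) pairs
    results["coherence_ok"] = not df.duplicated(subset=["snapshot_date", "sku"]).any()

    return results
```

In a production platform these checks would typically run on a schedule and feed alerting, rather than being called ad hoc.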
Delivering this vision requires close-knit collaboration between the Supply Chain teams and their related
feature teams:
• On one hand, Data teams can “push” use-cases that are more complex to address, leveraging their
specialist profiles (Data Engineers, Data Scientists, UI Experts…). For instance, a machine learning
algorithm forecasting the sales potential of new products, or a lead-time prediction model for
purchase orders to suppliers and subcontractors (a minimal sketch of such a model follows this list).
• On the other hand, Supply Chain teams can build their own use-cases on top of the core data models made
available by the feature teams. For example, a stock policy simulation model and an accompanying stock
analysis dashboard.
• Between the two, Supply Chain teams can develop a first version of a use-case before relying on the
Data team to expand its scope and bring it up to the expected data standards for company-wide usage. For
instance, a dashboard supporting the S&OP process, first tested on a single business unit before wider
adoption.
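As a minimal sketch of the lead-time prediction use-case mentioned above, such a model could be prototyped with scikit-learn roughly as follows; the file name, column names, feature set and model choice are assumptions for illustration only.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Historical purchase orders: supplier, product family, order quantity,
# ordering month, and the observed lead time in days (illustrative columns).
orders = pd.read_csv("purchase_orders_history.csv")

features = ["supplier_id", "product_family", "order_qty", "order_month"]
target = "actual_lead_time_days"

# One-hot encode categorical features, pass numeric features through unchanged.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["supplier_id", "product_family"])],
    remainder="passthrough",
)

model = Pipeline([
    ("prep", preprocess),
    ("reg", GradientBoostingRegressor(random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    orders[features], orders[target], test_size=0.2, random_state=0
)
model.fit(X_train, y_train)

# Predicted lead times can then feed supply planning and shortage alerts.
print("MAE (days):", abs(model.predict(X_test) - y_test).mean())
```

The point is less the specific model than the pattern: the Data team industrializes the pipeline, while planners consume the predictions through their usual planning tools.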
CONCLUSION
The modern Data Stack and the Data Science approach have a lot to offer to Supply Chain teams, yielding high
value-added use-cases and enabling Supply Chains to become truly adaptive. The Enterprise Data Platform
is a vision towards which both Supply Chain and Data feature teams must collaborate. As a Supply Chain
practitioner, collaborating with Data teams will help you achieve the expected benefits on use-cases of growing
complexity, within a long-lasting platform. As a Data Leader, working with Supply Chain teams will drive your
roadmap towards a truly end-to-end platform, with opportunities to tackle a variety of use-cases.
At IRIS by Argon & Co, we combine Digital and Data Science expertise with Argon & Co’s experience in
operations and change management. Our 30+ data experts operate at the intersection of operations and
technology to develop disruptive data solutions in Supply Chain, Manufacturing, Merchandising, Procurement
and Finance, from idea to implementation and process transformation.
IRIS by Argon & Co
IRIS by Argon & Co is an integrated team of operations experts, Data Scientists and Data
Engineers within Argon & Co that specialises in Data Analytics for Operations.
We use Data Analytics, AI, IoT and digital technology to design and build clear solutions,
and provide a new level of efficiency and profitability for clients. Our people apply a
combination of operations experience, data expertise and broad business knowledge
to improve operational performance. We deliver robust, transparent and practical data-
driven insights and solutions to generate real change.
We are based in Paris, and work collaboratively with the Argon & Co global offices.
www.irisbyargonandco.com
Authors
Samuel Demont
Associate Partner
[email protected]
After ten years of experience in Supply Chain, Samuel developed expertise in the use of digital
technologies and Data Science in operations. He is now the Head of IRIS by Argon & Co, our team of
Data Scientists and new technology specialists.
Guilhem Delorme
Manager
[email protected]
Guilhem delivered Supply Chain transformation projects for several years before joining
the IRIS by Argon & Co team as a Data Project Manager. He leads the design, build,
deployment and execution phases of data-driven use cases for Supply Chain and Logistics
(Business Intelligence, Machine Learning models, Enterprise Data Platforms).