Distributed Microservices Architecture For Supply Chain Management System
Realized by:
- Abderrahmane Boucenna
Year: 2022-2023
Abstract
To apply this solution we must first study the software architecture of the
system and design it in a way that responds effectively to the system needs
that were observed, notably interoperability and performance.
In this document we study the environment, the structure, the driving quality
attributes, the tactics used to implement the solutions in our software
architecture, and the different architectural styles involved.
Contents
1 Project Description
1.1 The Proposed Project: freight management system for supply chain optimization
1.2 System Description
1.2.1 The Transporters
1.2.2 The Shipper
1.2.3 Administrator
1.3 Architectural Challenges
1.4 Stakeholders Description
1.4.1 Project Owner
1.4.1.1 Ayrade
1.4.2 Clients
1.4.2.1 Bejaia Logistiques
1.4.2.2 NUMILOG
1.4.3 Private investors
1.5 Functional Description
1.5.1 Common
1.5.2 The Transporter
1.5.3 The Shipper
1.5.4 The Administrator
1.5.5 Class Diagram
1.5.6 Use case diagrams
1.5.6.1 Transporter use case diagram
1.5.6.2 Shipper use case diagram
1.5.6.3 Administrator use case diagram
1.6 Objectives
1.7 Work Distribution
2 Objectives Definition according to Quality Attributes
2.1 Introduction
2.2 Interoperability
2.2.1 The Importance of interoperability for the system
2.2.1.1 Service Discovery
2.2.1.2 Service Orchestration
2.2.1.3 Tailor Interface
2.2.2 The Importance of performance for the system
2.2.2.1 System Caching
2.2.2.2 System Workload Balancer
2.2.3 Scalability
3.4.3 Modifiability
3.4.4 Summary
3.5 Performance Tactics
3.5.1 Manage sampling rate
3.5.1.1 Mechanisms
3.5.1.2 The data concerned with sampling rate
3.5.2 Introduce concurrency
3.5.3 Mechanisms
3.5.4 Load balancing
3.5.5 Maintaining multiple copies of computations and data
3.5.5.1 Mechanisms
3.5.6 Increase Resources
3.5.6.1 Mechanisms
3.5.7 Prioritize Events
3.5.8 Increase resource efficiency
3.5.8.1 Mechanisms
3.5.9 Schedule Resources
3.5.9.1 Mechanisms
3.6 Performance Checklist
3.6.1 Allocation Of Responsibilities
3.6.2 Coordination model
3.6.3 Data model
3.6.4 Mapping among architectural elements
3.6.5 Resource Management
3.6.6 Binding Time
3.6.7 Choice of technology
3.6.7.1 Prometheus
3.6.7.2 Grafana
3.6.7.3 Threading libraries
3.6.7.4 Message queuing
3.6.7.5 Hadoop
3.6.7.6 Containerization technologies
3.6.7.7 NoSQL Database
3.7 Side effects of performance tactics
3.7.1 Modifiability
3.7.2 Security
3.8 Scalability as an additional focus
1 Project Description
to participate in the process. The representative is mainly responsible for
sharing empty return offers.
1.2.3 Administrator
Manages and supervises system users, their roles and privileges. The administrator
also manages the activities, operations and dashboards related to the empty return
control system.
1.3 Architectural Challenges
- Integration with existing systems: since the project will deal with a number
of systems such as logistics, inventory management, commands and planification
systems, alongside external systems such as mailing and maps, data synchronization
and handling could become difficult.
- The system must support a high number of vehicles and should be able to
expand and scale up when adding new clients.
- The system contains real-time data, namely the location of the transportation
vehicles, so performance could degrade as the system scales up.
1.4 Stakeholders Description
1.4.1 Project Owner
1.4.1.1 Ayrade
1.4.2 Clients:
1.4.2.1 Bejaia Logistiques
Founded in 2008, Sarl BEJAIA LOGISTIQUE is today one of the reference actors
in the field of road transport in Algeria. Its activities range from the public
transport of goods to the rental of machines and equipment for construction,
public works and handling, and the rental of vehicles.
1.4.2.2 NUMILOG
In 2007, Numilog was created by the Cevital group to accompany the development
of its activities and to ensure its logistical support. The company has
capitalized on its experience in the food, household appliance, retail,
automotive and construction industries. In 2014, Numilog opened up to the
external market; today, Numilog has a turnover of 75 million euros and
1,400 employees.
- The system allows any type of user to see their profile.
- The system stores the actions of each user and allows them to be visualized.
- The system allows the shipper to add, modify, or cancel a command in a waiting
state.
- The system notifies the shipper in case of acceptance or refusal of their
command.
- The system allows the user to visualize their dashboard.
1.5.5 Class Diagram :
1.5.6 Use case diagrams
1.5.6.1 Transporter use case diagram
1.5.6.2 Shipper use case diagram
1.5.6.3 Administrator use case diagram
1.6 Objectives
This system's first aim is to optimize and improve the supply chain process
from various perspectives. To achieve that, a number of objectives must be met;
we cite the following:
- The system must improve communication between the two actors, carrier
and shipper.
- The system must rent a customizable private "user" space for each company.
- The system must ensure the best return plans for transporters.
- The system should respond to future expansion in case of new clients.
- The solution must ensure access to data at all times and under all circumstances.
- The system must coordinate between various systems.
- The system must provide an effective mailing and commanding system.
- The system must take advantage of the empty return plans and maximize
the number of commands achieved in the given time and with the available vehicles.
- The system must minimize the cost of transportation and make full use of
vehicles.
2 Objectives Definition according to Quality Attributes
2.1 Introduction
Having seen the description of our supply chain solution, we now move on to
our quality attributes. The supply chain domain is distinct for having many
systems interconnecting with each other, from storage management, planification
and organization, to transportation management, to the command system and
distribution. Naturally, the related software is similarly complex: many complex
systems working together to create an optimized flow for the supply chain process.
We now cite the principal quality attributes that we will focus on; this does not
mean that our system will not provide any technique to assure the rest, but the
priority quality attributes are the ones that will take the most work to be assured.
2.2 Interoperability
Our most important quality attribute: since the system contains many subsystems
interacting with each other, we view interoperability as the most crucial one.
To illustrate interoperability we must go through its dimensions.
would be crucial is the following:
- A common scenario: say the admin wants to visualize statistics generated by
the statistics server. To access the generated statistics we must first know
the server's IP address. What if this IP address is unknown, or has changed?
The service becomes undiscoverable, rendering the system nonfunctional.
- If our services run inside Docker containers, their IP addresses change
frequently, meaning there is no way we could keep updating service addresses
manually.
2.2.1.2 Service Orchestration
18
2.2.1.3 Tailor Interface
19
their own data handling methods. If there are no standard data formats, this
leads to inconsistency.
- A service trying to access a service that has already been upgraded or has
changed version could lead to a system crash.
2.2.2.2 System Workload Balancer
Say at the end of the month we want to calculate the statistics for the entire
year: the statistics server is under a heavy load of information. If there are
multiple instances of the system, instead of having one instance take all the
heavy load, we should find a mechanism to distribute this workload and take
maximum advantage of our hardware.
2.2.3 Scalability
The system at its core is an inter-organization effort, but it is likely to
expand in the future, reaching a national scale. If we want to add a new service
in the future or accommodate higher consumption, we will need to avoid hardware
expansion as much as possible since it is costly; this means that we should
allow the system to expand without the need to add physical resources.
3
3.1 Introduction
After deriving the quality attributes viewed as critical for our supply chain
project, we provide the tactics that we envision will ensure these quality
attributes. These tactics will be documented through a checklist for each desired
quality attribute, and we will also examine the effect of each tactic on the rest
of the quality attributes.
Our system uses a variety of services, both internal and external, which creates
the need for service discovery mechanisms. The service-oriented nature of our
system provides a mechanism to achieve this through the use of a service registry.
There are several approaches for this, such as a service registry, which is a
centralized directory that maintains a list of all available services and their
metadata; UDDI (Universal Description, Discovery, and Integration), which is a
standard protocol for service registries; dynamic discovery, where services
advertise themselves to the network using multicast or broadcast messages; and
finally directory services such as LDAP or Active Directory, which can also be
used for service discovery.
In order to choose among these approaches we had to take another dimension into
consideration, which is performance, and this is why we chose the service registry:
it provides the best performance, with a centralized directory of available
services that can be queried quickly and efficiently. A service registry can also
use load balancing algorithms to distribute requests to available services,
improving performance and scalability.
This means that information about services will be stored in a centralized
registry and requests to use these services will go through a service registry
software; the most common one is Eureka. For example, suppose we want to access
the map service registered in the Eureka registry:
const Eureka = require('eureka-js-client').Eureka;
const L = require('leaflet');

// Register this client instance with the Eureka server.
// (The registration options below were reconstructed; the values are illustrative.)
const client = new Eureka({
  instance: {
    app: 'freight-web-client',
    hostName: 'localhost',
    ipAddr: '127.0.0.1',
    port: { '$': 3000, '@enabled': true },
    vipAddress: 'freight-web-client',
    dataCenterInfo: {
      name: 'MyOwn',
    },
  },
  eureka: {
    host: 'eureka-server-host',
    port: 8761,
    servicePath: '/eureka/apps/',
    preferIpAddress: true,
  },
});

client.logger.level('debug');
client.start((error) => {
  if (error) {
    console.log(error);
  } else {
    console.log('Eureka client started');
    // Look up the map service in the registry and build its base URL
    // from the registered host and port.
    const mapInstance = client.getInstancesByAppId('map-service-name')[0];
    const mapServiceUrl = `http://${mapInstance.hostName}:${mapInstance.port['$']}`;
    console.log(`Map Service URL: ${mapServiceUrl}`);
    // Use the map service URL to create a Leaflet map
    const map = L.map('map').setView([51.505, -0.09], 13);
    L.tileLayer(`${mapServiceUrl}/tiles/{z}/{x}/{y}.png`).addTo(map);
  }
});
3.2.2 Orchestration
Since our system contains many services which rely upon each other to achieve
certain tasks, orchestration is considered a key aspect of interoperability.
Take the following example:
A process begins by receiving a command from the command system; it is then
processed in the freight management system, which uses the map to find the best
route, if available, and executes it. This process, alongside many others, needs
complex orchestration. For this we have several strategies: applying an
event-driven architecture, using a centralized orchestrator, or using a
distributed workflow engine. An event-driven architecture is the most complex of
the three to implement and carries the risk of creating event loops, while the
centralized orchestrator, although the easiest, has a downside on performance as
it would treat all interactions in the distributed system. This is why we go with
the approach of a distributed workflow engine.
A distributed workflow engine can help manage the flow of data and events between
services, especially if they are distributed. To implement this workflow engine
we will:
- Use Zeebe as a tool for this. Initially we would have used Camunda, but its
centralized nature meant a future obstacle for scalability; as we dove deeper
into the world of workflow engines we met Zeebe, a relatively new workflow engine
that is distributed, hence scalable, and provides low latency.
- Install distributed instances of it across the servers that host the
interdependent services.
- Implement the broker node, the central workflow engine instance responsible for
creating and monitoring the rest of the nodes. (It is worth mentioning that the
nodes will communicate with each other to coordinate; the broker is essentially
there to monitor them, and communication does not have to pass through it.)
- Define the workflow schema for each service depending on the task. Moving back
to the previous example, the process of commanding an empty return trip would be
as follows:
Figure 3.1: The process of launching an empty return command
Figure 3.2: Workflow diagram for the tasks and their order in Zeebe
Each node in the diagram corresponds to a task and each task has its own ID;
we can easily use it in the Node.js environment like this:
const ZB = require('zeebe-node')
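// What follows is a minimal, hypothetical continuation: it assumes the
// zeebe-node client library (whose exact API names vary by version), and the
// process and task identifiers are illustrative examples drawn from the
// workflow described above.
const zbc = new ZB.ZBClient('localhost:26500');

// A job worker subscribed to the "find best route" task of the workflow.
zbc.createWorker({
  taskType: 'find-best-route',
  taskHandler: async (job) => {
    // ...compute the best route from job.variables here...
    return job.complete({ routeFound: true });
  },
});

// Start an instance of the empty-return process for a given command.
zbc.createProcessInstance({
  bpmnProcessId: 'empty-return-process',
  variables: { commandId: 42 },
});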
3.3 Interoperability Checklist
3.3.1 Responsibility Allocation
The system contains a number of services, some internal like the users module
and others external like the various APIs. To determine which of our subsystems
will need to interoperate with which others, we first have to assign
responsibilities:
- A "Logistics Integration Module" : responsible for handling interactions
with the logistics system.
- A "Commands Integration Module" : responsible for handling interactions
with the commands system.
- A "Planification Integration Module" : responsible for handling interactions
with the planification system.
- A "Mailing Integration Module" : responsible for handling interactions
with the mailing system.
- A "Maps Integration Module" : responsible for handling interactions with
the maps system.
- A "Distance Matrix Integration Module" : responsible for handling interactions
with the Distance Matrix system.(these three mentioned are external api
services)
- A " Directions Integration Module" : responsible for handling interactions
with the Directions system.
- A "Freight Management Module" : responsible for processing the shipments.
This module will use the data received from the different Integration Modules
and Maps Integration Module to manage the shipments and delivery routes.
It will also use the Mailing Integration Module to send delivery notifications
and updates to customers.
Request detection and handling will all be done through the orchestrator node,
which will produce the appropriate response given the task ordered. Communication
with unknown services will be done through the API gateway, which can itself
grant permissions to or block certain services. The discovery of services will be
managed by a centralized service registry.
3.3.2 Coordination Model
For the coordination model we will discuss the degree to which our mechanisms
respond to our quality attributes, namely interoperability and performance,
and to a lesser extent scalability.
• Service Discovery: the approach is to use a centralized service registry. This
allows services to be found and registered and their updates to be handled
easily. The centralized approach allows for better performance, since
synchronizing a distributed service registry is less performance-efficient;
however, it might have a negative effect on the scalability of the system as the
number of services grows, and it also affects the availability of the system
negatively, since having the single service registry down would threaten the
ability to find services in the system.
• Orchestration: the approach is a distributed workflow engine spread across the
servers holding the services that need to be managed. This approach is scalable,
as a future interacting service only requires adding a new node; it is also
performance-efficient, as it leverages streaming to the client and uses a binary
communication protocol. However, this makes system modifiability harder, as each
node has to be configured separately.
• Tailored Interfaces: monitoring services is hard, to say the least, especially
as the system grows; using a simple decorator won't be sufficient, so a more
performant and scalable solution is a must. We previously discussed the use of an
API gateway, one of the most common approaches in such systems. It is good for
monitoring and can control the behavior and interactions of unknown services. For
interoperability it provides a unified interface to handle APIs, meaning that the
implementation language and data formats won't affect interoperability. It can
also help with scalability, as it distributes requests to multiple backend
services or servers, improving scalability and fault tolerance; this allows for
horizontal scaling, where additional instances of a subsystem can be added to
handle increasing traffic. On the other hand, it might face performance issues
with network latency or high traffic due to its centralized nature. Of course we
cannot say that our solution will be 100% efficient, but we will handle this
further in the performance tactics (load balancing).
3.3.3 Data Model
The main data format used in the system is JSON: the majority of the system is
built on JS technologies, and the database is a document-oriented database
providing JSON-like documents, which will be exchanged mostly over the HTTP
protocol. WebSockets are also a solution for the real-time data concerning the
tracking system. In some instances data will take the form of XML, and for this
there are relatively easy ways to guarantee data consistency, such as using a
converter class. For example, when defining a new workflow in the Zeebe workflow
engine, we have built-in XML support, but we could send the format using JSON by
embedding a simple BPMN XML diagram within the JSON like this:
// Define the workflow to deploy
const workflow = {
  bpmnProcessId: 'processId',
  name: 'Workflow Name',
  // Set the version of the workflow to deploy
  version: 1,
  // Define the BPMN diagram as an XML string embedded in the JSON object
  bpmnXml: `
    <?xml version="1.0" encoding="UTF-8"?>
    <bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
      <bpmn:process id="processId" name="Process Name">
        <bpmn:startEvent id="start" />
        <bpmn:serviceTask id="task" name="Task Name" />
        <bpmn:endEvent id="end" />
        <bpmn:sequenceFlow id="flow1" sourceRef="start" targetRef="task" />
        <bpmn:sequenceFlow id="flow2" sourceRef="task" targetRef="end" />
      </bpmn:process>
    </bpmn:definitions>
  `,
};
With that being said, we show the different operations concerning data
abstractions in the system in Figure 3.4. As for the major data abstractions,
we state the following:
• User
Figure 3.4: different data operations in the system
{
"name": "...",
"user_type": "...",
"id": "...",
"vehicles": [...],
"email": "...",
"password": "...",
"company": "..."
}
• Vehicle Location
{
"vehicle_id": ...,
"company": ...,
"timestamp": "2022-05-01T10:30:00Z",
"latitude": ...,
"longitude": ...,
"speed": ...,
"heading": ...
}
• Route
{
"name": "...",
"description": "...",
"waypoints": [
{
"name": "...",
"latitude": ...,
"longitude": ...
}, ...],
"distance": 30.2,
"duration": "00:35:00"
}
• Zeebe Workflow
{
bpmnProcessId: '..',
name: '..',
version: ..,
bpmnXml: `..`,
}
3.3.4 Management of Resources
For the management of resources we took two factors into consideration:
scalability and performance. As for resource identification and management, the
system is distributed across multiple servers; each server will host a service
or a group of closely bonded services.
3.3.5 Mapping across architectural elements
Figure 3.5: Component/deployment diagram of the services and the tactics ensuring them
3.3.6 Binding Time Decisions
The binding time decisions of our system are made through the combined effort of
the service registry, the API gateway and the workflow engine. The decision of
when to interact with a service is made mainly through the workflow engine, as it
has predefined schemes for achieving tasks; if at a given state a service A has
to interact with a service B, then the orchestrator decides that. Now say that an
interaction with a service is denied for any reason, such as it being unavailable
or too busy: this will be handled through the API gateway, as it is a powerful
approach for monitoring and controlling service interactions; the API gateway
determines who gets to interact with which service. Finally, service discovery is
a combination of the service registry, which keeps track of ports and addresses
even as they change, and the API gateway for reaching external services. To sum
up, the orchestrator answers the question "when does service A interact with
service B?", the API gateway answers "can this service interact with that
service?", and the service registry answers "where is this service located?".
3.3.7.1 git
3.3.7.2 Travis CI
An automation tool for continuous integration; it can help identify and resolve
issues that may arise during the different stages of binding time. By frequently
integrating code changes, developers can catch errors that may arise during
compile time, load time, or runtime, and fix them before they become larger
problems. This helps ensure that the software is stable and reliable throughout
its development life cycle.
3.3.7.3 Heroku
3.3.7.6 Zeebe Workflow engine
3.4.1 Availability
- Having a centralized service registry and API gateway means less fault
tolerance, as the server holding them going down would mean losing the ability
to locate services.
- The use of tailored interfaces in the form of the API gateway might block
certain services, rendering them unavailable; this would ultimately cause the
system to stop, and therefore be unavailable, when trying to achieve tasks that
require those services' cooperation.
It is worth mentioning that having a centralized database would also harm the
availability of the system, as it is significantly less fault tolerant.
3.4.2 Security
- Though the use of the API gateway should reduce the risk of interacting with a
dangerous external service, it remains a possible threat.
- The orchestration between services, which takes the form of APIs, creates
vulnerable points, as APIs are exposed to numerous attacks such as injection,
spoofing, and man-in-the-middle attacks.
- The reliance on services and external APIs makes it crucial to regularly update
and patch these dependencies to avoid vulnerabilities that can be exploited by
attackers; overall, another factor augmenting the security risk.
- The distributed, multi-service nature of the project means more attack surface
and more effort to secure the system.
- The orchestration implemented means more data exchange and more data exposure,
which means a higher risk of attacks on data.
3.4.3 Modifiability
The decentralized nature of the project means that adding a new service in the
future would be much harder and more complex, as it has to be discovered and
stored in the service registry, integrated and orchestrated with the workflow
engine, and controlled and configured by the API gateway; this makes the
modifiability of the system a much more complex task. On the other hand,
modifying the behavior of a service means that the interactions with it and the
permissions granted to it would also have to be modified. Overall we estimate
that this is the most affected quality attribute.
3.4.4 Summary
Though the tactics proposed for interoperability have various negative effects on
the other quality attributes, they only add to the risk of those attributes being
violated; they do not leave the system with no security or availability at all.
However, modifiability remains the only quality attribute that would be seriously
impacted by our tactics, to a noticeable degree.
3.5 Performance Tactics
3.5.1 Manage sampling rate
Managing the sampling rate is a performance tactic that involves controlling the
rate at which data is collected and processed. This tactic can improve the
performance of the system by reducing the amount of unnecessary data that is
collected and processed, while ensuring that important data is still captured in
a timely manner. It is important for our system to manage the sampling rate, as
the system deals with a large volume of data that must be processed in a timely
and efficient manner. This involves selecting a suitable interval for sampling
the data, at a rate that is appropriate for the system's processing capabilities,
to avoid overloading the system with more data than it can handle, which would
lead to slow performance or even system crashes.
3.5.1.1 Mechanisms
needs to capture those differences to provide an accurate representation of
the population.
• Periodic sampling : where data is sampled at regular intervals.
• Threshold-based sampling : setting thresholds based on certain performance
metrics. When the metric exceeds a certain threshold, the system increases
the sampling rate to capture more detailed information. This mechanism
is useful for systems where performance metrics vary widely over time and
space, and where it is difficult or impractical to use a fixed sampling rate.
- achieve specific performance goals, such as maximizing fairness or throughput
using scheduling policies
3.5.3 Mechanisms :
• Parallel processing: The system can process multiple tasks simultaneously,
such as tracking shipments and scheduling transportation... This can
reduce the overall processing time and increase system throughput.
• Multi-threading: The system can use multiple threads to perform
different tasks concurrently, such as processing customer orders and updating
inventory. This can improve system responsiveness and reduce the time it
takes to complete tasks.
• Distributed processing: The system can distribute tasks across multiple
servers, allowing for more efficient use of resources and improved fault
tolerance.
• Event-based concurrency: involves processing different streams of events on
different threads to improve system performance, such as freight pickup requests
and delivery requests (this is handled in the interoperability part as well).
• Scheduling policies: once concurrency has been introduced, scheduling policies
can be used to achieve the desired goals. Different scheduling policies can be
used to maximize fairness, throughput, response time, quality of service, and so
on.
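To make the multi-threading mechanism above more concrete, here is a small
sketch assuming Node.js and its built-in worker_threads module; the distance
computation and the coordinates are purely illustrative:
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: offload a CPU-heavy computation so the event loop stays responsive.
  function computeInWorker(waypoints) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: waypoints });
      worker.once('message', resolve);
      worker.once('error', reject);
    });
  }

  computeInWorker([{ lat: 36.75, lng: 5.06 }, { lat: 36.19, lng: 5.41 }])
    .then((d) => console.log('approximate route length:', d));
} else {
  // Worker thread: sum the distances between consecutive waypoints and report back.
  const total = workerData.slice(1).reduce((sum, p, i) => {
    const prev = workerData[i];
    return sum + Math.hypot(p.lat - prev.lat, p.lng - prev.lng);
  }, 0);
  parentPort.postMessage(total);
}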
dedicated to specific servers; the tactics to ensure this have already been
specified in the tailored interfaces tactic.
3.5.5.1 Mechanisms
- Service workers: these are JS background scripts that can intercept network
requests made by the application and cache frequently accessed resources. The
first mechanism is used for data frequently accessed by one user and is
implemented on their local machine; the second solution, on the other hand, is
used for data frequently accessed by all the users of the system.
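A minimal sketch of the service-worker mechanism described above (for example a
sw.js file registered by the web client) that caches frequently requested
resources such as map tiles; the cache name and URL pattern are illustrative:
const CACHE_NAME = 'freight-app-cache-v1';

self.addEventListener('fetch', (event) => {
  // Only intercept requests for map tiles in this sketch.
  if (!event.request.url.includes('/tiles/')) return;
  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;                   // serve from the cache
      const response = await fetch(event.request); // otherwise fetch from the network
      cache.put(event.request, response.clone());  // and cache a copy for next time
      return response;
    })
  );
});
The page would register it once with navigator.serviceWorker.register('/sw.js').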
3.5.6.1 Mechanisms
3.5.7 Prioritize Events:
The freight management system contains events with different degrees of priority,
so we can define a priority scheme that ranks events according to:
• Criticality: assigning priority based on the criticality of the event is very
important. Critical events should be given the highest priority, while less
critical events, such as user requests for non-critical functionality, can be
given a lower priority. For example, adding vehicles and their technical
information is more important than visualizing the list of historic trips.
• User impact: prioritizing events based on their impact on the user experience
can help ensure that the system provides a high-quality experience for users.
Since one of our objectives is to make sure that the UI/UX is simple and
comfortable for the users, we should pay attention to the events users spend the
most time on in order to give them more priority.
• Time sensitivity: we should prioritize events based on their time sensitivity;
this can help ensure that the system meets time-critical requirements, for
example events with tight deadlines such as an empty return plan and its
different commands.
• Frequency: in order to ensure that the system efficiently handles the most
frequently occurring events, high-frequency events must be given the highest
priority, such as the addition of empty return trips by the transporter.
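A small sketch of how such a priority scheme could be realized in the system's
JavaScript services; the event types and their ranks are hypothetical and would
be tuned according to the criteria above:
// Lower rank = higher priority; unknown event types get a default rank.
const PRIORITY = {
  'add-vehicle': 1,             // critical for planning
  'empty-return-command': 2,    // time-sensitive
  'add-empty-return-trip': 3,   // high frequency
  'view-trip-history': 9,       // non-critical visualization
};
const DEFAULT_RANK = 5;

const queue = [];

function publish(event) {
  queue.push(event);
  queue.sort((a, b) => (PRIORITY[a.type] ?? DEFAULT_RANK) - (PRIORITY[b.type] ?? DEFAULT_RANK));
}

function processNext(handlers) {
  const event = queue.shift(); // highest-priority event first
  if (event && handlers[event.type]) handlers[event.type](event);
}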
3.5.8.1 Mechanisms
A single mechanism not yet tackled to increase resource efficiency is resource
monitoring: tracking resource usage, such as memory, CPU, or network usage, to
identify inefficiencies and optimize resource allocation.
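As a sketch of such monitoring, assuming the prom-client and express Node.js
libraries (Prometheus and Grafana are listed among the chosen technologies later
in this checklist), a service could expose its resource metrics for scraping; the
metric name and port are illustrative:
const express = require('express');
const promClient = require('prom-client');

promClient.collectDefaultMetrics(); // CPU, memory and event-loop metrics out of the box

// A custom gauge, e.g. the number of commands waiting to be processed.
const pendingCommands = new promClient.Gauge({
  name: 'freight_pending_commands',
  help: 'Number of commands waiting to be processed',
});
// Call pendingCommands.set(n) wherever commands are enqueued or dequeued.

const app = express();
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', promClient.register.contentType);
  res.end(await promClient.register.metrics()); // scraped periodically by Prometheus
});
app.listen(9464);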
3.5.9 Schedule Resources
3.5.9.1 Mechanisms
as well, which is maintaining multiple copies of data on the client side to
preserve recently used data, such as authentication, by generating tokens in the
mobile app or the client's browser to save the user's credentials.
• Road estimation and time calculation: our application should have an efficient
module to choose the best road for the transporter to take during an empty
return, and thus to calculate the total distance and time spent on the trip. With
a huge number of users this may take a long time for some of them. That is why
the caching tactic seems one of the best solutions to minimize execution time:
saving frequently accessed data and computations avoids recalculating the same
information over and over, and this tactic may reduce the response time by 50%
(a small caching sketch is given after this list). In addition, adding new
resources or upgrading the existing resources will accelerate the calculation
part of the system while it performs other tasks. Another solution is sharing
resources between different modules of the app in order to make the calculation
process more efficient and optimal.
• Statistics generation: all the users of the system can consult the details of
their planification and their vehicles, as well as the information and the
planification of other users so they can ask for a shipment command; in addition
they can visualize their dashboard. On the other side, the administrator can see
the list of users and vehicles and the statistics related to vehicles and usage.
All these statistics would overwhelm the system in the normal process, so
balancing the workload is necessary by distributing it, in the first place, over
the nearest server to reduce response time; if that server becomes overloaded,
looking for the server with the least connections is the best solution. The
caching tactic is used as well in both its forms, client-side and in-memory, and
the most important tactic is introducing concurrency to boost the computation
part.
• Analytics and Reporting: This responsibility involves generating
reports and analyzing data to optimize the supply chain. To manage
the processing requirements and ensure timely response, a mechanism like
stratified sampling can be implemented to focus on processing data that is
most relevant to the current analysis or report.
• Freight Tracking: to manage the processing requirements and ensure timely
response, burst sampling can be implemented to handle sudden spikes in the
volume of data.
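The caching sketch referred to in the road-estimation item above: a simple
in-memory memoization of route calculations with a time-to-live, so that repeated
requests for the same origin and destination are served without recomputation
(the key format, TTL and the computeRoute callback are illustrative):
const routeCache = new Map();
const TTL_MS = 10 * 60 * 1000; // assumed freshness window for a cached route

async function estimateRoute(origin, destination, computeRoute) {
  const key = `${origin}->${destination}`;
  const hit = routeCache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.value; // cache hit: no recomputation

  const value = await computeRoute(origin, destination);     // expensive call (maps API, solver, ...)
  routeCache.set(key, { value, at: Date.now() });
  return value;
}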
messages related to the status and location of shipments, which can be
subscribed to by other elements of the system such as the order processing
responsibility.
Other orchestration mechanisms are already specified in the interoperability
part .
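To illustrate the publish/subscribe coordination mentioned above, here is a
minimal in-process sketch using Node's built-in EventEmitter; in the deployed
system this role is played by the workflow engine or a message broker rather
than a single process, and the message fields are illustrative:
const { EventEmitter } = require('events');
const shipmentBus = new EventEmitter();

// The order-processing responsibility subscribes to status messages.
shipmentBus.on('shipment-status', (msg) => {
  console.log(`order ${msg.orderId} is now ${msg.status}`);
});

// The tracking responsibility publishes location and status changes.
shipmentBus.emit('shipment-status', {
  orderId: 17,
  status: 'in-transit',
  latitude: 36.7,
  longitude: 5.0,
});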
3.6.6 Binding Time :
There is a need for the tactic of bounding execution time at binding time, to
ensure that certain components of the system execute within a specified time
frame and meet the performance requirements of the system. This can be achieved
through techniques such as time-slicing, iteration capping, and resource
allocation. For example, by implementing the bound-execution-time tactic in the
routing optimization responsibility, we can ensure that the calculation stays
within an acceptable time frame, meeting the performance requirements of the
system and keeping it responsive to user requests.
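A sketch of the bound-execution-time tactic applied to route optimization: the
search stops when either an iteration cap or a time budget is reached and returns
the best plan found so far (the limits and the score callback are illustrative):
function optimizeReturnPlan(candidates, score, { maxIterations = 10000, budgetMs = 200 } = {}) {
  const deadline = Date.now() + budgetMs;
  let best = null;
  let bestScore = -Infinity;

  for (let i = 0; i < Math.min(candidates.length, maxIterations); i++) {
    if (Date.now() > deadline) break; // time budget exhausted: stop early
    const s = score(candidates[i]);
    if (s > bestScore) { bestScore = s; best = candidates[i]; }
  }
  return best; // best plan found within the bound
}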
3.6.7.1 Prometheus
3.6.7.2 Grafana
3.6.7.3 Threading libraries
These are libraries that provide APIs for creating and managing threads in a
program, for example the pthreads library for C/C++.
3.6.7.4 Message queuing
These are systems that allow components to communicate with each other through a
message queue; this is already established in the interoperability part with
Zeebe.
3.6.7.5 Hadoop
3.7 Side effects of performance tactics
3.7.1 Modifiability
Implementing caching, increasing resources and load balancing can increase the
complexity of a system, as additional infrastructure and software components may
need to be introduced. This can lead to higher maintenance and development costs,
as well as potential performance overhead due to increased communication and
synchronization. This may negatively impact the modifiability attribute, which
relies on keeping complexity low.
3.7.2 Security
Increasing the number of resources or introducing caching may improve performance
but could also increase the risk of security vulnerabilities such as Denial of
Service (DoS) attacks or unauthorized access to cached data. Similarly, implementing
load balancing may require the use of third-party tools or services, which could
introduce additional security risks. It is important to consider these potential
trade-offs and take appropriate steps to mitigate any security risks.
3.8 Scalability as an additional focus
Although not a major quality attribute, scalability is a concern for our system:
as the stakeholders envision the system growing to a national scale, the system
has to respond to that need. Naturally, each of the tactics chosen for
performance and interoperability had a hidden concern, which is scalability.
Some of the approaches towards making the system scalable were:
- The addition of resources, which made the system distributed and scalable, as
the system is broken down into services or subsystems.
- Implementing load balancing, which involves distributing the workload evenly
across multiple nodes. This can be achieved using a load balancer, which can
route traffic to the appropriate node based on various criteria, such as CPU
usage or network traffic.
- Data caching, which involves storing frequently accessed data in memory so it
can be retrieved quickly. This can reduce the load on the system and improve
performance. There are several caching solutions available, such as Redis or
Memcached.
Although many design decisions were taken with respect to performance,
interoperability and scalability, these sometimes presented conflicts: the choice
of a centralized database, although it respects performance as it gets rid of
synchronization issues, does leave scalability damaged, as the centralized
database and service registry limit the scalability of the system.
4
Figure 4.1: Decomposition view of the system
They are often used to model the static deployment view of a system. Our system
is a distributed, multi-platform SaaS microservice system where services are
spread across servers. Each service is hosted inside a container environment
where it is independent from the other services; closely bonded services are
hosted inside the same physical server.
Taking a closer look now at the internal deployment of our services: to provide
maximum scalability and performance for our distributed system, we said we will
be using containers. Services will be hosted inside containers where they have
their own storage and computation. The moment a service is overloaded, the load
balancer will intervene to balance the workload, and when the system scales up,
new instances of the service will be created.
Figure 4.2: Deployment diagram
4.1.3 Layered View
This view provides a global vision of the level of each component in the system,
with each layer representing a different level of abstraction. Each layer is
designed to be independent of the other layers and has well-defined interfaces
for communicating with them. The software layers:
• Presentation layer: this layer is responsible for presenting information to the
user and capturing user input; for our system, the web application and mobile
application interfaces used by customers and employees to access the system.
• API layer: represents the interface that allows clients and services to
interact with the underlying system; it is responsible for handling incoming
requests and returning appropriate responses.
• Application layer: responsible for the business logic and processing of data.
This layer includes the different services representing responsibilities of the
system, such as order processing, road estimation, and freight tracking.
• Data layer: this layer is responsible for managing data storage and retrieval.
It includes the database systems used to store information such as customer
profiles, order history, and shipment tracking data.
Figure 4.4: Layered View of the system
Figure 4.5: High level communication view
4.2.1 Micro-services
Microservices refers to a style of application architecture where a collection of
independent services communicate through lightweight APIs. Microservices
architectures make applications easier to scale and faster to develop, enabling
innovation and accelerating time-to-market for new features. This type of
architecture is used in our system for multiple reasons:
• The independent nature of subsystems: the subsystems in our system are highly
decoupled. Each subsystem handles different data from the others; for example,
the user management deals with the data related to users and companies, while the
mail system contains mailing data. These systems are highly isolated, with each
one needing its own database, type of computation, and so on.
• Ensuring interoperability: along with virtualization technologies, microservices
have enabled the loose coupling of both service interfaces (message passing) and
service integration (form and fit). Microservices provide a slew of ready-to-use
mechanisms for ensuring interoperability: one service could be using XML while
another uses JSON, and the statistics DB is of type NoSQL while the user DB is
SQL. Microservices may be the best architecture to provide interoperability in a
distributed, virtualized, heterogeneous environment.
• Scalability improvements: one of our primary concerns. Since each microservice
runs independently, it is easier to add, remove, update or scale each cloud
microservice. Developing a system for future deployment on a national or even
international scale means that the architecture should allow scalability without
the need for expensive hardware scaling. For instance, if a particular
microservice experiences increased demand because of seasonal buying periods,
more resources can be efficiently devoted to it. If demand drops as the season
changes, the microservice can be scaled back, allowing resources or computing
power to be used in other areas.
Figure 4.6: Microservices in the supply chain system
Figure 4.7: client-server in a sub part of the system
• Data Query: in order to effectively apply MapReduce on the database we must use
effective, optimized queries. This could be done directly with MongoDB queries,
but instead we opt for a better-suited solution, which is the use of Pig.
Traditionally Pig only supported HBase, but now it supports MongoDB as well.
4.2.4 Broker
This concerns the distributed workflow manager of the system, which functions as
a broker architecture with three components:
4.2.4.1 Clients
- Publish messages
- Activate jobs
- Complete jobs
- Fail jobs
- Handle operational issues
- Update process instance variables
- Resolve incidents
4.2.4.2 Brokers
The Zeebe broker is the distributed workflow engine that tracks the state of
active process instances. Brokers can be partitioned for horizontal scalability
and replicated for fault tolerance; a Zeebe deployment often consists of more
than one broker. It is important to note that no application business logic lives
in the broker. Its only responsibilities are: processing commands sent by
clients, storing and managing the state of active process instances, and
assigning jobs to job workers. Brokers form a peer-to-peer network in which there
is no single point of failure. This is possible because all brokers perform the
same kind of tasks, and the responsibilities of an unavailable broker are
transparently reassigned in the network.
4.2.4.3 Exporters
The exporter system provides an event stream of state changes within Zeebe. This
data has many potential uses, including but not limited to: monitoring the
current state of running process instances, analysis of historic process data for
auditing, business intelligence, etc., and tracking incidents created by Zeebe.
The exporter includes an API you can use to stream data into a storage system of
your choice. Zeebe includes an out-of-the-box Elasticsearch exporter, and other
community-contributed exporters are also available.
4.2.5 MVVM
Used in the mobile app, it has three layers: the View, the ViewModel and the
Model.
Figure 4.9: zeebe broker architecture
• View: the UI of the mobile app. The view will be similar for both shipper and
transporter, with slight additional options; it observes changes in the ViewModel
and updates the display accordingly.
• Model: the model represents the scheme of the data of our application; there
are various data classes involved in the application.
• ViewModel: the central layer of the pattern; it takes data gathered from the
various APIs and converts it into data that follows the model's scheme.
4.2.6 MVC
Similar to MVVM, this will be used for our web app. In MVC, the controller is the
entry point to the application, while in MVVM, the view is the entry point to the
application.
Conclusion
In conclusion, the architecture of a distributed microservice application in the
supply chain context is a powerful tool for managing complex supply chain
processes. By breaking down the application into smaller, more manageable
microservices, it becomes easier to maintain and scale the system as needed.
Additionally, the use of distributed architecture helps to ensure that the
system is fault-tolerant and can recover quickly from failures. The
distributed microservices architecture breaks down the system into
independently deployable and scalable microservices. Each microservice
focuses on a specific business capability, such as inventory management, order
processing, logistics, or analytics. This modular approach allows for easier
development, testing, and maintenance of individual services while enabling
rapid adaptation to evolving business requirements.