HELORS 2013 Book of Proceedings
1963-2013: 50th Anniversary

Gkoutzioupas S. | Fulfillment Request Management (The approach)
Abstract
In this paper we introduce the term FRM (Fulfillment Request Management). FRM offers a unified approach to implementing an SOA in a BSS/OSS environment, integrating BSS with OSS and handling (1) orders, (2) events, and (3) processes, so that systems such as the ESB, Order Management, and Business Process Management can be implemented under a unified architecture and a unified implementation. We assume that all of the above are 'requests'; depending on the system we want to implement, a request can be an event, an order, a process, and so on. So, instead of N systems, we have one system that covers all of the above (ESB, Order Management, BPM, etc.). With FRM we gain certain advantages: (1) adaptation, (2) interoperability, (3) reusability, (4) fast implementation, and (5) easy reporting. In this paper we present a set of the main principles for building an FRM system.
KEYWORDS
SOA, Application mediation, ESB, BPM, Order Fulfillment, Service
1. INTRODUCTION
BSS/OSS
Operations support systems (OSS) are computer systems used by telecommunications service providers. The
term OSS most frequently describes "network systems" dealing with the telecom network itself, supporting
processes such as maintaining network inventory, provisioning services, configuring network components,
and managing faults. The complementary term, business support systems or BSS, typically refers to
“business systems” dealing with customers, supporting processes such as taking orders, processing bills, and
collecting payments. The two systems together are often abbreviated with the term BSS/OSS.
2. SOA
An SOA can be described as a composite layered architecture. Let us describe each layer in greater detail and discuss its composition.
Layer 1: Operational systems layer. This consists of existing custom-built applications, otherwise called
legacy systems, including existing CRM and ERP packaged applications, and older object-oriented system
implementations, as well as business intelligence applications. The composite layered architecture of an
SOA can leverage existing systems and integrate them using service-oriented integration techniques.
Layer 2: Enterprise components layer. This is the layer of enterprise components that are responsible for
realizing functionality and maintaining the QoS of the exposed services. These special components are a
managed, governed set of enterprise assets that are funded at the enterprise or the business unit level. As
enterprise-scale assets, they are responsible for ensuring conformance to SLAs through the application of
architectural best practices. This layer typically uses container-based technologies such as application
servers to implement the components, workload management, high-availability, and load balancing.
Layer 3: Services layer. The services the business chooses to fund and expose reside in this layer. They can
be discovered or statically bound and then invoked, or possibly choreographed into a composite service.
This service exposure layer also provides the mechanism to take enterprise-scale components, business-unit-specific components, and in some cases project-specific components, and externalize a subset of their
interfaces in the form of service descriptions. Thus, the enterprise components provide service realization
at runtime using the functionality provided by their interfaces. The interfaces are exported as service
descriptions in this layer, where they are exposed for use. They can exist in isolation or as a composite
service.
Layer 5: Access or presentation layer. Although this layer is usually out of scope for discussions around a
SOA, it is gradually becoming more relevant. We depict it here because there is an increasing convergence of
standards, such as Web Services for Remote Portlets Version 2.0 and other technologies, that seek to
leverage Web services at the application interface or presentation level. You can think of it as a future layer
that you need to take into account for future solutions. It is also important to note that SOA decouples the
user interface from the components, and that you ultimately need to provide an end-to-end solution from
an access channel to a service or composition of services.
Layer 6: Integration (ESB). This layer enables the integration of services through the introduction of a
reliable set of capabilities, such as intelligent routing, protocol mediation, and other transformation
mechanisms, often described as the ESB. Web Services Description Language (WSDL) specifies a binding,
which implies a location where the service is provided. On the other hand, an ESB provides a location
independent mechanism for integration.
Layer 7: QoS. This layer provides the capabilities required to monitor, manage, and maintain QoS such as
security, performance, and availability. This is a background process operating through sense-and-respond
mechanisms and tools that monitor the health of SOA applications, including the all-important standards
implementations of WS-Management and other relevant protocols and standards that implement quality of
service for a SOA.
With SOA:
1. We can implement the integration required between BSS and OSS.
2. We can define the business processes as described in the BSS.
3. We can monitor all the activities and operations performed in BSS/OSS.
In SOA each layer is built from a separate system, and integration between these systems is required.
This means that we need to go through a process of unifying different systems that are built under
different rules. Since these systems can also stand alone in the company, their union requires subsystems
to play the role of adapters between them. These systems have to expose agnostic and generic interfaces
in order to communicate with each other, since SOA's major rules are loose coupling and reuse. The
agnostic and generic interfaces sometimes require tedious and extensive implementation. So we realise
that the integration of these systems is often not a simple piece of work.
SOA also provides loose coupling across all the services, but this also makes it more difficult to trace
problems.
However, a closer investigation shows that all these systems have a common feature: the "initial
request" (we assume that everything is a request). Depending on the system that processes the request, it
can be an event, a service, a message, or data. So the request is:
1. data, when the system performs transformation;
2. a message, when the system performs routing;
3. an event, when the system performs monitoring;
4. a service, when the system performs fulfillment.
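As an illustration only (this sketch is ours, not part of the original paper; the names RequestRole and handle are hypothetical), the classification above can be expressed as a single dispatch point in Python:

from enum import Enum, auto
from typing import Any, Dict

Request = Dict[str, Any]  # a request is just data until a system assigns it a role

class RequestRole(Enum):
    DATA = auto()      # transformation
    MESSAGE = auto()   # routing
    EVENT = auto()     # monitoring
    SERVICE = auto()   # fulfillment

def handle(request: Request, role: RequestRole) -> str:
    # One entry point; the role decides which kind of processing applies.
    actions = {
        RequestRole.DATA: "transform",
        RequestRole.MESSAGE: "route",
        RequestRole.EVENT: "monitor",
        RequestRole.SERVICE: "fulfill",
    }
    return f"{actions[role]}: {request}"

print(handle({"id": 1, "payload": "new DSL line"}, RequestRole.SERVICE))

The point of the sketch is that the four "systems" collapse into one entry point whose behaviour is selected by the role assigned to the request.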
3. FRM
Analyzing the systems and the layers inside SOA, a question arises: why have all these different systems?
All the above layers of SOA can be implemented with a unified approach.
With FRM we have a common approach to implementing ESB, BPM orchestration, and mediation functionalities. We
do not categorize them as such; we have only requests and the fulfillment of these requests. FRM is
one system, with different functions and features according to the requirements referred to each time.
With FRM we see the running of the business in the light of requests. These requests can be either
external (e.g. made by a customer to the BSS) or internal (e.g. made by a sales partner to the BSS). SOA, as
mentioned, is by definition mainly service oriented, while FRM is request oriented. A request can be
anything. We can implement SOA using the FRM approach.
Using a single system (and architecture), the advantages and contributions of FRM are:
1. Instead of choosing among N different systems, our choice becomes simple and is reflected in one
single system. The market offers many ESB/SOA systems, many order-entry systems, many BPM systems,
and so on, and finding the optimal combination of these is a tedious, time-consuming, and costly process.
Having one system greatly narrows the search criteria and eases finding the most appropriate solution.
2. Cost reduction.
a. Usage cost. There is a reduction in the cost of use, since instead of 3 or 4 systems we have
one. The company is therefore not required to buy licenses for all these systems, but only for one.
b. Operation cost. It is easier to manage and operate one system than to run N systems.
Furthermore, it requires fewer people to operate one system than N systems.
c. Administration cost. It is easier to administer one system than N systems, and fewer people
are needed to cover this need.
3. Easy adaptation between systems, because we are talking about one and the same system. The N systems
under the umbrella of FRM communicate fully with each other, and the connections are intact and unified.
4. Interoperability between processes and services. FRM is not just one system, but a conceptual
approach in which the hosted processes, services, events, and orders are fully synchronized with each
other.
5. Reusability. Many of the FRM entities are based on reuse, depending on usage. The single-system
concept alone gives us the possibility of reusing FRM components either in processes or in services, etc.
6. Problems can be traced easily, since there is only one system to search.
7. Fast implementation. The concept of a common approach and a common architecture creates standards
that help with easy and fast development. Instead of the N systems referred to, with FRM we have a
common development philosophy and operation. Consequently, this reduces the development time of the
individual implementations that may be required.
8. Easy export of reports and KPIs. Exporting reports from various systems, as well as cross-checking
between those systems, is a painful but necessary process in an enterprise. Through reports we test the
systems and produce the data needed by the CRM, and then by the management of the company, for various
purposes: economic, strategic, etc. With FRM the reports come out of one system, and there is no
complexity in combining reports from N systems. Reports in FRM are exported through a single common
data model, which makes reporting easy to use and quick to implement, with a low probability of error.
[Figure: The FRM architecture. A Receiver accepts requests over a WS interface and converts the input data model into a canonical data model; the FRM Core applies (1) business rules, (2) routing rules, and (3) orchestration rules; Fulfillment Process Adapters (FPAs) produce the output data model. Notes on the figure: (1) all adapters are registered upon instantiation and may stay asleep or be triggered at the start; (2) an adapter is either automatic or manual; (3) orchestration rules may also exist inside each adapter.]
FRM is an architecture which can be divided into three entities, as illustrated in the figure above.
1. The Receiver, which is responsible for accepting the incoming data regardless of what they are (order,
event, process).
2. The FRM Core, the centralized entity within which all the business, routing, and orchestration rules
reside.
3. The Fulfillment Process Adapters (FPAs), which interconnect the third-party systems (e.g. an interface
with an ERP system). Orchestration rules can also be defined within the FPAs, affecting the given FPA or
another one; the reason is that we want to be able to extend the orchestration outside the core. FPAs can
be either in standby mode or pre-activated, depending on the rules defined in the orchestration. The FPAs
operate simultaneously, synchronously or asynchronously, and interact according to the orchestration we
define.
FRM uses a normalized data model. Normalization of the data is a necessary step in implementing a
relational database, as it results in:
a. using all the data needed by the system;
b. organizing the data in a form in which each datum exists in one and only one place;
c. visualizing the relationships between the entities of interest to the system.
When data enter the Receiver, they are converted to the normalized model, so that all operations and
processes within the FRM Core are performed on a single data model. The normalized model is converted by
the Fulfillment Process Adapters into the model required by the respective third-party system.
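A minimal sketch of this pipeline follows (our illustration, not the paper's implementation; the class and method names Receiver, FPA, FRMCore, normalize, fulfill and process are invented, and the canonical model is simplified to a dictionary):

from typing import Any, Callable, Dict

Request = Dict[str, Any]  # the canonical (normalized) data model, simplified

class Receiver:
    """Converts an incoming payload into the canonical model."""
    def normalize(self, payload: Dict[str, Any]) -> Request:
        return {"type": payload.get("type", "order"), "body": payload}

class FPA:
    """Fulfillment Process Adapter: converts the canonical model into the
    format required by one third-party system (e.g. an ERP)."""
    def __init__(self, name: str, to_target: Callable[[Request], Any]):
        self.name, self.to_target = name, to_target
    def fulfill(self, req: Request) -> Any:
        return self.to_target(req)

class FRMCore:
    """Holds the rules; here only one routing rule: dispatch by request type."""
    def __init__(self) -> None:
        self.adapters: Dict[str, FPA] = {}  # adapters register on instantiation
    def register(self, fpa: FPA, handles: str) -> None:
        self.adapters[handles] = fpa
    def process(self, req: Request) -> Any:
        return self.adapters[req["type"]].fulfill(req)

# One core handles an "order" and an "event" alike; only the FPA differs.
core = FRMCore()
core.register(FPA("erp", lambda r: {"erp_order": r["body"]}), handles="order")
core.register(FPA("monitor", lambda r: {"alert": r["body"]}), handles="event")
print(core.process(Receiver().normalize({"type": "order", "id": 42})))

The design choice the sketch highlights is that adding a new integration means registering one more FPA, not deploying another system.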
4. CONCLUSIONS
Summing up all the above, to the question of why someone should choose an FRM system the answer is:
an FRM system is the interconnection of requests and procedures within the company. With FRM we
see the running of the business in the light of requests, external or internal depending on the type of
request. As we have mentioned, FRM is not simply a system, but a concept, an approach, and an architecture
designed to process requests under common rules, creating a powerful request-management mechanism within
the broader orchestration of processes and requests.
REFERENCES
J. P. Reilly and M. J. Creaner, 2005 NGOSS distilled: The essential guide to next generation telecoms management, 1st
ed., The Lean Corporation, Cambridge University Press
Thomas Erl 2005 Service-Oriented Architecture: Concepts, Technology and Design, Prentice Hall PTR, ISBN 0-13-185858-
0
Grace A. Lewis 2013 Is SOA Being Pushed Beyond Its Limits? ACSIJ Advances in Computer Science: an International
Journal, Vol. 2, Issue 1, No. , 2013
G. M. Tere and B. T. Jadhav 2010 Design Patterns for Successful Service Oriented Architecture Implementation BIJIT –
BVICAM’s International Journal of Information Technology Vol. 2 No. 2; ISSN 0973 – 5658
Radu Stefan MOLEAVIN. 2012. SOA - An Architecture Which Creates a Flexible Link between Business Processes and IT .
Database Systems Journal vol. III
Sheikh Muhammad Saqib, Muhammad Zubair Asghar, Shakeel Ahmad, Bashir Ahmad and Muhammad Ahmad Jan
2011 Framework for Customized-SOA Projects (IJCSIS) International Journal of Computer Science and Information
Security, Vol. 9, No. 5
Xiaoguang Wang, Hui Wang, Yongbin Wang 2010 A Unified Monitoring Framework for Distributed Environment
Intelligent Information Management, 2010, 2, 398-405
Jesús Soto Carrión, Lei Shu, Elisa Garcia Gordo 2011 Discover, Reuse and Share Knowledge on Service Oriented
Architectures International Journal of Artificial Intelligence and Interactive Multimedia, Vol. 1, Nº 4.
Faîçal Felhi, Jalel Akaichi 2012 A new approach towards the self-adaptability of Service-Oriented Architectures to the
context based on workflow (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 3, No.
12
Shirazi H.M., Fareghzadeh N. and Seyyedi A. 2009 A Combinational Approach to Service Identification in SOA Journal of
Applied Sciences Research, 5(10): 1390-1397
Paolo Malinverno, Janelle B. Hill 2007 SOA and BPM Are Better Together Gartner ID Number: G00145586
Choi, Jae, Nazareth, Derek L. and Jain, Hemant K. 2010 Implementing Service-Oriented Architecture in Organizations
Journal of Management Information Systems Vol. 26 No. 4 , pp. 253 –286
Haeng-Kon Kim 2008 Modeling of Distributed Systems with SOA & MDA IAENG International Journal of Computer
Science, 35:4, IJIS_35_4_10
Aarti Karande, Milind Karande and B.B. Meshram 2011 Choreography and Orchestration using Business Process
Execution Language for SOA with Web Services IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 2
Tomasz Górski 2012 Architectural view model for an integration platform Journal of Theoretical and Applied Computer
Science Vol. 6, No. 1, 2012, pp. 25-34
Iulia SURUGIU 2012 Integration of Information Technologies in Enterprise Application Development Database Systems
Journal vol. III
Christian Emig, Jochen Weisser, Sebastian Abeck: Development of SOA-Based Software Systems – an Evolutionary
Programming Approach, International Conference on Internet and Web Applications and Services ICIW’06, February
2006
J. Král and M. Zemlicka. Implementation of business processes in service-oriented systems. In Proceedings of the 2005 IEEE
International Conference on Services Computing, volume II, pages 115–122, Los Alamitos, CA, USA, 2005. IEEE Computer
Society
J. Král and M. Zemlicka. Crucial patterns in service-oriented architecture. In Proceedings of the ICDT 2007 Conference, page
24, Los Alamitos, CA, USA, 2007. IEEE CS Press.
Petridis I., Rokou E., Kirytopoulos K. | Reactive Scheduling in Practice
Abstract
Uncertainty is inherent in project management; most, if not all, project schedules tend to get disrupted and are
therefore in need of rescheduling in order to continue. The use of normal scheduling methods in such cases tends to
create new schedules that are either too different from the baseline schedule or relatively unstable under further
disruptions.
The objective of this paper is to propose a reactive scheduling process that may be used to revise or re-optimize a
previously developed baseline schedule when unexpected events occur, taking into consideration the existing schedule
and the work already done or currently in progress. The proposed approach is based on a hybrid algorithm that
combines genetic algorithms with simulated annealing and particle swarm optimization and aims at providing quick
response and multiple solution scenarios to the project manager so as to efficiently handle the experienced or
forecasted disruptions.
KEYWORDS
1. INTRODUCTION
The vast majority of the research efforts in project scheduling over the past several years has concentrated
on the development of exact and suboptimal processes for the generation of a baseline schedule assuming
complete information and a deterministic environment (Herroelen and Leus 2004). In practice, during
execution, projects are often subject to considerable uncertainty, which may lead to numerous minor or
major schedule disruptions.
In project scheduling, uncertainty can take many different forms. Activity duration estimates may be poor,
resources may become unavailable (e.g. break down, illness, etc.), work may be interrupted due to weather
conditions, new unanticipated activities may be added due to changes in project’s scope, etc. All of these
types of uncertainty may result in the violation of the project baseline schedule. In general, project
management wants to avoid these schedule deviations. This can be achieved by generating a baseline
schedule in a proactive way, trying to anticipate certain types of disruptions so as to minimize their effect if
they occur. If the schedule still breaks down despite these proactive planning efforts, a reactive
scheduling policy is needed to repair the infeasible schedule (Deblaere et al. 2011).
The objective of this paper is to propose a reactive scheduling process that may be used to revise or re-
optimize a previously developed baseline schedule when unexpected events occur. More precisely, given a
baseline schedule that has been followed and updated with actual data by the project manager in a
timely manner, we assume that at a specific point in time, one or more disruptions on the schedule have
been noticed or even forecasted due to new data and/or changes of one or more environmental variables.
For example, a shortage of materials is expected due to upcoming strikes. In this case, the developed
baseline schedule becomes infeasible during project execution, due to the occurrence of one or more
resource or activity disruptions. Therefore, we need a reactive policy that dictates how to revert to a
feasible schedule that deviates as little as possible from the original baseline and resolves the schedule
infeasibilities caused by the disturbances that occurred during schedule execution.
This paper describes a new heuristic for reactive project scheduling that may be used to 'repair' multi-mode
resource-constrained project baseline schedules, with variable resource availabilities and requirements, that
suffer from multiple activity-duration and/or resource disruptions during project execution. The aim is to
generate a new schedule that is adapted to the emerging situation, namely the time and resource
disruptions, not only taking the baseline schedule into consideration but also trying to minimize the
deviations from it. The proposed approach is based on a genetic algorithm that moderates the schedule
generation process and aims at providing quick response and multiple solution scenarios to the project
manager, so as to efficiently handle the experienced or forecasted disruptions. Finally, we discuss the
computational results obtained by applying the proposed approach on a set of randomly generated project
instances and on an illustrative example based on an actual construction project.
More specifically, the rest of this paper is organized as follows: in Section 2, a brief review of the problem's
domain and the solutions proposed in the literature is presented. In Section 3, the proposed method is
analyzed in depth. In Section 4, the design of the experiment used to verify the applicability and effectiveness
of the proposed method is presented, along with the preliminary computational results. Section 5
concludes the paper.
2. LITERATURE REVIEW
The classical Resource-Constrained Project Scheduling Problem (RCPSP) has been studied and developed
extensively over the last decades (Demeulemeester and Herroelen 2002). In the RCPSP we have a project of n
real activities that represent actual work, plus two dummy activities that represent the start and the finish of the
project. This specific problem is constrained both by time and by resource availability.
The time constraints are created by the relationships between the activities and the durations of the activities.
For example, when an activity j has a finish-to-start relationship with an activity i, then j cannot begin before
i has finished. Therefore, if activity i has a duration of d and has started at time Si, then activity j cannot be
scheduled before Si + d.
The resource constraints stem from the relation between the availability of a resource type at a given
time and the resource demand of all the activities being executed at that time under a specific project
schedule. For example, suppose a resource k has an availability of Rkt at time t and an activity j requires rkj
units of that resource at that time. Activity j can be scheduled at time t only if Rkt ≥ rkj, after the demand
of the other activities in progress at t is accounted for.
The RCPSP is solved keeping in mind these two constraint types, in order to create an activity schedule
for a specific project. Usually the variables of the constraints are deterministic, having been estimated up
to a specific error margin by the project management. Because the variables of duration, resource need, and
resource availability are based on estimation, they may change during the project span. Changes like these
create disruptions in the flow of the schedule that must be addressed quickly in order for the project to
resume.
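To make the two constraint types concrete, the following is a small feasibility check for a given schedule (our sketch, not from the paper; constant resource availabilities are assumed for brevity, whereas in general Rkt may vary over time):

def is_feasible(start, dur, precedence, demand, avail, horizon):
    # Time constraints: for each finish-to-start pair (i, j),
    # j may not start before i finishes: start[j] >= start[i] + dur[i].
    for i, j in precedence:
        if start[j] < start[i] + dur[i]:
            return False
    # Resource constraints: at every time t, the summed demand of the
    # activities in progress may not exceed the availability of resource k.
    for t in range(horizon):
        running = [a for a in start if start[a] <= t < start[a] + dur[a]]
        for k, cap in avail.items():
            if sum(demand[a].get(k, 0) for a in running) > cap:
                return False
    return True

start = {1: 0, 2: 0, 3: 4}          # activity -> start time
dur = {1: 4, 2: 3, 3: 2}
precedence = [(1, 3)]               # activity 3 cannot start before 1 finishes
demand = {1: {"crew": 2}, 2: {"crew": 1}, 3: {"crew": 2}}
print(is_feasible(start, dur, precedence, demand, {"crew": 3}, horizon=10))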
Reactive scheduling is the use of a reactive procedure in order to create a new schedule that is feasible
after having been adapted to the new information given by the disruptions (Deblaere et al. 2011). There are
multiple types of reactive scheduling, such as schedule repair and rescheduling (Herroelen and Leus 2004).
Schedule repair is the easiest method of reactive scheduling, which involves simple control rules like the
right-shift rule ((Sadeh et al. 1993), (Smith 1995)), where the activities are simply shifted to future dates to
accommodate the changes caused by the disruptions, without any additional change to optimize the
schedule of the remaining activities. This method can lead to poor results, as it does not re-sequence
activities at all. Rescheduling, on the other hand, generates a totally new schedule based on the newly
emerged circumstances. As a result, the new schedule may differ considerably from the baseline schedule,
which can raise issues related, for example, to previously scheduled deliveries of materials, rentals of
equipment, and the usage of external sub-contractors. Frequent rescheduling can result in
instability and lack of continuity in detailed plans. For that reason, many researchers prefer reactive
procedures that change schedules locally over global regeneration of the schedule.
A well-accepted approach, focusing on processing-time uncertainties, refers to the development of
efficient and effective procedures that can be used during project execution to repair the initial project
schedule when needed (Van de Vonder et al. 2007). The four reactive procedures are: priority list
scheduling, fixed resource allocation, sampling, and weighted earliness-tardiness (WET).
Priority list scheduling relies on two distinct priority rules, a static priority rule and a dynamic priority
rule, to create the corresponding list. These rules are used to order the activities of a project into a list that is in
turn used as the order in which the activities are to be scheduled. The Fixed Resource Allocation approach relies on
the procedure developed by Artigues et al. (2003) and generates a feasible resource flow network, which
shows how the resources are passed on among the activities, by extending a parallel schedule generation
scheme. The Sampling approach consists of two reactive scheduling schemes that rely on different priority
lists combined with different schedule generation schemes, basic sampling and time-window sampling. The
heuristic WET procedure is a customized version of a population-based iterated local search algorithm for
the weighted earliness-tardiness RCPSP with minimum and maximum time lags described in Ballestín and
Trautmann (2008), itself an adapted version of the procedure based on the Iterated Local
Search metaheuristic (Lourenço et al. 2001), that omits some of the original features in order to reduce the
computational requirements of the algorithm.
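As background for these procedures, a priority list becomes a schedule through a schedule generation scheme. The following is a minimal serial SGS over a priority list (our sketch, again with constant resource availabilities): each activity is taken in list order and started at the earliest time that satisfies both precedence and resource limits.

def serial_sgs(priority_list, dur, preds, demand, avail, horizon=1000):
    start, usage = {}, {k: [0] * horizon for k in avail}
    for j in priority_list:
        # Earliest precedence-feasible start, then shift right until the
        # activity also fits within the resource availabilities.
        t = max((start[i] + dur[i] for i in preds[j]), default=0)
        while any(usage[k][u] + demand[j].get(k, 0) > avail[k]
                  for u in range(t, t + dur[j]) for k in avail):
            t += 1
        start[j] = t
        for k in avail:
            for u in range(t, t + dur[j]):
                usage[k][u] += demand[j].get(k, 0)
    return start

preds = {1: [], 2: [], 3: [1]}
print(serial_sgs([1, 2, 3], {1: 4, 2: 3, 3: 2}, preds,
                 {1: {"crew": 2}, 2: {"crew": 2}, 3: {"crew": 2}},
                 {"crew": 3}))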
Reactive procedures can also be tree-based exact search techniques instead of mathematical algorithms
(Deblaere et al. 2011), similar to those devised for solving the basic MRCPSP.
In this class falls the branching scheme procedure proposed by Deblaere et al. (2011), which is based on mode
and delaying alternatives. In the same class are the lower-bound method, where after a first feasible solution
has been found (an upper bound) we work backwards through the search tree to find a new path that will lead to a
better solution (a lower bound), and the usage of dominance rules, such as data reduction (Sprecher et al. 1997),
the left-shift rule (Hartmann and Drexl 1998; Demeulemeester and Herroelen 1992), the cutset rule
(Demeulemeester and Herroelen 1992, 2000; Hartmann and Drexl 1998), and resource infeasibility
(Deblaere et al. 2008). Finally, there are the search strategies, such as regular branch-and-bound algorithms,
branch-and-bound with tabu search (Glover 1989, 1990), iterative deepening A* (IDA*) (Korf 1985), and
binary search procedures.
Summarizing, the solution methods that have been most commonly used for reactive scheduling are either
based on priority rules or on algorithms computing locally optimal solutions, in order to find a feasible new
schedule. In this paper we introduce a hybrid genetic-algorithm-based procedure in order to find the
best overall solution for the new schedule.
3. SUGGESTED METHOD
The solution method (Figure 1) introduced in this paper has three distinct stages of operation:
1. Creation of a “subproject”
2. Hybrid Genetic algorithm (GA) on the “subproject”
3. Scheduling of the “subproject” and optimization by comparing it to the baseline.
Since we are working on a reactive procedure, it is logical to infer that part of the original baseline schedule
will already have been completed. Because of that, before starting the process of scheduling the
unfinished activities, it is required to remove from the project the activities that have already been completed,
either fully or partially. This raises the question: "what will happen to the partially completed activities?" The
"subproject" is actually the answer to this question. The project manager is responsible for deciding
whether the activities that are partially completed will be continued immediately after the disruptions, or
will be split and can therefore be scheduled at a later date on a best-fit basis. Although splitting an activity
might produce better results mathematically, it might create problems for the project itself in
practical situations (e.g. changing worker schedules, storing materials for a longer period, etc.); the decision
should therefore be left to the project manager, and moreover it should not be a per-project decision but a
per-activity one, to accommodate the specific characteristics of each activity. After the "subproject" has been
created, the newly created "input" is passed to the hybrid GA in order to search for a near-optimal solution. It
should be noted that in reactive scheduling there are two optimization objectives: the minimization of the total
duration of the remaining project (the "subproject") and the minimization of the deviations from the baseline
schedule.
The hybrid GA used in this paper is based on the algorithm proposed by Hartmann (1998). The
chromosome of the GA is an (n+1)-sized vector, where n is the number of activities in the "subproject". The
first gene (position 0) of the chromosome represents the solution algorithm (GA, SA, PSO) that will be used
for the rest of the activity list. Genes 1 to n of the chromosome each represent an activity, and the sequence
of the genes is the scheduling priority of the activities. We also use the 2-point crossover, as it has
been found to provide better shuffling of the genes (Hartmann, 1998). For the first generation of parents
for the algorithm, we use randomly generated feasible schedules. For the selection of chromosomes
between parents and offspring, fitness-based selection is used. The fitness is calculated as a function of
TN, TB and n, where TN is the new finish time, TB is the finish time of the baseline schedule, and n is the number of
activities that have been rescheduled more than half their duration away from their baseline schedule.
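A sketch of this chromosome encoding and the 2-point crossover follows (our illustration). The crossover keeps each activity exactly once, in the style of Hartmann's permutation-based GA; the fitness combination shown is an assumption for illustration only, since the paper's exact formula is not reproduced here:

import random

ALGORITHMS = ["GA", "SA", "PSO"]   # candidate solvers encoded in gene 0

def random_chromosome(activities):
    order = activities[:]
    random.shuffle(order)                       # a random priority list
    return [random.choice(ALGORITHMS)] + order  # gene 0 + genes 1..n

def two_point_crossover(mother, father):
    m, f = mother[1:], father[1:]
    p1, p2 = sorted(random.sample(range(len(m) + 1), 2))
    child = m[:p1]                                       # mother's head
    child += [a for a in f if a not in child][:p2 - p1]  # father's middle
    child += [a for a in m if a not in child]            # mother's tail
    return [mother[0]] + child                           # gene 0 from the mother

def fitness(TN, TB, n_displaced, weight=1.0):
    # Assumed form: penalize a later finish and displaced activities;
    # lower values are better here.
    return (TN - TB) + weight * n_displaced

mom = random_chromosome(list(range(1, 9)))
dad = random_chromosome(list(range(1, 9)))
print(two_point_crossover(mom, dad))
print(fitness(TN=50, TB=45, n_displaced=3))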
Figure 1 Flow of events of the proposed approach
4. EXPERIMENTAL RESULTS
The experimental verification of the suggested method was done using the datasets from PSPLIB (Kolisch and
Sprecher 1997). We experimented with the J30 and J60 sets of data, created random disruptions for the
projects (50% time disruptions and 30% resource disruptions), and then used the proposed method to
reschedule the projects. The baseline schedule of each project was created using the serial SGS and normal
priority rules. We used the sampling approach to find the best solution for each project. The next step is the
comparison of the results with those found by the best-in-class algorithms reported in the literature.
Furthermore, in Figures 2 and 3 an illustrative case before and after the application of the proposed method
is presented. It is a project with 30 activities and a baseline schedule with a makespan of 45 time units. At time
11, the schedule is disrupted. More specifically, at time 11 the project manager checks the progress of the
project and finds the following disruptions: activity 15 has a new duration of 12 time units instead of 10,
activity 21 has a new resource need of 4 units of resource 1 instead of 2, and activity 24 has a new duration
of 6 time units instead of 3. Furthermore, activities 2, 3, 6 and 8 have been completed, activities 1, 7, 9 and
11 are in progress, and the rest of the activities have not been started yet. After deciding that none of the
in-progress activities will be split, the proposed method is applied and gives the schedule shown in Figure 3.
The generated schedule has a makespan of x days and an average deviation of start times from the
corresponding ones in the baseline schedule of 20%.
5. CONCLUSIONS
Uncertainty is part of project management; there are always going to be disruptions in a project's schedule,
and the more complicated and lengthy the project, the bigger the consequences of these disruptions. For that
reason, the need for a fast and reliable reactive process that will generate schedules easily applicable in
practice is great. The applicability of a schedule generated to accommodate time and resource disruptions
is highly related to the degree to which already-made commitments, as reflected in the initial baseline
schedule, are taken into consideration. Furthermore, the minimization of the total project duration
remains of utmost importance, but it should not overpower the need for a minimal number of activities
scheduled far apart from their original scheduling times.
REFERENCES
Artigues, C., Michelon, P. and Reusser, S. (2003) Insertion techniques for static and dynamic resource-constrained
project scheduling. European Journal of Operational Research, 149(2), pp. 249-267.
Ballestín, F. and Trautmann, N. (2008) An iterated-local-search heuristic for the resource-constrained weighted earliness-
tardiness project scheduling problem. International Journal of Production Research, 46(22), pp. 6231-6249.
Deblaere, F., Demeulemeester, E. and Herroelen, W. (2008) Exact and heuristic reactive planning procedures for
multimode resource-constrained projects. Available at SSRN 1288546.
Deblaere, F., Demeulemeester, E. and Herroelen, W. (2011) Reactive scheduling in the multi-mode RCPSP. Computers &
Operations Research, 38(1), pp. 63-74.
Demeulemeester, E. and Herroelen, W. (1992) A branch-and-bound procedure for the multiple resource-constrained
project scheduling problem. Management science, 38(12), pp. 1803-1818.
Demeulemeester, E. and Herroelen, W. (2000) The discrete time/resource trade-off problem in project networks: a
branch-and-bound approach. IIE transactions, 32(11), pp. 1059-1069.
Demeulemeester, E. L. and Herroelen, W. (2002) Project scheduling: a research handbook, International series in
operations research & management science, Boston: Kluwer Academic Publishers.
Glover, F. (1989) Tabu search—part I. ORSA Journal on computing, 1(3), pp. 190-206.
Glover, F. (1990) Tabu search—part II. ORSA Journal on computing, 2(1), pp. 4-32.
Hartmann, S. and Drexl, A. (1998) Project scheduling with multiple modes: A comparison of exact algorithms. Networks,
32(4), pp. 283-297.
Herroelen, W. and Leus, R. (2004) Robust and reactive project scheduling: a review and classification of procedures.
International Journal of Production Research, 42(8), pp. 1599-1620.
Kolisch, R. and Sprecher, A. (1997) PSPLIB-a project scheduling problem library: OR software-ORSEP operations research
software exchange program. European Journal of Operational Research, 96(1), pp. 205-216.
Korf, R. E. (1985) Depth-first iterative-deepening: An optimal admissible tree search. Artificial intelligence, 27(1), pp. 97-
109.
Lourenço, H. R., Martin, O. C. and Stutzle, T. (2001) Iterated local search. arXiv preprint math/0102188.
Sadeh, N., Otsuka, S. and Schnelbach, R. (1993) Predictive and reactive scheduling with the Micro-Boss production
scheduling and control system. in Proceedings, IJCAI-93 Workshop on Knowledge-Based Production Planning,
Scheduling and Control.
Smith, S. F. (1995) Reactive scheduling systems. in Intelligent scheduling systems: Springer. pp. 155-192.
Sprecher, A., Hartmann, S. and Drexl, A. (1997) An exact algorithm for project scheduling with multiple modes.
Operations-Research-Spektrum, 19(3), pp. 195-203.
Van de Vonder, S., Ballestín, F., Demeulemeester, E. and Herroelen, W. (2007) Heuristic procedures for reactive project
scheduling. Computers & Industrial Engineering, 52(1), pp. 11-28.
Slave C. | Seismic Risk Assessment Using Mathematical Correlation and Regression
Camelia Slave
Faculty of Land Reclamation and Environmental Engineering Bucharest
Bdul Marasti 59, Bucharest, Romania
Abstract
The current methods for the design of structures under permanent, live, and climatic (wind, snow) loads assume
an elastic behaviour of the structure and a static action of the loads. The dynamic aspect of seismic action and the
inelastic behaviour of structures affected by major earthquakes require specific design methods, governed by
seismic design regulations. In Romania the field is covered by the Seismic Design Code, Part III: Provisions for the
seismic evaluation of existing buildings, indicative P 100-3/2008. The article presents a calculation model of body A,
a building of the Faculty of Land Reclamation and Environmental Engineering, Bucharest, as well as a correlation and
regression analysis of the mathematical results of the seismic evaluation, using the MATHCAD PROFESSIONAL software.
KEYWORDS
correlation function, regression function, seismic risk, seismic force.
1. INTRODUCTION
The case study presented below aims to establish the seismic risk class of an existing reinforced
concrete building. The building selected for evaluation is body A of the Faculty of Land Reclamation and
Environmental Engineering, University of Agronomical Sciences and Veterinary Medicine of Bucharest. The
building was constructed between 1968 and 1970 and has a resistance structure of reinforced concrete frames.
Designed according to the design concepts of that period, the building does not meet many of the
requirements of current seismic design codes (Fig. 1).
2. EXPERIMENT DESCRIPTION
Specifying the conditions of the seismic site. The building selected for evaluation is located in Bucharest. According
to P100-1/2006, the area is characterized by a peak ground acceleration ag = 0.24g for design and a control
(corner) period of the response spectrum Tc = 1.6 s.
The shape of the normalized elastic response spectrum for the horizontal components of the ground
acceleration for Bucharest is presented in Fig. 2.
Building characteristic data. The Faculty of Land Reclamation and Environmental Engineering is located in
the north of the town. In the same area are the Village Museum, the Free Press House, and the Romexpo Pavilion.
The building consists of three independent sections separated by expansion joints of about 50 mm. Body A
has been selected as the example for the seismic evaluation (Fig. 1). The assessed building consists of a
ground floor and 4 levels, with a total height of about 19 m. The non-structural compartmentalization walls
are made of masonry, the bricks being arranged in the "American" system: two separate longitudinal
lines of bricks laid on their lateral side and, from time to time, bricks laid across. The result is a wall with
lower weight than the classic solution, having acceptable thermal insulation properties, but much lower
mechanical and deformation properties. The foundations are of the isolated type (reinforced socket footings
and plain concrete blocks) under the columns, associated with a rectangular network of balancing beams
under the masonry walls of the basement. The foundation layer is a brown-yellow dusty clay of
plastic-consistent state, with a conventional pressure of 250 kPa at 2.00 m depth.
Regarding the reinforcement of the concrete frame elements, it should be noted that they were designed under
the "Normative for conditioning the design of constructions in seismic regions", P13-1963. Given the limited
knowledge of seismic engineering at the time, the sectional design efforts of beams and columns are associated
with a base shear force of 4.5% of the weight of the building. In addition, the detailing and reinforcement of the
concrete elements are strongly influenced by the requirements and design concepts of the "gravity" system of
STAS 1546-56. Thus, both plates and beams are reinforced in the "gravity" manner, with straight and inclined bars.
In terms of reinforcement, 27 different column sections have been identified. It is noted that the stirrup
spacing at the ends of the columns is higher than the values specified in current seismic design standards.
The structural elements are of B 200 concrete and smooth OL38 steel. The concrete quality, confirmed by a
limited number of non-destructive sclerometric tests, corresponds to a concrete class of C12/15.
The level of knowledge. Based on the information presented above, the appropriate level of knowledge
should be determined. P100-3/2008 defines three levels of knowledge:
- KL1: limited knowledge;
- KL2: normal knowledge;
- KL3: complete knowledge.
The selected level of knowledge and the allowed calculation method determine the value of the confidence
factor (CF).
As for the construction analysed: (a) the overall dimensions of the structure and of the structural elements are
available from the original plans, and the validity of these data was confirmed by random checks in the
field; (b) the composition of the structural elements and the reinforcement details are known from an incomplete
set of the original execution plans, and their validity was verified by limited field checks of the elements
considered most important; and (c) the mechanical characteristics of the materials are obtained from the
specifications of the original project and are confirmed by limited field tests. It was therefore agreed that the
appropriate knowledge level is KL2: normal.
According to Table 4.1 of P100-3, "any method of calculation, according to P100-1: 2006" is allowed, and the
confidence factor that will be used to establish the material characteristics of the existing structure is CF = 1.20.
Thus, for the calculation of the structural capacity (checked against the requirements), the average values of
resistance obtained from in-situ tests and from the original design specifications are divided by the confidence factor.
Qualitative assessment of the structure. Determination of the indicator R1. Determining "the degree of
fulfilment of the conditions of seismic structure, R1" aims to establish the extent of compliance with the general
rules for structures and for structural and non-structural elements that are stipulated by the current seismic
design code P100-1: 2006.
Body A of the F.I.F.I.M. building falls in seismic risk class RsIII: buildings which, under the effect of the design
earthquake, could suffer insignificant structural degradation but important non-structural degradation.
The degradation assessment. Determination of the indicator R2. The degradation of the structural
elements is quantified by calculating the value of the "degree of structural damage, R2". Its determination is
based on the scores given in Table B.3 in Appendix B of the code P100-3 for the different types of degradation
identified. Other types of degradation can further be considered through a reduction factor on R2.
The evaluation results in a "degree of structural damage" of R2 = 89 points. The seismic risk class
associated with the score of R2 is determined according to Table 2, which is a reproduction of Table 8.2 of the
P100-3/2008 code:
The new code P100-3/2008 provides three seismic evaluation methodologies for the assessment of constructions,
defined by the conceptual level of refinement of the calculation methods and the level of detail of the checking
operations.
According to P100-3/2008, the Level 1 methodology can be applied to regular reinforced concrete constructions,
with or without masonry infill walls, of up to three floors, located in seismic zones with values ag ≤ 0.12g.
Here, however, the Level 1 methodology is used as an exercise that allows a comparison with the results of the
other two approaches. Comparing the actual efforts with the acceptable ones yields different values of the
structural seismic insurance degree: R3N values associated with axial forces and R3V values associated with
shear forces. To establish the value of the normalized design acceleration, it is necessary to determine the
fundamental period of vibration of the structure. This is estimated using one of the simplified equations:
T = kT·H^(3/4) = 0.07 × 19^(3/4) ≈ 0.65 s or T = 0.1 × n = 0.1 × 5 levels ≈ 0.50 s (1)
Since the height of a level, 3.80 m, is significantly greater than the usual one for residential buildings or offices,
and considering the relatively small cross-sections of the columns, it was considered that the first equation
provides a value closer to the real one. The fundamental period corresponds to a normalized design acceleration of
β = 2.75. According to Table 6.1 of P100-3/2008, methodology Level 1, for reinforced concrete
structures the value of the behaviour factor is q = 2.5. Since the analysed building has a capacity of over 200
people on its total exposed area, it is of importance class II, characterized by an importance factor of 1.20. Since
the surface of the current floor is 690 square metres and the equivalent load is 1.10 t/m² for this type
of construction, a total mass of approximately 3,800 t results, with an equivalent static seismic force of:
Fb = γI · Sd(T1) · m · λ, with Sd(T1) = ag · β(T1) / q (2)
Fb = 1.20 × (0.24g × 2.75 / 2.5) × 3,800 t × 0.85 ≈ 10,000 kN (3)
The simplified vertical distribution of the seismic force is associated with an equivalent linear deformed shape, as follows.
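For illustration only (our sketch; the level masses and the resulting forces are invented, since Table 3 of the original is not reproduced here), the linear distribution assigns each level a share of the base shear proportional to the product of its mass and height:

def distribute_base_shear(f_base, heights, masses):
    # Linear (inverted-triangle) distribution: F_i proportional to m_i * z_i.
    weights = [m * z for m, z in zip(masses, heights)]
    total = sum(weights)
    return [f_base * w / total for w in weights]

# Five levels at a 3.80 m storey height and equal level masses:
z = [3.8 * (i + 1) for i in range(5)]
print([round(f) for f in distribute_base_shear(10000, z, [760] * 5)])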
The Level 1 methodology consists in the determination of the structural seismic insurance degree associated
with the shear forces in the vertical elements, using equation (8.1a) of P100-3/2008:

R3V = vadm / (q · vaverage) (4)
where vaverage represents the average tangential effort, calculated as the ratio of the shear force at a level to the
total area of the cross-sections of the columns at that level, and vadm represents the admissible reference value of
the tangential unit effort in the vertical elements. According to Annex B of P100-3, vadm = 1.4 fctd, where fctd is
the design tensile resistance of the concrete. Thus fctd = 0.67 N/mm² for concrete class C12/15 and, considering a
confidence factor of CF = 1.2, vadm = 0.93 N/mm². For each level of the structure, the following values of the
structural seismic insurance degree associated with shear forces result.
Table 4 Distribution by level of the structural seismic insurance degree R3V

Level "i"      Shear force at level (kN)   Total column area AC (m²)   Average tangential effort vmed (N/mm²)   R3V
4th floor      3341                        6.67                        0.50                                     0.74
3rd floor      6012                        6.67                        0.90                                     0.41
2nd floor      8012                        7.71                        1.04                                     0.36
1st floor      9341                        8.58                        1.09                                     0.34
Ground floor   10000                       9.84                        1.02                                     0.37
It is noted that, due to the changes in the cross-sections of the columns, the minimum structural seismic
insurance degree is recorded on the first floor, where R3V = 0.34.
The graphs are produced using the MATHCAD PROFESSIONAL software. They are obtained by
successive attempts on Table 3 (the distribution of seismic forces), and a simple correlation was established
between the seismic force at a level, considered as the variable Y, and the height of the level. In this case the
regression function is a straight line and the correlation ratio is rxy = 1, which means that there is a very
significant connection between the two variables.
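A minimal reproduction of this regression in code (our sketch; the force values are illustrative, since Table 3 is not reproduced, and a perfectly linear series is used so that rxy = 1 as in the paper):

import numpy as np

x = np.array([3.8 * (i + 1) for i in range(5)])         # level heights (m)
y = np.array([700.0, 1400.0, 2100.0, 2800.0, 3500.0])   # seismic force (kN)

slope, intercept = np.polyfit(x, y, 1)                  # straight-line regression
r_xy = np.corrcoef(x, y)[0, 1]                          # correlation coefficient
print(f"y = {slope:.1f}*x + {intercept:.1f}, r_xy = {r_xy:.2f}")  # r_xy = 1.00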
Figure 3 The regression function graph of seismic force and height level
Figure 4 The regression function graph of shear force level and height level
4. CONCLUSIONS
For the final evaluation of the structural safety of the existing building, the results achieved in each stage of the
evaluation process need to be consolidated:
1. In terms of the qualitative assessment, a "degree of fulfilment of the conditions of seismic structure" of R1 =
69.5 points results, corresponding to seismic risk class RsIII.
2. In terms of the assessment of the state of degradation, a "degree of structural damage" of R2 = 89
points results, corresponding to seismic risk class RsIII.
3. In terms of the analytical evaluation by calculation, the following values of the "structural seismic
insurance degree" result:
• Methodology Level 1: R3 = 34%, seismic risk class RsI
• Methodology Level 2: R3 = 39%, seismic risk class RsII
• Methodology Level 3:
a. linear static calculation: R3 = 46%, seismic risk class RsII;
b. nonlinear dynamic calculation: R3 = 44%, seismic risk class RsII.
In conclusion: the resistance structure of body A of the FIFIM building falls in seismic risk class RsII, which
includes constructions that, under the effect of the design earthquake, may suffer major structural
degradation, but for which the loss of stability is unlikely.
Therefore, it is mandatory to design a major intervention on the resistance structure in order to
enhance the safety of the building.
REFERENCES
[1] Armeanu I, Petrehuș V. (2006) Probabilities and statistics, Editura Matrix, București
[2] Alexe R, Călin A, Călin C, Nicolae M, Pop O, Slave C, (5-8 September 2006)- Earthquake risk reduction of university
buildings. Proceedings of European Conference on Earthquake Engineering and Seismology, Geneva, Switzerland,
paper#522
[3] Alexe R, Călin A, Călin C, Nicolae M, Pop O, Slave C, (5-8 September 2006)- Post – seismic Interventions to Buildings
and Balancing with the Environment. Proceedings of the European Conference on Environmental research and
Assessment, Bucharest, Romania, pp 146-188.
[4] Slave C. Seismic risk assessment of existing structures, PhD Thesis, UTCB, September 2010
[5] Cod de proiectare seismica– partea A III-A –Prevederi pentru evaluarea seismica a cladirilor existente indicativ P
100-3/2008 (Seismic Design Code, Part III-A-Provisions for seismic evaluation of existing buildings indicative P 100-
3/2008)
Pendaraki K., Spanoudakis N. | Constructing Portfolios Using Argumentation Based Decision Making and Performance Persistence of Mutual Funds
Abstract
This paper proposes an argumentation-based tool for the selection of mutual funds and, subsequently, the composition of
efficient portfolios based on the performance persistence of the mutual funds. Argumentation allows for combining
different contexts and preferences in a way that can be optimized. It allows for defining a set of different investment
policy scenarios and supports the investor/portfolio manager in composing efficient portfolios that meet his profile.
Moreover, the proposed methodological approach and the obtained portfolios are validated through their comparison
with the return of the Market Index and with portfolios obtained using a traditional performance index. The approach is
applied to data on Greek domestic equity mutual funds over the period 2000-2011, with encouraging results.
KEYWORDS
Decision making, Argumentation, Performance Persistence, Mutual Funds, Portfolio Management
1. INTRODUCTION
The traditional portfolio theory developed by Markowitz (1959) accommodates the portfolio selection
problem on the basis of the existing trade-offs between risk and return in the mean-variance context. On the
same mean-variance basis, several other approaches for the evaluation of fund portfolios have been
developed, including the Capital Asset Pricing Model (CAPM; Mossin, 1969), the Arbitrage Pricing Theory
(APT; Ross, 1976), single- and multi-index models, average correlation models, mixed models, utility models,
etc. (see Elton and Gruber, 1995). Many of these models were based on a unidimensional view of risk
and did not capture the complexity present in the data. The portfolio selection problem is
often addressed through a two-stage procedure. At the first stage, an evaluation of the available securities is
performed. This involves the selection of the most proper securities on the basis of the decision maker's
investment policy. At the second stage, the portfolio composition is performed, which involves the
computation of the proportion of the initial budget that should be allocated to the selected funds.
The present study extends our previous work (Pendaraki and Spanoudakis, 2012), which uses
argumentation-based decision making (Kakas and Moraitis, 2003) for selecting the proper securities, in our
case mutual funds (MFs). Our approach gives an investor/portfolio manager the opportunity to define
different investment scenarios according to his preferences, his attitude (aggressive or moderate), and the
financial environment (e.g. bull or bear market), in order to select the best mutual funds, which will
compose the final portfolios.
The contribution of this work is the validation of our hypothesis that composing portfolios while taking into
account the past performance of mutual funds is more efficient than our naïve approach. Firstly, we
confirmed our findings that there is statistically significant performance persistence for 1-year and 4-year
holding periods, through the "winner-winner, winner-loser" methodology developed by Brown and
Goetzmann (1995), Goetzmann and Ibbotson (1994), and Malkiel (1995). Then we used two different
strategies: the naïve portfolio with equal weights, and the portfolio in which participation is based on a fund's
past performance, which defines the magnitude of its participation in the final portfolio.
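For reference (our sketch of the standard test, not the authors' code), the Brown and Goetzmann (1995) persistence test works on counts of funds that are winners (above the median return) or losers in two successive periods:

from math import log, sqrt

def persistence_z(ww, wl, lw, ll):
    # Cross-product (odds) ratio: 1.0 means no persistence.
    odds = (ww * ll) / (wl * lw)
    se = sqrt(1 / ww + 1 / wl + 1 / lw + 1 / ll)
    return log(odds) / se    # approximately N(0, 1) under no persistence

# Illustrative counts: winner-winner, winner-loser, loser-winner, loser-loser.
print(round(persistence_z(ww=30, wl=15, lw=15, ll=30), 2))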
To illustrate the validity of the proposed approach we compare our findings with those using a traditional
performance index (the Sharpe index) and a broad domestic market index (the Athens Stock Exchange
General Index). To our knowledge, this study uses, for the first time, the combination of argumentation-
based decision making for selecting the proper funds and the “winner-winner, winner-loser” methodology
for composing efficient portfolios that outperform the market.
In what follows, we firstly present our data set, and then sections three and four discuss the proposed
methodology. Section five presents our obtained results, while section six concludes.
(1) Return of the funds: The return on a MF investment in a given time period is calculated by taking into
account the change in a fund’s net asset value. The fund’s return in period t is defined as follows:
Rpt = (NAVt + DISTt − NAVt-1) / NAVt-1 ,
where Rpt is the return of a mutual fund p in period t, NAVt is the closing net asset value of the fund on the
last trading day of the period t, NAVt-1 is the closing net asset value of the fund on the last trading day of the
period t-1, and DISTt is the income and capital distributions (dividend of the fund) taken during period t.
(2) The standard deviation is the most commonly used measure of variability. For a MF the standard
deviation is used to measure the variability of its daily returns and is defined as follows:
σ = √[(1/T) Σt (Rpt − R̄pt)²] ,
where σ is the standard deviation of the MF in period t, R̄pt is the average return in period t, and T is the number of observations (days) in the period for which the standard deviation is being calculated.
(3) The beta coefficient (β) is a measure of a fund’s risk in relation to the risk of the capital market. The β coefficient shows the sensitivity of a mutual fund’s value to the rises and falls of the financial market and
is defined as follows: β = cov (Rpt, RMt) / var (RMt), where cov (Rpt, RMt) is the covariance of the daily return of
a MF with the daily return of the market portfolio (Athens Stock Exchange), and var (RMt) is the variance of
the daily return of the market portfolio.
(4) The Sharpe index (Sharpe, 1966) is used to measure the expected return of a fund per unit of risk,
defined by the standard deviation. This measure is defined as the ratio (Rpt - Rft) / σ where Rft is the return
of the risk free portfolio in period t, which is calculated from the three month Treasury bill.
(5) The Treynor index (Treynor, 1965) is obtained by simply substituting volatility for variability in the
Sharpe index. This measure is defined as the ratio (Rpt - Rft) / β. The evaluation of MFs with these two indices
shows that a MF with higher performance per unit of risk is the best-managed fund, while a MF with lower
performance per unit of risk is the worst-managed fund.
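To make the five evaluation variables concrete, the following minimal Python sketch (ours, not the authors’ implementation) computes them for a single fund from its net asset value series; the array layout and the single risk-free rate are illustrative assumptions.

import numpy as np

def fund_metrics(nav, dist, market_returns, rf=0.0):
    """The five evaluation variables for one fund (illustrative sketch).

    nav            : net asset values at the end of periods 0..T
    dist           : distributions (dividends) paid in periods 1..T
    market_returns : market-portfolio returns for periods 1..T
    rf             : risk-free return, e.g. the three-month Treasury bill rate
    """
    nav, dist = np.asarray(nav, float), np.asarray(dist, float)
    rm = np.asarray(market_returns, float)
    # (1) R_pt = (NAV_t + DIST_t - NAV_{t-1}) / NAV_{t-1}
    r = (nav[1:] + dist - nav[:-1]) / nav[:-1]
    # (2) standard deviation: sqrt((1/T) * sum((R_pt - mean)^2))
    sigma = np.sqrt(np.mean((r - r.mean()) ** 2))
    # (3) beta = cov(R_pt, R_Mt) / var(R_Mt)
    beta = np.cov(r, rm, ddof=0)[0, 1] / np.var(rm)
    mean_r = r.mean()
    return {"return": mean_r, "sigma": sigma, "beta": beta,
            "sharpe": (mean_r - rf) / sigma,    # (4) Sharpe index
            "treynor": (mean_r - rf) / beta}    # (5) Treynor index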
The examined funds are classified in three homogeneous groups for each one of the aforementioned
variables. The three groups are defined according to the value of the examined variables for each MF. For
example, we have funds with high, medium and low performance (return), funds with high, medium and
low beta coefficient, etc. Thus, we have 180 groups (12 years x 3 groups x 5 variables) in total.
Briefly, an argument attacks (or is a counter-argument to) another when the two derive contrary conclusions. These are conflicting arguments. A conflicting argument is admissible if it counter-attacks all the arguments that attack it. It counter-attacks an argument if it takes along priority arguments and makes itself at least as strong as the counter-argument.
In defining the decision maker’s theory we specify three rule levels. The first level defines the (background
theory) rules that refer directly to the subject domain, called the Object-level Decision Rules. In the second
level we have the rules that define priorities over the first level rules for each role that the decision maker
can assume or context that he can be in (including a default context). Finally, the third level rules define
priorities over the rules of the previous level (which context is more important) but also over the rules of
this level in order to define specific contexts, where priorities change again.
To capture the experts’ knowledge we consulted the literature, as well as the empirical results of applying that knowledge in the Greek market. We identified several contexts, i.e. two types of investors, aggressive and moderate, an investor policy (selection of portfolios with high performance per unit of risk), and two market contexts, bull (rising market) and bear (declining market).
In a bull market context, funds which give larger returns in a rising market (those with high systematic or total risk) are selected. On the other hand, in a bear market, funds with lower risk, whose returns change more smoothly than the market (those with low systematic and total risk), are selected. An aggressive investor places his capital in funds with high return levels and high systematic risk (risking for higher returns). A moderate investor prefers funds with high return levels and low or medium systematic risk.
Some types of investors select portfolios with high performance per unit of risk. Such portfolios are
characterized by high reward-to-variability ratio (Sharpe ratio) and high reward-to-volatility ratio (Treynor
ratio). These portfolios are the ones with the best managed funds.
The specific contexts are formed by combining the above roles and contexts. Priorities over possibly
conflicting rules are defined. For the details of the combined contexts the reader can consult Pendaraki and
Spanoudakis (2012).
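As an illustration only, the context-dependent selection logic described above can be sketched in Python as follows; this toy encoding collapses the three-level argumentation theory into a plain conjunctive filter, and all rule, context and field names are hypothetical.

# Illustrative toy encoding (ours): object-level rules plus context priorities.
OBJECT_RULES = {
    "high_return":          lambda f: f["return"] == "high",
    "high_systematic_risk": lambda f: f["beta"] == "high",
    "low_systematic_risk":  lambda f: f["beta"] == "low",
    "high_sharpe":          lambda f: f["sharpe"] == "high",
}

# Which object-level rules dominate in each (market, investor) context.
CONTEXT_PRIORITIES = {
    ("bull", "aggressive"): ("high_return", "high_systematic_risk"),
    ("bear", "moderate"):   ("high_return", "low_systematic_risk"),
}

def select_funds(funds, market, investor):
    """Keep the funds supported by every prioritized rule of the context."""
    rules = [OBJECT_RULES[n] for n in CONTEXT_PRIORITIES[(market, investor)]]
    return [f["name"] for f in funds if all(rule(f) for rule in rules)]

A full argumentation engine would instead resolve rule conflicts through the priority levels; the filter above only mimics the outcome in the two sample contexts.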
A contingency table is used in order to identify the frequency with which funds are defined as winners (W, a fund with returns above the median) and losers (L, a fund with returns below the median) over successive
time periods. The three statistical tests used to examine the performance persistence of the examined
funds are described below.
Malkiel’s Z-test (1995): Z = (Y − np) / √(np(1 − p)), which tests the proportion of repeat winners (WW) against winner-losers (WL), where Z is the statistic variable that follows a normal distribution N(0,1), Y is the number of winner funds in two consecutive periods, n is the sum WW + WL, and p is the probability that a winner fund in one period repeats as a winner in the subsequent period.
Brown and Goetzmann Odds Ratio (OR) (1995): OR = (WW × LL) / (WL × LW). Using this ratio, the statistical significance of the OR is determined by applying a Z-test to the variable Z = ln(OR) / σln(OR), which follows a normal distribution N(0,1).
Chi-square test: χ² = Σ_{i} Σ_{j} (Oij − Eij)² / Eij ,
where Oij and Eij are the actual and expected frequency of the i-th row and the j-th column in the contingency table respectively.
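A minimal Python sketch of the three tests on a 2x2 contingency table follows; the standard error of ln(OR), √(1/WW + 1/WL + 1/LW + 1/LL), is the usual one and is assumed here, since the paper does not spell it out.

import math

def persistence_tests(WW, WL, LW, LL, p=0.5):
    """The three persistence tests on a winner/loser contingency table."""
    # Malkiel's Z-test on the proportion of repeat winners (Y = WW, n = WW + WL)
    n = WW + WL
    z_malkiel = (WW - n * p) / math.sqrt(n * p * (1 - p))
    # Brown-Goetzmann odds ratio; Z = ln(OR) / se(ln(OR))
    OR = (WW * LL) / (WL * LW)
    z_or = math.log(OR) / math.sqrt(1/WW + 1/WL + 1/LW + 1/LL)
    # Chi-square statistic against independence (expected counts from margins)
    T = WW + WL + LW + LL
    rows, cols = (WW + WL, LW + LL), (WW + LW, WL + LL)
    observed = ((WW, WL), (LW, LL))
    chi2 = sum((observed[i][j] - rows[i] * cols[j] / T) ** 2
               / (rows[i] * cols[j] / T)
               for i in range(2) for j in range(2))
    return z_malkiel, z_or, chi2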
Table 1 shows the contingency table of fund returns along with the results of the statistical tests of the null
hypothesis of no performance persistence between consecutive periods. Thus, 53% of all winners in any
year are winners in the next year (called Repeat Winners-RW), i.e. 179 (WW) of 336 (WW+WL). The
percentage of RW for 2, 3 and 4 years is 52%, 51% and 59% respectively. The results of the examined tests show strong evidence of statistically significant performance persistence for 1-year and 4-year holding periods. More precisely, according to the first two tests, the percentage of RW is above 50% while the Z-test is also above zero and statistically significant, which is indicative of performance persistence. The same conclusion follows from the OR, which is greater than one, and its Z-statistic, which is also statistically significant. Thus, the test statistics show that, at the overall level, there is evidence of performance persistence according to all three criteria for 1-year and 4-year holding periods.
Having selected the funds that will compose the investment portfolios through the argumentation-based
reasoning phase, we then had to choose the participation percentage of each one of them in the final portfolio through two different strategies. Firstly, the naïve portfolio with equal weights, wi = 1/N, where wi is the proportion of the available capital invested in fund i and N is the number of the selected funds. Secondly, the portfolio participation based on the fund’s performance persistence:
wi = Σ_{y=yh}^{y0} h_i^y / Σ_{j=1}^{N} Σ_{y=yh}^{y0} h_j^y ,
where yh is the first year for which we have historical data, y0 is the year of the investment, and h_i^y equals the return r_i^y of fund i for year y when that return belongs to H (the returns of funds above the median), and zero otherwise.
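The two weighting strategies can be sketched as follows; scoring below-median returns as zero is our reading of the formula above, not necessarily the authors’ exact implementation.

import numpy as np

def naive_weights(n_funds):
    """Naive strategy: w_i = 1/N."""
    return np.full(n_funds, 1.0 / n_funds)

def persistence_weights(returns, medians):
    """Performance-persistence strategy (our reading of the formula above).

    returns : array (n_funds, n_years), r_i^y for the years yh..y0
    medians : array (n_years,), cross-sectional median return per year
    A fund scores its return in the years it beats the median, 0 otherwise;
    the weights are the fund scores normalised over all selected funds.
    """
    h = np.where(returns > medians, returns, 0.0)
    scores = h.sum(axis=1)
    return scores / scores.sum()

For example, persistence_weights(returns, np.median(returns, axis=0)) would weight the funds selected by the argumentation phase.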
Table 2: Argumentation vs Sharpe: Average Risk and Return of Naïve and Performance Persistence (PP) based Portfolios
The evaluation of the proposed approach and the obtained portfolios in year t is performed through their comparison to the performance of the Athens Stock Exchange General Index (final row of Table 2) in year t+1. Table 2 validates both our claims. Firstly, it provides evidence that argumentation-based funds
selection is better than traditional approaches, comparing our portfolios with those obtained when we
select an equal number of funds based on their performance using the traditional financial performance
index, the Sharpe index, and also comparing them with the Athens Stock Exchange index. Secondly, it
provides evidence that performance persistence based portfolio composition leads to better performing
portfolios than the naïve choice.
6. CONCLUSION
This contribution presents a tool which allows a decision maker (fund manager) to construct effective multi-portfolios of MFs under different, possibly conflicting contexts. The empirical results of our study showed that our tool is well suited for this type of application, giving answers to two important questions: (1) which MFs are the most suitable to invest in, and (2) what portion of the available capital should be invested in each of these funds. The proposed approach gives a decision maker (fund manager) the opportunity to construct multi-portfolios of MFs in period t that can achieve higher returns than those achieved by the ASE-GI in the next period, t+1. The proposed tool has been validated using
the data set described in this paper and is available for demonstration at the Applied Mathematics and
Computers Laboratory (AMCL) of the Technical University of Crete, Greece. It is intended for use by banks,
investment institutions and consultants, and the public sector.
REFERENCES
Brown, S.J. & Goetzmann, W.N., 1995. Performance Persistence. The Journal of Finance, Vol. 50, No.2, pp.679–698.
Vidal-García, J., 2013. The Persistence of European Mutual Fund Performance. Research in International Business and Finance, Vol. 28, pp. 45–67.
Elton, E.J. and Gruber, M.J. 1995, Modern Portfolio Theory and Investment Analysis, Fifth edition, John Wiley & Sons,
New York.
Goetzmann, W.N., and Ibbotson, R.G., (1994), Do Winners Repeat? Patterns in Mutual Fund Performance, Journal of
Portfolio Management, Vol. 20, pp. 9-18.
Kakas, A. & Moraitis, P., 2003. Argumentation based decision making for autonomous agents. In Proc. of the second
Int.Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’03). New York: ACM Press, pp. 883–890.
Malkiel, B.G., 1995, Returns from Investing in Equity Mutual Funds 1971 to 1991, The Journal of Finance, Vol. 50, No. 2,
pp. 549-572.
Markowitz, H.M., 1959. Portfolio Selection: Efficient Diversification of Investments, New York: John Wiley & Sons.
Mossin, J., 1969, Optimal Multiperiod Portfolio Policies, Journal of Business, Vol. 41, pp. 215-229.
Pendaraki, K. and Spanoudakis, N.I., 2012. An Interactive Tool for Mutual Funds Portfolio Composition Using Argumentation. Journal of Business, Economics and Finance (JBEF), Vol. 1, No. 3, pp. 33-51.
Ross, S.A., 1976. The Arbitrage Theory of Capital Asset Pricing. Journal of Economic Theory, Vol. 13, No.3, pp. 341–360.
Vicente, L. and Ferruz, L., 2005, Performance Persistence in Spanish Equity Funds, Applied Financial Economics, Vol. 15,
pp. 1305-1313.
Mavrotas G., Pechak O., Siatras D., Siskos E., Psarras J. | Project Portfolio Selection in a Group Decision
Making Environment: Aiming at Convergence with the Iterative Trichotomic Approach
Abstract
Project portfolio selection is the problem of selecting a subset of projects from a wider set, optimizing one or more
criteria and satisfying specific constraints. The basic tools are usually Multiple Criteria Decision Analysis and
mathematical programming. In the presence of multiple decision makers the preferences are not unique and there
must be a negotiation approach taking into account all the points of view. In the present work we use the Iterative
Trichotomic Approach (ITA) in order to seek convergence. With ITA we can draw conclusions for the acceptance of each
individual project as well as for the robustness of the final portfolio. The weights of evaluation criteria differ among the
decision makers so that each one of them finally selects a different “optimal” portfolio. ITA can classify the projects into
three sets: the green projects (selected in the “optimal” portfolio by all the decision makers), the red projects (not
selected in the “optimal” portfolio by any of the decision makers) and the grey projects, which are selected by some (but not all) of the decision makers. A converging Delphi-like process is designed for the weights, so that in the next round new weights are calculated for every decision maker. The mathematical model is updated according to the new weights and solved. As the iterative process moves from round to round the green and the red sets are enriched and the grey set shrinks. The iterative process terminates when the calculated weights of all the decision makers provide the same “optimal” portfolio. The above method is illustrated with an example involving 133 energy projects. The final outcome is the final portfolio as a compromise among the decision makers, as well as the degree of accordance on each one of the projects that are finally selected. Finally, a consensus index for the final portfolio can be extracted according
to the progress of the converging process.
KEYWORDS
Project portfolio selection, MCDA, Integer Programming, Group Decision Making
1. INTRODUCTION
Project portfolio selection is defined as the problem of selecting one or a subset from a set of projects (a
subset of projects is considered as a “portfolio of projects”). In the latter case, the usual approach is to rank
projects using one or more criteria and select the top ranked ones that cumulatively satisfy a budget
limitation. However, in real-world decision making there are concepts that complicate the process, such as the existence of constraints imposed by the decision maker. The existence of constraints to be satisfied
by the final selection destroys the independence of projects, which is one of the main assumptions in
Multiple Criteria Decision Analysis (MCDA) ranking (see e.g. Belton and Stewart, 2002). In other words, the
top ranked projects may only by chance satisfy the imposed constraints. For such cases Integer
Programming (IP) is an appropriate tool that performs optimization under specific constraints. In case of
project selection, the combinatorial character of the problem implies the use of IP with 0-1 (binary)
variables expressing the incorporation (Xi=1) or not (Xi=0) of the i-th project in the portfolio. The earliest
contributions were published under the title of capital budgeting (see e.g. Lorie and Savage, 1955), using strictly financial measures to assess the value of projects and portfolios and giving emphasis to the budget constraint. From the early sixties, the so-called capital budgeting problem was recognized as equivalent to the knapsack paradigm, popular in Operational Research (OR). The incorporation of multiple criteria can also be found in the literature using Goal Programming (see e.g. Zanakis et al., 1995, for a review), combinations of
MCDA with IP (see e.g. Golabi et al., 1981; Abu Taleb and Mareschal, 1995; Mavrotas et al., 2003; Mavrotas
et al., 2006; Mavrotas et al., 2008). In our case the problem is even more complicated, as we have more than one decision maker (Group Decision Making). Each decision maker expresses his/her preference by assigning his/her own weights of importance to the criteria. The result is that each decision maker has his/her own optimal portfolio of projects. In order to achieve a consensus, a convergence process
is designed based on the Iterative Trichotomic Approach (ITA) as described in (Mavrotas and Pechak, 2013a
and 2013b).
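As an illustration of the 0-1 formulation, the following Python sketch solves the single-constraint (budget-only) special case with a standard knapsack dynamic program; aggregating the multiple criteria into one value per project is an assumption made here, and real instances with side constraints would need an IP solver instead.

def select_projects(values, costs, budget):
    """0-1 knapsack DP: maximize total value under a budget constraint.

    values : aggregated (e.g. weighted-sum) project scores
    costs  : non-negative integer project costs
    budget : integer available budget, in the same units as the costs
    Returns (best total value, frozenset of selected project indices).
    """
    # best[b]: (value, chosen set) achievable with total cost at most b
    best = [(0.0, frozenset())] * (budget + 1)
    for i, (v_i, c_i) in enumerate(zip(values, costs)):
        new_best = best[:]
        for b in range(c_i, budget + 1):
            v, chosen = best[b - c_i]
            if v + v_i > new_best[b][0]:
                new_best[b] = (v + v_i, chosen | {i})
        best = new_best
    return best[budget]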
In the second section we describe the methodology and in the third section we present an application that
illustrates the method. Finally, in section 4 the main conclusions are presented.
2. METHODOLOGY
Figure: Overview of the ITA process. The set of projects, evaluated under multiple criteria, multiple constraints and multiple DMs, is iteratively partitioned into a green set (selected), a red set (not selected) and a grey set, which shrinks from round to round until the final selection is reached.
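The trichotomic classification itself is straightforward to express; a minimal Python sketch, assuming each DM’s “optimal” portfolio is given as a set of project identifiers:

def trichotomize(all_projects, portfolios):
    """Split projects into green/grey/red given each DM's optimal portfolio.

    all_projects : iterable of project identifiers
    portfolios   : list of sets, one per decision maker
    """
    union = set().union(*portfolios)                 # selected by at least one DM
    green = set.intersection(*map(set, portfolios))  # selected by every DM
    red = set(all_projects) - union                  # selected by no DM
    grey = union - green                             # selected by some, not all
    return green, grey, red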
The convergence process is designed to converge the weights of the criteria and is depicted in Figure 3. We
adjust the weights of importance of the decision makers from round to round in order to converge as we
move in the iterative process. First, we select the maximum number of rounds that we are going to perform
(N) and we accordingly set the convergence parameter α as α = 1/(N−1). Then we calculate the deviation
of each one of the weights from their average across the decision makers.
Figure 3: The convergence process. Starting from round r = 0, the weights are updated in every round as wpk(r) = wpk − α·r·dpk; if the grey set of the current round, grey(r), is not empty, a new round begins, otherwise the process finishes.
where p is the index for Decision Makers (DMs), p = 1..P, k is the index for criteria, k = 1..K, r is the index for rounds in the iterative process, wpk(r) is the weight of the p-th DM for the k-th criterion in the r-th round, and dpk = wpk − wk_avg is the deviation of wpk from its average across the DMs. It must be noted that from round to round the weights keep their property of summing up to 1, as shown by the following equations. Given that for the original weights wpk we have:
Σ_{k=1}^{K} wpk = 1 for p = 1..P, and wk_avg = (1/P) Σ_{p=1}^{P} wpk ,
it follows for the updated weights w′pk = wpk − α(wpk − wk_avg) = (1 − α)wpk + α·wk_avg that:
Σ_{k=1}^{K} w′pk = (1 − α) Σ_{k=1}^{K} wpk + α Σ_{k=1}^{K} wk_avg = (1 − α) + α = 1 ,
noting that Σ_{k=1}^{K} wk_avg = (1/P) Σ_{p=1}^{P} Σ_{k=1}^{K} wpk = 1.
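A compact Python sketch of the weight-update scheme; as the derivation above shows, each row keeps summing to 1, and with α = 1/(N−1) all DMs share the average weights in the last round.

import numpy as np

def converge_weights(w, n_rounds):
    """Group-ITA weight update wpk(r) = wpk - alpha*r*dpk, for r = 0..N-1.

    w : array of shape (P, K); every row (one DM's criterion weights) sums to 1.
    Returns the list of weight matrices, one per round.
    """
    alpha = 1.0 / (n_rounds - 1)   # convergence parameter alpha = 1/(N-1)
    d = w - w.mean(axis=0)         # deviations dpk from the per-criterion average
    # each w - alpha*r*d still has rows summing to 1; at r = N-1 all rows coincide
    return [w - alpha * r * d for r in range(n_rounds)]

In the actual method, each round’s weights feed the IP model and the iterations stop as soon as the grey set empties, which may happen before round N−1.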
3. APPLICATION
We applied the method in a group decision making problem dealing with energy projects. There are 133
energy projects from three RES technologies (wind, small hydro, photovoltaic) distributed across the 13
regions of Greece. There are 5 criteria according to which the projects are evaluated, namely, (1) Regional
development (2) CO2 emission reduction (3) Economic efficiency (4) Employment (5) Land use. There are 12
decision makers that give the weights of importance shown in Table 1.
There are also the following constraints that must be fulfilled. The total cost of the 133 projects is 659 M€
and the available budget is 150 M€. The cost of projects in Central Greece should be less than 30% of the
total cost, the cost of projects in Peloponnese should be less than 15% of the total cost, the cost of projects
in East & West Macedonia, Northern & Southern Aegean, Epirus should be greater than 10% of the total
cost. In addition, the number of projects from each technology should be between 20% and 60% of the
selected projects and the total capacity of the selected projects should be greater than 300 MW. We apply
the group-ITA method with convergence parameter α = 0.1 and the results are depicted in Figure 4. The great advantage of the group-ITA method is that we obtain information about the consensus on each one of the projects, whether it participates in the final portfolio or not, as well as the consensus on the final portfolio.
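For illustration, the feasibility side of the model can be sketched in Python as a simple check over a candidate portfolio; the region labels and the interpretation of the percentage thresholds as shares of the portfolio’s total cost are our assumptions.

def feasible(portfolio, cost, region, tech, capacity, budget=150e6):
    """Check the application's constraints for a candidate portfolio.

    portfolio                    : set of selected project ids
    cost, region, tech, capacity : dicts keyed by project id
    """
    total = sum(cost[i] for i in portfolio)
    if total > budget:
        return False
    def share(regions):
        return sum(cost[i] for i in portfolio if region[i] in regions) / total
    if share({"Central Greece"}) >= 0.30:        # must stay below 30 %
        return False
    if share({"Peloponnese"}) >= 0.15:           # must stay below 15 %
        return False
    if share({"E&W Macedonia", "N&S Aegean", "Epirus"}) <= 0.10:  # above 10 %
        return False
    n = len(portfolio)
    for t in ("wind", "small hydro", "photovoltaic"):
        count = sum(1 for i in portfolio if tech[i] == t)
        if not 0.20 * n <= count <= 0.60 * n:    # 20-60 % of selected projects
            return False
    return sum(capacity[i] for i in portfolio) > 300   # total capacity in MW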
4. CONCLUSIONS
The project portfolio selection problem can be effectively addressed with a combination of MCDA and IP. In the presence of multiple decision makers, ITA can provide a sound framework for an iterative convergence process that can be used in Delphi-like approaches. The main advantage is that we can measure the consensus per project as well as for the final portfolio. For future research we will examine the incorporation into group-ITA of additional decision parameters that express the decision makers’ preferences, e.g. the shape of the utility function or the policy constraints. In this case, the convergence process must also be adjusted to these parameters, which may vary from decision maker to decision maker.
Figure 4: The consensus classification of the 133 energy projects resulting from the group-ITA process.
ACKNOWLEDGEMENT
This research has been co-financed by the European Union (European Social Fund) and Greek national funds
through the Operational Program "Education and Lifelong Learning"
REFERENCES
Abu-Taleb M, Mareschal B (1995) Water resources planning in the Middle East: application of the PROMETHEE V
multicriterion method. Eur J Oper Res 81:500-511
Belton V, Stewart T (2002) Multiple Criteria Decision Analysis. An Integrated Approach. Kluwer Academic Publishers, UK
Golabi K, Kirkwood CW, Sicherman A (1981) Selecting a portfolio of Solar Energy Projects Using Multiattribute
Preference Theory. Manage Sci 27:174-189
Mavrotas G, Diakoulaki D, Capros P (2003) Combined MCDA – IP Approach for Project Selection in the Electricity Market.
Ann Oper Res 120:159-170
Mavrotas G, Diakoulaki D, Caloghirou Y (2006) Project prioritization under policy restrictions. A combination of MCDA
with 0–1 programming. Eur J Oper Res 171:296-308
Mavrotas G, Diakoulaki D, Kourentzis A (2008) Selection among ranked projects under segmentation, policy and logical
constraints. Eur J Oper Res 187:177-192
Mavrotas G, Rozakis S (2009) Extensions of the PROMETHEE method to deal with segmentation constraints. J Decis
Syst 18:203-229
Mavrotas, G., Pechak, O. (2013a) The trichotomic approach for dealing with uncertainty in project portfolio selection:
Combining MCDA, mathematical programming and Monte Carlo simulation. International Journal of Multiple Criteria
Decision Making 3(1) 79-97
Mavrotas, G., Pechak, O. (2013b) “Combining Mathematical Programming and Monte Carlo simulation to deal with
uncertainty in energy project portfolio selection” Chapter 16 in F. Cavallaro (ed) Assessment and Simulation Tools for
Sustainable Energy Systems Springer-Verlag, London
Liesio J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project
interdependencies. Eur J Oper Res 190(3):679–695
Lorie J H, Savage LJ (1955) Three problems in rationing capital. J Business 28(4): 229-239
Zanakis SH, Mandakovic T, Gupta SK, Sahay S, Hong S (1995) A Review of Program Evaluation and Fund Allocation
Methods Within the Service and Government Sectors. Socio-Econ Plan Sci 29:59-79
Saharidis G.K.D., Fragkogios A., Zygouri E. | A Multi-periodic Optimization Modeling Approach for the
Establishment of a Bike-sharing Network: A Case-study of the City of Athens
Abstract
This study introduces a novel mathematical formulation that addresses the strategic design of a bicycle sharing
network. The developed pure integer linear program takes into consideration the available budget of a city for such a
network and optimizes the location of bike stations, the number of their parking slots and the distribution of the bicycle
fleet over them in order to meet as much demand as possible and to offer the best services to the users. The proposed
approach is implemented in the very center of the city of Athens, Greece.
KEYWORDS
Bike, Sharing System, Stations, Integer
1. INTRODUCTION
Bike-sharing networks have received increasing attention during the last decades, and especially in the 21st century, as a zero-emission option for improving the first/last-mile connection to other modes of transportation, thus facilitating mobility in a densely populated city. The bike-sharing network consists
of docking stations, bicycles and information technology (IT) interfaces that have been recently introduced
to improve the quality offered to the users.
This expanding trend of bike-sharing networks necessitates their better planning and design in order to be successful. The goal of this paper is to propose a novel mathematical formulation for designing such networks, incorporating the hourly demand estimation, the fixed costs of infrastructure, the proximity and density of stations, as well as their size. Given a set of candidate station locations and a predefined available construction budget, the model decides the number and the location of the stations, how large they will be and how many bikes they should have at the beginning of the day in order to meet the assumed demand.
2. MODEL FORMULATION
Given a set of candidate locations of bike stations and the time-dependent demand for bikes at these locations during an average day, it is necessary to decide where to place the bike stations and how many parking slots and bikes each one should have. The available budget of a city for the construction of the whole bike-sharing system is predefined, and so are the costs of a single bike, a single parking slot and a single station. So it is a matter of optimization for the model to decide how many stations, bikes and
parking slots it will include in its solution. The walking time between the locations is another parameter of
the problem used to ensure the proximity of the constructed stations as far as this is possible.
As regards demand at each location, it is split into “Demand for Pick-Ups”, i.e. how many users would like to take a bike from a station, and “Demand for Drop-Offs”, i.e. how many riders would like to leave a bike at a station. The 24 hours of the day are discretized into time intervals of one hour, during which different numbers of users come to a station either to pick up or drop off a bicycle.
Figure 1 explains the overall consideration of the problem. N locations i are predefined, together with their “Demand for Pick-Ups” and “Demand for Drop-Offs” at all time intervals during an average day. The walking time between these N locations is also known. It is a matter of optimization how many bike stations will be established and where, so that every location has a nearby station. The locations k, where stations are established, are a subset of the locations i.
If the budget is not enough to construct stations at all N locations, at some locations i there will be no station. These locations should have a nearby station k no more than a specific walking time away, and only a percentage of their demand is considered to be passed to this station k. The rest of their demand is not served, assuming that this part of the citizens will not take a bike due to the distance of the station k from their location i. In this way, it is assumed that location i is served by station k. On the one hand, this transfer of the demand is inevitable, as the restricted budget does not allow stations to be built at all locations. On the other hand, it is not desirable, because it means that the users of the network will have to walk from location i, where they would prefer a station to be present, to the established station k and vice versa. This results in poor service quality offered to the users of the bike-sharing network, as some potential customers will not eventually use the network.
This consideration is accomplished through the following objective function. The objective function of the model is a minimization of three terms. The first term expresses the amount of demand that is transferred from a location i to its allocated station k, which are a specific walking time away from one another. Thus, the model will propose a dense distribution of stations, establishing no station at locations with low demand while ensuring that they are as close to a station as possible. This term is multiplied by the penalty unit cost to differentiate its importance from the other two terms. The second and the third term of the objective
function are introduced in order to minimize the unmet demand. There is a difference between the
parameters that express the “Demand for Pick-Ups” from location i during time interval t and the “Demand
for Drop-Offs” at location i during time interval t and the variables that express the number of bicycles that
are available at station k at the beginning of time interval t and the number of bicycles that could leave
station k during time interval t. The former express the users who would like to pick up and drop off a bike
from and to a candidate station location respectively. However, the station k may not have the required
bikes or free parking slots to meet these two types of demand respectively. So the number of bikes that
eventually leave or arrive at a station k is expressed by the two mentioned variables. Both the parameters
and the variables refer to each time interval t. These two terms are multiplied by the same penalty unit cost
meaning that no different importance is given to either of them.
In the mathematical model there is a constraint which guarantees that the total available budget is not
exceeded. Other constraints ensure that the bicycle parking slots at each constructed station are between
the permissible minimum and maximum value and that at all time intervals, each station cannot have more
bikes than the number of its parking slots.
Another constraint expresses that the number of bicycles at station k at the beginning of time interval t+1 is
equal to the ones it had at the beginning of time interval t plus the bikes that arrive minus the ones that
leave during time interval t.
Some other constraints guarantee that a location i cannot be served by location k if a station is not built at location k, and that if a station is constructed at location k this location is served by its own station. Furthermore, each location i may be served by exactly one bike station k, and a constructed station k can serve only locations within a maximum walking time from it.
Finally, among others there are some constraints, which guarantee that at every time interval the bicycles
that can leave the station can be no more than the available ones and the bikes that can come to a station
can be no more than the free parking slots.
At this point it is necessary to explain how the model decides the number of a station’s parking slots and its bikes at the first time interval. Giving a value to these two variables determines whether the station can meet the “Demand for Drop-Offs” or the “Demand for Pick-Ups” at the first time interval. This, in turn, determines the number of bikes that will leave or come to the station k at the first time interval, which then determines the available bikes of the station k at the beginning of the next time interval, t+1, and so on. Aiming to minimize unmet demand, the model proposes those values of the parking slots and bikes at the first time interval at each station that result in having the suitable number of available bikes and free parking slots in the following time intervals, given the station’s different distribution of demand during the day.
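The bookkeeping described above (flow balance plus the availability and free-slot caps) can be sketched in Python for a single station; the sketch simulates the unmet demand for given slot and initial-bike values, and is not the optimization model itself.

def simulate_station(slots, bikes0, pickups, dropoffs):
    """Hourly simulation of one station; returns the total unmet demand.

    slots    : number of parking slots of the station
    bikes0   : bikes available at the beginning of the day
    pickups  : 24 hourly values of "Demand for Pick-Ups"
    dropoffs : 24 hourly values of "Demand for Drop-Offs"
    """
    bikes, unmet = bikes0, 0
    for t in range(24):
        out = min(pickups[t], bikes)                   # capped by available bikes
        inc = min(dropoffs[t], slots - (bikes - out))  # capped by free slots
        unmet += (pickups[t] - out) + (dropoffs[t] - inc)
        bikes = bikes - out + inc                      # balance constraint for t+1
    return unmet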
3. ATHENS CASE-STUDY
The authors chose 50 candidate locations where bike-sharing stations could be constructed. These 50
locations were categorized into the previously described four clusters and each one was given a scaling
factor.
The walking time between these locations was calculated using Google Earth. As regards the costs of the
network, two already implemented networks were taken into account, the first one in Greece (Karditsa)
and the second one in Cyprus (Nicosia). Examining the budget and the dimensions of each city and its
network the following data were assumed for the case of Athens. The cost of establishing a station is €
12,000. The cost of each slot in a station is € 900. The cost of a bike is € 500 and the total available budget is
€ 1,000,000. Furthermore, it is assumed that a location with no station cannot be more than 7 minutes of walking time away from a location with a station. The minimum and the maximum parking slots that a
station can have are as many as in the Velib’ network (between 8 and 70 per station).
3.2. Results
Figure 2 depicts the proposed established bike stations. The shape of each dot corresponds to the station’s
cluster, while its size represents the number of parking slots each station should have. The total number of
docking stations is 34 and the number of parking slots is 517, a mean value of 517/34 ≈ 15.2 slots per
station. Looking at the parking slots of each station, one can notice that the larger stations belong to the
cluster “Subway”, which is typical of the increased demand in the metro stations. The total number of bikes
in the network is 253 and their distribution over the established stations at the first time interval of the day
shows that stations of the cluster “Housing” are nearly full of bikes in order to meet the increased “Demand
for Pick-Ups” during the morning peak. On the other hand, the stations of the cluster “Employment” do not
have many bikes. This results in having more free parking slots in order to meet the increased “Demand for
Drop-Offs” during the morning peak.
Figure 2 The established stations of the solution of the case of Athens categorized in clusters and with their size
4. CONCLUSIONS
The knowledge gained from already implemented networks can and should be used for the design of future ones. This model leverages the usage data from the Velib’ network of Paris to predict demand in Athens and designs a suitable bike-sharing network to meet that demand. This paper thus proposes a suitable design of a bike-sharing network so as to meet as much demand as possible during its subsequent operation. However, some parameters could be altered to observe how the solution changes. Such parameters could be the available budget, or the demand profiles, to approximate the seasonal differences (winter-summer)
or the week differences (weekdays-weekend). A larger-scale application of the model, concerning, for example, the whole Municipality of Athens, is another piece of future work.
REFERENCES
DeMaio, P., 2009, Bike-sharing: History, Impacts, Models of Provision, and Future, Journal of Public Transportation, Vol.
12, No 4, pp. 41-56
Farahani, Z. R., M. Hekmatfar, B. A. Arabani, and E. Nikbakhsh., 2013, Hub location problems: A review of models,
classification, solution techniques, and applications. Computers & Industrial Engineering. Vol. 64, pp. 1096-1109
Lin, J-R., T-H. Yang., 2011, Strategic design of public bicycle sharing systems with service level constraints. Transportation
Research Part E, Vol. 47, pp. 284-294
Sayarsad, H., S. Tavassoli, F. Zhao., 2011, A multi periodic optimization formulation for bike planning and bike utilization.
Applied Mathematical Modelling. Vol.36, pp. 4944-4951
Martinez, M. L., L. Caetano, T. Eiro, F. Cruz., 2012, An optimization algorithm to establish the location of stations of a
mixed fleet biking system: an application to the city of Lisbon. Procedia- Social and Behavioral Sciences. Vol. 54, pp. 513-
524
Garcia-Palomares, C. J., J. Gutierrez, M. Latorre., 2012, Optimizing the location of stations in bike-sharing programs: A
GIS approach. Applied Geography. Vol. 35, pp. 235-246
Lathia N., S. Ahmed, L. Capra., 2011, Measuring the impact of opening the London shared bicycle scheme to casual
users. Transportation Research Part C. Vol. 22, pp. 88-102
Etienne C., L. Oukhellou., 2012, Model-based count series clustering for Bike-sharing system usage mining, a case study
with the Velib’ system of Paris. Transportation Research-Part C Emerging Technologies. Vol. 22, pp. 88
Papapostolou A., Angelopoulos D., Psarras J. | Development of a Sustainable Energy Action Plan for the
Municipality of Chalkis to Support Energy Policy Decision Making in Greece
Abstract
The Covenant of Mayors is a European Initiative involving local and regional authorities. The signatories commit themselves voluntarily to improve energy efficiency and to reduce greenhouse gas emissions by at least 20% within the boundaries of the municipality by 2020. This can be achieved by the integration of Renewable Energy Sources (RES)
and Rational Use of Energy (RUE) technologies. One year after the signing of the Covenant, the municipalities are called
upon to submit a Sustainable Energy Action Plan (SEAP) approved by the local council. The SEAP includes the
municipality’s Baseline Emission Inventory and the Actions through which it intends to achieve the previous target. In
the frame of the Covenant of Mayors, this paper aims at the development of a Draft Sustainable Energy Action Plan for
the municipality of Chalkis, in the regional unit of Evia, Greece. Firstly, the energy consumption of all sectors was estimated by collecting the essential energy data and by applying estimations and inevitable approximation methods based on published studies, where necessary. Subsequently, the emission inventory was compiled for the year 2011 in accordance with the principles and the emission factor database of the Intergovernmental Panel on Climate Change (IPCC). Finally, possible RES and RUE actions were proposed, targeting the improvement of the municipality’s energy efficiency and the achievement of its CO2 reduction target. Moving towards this objective, local authorities will propose
and implement RES and RUE technologies and policy measures.
KEYWORDS
Energy policy, Decision making, Sustainable development, Climate change mitigation, Energy efficiency, Greece
1. INTRODUCTION
A European Initiative known as the “Covenant of Mayors” has been created in order to support local
authorities to implement policies related to sustainable energy. More precisely, in order to increase energy efficiency and the use of renewable energy sources in their territories, Covenant signatories commit to achieve or even exceed the target set by the European Union of reducing CO2 emissions by 20% by 2020. In the first stage, a baseline inventory of energy consumption and air pollutant emissions within their borders is necessary. After the preparation of a Baseline Emission Inventory, Covenant signatories undertake to submit, within the year following their signature, a Sustainable Energy Action Plan (SEAP) outlining the key RES and RUE actions they plan to undertake in order to achieve the objective set. The purpose of this paper is the
assessment of carbon footprint and the greenhouse gas emissions inventory for the Municipality of Chalkis
in Evia, in accordance with the instructions of the Covenant, and the recommendation of actions and
measures towards this target of sustainable development.
2. METHODOLOGY
The total electricity consumption data for Evia district regarding the residential sector were provided by the
Hellenic Statistical Authority (ELSTAT), while population ratios were used for the identification of electricity
consumption at the municipal level. The calculation of heating oil consumption is based on the available
data from the ELSTAT and the Municipality of Chalkis regarding the area of residential buildings. Moreover,
specific indicators of energy consumption were used, based on relevant studies regarding the Heating Degree-Days, in order to find the thermal requirements of the area. According to a relevant study of the Centre for Renewable Energy Sources and Saving (CRES), the amount of energy consumed from biomass in the residential buildings with other forms of heating is 2,78 times greater than that from electricity.
The calculation of solar energy is based on a relevant study regarding the energy saving from solar panels’ installation (22,57 kWh/m²) and the rate of installed solar panels on buildings (36,9%). The total electricity
consumption data for Evia district regarding the tertiary sector were provided by the ELSTAT, while
population ratios were used for the identification of electricity consumption at the municipal level.
According to a relevant study of the CRES, the amount of electricity consumed is 3,92 times greater than
that of heating oil.
2.2.3. Transport
The category of transport includes the municipal fleet, public transport, and private and commercial transport.
The municipal fleet includes a variety of vehicles used to meet the needs of collection of waste, technical
support etc. Official data from the Municipality of Chalkis were used in order to estimate the gasoline and
diesel consumption of the municipal fleet, while the diesel consumption of the public transport was
estimated using the available data from the operator of Evia bus station. Finally, the total gasoline and
diesel consumption data for Evia district regarding the private and commercial transport were provided by
the Ministry of Environment, Energy & Climate Change, while population ratios were used for the
identification of fuel consumption at the municipal level.
44
2nd International Symposium and 24th National Conference on Operational Research
ISBN: 978-618-80361-1-6
Papapostolou A., Angelopoulos D., Psarras J. | Development of a Sustainable Energy Action Plan for the
Municipality of Chalkis to Support Energy Policy Decision Making in Greece
In order to calculate the CO2 emissions attributed to electricity consumption, it is necessary to determine
the emission factor. It is worth mentioning that the emission factor for electricity depends on the local
energy production which is estimated at 575,82 MWh in 2011. Finally, the emission factor for electricity is
estimated at 1,147 tn CO2/MWh using the following equation:
EFE = [(TCE − LPE − GEP) × NEEFE + CO2LPE + CO2GEP] / TCE
where:
EFE = local emission factor for electricity [t/MWhe]
TCE = Total electricity consumption in the local authority (as per Table A of the template) [MWhe]
LPE = Local electricity production (as per Table C of the template) [MWhe]
GEP = Green electricity purchases by the local authority (as per Table A) [MWhe]
NEEFE = national or European emission factor for electricity (to be chosen) [t/MWhe]
CO2LPE = CO2 emissions due to the local production of electricity (as per Table C) [t]
CO2GEP = CO2 emissions due to the production of certified green electricity [t]
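A direct transcription of the emission-factor formula into a small Python function (ours, for illustration):

def local_emission_factor(TCE, LPE, GEP, NEEFE, CO2LPE, CO2GEP):
    """Local emission factor for electricity EFE [t CO2/MWhe]."""
    return ((TCE - LPE - GEP) * NEEFE + CO2LPE + CO2GEP) / TCE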
In the specific case where the local authority is a net exporter of electricity, the calculation formula would be:
EFE = (CO2LPE + CO2GEP) / (LPE + GEP)
Moreover, a biodiesel blend is used in the city, consisting of 5% sustainable biodiesel and 95% conventional diesel oil. Using the standard emission factors, the emission factor for this blend is calculated as a weighted average of the factors of its components:
EFblend = 0,95 × EFdiesel + 0,05 × EFbiodiesel
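And correspondingly in code (a plain weighted average; assigning a zero factor to sustainable biodiesel follows the standard-factor convention and is assumed here):

def blend_emission_factor(ef_diesel, ef_biodiesel=0.0, bio_share=0.05):
    """Emission factor of a diesel/biodiesel blend as a weighted average.
    Sustainable biodiesel is conventionally assigned a zero standard factor."""
    return (1 - bio_share) * ef_diesel + bio_share * ef_biodiesel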
The table below illustrates the Standard Emission Factors used for the estimation of the total CO2 emissions in Chalkis.
The table below illustrates the final CO2 emissions per sector of activity and per energy source in the
Municipality of Chalkis for the inventory year of 2011.
Among the proposed actions for the municipal sector are energy interventions in pumping stations, the installation of a telemetry system in the water supply network and the gradual replacement of the existing bulbs in the public lighting with more efficient ones.
The Municipality of Chalkis does not have the opportunity for direct interventions in the residential
buildings and buildings of tertiary sector. To this end, the relevant actions will be focused on awareness
raising. Such actions are the establishment of the Energy Efficiency Department in the Municipality, the
design and distribution of brochures about the benefits of interventions in the residential buildings and
initiatives to support the citizens' actions and targeted seminars to professional groups. The development
of photovoltaic systems on the roofs of buildings and the replacement of oil burners with pellet or natural
gas burners are the measures with a positive Net Present Value.
2.3.3. Transport
The transport sector is responsible for 29% of the total CO2 emissions in Chalkis. Eco-driving seminars for the drivers of public transport and the drivers of the municipal fleet are a viable investment for the Municipality, and are thus suggested to the local authorities. Information events on the new vehicle technologies and measures
in order to increase the use of public transport and alternative modes of transport are some of the actions
that aim at reducing the CO2 emissions of private and commercial transport.
3. CONCLUSIONS
The conclusions resulting from this paper are the following:
There was a variety of approaches for the estimation of energy consumption in the tertiary and residential sectors. Specifically, in the case of the municipality of Chalkis, the energy consumption inventory was developed with great accuracy, particularly in the sector of municipal buildings and facilities. However, it was very difficult to avoid the necessary approximations in various sectors, especially the tertiary sector, because of insufficient data. Therefore, the establishment of a municipal energy agency is necessary in order to gather energy data for the monitoring of energy consumption, the promotion of RES investments and the implementation of measures towards energy efficiency and sustainable development.
The CO2 emission factor for electricity in Greece and especially in Chalkis is very high (1,149 t CO2/MWh and 1,147 t CO2/MWh respectively). This is a result of the low energy value of coal, the combustion of which is the primary method of power generation in Greece, and of the low efficiency of thermal power stations. Therefore, the adoption of RUE actions in all sectors and the increasing penetration of RES in electricity generation are considered necessary.
The RUE actions alone are not sufficient to achieve the goal; this is evidenced by the contribution of local power generation from RES to the emissions reduction objective, which amounts to 30%.
Finally, this paper may contribute to the faster fulfillment of the CoM commitments and to a more detailed techno-economic study, with more accurate data, for determining the most advantageous actions.
ACKNOWLEDGEMENT
The authors would like to express their gratitude for the support received from the Municipality of Chalkis
and the local authorities, which provided the information on the energy consumption of municipal facilities.
REFERENCES
1. Covenant of Mayors, www.covenantofmayors.eu
2. Hellenic Statistical Authority (EL.STAT.), www.statistics.gr
3. Guyader Olivier, Berthou Patrick, Koustikopoulos C., Alban Frederique, Demaneche Sebastien, Gaspar M, Eschbaum
R, Fahy E, Tully O, Reynal Lionel, Albert A, 2007, Small-scale coastal fisheries in Europe
4. Papakostas K., Kyriakis Ν., Oikonomou D., 2006, Estimation of energy consumption for heating in residential
buildings of 36 Greek cities
5. C.A. Balaras, K. Droutsa, E. Dascalaki, S. Kontoyiannidis, 2005, Heating Energy Consumption and Resulting
Environmental Impact of European Apartment Buildings, Energy & Buildings, 37, 429-442
6. Papakostas Κ, Tsilingiridis G, Kyriakis N, 2005, Heating Degree-days for 50 Greek Cities, Tech. Chron. Science Journal,
TCG, IV 25 (1-2)
7. T. A. Varvaressou, T. D. Tsoutsos, 2005, Analysis of Environmental Impacts of a Domestic Solar Thermal System in
Greek Energy System
8. N. Ventouris, A. Tsakanikas, 2011, Agricultural machinery & the competitiveness of the primary sector
9. J. Barry, 2007, Watergy: Energy and Water Efficiency in Municipal Water- Supply and Wastewater treatment- Cost-
Effective Savings of Water and Energy
Tsiousi A., Psarras J. | Assessing Environmental and Energy Policy for SMEs
Abstract
The current financial and economic crisis, as well as the wider social and environmental pressures, put seriously into
question the traditional development patterns. These pressures create high expectations for coordinated actions and
holistic interventions towards a competitive economy, through the adoption of “green” practices. Several
methodologies are used for the collection and organization of data for companies’ energy and environmental corporate
responsibility. This fact immediately raises the question of how these data can be used more effectively, since analytic data and information do not in themselves constitute energy and environmental policy. Nowadays, there is an imperative need for the State to support the operation of Small Medium Enterprises (SMEs) in this difficult business environment through the development and adoption of appropriate policies, fostering green entrepreneurship and green energy growth. Integrated approaches and methodologies for managing energy and environmental issues, as an aspect of CSR, are a central challenge towards SD. This paper aims to present a coherent and transparent methodological multi-criteria framework,
using linguistic variables, for assessing companies’ energy and environmental corporate policies. The use of linguistic
variables is a realistic approach, taking into consideration that the information needed is often unquantifiable,
imprecise and uncertain. The objective of the paper is to underline the use of linguistic variables in decision making for energy and environmental corporate policy, building on the already developed “Linguistic TOPSIS” to evaluate such policies.
Moreover, a comparison with the 2-tuple LOWA operator and a sensitivity analysis are provided. According to the
results, SMEs that integrate systemic environmental practices and come from countries with essential implementation
of Corporate Social Responsibility (CSR) concepts achieve high overall performance, while SMEs with lower performance
have commitments and goals limited only to the required legislation.
KEYWORDS
Multi-criteria analysis; Linguistic variables; TOPSIS; Small Medium Enterprises; Sustainable Development
1. INTRODUCTION
Enterprises are at the heart of the Europe 2020 Strategy, taking into consideration their vital role towards
national prosperity and Sustainable Development (SD). Enterprises have to integrate social and
environmental concerns in their business operations and in their interaction with stakeholders on a
voluntary basis, within the framework of the Corporate Social Responsibility (CSR) concept. Moreover, there
is growing recognition of the important role Small Medium Enterprises (SMEs) play in these new fields of
green development, as the technology is still evolving and partnerships are described as “dynamic”. The
companies, more than other stakeholders, have to address the problem with a long-term plan and become a driving force for the adoption of relevant initiatives towards “green” development and the promotion of energy efficiency and environmentally friendly practices, within the CSR framework (Toke and Oshima).
purpose of CSR is to make corporate business activity and culture sustainable in three main aspects: the
economic aspect, the social aspect and the environmental aspect.
In our study emphasis was placed on SMEs. According to the European Commission's new SME definition, the main criteria for whether a firm belongs to this category are: the number of employees, the annual turnover and the annual balance sheet total. More specifically, the Commission
classifies SMEs as micro, small and medium-sized in accordance with the maximum limits set for each category. Systematic energy and environmental policies are created and applied mostly by large and multinational enterprises. SMEs are one of the main driving forces in the economy, but they often face different, and sometimes greater, challenges than larger companies. Furthermore, according to the European Environment Agency, SMEs are responsible for 64% of the total industrial environmental footprint in Europe, and for 75% correspondingly in Greece. Therefore, the concept of CSR for SMEs is an issue of importance, since it is considered an effective tool to enhance their competitiveness.
2. PROBLEM IDENTIFICATION
It is accepted that SMEs differ qualitatively from the large enterprises. Large firms have greater visibility for
their actions and their impact, attract more public attention and thus greater pressure from society and
shareholders to demonstrate ethical and socially responsible behavior. SMEs generally lack the resources that would allow them to keep up with developments and to become appropriately involved in sustainable development issues.
Thus, the nature of the difficulties faced by SMEs in implementing CSR activities depends on many factors and varies from company to company. The most common difficulty is the economic cost involved in implementing CSR practices. Taxes, high costs of management systems and the general difficulty of
fundraising are issues for the majority of Greek SMEs as identified by the Greek Network for Corporate
Social Responsibility. The second most important difficulty is the lack of time, either on the part of
management or on the part of the workforce. The lack of information is also a determining factor that limits
the application of CSR. In addition to the abovementioned difficulties, it is worth noting that bureaucracy,
lack of organization, firm size and each enterprise’s priorities are constraining factors for the adoption of
CSR policies. Moreover, according to the disclosure database of the Global Reporting Initiative (GRI), the presence of SMEs in sustainability and CSR reports is limited. Less than 20% of the CSR and Sustainability Reports in GRI have been developed by European and Greek SMEs during the period 2009-2011, as illustrated in Figure 1.
Thus, the need for customised tools and decision support methods that connect indicators with policies, especially as concerns the energy and environmental impact, has emerged. Considering the aforementioned inhibitors and the wide range of SMEs' operations, it is necessary to develop a flexible, transparent and easily understood framework for assessing companies' environmental and energy policies, in order to address the following two problems:
Enterprises level: Companies (especially SMEs) are seeking tools to tackle the crisis in a collective manner, formulating energy and environmental corporate policy;
Policy-making level: There is an imperative need to support the State's strategies for green corporate development policies, fostering in this respect green entrepreneurship and green energy growth.
Important tools in this dimension are multi-criteria decision making and the use of linguistic variables.
3. PROPOSED METHODOLOGY
3.1. Overview
An overview of the adopted approach is presented in Figure 2. The proposed methodology extends TOPSIS,
using linguistic variables based on the 2-tuple representation model, called “Linguistic TOPSIS”, already
developed and presented by Doukas et al. This paper argues that the use of linguistic variables, in the form
of 2-tuples, can be an important tool in multi-criteria decision making for energy and environmental
corporate policy. In addition, a comparison with the 2-tuple LOWA operator and sensitivity analysis of the
results are provided.
An important limitation of the symbolic linguistic approach is the loss of information, which implies a lack of precision in the final results. To tackle this limitation, a new fuzzy linguistic representation model has been proposed by the same authors, namely the 2-tuple representation model (Herrera and Herrera-Viedma). From this concept, a linguistic representation model was developed, which represents the linguistic information by means of 2-tuples $(s_i, \alpha_i)$, with $s_i \in S$ and $\alpha_i \in [-0.5, 0.5)$:
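The transformation between a numerical aggregation value $\beta \in [0, g]$ on the scale $S$ and its 2-tuple form can be sketched as follows (after Herrera and Martínez, 2000; the notation is a standard rendering, since the original formula is not recoverable here):

$$\Delta(\beta) = (s_i, \alpha), \quad \text{with } i = \operatorname{round}(\beta), \; \alpha = \beta - i \in [-0.5, 0.5), \qquad \Delta^{-1}(s_i, \alpha) = i + \alpha = \beta$$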
The multi-criteria method, called "Linguistic TOPSIS" as introduced by Doukas et al, consists of a set $A$ of $n$ alternatives and a set $C$ of $k$ criteria.
The description of the "Linguistic TOPSIS" on an ordinal linguistic scale begins by considering a set $A = \{A_1, A_2, \ldots, A_n\}$ of $n$ alternatives and a set $C = \{C_1, C_2, \ldots, C_k\}$ of $k$ criteria. The ratings of the alternatives on the criteria, as well as the importance weights of the criteria, are linguistic variables, and their linguistic term set can be expressed through an ordinal linguistic scale $S = \{s_0, s_1, \ldots, s_g\}$.
4. PILOT APPRAISAL
The criteria to be selected have to be operational, exhaustive in terms of containing all points of view, monotonic and non-redundant, since each criterion should be counted only once, as pointed out by Bouyssou. In appraising the abovementioned SMEs, the selection of the evaluation criteria should take into account the main pillars of energy management and systematic environmental policy adoption. A number of studies in the international literature analyse the aspects and strategies towards sustainable and environmentally friendly enterprises, such as J. Alberto Aragon-Correa et al and N. Nagesha.
Additionally, several studies related to the SME options’ characteristics associated with energy and
environmental policy, environmental management standards (ISO 14001, EMAS) and Global Reporting
Initiatives guidelines were surveyed for the compilation of the appropriate criteria.
With respect to the above, the research focuses on the provision of a small but clearly understood set of
evaluation criteria, which can form a sound basis for the comparison of the examined SMEs in terms of their
systematic energy and environmental policy integration as a part of CSR and SD. Concisely, the six criteria
are presented in Table 1.
4.2. Performances
For the assessment of the alternatives' performance on the respective criteria, the following ordinal scale of seven linguistic terms was used to assess the values of the ratings and the weights:

$$S = \{s_0 = N, s_1 = VL, s_2 = L, s_3 = M, s_4 = H, s_5 = VH, s_6 = P\}$$

The main data source for the evaluation of these options was the Global Reporting Initiative Disclosure Database. Specifically, extensive study of the reports of the examined SMEs concerning CSR and SD led to the assessment of the alternatives' performance on the respective criteria. The rating matrix of the SME options and the values of the criteria are presented in Table 2.
Table 2 (fragment): ratings of the ten SME options on criteria C3-C6.
C3: M, VL, H, M, L, H, L, N, P, L
C4: L, VL, N, VL, H, N, N, L, M, N
C5: H, H, M, L, VH, M, H, VL, H, M
C6: VH, L, L, H, VH, M, H, M, H, M
The results were elaborated based on two values of the coefficient p, namely 0.5 and 0.6, in order to illustrate the effect of the coefficient on the results. The first case (p=0.5) means that the distance from the negative ideal alternative is equally important to the distance from the ideal alternative, while the second case (p=0.6) means that the distance from the negative ideal is 10% more important than that from the ideal. The ranking is the same in both cases; the difference is in the overall performance of each assessed option, which increases correspondingly as p is increased.
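As a hedged illustration of how such a coefficient can act, one plausible weighted form of the TOPSIS relative closeness is sketched below; the function name and the exact formula are assumptions for illustration and may differ from the formula actually used in the "Linguistic TOPSIS" of Doukas et al.

```python
# Illustrative weighted relative closeness: p trades off the distance to the
# negative ideal (d_minus) against the distance to the ideal (d_plus).
# With p = 0.5 it reduces to the classic TOPSIS closeness d-/(d- + d+),
# and it increases monotonically in p, matching the behaviour reported above.
def weighted_closeness(d_plus: float, d_minus: float, p: float = 0.5) -> float:
    return p * d_minus / (p * d_minus + (1 - p) * d_plus)
```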
The results seem realistic since SMEs with a high overall performance (O-9, O-5 and O-1) implement energy-efficient innovative technologies, use renewable energy sources (RES) to some extent, apply bioclimatic building operation and adopt
clean transport technologies (e.g. clean fleets). They also participate in numerous dissemination activities
and integrate commitments considering the role of their suppliers towards the incorporation of
environmental responsibility in the entire supply chain. Additionally, SMEs with high scores come from
countries with essential implementation of CSR concepts. As demonstrated by Gjølberg, Greece is not
present among these countries; nevertheless it seems that there are SMEs that integrate systematic
environmental practices (O-1). On the other hand, options appearing at the bottom of Table 3 have lower
than average performance in the overwhelming majority of the criteria. The commitments and goals of
these alternatives are limited to the required legislation. Moreover, these companies do not seem willing to
invest in renewable energy projects and their Waste Management strategy is based on partial recycling.
In addition, most of the results obtained by "Linguistic TOPSIS" are consistent with the GRI Application Level. Application Levels indicate the extent to which the appropriate Guidelines have been applied in sustainability and CSR reporting. However, it should be noted that a complete validation of the obtained results is difficult, since the Application Levels do not give an opinion on the sustainability
performance of the reporting organization. This also highlights the contribution of the proposed framework
to support the assessment of the quality of the reports and the corresponding SD measures implemented
by the examined enterprises.
The problem was also solved with the 2-tuple Linguistic Ordered Weighted Averaging (LOWA) operator. The results of the 2-tuple LOWA operator are presented in Table 4. In general, the results of the fuzzy linguistic quantifiers are consistent with the final results of the "Linguistic TOPSIS".
In the "Linguistic TOPSIS", the weights assigned to the criteria were of equal importance. This was necessary in order to be consistent with the comparison with the 2-tuple LOWA operator, in which the weights also have equal importance and represent the concept of fuzzy majority. However, a crucial issue in a multi-criteria approach is to determine to what extent the final ranking of the alternatives is dependent on, and sensitive to, the estimated weights. A solution to the sensitivity analysis issue is to define stability intervals for the weights of the different criteria.
The sensitivity analysis is carried out for the weight of each criterion. The weight stability intervals derive
from the curves of the relative closeness coefficients of the alternatives. The stability intervals can be easily
determined by identifying the intersection points of the curves. The stability intervals are presented in
Table 5. From the results, it can be noted that the final ranking is in general very sensitive to the criteria weights, apart from the weight of criterion 4.
5. CONCLUSIONS
Integrated approaches and methodologies for managing energy and environmental issues, as an aspect of CSR, are a central challenge towards SD. Towards this direction, an integrated, transparent and comprehensive approach is required, in order to address the following two problems:
Enterprises level: Companies (especially SMEs) are seeking tools to tackle the current financial and economic crisis in a collective manner, formulating energy and environmental corporate policy;
Policy-making level: There is an imperative need to support the State's strategies for green corporate development policies at the regional/national level, fostering in this respect green entrepreneurship and green energy growth.
The objective of the paper was to underline the use of linguistic variables in decision making for energy and
environmental corporate policy, based on the already developed “Linguistic TOPSIS”, so as to evaluate
energy and environmental corporate policies. In addition, a comparison of the results with the 2-tuple
LOWA operator was provided. The results of the study can support the State in focusing its resources on
those types of companies and applying the appropriate practices that ensure green entrepreneurship.
The message conveyed by the results is that SMEs that integrate systematic environmental practices and come from countries with essential implementation of CSR concepts achieve high overall performance. On the other hand, the SMEs with lower performance have commitments and goals limited only to the required legislation. In addition, most of the obtained results are consistent with the GRI Application Level. However, it should be noted that a complete validation of the obtained results is difficult,
since the Application Levels do not give an opinion on the sustainability performance of the reporting
organization. This also highlights the contribution of the proposed framework to support the assessment of
the quality of the reports and the corresponding SD measures implemented by the examined enterprises.
Further issues for investigation would be expanding the sample of companies examined, and performing
separate assessments for each SME category, in order to create benchmarks for further evaluation.
Moreover, the proposed methodology could establish a fruitful basis for evaluating other aspects of
enterprises’ CSR, since the information in most of the CSR dimensions is qualitative and the use of linguistic
variables can provide a sufficient supportive framework.
REFERENCES
J.A. Aragón-Correa, N. Hurtado-Torres, S. Sharma, V.J. García-Morales, 2008, Environmental strategy and performance in small firms: A resource-based perspective, Journal of Environmental Management, 86, 88-103.
D. Bouyssou, 1990, Building criteria: A prerequisite for MCDA, In C. A. Bana e Costa (Ed.), Readings in Multiple Criteria
Decision Aid. Berlin, Germany: Springer-Verlag, 58-80.
Commission of the European Communities, A renewed EU strategy 2011-14 for Corporate Social Responsibility, COM
(2011) 681 final, Communication from the Commission to the Parliament, the Council, the European Economic and
Social Committee and the Committee of the Regions.
Commission of the European Communities. 20 20 by 2020, Europe's climate change opportunity, COM (2008) 30 final.
Brussels, Belgium: Communication from the Commission to the Parliament, the Council, the European Economic and
Social Committee and the Committee of the Regions.
Commission of the European Communities, 2006, The new SME definition, User guide and model declaration, Enterprise
and Industry Publications.
H. Doukas, C. Karakosta, J. Psarras, 2010 “Computing with Words to Assess the Sustainability of Renewable Energy
Options”, Expert Systems with Applications, 37 (7), pp. 5491-5497.
H. Doukas, V. Marinakis, and J. Psarras, 2012, “Greening the Hellenic Corporate Energy Policy: An Integrated Decision
Support Framework. International Journal of Green Energy. 9, 487-502.
H. Doukas, K. Patlitzianas, A. Kagiannas, and J. Psarras, 2008, Energy Policy Making: An Old Concept or a Modern
Challenge? Energy Sources, Part B: Economics, Planning, and Policy. 3, 362-371.
European Environment Agency, 2010. The European environment state and outlook 2010: synthesis, Copenhagen.
M. Gjølberg, 2009, The Origin of Corporate Social Responsibility: Global Forces or National Legacies? Socio-Economic Review, 7, 605-637.
F. Herrera, E. Herrera-Viedma, 2000, Linguistic decision analysis: steps for solving decision problems under linguistic
information. Fuzzy Sets and Systems. 115, 67-82.
F. Herrera, L. Martínez, 2000, A 2-tuple fuzzy linguistic representation model for computing with words. IEEE
Transactions on Fuzzy Systems, 8, 746-752.
L. Martinez, D. Ruan, F. Herrera, 2010, Computing with words in decision support Systems: An overview on models and
applications. International Journal of Computational Intelligence Systems, 3, 382-395.
N. Nagesha, 2008, Role of energy efficiency in sustainable development of small-scale industry clusters: an empirical study, Energy for Sustainable Development, 12, 34-39.
D. Toke, K. Oshima, 2007, Comparing Market Based Renewable Energy Regimes: The Cases of the UK and Japan.
International Journal of Green Energy, 4, 409-425.
L. Zadeh, 1975, The concept of a linguistic variable and its application to approximate reasoning. Part i. Information
Sciences, 8, 199–249.
Politis Y., Grigoroudis E. | Combining Performance and Importance Judgment in the MUSA Method
Abstract
The multicriteria method MUSA (MUlticriteria Satisfaction Analysis) is a preference disaggregation model following the
principles of ordinal regression analysis (inference procedure). The method is used for measuring and analyzing
customer satisfaction and aims at evaluating the satisfaction level of a set of individuals (customers, employees, etc.)
based on their values and expressed preferences. This study presents an extension of the MUSA method based on
additional customer preferences. In particular, a customer satisfaction survey may include, besides the usual
performance questions, preferences about the importance of the criteria. Using such questions, customers are asked
either to judge the importance of a satisfaction criterion using a predefined ordinal scale, or rank the set of satisfaction
criteria according to their importance. All these performance and importance preferences are modeled using linear
programming techniques in order to assess a set of marginal satisfaction functions in such a way that the global
satisfaction criterion and the importance preferences become as consistent as possible with customers' judgments.
Based on these optimality criteria, the extension of the MUSA method is modeled as a Multiobjective Linear
Programming (MOLP) problem. The main aim of the study is to show how combining customers’ performance and
importance preferences, the robustness of the estimated results may be improved compared to the original MUSA
method. An illustrative example is presented in order to show the applicability of this approach, while several MOLP
techniques (e.g., heuristic method, compromise programming, global criterion approach) and post-optimality
approaches are applied.
KEYWORDS
1. INTRODUCTION
The MUSA (MUlticriteria Satisfaction Analysis) method is a preference disaggregation model for measuring
and analyzing customer satisfaction. It follows the principles of ordinal regression analysis and aims at
evaluating the satisfaction level of a set of individuals (customers, employees, etc) based on their values
and expressed preferences. Considering that the MUSA method is based on linear programming modeling, the problem of multiple or near-optimal solutions appears in several cases. This has an impact on the stability level of the provided results. Additional customer preferences, such as preferences about the importance of the criteria, may improve the stability of the basic MUSA model.
The global and partial value functions of the MUSA method are assessed by means of linear programming techniques (Jacquet-Lagrèze and Siskos, 1982; Siskos and Yannacopoulos, 1985; Siskos, 1985). The ordinal regression analysis equation, with the introduction of a double-error variable representing the overestimation and the underestimation error, has the following form:
$$\tilde{Y}^* = \sum_{i=1}^{n} b_i X_i^* - \sigma^{+} + \sigma^{-}, \qquad \sum_{i=1}^{n} b_i = 1 \qquad (1)$$
where the value functions $Y^*$ and $X_i^*$ are normalised in the interval [0, 100], $b_i$ is the weight of the $i$-th criterion, and $\sigma^{+}$, $\sigma^{-}$ are the overestimation and underestimation error variables, respectively.
Removing the monotonicity constraints, the size of the previous LP can be reduced in order to decrease the
computational effort required for optimal solution search. This is effectuated via the introduction of a set of
transformation variables, which represent the successive steps of the value functions $Y^*$ and $X_i^*$ (Siskos
and Yannacopoulos, 1985; Siskos, 1985). The transformation equation can be written as follows (see also
Figure 3):
$$z_m = y^{*\,m+1} - y^{*\,m}, \quad \text{for } m = 1, 2, \ldots, \alpha - 1$$
$$w_{ik} = b_i x_i^{*\,k+1} - b_i x_i^{*\,k}, \quad \text{for } k = 1, 2, \ldots, \alpha_i - 1 \text{ and } i = 1, 2, \ldots, n \qquad (2)$$
According to the aforementioned definitions and assumptions, the basic estimation model can be written in a linear program formulation as follows:
$$[\min] F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right)$$

subject to

$$\sum_{i=1}^{n} \sum_{k=1}^{t_{ji}-1} w_{ik} - \sum_{m=1}^{t_j - 1} z_m - \sigma_j^{+} + \sigma_j^{-} = 0, \quad \text{for } j = 1, 2, \ldots, M$$

$$\sum_{m=1}^{\alpha - 1} z_m = 100 \qquad (3)$$

$$\sum_{i=1}^{n} \sum_{k=1}^{\alpha_i - 1} w_{ik} = 100$$

$$z_m \geq 0, \; w_{ik} \geq 0, \quad \forall\, m, i, k$$

$$\sigma_j^{+} \geq 0, \; \sigma_j^{-} \geq 0, \quad \text{for } j = 1, 2, \ldots, M$$
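For illustration, a minimal sketch of LP (3) in Python with the PuLP library is given below; it assumes the satisfaction judgments are encoded as integer scale levels t[j] (global satisfaction of customer j) and ti[j][i] (satisfaction of customer j on criterion i), with alpha and alpha_i the numbers of levels of the corresponding scales. Names and data layout are illustrative, not part of the original method description.

```python
# Sketch of the basic MUSA LP (3) under the assumptions stated above.
import pulp

def musa_basic_lp(t, ti, alpha, alpha_i):
    M, n = len(t), len(alpha_i)
    prob = pulp.LpProblem("MUSA_basic", pulp.LpMinimize)
    z = [pulp.LpVariable(f"z_{m}", lowBound=0) for m in range(alpha - 1)]
    w = [[pulp.LpVariable(f"w_{i}_{k}", lowBound=0)
          for k in range(alpha_i[i] - 1)] for i in range(n)]
    sp = [pulp.LpVariable(f"sigma_plus_{j}", lowBound=0) for j in range(M)]
    sm = [pulp.LpVariable(f"sigma_minus_{j}", lowBound=0) for j in range(M)]
    prob += pulp.lpSum(sp) + pulp.lpSum(sm)          # [min] F
    for j in range(M):                               # one constraint per customer
        prob += (pulp.lpSum(w[i][k] for i in range(n) for k in range(ti[j][i] - 1))
                 - pulp.lpSum(z[m] for m in range(t[j] - 1))
                 - sp[j] + sm[j] == 0)
    prob += pulp.lpSum(z) == 100                     # normalisation of Y*
    prob += pulp.lpSum(w[i][k] for i in range(n)
                       for k in range(alpha_i[i] - 1)) == 100
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {v.name: v.value() for v in prob.variables()}
```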
$$[\max] F' = \sum_{k=1}^{\alpha_i - 1} w_{ik}, \quad \text{for } i = 1, 2, \ldots, n$$

subject to

$$F \leq F^{*} + \varepsilon \qquad (4)$$

and all the constraints of LP (3), where $F^{*}$ is the optimal value of the objective function of LP (3) and $\varepsilon$ is a small percentage of $F^{*}$.
The average of the optimal solutions given by the n LPs (4) may be considered as the final solution of the
problem. In case of instability, a large variation of the provided solutions appears and the final average
solution is less representative.
Based on such importance questions, each one of the satisfaction criteria can be placed in one of the categories $C_1, C_2, \ldots, C_q$, where $C_1$ is the most important criterion class and $C_q$ is the least important one. Considering that the classes $C_l$, with $l$ the class index, are ordered on a 0-100% scale, there are $q-1$ thresholds $T_l$, which define the rank and, therefore, label each one of the classes (see Figure 1). Thus, the evaluation of the preference importance classes $C_l$ is similar to the estimation of the thresholds $T_l$.
An ordinal regression approach may also be used in order to develop the weights estimation model. The
WORT (Weights evaluation using Ordinal Regression Techniques) model is presented in LP (5) in which the
goal is the minimization of the sum of errors under a set of constraints, according to the importance class to which each customer $j$ considers that criterion $i$ belongs (Grigoroudis and Spiridaki, 2003).
$$[\min] \Phi = \sum_{i=1}^{n} \sum_{j=1}^{M} \left( S_{ij}^{+} + S_{ij}^{-} \right)$$

subject to

$$\sum_{t=1}^{\alpha_i - 1} w_{it} - 100\,T_l + S_{ij}^{+} \geq \delta, \quad \hat{b}_{ij} \in C_l, \; l = 2, \ldots, q-1, \; i = 1, 2, \ldots, n \text{ and } j = 1, 2, \ldots, M$$

$$\sum_{t=1}^{\alpha_i - 1} w_{it} - 100\,T_{q-1} - S_{ij}^{-} \leq 0, \quad \hat{b}_{ij} \in C_q \qquad (5)$$

$$\sum_{i=1}^{n} \sum_{k=1}^{\alpha_i - 1} w_{ik} = 100$$

$$T_{q-1} \geq \lambda, \quad T_{q-2} - T_{q-1} \geq \lambda, \quad \ldots, \quad T_1 - T_2 \geq \lambda$$

$$w_{ik}, \; S_{ij}^{+}, \; S_{ij}^{-} \geq 0, \quad \forall\, i, j, k$$

Here, $\hat{b}_{ij}$ is the preference of customer $j$ about the importance of criterion $i$, $\delta$ is a small positive number used in order to avoid cases where $\hat{b}_{ij} = T_l$, $\forall l$, and $\lambda$ is a minimum value introduced to increase the discrimination of the importance classes.
The extension of the MUSA method, combining performance and importance judgments, is thus formulated as the following MOLP problem:

$$[\min] F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right), \qquad [\min] \Phi = \sum_{i=1}^{n} \sum_{j=1}^{M} \left( S_{ij}^{+} + S_{ij}^{-} \right) \qquad (6)$$

subject to all the constraints of LPs (3) and (5).
The examination of possible improvement is done through the Average Stability Index (ASI). ASI is the mean
value of the normalized standard deviation of the estimated weights $b_i$ and is calculated as follows:
$$\mathrm{ASI} = 1 - \frac{1}{n} \sum_{i=1}^{n} \frac{\sqrt{\,n \sum_{j=1}^{n} \left( b_i^{j} \right)^2 - \left( \sum_{j=1}^{n} b_i^{j} \right)^2 }}{100\,\sqrt{n-1}} \qquad (7)$$
where $b_i^{j}$ is the estimated weight of criterion $i$ in the $j$-th post-optimality analysis LP (Grigoroudis and Siskos, 2002).
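A short sketch of the ASI computation of Eq. (7), assuming the post-optimality weights are stored as a list of lists b[j][i] in percentage points, is:

```python
# ASI of Eq. (7): b[j][i] is the weight of criterion i estimated in the
# j-th post-optimality LP; n LPs are solved, one per criterion.
import math

def average_stability_index(b):
    n = len(b)                             # number of criteria (and of LPs)
    total = 0.0
    for i in range(n):
        col = [b[j][i] for j in range(n)]  # weight of criterion i across LPs
        s1 = sum(col)
        s2 = sum(v * v for v in col)
        total += math.sqrt(n * s2 - s1 ** 2) / (100 * math.sqrt(n - 1))
    return 1 - total / n
```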
Since competitiveness of the objective functions is the main characteristic of MOLP problems, searching for
a solution that optimizes both F and Φ is rather pointless. The above problem may be solved using any
MOLP technique (e.g. compromise programming, global criterion approach, etc.). Here, an alternative
heuristic method, consisting of three steps, is presented (Grigoroudis et al., 2004):
Step 1: Solve the following LP, which minimizes the sum of the error variables of the basic model:

$$[\min] F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right) \qquad (8)$$

subject to the constraints of LPs (3) and (5).

Step 2: Minimize the errors $S_{ij}^{+}$ and $S_{ij}^{-}$ using the following LP:

$$[\min] \Phi = \sum_{i=1}^{n} \sum_{j=1}^{M} \left( S_{ij}^{+} + S_{ij}^{-} \right) \qquad (9)$$

subject to the constraints of LPs (3) and (5).

Step 3: Perform post-optimality analysis, as in LP (4), through $n$ LPs of the form:

$$[\max] F' = \sum_{k=1}^{\alpha_i - 1} w_{ik}, \quad \text{for } i = 1, 2, \ldots, n$$

subject to

$$F \leq F^{*} + \varepsilon_1, \qquad \Phi \leq \Phi^{*} + \varepsilon_2 \qquad (10)$$

and the constraints of LPs (3) and (5), where $F^{*}$ and $\Phi^{*}$ are the optimal values of the objective functions of LPs (8) and (9), and $\varepsilon_1$, $\varepsilon_2$ are small percentages of them.
A detailed discussion about modelling preferences on criteria importance in the framework of the MUSA
method, as well as real-world applications of the aforementioned approaches are given by Grigoroudis and
Spiridaki (2003) and Grigoroudis et al. (2004).
4. NUMERICAL EXAMPLE
A hypothetical numerical example has been used in order to examine whether additional information about
the weights of the criteria can improve the stability of the provided results. In this example, 20 customers express their satisfaction globally and on 3 different criteria using a three-level qualitative scale (see Table 1). Similarly, according to Table 2, customers are asked to express their preferences about the importance of the criteria using three importance classes.
Table 3 presents the results for a single set of values for the parameters of the MUSA extension model
(γ=γi=2, λ=0.1, δ=0.015, ε=ε1=ε2=0.1). Different scenarios with various values for the parameters have also
been examined, with similar results. According to Table 3, criterion 2 is the most important one and criterion 3 the least important in all cases. Considering the ASI index, there is a remarkable increase when additional information about the weights of the criteria is introduced, regardless of the MOLP technique chosen for the solution of the problem, and the index reaches almost 100% when compromise programming is used.
ACKNOWLEDGEMENT
This research has been co‐financed by the European Union (European Social Fund – ESF) and Greek national
funds through the Operational Program "Education and Lifelong Learning" of the National Strategic
Reference Framework (NSRF) ‐ Research Funding Program: THALES. Investing in knowledge society through
the European Social Fund.
REFERENCES
Grigoroudis, E. and O. Spiridaki (2003). Derived vs. stated importance in customer satisfaction surveys, Operational
Research: An International Journal, 3 (3), 229-247.
Grigoroudis, E. and Y. Siskos (2002). Preference disaggregation for measuring and analysing customer satisfaction: The
MUSA method, European Journal of Operational Research, 143 (1), 148-170.
Grigoroudis, E., Y. Politis, O. Spiridaki, and Y. Siskos (2004). Modelling importance preferences in customer satisfaction
surveys, in: C.H. Antunes, J. Figueira, and J. Climaco (eds.), Proceedings of the 56th Meeting of the European Working
Group “Multiple Criteria Decision Aiding”, INESC Coimbra, 273-291.
Jacquet-Lagrèze, E. and J. Siskos (1982). Assessing a set of additive utility functions for multicriteria decision-making: The
UTA method, European Journal of Operational Research, 10 (2), 151-164.
Siskos, J. (1984). Le traitement des solutions quasi-optimales en programmation linéaire continue: Une synthèse, RAIRO
Recherche Opérationnelle, 18, 382-401.
Siskos, J. (1985). Analyses de régression et programmation linéaire, Révue de Statistique Appliquée, 23 (2), 41-55.
Siskos, Y. and D. Yannacopoulos (1985). UTASTAR: An ordinal regression method for building additive value functions,
Investigaçao Operacional, 5 (1), 39-53.
Trujillo J., Gonzalez E., Velásquez A. | Hybrid Model for Making Tactical and Operational Decisions in Land Transportation for the Case of a Perishable Supply Chain
Abstract
Under the current conditions of mobility in Bogotá DC (Colombia), companies must give importance to decision making in transport management at the three levels of planning (strategic, tactical and operational) for land transportation, which is the most important mode in the city because it has no local rail transportation. Decision makers seek mainly to minimize cost and then to maximize efficiency and service level; the scope of this work, however, is only tactical and operational, and does not include purchases or improvements in infrastructure (roads, terminals, vehicles). This work is the final product of a research project funded by Universidad Católica de Colombia, "Development and application of a model and methodology for decision making in transport management in the Supply Chain (SC)", where the hybrid method presented is validated for a SC of the perishable food sector. This paper presents a methodology for collecting and analyzing data in the sector; in the first phase the decision maker tactically selects the route of intervention, considering organizational policies, with the multi-criteria decision technique PROMETHEE, and in the second stage decides operationally how to improve the current distribution of products on the tactically selected paths, the Suba and Engativá path networks (Bogotá, Colombia), using a Vehicle Routing Problem (VRP) reduced to Traveling Salesman Problems (TSP).
KEYWORDS
Supply Chain Management, Transportation Management, Vehicle Routing Problem (VRP), Multi Criteria Decision Aid
(MCDA), PROMETHEE, tactical and operational decisions.
1. INTRODUCTION
The purpose of a SC is to maximize the total value generated by a set of suppliers who seek to share decisions, infrastructure and services at the same time, in order to satisfy a set of customers in terms of time, form, quantity, quality, place and possession of a product or service (Chopra & Meindl, 2008; Urzelai Inza, 2006). The administration of these relationships, and of the flow of information and products upstream to suppliers and downstream to consumers, seeks to offer added value at a lower cost for the entire SC (Ballou, 2004; Christopher, 2005; Heizer & Render, 2004; Lambert, 2008; Wisner, Tan, & Leong, 2009). Transport logistics is a function of the SC (Hugos, 2006) with high costs in the supply and distribution channels (Heizer & Render, 2004).
The decision-making process is an everyday aspect of the management and administration of (industrial or service) production systems, and different techniques from operations research and statistics have been designed for this purpose; "expert" decision makers seek out these tools and this evidence in order to obtain the element of objectivity provided by the formal sciences. Decision making in transport management for the SC is highly complex, because it is responsible for the movement of materials, supplies and finished goods from a source to a destination; it covers between 1/3 and 2/3 of the SC logistics costs (Ballou, 2004) and from 10% to 30% of sales (Astals Coma, 2009), and the efficiency of its programming conditions other functions such as storage, handling and production.
Some classical authors, such as Ballou (2004), determined that transportation decisions in the SC should be limited to four: mode selection, route design, vehicle scheduling and shipment consolidation
(Ballou, 2004). Other authors, however, extend these concepts and determine that the decision units of the logistics transport function form a complex subset for the SC at the organizational tactical and operational levels, with strategic constraints on infrastructure and routes, as shown in the following figure.
Figure 1. Eight (8) decision units for the transport logistics function, in complexity, for the Supply Chain. The hybrid model combines multi-criteria decision techniques over the following units: transportation modes, types of carriers, selection of carriers, transport fleet mix, cargo consolidation degree, assignment of vehicles to consumers, routing and programming, and load plans.
Transportation management is perhaps the most important problem of tactical management for the SC (Garcia C., et al., 2010; Langevin, et al., 2005). The complexity of the decisions grows from left to right in the network, where the elements are interrelated by arrows and there are no prior assumptions. Thus, the decision units of the transport logistics function in the supply chain are tactical for: transportation modes, types of carriers, carrier selection, mixed fleet transportation and consolidation degree; and operational for: vehicle assignment, vehicle routing and scheduling, and load plans (Langevin, et al., 2005; Trujillo & García, 2013; Trujillo & Gonzalez, 2014). All logistics decision units are complementary to strategic decision making in improving the performance of the supply chain and in planning freight infrastructure, roads, vehicle purchases, etc.
The hybrid model proposed in this article is intended for the implementing actors in the SC: the dispatcher, the receiver and the operator. Strategically, at the macroeconomic level the government is the decision maker (Bowersox, Closs, & Cooper, 2007), and at the microeconomic level it is the SC manager. At the operational and tactical levels, this model is used by users or drivers (Wright, Ashford, & Stammer. Jr, 1998), transport agencies or transporters (Blanchard, 2007; Bowersox, et al., 2007; Wright, et al., 1998), and logistics operators (wholesalers-distributors) (Ballou, 2004; Bowersox, et al., 2007; Khisty C. & Kent B., 1998; Lambert, Stock, & Ellram, 1998; López Fernandez, 2004). Thus, the tactical logistics units are of great importance in the supply and distribution channels of goods, while the operational ones are fundamental in the distribution channel.
2. PROBLEM STATEMENT
Bogotá DC is a city whose urban growth has increased since 1950 ("Instituto de Estudios Urbanos," 2005), and it is the largest and most populous urban center of the country. However, despite its obvious territorial and population growth, the infrastructure of land routes and public services has not grown in the same way: 43% of the road network is in poor condition, according to data provided by the Urban
Development Institute (IDU) (Cubillos M., 2013). Interventions and the increased vehicle fleet are the main factors that affect mobility, making longer travel times for land transport expected; e.g., the average travel speed in 2002 was 30.73 kph (kilometers per hour), while in 2011 it was 23.27 kph ("Movilidad Bogotá," 2011), which indicates that freight management in the SC is expensive due to vehicle maintenance, long lead times, among others.
Producers, transportation companies and/or SCs distributing perishable products in Bogotá must know the average supply and distribution travel times, given the mobility problems described and the restriction of land transport as the only mode for local product distribution. This raises questions such as: a) who should intervene in the study units, and in what strategic order, so as to make the tactical and operational decision-making process more efficient? and b) what is the best operational route programming, if the clients have time windows and the SC seeks not to lose the quality conditions of perishable products and to improve the level of service?
This work takes place in a supply chain for the distribution of bananas of the Armenia type in Bogotá DC, which serves 5 areas (Sur, Fontibón, Center, Suba and Engativá), with 300 customers including retailers, restaurants, supermarkets, grocery stores, etc. The company has a fleet of 15 trucks with different capacities and a fixed average utilization time. The manager traces routes empirically, with customer service policies and forecasts of average quantities per week. However, in strategic planning, the decision maker wants a 5% reduction in the organization's operating costs.
The presented hybrid model was generated after a review of the state of the art of decision-making and optimization techniques in efficient transport management, and it is proposed in order to achieve the strategic objective. In tactical planning, the decision maker prioritizes the routes to intervene, using multi-criteria decision making under the most important SC criteria over all routes or study areas; operational planning should then immediately optimize the selected routes, in order to assess the monetary reduction of transport management in the short and medium term, seeking to achieve the strategic objective. This hybrid model, through the integration of these techniques, can be applied in other business contexts in which the aim is to make decisions and also to improve what is already in place.
3. METHODOLOGY
The methodology of the hybrid model for making tactical and operational decisions in land transportation for the case of a perishable supply chain is presented in the next figure, and its explanation is given in the following paragraphs.
The variables in this research are collected from: a) financial and operating data provided by the manager of the organization, or "decision maker", for tactical decisions; and b) operational forms of transportation scheduling, for the operational decision making of vehicle routing and scheduling. The variables analyzed for the two phases of decision making are:
Operational variables: amount to be delivered by customer type, waiting time, loading and unloading times, travel time, distance, average speed.
Tactical variables: route (area), number of trucks dispatched, quantity shipped (units), amount delivered (kg), number of customers, total route distance (km), daily operating costs per route, daily gross sales income, service level, quality level of products.
Data analysis is posed with the Input Analysis Methodology proposed in (Trujillo, et al., 2010), adapted and extended by the authors as described in the following figure; it serves for the analysis of any data type in any type of system, so the reader can replicate it in any context as long as the statistical assumptions are met. The analysis of the variables, in order to define the model and its complexity, considers the set of inter- and intra-variable relationships, that is, between and within the same variable. This analysis should also identify extraneous and/or disturbing variables that affect operating performance (loss rate, absenteeism, etc.); they are just as important in the study because they would have to be controlled in a pure experimental design (Law & Kelton, 2000; Trujillo, et al., 2010).
Figure. Data analysis flowchart (adapted from Trujillo, et al., 2010): if a variable does not show normal behavior, Kruskal-Wallis homogeneity testing (inter-variable, intra-factor) is applied for each of the levels and interaction variables; at least 50 data points are then selected by Simple Random Sampling (SRS) with a Bernoulli distribution, where the probability p(x) is equal to the amount of data to be selected over the entire data set.
With the above, the bivariate correlation test is used to identify the inter-variable relationships, in order to determine the dependencies and independences between them; either a categorical or a scalar variable can be dependent or independent, and when the dependent variable is categorical it indicates the number of levels and means for the homogeneity test of multiple comparisons.
Hypothesis testing is used to verify the homogeneity of means at the different levels or factors. The null hypothesis states that the means of the n levels of the independent variable are equal, or come from the same population, and the alternative that they are different or non-homogeneous; the null hypothesis is accepted when the p-value is greater than the significance level of the test. If the data come from, or are previously known to come from, a normal population, an ANOVA or Fisher test is performed; but if the behavior of the population or of the data is unknown, then a non-parametric Kruskal-Wallis test is applied (Montgomery, 2002; Trujillo, et al., 2010). When there is not
enough evidence to accept the null hypothesis, then one should identify the homogeneous subsets and/or contrasts, for which a Tukey or Scheffé test can be used (Montgomery, 2002; Trujillo, et al., 2010).
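As a hedged sketch of this decision logic in Python (the significance level and the Shapiro-Wilk normality screen are illustrative choices, not prescribed by the methodology):

```python
# Homogeneity check across factor levels: parametric ANOVA when the
# samples look normal, non-parametric Kruskal-Wallis otherwise.
from scipy import stats

def homogeneity_of_means(groups, alpha=0.05):
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    if normal:
        stat, p = stats.f_oneway(*groups)   # ANOVA / Fisher F-test
    else:
        stat, p = stats.kruskal(*groups)    # Kruskal-Wallis H-test
    # H0 (equal means / same population) is accepted when p > alpha
    return ("homogeneous" if p > alpha else "non-homogeneous"), stat, p
```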
For each homogeneous subset, intra-variable tests of independence or randomness are performed with a statistical analysis tool called STATFIT (an application of the ProModel Simulation Software®) (Garcia D., Heriberto, & Cárdenas B., 2006; Trujillo, et al., 2010). The basic tests of this type offered by the software are: a) the scatter plot (which visually shows the randomness and dependence behavior of the variable); b) the correlogram and the intra-variable autocorrelation index (the closer the value is to zero, the less relationship there is; positive or negative values indicate a lack of independence); c) the runs test above and below the mean; and d) the runs test up and down, where a p-value greater than the 0.05 significance level indicates randomness of the variable (Trujillo, et al., 2010). The decision rule is that if three of these tests indicate independence, the intra-variable behavior is possibly random, or follows a probability distribution function.
The STATFIT student version handles only 50 data points; therefore, according to the level of precision sought, a Simple Random Sampling (SRS) must be performed on the homogeneous subset, or on the whole variable if it is homogeneous. The probability of success for a Bernoulli-type probability distribution is p(x), indicating in this case the probability of being selected randomly for the SRS. This cut-off is calculated as the desired sample size over the entire data of the subset, or of the homogeneous variable. A set of random numbers is then generated, where if the random number is greater than the cut-off, it is a successful event and is therefore selected for the goodness-of-fit tests.
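A minimal sketch of this Bernoulli-type SRS is shown below; note that, unlike the rule as literally stated above, the sketch keeps a record when the uniform draw falls below the cut-off p(x), which is the convention that yields the stated selection probability:

```python
# Bernoulli-type Simple Random Sampling: each record is selected
# independently with probability p = desired sample size / total data size.
import random

def bernoulli_srs(data, sample_size, seed=None):
    p = sample_size / len(data)
    rng = random.Random(seed)
    return [x for x in data if rng.random() < p]
```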
To identify the type of behavior of the random variable, either in a subset or in its entirety, the Chi-square, Kolmogorov-Smirnov and/or Anderson-Darling goodness-of-fit tests are applied; depending on the amount of data, they offer variable reliability about the behavior (Trujillo, et al., 2010; Vivanco, 2005). E.g., for large volumes of data the Chi-square test is recommended, while for small amounts of data the Kolmogorov-Smirnov test works fine. The ranking or adjustment level given by STATFIT ranges from 100% to 0%; when the program does not generate a reference, or the variable is random or independent, an empirical distribution must be generated, from count data or a frequency distribution, to determine the probability of occurrence of a specific event in the data set.
Other tools can at times be used to determine whether the distribution type ranked by the goodness-of-fit tests reflects the behavior of the data; these graphs are the P-P plot and the Q-Q plot (Law & Kelton, 2000). They contrast the hypothesis of the theoretical distribution: if the data fit the theoretical or predicted distribution, the points are perfectly delineated along the center line of the plot, and this distribution is accepted as the behavior of the data; otherwise it is rejected.
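A brief goodness-of-fit sketch with SciPy, using the Kolmogorov-Smirnov test suited to the small samples mentioned above (the exponential candidate distribution is an illustrative assumption):

```python
# Fit a candidate distribution by maximum likelihood and test it with
# the Kolmogorov-Smirnov statistic; p > 0.05 means the fit is not rejected.
from scipy import stats

def ks_goodness_of_fit(data, dist=stats.expon):
    params = dist.fit(data)                     # e.g. (loc, scale) for expon
    stat, p = stats.kstest(data, dist.name, args=params)
    return stat, p
```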
The data analyzed for the case study were the distances between supply points, the quantities (kilograms) delivered to each customer, waiting times, truck travel times and truck service times (loading/unloading). The homogeneity tests applied in the case study were F-tests in SPSS® and Ms-Excel®, fulfilling the assumptions of normality; then the tests of independence and goodness of fit were run in the STATFIT module generated by ProModel®. Their results are presented for tactical and operational decisions.
Several variants of the PROMETHEE (Preference Ranking Organization Methods for Enrichment Evaluations) technique exist: PROMETHEE I, II, III and IV, and PROMETHEE TRI; PROMETHEE V and VI (for collective decisions); and PROMSORT, specialized in logistics (Behzadian, Kazemzadeh, Albadvi, & Aghdasi, 2010).
According to Roy, MCDA addresses four main issues: selection, sorting, ranking and description, supported by the comparison of pairs of alternatives (Lu, Wang, & Mao). The PROMETHEE method is part of the outranking methods of MCDA (J.-P. Brans & Mareschal, 2005), based on the construction and exploitation of outranking relations. It is one of the most developed multicriteria evaluation techniques, used for the selection, classification and ordering of alternatives where the criteria are usually in conflict with each other. PROMETHEE I (J.P. Brans, 1982) gives a partial preorder; PROMETHEE II (J.P. Brans, 1982) gives a complete preorder (J.-P. Brans & Mareschal, 2005; J.P. Brans, 1982). These are complemented by the visual modeling technique GAIA (Geometrical Analysis for Interactive Aid), a qualitative decision tool that helps in understanding the contentious issues between the criteria and in determining the weights associated with them (J.-P. Brans & Mareschal, 2005; Dulmin & Mininno, 2003; Bertrand Mareschal & Brans, 1988).
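A compact, hedged sketch of the PROMETHEE II net flow computation with a Gaussian preference function is given below; the parameter s and the data layout are illustrative assumptions, not the paper's exact configuration:

```python
# PROMETHEE II net outranking flows: scores is an (alternatives x criteria)
# matrix, weights sum to 1, and maximize[c] is False for "Min" criteria.
import math

def gaussian_preference(d, s=1.0):
    return 0.0 if d <= 0 else 1.0 - math.exp(-d * d / (2 * s * s))

def promethee_ii(scores, weights, maximize):
    n = len(scores)
    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = 0.0
            for c, w in enumerate(weights):
                d = scores[a][c] - scores[b][c]
                if not maximize[c]:
                    d = -d                      # reverse difference for Min criteria
                pi_ab += w * gaussian_preference(d)
            phi[a] += pi_ab / (n - 1)           # contributes to phi+ of a
            phi[b] -= pi_ab / (n - 1)           # and to phi- of b
    return phi                                  # rank by descending net flow
```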
The PROMETHEE technique has been used to solve various types of problems: supplier selection, product development (Behzadian, et al., 2010), investment and banking, human resources planning, water resources, medicine, chemistry, health care, tourism, ethics in organizations, dynamic management (J.-P. Brans & Mareschal, 2005), facility location (Behzadian, et al., 2010; J.-P. Brans & Mareschal, 2005), among others. In the literature reviewed in this research, there is no evidence of application to tactical and operational
decisions for intervention and route design (Behzadian, et al., 2010; Bertrand Mareschal, 2013; Pavić &
Babić, 1991) (Behzadian, et al., 2010; Jean Pierre Brans & Mareschal, 1994; Dulmin & Mininno, 2003;
Karkazis, 1989; Leyva López & Fernández González, 2003; Bertrand Mareschal & Brans, 1988; Bertrand
Mareschal & Brans, 1992; B. Mareschal & Brans, 1994; Mladineo, Lozić, Stošić, Mlinarić, & Radica, 1992;
Radojevic & Petrovic, 1997; Raveh, 2000) (Anagnostopoulos, Giannopoulou, & Roukounis, 2003; Araz,
Mizrak Ozfirat, & Ozkarahan, 2007; Araz & Ozkarahan, 2007; Elevli & Dmirci, 2004; Fernández Castro &
Jiménez, 2004; Jugovic, Jugovic, & Zelenika, 2007; Jugovic´, Baricevic´, & Karleusa, 2006; Bertrand
Mareschal, 2013; Marinoni, 2005; Wang & Yang, 2007).
The aggregate tactical data collected for each sector and route are presented in the following table; they were analyzed according to the methodology presented in the previous section, and the Probability Density Function (PDF) chosen for the preference functions (Behzadian, et al., 2010) is given in the last line of the table.
| Number | Route (Area) | Dispatched trucks | Quantity (kg) | Shipped (units) | Total distance (km) | Number of clients | Daily operating costs per route ($Col) | Gross income, daily sales ($Col) | Customer satisfaction level¹ | Product quality level |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Sur | 1 | 765 | 19125 | 28 | 16 | 122.861 | 28.687.500 | 85% | 9 |
| 2 | Fontibon | 3 | 2690 | 67250 | 63 | 10 | 368.583 | 100.875.000 | 90% | 2 |
| 3 | Center | 4 | 2450 | 61250 | 84 | 25 | 491.444 | 91.875.000 | 95% | 2 |
| 4 | Suba | 4 | 3250 | 81250 | 91 | 32 | 491.444 | 121.875.000 | 100% | 6 |
| 5 | Engativa | 3 | 2760 | 69000 | 65 | 17 | 368.583 | 103.500.000 | 98% | 9 |
| Weight | | 5 (8%) | 5 (8%) | 5 (8%) | 10 (16%) | 8 (13%) | 8 (13%) | 7 (11%) | 6 (10%) | 8 (16%) |
| Orientation | | Min | Max | Max | Max | Min | Min | Max | Max | |
| PDF | | Gaussian | Gaussian | Lineal | Level | Gaussian | Gaussian | Gaussian | Usual | Usual |

Source: The Authors. ¹ Measured as: [definition not recoverable from the source]
With the above information as parameters, and from the results in the Visual PROMETHEE software (PROMETHEE I, II, III, IV, V and VI), it is determined to intervene in the Suba and Engativá routes. Thus, the following section presents the design of the proposed routes using VRP.
The approach begins with the classic VRP model with three indices, where i = departure node, j = arrival node and k = chosen truck. In order to reduce the size of the problem, a clustering methodology groups customers by geographical proximity, using the agglomerative linkage method in the SPSS® software. Subsequently, as the fleet is homogeneous (same capacity), the k index becomes superfluous; therefore three TSPs (Traveling Salesman Problems) are raised, one per customer cluster, for the Engativá route, to which 3 trucks are assigned. For Suba, only one TSP for a single group is raised, because it is served by a single truck.
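The clustering step can be sketched in Python as follows, under the assumption that customer coordinates are available; SciPy's agglomerative linkage stands in here for the SPSS® step described above:

```python
# Group customers by geographical proximity with agglomerative linkage,
# then cut the dendrogram into k clusters (one per available truck).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_customers(coords, k):
    Z = linkage(np.asarray(coords), method="average")
    return fcluster(Z, t=k, criterion="maxclust")   # cluster label per customer
```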
The VRP models necessarily minimize cost; the objective function can be expressed in terms of money, distance or operating times. These circuits are used to provide optimal routing in order to distribute a certain demand within a set of clients, to which the classic integer programming model of the TSP applies: minimize the total cost $\sum_i \sum_j c_{ij} x_{ij}$, subject to each place being entered and left exactly once and to subtour elimination constraints, where the cost matrix is square ($n \times n$, with $n$ the number of places to visit, including storages) and $x_{ij}$ is a binary decision variable equal to 1 if the vehicle travels from $i$ to $j$ and 0 otherwise.
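A hedged sketch of this classic TSP integer program, using the Miller-Tucker-Zemlin (MTZ) subtour elimination instead of the paper's exact GAMS® formulation, could look as follows:

```python
# TSP over a square n x n cost matrix c: binary x[i, j] = 1 if the vehicle
# travels from i to j; MTZ order variables u[i] forbid subtours.
import pulp

def tsp_mtz(c):
    n = len(c)
    prob = pulp.LpProblem("TSP", pulp.LpMinimize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in range(n) for j in range(n) if i != j}
    u = [pulp.LpVariable(f"u_{i}", lowBound=1, upBound=n - 1) for i in range(n)]
    prob += pulp.lpSum(c[i][j] * x[i, j] for (i, j) in x)   # total travel cost
    for i in range(n):
        prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == 1  # leave once
        prob += pulp.lpSum(x[j, i] for j in range(n) if j != i) == 1  # enter once
    for i in range(1, n):                                   # MTZ constraints
        for j in range(1, n):
            if i != j:
                prob += u[i] - u[j] + (n - 1) * x[i, j] <= n - 2
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j) in x if x[i, j].value() == 1]  # selected arcs
```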
The integer programming formulation is solved with the GAMS® software to obtain the new routing circuits (Ramos, Sánchez, Ferrer, Barquín, & Linares, 2010). Subsequently, the cost reduction is validated against the strategic goal, "To reduce operating costs by 5%", which is exceeded: the new validated circuits, shown in the next figure, achieve a 7% reduction.
4. CONCLUSIONS
For the SC, land transport as the mode of procurement and local distribution in Bogotá DC (Colombia) is one of the most important, because of its high operating cost for both small and large transport companies. Owing to the overpopulation of the city and the lack of access roads and of other transportation modes for delivering goods (e.g. railway terminals), it is necessary that the trucking operator take appropriate tactical and operational decisions in order to minimize cost and to increase efficiency and the service level for customers; the proposed hybrid model accomplishes this in a practical way.
The hybrid model allows the decision maker to complement, tactically and operationally, the strategic decisions on transport for the SC, combining several statistical techniques for data analysis and clustering, operations research and multi-criteria techniques.
The data analysis methodology allows the decision maker to analyze the interaction, behavior and variation of the variables, both inter- and intra-variable.
The application of multi-criteria decision making for tactical transport in the SC, to select and prioritize routes in a timely way, bears directly on the strategic objective of the organization, and the decision maker can avoid carrying out complete studies, apportioning these tools according to the resources allocated for this purpose.
Route planning in most organizations, large or small, and especially for land transport, where decision makers follow previous experience rather than mathematical criteria, is relevant for generating solutions that integrate tactical and operational decisions, through hybrid models that are efficient and economical from the computational point of view and that can be implemented in any organization without costly investments in specialized software developments. Thus, the decision maker looks to reduce operating costs: waiting waste, distances and average travel time. At the same time, he/she aims to increase the use of vehicles; generating clusters through geographical proximity and converting the VRP (3 indices) into TSPs (2 indices) reduces the computation time needed to model the routes and transfer them to a spreadsheet.
5. PERSPECTIVES
The decision maker can add more complexity to the hybrid model presented herein by using fuzzy multi-criteria techniques from the state of the art, parameterized according to the needs of the operator. Similarly, in order to mitigate the high computational time of the routing model runs, the decision maker can propose metaheuristic
techniques combined with optimization methods to determine feasible solutions for the design of efficient routes and circuits.
For decision making, the strategic objective that impacts the SC should be very clear; from this, the tactical and operational issues are identified, involving the variables that reduce cost in a timespan and improve the service level. Thus, the land transport operator should propose several observation scenarios in which these two objectives are achieved based on average transport times, generating sufficient evidence through pilot tests to avoid deviations from the normal model.
ACKNOWLEDGEMENT
* Trujillo, Johanna: Masters in Industrial Engineering from Pontificia Universidad Javeriana and
Industrial Engineering from Universidad Católica de Colombia. HELORS Member (Hellenic Operational
Research Society), CIOL member (Centre for Research on Optimization and Logistics). Research fellow of the Industrial Management Group (GEGI), recognized by COLCIENCIAS; leader in research training for Integrating the Supply Chain (InCaS). Teaches at Universidad Católica de Colombia and Universidad Jorge Tadeo Lozano.
Has presented research papers in ALIO-INFORMS (Association of Latin-Iberoamerican Operational Research
Societies-Institute for Operations Research and the Management Sciences), 2010; the 2nd International Symposium and 24th National Conference on Operational Research, 2013; and the 26th EURO-INFORMS Conference, 2013.
Has advanced courses in Operations Research, Multicriteria Decision Techniques, Metaheuristics
Techniques and Lean Manufacturing. Research Project Manager in Supply Chain and Transport Logistic Function; has published scientific articles and case studies in the real sector on measurement, statistical analysis, process control and decision making, especially in Supply Chain transport, using simulation techniques, multi-criteria methods, routing, data mining and Business Intelligence. Mail: [email protected];
[email protected]; [email protected].
*** Velásquez, Andrés: Master of Industrial Engineering, Universidad de los Andes; Industrial Engineer, Universidad Distrital Francisco José de Caldas; Specialist in Production and Distribution Logistics, Fundación Universitaria del Área Andina; Storage Systems, Pontificia Universidad Javeriana; MRP-II, Top Management; Modern Techniques and Inventory Management, Universidad de los Andes; Integrated Management Manufacturing, INCOLDA - CESA. Logistics research professor at Universidad Católica de Colombia and Escuela de Administración de Negocios. Has published more than 20 articles and three books on logistics, strategy and SC management. Consultant in logistics strategy and business management. Member of the Colombian Society of Operations Research (partner), the System Dynamics Society (Latin American Chapter) and the Institute of Industrial Engineers (IIE). Mail: [email protected]
Koulinas G.K., Demesouka O.E., Sougkara I.G., Vavatsikos A.P., Anagnostopoulos K.P. | Regional Units Evaluation Using Extended Fuzzy Analytic Hierarchy Process: The Case of Central Macedonia Region
Abstract
Multicriteria Decision Analysis tools provide procedures to support rational decision making; as a result, numerous applications have been developed to date in a variety of scientific areas. Multicriteria analysis facilitates the representation of multidimensional problems and, owing to its flexibility, simplifies the decision-making process. In this context, it can be used to evaluate regional units in order to investigate inter/intra-regional inequalities. The paper at hand aims to provide such an evaluation, offering guidelines for the planning of interventions and at the same time highlighting priorities in the spatial context. This is achieved using the most popular extension of the Analytic Hierarchy Process (AHP) to fuzzy logic, known as Extended AHP.
The case study is performed in the Central Macedonia Region, which is divided into seven regional units: Thessaloniki, Serres, Imathia, Chalkidiki, Pella, Pieria and Kilkis. These units are evaluated using demographic, geographic, urban, social, environmental and economic data. Technically, this is achieved using a four-level hierarchy. The first level represents the overall analysis goal, which is the evaluation of the Central Macedonia regional units. The second level includes the four criteria used to apply the evaluation. The criteria referring to the units are: productivity, social characteristics, technical characteristics, and environmental and geographical characteristics. At the third level, the sub-criteria of the above criteria are set; each criterion is analyzed into six sub-criteria. Finally, the seven units of the Central Macedonia Region are placed at the hierarchy's fourth level.
The proposed framework can be a useful tool for both policy makers and practitioners, since it captures the capabilities and requirements of the Region of Central Macedonia both as a whole and at the regional-unit level. In that sense, it can provide indicators usable by the investment programme operators of the Greek public sector, as well as by private-sector operators and investors.
KEYWORDS
Regional Planning, Central Macedonia, Multicriteria Analysis, Fuzzy AHP.
1. INTRODUCTION
The region of Central Macedonia is one of the largest regions in Greece and the second most populated one. It covers an area of 18,811 km², which is 14.2% of the area of Greece. According to the last census, its population amounts to 1,874,590 residents. It is bordered by the region of West Macedonia and by the region of East Macedonia and Thrace. Central Macedonia is the main road gateway to the Balkans and borders Bulgaria and FYROM to the north. The southern part of the Region borders the Thermaikos, Toronean and Singitic gulfs, belonging to the Aegean Sea, and the Strymonic gulf, belonging to the Thracian Sea. It consists of seven regional units, which are the regional units of Thessaloniki, Chalkidiki, Pella, Pieria,
Imathia, Kilkis and Serres. The geomorphology includes both great mountains and large lakes and rivers. The population density is 99.66 residents per square kilometre, whereas the population density of Greece as a whole is 79.7 residents per square kilometre. The region is rich in flora and fauna; however, many wetlands, coastal areas and massifs are affected by human intervention. The development of a decision analysis framework that can capture inter/intra-regional inequalities is essential to the planning of future interventions. Especially nowadays, when the funds available from the public sector are limited, it is important to develop decision support frameworks that maximize the benefits derived from the realization of public works for local communities and their citizens. The planning of public sector interventions constitutes a multidimensional decision analysis problem driven by a variety of economic, political and social factors. The paper at hand proposes a framework based on the fuzzy extensions of the Analytic Hierarchy Process (AHP). AHP is a well-known approach to multidimensional decision analysis problems and, since its introduction, has been applied to a variety of semi-structured decision problems. Over the last two decades, significant effort has been devoted to representing decision makers' preferences and their vagueness by extending the classical AHP to fuzzy logic. The present case study is performed in the Central Macedonia Region using Fuzzy AHP, in order to provide guidelines regarding the planning of interventions and to highlight priorities in the spatial context.
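To make the mechanics concrete, the sketch below applies an extent-analysis style of fuzzy AHP to a hypothetical 3x3 pairwise-comparison matrix of triangular fuzzy numbers; the matrix, the criteria count and the method variant are illustrative assumptions, not the study's data.

```python
import numpy as np

# Each entry is a triangular fuzzy number (l, m, u) -- hypothetical judgements.
A = np.array([
    [(1, 1, 1),       (1, 2, 3),       (2, 3, 4)],
    [(1/3, 1/2, 1),   (1, 1, 1),       (1, 2, 3)],
    [(1/4, 1/3, 1/2), (1/3, 1/2, 1),   (1, 1, 1)],
])

row_sums = A.sum(axis=1)            # fuzzy row sums (l, m, u)
total = A.sum(axis=(0, 1))          # grand total (l, m, u)
# Fuzzy synthetic extent S_i = row_sum_i * total^{-1};
# the inverse of (l, m, u) is (1/u, 1/m, 1/l).
S = row_sums * np.array([1 / total[2], 1 / total[1], 1 / total[0]])

def possibility(s2, s1):
    """Degree of possibility V(S2 >= S1) for triangular fuzzy numbers."""
    l1, m1, u1 = s1
    l2, m2, u2 = s2
    if m2 >= m1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((m2 - u2) - (m1 - l1))

n = len(S)
d = np.array([min(possibility(S[i], S[j]) for j in range(n) if j != i)
              for i in range(n)])
weights = d / d.sum()               # normalised crisp priorities
print(weights)
```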
The second level includes the four criteria used to apply the evaluation. These criteria provide measures of production volume, social characteristics, technical aspects, and environmental and geographical characteristics. At the third level, the sub-criteria provide further specification of their parent criteria. Finally, in the fourth level the seven regional units of Central Macedonia are placed.
Figure 1: The four-leveled hierarchy model
4. RESULT ANALYSIS
With respect to the first criterion, Pieria and Serres are ranked sixth and seventh, respectively; on the contrary, Imathia is ranked first. The analysis shows that Pieria and Serres both have a low per capita share of the gross national product. Moreover, the per capita income, as well as the economically active population settled there, is too low. Therefore, there is a great need for those regional units to enhance their business and investment sectors.
With respect to the socioeconomic criteria, the analysis shows that the greatest deficiencies are observed for the units of Chalkidiki and Serres. This is mainly owing to the high unemployment levels and the recorded number of people facing poverty. As a result, the population of these units shows strong rates of relocation to the other regional units. Moreover, births have been declining year by year over the last decade. Finally, regarding investments in Research and Development, there are no R&D centres in Chalkidiki, while the present state of the transportation network shows that Chalkidiki is not provided with the means to support its leading role as a tourist centre in Northern Greece.
With respect to the third criterion, it is highlighted that the regional unit of Pella shows significant deficiencies. Even though Pella hosts a significant industrial area, it is inefficiently organized and suffers a great lack of technical infrastructure. Furthermore, the population has decreased over the last decade, given that new jobs are no longer offered to those who do not emigrate to the neighbouring units of Kilkis and Thessaloniki. Finally, Pella lacks the infrastructure needed to deal with its sub-functional flood control system, which, in combination with the poor characteristics of the road network, indicates that a serious number of interventions should be planned.
Figure 2: Result Analysis of the first criterion, “Productivity”
Figure 3: Result Analysis of the second criterion, “Social Characteristics”
Figure 4: Result Analysis of the third criterion, “Technical Characteristics”
Figure 5: Result Analysis of the fourth criterion, “Environmental and Geographical Characteristics”
Figure 6: Total priorities for the seven Regional Units
The results for the last criterion, the Environmental and Geographical Characteristics, reveal that the two regional units showing the most deficiencies in this sector are Kilkis and Thessaloniki, whereas Pieria and Imathia perform better. The large amount of waste produced, in parallel with the lack of an appropriate solid waste management system and the high pollution indicators in the coastal zones, are the main handicaps of the Kilkis and Thessaloniki units. The priorities vector formed is W = (0.14277, 0.14300, 0.14291, 0.14289, 0.14284, 0.14272, 0.14288). The final results and overall priority estimations show that the regional unit of Chalkidiki proved to be the one with the most deficiencies, whereas Kilkis proved the most dominant. This is mainly owing to the moderate and sometimes insufficient productivity rates in Chalkidiki. The picture worsens when considering the low rate of economically active population and the increasing unemployment rates. Chalkidiki presents a low population density which, together with the poor characteristics of the road network, increases travel costs for both citizens and the provided services. Moreover, a lack of R&D centres is noticeable, along with the existence of very few manufacturing enterprises. Income mostly comes from the primary sector. Generally, many actions need to be taken in order to strengthen this particular regional unit in many sectors, such as those analyzed through the hierarchy model.
On the other hand, the regional unit of Kilkis has proved the most rapidly developing over the last seven years. The reason is readily explained geographically: Kilkis, on the one hand, is next to the urban centre of Central Macedonia, the regional unit of Thessaloniki, and on the other hand possesses many advantages that Thessaloniki could not maintain. Kilkis has a better and much more developed industrial area, which is really productive as it utilizes all three production sectors. Kilkis also combines and takes advantage of all the benefits found in the area's geomorphology, climate and economy. The fact that the regional unit of Thessaloniki is congested leads Kilkis to a beneficial place among the other regional units, as it is in a position that allows it to attract these activities and operate them for its benefit.
5. CONCLUSIONS
The AHP method, born of the need for multicriteria analysis, has proved one of the most efficient tools for decision-making support. The purpose of AHP is to compare pairs of alternatives in order to arrive at an evaluation that can resolve multidimensional problems. However, when the pairwise judgements are vague, fuzzy logic, through Fuzzy AHP, fills the gap. In this paper, in order to evaluate the regional units of Central Macedonia, Fuzzy Extended AHP was used, after a four-level hierarchy model was formed by the decision makers. The results showed that there were many differences among the seven units of the Region and that the deficiencies of each unit were found in different sectors. In Pieria the productivity sector could be improved, in Chalkidiki many social characteristics need to be taken into consideration, whereas in Pella the technical characteristics of the unit need improvement. In Kilkis many environmental problems need to be resolved. Generally, most deficiencies are found in the regional unit of Chalkidiki, whereas the regional unit of Kilkis proved the most dominant.
Saharidis G., Kolomvos G., Liberopoulos G. | Different Formulations and Benders Decomposition on TSP
Abstract
In this work we consider the traveling salesman problem in a connected graph. We apply seven different formulations
and we compare the results. We also apply Benders decomposition and we observe its behaviour regarding solution
time. We conclude that Benders decomposition is not faster than the known classical formulations, and we discuss possible reasons behind this.
KEYWORDS
VRP; TSP; classical formulations; Benders decomposition
1. INTRODUCTION
Let $G=(V,A)$ be a graph, where $V$ is a set of $n$ vertices and $A$ is a set of arcs or edges. Let $C=(c_{ij})$ be a cost matrix associated with $A$. Note that we call the first vertex $v_1$. Edges connect vertices such that edge $(i,j)$ connects the vertices $i$ and $j$. We denote by $x_{ij}$ the binary variable which takes the value 1 if the edge connecting $i$ and $j$ is included in the Hamiltonian cycle and 0 if not; $x$ is the vector containing the values $x_{ij}$, and $c_{ij}$ is the cost associated with edge $(i,j)$. The formulation of the TSP without the subtour elimination constraints (SECs) is equivalent to the assignment problem (AP) and is presented right below:

$$\min \sum_{i \in V} \sum_{j \in V} c_{ij} x_{ij}$$

subject to:

$$\sum_{i \in V} x_{ij} = 1, \quad \forall j \in V \qquad (1)$$
$$\sum_{j \in V} x_{ij} = 1, \quad \forall i \in V \qquad (2)$$
$$x_{ij} \in \{0,1\}, \quad \forall i,j \in V \qquad (3)$$

The SECs can be represented in various ways, as will be explained in the sequel.
Solution algorithms for the TSP are divided in the literature into exact methods and heuristics. Heuristics can also be combined with exact solution methods, yielding efficient hybrid schemes. Most modern algorithms able to tackle large instances of the TSP employ heuristics in some of the solution phases.
For a review of approaches to solving the TSP before 1992, the reader is referred to the comprehensive work of Laporte (1992). A more recent review, with developments and an updated set of modern application areas, is included in Bektas (2006) and Saharidis (2014). Both exact and heuristic approaches have been employed to tackle the TSP. In the case of exact methods, the problem is formally modelled as an integer programming problem and related techniques are applied. In the case of heuristics, a formal representation using standard mathematical programming notation is not required. It is very common in mathematical programming to associate a problem with an easier or more popular one. There are thus two perspectives from which to consider the TSP, pertaining to the associated underlying problem.
The first perspective is to view it as an assignment problem, where each vertex is assigned a descendant, coupled with a set of constraints ensuring the elimination of subtours. Taking the latter into consideration turns the problem from trivial to intractable. The modelling approaches focus on an elegant and economical formulation of the subtour elimination constraints. The work of Dantzig et al. (1954) constituted the first approach to modelling these constraints. The authors observe that if there were a subtour on a subset S of vertices, then this subtour would contain exactly |S| arcs and as many vertices. This observation is turned into a constraint forcing every such subset S to contain no more than |S| - 1 arcs.
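Written out, this observation yields the standard DFJ subtour elimination constraints:

$$\sum_{i \in S} \sum_{j \in S} x_{ij} \le |S| - 1, \qquad \forall\, S \subset V,\ 2 \le |S| \le n - 1.$$

Since there is one constraint for every proper subset $S$ of $V$, the number of SECs grows exponentially with $n$, which explains the limits observed for the DFJ formulation later on.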
Other such formulations emerged in the following decades, inspired by the seminal work of Dantzig et al. (1954). In Miller et al. (1960), the number of constraints is reduced significantly at the expense of additional variables. Other formulations, called flow-based and time-staged, are presented later in this paper.
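As a concrete illustration, the following minimal sketch states the MTZ formulation with the PuLP modelling library and its bundled open-source solver; the library choice and the 5-city cost matrix are assumptions for demonstration (the paper's own experiments use C++ with CPLEX Concert Technology), not the authors' code.

```python
from itertools import product
import pulp

# Hypothetical symmetric 5-city cost matrix.
c = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
n = len(c)
V = range(n)

prob = pulp.LpProblem("TSP_MTZ", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (V, V), cat="Binary")
u = pulp.LpVariable.dicts("u", V, lowBound=0, upBound=n - 1)  # MTZ ordering variables

prob += pulp.lpSum(c[i][j] * x[i][j] for i, j in product(V, V) if i != j)
for i in V:                                    # assignment constraints (1)-(2)
    prob += pulp.lpSum(x[i][j] for j in V if j != i) == 1
    prob += pulp.lpSum(x[j][i] for j in V if j != i) == 1
for i, j in product(range(1, n), range(1, n)):  # MTZ SECs: u_i - u_j + n*x_ij <= n - 1
    if i != j:
        prob += u[i] - u[j] + n * x[i][j] <= n - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
tour = [(i, j) for i, j in product(V, V) if i != j and pulp.value(x[i][j]) > 0.5]
print(pulp.LpStatus[prob.status], tour)
```

Note how the MTZ constraints replace the exponentially many DFJ SECs with $O(n^2)$ constraints and $n$ extra continuous variables, exactly the trade-off described above.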
A second perspective is to view the TSP as a special case of the minimum 1-spanning tree (1-tree) problem. This analogy was nicely explored by Held & Karp (1969). The idea is to carefully create an objective function such that the resulting spanning tree, which provides a lower bound for the TSP, closely approximates it. The minimization of 1-spanning trees by default excludes subtours, so there is no reason to enforce any subtour elimination constraints. On the other hand, in a minimum spanning tree there may be nodes with degree greater than two, that is, for instance, a node with two descendant nodes, which is prohibited in the TSP.
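A minimal sketch of the 1-tree bound behind this perspective, assuming the networkx library and hypothetical edge weights: build a minimum spanning tree on the vertices other than vertex 1, then add the two cheapest edges incident to vertex 1; the total weight bounds the optimal tour from below.

```python
import networkx as nx

# Hypothetical 4-city edge weights.
w = {(1, 2): 10, (1, 3): 15, (1, 4): 20, (2, 3): 35, (2, 4): 25, (3, 4): 30}
G = nx.Graph()
G.add_weighted_edges_from((i, j, d) for (i, j), d in w.items())

H = G.copy()
H.remove_node(1)                        # MST on the remaining vertices
mst_cost = nx.minimum_spanning_tree(H).size(weight="weight")

# Add the two cheapest edges incident to vertex 1.
two_cheapest = sorted(d["weight"] for _, _, d in G.edges(1, data=True))[:2]
one_tree_bound = mst_cost + sum(two_cheapest)
print("1-tree lower bound on the optimal tour:", one_tree_bound)
```

For these particular weights the bound happens to coincide with the optimal tour length; in general it only bounds it from below, which is why Held and Karp adjust the objective iteratively.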
We tested and compared the above formulations and obtained the following results. We performed the experiments on a dual-core 2.2 GHz processor with 3 GB of usable memory. The code was written in C++ (Concert Technology) and the problems were solved with the IBM ILOG CPLEX 12.4 suite. Table 2 presents the outcome of these experiments.
We observe that the conventional DFJ formulation, historically the first one proposed, quickly shows its limits: we cannot afford to solve any problem larger than 17 cities, which in our case is too restrictive. Time-staged formulations also seem to attain their limits quickly. In the following we wish to seek the optimal solutions in shorter times, so we decide to test decomposition methods. The Benders decomposition method is the most popular and generic one; next we will try to customise it for our problem.
We applied Benders decomposition on all the formulations above and compared them to the modelling
and solution approach proposed in this paper. Table 3 presents the outcome of the experiments.
Table 3: Solution times for the formulations and their Benders counterparts

Test case | MTZ   | Benders on MTZ | SCF    | Benders on SCF | TCF   | Benders on TCF | TS1    | Benders on TS1
15        | 0.16  | 0.52           | 0.2    | 0.7            | 0.39  | 1.24           | 3.08   | 11.06
17        | 0.90  | 3.16           | 0.5    | 1.67           | 0.88  | 2.93           | 6.02   | 19.95
25        | 1.19  | 4.33           | 0.89   | 2.72           | 1.48  | 5.45           | 396.36 | 1504.57
31        | 23551 | 70912.37       | 2.75   | 10.37          | 18.19 | 67.61          | -      | 1853.95
43        | 26.46 | 101.26         | 5.22   | 16.49          | 5.09  | 19.57          | -      | 2451.54
50        | 33.91 | 110.95         | 15.28  | 52.76          | 40.81 | 128.43         | -      | 3432.15
93        | 94.23 | 367.19         | 316.48 | 1002.4         | 285   | 880.54         | -      | 7705.21
Benders decomposition was shown to be slower than the initial formulation it was applied to. Typically,
solution times are 2 to 3 times greater. We discuss possible reasons in the following section.
2.2. Discussion
Benders does not seem to perform well on any type of the instances considered, the reason being the large number of iterations to convergence, which essentially translates into bad-quality cuts. In the MTZ formulation, the cuts returned from slave to master are low-density cuts, that is, cuts in which a low number of variables appears compared to the number of variables appearing in the master problem. In the MTZ approach the density of the feasibility cuts is of the order of magnitude of $1/N$; consequently, when $N$ is large, the number of variables included in the cuts is significantly low. It is known (see, for instance, Saharidis & Ierapetritou, 2010; Saharidis et al., 2010) that in cases of low-density cuts there is substantial room for improvement in Benders decomposition.
Another reason for this poor behaviour is the tightness of the cuts. At every iteration of the algorithm, the following actions occur:
- The master passes its optimal solution to the slave
- If the slave is infeasible, it returns feasibility cuts to the master; otherwise, the solution communicated by the master is optimal.
In the case of Benders on the MTZ formulation, the slave problem reduces to a feasibility problem over the ordering variables $u_i$ for a fixed master solution $\bar{x}$:

$$u_i - u_j + N\bar{x}_{ij} \le N - 1, \quad \forall\, i, j \in V \setminus \{v_1\},\ i \ne j \qquad (4)$$

The feasibility cuts returned to the master have the following form:

$$\sum_{i,j} \bar{\lambda}_{ij}\,\big(N - 1 - N x_{ij}\big) \ge 0 \qquad (5)$$

The dual value $\bar{\lambda}_{ij}$ is non-zero only for those couples $(i,j)$ for which the corresponding constraint was activated, i.e. $\bar{x}_{ij} = 1$. Let us construct a small example and observe the form this cut takes:

Figure 1: Examples of two subtours

When solved, the slave problem assigns a non-zero dual value to those constraints containing $\bar{x}_{ij} = 1$ in the solution communicated by the master. For the subtour {3-4-5}, the feasibility cut returned by the slave to the master is the following:

$$3(N-1) - N\,(x_{34} + x_{45} + x_{53}) \ge 0, \quad \text{or equivalently} \quad x_{34} + x_{45} + x_{53} \le 2.$$

Essentially, this constraint instructs the master to exclude the subtour {3-4-5} from the next solution proposed to the slave. In other words, the slave cut does nothing more than inform the master about the subtour identified. This constraint is a simple SEC that could be added manually at each iteration, instead of having to solve a linear programme to obtain it. This idea needs to be exploited further.
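A minimal sketch of that idea: rather than solving the slave LP, decompose the master's assignment solution (a successor map, hypothetical here and chosen to mirror the two subtours of Figure 1) into its cycles and emit one SEC per proper subtour.

```python
def find_subtours(succ):
    """Decompose an assignment solution (successor map) into its cycles."""
    unvisited, cycles = set(succ), []
    while unvisited:
        start = unvisited.pop()
        cycle, node = [start], succ[start]
        while node != start:
            unvisited.remove(node)
            cycle.append(node)
            node = succ[node]
        cycles.append(cycle)
    return cycles

# Hypothetical master solution with the subtours {1-2} and {3-4-5}:
succ = {1: 2, 2: 1, 3: 4, 4: 5, 5: 3}
for S in find_subtours(succ):
    if len(S) < len(succ):              # a proper subtour -> one SEC
        arcs = [(S[k], S[(k + 1) % len(S)]) for k in range(len(S))]
        print("add cut: sum of x[i][j] over", arcs, "<=", len(S) - 1)
```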
3. CONCLUSIONS
In this paper we tested seven well-known exact formulations for the TSP and compared the results among them. The MTZ, SCF and TCF formulations proved to be the most powerful implementations.
We also tested Benders decomposition on these formulations and observed that its performance was poor. The master problem consisted of the assignment problem, while the slave problem varied across formulations. Regardless of the formulation, the cuts returned from the slave to the master were of low quality. We focused on the MTZ formulation and suggested reasons for this poor behaviour. One reason was attributed to the density of the cuts, which appears to be considerably low. The other reason related to the tightness of the cuts, which appear to be loose, and to the time required to obtain a simple SEC by solving the slave problem.
A future direction of this work will be to propose and implement ways to remedy the above two obstacles. The application of covering cut bundle generation and maximum-density cuts is being considered to tackle the issue of low-density cuts. Progressively appending a tighter type of SECs to the master, simply by inspecting the (master) incumbent solution, is being considered to tackle both the issue of tightness and the solution time of the whole algorithm.
ACKNOWLEDGEMENT
The authors gratefully acknowledge financial support from the European Commission under grant FP7-PEOPLE-2011-CIG, GreenRoute, 293753, and grant EnvRouting SH3_(1234) of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology, Greece), co-financed by the European Social Fund (ESF) and the Greek State.
REFERENCES
Benders, J.F. (1962). Partitioning Procedures for Solving Mixed-Variables Programming Problems. Numerische Mathematik, Vol. 4, pp. 238-252.
Bektas T. (2006). The multiple traveling salesman problem: an overview of formulations and solution procedures.
Omega, Vol. 34, No. 3, pp. 209-219.
Dantzig, G.B., Fulkerson, D.R. and Johnson, S.M. (1954). Solution of a large scale traveling-salesman problem.
Operations Research, Vol. 2, pp. 393-410.
Finke, G., Clauss, A. and Gunn, E. (1984). A Two-Commodity Network Flow Approach to the Traveling Salesman Problem. Congressus Numerantium, Vol. 41, pp. 167-178.
Fox, K., Gavish, B., Graves, S. (1980). An n-constraint formulation of the (time-dependent) travelling salesman problem.
Operations Research, Vol. 28, pp. 1018-1021.
Gavish, B. and Graves, S.C. (1978). The travelling salesman problem and related problems. Operations Research Center,
MIT, Cambridge, MA. Working Paper OR-078-78.
Held, M. and Karp, R.M. (1969). The traveling-salesman problem and minimum spanning trees. New York: IBM Systems Research Institute.
Laporte, G. (1992). The Traveling Salesman Problem: An overview of exact and approximate algorithms. European Journal of Operational Research, Vol. 59, pp. 231-247.
Miller, C.E., Tucker, A.W. and Zemlin, R.A. (1960). Integer programming formulations and traveling salesman problems.
Journal of the Association for Computing Machinery, Vol. 7, pp. 326-329.
Saharidis, G., & Ierapetritou, M. (2010). Improving Benders decomposition using maximum feasible subsystem (MFS) cut
generation strategy. Computers & Chemical Engineering, 34(8), 1237-1245.
Saharidis, G., Minoux, M., & Ierapetritou, M. (2010). Accelerating Benders method using covering cut bundle generation.
International Transaction in Operational Research, 17(2), 221-237.
Saharidis G.K.D. (2014). Review of solution approaches for the symmetric traveling salesman problem. International
Journal of Information Systems and Supply Chain Management. 2014, to appear.
Wong, R. (1980). Integer programming formulations of the travelling salesman problem. In Proceedings of the IEEE Conf. on Circuits and Computers, pp. 149-152.
Mourmouris J.C., Nikolaidis K. | New Technologies & Labour Market
Abstract
Working relationships are undergoing major changes today. New forms of employment such as telecommuting, leased workers, part-time employment and the fourth shift give another dimension to the workplace. Telecommuting is any form of work that involves electronic data processing and the use of media for multiple/cross communication, so that employees can produce the work they were asked to do in a location outside the premises where the business is located. There are alternative names for telecommuting in the relevant bibliography, such as teleworking at home or distance working.
KEYWORDS
TELECOMMUNICATION, TELEWORKING, Labour Market
1. INTRODUCTION
What led to the development of telecommuting? The question is answered by the development of IT and telecommunications. New forms of communication open new possibilities in computing via high-speed data transmission, with VDSL at speeds of 50 Mbps. At such speeds we can have perfect HD image and high-quality sound, so the potential communication weaknesses of the past have been overcome. The globalization of the economy is another factor that has led to the development of telecommuting. In our day and time, the economy and consequently firms operate globally, with the result that workers face a flexibility issue. Both businesses and the forms of work have changed. Regarding the new forms of telework that have appeared nowadays, the international literature identifies the following types:
● Home Based Teleworking: Teleworking is home-based (exclusively or on a regular basis). An area of the house is converted into an office with the proper equipment (computer, telephone, modem, fax and stationery).
● Satellite Centers: These centers are used by the employees of the same organization and are located in remote areas near the homes of the workers.
● Telework Centers: Well-organized spaces with access to telecommunications and electronic equipment, in the form of offices used by the employees of different companies, by employees of the same company who belong to different fields of work, or even by self-employed workers under some basic lease.
● Televillages: The modern form of the telecottage. Entire villages are equipped with the appropriate technological apparatus, their houses "wired" so as to communicate with each other and with other villages.
● Teamwork from a distance: Examples include telemedicine, tele-education, e-commerce and research from a distance.
2. SECTION I
The employer-employee working relationship may be based on a contract of dependent employment, where the former has at his disposal the entire labour power of the worker, who is considered an employee. Under any other type of contract the worker is self-employed. Based on the above, we have three types of employment relations.
Full-time employment is performed at home and concerns one employer; by this term we mean that the work is done entirely at home and is not tied to working hours.
In part-time employment, work is carried out partly at home and the rest at the employer's premises.
The self-employed type is more flexible, as the employee works at home for more than one employer. In this case the worker has his own working model and determines his employment relationship.
Another division regarding the forms of teleworking relates to the use of IT and whether the worker needs to be online or offline. In the case of online work, the worker is connected to the company and has little freedom in the time and pace of his work, meaning that he should abide by the company's actual working hours. In offline work the employee has greater freedom and flexibility in managing his work, since he can connect to the company's network only when necessary; he can thus manage his time the way he thinks best. Whether the contact is online or offline is also a very important factor in the telecommuting process, and both styles have positive and negative effects on it.
In online communication, the worker depends directly on his understanding of presentations and discussions as they are conducted, and on whether he takes good notes or has a good memory. In the same setting, the contributions of the project leader or keynote speaker, as well as those of the participants, are almost spontaneous.
On the other hand, in offline communication the distance worker has more time to think about his contribution and less pressure to respond immediately. Which form of communication is most appropriate depends on the activities it will support. For example, offline communication is better suited to file transfer, information retrieval, etc., while real-time communication is very useful for the discussion of specific implementation issues or employment problems. Thus, depending on the case, both forms of communication may be used by an entity that implements telecommuting procedures.
Many different disciplines and fields are now ripe for the implementation of flexible working arrangements. The general factors that can serve as criteria for diversification, and as broad axes of direction for introducing teleworking schemes into business operations, are work without personal contact, task management through results, and tasks related to the management and electronic processing of data. These factors effectively cover the organizational and physical side of work, but work as a social institution has a social dimension too. So when we analyze key factors for conducting business through flexible working arrangements, we should also include in the analysis those factors which are directly related to social interaction (such as sales and insurance).
In general, the global literature indicates that the characteristics that make a job suitable for integration into telecommuting schemes are:
1. The ability to be handled without constant personal contact and interaction with other people.
2. The ability to organize the necessary social contacts on a periodic basis. Work that currently requires daily meetings can be reorganised with the aim of integrating partial work at home into the working pattern.
3. The ability to be managed through results, or by agreeing to meet specific objectives in a given time. Experience has shown that teleworkers need short-term goals if they are to work effectively.
4. The possibility of remote access (either electronically or by telephone conference) via a remote device/PC, or via a permanent connection to a specific database, in cases where access to information is needed on a daily basis.
5. The possibility of delivering the output electronically, "hand in hand" or via courier, in cases where a job depends directly on the time of delivery of the products.
On the basis of the above factors, the key sectors that have already adopted some forms of flexible working procedures in Greece, and are expected to continue such practices, can be summarized as:
● Distance education
● Telemedicine
● Marketing & advertising
At this point, it must be stressed that teleworking should not be considered the ideal working method for all occupations. Occupations having personal contact or manual labour as a prerequisite cannot implement telework.
As far as enterprises are concerned, the main advantage of telecommuting is the increase in productivity. This increase is mainly due to fewer idle periods during work and greater concentration of workers, increased motivation and job satisfaction, and greater commitment to work in the absence of the lost time and hassle of commuting.
Arguably, the shift to teleworking and its systematic use creates and projects the image of an innovative and advanced company which streamlines the organization of work, benefits from the information society and adjusts to modern developments.
The greatest and most immediate benefit is the gain from the reduction of operating costs. We save on salaries and travel costs, and we limit the need for premises; the fewer the buildings, the lower the maintenance costs.
With this new form of work, the company has more flexibility in the rational management of staff. The term "office" as a fixed spatial point ceases to exist. The company is no longer defined by the offices it occupies, but as a network of relationships connected through telecommunication networks. In this way, the opportunity for access to the labour market is extended even to geographically remote areas.
The employee's desire for greater self-determination and control of time leads to the adoption of flexible forms of work. The possibility that telecommuting offers workers to avoid unnecessary travel, and the freedom from having to communicate with colleagues within the narrow confines of an office and a fixed timetable, make telework very attractive to a large number of workers.
Another important problem is that companies have not integrated their information infrastructure, owing to the large financial cost of the initial installation.
The employer cannot control and supervise employees, due to the fact that they are not constantly at the workplace.
At this point a significant question arises: can companies continue to require commitment from employees while they themselves do not commit? I believe the answer to this question is not easy, especially nowadays when Europe faces massive unemployment, plaguing mostly young people aged up to 35. Let us not forget that teleworking and the new forms of work under discussion refer mainly to the younger age groups of workers.
The research we are conducting ends with some suggestions that, I think, will improve the employment framework of telework and of the new forms of work related to telecommunications and computing. These proposals do not necessarily cost money, but they are important for improving and institutionalizing the necessary and sufficient conditions for the proper implementation of telework.
Indicatively, I mention the implementation of teleworking in public administration, as well as the education of young people in flexible types of work even at school, for example in the subject of professional orientation.
Of course, there are some suggestions related to businesses. The field of teleworking is new, so there is considerable margin for optimization, as well as for proposals that will contribute to the better implementation of new technologies: for instance, the application of pilot programs and the study of all matters related to human resources and new technologies.
Another point on which our proposals in the field of telework focus has to do with the employee. It is equally important to make suggestions and take steps in industry associations aiming at the collective representation of workers. Teleworkers are entitled to claim the same rights as employees who work on the premises.
From the above table we see that Greece is close to the E.U. average, and that the Nordic countries have greater telework penetration than the countries of the South. The thesis will then study the behaviour of employees in relation to the evolution of telecommunications and the development of the internet from 1 Mbps up to 100 Mbps. We will study how labour cost is affected by internet speed and the possible scenarios that arise from this study.
3. CONCLUSIONS
In this paper we propose a methodology that may be useful in improving the current framework, as an additional tool in the telecommunications sector. At this point we can see the resulting problems of telework: problems with the educational system and with the level of penetration and use of the technologies. There are also many open questions about the social context of work which are not sufficiently circumscribed. Another important problem is that companies have not integrated their information infrastructure, owing to the large financial cost of the initial installation, and the employer cannot control and supervise employees who are not constantly at the workplace.
This thesis will contribute to scientific research by approaching the following questions:
Is there potential for improving the way telecommunications are used?
How can labour be substituted by telecommunications?
Is there clear evidence that the resource studied in this research will reduce the overall cost of labour?
REFERENCES
(1) Ali, M.S. (2002). Information resource centre : mainstream for the flow of information for lifelong learning.
Paper presented at the XV annual conference of the Asian Association of Open Universities (AAOU), 21G23 February
2002, New Delhi, India.
(2) Anastasiades P.A. (2003). ‘Distance learning in elementary schools in Cyprus: the evaluation
methodology and results’. Computers & Education, 40(1), pp. 17G40.
(3) Bates, A.W. (1993). ‘Theory and practice in the use of technology in distance education’, in Keegan, D. (Ed.),
Theoretical Principles of Distance Education , London: Routledge, pp.213G233.
st
(4) Garrison, D. R. (2000). ‘Theoretical challenges for distance education in the 21 Century: A shift from structural
to transactional issues’. International Review of Research in Open and Distance Learning 1(1) (pp. 7G13),
(5) Dabholkar, P.A. (1994), "Technology-based service delivery: a classification scheme for developing marketing
strategies", in Swartz, T.A., Bowen, D.E., Brown, S.W.(Eds),Advances in Services Marketing and Management , JAI Press,
Greenwich, CT, Vol. Vol. 3 pp.241-71.
Vidalis M., Reklitis P., Vrysagotis V. | Modeling a Merge-in Network of Warehouse Facilities with Two Modes of Operation: Cross Docking and Traditional Warehousing
Abstract
The advent of the new warehouse technique of cross docking has created a new field of system modelling, that of warehouse stochastic modelling. In this framework, we deal in this paper with the analytical modelling of a dynamic supply system with two stages or echelons (supplier warehouses, distribution centre). The system has a merge-in structure. Further, there is no buffer in the system and the operation strategy is characterized as push. Our research methodology is the configuration of an analytical model representing the physical warehouse system described above and the development of a computational algorithm. As far as the model is concerned, we model the processing times for the two modes of material flow (cross docking material flow and traditional warehousing material flow) using the Coxian-2 phase-type distribution. The members of the warehouse system are functionally related, and the supplier warehouses are assumed saturated. We use a continuous-time Markov process with discrete state space to model the warehousing system. The steps in the development of the computational algorithm are the configuration of the state space and the transition matrix, the derivation of the steady-state probabilities, and the calculation of performance measures such as the average inventory (Work in Process, WIP) and the throughput, i.e. the rate at which entities exit the warehouse system. Further, through a number of numerical experiments, a warehouse manager can explore the behaviour of the performance measures (output variables) of the model in relation to a set of input variables that are fully controllable by the warehouse manager. In more detail, we analyze the behaviour of throughput in relation to the distribution centre processing rates and the fraction of cross docking orders. Moreover, the impact of the number of suppliers and of the distribution centre mean processing rates on the WIP is evaluated. Last but not least, the impact of the number of suppliers and of the distribution centre mean processing rates on the fill rate is examined. The above methodology offers warehousing system administrators a research tool for exploring system characteristics, useful conclusions for warehouse operations and new ways of optimizing the warehousing system, which constitutes the contribution of our paper.
KEYWORDS
Warehousing, cross-docking, merge-in, network, facility, flow
Based on the model characteristics, we present the relevant papers. Concerning stochastic models for supply and production networks, such models have been proposed for the uncertainties of dynamic supply chain systems. For example, stochastic models for demand uncertainty have been proposed by Gupta and Maranas [1], Nagar and Jain [2], Bernstein and DeCroix [3], and Bogataj and Horvsat [4]. Other researchers propose Markov chains to model supply chains. Pyke and Cohen [5] developed a Markov chain model of a three-level production-distribution system (a single station, a finished goods stockpile, a single retailer), in which a distribution-based methodology is used to reduce computational complexity. Nagar and Jain [2] developed a two-stage stochastic programming model, which is then extended to a multi-stage
stochastic programming model. They use a scenario approach to address supply chain planning in an uncertain demand environment. Researchers have also developed a sequential approach for obtaining the state distributions of the random variables that determine system performance. A Markovian analysis of production lines where the service times at each station follow the Coxian-2 distribution has been proposed by Vidalis and Papadopoulos [6]; their algorithm uses Coxian-2 service times to calculate the throughput rate of the production lines. Arts and Kiesmuller [7] state that, although situations with one buyer and several supply options have become increasingly common in modern supply networks, quantitative modeling approaches to analyze these supply networks are limited. Concerning cross-docking, the literature contains papers dealing with the cross docking facility layout and its optimization. In the paper of Gue [8] the effect of trailer scheduling on facility layout is analyzed, whereas in the paper of Hauser and Chung [9] a genetic algorithm is proposed for cross docking layout optimization; moreover, Sung and Song [10] examine an integrated service network design for cross docking warehouse facilities. We also find in the cross docking literature papers dealing with the role of the cross docking distribution technique in inventory management. In more detail, Waller et al. [11] dealt with the impact of cross docking on inventory management in decentralized supply chains. Last but not least, Heragu [12] classifies the material flows processed by a warehousing system into four types, which can be grouped into two main categories: cross docking and conventional warehousing.
In the first flow, the orders are first received, then passed through the facility, and finally arrive at the loading
docks in order to be shipped to the various destinations. In the second flow, batches of products are
received, stored for a period of time upon receipt, and finally, after that period of time, the
orders leave the warehouse. In the third type of flow, the orders are received and stored in the warehouse
premises, and before leaving the warehouse (shipping), value added activities take place, such as order
consolidation. In flow 4, which strongly resembles flow 1, the orders are received and, after being subjected to
value added activities, are moved to the loading gates.
From the above analysis of the types of material flows, two main points are derived:
1. Since flow 1 is similar to flow 4 and flow 2 is similar to flow 3, we can sort the four types of material
flows into two main categories: cross docking and conventional-traditional warehousing.
2. The flow of traditional warehousing consists of a cross docking phase and a warehousing phase.
As a matter of fact, the traditional warehousing flow has two phases: first the cross docking material
flow phase and second the storage phase.
Further, the system consists of a number of supplier warehouse facilities in the upstream echelon and one
distribution center in the downstream echelon.
2.2.1. Notation
In this paper the following notation has been adopted:
N : number of suppliers
K1,2 : number of phases of the Coxian phase type distribution
μ1 : mean cross docking processing rate for the distribution center
μ2 : mean traditional warehousing processing rate for the distribution center
d1 : fraction of shipments processed at the distribution centre according to the cross docking material flow
d2 : fraction of shipments processed at the distribution centre according to the traditional warehousing material flow
μN1 : mean cross docking processing rate for the supplier warehouse
μN2 : mean traditional warehousing processing rate for the supplier warehouse
dN1 : fraction of shipments processed at the supplier warehouse according to the cross docking material flow
dN2 : fraction of shipments processed at the supplier warehouse according to the traditional warehousing material flow
WIP : average inventory in the entire system
THR : average output rate of the system
The steps of the solution method for the calculation of the steady state probabilities of the system under
consideration are similar to those applied in the papers of Papadopoulos, 1989 [13]; Papadopoulos and
O'Kelly, 1989 [14]; Papadopoulos, Heavey and O'Kelly, 1989a, 1989b [15], [16]; Heavey, Papadopoulos and
Browne, 1993 [17]; Vidalis, 1998 [18]; and Vidalis and Papadopoulos, 2001 [19].
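The core of this computation is the numerical solution of the global balance equations πQ = 0 together with the normalization Σπ = 1, followed by the evaluation of the performance measures over the steady state vector. The following Python sketch illustrates this step; the toy three-state generator matrix and the per-state inventory and exit-rate vectors are hypothetical placeholders, not the full merge-in state space of the model.

import numpy as np

def steady_state(Q):
    # Solve pi @ Q = 0 with sum(pi) = 1 for an irreducible CTMC generator Q:
    # replace one balance equation with the normalization constraint.
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1, :], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Toy 3-state generator (rows sum to zero), standing in for the transition
# matrix built over the configured merge-in state space.
Q = np.array([[-0.7,  0.5,  0.2],
              [ 0.3, -0.8,  0.5],
              [ 0.4,  0.4, -0.8]])
pi = steady_state(Q)

# Hypothetical performance measures: WIP as expected inventory over states,
# THR as the probability-weighted rate at which entities exit the system.
inventory = np.array([0, 1, 2])
exit_rate = np.array([0.0, 0.5, 0.5])
WIP = pi @ inventory
THR = pi @ exit_rate
print(pi, WIP, THR)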
3.1.1. Throughput in Relation with the Distribution Center Mean Processing Rates μ1, μ2
Figure 1: Throughput in relation with μ1, μ2
From the graph, it is derived that as the processing rates of the traditional warehousing and cross docking
modes increase, throughput also increases.
3.1.2. Throughput in Relation with the Fraction of Cross Docking Flows (d1)
Figure 2: Throughput and fraction of cross docking flows (d1)
As shown on the diagram, throughput increases exponentially as the fraction of cross docking orders
processed by the distribution center increases.
3.1.3. WIP in Relation with the Number of Supplier Warehouses
From the figure, we point out that the mean inventory increases linearly as the number of supplier
warehouses increases. The larger the number of suppliers, the larger the inventory level kept in the
warehouse network.
3.1.4. WIP in Relation with the Distribution Center Mean Processing Rates μ1, μ2
The graph stresses that as the rates of the cross docking and traditional warehousing modes decrease, the
level of inventory kept increases. Inventory accumulates as the two modes of material flow slow down.
3.2. Validation
In addition to the analytical model, we developed a simulation in order to compare the values of the performance
measures of the two models. According to the table below, the two models have the same values for a set of
input variable values, validating the robustness and correctness of the analytical model.
Table: Validation of the analytical model against simulation (inputs: mean cross docking processing rate for
the DC, mean traditional warehousing processing rate for the DC, buffer; outputs: WIP and throughput for
each of the two models).
4. CONCLUSIONS
Concluding, in our paper we presented a merge-in warehouse facilities network. In order to examine its
operation, we modeled the system as a queuing network with two modes of operation, cross docking and
traditional warehousing, and applied the Coxian-2 phase type distribution. We also proceeded with the
configuration of the transition matrices and the calculation of the steady state probability vector. Last but
not least, we presented the behavior of the system characteristics (performance measures) in relation to a
variety of variables. For further research, we propose the development of a model with many warehouses,
buffers and many distribution centers.
REFERENCES
1. Gupta A. and Maranas C.D., 2003, "Managing demand uncertainty in supply chain planning", Computers and
Chemical Engineering, vol. 27, no. 8, pp. 1219-1227.
2. Nagar L. and Jain K., 2008, "Supply chain planning using multi stage stochastic programming", Supply Chain
Management: An International Journal, vol. 13, no. 3, pp. 251-256.
3. Bernstein F. and DeCroix G.A., 2006, "Inventory policies in a decentralized assembly system", Operations
Research, vol. 54, no. 2, pp. 324-336.
4. Bogataj L. and Horvsat L., 1996, "Stochastic considerations of Grubbstrom-Molinder model of MRP, input-output
and multiechelon inventory systems", International Journal of Production Economics, vol. 45, pp. 329-336.
5. Pyke D.E. and Cohen M.A., 1993, "Performance characteristics of stochastic integrated production-distribution
systems", European Journal of Operational Research, vol. 68, pp. 23-48.
6. Vidalis M.J. and Papadopoulos H.T., 1999, "Markovian analysis of production lines with Coxian-2 service times",
International Transactions in Operational Research, vol. 6, pp. 495-524.
7. Arts J.J. and Kiesmuller G.P., 2010, "Analysis of a two-echelon inventory system with two supply modes", BETA
working paper no. 339, Technische Universiteit Eindhoven, Eindhoven.
8. Gue K., 1999, "The effects of trailer scheduling on the layout of freight terminals", Transportation Science,
vol. 33, no. 4, pp. 419-428.
9. Hauser K. and Chung C.H., 2006, "Genetic algorithms for layout optimization in cross docking operations of a
manufacturing plant", International Journal of Production Research, vol. 44, no. 21.
10. Sung C.S. and Song S.H., 2003, "Integrated service network design for a cross docking supply chain network",
Journal of the Operational Research Society, vol. 54, pp. 1283-1295.
11. Waller M.A., Cassady C.R. and Ozment J., 2006, "Impact of cross docking on inventory in decentralized retail
supply chains", Transportation Research Part E, vol. 42, pp. 359-382.
12. Heragu S.S., 2008, Facilities Design, Taylor and Francis Group, Boca Raton.
13. Papadopoulos H.T., 1989, "Mathematical modelling of reliable production lines", PhD thesis in Industrial
Engineering/Operations Research, Department of Industrial Engineering, National University of Ireland,
Galway, Ireland.
14. Papadopoulos H.T. and O'Kelly M.E.J., 1989, "A recursive algorithm for generating the transition matrix of
multistation series production lines", Computers in Industry, vol. 12, pp. 227-240.
15. Papadopoulos H.T., Heavey C. and O'Kelly M.E.J., 1989, "Throughput rate of multistation reliable production
lines with interstation buffers (I): Exponential case", Computers in Industry, vol. 13, pp. 229-224.
16. Papadopoulos H.T., Heavey C. and O'Kelly M.E.J., 1989, "Throughput rate of multistation reliable production
lines with interstation buffers (II)", Computers in Industry, vol. 13, pp. 317-335.
17. Heavey C., Papadopoulos H.T. and Browne J., 1993, "Throughput rate of multistation unreliable production
lines", European Journal of Operational Research, vol. 68, pp. 69-89.
18. Vidalis M.I., 1998, "Performance evaluation and optimal buffer allocation in serial production lines", PhD
thesis in Operations Research, University of the Aegean, Samos.
19. Vidalis M.I. and Papadopoulos H.T., 2001, "A recursive algorithm for generating the transition matrix of
multistation multiserver exponential reliable queueing networks", Computers and Operations Research,
vol. 28, no. 9, pp. 853-883.
Pologiorgi I., Grigoroudis E. | A Multicriteria Approach for the Analysis of Customer Satisfaction According to the Kano Model
ABSTRACT
The quality of products/services is a key factor for the success of a business, and it is primarily determined by the
customers' needs and expectations. This justifies the necessity of identifying and analyzing customer needs and
preferences. The main aim of this study is to present a methodological approach in order to define different quality
levels and classify customer requirements. The approach is based on Multicriteria Decision Analysis and adopts the
principles of the Kano’s model. In particular, the main objective is the comparison between derived and stated
importance for the satisfaction criteria. Stated importance is defined as the straightforward customer preference for
the weight of a satisfaction criterion, while derived importance is estimated by a regression-type quantitative technique
using customer judgments for the performance of this set of criteria. Both stated and derived importance are estimated
using ordinal regression techniques and these results are comparatively examined through a dual importance diagram
that defines different quality levels in agreement with Kano’s approach and gives the ability to classify customer
requirements. The applicability of the proposed approach is illustrated by a real-world application in the mobile phone
industry. In particular, the results of the presented customer satisfaction survey are focused on the quality attributes of
smartphones. These results can give valuable information, since they may identify unspoken motivators or even
expected or cost of entry attributes. Using this approach, customer requirements are better understood, since the
product/service criteria that have the highest impact on customer satisfaction or dissatisfaction can be identified and
priorities for product development may be decided.
KEYWORDS
Kano Model, MUSA Method, WORT Method, Ordinal Regression
1. INTRODUCTION
Several approaches for measuring and analyzing customer satisfaction have been proposed; most of them
adopt a one-dimensional recognition of quality with limited explanatory power (Kano et al., 1984). The
one-dimensional view of quality can explain the role of certain quality attributes where both satisfaction and
dissatisfaction vary in accordance with performance. However, this approach cannot explain the role of
other quality attributes where customer satisfaction (or dissatisfaction) is not proportional to their
performance. In this case, fulfilling the individual product/service requirements does not necessarily imply a
high level of customer satisfaction (or the opposite i.e., dissatisfaction does not occur, although the
performance of a product/service attribute is relatively low).
The theory of attractive quality is inspired by Herzberg's (1956) work on job satisfaction, which posits that
the factors that cause job satisfaction are different from the factors that cause dissatisfaction. In this
context, Kano's model suggests distinguishing between customer satisfaction and dissatisfaction, taking into account
the degree of achievement. According to Kano (2001), quality attributes are dynamic and can change over
time. A successful attribute follows a life cycle from being indifferent, to attractive, to one-dimensional, to
must-be. Using the Kano model, customer requirements are better understood, since the criteria that
have the highest impact on customer satisfaction or dissatisfaction can be identified. Classifying customer
requirements into must-be, one-dimensional and attractive categories may be useful to identify priorities
for product development.
In addition, the Kano’s model may give insight into the relationship between the importance of quality
attributes and the customer requirements for these attributes. Customers may be communicating different
levels of importance in their explicit judgments of importance. In simple words, the theory of attractive
quality suggests that the importance of a quality attribute is not constant, but it is affected by the category
in which this attribute is assigned, as well as its performance level.
1. Must-be quality: These quality attributes are taken for granted when fulfilled but result in
dissatisfaction when not fulfilled. The customer expects these attributes, and thus views them as
basics. Customers are unlikely to tell the company about them when asked about quality
attributes; rather they assume that companies understand these fundamentals of product design
(Watson,2003).
2. One-dimensional quality: These attributes result in satisfaction when fulfilled and result in
dissatisfaction when not fulfilled. They are also referred to as the-more-the-better quality attributes
(Lee and Newcomb,1997). The one-dimensional attributes are usually spoken and they are those
with which companies compete.
3. Attractive quality: These quality attributes provide satisfaction when fully achieved but do not
cause dissatisfaction when not fulfilled. They are not normally expected by customers, and thus
they may be described as surprise and delight attributes. For this reason, these quality attributes
are often left unspoken by customers.
4. Indifferent quality: These attributes refer to aspects of a product that are neither good nor bad,
and thus, they cannot create satisfaction or dissatisfaction.
5. Reverse quality: This category is similar to the one-dimensional quality, but it refers to a high
degree of achievement resulting in dissatisfaction, and vice versa (i.e., a low degree of
achievement resulting in satisfaction). Thus they may be characterized as the-less-the-better
quality attributes.
In order to classify quality attributes into the five aforementioned dimensions, Kano et al. (1984) use a
specific questionnaire that contains pairs of customer requirement questions, that is, for each customer
requirement two questions are asked:
1. How do you feel if a given feature is present in the product (functional form of the question)?
2. How do you feel if that given feature is not present in the product (dysfunctional form of the
question)?
Using a predefined preference scale and the evaluation table of Figure 2, each customer requirement may
be classified into the five dimensions of the Kano model (Löfgren and Witell, 2008). The dimension
designated as questionable contains skeptical answers and is used for responses in which it is unclear
whether the respondent has understood the question. Finally, in order to decide on the classification of a
quality attribute, the proportion of respondents (statistical mode) that classifies a given attribute in a
certain category is used (i.e., the attribute is assigned to the category with the highest frequency
according to customer answers). Several variations of this classification procedure have been proposed,
referring mostly to alternative quality dimensions and evaluation scales. Löfgren and Witell (2008) present a
thorough review of these alternative approaches.
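As an illustration of this classification step, the following sketch assigns an attribute to its modal Kano category, assuming the standard five-point answer scale and the evaluation table of Figure 2; the sample responses are hypothetical.

from collections import Counter

# Kano evaluation table: category for each (functional, dysfunctional) answer
# pair on the Like / Expect / Neutral / Accept / Dislike scale.
ANSWERS = ["Like", "Expect", "Neutral", "Accept", "Dislike"]
TABLE = {
    "Like":    ["Q", "A", "A", "A", "O"],
    "Expect":  ["R", "I", "I", "I", "M"],
    "Neutral": ["R", "I", "I", "I", "M"],
    "Accept":  ["R", "I", "I", "I", "M"],
    "Dislike": ["R", "R", "R", "R", "Q"],
}

def classify(functional, dysfunctional):
    return TABLE[functional][ANSWERS.index(dysfunctional)]

def attribute_category(responses):
    # Assign the attribute to the modal category over all respondents.
    counts = Counter(classify(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

# Hypothetical answers of five respondents for one quality attribute.
responses = [("Like", "Dislike"), ("Like", "Dislike"), ("Like", "Dislike"),
             ("Expect", "Dislike"), ("Like", "Neutral")]
print(attribute_category(responses))  # 'O': one-dimensional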
However, the previous procedure does not take into account that quality attributes are in fact random
variables and customer responses form a probability distribution function on the main categories of the
Kano’s model. Thus, the statistical mode is not always a good indicator of central tendency. Furthermore,
different market segments usually have different needs and expectations, so sometimes it is not clear
whether a certain attribute can be assigned to a specific category. For this reason, several indices have been
proposed to aid the classification process of quality attributes (Löfgren and Witell,2008). A simple approach
is to calculate the average impact on satisfaction and dissatisfaction for each quality attribute. Berger et al.
(1993) introduced the Better and Worse averages, which indicate how strongly an attribute may influence
satisfaction or, in case of its non-fulfillment customer dissatisfaction:
where A, O, M, and I are the attractive, one-dimensional, must-be, and indifferent responses, respectively
(i.e., percentage of customers assigning a given attribute to a certain category).
Figure 2: The Kano evaluation table (rows: answer to the functional question; columns: answer to the
dysfunctional question; A = attractive, O = one-dimensional, M = must-be, I = indifferent, R = reverse,
Q = questionable)

           Like  Expect  Neutral  Accept  Dislike
Like        Q      A       A        A       O
Expect      R      I       I        I       M
Neutral     R      I       I        I       M
Accept      R      I       I        I       M
Dislike     R      R       R        R       Q
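A minimal sketch of this computation, using hypothetical category shares for a single attribute:

def better_worse(A, O, M, I):
    # Berger et al. (1993) indices from the category percentages.
    total = A + O + M + I
    better = (A + O) / total    # strength of the impact on satisfaction
    worse = -(O + M) / total    # strength of the impact on dissatisfaction
    return better, worse

print(better_worse(A=0.40, O=0.25, M=0.20, I=0.15))  # (0.65, -0.45)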
The pairs of Better and Worse averages can be plotted in a two-dimensional diagram representing the
impact of quality attributes on satisfaction or dissatisfaction (Figure 3), and thus a clearer view for the
classification of quality attributes may be obtained.
Interestingly, the derived importance obtained by a preference disaggregation model and the stated importance
given to each criterion by the customers are seldom the same. It is not unreasonable to say that customers tend
to rate every criterion as important when asked freely (Naumann and Giel, 1995). Because of this tendency,
researchers are often wary of self-explicated importance data, and derived importance data are considered
generally more reliable. Nevertheless, the
comparison between derived and stated importance can give valuable information. It enables a company to
identify what attributes the customers rate as important and see how these agree with truly important and
truly unimportant attributes. Moreover, it helps the company identify unspoken motivators or even
expected or cost of entry attributes. This approach also agrees with the principles of Kano’s approach for
defining different quality levels and may give the ability to classify customer requirements.
The required information is collected via a simple questionnaire in which the customers evaluate the
provided product/service, i.e., they are asked to express their judgments, namely their global satisfaction
and their satisfaction with regard to the set of discrete criteria. The MUSA method assesses global and
partial satisfaction functions $Y^*$ and $X_i^*$, respectively, given the customers' judgments $Y$ and $X_i$. It should be
noted that the method follows the principles of ordinal regression analysis under constraints using linear
programming techniques (Jacquet-Lagrèze and Siskos, 1982; Siskos and Yannacopoulos, 1985; Siskos, 1985).
The ordinal regression analysis equation has the following form:

$Y^* = \sum_{i=1}^{n} b_i X_i^*$

where the value functions $Y^*$ and $X_i^*$ are normalized in the interval [0, 100], and $b_i$ is the weight of criterion
$i$. It should be noted that the value/satisfaction functions $Y^*$ and $X_i^*$ are non-decreasing functions in the
ordinal scales $Y$ and $X_i$, respectively.
The MUSA method infers an additive collective value function $Y^*$ and a set of partial satisfaction functions
$X_i^*$ from the customers' judgments. The main objective of the method is to achieve the maximum consistency
between the value function $Y^*$ and the customers' judgments $Y$. Based on the modeling presented in the
previous section, and introducing a double-error variable, the ordinal regression equation becomes as follows:

$\tilde{Y}^* = \sum_{i=1}^{n} b_i X_i^* - \sigma^+ + \sigma^-$

where $\tilde{Y}^*$ is the estimation of the global value function $Y^*$, and $\sigma^+$ and $\sigma^-$ are the overestimation and the
underestimation error, respectively. The above formula holds for a customer who has expressed a set of
satisfaction judgments. For this reason, a pair of error variables should be assessed for each customer
separately.
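Written out in full, the estimation stage solves a linear program; the following is the standard form of the MUSA LP (after Grigoroudis and Siskos, 2002), with $w_{ik}$ and $z_m$ denoting the non-negative step increments of the partial and global value functions, $M$ customers, and $\alpha$, $\alpha_i$ the numbers of levels of the global and partial satisfaction scales:

$$
\begin{aligned}
\min\ & F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right) \\
\text{s.t.}\ & \sum_{i=1}^{n} \sum_{k=1}^{x_i^{j}-1} w_{ik} - \sum_{m=1}^{y^{j}-1} z_m - \sigma_j^{+} + \sigma_j^{-} = 0, \quad j = 1, \dots, M \\
& \sum_{m=1}^{\alpha-1} z_m = 100, \qquad \sum_{i=1}^{n} \sum_{k=1}^{\alpha_i-1} w_{ik} = 100 \\
& z_m \geq 0, \; w_{ik} \geq 0, \; \sigma_j^{+}, \sigma_j^{-} \geq 0 \quad \forall\, m, i, k, j
\end{aligned}
$$

where $x_i^j$ and $y^j$ are the satisfaction levels chosen by customer $j$ for criterion $i$ and globally, respectively.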
The satisfaction criteria weights represent the relative importance of the assessed satisfaction dimensions,
given that b1+b2+…+bn=1. Thus, the decision of whether a satisfaction dimension is considered as important
by customers is also based on the number of assessed criteria. The properties of the weights are also
determined in the context of multicriteria analysis, and it should be noted that the weights are basically
value tradeoffs among the criteria, as presented in the previous sections.
Thus, the evaluation of the preference importance classes $C_l$ is similar to the estimation of the thresholds $T_l$. An
ordinal regression approach may also be used in order to develop the weights estimation model. Using the
notation of the MUSA method, assume that $b_{ij}$ is the preference of customer $j$ about the importance of
criterion $i$. Then, the following cases exist (Grigoroudis and Spiridaki, 2003):
In the previous formulas, it should be noted that $S_{ij}^+$ and $S_{ij}^-$ are the overestimation and underestimation
errors, respectively, for the $j$-th customer and the $i$-th criterion. Also, $\delta$ is a small positive number, which is used
in order to avoid cases where $b_{ij} = T_l \; \forall\, l$, while the criteria weights are considered through the following
expression:
Furthermore, a minimum value may be assumed for the thresholds $T_l$ in order to increase the discrimination of
the importance classes. Thus, the following conditions occur:
where $\lambda$ is a positive number with $\lambda \leq 100/n$, since the maximum value that $\lambda$ may take cannot exceed the
criteria weights (if they are all of equal importance). The final model for the estimation of the weights may be
formulated through the following LP (Grigoroudis and Spiridaki, 2003):
Similarly to the MUSA method, a post-optimality analysis should be considered, where the following LP is
formulated and solved:
where $\Phi^*$ is the optimal value of the objective function of LP (1), and $\varepsilon$ is a small percentage of $\Phi^*$; the
average of the optimal solutions of the previous LP (1) is taken as the representative final solution for the
model variables $w_{ik}$.
6. PROPOSED METHODOLOGY
An alternative approach for the classification of the quality attributes is presented now, using a dual
importance diagram, which combines the derived and stated importance of quality attributes (i.e., the
weights of attributes as estimated by a regression-type model and straightforwardly expressed by
customers, respectively). The presented methodology consists of the following main steps (Grigoroudis and
Spiridaki,2003):
1. In the first step, performance and importance data are collected using a simple questionnaire. In
particular, customers are asked about their level of satisfaction/dissatisfaction from each criterion,
while at the same time, they are asked to express their level of importance for each criterion.
2. Based on the performance satisfaction judgments, derived importance is estimated using the
MUSA method. Moreover, the straightforward customer preferences for satisfaction criteria
weights are used in the model presented in this section in order to estimate stated importance.
3. In the last step, stated and derived importance results are comparatively examined through a dual
importance diagram that defines different quality levels in agreement with Kano’s approach and
gives the ability to classify customer requirements.
Quadrants (i) and (ii) include the dimensions that are truly important to the customers. These are the main
characteristics that management and production should focus on. Quadrants (i) and (iv) include the
important dimensions according to the customers’ free statement. These are the dimensions that
marketing should focus on. When a characteristic appears in quadrant (i) or (iii) there is an agreement
between derived and stated importance. On the other hand, in quadrants (ii) or (iv) there is a disagreement
between the stated and derived importance. This disagreement is an indication that these dimensions
require further analysis. According to Lowenstein (1995), the dual importance diagram may be linked with
the Kano model and its three basic categories of product/service requirements:
1. Quadrants (i) and (iii) correspond to the characteristics that are truly important or truly
unimportant for the customers (one-dimensional characteristics). Both the model and the
customers agree on them giving the company a more valid view and a better-grounded direction.
2. Quadrant (ii) includes the characteristics that the MUSA method evaluates as being very
important, while the customers judge them as less important when asked directly.
These characteristics are called unspoken motivators and represent dimensions to which the
company should pay attention. They may affect (positively or negatively) future clientele, although
the customers consider them of low importance.
3. Finally, quadrant (iv) includes the characteristics that the model estimates as less important, while
the customers rate them as very important. These usually include expected or cost-of-entry
services (e.g., service/product guarantees). A company should keep such characteristics at a level at
least as high as that of its competitors in order to keep its clientele, or offer extra, unexpected services
to gain a competitive advantage. A simple classification rule of this form is sketched below.
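A compact way to express this mapping is a rule over the two normalized weights. In the following sketch, the cut-off values and the example criteria weights are hypothetical choices (a natural cut-off is, e.g., the mean importance over all criteria):

def quadrant(derived, stated, d_cut, s_cut):
    # Place an attribute in the dual importance diagram:
    # (i) both high, (ii) derived high / stated low (unspoken motivators),
    # (iii) both low, (iv) derived low / stated high (expected attributes).
    if derived >= d_cut:
        return "(i) one-dimensional" if stated >= s_cut else "(ii) attractive"
    return "(iv) must-be" if stated >= s_cut else "(iii) one-dimensional"

# Hypothetical normalized derived/stated weights for a few smartphone criteria.
attrs = {"Battery": (0.12, 0.04), "Price": (0.05, 0.13), "Radio": (0.02, 0.02)}
for name, (der, sta) in attrs.items():
    print(name, quadrant(der, sta, d_cut=0.06, s_cut=0.06))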
The dimensions and criteria of all three questionnaires were: Display (size, resolution), Memory-
Processor (Memory size, Memory extension, Processor speed, Number of processor cores), Connectivity
(Wi/Fi, Mobile Internet, Bluetooth, NFC, Connection with PC), Accessories (Quantity and quality of
accessories – USB cable, stereo handsets, charger – Purchase extra accessories – car kit), Camera (Photo
resolution, Video resolution, Extra camera functions – front camera, panoramic record), Battery (Battery
duration), Extras (Dual SIM, Radio, GPS, Color options), Operating system and
applications (Operating system version, Upgrade, Number of applications – pre-installed or free download,
Purchase extra applications), Dimensions (Weight), Price (Price), Warranty (Years of warranty). The
questionnaires were collected through direct distribution and through the Internet (social media and
forums) for the period January-February 2013. The answers for each of the above criteria were used in the
Satisfaction Analysis (MUSA method), the Importance Analysis (WORT model) and the Kano Analysis.
As for the WORT model, through the importance questions we had 5 importance classes (C1, C2, C3, C4, C5)
and 4 thresholds (T1, T2, T3, T4). After solving the WORT model, using post-optimality analysis, 28
linear problems were solved, as many as the satisfaction criteria, each maximizing the weight $b_i$ of one
criterion. Several λ values were tested under the threshold constraints, according to the stability analysis at
the post-optimality phase, and finally the value λ=0.01 was selected. Also δ, which is used in order to avoid
cases where $b_i = T_l$, was set to δ=0.001. Finally, Φ*, the optimal value of the objective function of the
LP, was calculated as Φ*=1883, and ε=0.1. The final weight for each criterion was calculated as the average
of the weights after each post-optimization.
The criteria weights were calculated with the MUSA method through the MUSA software, and with the WORT
method through the LINGO software. After the weights' normalization for both MUSA and WORT, stated
importance is plotted in relation to derived importance in the following scheme (Figure 5).
Quality attributes located in Quadrants (i) and (iii) show accordance between derived and stated
importance, where both are relatively high (quadrant i) or low (quadrant iii). Quality attributes located in
Quadrant (ii) show discordance between derived and stated importance, with high derived and low stated
importance. Quality attributes located in Quadrant (iv) show discordance, with low derived and high stated
importance. As for the linkage between the dual importance grid and the Kano model categories, the results
are as follows: quality attributes located in Quadrants (i) and (iii) are one-dimensional (truly important
customer needs); quality attributes located in Quadrant (ii) are attractive (unspoken motivators); quality
attributes located in Quadrant (iv) are expected (must-be customer needs). In our case:
Many real-world applications have shown that the customer-stated importance of quality attributes is often
different from the importance produced by a mathematical model. Because of the tendency of
customers to rate almost everything as important when asked freely, researchers are often wary of
self-explicated importance data, and derived importance data are considered generally more reliable.
Nevertheless, the comparison between derived and stated importance can give valuable information.
The WORT model gives the opportunity to compare derived and stated importance and introduces the Kano
model principles into the MUSA method. The results can be organized in perceptual maps that present
schematically the derived and stated importance and help the development of improvement strategies, in
accordance with the Kano approach for the determination of different quality categories. Also, in the
present study, we explored the combination of the WORT model and the MUSA method, in order to find a
model that includes both the performance and the importance of the quality attributes.
Although the proposed approach addresses important problems related to importance estimation, future
research shows important perspectives. Firstly, there is a need to model the importance evaluation problem
using fuzzy set theory and/or statistical analysis models that can handle qualitative data (e.g., multivariate
conditional probability models and multidimensional scaling). Also, in the case of hierarchical satisfaction
data, it is possible to develop a set of fuzzy logic rules and specialized models, such as Non Structural Fuzzy
Decision Support (NSFDS). It is also possible to apply the proposed WORT model to various business
organizations with different characteristics. Furthermore, future work can explore alternative objective
functions and optimization techniques for the examined linear programs. In the present study we tried to
investigate the appropriate parameter values; a detailed investigation of the selection of appropriate values
for the parameters of the presented approach can be an important object for future study.
REFERENCES
1. Berger C., Blauth R. and Boger D. (1993). Kano's methods for understanding customer-defined quality.
2. Bharadwaj S. and Menon A. (1997). Discussion in applying the Kano methodology to meet customer requirements:
NASA's microgravity science program.
3. Center for Quality and Management Journal (1993). A special issue on Kano's methods for understanding
customer-defined quality.
4. Grigoroudis E. and Siskos Y. (2010). Customer satisfaction evaluation: Methods for measuring and implementing
service quality.
5. Grigoroudis E., Samaras A., Matsatsinis N.F. and Siskos Y. (1999). Preference and customer satisfaction analysis: An
integrated multicriteria decision aid approach.
6. Grigoroudis E., Malandrakis J., Politis J. and Siskos Y. (1999). Customer satisfaction measurement: An application to
the Greek sector.
7. Grigoroudis E. and Siskos Y. (2002). Preference disaggregation for measuring and analyzing customer satisfaction:
The MUSA method.
9. Kano N., Seraku N., Takahashi F. and Tsuji S. (1984). Attractive quality and must-be quality.
10. Siskos Y. and Grigoroudis E. (2002). Measuring customer satisfaction for various services using multicriteria analysis.
Papavasileiou I., Marinakis V., Psarras J. | Towards Nearly Zero-Energy Buildings: A Case Study of a Public Building
ABSTRACT
Nowadays, the building sector is responsible for at least 40% of the final energy consumption at national and European
level. In Greece, 27.3% of the buildings are ranked at the lowest energy efficiency class H (most of them without
thermal insulation) and 96.3% of the buildings are ranked lower than the reference building (energy efficiency class B).
However, taking into consideration their untapped potential for cost-effective energy savings, the penetration of energy
efficiency technologies in the building sector could play an active role in the EU's efforts to develop a viable strategic
framework towards nearly zero-energy buildings. The main aim of this paper is the energy saving study of a nursery
school in the Municipality of Maroussi, Greece. The building was constructed in 2008 and is ranked at energy efficiency
class C. Based on the building's consumptions and requirements, a number of energy-saving measures were proposed.
These alternatives were technically analyzed and economically evaluated with the Net Present Value (NPV), the Internal
Rate of Return (IRR) and the Discounted Payback Period (DPB) criteria. It is estimated that the selected proposals will
improve the energy efficiency of the building, which will be ranked at energy efficiency class A+.
Key words
1. INTRODUCTION
Nowadays, buildings are responsible for at least 40% of energy use in most countries [1]. Energy security
and climate change are driving a future that must show a dramatic improvement in buildings’ energy
performance. At the European level, the main policy driver related to the energy use in buildings is the
Energy Performance of Buildings Directive (EPBD, 2002/91/EC). Implemented in 2002, the Directive was
recast in 2010 (EPBD recast, 2010/31/EU) with more ambitious provisions. Through the introduction of the
EPBD, requirements for certification, inspections, training and renovation are now imposed in Member
States, prior to which there were very few. Improving the energy performance of buildings is a key
factor in securing the transition to a ‘green’ resource efficient economy and to achieving the EU Climate &
Energy objectives, namely a 20% reduction in GHG emissions by 2020 and 20% energy savings by 2020 [2].
On 25 October 2012, the EU adopted the Directive 2012/27/EU on energy efficiency [3]. This Directive
establishes a common framework of measures for the promotion of energy efficiency within the Union in
order to ensure the achievement of the Union's 20% headline target on energy efficiency for 2020 and to pave
the way for further energy efficiency improvements beyond that date. The European Union has also
committed to 80-95 % GHG reduction by 2050 as part of its roadmap for moving to a competitive low-
carbon economy in 2050 [4]. The recent Directive 2010/31/EU promotes the improvement of buildings'
energy performance within the whole EU, with the ultimate goal of ensuring that all new buildings are
nearly zero-energy consumers by 2020 ("nearly zero-energy buildings") [5]. In the Energy Efficiency Plan 2011 [6], the
European Commission states that the greatest energy saving potential lies in buildings. The minimum
energy savings in buildings can generate a reduction of 60-80 Mtoe/a [7] in final energy consumption by
2020, and make a considerable contribution to the reduction of GHG emissions. The Ecodesign of
Energy-Related Products Framework Directive 2009/125/EC (recast of the Energy-Using Products Directive
2005/32/EC), the End-use Energy Efficiency and Energy Services Directive 2006/32/EC (ESD), as well as the
Energy Labelling Framework Directive 2010/30/EU (recast of Directive 92/75/EEC), aim to contribute
significantly to realizing the energy-saving potential of the European Union's buildings sector. This will be
achievable only if buildings are transformed through a comprehensive, rigorous and sustainable approach.
In this respect, the main aim of this paper is the energy saving study of a nursery school in the Municipality
of Maroussi, Greece. The building was constructed in 2008 and is ranked at energy efficiency class C. Based
on the buildings’ consumptions and requirements, a number of energy-savings measures were proposed.
These alternatives were technically analyzed and economically evaluated with the Net Present Value (NPV),
the Internal Rate of Return (IRR) and the Discounted Payback Period (DPB) criteria. It is estimated that the
selected proposals will improve the energy efficiency of the building, which will be ranked at energy
efficiency class A+.
Apart from the introduction, the paper is structured along four sections. The second section is devoted to
the presentation of the proposed approach of the energy saving study. The third section is devoted to the
pilot appraisal and results. Finally, in the last section the main points drawn from this paper are
summarized.
2. ADOPTED APPROACH
The general philosophy of the adopted approach is presented in Figure 1. More specifically, the proposed
framework includes the following steps:
Step 1: First of all, the identification of the building's general characteristics should be made, providing
the necessary information for the energy saving study.
Step 2: The second step includes the energy inspection of the building with the use of the relevant
software [8].
Step 3: From the energy inspection, the calculations that concern the building's thermal zone are obtained.
Step 4: The energy consumption of the whole building is calculated.
Step 5: According to the consumptions that have been estimated, energy saving measures are
proposed.
Step 6: Finally, these measures are techno-economically evaluated and some of them constitute the
final scenario of energy saving measures which will improve the energy efficiency towards nearly
zero-energy buildings.
3. PILOT APPRAISAL
It has to be mentioned that, of the above consumptions, the heating and cooling energy consumption
concern the whole building, while the lighting energy consumption represents the consumption of the lights
of the thermal zone, i.e., the lights of the ground floor and the first floor.
The annual electricity consumption of the building consists of the lighting energy consumption (thermal
zone, basement, outside spaces), the energy consumption of appliances and the energy consumption of
electromechanical equipment. The annual electricity consumption was calculated from the electricity bills
of the last four years of the nursery. The energy consumption of appliances was estimated from the annual
use of the appliances in the building and the energy that each of them consumes. The energy consumption
of the electromechanical equipment (burner and circulators) was obtained from the calculation of the
heating losses of the thermal zone, the annual oil consumption of the building and the heating energy
consumption, which had been exported from the energy performance certificate. Having subtracted all the
above consumptions from the annual electricity consumption, the lighting energy consumption of the
whole building was estimated. The annual operation hours of lighting emerged so that they could be used
in the estimation of the energy saving measures.
In this part, the proposals that aim to increase the energy efficiency of the public building are technically
analyzed, and the energy gain associated with each measure is presented.
Lamp replacement in and out of the thermal zone: Inside the building (in the thermal zone) there are 316
fluorescent lamps TL-D 60 W and 12 incandescent lamps of 60 W. After extensive research and
techno-economic evaluation, it is evident that the use of LED lamps instead of fluorescent and incandescent
lamps combines the minimum annual cost with the minimum emissions of carbon dioxide. By replacing
all the incandescent lamps of 60 W with LED lamps of 12 W and all the fluorescent lamps TL-D of 60 W
with MASTER LED tubes of 11 W, the lighting energy consumption is reduced by 36.73%. In the building's
basement there are 62 fluorescent lamps TL-D 36 W and 23 incandescent lamps of 60 W. By replacing all
the incandescent lamps of 60 W with LED lamps of 12 W [12] and all the fluorescent lamps TL-D of 36 W
with MASTER LED tubes of 22 W, the lighting energy consumption is reduced by 50.41% [13].
Thermal break frames: Thermal break is the insertion of polyamide, which is a poor heat conductor,
between the inner and the outer aluminum profiles, while maintaining the existing glass. With this measure,
the heating energy consumption will be reduced by 10%, while the cooling energy consumption will remain
almost the same.
Additional insulation to the building and the roof: Heating and cooling are the main energy consumers
in buildings. The use of air conditioning is estimated to triple before 2030. Most of this energy is wasted
due to inadequate insulation. The total area of the vertical building elements, as calculated during the
energy inspection of the building, is 552.14 m². Respectively, the area of the roof is 271.22 m². By adding
insulation to the external side of the building and the roof, the heating energy consumption will be reduced
by 25% and the cooling energy consumption by 4.7%.
External shutters installation in the building's frames: the cooling energy consumption will be diminished by
33.5%. In contrast, the heating energy consumption will be increased. This is an apparent increase produced
by the software in use: according to the software, the shutters block the sun from entering the building, and
therefore the amount of heat that would be obtained from the sun has to be produced by the heating
system; as a consequence, more heating energy is consumed [14].
Natural gas and biomass instead of oil for thermal energy production: These two measures examine the
use of natural gas and of biomass (i.e., wood pellets), respectively, for space heating in the winter and
for the heating of service water during the year. It is expected that there will be no change in the building's
energy consumption, since by replacing the fuel the building's energy needs remain the same [15, 16].
Solar systems installation on the roof of the building: After the calculation of the electricity
consumption of the building, it is expected that the annual energy produced by the solar systems will
be as close as possible to the amount of 31266 kWh. Moreover, for the calculation of the peak power
needed to meet the energy requirements of one day of autonomy, some meteorological characteristics
were obtained from the PVGIS software [17]. After the peak power calculation for each month, the
optimum location of the photovoltaic panels had to be determined so that the best production capacity
would be achieved. Finally, with the use of the relevant software, the annual energy production of the
solar panels was calculated as 29270 kWh for an angle of 30 degrees. Due to the lower power of the
photovoltaic system, the installation will always have a deficit in solar energy production.
In the table below, the above measures are economically evaluated with the Net Present Value (NPV), the
Internal Rate of Return (IRR) and the Discounted Payback Period (DPB) criteria. The final scenario of energy
saving measures consists of the measures with the maximum NPV, with the aim of a profitable investment.
Table 2: Economic evaluation of the proposed measures (cost of investment, NPV, IRR, DPB).
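For reference, the three criteria can be computed from a measure's cash-flow profile as in the following generic sketch; the investment amount, annual saving and discount rate are hypothetical, not figures from Table 2:

def npv(rate, cashflows):
    # Net present value; cashflows[0] is the (negative) investment at t = 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    # Internal rate of return by bisection on the sign of the NPV.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def dpb(rate, cashflows):
    # Discounted payback period: first year with cumulative discounted CF >= 0.
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None  # does not pay back within the horizon

# Hypothetical measure: 10,000 EUR investment, 1,200 EUR annual saving, 20 years.
flows = [-10_000] + [1_200] * 20
print(npv(0.05, flows), irr(flows), dpb(0.05, flows))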
Based on these four measures, the electricity consumption of the building will be reduced from 31266 kWh
to 24809 kWh. The electricity consumption will have a zero cost of production, as 29270 kWh will be
produced annually by the solar system on the roof. In particular, with the use of the net metering system, a
surplus of 4461 kWh from the solar production will remain and flow into the electrical grid, offering an
annual income of 514 € to the Municipality of Maroussi. The building will finally be ranked at the energy
efficiency class A+. In other words, the public building will be a net zero-energy building with a primary
energy consumption of 28.2 kWh/m² and zero annual carbon dioxide emissions, according to the new
energy performance certificate that was obtained from the energy inspection.
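The annual energy balance behind these figures can be checked directly; in the sketch below, the implied feed-in price is a back-calculation from the reported surplus and income, not a value stated in the study:

# Annual energy balance of the final scenario (figures from the study).
consumption_kwh = 24_809   # electricity demand after the saving measures
production_kwh = 29_270    # annual PV output on the roof
surplus_kwh = production_kwh - consumption_kwh    # 4461 kWh fed to the grid
income_eur = 514                                  # reported annual income
implied_price = income_eur / surplus_kwh          # ~0.115 EUR/kWh (implied)
print(surplus_kwh, round(implied_price, 3))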
4. CONCLUSIONS
According to the economic evaluation presented in Table 2 and the technical evaluation of each measure,
the final scenario of energy saving measures which offers the greatest efficiency to the public building
includes the replacement of lamps, the additional insulation of the building and the roof, the use of biomass
instead of oil, and the installation of solar systems on the roof. The total cost of the investment will be
67100 €, with an NPV of 26295 €, an IRR of 9.3% and a DPB of 12.2 years. Because of these four measures,
the electricity consumption of the building will be reduced from 31266 kWh to 24809 kWh.
Moreover, the new electricity consumption will have a zero cost of production, as 29270 kWh will be
obtained annually from the solar systems on the roof. More importantly, owing to the use of the net
metering system, a surplus of 4461 kWh from the solar production will flow into the electrical grid, offering
an annual income of 514 € to the Municipality of Maroussi. The building will finally be ranked at the energy
efficiency class A+. In other words, the public building will eventually be a net zero-energy building with a
primary energy consumption of 28.2 kWh/m² and zero annual carbon dioxide emissions, according to the
new Energy Performance Certificate that was obtained from the Energy Inspection.
REFERENCES
[1] International Energy Agency, Policy Pathways: Energy Performance Certification of Buildings - A Policy Tool to Improve
Energy Efficiency, 2010. Available online at: http://www.iea.org/papers/pathways/buildings_certification.pdf.
[2] European Commission, Communication from the Commission to the European Parliament, the Council, the European
Economic and Social Committee and the Committee of the Regions: 20 20 by 2020 - Europe's climate change
opportunity, COM (2008) 30 final, Brussels, 2008.
[3] Directive 2012/27/EU on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing
Directives 2004/8/EC and 2006/32/EC [OJ L315, p. 1].
[4] Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of
buildings and its amendments (the recast Directive entered into force in July 2010, but the repeal of the previous
Directive only took place on 1 February 2012).
[5] European Commission, Directive 2010/31/EU of the European Parliament and of the Council on the energy
performance of buildings, Brussels, 2010.
[6] Energy Efficiency Plan 2011, Communication from the Commission to the European Parliament, the Council, and the
European Economic and Social Committee.
[7] Summary of the impact assessment accompanying the proposal for a recast of the Energy Performance of Buildings
Directive (2002/91/EC).
[8] TEE-KENAK: The Hellenic software for energy audits and labelling of buildings. The calculation engine of TEE-KENAK
was based on the EPA-NR tool, which was developed within the framework of a European project, http://www.epa-nr.org.
[9] T.O.T.E.E. 20701-1/2010, Detailed national standard parameters for the calculation of the energy performance of
buildings and the issuance of energy performance certificates, Edition B.
[11] ASHRAE Handbook "Fundamentals", American Society of Heating, Refrigerating and Air-Conditioning Engineers,
Atlanta, Georgia, 2009.
[14] T.O.T.E.E. 20701-1/2010, Detailed national standard parameters for the calculation of the energy performance of
buildings and the issuance of energy performance certificates, Edition B.
[17] PVGIS (Photovoltaic Geographical Information System) is a research, demonstration and policy-support instrument
for solar energy resources, part of the SOLAREC action at the JRC Renewable Energies Unit of the European
Commission (Ispra), http://re.jrc.ec.europa.eu/pvgis/info/faq.htm#data.
Rogkakou S., Matsatsinis N. | Development of a Multicriteria Decision Support System for Selecting the Most Advantageous Route of the Produced Biogas in Landfills
Abstract
The penetration of Renewable Energy Sources (RES) in the electricity balance has increased significantly due to
preferential Feed-in Tariffs (FIT). One of these sources is the biogas evolved at landfills, which is rich in methane.
This study aims to create a decision support tool for landfills regarding the most advantageous route, economically
speaking, between upgrading the biogas and injecting it into the natural gas grid versus producing electricity from the
raw biogas and feeding it into the electricity grid. The MATLAB environment was used for the analysis.
The final decision is based on the income that the algorithm calculates, as it determines the final amount of gas that
can be absorbed by the grid or the quantity of electricity to be committed.
The marginal prices of natural gas compensation are explored, for which the diversion of upgraded biogas into the
natural gas grid is more lucrative than electricity production. The investigation covers both Feed-in Tariff (FIT) prices
and the marginal prices of the electricity system. For these values, the expected amount to be injected into the natural
gas grid is also presented.
KEYWORDS
Decision Support Systems, Renewable Energy, Landfill, Multicriteria Analysis
1. INTRODUCTION
This work aims to determine the price at which it would be worthwhile to upgrade and feed biogas into
the natural gas grid instead of using it for electricity generation.
Two Scenarios
First Scenario: Compensation of electricity under FIT (Feed-In Tariffs). This version of the DSS should
take into account the existing Feed-In Tariff scheme (user defined) and the efficiency of the electricity unit
used.
Second Scenario: Compensation under the Marginal Prices of the National System. A version like this should
take into account future market integration scenarios, such as the participation of biogas electricity
production plants in the open electricity market (i.e., day-ahead markets).
For the calculations, the data of the Ano Liosia landfill, which is the largest in Greece, were used. However,
the same algorithm, with minor modifications, can be used in landfills generally.
II. Recording of data (price of electricity for the plant, biogas consumption of the electricity generator, calorific value of biogas, calorific value of natural gas, transfer price of biogas into the natural gas grid, biogas flow rate of the release plant).
III. Data analysis and processing of the recorded elements (quantity of electricity per 1 Nm³ of raw biogas, transfer price of 1 Nm³ of biogas into the natural gas grid, minimum price of raw biogas above which injection into the natural gas grid is preferred over electricity production, calorific amount corresponding to the equivalent quantity of natural gas, check for other possible options, final injection price of upgraded biogas, quantity of NG delivered per hour, final price of 1 Nm³ of natural gas per hour, final price of 1 kWh of NG in euro, final price of 1 MWh of NG in euro, periodic checking).
IV. Comparison of NG prices between the plant price and the price of the Managing Authority.
V. Association of the arrays of the different profit sources for further comparison. A table is built recording, for each natural gas price, for how many hours it pays to feed the natural gas grid and for how many hours it pays to feed the electricity grid, together with a chart (chart 1) of the electricity hours. The algorithm takes a default constant electricity price of 99.45 €/MWh and then, over a range from 20 € to 75 €, checks the electricity hours as well as the quantity of natural gas for each NG value separately.
The user has the option to give as input the price, the characteristics of the plant, and the characteristics of the transmission grid and of the upgrading.
I. Initially, the user is prompted to give the marginal price of natural gas according to the specifications of the installation and the current pricing policy.
II. The algorithm loads the file with the marginal prices of the electricity system, the day-ahead System Marginal Prices provided by the Managing Authority.
III. The Feed-in Tariff price is compared with every System Marginal Price, and from this comparison arise the hours for which channeling to the natural gas grid or to the electricity grid pays off.
IV. In each case, the total injection hours into each grid are calculated, together with the total amount of natural gas or electricity that can be delivered (a sketch of this comparison follows the list).
V. The user has the option to give as input the price, the plant's characteristics, and the characteristics of the transmission and upgrading grid.
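As a minimal sketch of the hourly routing decision behind steps II-IV, the comparison can be expressed as follows, in Python rather than the authors' MATLAB; every name and figure here is an illustrative assumption, not the actual DSS code:

# Decide, hour by hour, whether the biogas earns more as electricity
# (paid at the hourly System Marginal Price, or at a flat FIT) or as
# upgraded biomethane injected into the natural gas grid.
def route_biogas(smp_hourly, el_mwh_per_h, ng_nm3_per_h, ng_price_per_nm3):
    hours_el, hours_ng = [], []
    total_el_mwh = total_ng_nm3 = 0.0
    for hour, smp in enumerate(smp_hourly):
        revenue_el = smp * el_mwh_per_h                # EUR this hour, electricity
        revenue_ng = ng_price_per_nm3 * ng_nm3_per_h   # EUR this hour, NG grid
        if revenue_el >= revenue_ng:
            hours_el.append(hour)
            total_el_mwh += el_mwh_per_h
        else:
            hours_ng.append(hour)
            total_ng_nm3 += ng_nm3_per_h
    return hours_el, hours_ng, total_el_mwh, total_ng_nm3

# First Scenario: a flat FIT of 99.45 EUR/MWh stands in for the SMP series.
result = route_biogas([99.45] * 24, el_mwh_per_h=1.2,
                      ng_nm3_per_h=180.0, ng_price_per_nm3=0.45)

Under the Second Scenario the same function would simply be fed the 24 day-ahead System Marginal Prices instead of the constant tariff.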
4. CONCLUSION
This work aimed to determine the price at which it would be viable to upgrade biogas and feed it into the natural gas grid instead of using it for electricity generation. Two scenarios were studied for the comparisons: one compensating electricity under FIT, and a second compensating under the Marginal Prices of the System. For the calculations the data of the landfill of Ano Liosia, the largest in Greece, were used. However, the same algorithm, with minor modifications, can be applied to landfills in general.
The results show that working with market prices can be financially beneficial to the plant's manager. The gas supplied by the plant's manager to the gas supply enterprise is cheaper than under the current regime.
The prospect of cooperation between the plant's manager and the gas supplier would bring economic benefits to both companies and put to use the anaerobic digestion gas that is currently burned off in flares.
Injecting biomethane into the natural gas grid gives the biogas producer access to a much larger market of potential buyers, compared with selling biogas locally.
Kapsalis V.C.| Energy consumption evolution process
Vasilios C. Kapsalis*
National Technical University of Athens,
9, Iroon Polytechniou Str.,
157 80, Zografou Campus, Greece
* Email: [email protected]
Abstract
One of the main challenges in energy consumption modeling is the limited amount of historical and forward-looking information. This forces us to augment the standard techniques so as to extract as much information as possible. Consequently, knowledge of the distribution of energy consumption is a very important decision tool and contributes to a wide range of energy efficiency applications, from the establishment of energy conservation measures and the development of energy performance contracts to risk management strategies under uncertainty.
Analysis of the available data is the first, and potentially one of the most important, steps in understanding and quantifying the essential features of an appropriate energy consumption model. In this study, we focus on the special properties of consumption data collected both from utility bills and from a pilot installation of a gas and electricity metering system in a university building. The energy consumption data analysis is based on relevant statistical tools (ANOVA) and methodologies, as well as on meteorological data for the corresponding period. The specific measurement and verification requirements define the appropriate compliance path to use. Several regression techniques are examined to express the yearly relationship of energy consumption with multiple independent variables and to select the appropriate model. The baseline consumption is approximated and successfully correlated with a variable-base degree-day model for both gas and electricity consumption.
Moreover, day-ahead scheduling in liberalized markets requires better insight into modeling techniques, in order to accurately capture the structural characteristics of the empirical data and to manage the risk derived from distortions between scheduled and real energy market outcomes. In deregulated markets the approaches can be quite different from those utilities have adopted for planning purposes in a regulated frame. Thus, besides the long-run investigation through IPMVP procedures, brief real-data examples and back-testing validation, a stochastic approach to the energy evolution process is presented to capture local diffusion behaviour.
KEYWORDS
Energy consumption modeling; Empirical data performance; Markov process; Brownian motion; Energy management
1. INTRODUCTION
Traditionally, energy mathematical models are separated into two large distinct categories, the classical or forward approach and the data-driven or inverse approach (ASHRAE, 2013). The former uses sound engineering principles for the physical description of the building system and simulates its demand response and peak load performance. This method studies the dynamic behavior of the building and can be utilized from the design phase and at a microscopic level. The latter meets different requirements: it analyzes the building system while it operates, using a relatively small number of rather macroscopic parameters.
Other researchers (Foucquier et al., 2013) use a three-category classification in energy modeling: the physical or "white box" models, the statistical methods using machine learning tools or "black box" models, and more recently the hybrid approaches using both physical and statistical tools, the "grey box" technique. In the first group, computational fluid dynamics (CFD), zonal and nodal modeling have been recognized; in the second
group, multiple regression analysis, genetic algorithms, artificial neural networks and support vector machine regression have been used; the third group consists of a combination of the other two, depending on the specific case. Every method has its advantages and disadvantages, according to the features we wish to analyze and the purpose we need the model for.
Moreover, worldwide standards prescribe general approaches with top-down and bottom-up methods (IPMVP Vol. I:2012, BS EN 16212:2012, EN 16001, ISO 50001). Interest in statistics for energy-related models remains high, but this approach has not been widely adopted by the building professional community. All this literature has the same bottom line, namely the definition of the baseline energy consumption process; in ISO 50001 in particular it is a prerequisite to calculate this process, and it is simultaneously the central variable for energy efficiency and savings calculations (Reichl et al., 2011). This is the energy that would have occurred had modifications not been implemented to our system, and it includes the consumption and the adjusted values, in the baseline and in a future period, respectively. Knowledge of this energy is very important and is a primary driver for the valuation of energy conservation measures, the design of energy performance contracts and energy efficiency agreements, as well as risk management strategies under uncertainty. The verification of the right values is another significant aspect for the participants in such negotiations to deal with.
In many cases the variables follow a stochastic process or are coupled with stochastic ones (Andersen and Sollie, 2013), especially when there is indication of stochastic behavior derived from the autocorrelation function. Moreover, we can model the consumption and the predictors as Brownian motion in order to capture local evolution, distribution distortions or data shortage (Fritsch et al., 1990). The first-order Markov process, AR(1), is used to eliminate the non-zero autocorrelation of the residuals and improve the uncertainty of the deterministic model, while others use the Markov chain residual modification to achieve accurate and precise results (Hsu, 2003).
Many researchers invoke the Markov property and the Wiener process conditions to model variables that vary continuously and stochastically through time (Dixit and Pindyck, 1994). The infinitesimal generator (drift and diffusion coefficient) and the conditional expectation operator share eigenfunctions, the eigenvalues of the latter are obtained from those of the former, and sampling the diffusion becomes equivalent to randomizing the time interval that elapses between observations (Hansen et al., 1998). Temperature has already been modeled as a stochastic process to capture, among other things, weather derivatives (Alaton et al., 2002; Gunay et al., 2013).
Here, in section 2 we present the basic assumptions of our study of the statistical performance of the empirical data in the deterministic setting, using well-known regression techniques. In section 3 a review and an introduction of stochastic approaches is given. In section 4 brief examples and their results according to the IPMVP methodology are analyzed and further research is proposed. Finally, section 5 concludes the paper by summarizing our findings.
Y = β0 + β1·X + ε (1)
where Y denotes the energy consumption per day for electricity or gas, respectively, and X stands for the heating or cooling degree days with the best-fit balance point. The constant term β0 is the baseline energy and the slope coefficient β1 expresses the energy use per degree day. The above analysis is made under the assumptions of linearity, unbiasedness and minimum estimator variance. The stochastic part ε of the model has zero expected value and is normal and independent of the non-stochastic variables (the X variables are non-stochastic and there is independency between them).
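As an illustration of equation (1), the degree-day model can be fitted by ordinary least squares in a few lines; the monthly values below are invented for the example, and the last lines anticipate the CV(RMSE) statistic of equation (3) further down:

# Least-squares fit of Y = b0 + b1*X + e on invented monthly data.
import numpy as np

hdd = np.array([310.0, 265.0, 198.0, 120.0, 60.0, 15.0])          # degree days
use = np.array([10400.0, 8900.0, 6700.0, 4100.0, 2100.0, 600.0])  # energy use

X = np.column_stack([np.ones_like(hdd), hdd])    # design matrix [1, X]
beta, *_ = np.linalg.lstsq(X, use, rcond=None)   # [b0, b1]
resid = use - X @ beta
rmse = np.sqrt(np.sum(resid**2) / (len(use) - 2))  # p = 2 parameters
cv_rmse = 100.0 * rmse / use.mean()                # CV(RMSE), equation (3)
print(f"b0 = {beta[0]:.1f}, b1 = {beta[1]:.2f}, CV(RMSE) = {cv_rmse:.2f}%")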
The most significant regressor variable on a daily and monthly basis is the outdoor temperature. The variable-base degree-day concept (Fels, 1986) tries to identify the balance-point temperature at which the energy use switches from weather-independent to weather-dependent. Although the balance point of the specified building can be explicitly derived by dynamic simulation programs, here we calculated the variable-base annual degree days using hourly and bin temperature data, normalized by the average daily standard deviation Sd, the number of days in the month, N, and the difference between the monthly average temperature T and the variable base temperature (ISO 15927-6). Temperature data were derived from average daily values of nearby on-site weather monitoring equipment, entering the corresponding expression
(2)
whose inputs are the mean monthly temperature during the day, the mean monthly temperature over 24 hours and the mean monthly maximum temperature. Once the mean monthly temperature is calculated and the standard deviation of the daily temperature is known, the heating or cooling degree days are found.
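A sketch of the variable-base idea follows: scan candidate balance points and keep the one whose degree days correlate best with consumption. The grid search over daily mean temperatures is an illustrative simplification of the ISO 15927-6 normalization used in the paper:

import numpy as np

def degree_days(daily_mean_temps, t_base):
    # Heating degree days from one month of daily mean temperatures.
    t = np.asarray(daily_mean_temps, dtype=float)
    return float(np.maximum(t_base - t, 0.0).sum())

def best_balance_point(monthly_temps, monthly_use, candidates):
    # Return (balance point, R^2) maximising the fit of use ~ HDD.
    best = None
    for tb in candidates:
        hdd = np.array([degree_days(m, tb) for m in monthly_temps])
        r = np.corrcoef(hdd, monthly_use)[0, 1]
        if best is None or r * r > best[1]:
            best = (tb, r * r)
    return best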
Benchmarking
The operation of the electricity metering system was validated by the vendor, while the gas meters were validated by in-house inspection of the burner cycle over three periods (December 2010, January and February 2011), using minute data and ambient temperatures. As this is not the topic here, we skip the details of that analysis and proceed with the presentation of our methodology.
CV(RMSE) = 100 · RMSE(p) / Ȳ (3)
where the numerator is the root mean squared error of the regression equation with p parameters and the denominator Ȳ is the average of the energy use values.
Energy Management
The procedure above leads to a diagnosis of the energy and environmental performance level of an organization and provides a framework for modeling the energy consumption process; moreover, it results in the justified improvements required by ISO 50001. An augmentation analysis may follow the
establishment of the baseline energy consumption in order to implement a cost-effective retrofit plan.
(4)
The sequence of data was analyzed and a new state space was redefined in order to model the strongly correlated data as a Markov process. If z(t) is a Wiener process, then any change in z, Δz, corresponding to a time interval Δt, has the properties prescribed by the two following conditions (5) and (6):
Δz = ε·√Δt (5)
where ε is a normally distributed random variable (serially uncorrelated) with mean zero and standard deviation 1, and
E[ε_t · ε_s] = 0 for t ≠ s (6)
Thus, z(t) follows a Markov process with independent increments, as the values of Δz over two different time intervals are independent. The generalization of the above leads to more complex processes, such as the several types of Brownian motion, where the evolution of a variable x has the following typical expression:
dx = μ·dt + σ·dz (7)
where dz is the increment of a Wiener process as above, μ is the drift and σ is the variance parameter. Therefore, we model temperature as a deviation from the historical mean, a low-frequency process evolving on a seasonal time scale plus a high-frequency process of daily fluctuations around the first. With the appropriate coefficients we can capture the complex daily trends, and the transformation of (6) may lead to a forward temperature term structure. The solution of the above stochastic equation using the infinitesimal generator and conditional expectations leads to a jointly distributed process between the energy consumption and the temperature. The stochastic investigation of sufficient energy consumption data will be the subject of later work.
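Conditions (5)-(7) can be simulated directly. The sketch below discretises the Wiener increments and a drifting Brownian path; the parameter values are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
n, dt = 365, 1.0 / 365              # daily steps over one year
mu, sigma, x0 = 0.5, 2.0, 16.0      # drift, volatility, starting level

eps = rng.standard_normal(n)        # serially uncorrelated N(0,1), eq. (5)
dz = eps * np.sqrt(dt)              # Wiener increments, with eq. (6) by construction
x = x0 + np.cumsum(mu * dt + sigma * dz)   # discretised path of eq. (7)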
4. BRIEF EXAMPLES
the summer period. Both meters send pulse signals over TCP/IP protocols, and the data are remotely monitored and gathered in dedicated PC software for analysis. An in-house meteorological station near the campus provides weather information for the corresponding period, continuously, and the data are normalized as above. The results of the analysis of variance are presented below for the heating and cooling season, respectively, where SE denotes the standard error of the coefficients and t-student the critical values of the corresponding test. Based on the value of the adjusted R², the specified coefficients of the variables explain 95% of the energy consumption. The one-tail t-student test denotes the significance of each variable, and the null hypothesis is rejected when t is larger than the critical value. The constant coefficient denotes the baseline energy consumption and the slope coefficient is the slope of the regression line for values below or above the balance-point ambient temperature, for heating or cooling respectively.
According to the best-fit regression analysis we can conclude that the gas consumption evolution process is proportional to the heating degree days at the respective balance point. The constant term cannot be included, because the null hypothesis cannot be rejected by the t-student test. The adjusted R² is 0.956 and the CV(RMSE) 8.488%. The regression equation is given below, the statistical parameters are in Table 1, and figures 1 and 2 present the regression curve and the observed and fitted values, respectively.
Y = 33.503 · HDD(21.1) (8)
Table 1 Regression statistics for the gas consumption model
Y ~ HDD(21.1): coef. 33.503, SE 1.449, t-student 15.732, Adj. R² 0.956
The electricity consumption exhibits a more complicated correlation structure, as we observe from the autocorrelation function (Figure 3). The AR process with lag 1 and balance-point temperature 23.9 is the most appropriate here. In this case a first-order Markov modification was made according to the autocorrelation coefficient, ρ, of the residuals, and the variables were transformed accordingly. The adjusted R² is 0.866 and the CV(RMSE) = 6.5%. The coefficients of the variables are shown in Table 2. The critical value of the t-student distribution is t = 2.228 in the 2-tail test, and the null hypothesis is rejected at the 95% confidence level. The errors for each variable are derived from the table's values and the two-tail critical t-student value. The regression equation is given below, the statistical parameters are in Table 2, and figures 2, 3 and 4 present the autocorrelation function of the electricity consumption, the residual (lag 1) ACF and PACF, and the regression curve with the observed and fitted values, respectively.
(9)
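The lag-1 transformation referred to above is, in essence, the classical Cochrane-Orcutt quasi-differencing. A sketch under that assumption, with the data arrays y and x left to the reader:

import numpy as np

def fit_with_ar1_correction(y, x):
    # Ordinary fit, residual autocorrelation, then quasi-differenced refit.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    y_star = y[1:] - rho * y[:-1]                    # y*_t = y_t - rho*y_(t-1)
    x_star = x[1:] - rho * x[:-1]                    # likewise for the regressor
    X_star = np.column_stack([np.ones_like(x_star), x_star])
    beta_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return rho, beta_star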
Figure 5 Electricity consumption model: observed and fitted values with upper and lower boundaries
Finally, back-testing was conducted to examine the validity of applying the IPMVP procedures to our consumption data. To this end, the previous year's gas and electricity data were analyzed and fed into the equations above, together with the corresponding HDD/CDD values and the same balance points. The results are shown in Fig. 6 and Fig. 7, for the gas consumption (two months backwards) and the electricity (one year backwards), respectively. The seasonality indicated by the ACF, especially for the electricity consumption, as well as specific characteristics of the building (e.g. the reduced occupancy and operational factor in August) and the weekday/weekend cycles, can also be considered for a more detailed analysis.
Figure 6 Back testing for the gas consumption, two months backwards (mean monthly value)
Figure 7 Back testing for the electricity consumption, twelve months backwards
4.1.3 Discussion
We have illustrated here a deterministic approach to the energy consumption evolution process. We used the IPMVP approach coupled with the generalized linear regression model and, where needed, a partial use of stochastic modeling via the first-order Markov process transformation. The long-run consideration of this method captures the long-run periods (yearly, balance-of-month, mean monthly data). We derived appropriate equations for gas and electricity consumption and proposed an extension introducing the basics for dynamically approaching the short-run diffusions and local evolution (daily data). Further development and analysis should be conducted to explore the best-fit parameters and the Brownian process of the stochastic model proposed here.
5. CONCLUSIONS
In this paper we analyzed the energy evolution process in two ways, deterministic and stochastic. The former method derives from a regression analysis and correlation with weather-dependent variables, and brief examples were presented. The violation of our original assumptions led us to use the first-order Markov process transformation (lag 1) in order to eliminate the non-zero autocorrelation of the residuals. This resulted in a better fit of the consumption curve and a lower uncertainty estimate, within the range accepted by the IPMVP. This method seems appropriate for long-term periods. Stochastic evolution processes may capture short-term localities and have to be explored; the Brownian motion evolution process is a good candidate for this approach. The interpretation of the curves reveals unique information for conservation, efficiency and risk management strategies. The establishment of such rules offers confidence to the participants of a project and to the market players.
ACKNOWLEDGEMENT
This paper was presented at the 2nd International Symposium and 24th National Conference on Operational Research, 25-28 September 2013, Athens Crowne Plaza. I would like to acknowledge the Executive Energy and Environmental Management Committee (ESEEPD) and the Energy Saving Group of the National Technical University of Athens, as well as the Laboratory of Steam Boilers and Thermal Plants for the gas burner data validation, the Laboratory of Hydrology and Water Resources Management for the meteorological data support, and the Engineering and Facilities Administration Service for the energy consumption data. This paper expresses the author's personal opinion only.
REFERENCES
Alaton P., Djehiche B., Stillberger D., 2002. On Modelling and Pricing Weather Derivatives. Applied Mathematical Finance, Vol. 9, pp. 1-20.
Andersen A., Sollie J. M., 2013. Multi-factor models and the risk premiums: A simulation study. Energy Systems, Vol. 4, pp. 301-314.
ASHRAE, 2013. ASHRAE Handbook - Fundamentals. Atlanta, GA: American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. Chapter 19, Energy Estimating and Modeling Methods.
Dixit A. K., Pindyck R. S., 1994. Investment under Uncertainty. Princeton University Press, Princeton.
Efficiency Valuation Organization, 2012. Concepts and Options for Determining Energy and Water Savings. International Performance Measurement and Verification Protocol, Vol. 1.
Foucquier A., Robert S., Suard F., Stephan L., Jay A., 2013. State of the art in building modelling and energy performances prediction: A review. Renewable and Sustainable Energy Reviews, Vol. 23, pp. 272-288.
Fritsch R., Kohler A., Nygard-Ferguson M., Scartezzini J.-L., 1990. Stochastic Model of User Behavior Regarding Ventilation. Energy and Environment, Vol. 25, No. 2, pp. 173-181.
Gunay H. B., O'Brien W., Beausoleil-Morrison I., 2013. A critical review of observation studies, modeling, and simulation of adaptive occupant behaviors in offices. Building and Environment, Vol. 70, pp. 31-47.
Hansen L. P., Scheinkman J. A., Touzi N., 1998. Spectral methods for identifying scalar diffusions. Journal of Econometrics, Vol. 86, pp. 1-32.
Hsu L.-C., 2003. Applying the Grey prediction model to the global integrated circuit industry. Technological Forecasting & Social Change, Vol. 70, pp. 563-574.
Reichl J., Kollmann A., 2011. The baseline in bottom-up energy efficiency and saving calculations - A concept for its formalisation and a discussion of relevant options. Applied Energy, Vol. 88, pp. 422-431.
Stram D. O., Fels M. F., 1986. The applicability of PRISM to electric heating and cooling. Energy and Buildings, Vol. 9, pp. 101-110.
Vasin A., Kartunova P., Weber G.-W., 2013. Models for capacity and electricity market design. Central European Journal of Operations Research, Vol. 21, pp. 651-661.
Zografidou E., Sakellariou I., Petridis K., Arabatzis G. | Agent Based Modeling and Simulation for
Fuelwood Consumption Prediction
Abstract
Biomass fuels are an increasingly important source of heating energy in Greece. This phenomenon is attributed to the rising domestic oil prices and the economic recession. Fuelwood is one of the most important types of biomass used for heating (wood-burning) by households today, with an increasing trend in its adoption either as a primary or as a secondary source of energy. The prices of this natural resource are primarily determined by a production-allocation model, i.e. the fuelwood supply chain. Nevertheless, other external factors, like the customer's income or weather conditions (average temperature), may affect the demand and therefore prices. In this work, an Agent Based Modelling System (ABMS) is adopted as the tool of choice for modelling the fuelwood supply chain, under which each node in the supply chain is modelled as an agent, dynamically determining its behaviour in the environment according to its individual preferences and parameters. The proposed model has been applied to the prefecture of Pella, where, through a questionnaire-based analysis, the parameters of the model and the agent behavioural rules were derived. The system is examined on a long-term horizon (until 2020).
KEYWORDS
Energy, Agent Based Modeling System (ABMS), Forecasting, Fuelwood Supply Chain, Fuelwood.
1. INTRODUCTION
Fuelwood is derived from forest production activity and is one of the most widely used natural resources in Greece. Similarly to other products, fuelwood is produced, processed, stored and finally delivered to the final node, the customer. The forest supply chain (FSC) is a special form of supply chain and consists of several nodes and operations (production, transportation). At the beginning of this special form of supply chain, several production schemes may guarantee sustainability or maximum production (Galatsidas et al., 2013). Forest compartments act as fuelwood production units and are scheduled for harvesting every 10 years. The next nodes in the chain are Agricultural Forest Cooperatives (AFCs), warehouses (wholesalers) and finally households (consumers). In the case of Greece, fuelwood logging is mainly conducted by AFCs that are assigned to specific forest compartments (Arabatzis et al., 2013). The present work approaches a fuelwood supply chain model through an Agent Based Modelling System (ABMS). Under ABMS, each node in the supply chain is modelled as an agent, dynamically determining its behaviour in the environment according to its individual preferences and parameters. ABMS (Tesfatsion, 2006; Farmer and Foley, 2009) is a rather recent approach in Computational Economics; it is in fact a micro-simulation approach, providing the means for specifying the behaviours of individual entities (consumers/providers) and for modelling complex agent interactions. System properties (i.e. market behaviour) emerge from agent interactions, a phenomenon referred to as emergence; ABMS therefore constitutes a bottom-up approach to modelling, allowing easy investigation of different scenarios.
A number of successful applications of ABMS (Richiardi, 2012) to economics have been reported in the literature, as indicated by a special issue in a prestigious computer science journal (Marks and Vriend, 2012), and in energy markets, for instance an agent based system addressing global energy needs (Voudouris et al., 2011). However, there are only a few approaches that deal with the problem of fuelwood supply chain modelling and price prediction in this specific market (Kostadinov et al., 2013).
The rest of the paper is structured as follows: in section 2 the results of the statistical analysis are presented, in section 3 the FSC agent model is described, and in section 4 the predicted fuelwood prices are presented on a short-term horizon (2013-2014) and on a long-term horizon (2014-2020). The conclusions and further research are presented in the final section.
2. STATISTICAL ANALYSIS
The lack of available data regarding the attitude of consumers towards fuelwood consumption, such as the quantity of fuelwood consumed, the household energy footprint and the percentage of heating energy covered by fuelwood, led to the introduction of a questionnaire consisting of 37 questions covering various aspects of fuelwood use. The analysis enabled the determination of the behavioural characteristics of consumers, and covered a sample of 385 households in a number of regions of the Pella prefecture. The questions concerned fuelwood consumption, household income, past purchased fuelwood prices/quantities, warehousing storage space and the fuelwood order rate per annum. The answers to the questionnaires were used to model the consumer agents, as described below.
Figure 1 Histograms for prices and quantities for years 2010 – 2013.
agent has its own behavioural specification that is affected by a number of parameters, different for each agent, which allows the diversity of the entities to be modelled easily.
For instance, warehouses have different storage capacities and decide to buy wood from AFCs when their stock falls below a certain threshold. Probably the most interesting feature of the model is that, instead of analysing the answers to the questionnaires and deriving a "general" model of consumer (household) behaviour, each questionnaire was used to set the parameters of one agent. Thus, each agent is "unique" in the sense that it has its individual behaviour, dictated by its own characteristics (quantity of fuelwood used, consumption with respect to temperature, time of purchase, amount of money spent on heating, maximum storage capacity, etc.).
A challenge faced in the model development was to define the energy consumption of each household, and consequently the rate of fuelwood consumption. The latter was necessary in order to determine the time point at which consumers go "shopping" for wood, a parameter that can have a large impact on the price. Since no data were available, a "reverse engineering" approach was used to determine the day-to-day consumption with respect to temperature, by simply dividing the past fuelwood consumption of each agent over the total degree days in Pella, given by the following equation:
DegreeDays_total = Σ_month (T_base − T_average)·30, if T_average < T_base (months with T_average ≥ T_base contribute nothing)
In the model, the base temperature is set to 16 °C and we consider the average temperature of each month.
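A sketch of this calibration follows; the 30-day months and the base of 16 °C follow the text, while the temperature profile and the yearly consumption are invented for the example:

def total_degree_days(monthly_avg_temps, t_base=16.0):
    # Months with average temperature above the base contribute nothing.
    return sum((t_base - t) * 30 for t in monthly_avg_temps if t < t_base)

def fuelwood_per_degree_day(yearly_fuelwood, monthly_avg_temps):
    # Day-to-day burn rate per degree day, from questionnaire answers.
    dd = total_degree_days(monthly_avg_temps)
    return yearly_fuelwood / dd if dd > 0 else 0.0

# e.g. an agent that reported 8 cubic metres over a Pella-like year:
temps = [4, 6, 10, 14, 19, 24, 26, 26, 21, 15, 9, 5]
rate = fuelwood_per_degree_day(8.0, temps)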
Price is dynamically adjusted by the warehouses based on rules of thumb. The warehouses compete for customers and decrease their selling price when their current customers are fewer than the customers of the best retailer in their vicinity. In order to avoid large price fluctuations, once a warehouse adjusts its price it retains that price for a period of 15 days, as in the sketch below.
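A sketch of that warehouse rule; the price step and the attribute names are illustrative assumptions, not the NetLogo source:

class Warehouse:
    def __init__(self, price):
        self.price = price
        self.customers = 0
        self.freeze_days = 0

    def adjust_price(self, best_rival_customers, step=1.0):
        # Hold a recently changed price for the full 15-day freeze.
        if self.freeze_days > 0:
            self.freeze_days -= 1
            return
        # Undercut only when losing to the best retailer in the vicinity.
        if self.customers < best_rival_customers:
            self.price = max(self.price - step, 0.0)
            self.freeze_days = 15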
The model was implemented in NetLogo (Wilensky, 1999), a multi-agent modelling and simulation platform targeted at ABMS with large numbers of agents. In this model, agents are simple reactive agents that follow the rules described above. A snapshot of the FSC agent model implemented in the NetLogo platform is presented in Figure 2. Based on this implementation, the results described in the following section were obtained.
Parameter / Value
Total forest production: 55,000 sp. m³
Compartments examined per year: 55
Number of AFCs: 150
Population of the targeted area: 140,000 citizens
Base temperature: 16 °C
4. RESULTS
Applying the ABMS model, a projection of fuelwood prices is derived based on the rules described in the previous section, as shown in Figure 2. The prices for the year 2013-2014 form an increasing curve, while prices in the years 2014-2020 may show large ups and downs, with the largest value approximately equal to 52 €/sp. m³. The increasing trend is obvious in both the short-term and the long-term prediction of fuelwood prices. This is quite reasonable due to the increasing consumption for household heating; the assumption made in this analysis, however, is that oil prices remain at their current high levels. As fuelwood is generally a substitute fuel for a large part of the Greek population, a decrease in oil prices would certainly affect the fuelwood prices as well.
ACKNOWLEDGEMENT
Financial support of E. Zografidou from “IKY fellowships of excellence for postgraduate studies in Greece –
Siemens Program” is gratefully acknowledged.
REFERENCES
Arabatzis G., Petridis K., Galatsidas S., Ioannou K., 2013. A demand scenario based fuelwood supply chain: A conceptual model. Renewable and Sustainable Energy Reviews, Vol. 25, pp. 687-697.
Galatsidas S., Petridis K., Arabatzis G., Kondos K., 2013. Forest production management and harvesting scheduling using dynamic Linear Programming (LP) models. 2013 HAICTA Proceedings, Corfu, Greece.
Kostadinov F., Holm S., Steubing B., Thees O., Lemm R., 2013. Simulation of a Swiss wood fuel and roundwood market: An explorative study in agent-based modeling. Forest Policy and Economics, http://dx.doi.org/10.1016/j.forpol.2013.08.001.
Farmer J. D., Foley D., 2009. The Economy Needs Agent-based Modelling. Nature, Vol. 460, pp. 685-686.
Marks R. E., Vriend N. J., 2012. The special issue: agent-based computational economics - overview. The Knowledge Engineering Review, Vol. 27, Special Issue 2, pp. 115-122, Cambridge Journals Online.
Richiardi M. G., 2012. Agent-based computational economics: a short introduction. The Knowledge Engineering Review, Vol. 27, Special Issue 2, pp. 137-149, Cambridge Journals Online.
Tesfatsion L., 2006. Agent-Based Computational Economics: A Constructive Approach to Economic Theory. In: L. Tesfatsion and K. L. Judd (Eds.), Handbook of Computational Economics, Chapter 16, Vol. 2, pp. 831-880, Elsevier, ISSN 1574-0021, ISBN 9780444512536.
Voudouris V., Stasinopoulos D., Rigby R., Di Maio C., 2011. The ACEGES laboratory for energy policy: Exploring the production of crude oil. Energy Policy, Vol. 39, No. 9, pp. 5480-5489.
Wilensky U., 1999. NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
Petsa A., Vozikis A. | The Obstetric Services in the Attica Area: Concentration Ratio and Competition
Abstract
Introduction: The Health System in Greece is a mixture of public and private sector. In several medical specialties, the private sector is the dominant one. Especially in the case of obstetric services, the private sector has almost taken on the features of an oligopolistic market.
Purpose: The purpose of this research is to estimate the intensity of competition among the private maternity-obstetric hospitals during the period 2005-2012 in Greece, and more specifically in the Attica area.
Material and Methods: The data used in this research were collected from the published accounts and financial statements of the private maternity hospitals, as well as from various published studies in Greece. The research tools used were the globally accepted Concentration Ratio (CR) and Herfindahl-Hirschman Index (HHI). More specifically, the revenues as well as the market shares of every maternity hospital were registered from 2005 to 2012, and then the two indexes were calculated. The data were analyzed with the use of Microsoft Excel.
Results: Our survey showed that there are basically five maternity hospitals in the Attica area. During the period from 2005 to 2010, the dominant maternity hospitals in the Attica area were three (Iaso, Mitera and Leto). They summed up to 100% of the market in the area of Attica and approximately 80% of the market share in the whole country. A differentiation in the market shares of the three dominant maternity hospitals was observed in 2010, with the entrance of two new companies (Gaia and Rea) into the market, as well as with the onset of the economic crisis in Greece. An even greater differentiation in the market shares was observed during the period 2011-2012, attributed to the austerity measures imposed on Greece and the subsequent reduction in the number of births. There was also a change in the three dominant positions: the first and second positions remained with Iaso and Mitera, but the maternity hospital Rea gained the third position, owing to the unique location it possesses, since it is the only one situated in the southern area of Athens.
Conclusions: The concentration of obstetric services is particularly high in the area of Attica. The variations in the two indexes, starting from the year 2010, are mainly due to the entrance of two new "players" into the market (Gaia and Rea) and to the economic crisis in Greece. The latter has resulted in the reduction in the number of births and the shift to public maternity hospitals, in order to reduce the burden on the family budget. Finally, the entry of Rea, a new maternity hospital, among the three dominant ones in the area of Attica is attributed to its strategic and unique location in the southern area of Attica.
KEYWORDS
Obstetric services, Competition, Concentration, Attica, Economic crisis.
1. INTRODUCTION
Childbirth is the most beautiful event of womanhood and the performance of a sacred mission. The final act of childbirth, for safety reasons, was established to be performed in a Maternity Hospital (Public or
Private), turning maternity hospitals into complex and growing organizations that are constantly evolving due to increased requirements and the competition between the public and private sector (Papadopoulou, 2004).
In Greece, the provision of health services is based on a mixture of public and private sector (Kyriopoulos & Niakas, 1991; Tountas et al., 2005; Boutsioli, 2007). The private sector is divided into diagnostic centers and clinics, which are further divided into neuropsychiatric clinics, obstetric-gynecological clinics and general clinics (large multi-purpose, mid-sized and small) (ICAP Databank, 2005; 2006; 2007; 2008; 2010; Daskalaki, 2006; Kyritsis, 2007).
Table 1 The existing Maternity Hospitals in the Attica area (both public and private) as well as their availability of beds.
(ICAP Databank, 2010; Webpages of Private Maternity Hospitals Leto, Mitera, Iaso, Gaia, Rea, 2011)
Figure 1 The market shares; Figure 2 Concentration Ratio (CR); Figure 3 HHI
CR_n = s_1 + s_2 + … + s_n, the sum of the market shares s_i of the n largest firms
In the following sections, the turnovers, the market shares and the two indexes (CR and HHI) for the market of Attica are presented. A short computational sketch of the two indexes follows.
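Both indexes are straightforward to compute from yearly market shares. A sketch with invented shares (with shares in percent, the HHI ranges up to 10,000, and values above 2,500 are conventionally read as highly concentrated):

def concentration_ratio(shares, n=3):
    # CR_n: sum of the n largest market shares (in %).
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    # HHI: sum of squared market shares of all firms.
    return sum(s * s for s in shares)

shares = [45.0, 35.0, 20.0]           # illustrative, not the actual data
cr3 = concentration_ratio(shares)     # 100.0 -> three firms hold it all
h = hhi(shares)                       # 3650.0 -> highly concentrated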
2.1.1. Turnovers
The annual turnovers of each private maternity hospital in the Attica area from 2005 until 2012 are
presented in Table 2.
Table 2. Turnovers of the private maternity hospitals in the Attica area (2005 – 2012)
Table 3 Market shares of the private maternity hospitals in the Attica area (2005 – 2012)
Figure 4 Concentration Ratio (CR3) of the three dominant private maternity hospitals in the Attica area
Figure 5 Herfindahl – Hirschman Indexes (HHI) of the three dominant private maternity hospitals in the Attica area
(2005 – 2012)
Table 4 Market shares of the biggest private maternity hospitals in Greece (2005 – 2012)
Figure 6 Concentration Ratio (CR3) of the three dominant private maternity hospitals in Greece
Figure 7 Herfindahl – Hirschman Indexes (HHI) of the three dominant private maternity hospitals in Greece
2.3. Comparison
In the following section, the two indexes (CR and HHI) for the market of obstetric services are compared between the Attica area and the whole country.
Figure 8 Concentration Ratio (CR3) of the three dominant private maternity hospitals in Greece
Figure 9 Herfindahl – Hirschman Indexes (HHI) of the three dominant private maternity hospitals in the Attica area and
in Greece (2005 – 2012)
2.4. Results
Our research showed that:
There are basically five maternity hospitals in the Attica area (Iaso, Mitera, Leto, Gaia and Rea).
During the period from 2005 to 2010, the dominant maternity hospitals in the Attica area were three (Iaso, Mitera and Leto). In the private sector, they summed up to 100% of the market in the area of Attica and approximately 80% of the market share in the whole country.
A differentiation in the market shares of the three dominant maternity hospitals was observed in 2010, with the entrance of two new companies (Gaia and Rea) into the market, as well as with the onset of the economic crisis in Greece.
An even greater differentiation in the market shares was observed during the period 2011-2012, attributed to the austerity measures imposed on Greece and the subsequent reduction in the number of births.
There was also a change in the three dominant positions: the first and second positions remained with Iaso and Mitera, but the maternity hospital Rea gained the third position, due to the unique location it possesses, since it is the only one situated in the southern area of Athens.
3. CONCLUSIONS
The concentration of obstetric services is particularly high in the area of Attica. The variations in the two indexes, starting from the year 2010, are mainly attributed to the entrance of two new "players" into the market (Gaia and Rea) and to the economic crisis in Greece. The latter has resulted in the reduction in the number of births and the shift to public maternity hospitals, in order to reduce the burden on the family budget. Finally, the entry of Rea, a new maternity hospital, among the three dominant ones in the area of Attica is attributed to its strategic and unique location in the southern area of Attica.
ACKNOWLEDGEMENT
I would like to thank my supervisor, Assistant Professor Athanasios Vozikis, for his help and guidance. I also
owe a huge thank you to my family and friends for their support and their understanding.
REFERENCES
1. Papadopoulou E., 2004. Evaluation of the private and public maternity hospitals in Drama in 2002 and the attitude of women towards the provided services. Hellenic Open University, Post Graduate Program in Health Management (Post Graduate Thesis).
2. Kyriopoulos G., Niakas D., 1991. The funding of health services in Greece. 1st Edition, Athens, Center of Social Health Sciences.
3. Tountas Y., Karnaki P., Pavi E., Souliotis K., 2005. The unexpected growth of the private sector in Greece. Health Policy, pp. 167-180.
4. Boutsioli Z., 2007. Concentration in the Greek private hospital sector: A descriptive analysis. Health Policy, pp. 212-225.
5. ICAP Databank, 2005. Private Health Services in Greece.
6. ICAP Databank, 2006. Private Health Services in Greece.
7. ICAP Databank, 2007. Private Health Services in Greece.
8. ICAP Databank, 2008. Private Health Services in Greece.
9. ICAP Databank, 2010. Private Health Services in Greece.
10. Daskalaki E., 2006. The private health sector: Iaso A.E. University of Piraeus, Department of Business Administration, Post Graduate Program in Business Administration (Post Graduate Thesis).
11. Kyritsis C., 2007. Development Strategies in the private health sector in Greece: Ygeia (Case Study). University of Piraeus, Department of Economic Science, Post Graduate Program in Economic and Business Strategy (Post Graduate Thesis).
12. Kreatsas G., 1998. Modern Obstetrics and Gynecology. 1st Edition, Volume II, Athens, Paschalidis Editions.
13. Michalas S., 2000. Obstetrics and Gynecology. 1st Edition, Athens, Parisianou Editions.
14. Eleytheriou N., 2005. Comparative study of the factors that define the frequency of caesarean section in the public and private health sector. Hellenic Open University, Post Graduate Program in Health Management (Post Graduate Thesis).
15. Ministry of Health and Social Solidarity, 2010. National Plan for the Public Health. Available at: www.ygeianet.gov.gr. Retrieved at: 24/12/2010.
16. Webpage of Private Maternity Hospital Leto. Available at: http://www.leto.gr. Retrieved at 29/07/2011.
17. Webpage of Private Maternity Hospital Mitera. Available at: http://www.mitera.gr. Retrieved at 29/07/2011.
18. Webpage of Private Maternity Hospital Iaso. Available at: http://www.iaso.gr. Retrieved at 29/07/2011.
19. Webpage of Private Maternity Hospital Gaia. Available at: http://www.gaiamaternity.gr. Retrieved at 29/07/2011.
20. Webpage of Private Maternity Hospital Rea. Available at: http://www.reamaternity.gr. Retrieved at 29/07/2011.
Vozikis A., Riga M., Pollalis Y. | Medical Errors in Greece: Main Research Findings Through Greek Courts’
Judgments
Abstract
Objectives: Recent research in Europe and the USA has revealed that the number of patients who have experienced a medical error in healthcare has increased worryingly over the last decade, while over half of the harm refers to medical errors that were reasonably preventable. At the same time, surveys indicate that medical errors constitute a significant financial burden on health care systems. According to a recent Eurobarometer survey (2010), Greece ranks first among the EU27: more than eight out of ten Greek citizens feel it is likely they will be harmed by healthcare, whilst 16% of respondents, or a member of their family, have experienced such an incident. The innovation of this paper is that it attempts a survey of the current situation in Greece through the judgments of civil and administrative courts.
Methodology: An extensive analysis was conducted of 287 cases associated with medical malpractice that were adjudicated by Greek courts over the last 10 years. The research process included a detailed review of each case and the recording of economic and other data. Simple descriptive statistical analysis was then applied to extract information relevant to our survey with the aid of the coded variables.
Results: The findings from our audit of the various judgments concerning public and private hospitals are compatible with international research. About 68% of the legal actions were brought against public hospitals and 18% against private hospitals and medical centers. About 45% of medical errors occur during treatment, while most incidents of medical error relate to patient death (37%) or permanent disability (36%). Further, the survey unveiled that at the top of the list of specialties involved in medical errors are general surgery and obstetrics-gynecology. Additional findings show that the average amount of financial compensation awarded by the courts is, indicatively, €623,146 for obstetrics-gynecology, €324,901 for general surgery and €319,771 for anesthesiologists.
Conclusions: Our research found that in Greece the economic burden of medical errors on the health care system appears to be worryingly high. In Greece, unlike other countries in the world, the assessment of the overall burden of medical errors is not achievable, mainly due to the absence of any medical error reporting system. The development and implementation of a Medical Error Reporting Information System (MERIS), in order to detect, record and analyze medical errors nationwide, is paramount for achieving control of the problem of medical errors in Greece. Above all, the aim of MERIS is to become a useful tool for health providers, enhancing both patient safety and quality health care, with the major prerequisites being co-operation and trust between patients, health care professionals, hospital managers and government.
KEYWORDS
Medical Errors, Medical Malpractice, Healthcare Costs, Health Information Systems, Patient Safety, Quality Health Care
1. INTRODUCTION
For several years, medical errors have been a very common phenomenon worldwide, capable of causing temporary or permanent harm to patients receiving healthcare. At the same time, the economic burden on health care systems appears to be very high (Vozikis and Riga, 2012).
In the U.S.A., 98,000 deaths occur annually in hospital care due to medical errors (McLean, 1997), and in the U.K. between 20,000 and 30,000 patients die each year as a consequence of adverse events, while a greater proportion of patients suffer health complications (WHO, 2000; Thomas et al., 2000). In Germany, 30,000 patients die every year (McLean, 1997), and in New Zealand over 50,000 hospitalized patients have been harmed by one or more medical errors (Bismark et al., 2006). In Greece, there are no official statistics on the current situation regarding medical errors, due to the absence of any medical error reporting system, which obstructs any attempt at recording and analyzing adverse events and medical errors. Adverse events and medical errors are detected through spontaneous reporting, and thus only a small number of them are finally identified (Vozikis and Riga, 2012). However, informal Greek statistics suggest that between 20 and 30 patients die every day and about 200 patients daily suffer from serious medical errors, many of which could have been prevented (Vozikis and Riga, 2008).
The aim of this paper is to present the current situation regarding medical errors in Greece through Greek courts' judgments, and also to present the main characteristics of the Medical Error Reporting Information System (MERIS), which is used to identify, collect, report and analyze medical errors and patient adverse events, in order to enhance patient safety and health care quality.
In To Err is Human, the IOM (1999) sets out the definition of an adverse event as follows: "An adverse event is defined as an injury caused by medical management rather than by the underlying disease or condition of the patient."
The classification of medical error types comprises (Leape, 1993):
Diagnostic Errors
Treatment Errors
Preventive Errors
Other
The National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP, 1998) distinguishes the following categories of severity, in case medical errors do cause harm:
3. METHODOLOGY
Our research was conducted through Greek courts' judgments, with a sample of 287 cases associated with medical malpractice over the last 10 years. The research process included a detailed review of the content of the legal cases and the establishment of a database recording encoded data related to, e.g.: (a) Type of Health Care Unit, (b) Medical Specialty, (c) Description of the case, (d) Phase of care, (e) Severity of the Medical Error, (f) Amount of financial compensation. To extract our research findings, we applied simple descriptive statistical analysis, frequency analysis and cross-tabulation analysis, along the lines of the sketch below.
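A sketch of the frequency and cross-tabulation step with pandas, on a toy version of the encoded database; the column names and values are illustrative, not the actual court data:

import pandas as pd

cases = pd.DataFrame({
    "unit":      ["public", "public", "private", "public"],
    "phase":     ["treatment", "diagnosis", "treatment", "treatment"],
    "severity":  ["death", "permanent", "death", "temporary"],
    "award_eur": [350000, 120000, 610000, 45000],
})

phase_freq = cases["phase"].value_counts(normalize=True) * 100    # frequencies
severity_by_unit = pd.crosstab(cases["unit"], cases["severity"])  # cross-tab
mean_award = cases.groupby("phase")["award_eur"].mean()   # mean compensation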
4. RESEARCH FINDINGS
More medical errors appear to occur in public hospitals (67.94%) than in the private health sector, but this is largely due to the fact that a large proportion of the sample comes from the Administrative Courts (Fig. 1). Most medical errors happen during Treatment, with 44.25%, followed by errors at the phase of Diagnosis, with 32.40% (Fig. 2). 36.93% of medical errors resulted in death and 35.89% in permanent disability (Fig. 3). The interventional specialties of Obstetrics and Gynecology and General Surgery gather the most incidents of medical errors (Fig. 4). The medical specialties "responsible" for the highest financial compensations are again the surgical specialties, followed by Anesthesiology (Fig. 5). The highest mean compensation is awarded by the courts in cases of patient death (Fig. 6).
Figure 4: Frequency of medical errors by specialty
Figure 5: Mean Compensation by specialty (>100,000 €)
The aim of the system is to detect, record and analyze the key characteristics and factors that contributed to the causation of adverse events ("root cause" analysis). Its goal is also to reduce the injuries caused by medical errors, to understand the omissions of the Health Care System, and to activate health professionals and citizens towards preventive interventions. Last but not least, MERIS has the opportunity to disseminate the experiential information learned from errors to all health organizations at risk of adverse events.
MERIS must meet some basic requirements to become a successful reporting system, as follows:
Building stakeholders' awareness of medical errors
Mapping out a nation-wide strategy with the use of Health Information Technology
Promoting quality assurance practices, patient safety standards and decision-making processes for reducing adverse events and medical errors
Educating health professionals and patients in patient safety reporting systems
6. CONCLUSIONS
Our research findings are consistent with the findings of other surveys in the literature (Pollalis et al., 2012). In a U.S. survey, the specialties of General Surgery and Obstetrics and Gynecology occupied the first two positions as responsible for causing harm due to medical malpractice (US Dept of Health and Human Services, 2002).
At this point, it should be clarified that medical errors and adverse events mainly occur as a consequence of systemic problems in a health care system (Vozikis and Riga, 2012). Specifically, understaffing in hospitals, an unsafe working environment, the severity of patients' conditions, increased workload, rotating work schedules, burnout of health professionals and inadequate nursing staff are the most important root causes of medical errors. To sum up, this paper does not intend to blame health professionals, but only to raise awareness among all stakeholders towards taking preventive actions to reduce medical errors and adverse events.
ACKNOWLEDGEMENT
REFERENCES
Bismark M., Dauer E., Paterson R., Studdert D., 2006. Accountability sought by patients following adverse events from medical care: the New Zealand experience. CMAJ, 175(8), pp. 889-894.
European Opinion Research Group, 2010. Patient safety and quality of healthcare - Full report. Directorate-General for
Health and Consumers. European Commission, Special Eurobarometer 327/ 72.2.
Institute of Medicine, 1999. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press.
Leape L.L., 1993. Preventing Medical Injury. QRB Quality Review Bulletin, 19(5), pp. 144-149.
National Patient Safety Foundation, 1997. Nationwide poll on patient safety: 100 million Americans see medical mistakes directly touching them (press release). McLean, VA: National Patient Safety Foundation.
National Coordinating Council on Medication Error Reporting and Prevention, 1998. Taxonomy of Medication Errors.
Pollalis Y., Vozikis Α., Riga Μ., 2012. Qualitative Patterns of Medical Errors: Research Findings from Greece, Rostrum of
Asclepius Vol.11, 4.
Quality Interagency Coordination Task Force, 2000. Report to the President, Doing What Counts for Patient Safety:
Federal Actions to Reduce Medical Errors and Their Impact, http://www.quic.gov/report/mederr4.htm#evidence
Thomas E.J., Studdert D.M., Runciman W.B., Webb R.K., Sexton E.J., Wilson R.M. et al., 2000. A comparison of iatrogenic injury studies in Australia and the USA, I: Context, methods, casemix, population, patient and hospital characteristics. International Journal for Quality in Health Care, 12(5), pp. 371-378.
US Dept of Health and Human Services, 2002. Confronting the new Health Care Crisis: Improving Health Care Quality
and Lowering Costs by fixing our Medical Liability System, Washington, D.C.: Office of the assistant secretary for
planning and Evaluation.
Vozikis A., 2009. Information management of medical errors in Greece: The MERIS proposal. International Journal of Information Management, 29, pp. 15-26.
Vozikis A., Riga M., 2012. Patterns of medical errors: A challenge for quality assurance in the Greek Health System, In:
Mehmet Savsar (Ed.), Quality Assurance and Management, Publisher: Intech, pp. 245-266.
Vozikis A., Riga M., 2008. Medical errors in GREECE: The economic perspective through the awards of administrative
courts, Society, Economy and Health, 2: 22-44.
WHO, 2000. The World Health Report 2000: Health Systems - Improving Performance. Geneva: World Health Organization.
Konstantiou P., Papadopoulou A., Psarras J. | Energy Priorities’ Identification for the Municipality of
Apokoronas for the Reduction of its Carbon Footprint
Abstract
The Covenant of Mayors is a European initiative that aims to fight climate change through the promotion of energy efficiency and the penetration of RES. A large number of municipalities across Europe have already joined this initiative, while Greece numbers 81 signatories. In April 2013 the Municipality of Apokoronas signed the Covenant and pledged to reduce its CO2 emissions by at least 20% by 2020.
In the above context, a study of the energy and carbon footprint of the municipality for the chosen base year (Action Plan for Sustainable Development) took place, and a series of actions promoting energy efficiency and renewable energy sources in all energy-consuming sectors of the municipal territory is suggested.
KEYWORDS
Action Plan for Sustainable Development
1. INTRODUCTION
The Municipality of Apokoronas is located in north-west Crete, Greece, bordering the municipalities of Chania and Sfakia, with a population of 12,860 and an area of 313 km². The major towns of Apokoronas are Vryses, Vamos and Armenoi, with municipal and utility offices and many stores, while Kalyves, Almyrida and Georgioupoli are the largest beach resorts. The municipality has solar and wind potential that has not been fully exploited. The major local industries are agriculture and tourism. As mentioned above, in April 2013 the municipality of Apokoronas decided to take part in the fight against climate change and joined the Covenant of Mayors.
In this work, we recorded the energy and carbon footprint and suggested a series of actions that are most appropriate for the municipality of Apokoronas. In addition, an important part of this study is the determination of the factors used to assign implementation priority to the proposed measures.
2. SECTION Ι
Data that could not be identified directly at the municipal level were estimated using scientific publications and studies. On this basis, the energy and carbon footprint of the municipality for the chosen base year, 2011, was compiled for every sector.
It is observed that over half of the energy was consumed in transport, 10% in the farming sector, as the municipality is a rural area, and the rest in buildings. Every sector's contribution to emissions differs from its contribution to energy consumption because different fuels are associated with different emission factors.
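As a minimal illustration of this point, the sketch below converts hypothetical sectoral energy consumption figures into CO2 emissions using indicative emission factors; all numbers are invented for the example and are not taken from the study.

```python
# Hypothetical sectoral energy use (MWh) and indicative emission factors
# (tonnes CO2 per MWh); all values are illustrative, not the study's data.
consumption_mwh = {"transport": 30_000, "farming": 6_000, "buildings": 24_000}
emission_factor = {"transport": 0.27, "farming": 0.27, "buildings": 0.80}

emissions = {s: mwh * emission_factor[s] for s, mwh in consumption_mwh.items()}
total_energy = sum(consumption_mwh.values())
total_co2 = sum(emissions.values())

# Each sector's share of emissions differs from its share of energy use
# because the emission factors differ.
for sector in consumption_mwh:
    print(f"{sector}: {consumption_mwh[sector] / total_energy:.1%} of energy, "
          f"{emissions[sector] / total_co2:.1%} of CO2")
```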
The proposed measures include:
Street lighting
Program "Photovoltaics on roofs"
New energy-efficient tires
Automatic water intake
Eco driving
Initially, the selection of the proposed measures was based on the municipality's energy needs, organization, infrastructure and capabilities. Local communities, residents and the business community have a great role to play, as the municipality aims to make people aware of the challenge of climate change and empower them to make more efficient and environmentally friendly choices. Due to its low budget, the municipality does not plan investments in local electricity production, but it will support private investments. Local authorities have decided to give priority to the domain of buildings due to the large energy consumption and the municipality's infrastructure in this area. It will also finance changes in residential and commercial buildings for energy saving and will take advantage of national funding programs.
Furthermore, in cooperation with the local authorities, implementation priority is assigned to the proposed actions depending on their potential energy savings, their contribution to the CO2 emission reduction target, and financial indicators such as the cost of implementation for the municipality and the benefit for the citizens.
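As an illustration of such a prioritisation, the sketch below ranks hypothetical measures by a weighted score over the three factor groups named above; the measure names, weights and scores are all invented for the example.

```python
# Hypothetical prioritisation of measures by weighted score over the
# three factor groups: energy savings, CO2 reduction, financial indicators.
weights = {"energy_savings": 0.4, "co2_reduction": 0.4, "financial": 0.2}

measures = {  # scores on a 0-10 scale, invented for illustration
    "building retrofits": {"energy_savings": 9, "co2_reduction": 8, "financial": 5},
    "street lighting":    {"energy_savings": 5, "co2_reduction": 5, "financial": 8},
    "eco driving":        {"energy_savings": 3, "co2_reduction": 4, "financial": 9},
}

# Sort measures by descending weighted score to obtain the priority order.
priority = sorted(measures,
                  key=lambda m: sum(weights[f] * s for f, s in measures[m].items()),
                  reverse=True)
print(priority)
```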
The actions considered include:
Change of energy behavior
Changes in school buildings
Biofuel imports
Change of irrigation techniques
Eco driving
3. CONCLUSIONS
The identified actions contribute to reducing the carbon dioxide emissions by 21.54%. Therefore it is
concluded that local authorities are able to fulfill their commitments to the Covenant of Mayors. Giving
priority to measures that are complicated to implement yet very effective, taking advantage of funding programs, and giving citizens opportunities to adopt a greener lifestyle can make the target feasible. Based on the implementation report that will be submitted every two years, the required adjustments will take place.
ACKNOWLEDGEMENT
This paper has been presented at the 2nd International Symposium and 24th National Conference on
Operational Research. The authors would like to thank the participants for their comments.
REFERENCES
Covenant of Mayors, How to develop a Sustainable Energy Action Plan,
http://www.eumayors.eu/IMG/pdf/seap_guidelines_en.pdf
Konstantiou P., "Development of a Sustainable Energy Action Plan for the Municipality of Apokoronas" (in Greek).
Samaras A., Paravantis J.A. | Τhe Impact of Environmental Issues Upon Journalistically Mediated Nation
Images: An Intercultural Content Analysis
Abstract
News affects visibility, evaluation, framing and image attributes of nations. This research analyzes the impact of
environmental issues upon nation image in the news media. The analysis covered June 2012 and coded environmental
news items from newspapers in Greece, Turkey, Cyprus and the United Kingdom. A total of 105 news items were
collected, that focused on environmental issues in reference to another country, and 293 references to foreign
countries were documented in these news items. UK had the most and Turkey had the fewest news items related to the
environment. One third of the news items were published in conservative newspapers and were of smaller length.
Environmental news items almost never made the first page and were of small length, taking up more page space in
Greece and the UK compared to Cyprus and Turkey. Turkey and UK newspapers had more editorials compared to
Cyprus and Greece. On article tone, event dynamics appeared more prevalent in the UK; in Greece, event dynamics and article sources shared in importance almost exclusively; and in Cyprus, article sources led in importance,
followed by event dynamics. More references to other countries appeared in news items of newspapers in Cyprus
(4.61), followed by the UK (2.641), Greece (2.516) and Turkey (1.583). These references covered many areas of the
world with the United States of America (13% of all references to a foreign country), Brazil (6.5%), China (5.1%), France
and Germany (with 4.8% each) and the UK and Spain (with 3.4% each) being most often referred to. As regards framing,
most references made no specific links to the European Union and were not presented in a conflictual frame. Notable
was also the lack of presentation of a foreign country in terms of the problems it experiences due to the environment.
Finally, one third of references were presented in terms of a strategy frame. Future work could focus on specific topics
such as climate change and examine their presence in the public discourse amidst a global financial crisis.
KEYWORDS
Content Analysis; framing; nation image; media ethnocentrism; environmental issues.
1. INTRODUCTION
News affects the visibility, the evaluation, the framing and the image attributes of nations in the news. This
research analyzes the impact of environmental issues upon nation image in the news media (newspapers)
of four countries by examining environmental news items that made references to other countries.
2. LITERATURE REVIEW
Nation image is defined as the cognitive representation of a given country (Boulding, 1959; Kunczik, 1997;
Manheim & Albritton, 1984). It is comprised of two differentiated elements: image-projected, i.e. the image
as an attribute of the message; and image-perceived, the image as a cognitive structure i.e. the image
formulated in the minds of the audience (Hacker, 1995).
There are three varieties of image-projected: (a) The identity of the country as constructed and projected
by strategic communication of institutions of a country (e.g. international public relations, nation branding);
3 corresponding author, Department of International and European Studies, University of Piraeus, 18534 Greece, e-mail: [email protected]
(b) the image of the country that derives from events that take place within the country and/or otherwise
related with the country; and (c) the journalistically mediated image, the image in the news which is
produced by the combined operation of the (international) news making process, the strategies of actors
and domestic and international events. The impact of news media upon the construction of image-perceived is related to their capacity to make particular topics (agenda-setting) or aspects of these topics (framing) more accessible to the audience (Keplinger, 1992).
Frames are defined as “the persistent patterns of cognition, interpretation and presentation, of selection,
emphasis and exclusion by which symbol handlers routinely organise discourse whether verbal or visual”
(Gitlin, 1980; Carter, 2013). Framing is inherent in the news making process, since media cannot offer a mirror reflection of reality but involve selection. The selection of a news angle or storyline that transforms
an occurrence into a news event, and that, in turn, into a news report, is a frame. Frames are employed
strategically by political actors in order to achieve control of the meaning of a situation (terministic control)
as well as by journalists as part of the process of constructing and attributing meaning (Stoehrel, 2012) .
Frames and framing, while widely employed in the analysis of environmental news (e.g. Good, 2008; Knight et al., 2011; Lakoff, 2010; Nerlich & Koteyko, 2009), are notoriously underemployed in the analysis of nation images (Dell' Orto, Dong, Moore & Scheeweis, 2004).
Previous research has documented such effects at the level of nation images (Brewer, Graf & Willnat, 2003;
Kiousis & Wu, 2008). The focal point of this research is a particular form of news, environmental news: the manner in which it is presented within the context of (foreign) nation image making and the effect it has upon such images. A comparative intercultural analysis perspective is employed in order to
examine the impact of media ethnocentrism and the operation of the domestication processes. The
formation of nation image is examined as part of the dialectic between international events and local angles
which stands at the center of the glocalization process.
3. METHODOLOGY
In terms of methodology, the project utilized quantitative content analysis, a method that uses specific
norms to extract valid conclusions from the text under analysis.
The coding protocol, the tool employed for translating the qualitative characteristics of each article into numbers, was especially designed for this project. Its taxonomical component draws from frame theory,
image theory and sociology of news. For the development of certain taxonomies, grounded theory
methodology was employed. The coding protocol included taxonomies on type, frequency and placement
of reference of each country, the evaluative aspect of image, frames contributing to image formulation as
well as dominant news topic.
The time frame of the analysis was a period of one month (June 2012, chosen because it was
environmentally unremarkable) and covered the press of four countries: Greece, Turkey, Cyprus and the
United Kingdom (UK). Two newspapers (of opposing political predisposition) from each country were
examined. A total of 105 news items that were published in these newspapers and focused on
environmental issues in reference to another country, were examined. A total of 293 depictions of
(references to) foreign countries were documented in these articles.
Statistical analysis and graphing were done with Minitab version 16.2.3.
The comparative intercultural analysis allowed the examination of the impact of media ethnocentrism
(Gans, 1978) and the operation of the domestication processes (Nossek, 2004).
4. RESULTS
As shown in Table 1, UK had the most and Turkey had the fewest news items related to the environment.
Table 1: Number of news items per country
Country   Count   %
Cyprus      23    21.90
Greece      31    29.52
Turkey      12    11.43
UK          39    37.14
Total      105   100.00
Table 2 lists the number of items per newspaper. It is noted that one third of the news items (38 out of the
105 items or 36.19%) were published in conservative newspapers.
Table 2: Number of news items per newspaper
Newspaper      Count   %
Cumhuriyet        9     8.57
Fileleftheros    15    14.29
Guardian         21    20.00
Kathimerini       9     8.57
Politis           8     7.62
Sabah             3     2.86
Ta Nea           22    20.95
The Times        18    17.14
Total           105   100.00
As shown in Figure 1, there was a surge of news items on Saturday editions. All in all, one third of the news
items (37 of 105 or 35.24%) were published in Saturday or Sunday papers with the rest (68 or 64.76%) on
weekdays.
Figure 1: News items per weekday (1: Sunday to 7: Saturday).
Environmental news almost never (in 103 out of 105 news items) made the first page (so placement could
not be a variable in any further analysis). As shown in Figure 2, most environmental news items were of
small length.
Figure 2: Number of news items by length in pages (0.125 to 1.25 pages).
Environmental news items occupied a significantly different number of pages in each country (ANOVA
F-test=2.97, p=0.035), taking up more page space in Greece (0.379 of a page) and the UK (0.375) compared
to Cyprus (0.1957) and Turkey (0.1771). Also, it appeared that conservative newspapers devoted smaller
space to environmental issues mentioning other countries (0.25 of a page, compared to 0.3507 in other
newspapers, although this difference was only marginally significant).
Of the 105 news items, 87 were reporting and 18 commentary or opinion. The difference in the mix of reporting versus opinion articles (related to environmental issues and referencing other countries) was statistically significant among countries (chi square=8.381, p=0.039), with Turkey and UK newspapers having 33% and 28% editorials respectively, compared to 15% for Cyprus and just 3% for Greece.
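For illustration, the snippet below runs the same kind of chi-square test of independence on a hypothetical 2 x 4 contingency table that is merely consistent with the stated editorial percentages; it is not the authors' actual data.

```python
# Chi-square test of independence for reporting vs. opinion mix per country;
# the counts below are a hypothetical reconstruction, not the study's data.
from scipy.stats import chi2_contingency

#              Cyprus  Greece  Turkey  UK
table = [[20, 30, 8, 28],   # reporting
         [3, 1, 4, 11]]     # commentary / opinion

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```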
Regarding the tone of the 105 news items, in almost half of them (51 or 48.57%) it was determined by event dynamics, in one third (35 or 33.33%) by article sources, and in the remaining 19 (18.1%) by the author. Article tone differed (at a 90% confidence level) among countries (chi-square=10.993, p=0.089), as shown in Table 3: in the UK, event dynamics appeared more prevalent; in Greece, event dynamics and article sources shared in importance almost exclusively, demoting the importance of the author; and in Cyprus, article sources led in importance, followed by event dynamics.
Table 3: Determinants of article tone per country
Country   Article sources   Author   Event dynamics   All
Cyprus          11             4            8          23
Greece          12             1           18          31
Turkey           3             3            6          12
UK               9            11           19          39
Total           35            19           51         105
It is also noted that most news items (95 of 105) were based on own newspaper sources.
As shown in Figure 3, references to other countries covered many areas of the world with Europe, North
and South America being more represented. The United States of America (13% of all references to a
foreign country), Brazil (6.5%), China (5.1%), France and Germany (with 4.8% each) and the UK and Spain
(with 3.4% each) were countries most often referred to.
Figure 3: Count of references to foreign countries by world region.
Coming to the number of country references in a news item, as shown in Figure 4, most news items
contained one or two such references and very few news items contained more than 6.
Figure 4: Frequency of the number of country references per news item (COUNTREFS).
The average number of countries referenced in the 38 news items that were published in conservative
papers (3.075) was bigger than that of liberal newspapers (2.632) but this difference was statistically
insignificant (at a 95% confidence level and with equal variances determined by Levene’s test). On the other
hand, Analysis of Variance (ANOVA) showed that there was a significantly different number of references to
other countries in the news items of each country (F=3.43, p=0.020), with more references in news items of
newspapers in Cyprus (4.61), followed by the UK (2.641), Greece (2.516) and Turkey (1.583).
In most articles, the country referred to was not mentioned in the title nor was it the central topic of the
news item and there was mostly one direct reference to a country (Figure 5). The focus of the news item
was split between the domestic area of the foreign country and its operation at the level of the state
system (international dimension) and it was mostly unrelated to the news item origin country.
Figure 5: Frequency of direct references to a country per news item (DIREF).
The effects of environmental news upon the nation image were examined at four levels: visibility, valence
(evaluation), framing and domestication of foreign nation. Measured frames included:
European Frame: The foreign country is presented in terms of its relation with the European Union.
Problem Frame: The foreign nation is presented in terms of the problem it faces. This problem may
directly relate to the foreign nation or to a part thereof which, however, is directly associated with
the nation.
Conflict Frame: The nation is presented in the context of a conflict or confrontation.
Positive Effects Frame: The foreign nation is presented as having a positive impact for its
surroundings, for other nations or actors.
Strategic Frame: The nation is presented in terms of strategic moves and actions which aim to
serve its interests.
Melodramatic Frame: The presentation of the foreign nation is governed by the perspective of the
“human story” as well as by subjectification, sentimentalism and melodrama.
Hope Frame: The nation is presented in terms of hope and positive expectations. It either hopes or
creates hope. It either has positive expectations or creates positive expectations.
Fear Frame: The nation is presented in terms of fear, threat and negative expectations for the
future.
Cost Frame: The foreign nation is presented as bearing cost, having effects or negative implications
for its surroundings, for other nations.
The evaluation of a country is an important aspect of its nation image and derives from the dynamics of the event associated with the country, the journalistic bias and the strategies of the sources quoted. A five-point Likert measurement instrument (functioning as an evaluation thermometer) was employed: very negative (-2), negative (-1), neutral or no evaluation (i.e. evaluation deficit, 0), positive (+1) and very positive (+2).
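A minimal sketch of how such valence scores could be aggregated per referenced country is given below; the reference pairs are invented for illustration, not the study's data.

```python
# Average the coded evaluation scores (-2 to +2) over all references to
# each country; the (country, score) pairs are invented illustrations.
from collections import defaultdict
from statistics import mean

references = [("USA", -1), ("USA", 0), ("Brazil", 1), ("Brazil", 0), ("Japan", -2)]

by_country = defaultdict(list)
for country, score in references:
    by_country[country].append(score)

valence = {c: mean(scores) for c, scores in by_country.items()}
print(valence)  # {'USA': -0.5, 'Brazil': 0.5, 'Japan': -2}
```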
Although detailed results are not shown, in terms of valence, environmental issues resulted in a neutral (-0.02) evaluation for the foreign country. Of note were the negative valence of environmental news for the USA (-0.33; number of references n=39), Spain (-0.30; n=10) and Japan (-0.62; n=8); the positive valence
for Australia (+0.30; n=10) and Brazil (+0.11; n=19); and the fact that China had a neutral rather than
negative evaluation (0.00; n=15).
Most references made no links to the European Union and were not presented in a conflictual frame.
Notable also was the lack of presentation of a foreign country in terms of the problems it experiences due
to the environment. Finally, it is noted that one third of references were presented in terms of a strategy
frame.
5. CONCLUSIONS
This study integrated framing into nation image analysis and examined the manner in which environmental issues affect such images. The formation of each nation image was examined as part of the dialectic between international events and local angles. Such a comparative intercultural analysis allows the examination of the impact of media ethnocentrism and the operation of the nation frame.
Regarding limitations of this research, two step clustering (in order to accommodate all variable types) of
news items was attempted and showed that clustering schemes are largely determined by categorical
variables such as the tone of the article. More effort to group news articles as well as country references is
underway. Future work could also focus on specific topics such as climate change and examine their
presence in the public discourse at this time of a global financial crisis. Finally, future work could analyze the
impact of environmental issues upon nation image in conditions of crisis.
REFERENCES
Boulding, K., 1959. National Images and International Systems. Journal of Conflict Resolution, Vol. 3, No. 2, pp. 120-131.
Brewer, P.R., Graf, J., Willnat, L., 2003. Priming or Framing: Media Influence on Attitudes Towards Foreign Countries.
Gazette, Vol. 65, No. 6, pp. 493-508.
Carter, M.J., 2013. The Hermeneutics of Frames and Framing: An Examination of the Media's Construction of Reality.
SAGE Open, Vol. 3, pp. 1-12.
Dell’ Orto, G., Dong, D., Moore, J., Scheeweis, A., 2004. The Impact of Framing on Perception of Foreign Countries.
Ecquid Novi, Vol. 24, No. 3, pp. 294-312.
Gitlin, T., 1980. The Whole World is Watching. Berkeley: University of California Press.
Good, J.E., 2008. The Framing of Climate Change in Canadian, American, and International Newspapers: A Media
Propaganda Model Analysis. Canadian Journal of Communication, Vol. 33, pp. 233-255.
Hacker, K.L., 1995. Candidate Images in Presidential Elections. Praeger, Westport, Connecticut.
Keplinger, H. M., 1992. Put in the Spotlight Instrumental Actualization of Actors, Events, and Aspects in the Coverage on
Nicaragua. The Mass Media in Liberal Democratic Societies, Rothman, S. (Ed.), New York, Paragon House, pp. 201-219.
Kiousis, S., Wu, X., 2008. International Agenda-Building and Agenda-Setting. Gazette, Vol. 70, No. 1, pp. 58-75.
Knight, G., Greenberg, J., 2011. Talk of the Enemy: Adversarial Framing and Climate Change Discourse. Social Movement Studies, Vol. 10, No. 4, pp. 37-41.
Kunczik, M., 1997. Images of Nations and International Public Relations. Mahwah, NJ, Lawrence Erlbaum Associates.
Lakoff, G., 2010. Why it Matters How We Frame the Environment. Environmental Communication: A Journal of Nature
and Culture, Vol. 4, No. 1, pp. 70-81.
Manheim, J.B., Albritton, R.B., 1984. Changing National Images: International Public Relations and Media Agenda
Setting. American Political Science Review, Vol. 78, pp. 641-657.
Nerlich, B., Koteyko, N., 2009. Carbon Reduction Activism in the UK: Lexical Creativity and Lexical Framing in the Context
of Climate Change. Environmental Communication: A Journal of Nature and Culture, Vol. 3, No. 2, pp. 206-223.
Nossek, H., 2004. Our News and Their News: The Role of National Identity in the Coverage of Foreign News. Journalism, Vol. 5, No. 3, pp. 343-368.
Stoehrel, V., 2012. The Battle for the Citizens’ Opinions, the Power of Language, the Media and Climate Change: The
sources and their strategies in the U.S. and in Sweden. Observatorio Journal, Vol. 6, No. 1, pp. 25-46.
Grigoroudis E., Politis Y. | A Robust Extension of the MUSA Method Based on Desired Properties of the
Collective Preference System
Abstract
The MUSA method is a collective preference disaggregation approach following the main principles of ordinal
regression analysis under constraints using linear programming techniques. The method has been developed in order to
measure and analyze customer satisfaction and it is used for the assessment of a set of marginal satisfaction functions
in such a way that the global satisfaction criterion becomes as consistent as possible with customer’s judgments. Thus,
the main objective of the method is to assess collective global and marginal value functions by aggregating individual
judgments. This study presents an extension of the MUSA method based on desired properties of the inferred
preference system. In particular, the linear programming formulation of the method gives the ability to consider
additional constraints regarding special properties of the assessed model variables. One of the most interesting
extensions concerns additional properties for the assessed average indices. These indices refer to the average
satisfaction indices, which are the mean value of the global and marginal value functions and can be considered as the
basic performance norms and the average demanding indices, which indicate customers’ demanding level and
represent the average deviation of the estimated value functions from a “normal” (linear) function. The main aim of the
study is to show how incorporating these additional constraints in the linear program of the original MUSA method, the
robustness of the estimated results may be improved. In addition, the study presents potential problems in the
aforementioned approach, especially in case of inconsistencies between global and partial judgments, and proposes
alternative modeling techniques based on goal programming that may be used in the post-optimality analysis step of
the method.
KEYWORDS
1. INTRODUCTION
The MUSA (MUlticriteria Satisfaction Analysis) method is a preference disaggregation approach following the
main principles of ordinal regression analysis. It measures and analyzes customer satisfaction assuming that
customer’s global satisfaction is based on a set of criteria representing service characteristic dimensions.
The main objective of the MUSA method is the aggregation of individual judgments into a collective value
function.
Different extensions of the method for the improvement of the provided results include additional decision makers' (DMs')
preferences or desired properties of the inferred preference system. These extensions concern properties
for the estimated value functions, hierarchy or interaction of criteria, alternative objective functions (during
the post-optimality analysis), different types of input data (ordinal/cardinal), etc.
This study presents an extension of the MUSA method based on desired properties of the inferred
preference system and the examination of whether the introduction of additional constraints in the linear
program of the original MUSA method may improve the robustness of the estimated results.
2. THE MUSA METHOD
The method assumes an additive formula expressing the collective global value function $Y^*$ in terms of the criteria (partial) value functions $X_i^*$:
$$Y^{*} = \sum_{i=1}^{n} b_{i} X_{i}^{*}, \qquad \sum_{i=1}^{n} b_{i} = 1 \qquad (1)$$
where $b_i$ is the weight of the i-th criterion and the value functions $Y^*$ and $X_i^*$ are normalised in the interval [0, 100]. The main objective of the method is to achieve the maximum consistency between the value function $Y^*$ and the customers' judgments Y. Based on the above modelling and by introducing two error variables $\sigma^+$ and $\sigma^-$, the ordinal regression equation becomes as follows:
$$\tilde{Y}^{*} = \sum_{i=1}^{n} b_{i} X_{i}^{*} - \sigma^{+} + \sigma^{-} \qquad (2)$$
where $\tilde{Y}^{*}$ is the estimation of the global value function $Y^{*}$, and $\sigma^{+}$ and $\sigma^{-}$ are the overestimation and underestimation error, respectively.
The final form of the linear programming problem is as follows:
$$[\min]\ F = \sum_{j=1}^{M} \left( \sigma_j^{+} + \sigma_j^{-} \right)$$
subject to
$$\sum_{i=1}^{n} \sum_{k=1}^{t_{ji}-1} w_{ik} - \sum_{m=1}^{t_j-1} z_m - \sigma_j^{+} + \sigma_j^{-} = 0 \quad \text{for } j = 1, 2, \dots, M$$
$$\sum_{m=1}^{\alpha-1} z_m = 100, \qquad \sum_{i=1}^{n} \sum_{k=1}^{\alpha_i-1} w_{ik} = 100 \qquad (3)$$
$$z_m \ge 0,\ w_{ik} \ge 0 \quad \forall\, m, i, k, \qquad \sigma_j^{+} \ge 0,\ \sigma_j^{-} \ge 0 \quad \text{for } j = 1, 2, \dots, M$$
where $t_j$ and $t_{ji}$ are the judgments of the j-th customer globally and partially for each criterion i = 1, 2, ..., n, M is the number of customers, and $z_m$, $w_{ik}$ are a set of transformation variables such that:
$$z_m = y^{*m+1} - y^{*m} \quad \text{for } m = 1, 2, \dots, \alpha - 1$$
$$w_{ik} = b_i x_i^{*k+1} - b_i x_i^{*k} \quad \text{for } k = 1, 2, \dots, \alpha_i - 1 \text{ and } i = 1, 2, \dots, n \qquad (4)$$
where $\alpha$ and $\alpha_i$ are the number of levels of the ordinal evaluation scale for the global assessment and for the assessment of the i-th criterion, respectively.
Furthermore, taking into account the hypothesis of strict preferential order of the scales of some or all the
dimensions/criteria, the following conditions are met:
$$y^{*m} < y^{*m+1} \Leftrightarrow y^{m} \prec y^{m+1} \quad \text{for } m = 1, 2, \dots, \alpha - 1$$
$$x_i^{*k} < x_i^{*k+1} \Leftrightarrow x_i^{k} \prec x_i^{k+1} \quad \text{for } k = 1, 2, \dots, \alpha_i - 1 \text{ and } i = 1, 2, \dots, n \qquad (5)$$
where $\prec$ means "strictly less preferred". Based on (5) the following conditions occur:
$$y^{*m+1} - y^{*m} \ge \gamma \Rightarrow z_m - \gamma = z'_m \ge 0 \quad \text{for } m = 1, 2, \dots, \alpha - 1$$
$$x_i^{*k+1} - x_i^{*k} \ge \gamma_i \Rightarrow w_{ik} - \gamma_i = w'_{ik} \ge 0 \quad \text{for } k = 1, 2, \dots, \alpha_i - 1 \text{ and } i = 1, 2, \dots, n \qquad (6)$$
where $\gamma$ and $\gamma_i$ are the preference thresholds (minimum step of increase) for the value functions $Y^*$ and $X_i^*$, respectively, with $\gamma, \gamma_i > 0$, and it is set that:
$$z_m = z'_m + \gamma, \qquad w_{ik} = w'_{ik} + \gamma_i \qquad (7)$$
During post-optimality analysis, n linear programs of the following form are solved, each maximising the weight of one criterion i:
$$[\max]\ F'_i = \sum_{k=1}^{\alpha_i - 1} w_{ik} \quad \text{subject to } F \le F^{*} + \varepsilon \text{ and all the constraints of LP (3)}, \quad i = 1, 2, \dots, n \qquad (8)$$
where $F^{*}$ is the optimal value of the objective function of LP (3) and $\varepsilon$ is a small percentage of $F^{*}$. The average of the optimal solutions given by the n LPs (8) may be considered as the final solution of the problem. In case of instability, a large variation of the provided solutions appears and the final average solution is less representative.
The stability of the results provided by the post-optimality analysis is calculated with the Average Stability
Index (ASI). ASI is the mean value of the normalized standard deviation of the estimated weights bi and is
calculated as follows:
$$\mathrm{ASI} = 1 - \frac{1}{n} \sum_{i=1}^{n} \frac{\sqrt{\,n \sum_{j=1}^{n} \left(b_i^j\right)^2 - \left(\sum_{j=1}^{n} b_i^j\right)^2}}{100\,\sqrt{n-1}} \qquad (9)$$
where $b_i^j$ is the estimated weight of criterion i in the j-th post-optimality analysis LP (Grigoroudis and Siskos, 2002; Grigoroudis and Siskos, 2010).
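As an illustration, the snippet below evaluates formula (9) on a small hypothetical matrix of post-optimality weights; the matrix is invented for the example.

```python
# Average Stability Index (9) from a hypothetical matrix of post-optimality
# weights: row j holds the weights b_i^j (in %) of the j-th LP.
import numpy as np

B = np.array([[40.0, 35.0, 25.0],
              [42.0, 33.0, 25.0],
              [38.0, 36.0, 26.0]])  # n = 3 criteria, n post-optimality LPs

n = B.shape[0]
num = np.sqrt(n * (B**2).sum(axis=0) - B.sum(axis=0)**2)
ASI = 1 - (num / (100 * np.sqrt(n - 1))).mean()
print(ASI)  # close to 1 => stable weights
```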
Furthermore, the fitting level of the MUSA method refers to the assessment of a collective preference value system (value functions, weights, etc.) for the set of customers with the minimum possible errors. For this reason, the optimal values of the error variables indicate the reliability of the evaluated value system.
The Average Fitting Index (AFI) depends on the optimum error level and the number of customers:
$$\mathrm{AFI} = 1 - \frac{F^{*}}{100 \cdot M} \qquad (10)$$
where $F^{*}$ is the minimum sum of errors of the initial LP, and M is the number of customers.
The AFI is normalised in the interval [0, 1], and it is equal to 1 when $F^{*} = 0$, i.e. when the method is able to evaluate a preference value system with zero errors. Similarly, the AFI takes the value 0 only when the pairs of error variables $\sigma_j^{+}$ and $\sigma_j^{-}$ take the maximum possible values.
An alternative fitting indicator is based on the percentage of customers with zero error variables, i.e. the
percentage of customers for whom the estimated preference value systems fits perfectly with their
expressed satisfaction judgments. This average fitting index AFI2 is assessed as follows:
$$\mathrm{AFI}_2 = \frac{M_0}{M} \qquad (11)$$
where $M_0$ is the number of customers for whom $\sigma^{+} = \sigma^{-} = 0$.
A final average fitting index AFI3 takes into account the maximum values of the error variables for every global satisfaction level, as well as the number of customers that belong to this level:
$$\mathrm{AFI}_3 = 1 - \frac{F^{*}}{M \sum_{m=1}^{\alpha} p^m \max\left\{ y^{*m},\, 100 - y^{*m} \right\}} \qquad (12)$$
where $p^m$ is the frequency of customers belonging to the $y^m$ satisfaction level.
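The fitting indices (10) and (11) can be computed directly from the optimal error variables, as the following sketch with hypothetical error values illustrates.

```python
# Fitting indices (10) and (11) from hypothetical optimal error values
# of the MUSA linear program; the numbers are invented for illustration.
sigma_plus = [0.0, 5.0, 0.0, 10.0]   # overestimation errors per customer
sigma_minus = [0.0, 0.0, 2.5, 0.0]   # underestimation errors per customer

M = len(sigma_plus)
F_star = sum(sigma_plus) + sum(sigma_minus)

AFI = 1 - F_star / (100 * M)
M0 = sum(1 for sp, sm in zip(sigma_plus, sigma_minus) if sp == sm == 0)
AFI2 = M0 / M
print(AFI, AFI2)  # 0.95625 0.25
```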
2.3. Results
The most important results of the method are the estimated value functions. Another interesting result
concerns the average global and partial satisfaction indices, S and S i , which can be assessed according to
the following equations:
$$S = \frac{1}{100} \sum_{m=1}^{\alpha} p^m y^{*m}, \qquad S_i = \frac{1}{100} \sum_{k=1}^{\alpha_i} p_i^k x_i^{*k} \qquad (13)$$
where $p^m$ and $p_i^k$ are the frequencies of customers belonging to the $y^m$ and $x_i^k$ satisfaction levels, respectively. It should be noted that the average satisfaction indices are basically the mean value of the global or partial value functions and they are normalised in the interval [0, 100%].
The average global and partial demanding indices, D and $D_i$, represent the average deviation of the estimated value curves from a "normal" (linear) function and reveal the demanding level of customers. They are normalised in the interval [-1, 1] and are assessed as follows:
$$D = \frac{\displaystyle\sum_{m=1}^{\alpha-1} \left( \frac{100\,(m-1)}{\alpha-1} - y^{*m} \right)}{\displaystyle\sum_{m=1}^{\alpha-1} \frac{100\,(m-1)}{\alpha-1}} \quad \text{for } \alpha > 2$$
$$D_i = \frac{\displaystyle\sum_{k=1}^{\alpha_i-1} \left( \frac{100\,(k-1)}{\alpha_i-1} - x_i^{*k} \right)}{\displaystyle\sum_{k=1}^{\alpha_i-1} \frac{100\,(k-1)}{\alpha_i-1}} \quad \text{for } \alpha_i > 2 \text{ and } i = 1, 2, \dots, n \qquad (14)$$
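For illustration, the snippet below evaluates formulas (13) and (14) for a hypothetical five-level satisfaction scale; the frequencies and the value function are invented for the example.

```python
# Average satisfaction index S (13) and average demanding index D (14)
# for a hypothetical 5-level scale; all numbers are illustrative.
alpha = 5
p = [0.05, 0.10, 0.25, 0.40, 0.20]       # frequencies of the y^m levels
y_star = [0.0, 15.0, 40.0, 70.0, 100.0]  # estimated global value function

S = sum(pm * ym for pm, ym in zip(p, y_star)) / 100  # normalised to [0, 1]

# Deviation of the value curve from the linear ("normal") function.
linear = [100 * (m - 1) / (alpha - 1) for m in range(1, alpha)]
D = sum(l - y for l, y in zip(linear, y_star[:alpha - 1])) / sum(linear)
print(S, D)  # 0.595 and about 0.167 (positive D => demanding customers)
```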
3. ADDITIONAL PROPERTIES OF THE AVERAGE INDICES
For example, a linkage between global and partial average satisfaction indices may be assumed, since these indices are considered as the main performance indicators of the business organization. In particular, the global average satisfaction index S is assessed as a weighted sum of the partial satisfaction indices $S_i$:
$$S = \sum_{i=1}^{n} b_i S_i \qquad (15)$$
Taking into account the transformation variables $z_m$, $w_{ik}$ and using formula (13), the above equation becomes as follows:
$$\sum_{m=2}^{\alpha} p^m \sum_{t=1}^{m-1} z_t = \sum_{i=1}^{n} \sum_{k=2}^{\alpha_i} p_i^k \sum_{t=1}^{k-1} w_{it} \qquad (16)$$
In the case of the generalized MUSA method, the preference thresholds γ and γi should be introduced, and
equation (16) is written:
$$\sum_{i=1}^{n} \sum_{k=2}^{\alpha_i} p_i^k \sum_{t=1}^{k-1} w'_{it} - \sum_{m=2}^{\alpha} p^m \sum_{t=1}^{m-1} z'_t = \sum_{m=2}^{\alpha} p^m\, \gamma\,(m-1) - \sum_{i=1}^{n} \sum_{k=2}^{\alpha_i} p_i^k\, \gamma_i\,(k-1) \qquad (17)$$
Similarly, a weighted sum formula may be assumed for the average demanding indices:
$$D = \sum_{i=1}^{n} b_i D_i \qquad (18)$$
Taking into account the transformation variables $z_m$, $w_{ik}$ and using formula (14), the previous equation can be written in terms of the MUSA variables:
$$\frac{\displaystyle\sum_{m=1}^{\alpha-1} \left( \frac{100\,(m-1)}{\alpha-1} - \sum_{t=1}^{m-1} z_t \right)}{50\,(\alpha-2)} = \sum_{i=1}^{n} \frac{\displaystyle\sum_{k=1}^{\alpha_i-1} \left( \frac{k-1}{\alpha_i-1} \sum_{k'=1}^{\alpha_i-1} w_{ik'} - \sum_{t=1}^{k-1} w_{it} \right)}{50\,(\alpha_i-2)} \qquad (19)$$
In the case of the generalized MUSA method, equation (19) should be modified by introducing the variables $z'_m$ and $w'_{ik}$ (see formula (7)).
The equations (17) and (19) may be introduced as additional constraints in the LP (3). However, these
additional properties of average indices should be used carefully, since their form does not guarantee a
feasible solution of the LP, especially in case of inconsistencies between global and partial satisfaction
judgments. For this reason, the aforementioned equations may be written using a goal programming
formulation and used alternatively as post-optimality criteria.
Table 2 presents the results of the original MUSA method and of the extension of the method with the successive introduction of additional constraints for the average satisfaction and demanding indices. According to Table 2, there is a remarkable increase of the ASI and AFI2 indices when the additional constraints are introduced in the original MUSA method. The slight decrease (<1%) that appears for the other two fitting indices (AFI1 and AFI3) does not offset the considerably improved results.
4. CONCLUDING REMARKS
The introduction of additional constraints in the linear program of the original MUSA method seems to
increase its stability. The MUSA method is a rather flexible approach and thus several extensions may be
developed taking into account additional information or data. Additional information that can be
introduced in the original MUSA method and improve its robustness may include preferences for the
importance of the criteria or other model properties. A simulation study can also be performed in order to
study the impact of model parameters and whether appropriate combinations of these parameters can
improve the stability of the provided results. Additional measures of robustness may facilitate the
investigation of various extensions of the MUSA model.
ACKNOWLEDGEMENT
This research has been co‐financed by the European Union (European Social Fund – ESF) and Greek national
funds through the Operational Program "Education and Lifelong Learning" of the National Strategic
Reference Framework (NSRF) ‐ Research Funding Program: THALES. Investing in knowledge society through
the European Social Fund.
REFERENCES
Grigoroudis, E. and Y. Siskos (2002). Preference disaggregation for measuring and analysing customer satisfaction: The
MUSA method, European Journal of Operational Research, 143 (1), 148-170.
Grigoroudis, E. and Y. Siskos (2010). Customer Satisfaction Evaluation: Methods for Measuring and Implementing
Service Quality, Springer, New York.
Jacquet-Lagrèze, E. and J. Siskos (1982). Assessing a set of additive utility functions for multicriteria decision-making: The
UTA method, European Journal of Operational Research, 10 (2), 151-164.
Siskos, J. (1985). Analyses de régression et programmation linéaire, Révue de Statistique Appliquée, 23 (2), 41-55.
Siskos, Y. and D. Yannacopoulos (1985). UTASTAR: An ordinal regression method for building additive value functions,
Investigaçao Operacional, 5 (1), 39-53.
Mavrotas G., Xidonas P., Doukas H., Psarras J. | Constructing Robust Efficient Frontiers for Portfolio
Selection Under Various Future Returns Scenarios
Abstract
An efficient frontier in the typical portfolio selection problem constitutes an illustrative way to express the tradeoffs between return and risk. Following the basic ideas of modern portfolio theory as introduced by Markowitz, the security returns are usually extracted from past data. This work is an attempt to incorporate future returns scenarios in the investment decision. For representative points of the efficient frontier, the minimax regret portfolio is calculated on the basis of the aforementioned scenarios. These points correspond to specific weight combinations. In this way, the areas of the efficient frontier that are more robust than others are identified. An example with the 50 securities of the Eurostoxx 50 is also presented to illustrate the method.
KEYWORDS
Multiobjective programming; Robust programming; Financial modeling
1. INTRODUCTION
In financial theory, models allowing the selection of an optimal portfolio are all inspired from the classical
theory of Markowitz (1952, 1959), which is exclusively based on the criteria of expected value and variance
of the return distribution. In this regard, an investor considers expected return as desirable and variance of
return as undesirable. The Markowitz’s theory describes how we calculate a portfolio which exhibits the
highest expected return for a given level of risk, or the lowest risk for a given level of expected return
(efficient portfolio). Then, according to the theory, the problem of portfolio selection is a single-objective
quadratic programming problem, which consists in minimizing risk, while keeping in mind an expected
return which should be guaranteed. Thus, the solution of the original bi-objective model is reduced to the
parametric solution of a single objective problem, providing the efficient (or Pareto optimal) portfolios
(Xidonas et al, 2010, 2011).
In Modern Portfolio Theory (MPT) the return and risk for each stock in the investment universe are usually
extracted from past data. In this work we try to incorporate future scenarios for the return and risk that is
mainly based on the perspectives of the investor/decision maker. It is an attempt to show how this
information may be exploited and produce robust portfolios against the future scenarios. We deal with
future scenarios using the concept of the minimax regret criterion as it was proposed for Mathematical
Programming problems by Kouvelis and Yu (1997).
The methodological contribution of the present work is that it expands the concept of the robust solution
as it was proposed by Kouvelis and Yu to the multi-objective case. Namely, we use the minimax regret
criterion in order to identify robust Pareto optimal solutions in the Pareto front of multi-objective problems.
In this way we can identify areas of the Pareto front that are more robust than others. The specific areas of
the Pareto front are characterized by the weight combination used in the objective functions.
The remainder of the paper is organized as follows: in the second section we describe the methodology, and in the third section we present the application that illustrates the method. Finally, in section 4 the main conclusions are presented.
2. METHODOLOGY
Consider the following illustrative example with three solutions and five scenarios; the left block gives the objective function value of each solution under each scenario, and the right block the corresponding regret, i.e. the distance from the optimum of that scenario.
Table 1: Example of the minimax regret calculation

             Objective value                  Regret                    Worst case
             Scen1 Scen2 Scen3 Scen4 Scen5    Scen1 Scen2 Scen3 Scen4 Scen5
Solution 1     3    12    14     6     6        8     0     0     3     4         8
Solution 2    11     8     7     6    10        0     4     7     3     0         7
Solution 3     8     8    11     9     5        3     4     3     0     5         5
In the rightmost column we calculate the maximum of each row to find the worst case scenario for each solution (worst case in the sense that it is the most distant from the optimum of the scenario). The minimax regret solution is the one that has the minimum among the worst case values, which is Solution 3 in our case.
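The calculation just described can be reproduced in a few lines; the sketch below re-uses the objective values from the example table above (maximisation is assumed, as in the example).

```python
# Minimax regret for the example above: values[s][j] is the objective value
# of solution j under scenario s (maximisation).
values = [[3, 11, 8],    # Scen1
          [12, 8, 8],    # Scen2
          [14, 7, 11],   # Scen3
          [6, 6, 9],     # Scen4
          [6, 10, 5]]    # Scen5

# Regret of a solution under a scenario: distance from the scenario's optimum.
regrets = [[max(row) - v for v in row] for row in values]

# Worst-case regret per solution, then pick the solution minimising it.
worst = [max(r[j] for r in regrets) for j in range(3)]
print(worst)                    # [8, 7, 5]
print(worst.index(min(worst)))  # 2 -> Solution 3
```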
Kouvelis & Yu (1997) accomplish this task not for a finite number of solutions but for an infinite number of
solutions according to the feasible region of the problem. Other attempts include the works of Mausser and
Laguna (1999) as well as Loulou and Kanudia (1999). Following Kouvelis and Yu (1997), assume the following mathematical programming problem:
$$\max\ z = cx \quad \text{s.t.} \quad x \in F$$
Assume now that we have S scenarios for the objective function coefficients, which means that the corresponding objective function coefficient vectors are denoted as $c^s$. The minimax regret solution is calculated from the following problem (stated here for the relative regret criterion; the deviation regret criterion uses the absolute difference $z_s^* - c^s x$ instead):
$$\min\ y \quad \text{s.t.} \quad c^{s} x \ge (1-y)\, z_{s}^{*} \quad \forall\, s \in S, \qquad x \in F$$
where $z_s^*$ is the optimal value for the s-th scenario and y is the variable that expresses the minimax regret.
For the multi-objective case, a representative Pareto optimal solution is obtained with the weighting method, using normalised objective functions:
$$\max \sum_{p=1}^{P} w_p\, \frac{f_p(x) - f_{p,\min}}{f_{p,\max} - f_{p,\min}} \quad \text{s.t.} \quad x \in F$$
where $f_{p,\min}$ and $f_{p,\max}$ are the minimum and the maximum values of the objective functions as obtained from the payoff table. The solution of this problem corresponds to a Pareto optimal solution of the multi-objective problem. Varying the weights, we obtain a representative set of the Pareto optimal solutions of the multi-objective problem. The concept of our method is to apply the Kouvelis and Yu formulation to every combination of the weights. In this way we obtain the minimax regret solutions that correspond to different areas of the Pareto front. Assuming that we have S scenarios for the objective function coefficients, we discretize the weight space into G weight combinations (vectors) and we solve the following problem:
$$\mathrm{MMR}(g) = \min\ y_g \quad \text{s.t.} \quad \sum_{p=1}^{P} w_p^g\, \frac{c_p^s x - f_{p,\min}^s}{f_{p,\max}^s - f_{p,\min}^s} \ge (1 - y_g)\, z_g^s \quad \forall\, s \in S, \qquad x \in F$$
We solve the above problem for every g, obtaining the MMR solution at representative points of the Pareto front. According to the value of the minimax regret solution ($y_g$) we can draw conclusions about the areas of higher or lower robustness of the Pareto front. The flowchart of the algorithm, adjusted to the specific case study where the objective functions are the maximisation of return (ret) and the minimisation of risk, is depicted in Figure 1.
Figure 1: Flowchart of the algorithm: for each weight vector g = 1, ..., G, model (1) is solved for every scenario s = 1, ..., S to obtain $z_s^{\max}(g)$, and model (2) is then solved to obtain MMR(g).
3. APPLICATION
We use the 50 stocks of the Eurostoxx 50, Europe's leading blue-chip index for the Eurozone, which provides a high-capitalization representation of supersector leaders in the Eurozone. The index covers 50 stocks from 12 Eurozone countries. The model is presented in Xidonas and Mavrotas (2013). We use five scenarios of return and risk evolution. In the absence of actual decision makers we create the 5 scenarios for the return and the risk as follows: we used historical data of 80, 60, 40, 20, and 10 weeks, extracting the average return and the Mean Absolute Deviation (MAD) as a risk measure from the corresponding data. Therefore scenario 1, which corresponds to an 80-week past horizon, denotes a more long-term point of view than scenarios 2, 3, 4 and 5, which denote progressively more short-term behavior. The five efficient frontiers are depicted in Figure 2.
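A minimal sketch of this scenario construction is given below; the weekly return matrix is randomly generated as a placeholder for the actual historical price data.

```python
# Derive each scenario's inputs: average weekly return and Mean Absolute
# Deviation (MAD) over trailing windows of 80, 60, 40, 20 and 10 weeks.
import numpy as np

rng = np.random.default_rng(0)
weekly_returns = rng.normal(0.001, 0.03, size=(80, 50))  # placeholder data

scenarios = {}
for window in (80, 60, 40, 20, 10):
    data = weekly_returns[-window:]
    mean_ret = data.mean(axis=0)                # expected return per stock
    mad = np.abs(data - mean_ret).mean(axis=0)  # MAD risk measure per stock
    scenarios[window] = (mean_ret, mad)
```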
Figure 2: The five efficient frontiers (one per scenario; risk on the horizontal axis, return on the vertical axis).
The obtained results using the minimax regret model are shown in Table 2. We used 11 weight combinations, namely (0,1), (0.1,0.9), (0.2,0.8), ..., (0.9,0.1), (1,0). The optimum of each scenario for the weight combinations (0,1), (0.1,0.9) and (1,0), along with the minimax regret solution, are shown in Table 2 (the first objective function is the minimization of risk and the second one is the maximization of return).
Table 2: The output format of the obtained solutions for three weight coefficients (w1 = 0.0, 0.1 and 1.0)
We can observe that the minimax regret solution in all the weight combinations contains more stocks in
the final portfolio than the individual scenarios optima. This means that we have a more dispersed
allocation of the total investment.
The minimax regret (MMR) solution across the Pareto front is obtained from the MMR values for the
specific weight combinations. Hence we can detect areas of the Pareto front that present relatively
increased robustness in relation to other areas. We calculate the MMR solutions for the 11 weight
coefficients for the relative and the deviation regret criterion (see section 2). The results are shown in
Figure 3 for the relative MMR.
Figure 3: The MMR values across the Pareto front (relative MMR); horizontal axis: w1 (weight for risk).
In Figure 3, the lower the MMR, the more robust the specific area of the Pareto front. Hence it is obvious that there are areas in the Pareto front with higher robustness based on the 5 scenarios. For example, the Pareto optimal solutions that correspond to weights varying from 0.1 to 0.4 are less robust than the Pareto optimal solutions that correspond to weights varying from 0.6 to 1 (the robust area of the Pareto front).
4. CONCLUSIONS
We extend the Kouvelis and Yu formulation for the minimax regret criterion to multi-objective programming problems. Using the weighting method for generating the Pareto optimal solutions, we can detect the robust areas of the efficient frontier. Future research can be directed towards examining the effectiveness of the method for more objective functions and also in other applications.
ACKNOWLEDGEMENT
This research has been co-financed by the European Union (European Social Fund) and Greek national funds
through the Operational Program "Education and Lifelong Learning"
REFERENCES
Markowitz, H., 1952. Portfolio selection. The Journal of Finance, 7 (1), 77-91.
Markowitz, H., 1959. Portfolio Selection: Efficient Diversification of Investments. John Wiley and Sons, New York.
Kouvelis, P., Yu, G., 1997. Robust Discrete Optimization and its Applications. Kluwer, Amsterdam.
Loulou, R., Kanudia, A., 1999. Minimax regret strategies for greenhouse gas abatement: methodology and application. Operations Research Letters, 25, 219-230.
Mausser, H.E., Laguna, M., 1999. Minimising the maximum relative regret for linear programmes with interval objective function coefficients. Journal of the Operational Research Society, 50, 1063-1070.
Xidonas, P., Mavrotas, G., 2013. Multiobjective portfolio optimization with non-convex policy constraints: Evidence from
the Eurostoxx 50. European Journal of Finance, To appear.
Xidonas, P., Mavrotas, G., Zopounidis, C., Psarras, J., 2011. IPSSIS: An integrated multicriteria decision support system for
equity portfolio construction and selection. European Journal of Operational Research, 210 (2), 398-409.
Xidonas, P., Mavrotas, G., Psarras, J., 2010. Equity portfolio construction and selection using multiobjective
mathematical programming. Journal of Global Optimization, 47 (2), 185-209.
Kouzari E., Stamelos I.| Process Mining in Software Events of Open Source Software Projects
Abstract
Process Mining has only been extensively studied for the past few years and yet it has already helped to better
understand the way systems work and made possible the extraction of knowledge that can be used to improve
processes of a system. In this paper, the focus of process mining is on Open Source Software projects and software
events. The use of data that are publicly available on the web with the application of process mining algorithms can lead
to useful conclusions on characteristics of successful projects and ways to help those who appear to be risky. This can
be achieved by analyzing the event logs in order to extract process models but also in order to identify social networks
within the community of a project. In addition, with further research and analysis a general process specification could
be created to help Open Source Software projects reengineer their processes and improve their activities in order to
succeed. In this paper there is an attempt to extract a process model from an Open Source Software project and
compare it with the process that the development team should follow. More specifically, the effort is focused on
the bug handling process; thus, the extraction of the process model actually followed could help improve the way
tasks are shared and bugs are closed.
KEYWORDS
Process Mining, Software Events, Open Source Software.
1. INTRODUCTION
During the last decade a lot of interest has been shown in the application of methods and tools that are
able to extract critical conclusions from complex information systems such as ERP, SCM and CRM systems.
Process Mining [1], a relatively young research discipline, has helped better understand the way systems
work and has made possible the extraction of information that can help improve each process of a system.
In order to perform process mining in any type of system, an event log must be first created. All relevant
activity performed in a system is recorded in a well-defined way such that each event refers to an activity
and is also related to a process instance. Additionally, information on the actor of each event as well as a
timestamp can also accompany each record in an event log [2]. Once the event log is available, valuable
conclusions can be drawn when process mining algorithms are applied. There is a variety of algorithmic
approaches that can derive models that correspond to the control-flow, the organization and the
information present in a system [3]. The existence of multiple tools (e.g. Little Thumb, Process Miner, HP
BPI etc.) that used different formats for reading/storing logs files and showed the results in various formats
made it difficult for researchers to combine algorithms while process mining [4]. That led to the creation of
the ProM Framework, a framework that integrates the functionality of several existing process mining tools and
also provides plug-ins for additional process mining algorithms [4][5].
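As a minimal illustration of the event-log record structure described above (the field names here are ours, not a ProM or XES schema):

```python
from dataclasses import dataclass
from datetime import datetime

# Each event refers to an activity, belongs to a process instance (case),
# and may also carry the actor and a timestamp.
@dataclass
class Event:
    case_id: str        # process instance the event belongs to
    activity: str       # well-defined activity name
    actor: str          # originator of the event
    timestamp: datetime

log = [
    Event("bug-1", "New", "reporter1", datetime(2013, 1, 5, 10, 0)),
    Event("bug-1", "Resolved", "dev1", datetime(2013, 1, 7, 9, 30)),
    Event("bug-1", "Fixed", "dev1", datetime(2013, 1, 7, 9, 31)),
]
```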
Process Mining is applied in order to gain knowledge on the actual way a process is executed in a software
project. Thus, the interested parties are able to perform conformance checking to ensure the correct
execution of a process or in many cases to find ways to improve the activities within a process improving
the process itself in the end. In addition, it is useful when a process mining algorithm identifies decision
points that show how data attributes influence the choices made in the process, based on past process
executions [6].
Both Open Source and Proprietary software projects need project management tools throughout their life
span. Although there are a lot of tools for this purpose available for closed source software, when it comes
to open source projects the situation gets complicated due to their nature. The application of Process
Mining algorithms in Open Source software (OSS) projects can lead to the extraction of processes that show
the way developers work and communicate. By analyzing these process models, it is possible to identify
bottlenecks that threaten a project's survivability and thus find ways to address them early, avoiding
unpleasant consequences. Of course, all of this work would not be feasible without the existence of events
that are recorded in many forms in every OSS project.
The remainder of this paper is structured as follows. Section 2 reviews important concepts of Open Source
software such as OSS communities and logging practices and events in OSS. Section 3 analyses the role of
software events in Process Mining for the improvement of OSS projects. Section 4 shows the analysis of
some closed bugs in the OSS project OpenVZ and presents the derived process instances. Finally, Section 5
summarizes the conclusions drawn and includes directions for future work.
demonstrate a clear sequence of executions of activities or it may reveal that no process model is followed
at all. In the first case, it might be more interesting to pay attention to the activities executed in each
process: the sequence of execution, activities that are executed concurrently and users/actors that are
involved in each activity. In the second case, the consequences of not following a process model could be
monitored and ways of introducing good practices could be suggested.
A social network can also be constructed when event logs are available. This is also very important as
interesting structures may be revealed [11]. The density of the network, the role of an individual in the
network and potential cliques can be used to identify risks or show signs that could lead to the
improvement of the project’s process model.
Since each record in the event log provides details about the actor of the event, answers to questions such as
who collaborates with whom, or who hands out work to whom, can be given. By identifying developers that
frequently collaborate, and by associating this detail with project characteristics such as healthy,
successful, risky, etc., one can derive patterns that show when a project is likely to do well or to show signs of
problems.
Wahyudin et al. [13] illustrate the importance of community support in an OSS project in terms of bug
tracking. By applying process mining algorithms and extracting process models in OSS projects, researchers
can focus on the bug closing process and study the average time between the discovery/reporting of a bug and
its fixing. The sooner a bug is closed, the sooner the users will get the new code of the project.
This makes their software safer and thus more reliable and less subject to threats such as worms and viruses.
Further analysis of the extracted process model of an OSS project could also reveal activities that usually
take an unnecessary amount of time to complete. Focusing on these activities could lead to identification of
problematic steps that are being followed or to poor communication between community members that
should work together in order to carry out the given task. As a result, the improvement of each activity
leads eventually to the improvement of each process.
In Figure 1, the most common process instance is shown. Each bug that follows the specific process instance
is first recorded as New, then as Resolved and finally as Fixed. In Figure 2 the state transitions are quite
different, as the state Patchsent is inserted before the state sequence Resolved->Fixed. It is important to
note that the state Patchsent is nowhere formally defined in Bugzilla; it is one of many examples of
how each community is free to create and use states that make the bug handling process easier for it. In
Figure 3 two rare process instances are presented. The third process instance corresponds to the sequence:
New->Resolved->Reopened->Resolved->Fixed and the fourth to the sequence: New->Resolved->Reopened-
>Resolved->Reopened->Resolved->Fixed. It is very interesting to observe the loop that is formed between
states Resolved and Reopened and examine whether or not the possibility of an infinite loop exists.
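A sketch of how such process instances and their frequencies can be extracted from an ordered event list (the data below is illustrative, not the OpenVZ log):

```python
from collections import Counter, defaultdict

# (bug_id, state) pairs, assumed already ordered in time
events = [
    ("bug-1", "New"), ("bug-1", "Resolved"), ("bug-1", "Fixed"),
    ("bug-2", "New"), ("bug-2", "Patchsent"), ("bug-2", "Resolved"), ("bug-2", "Fixed"),
    ("bug-3", "New"), ("bug-3", "Resolved"), ("bug-3", "Reopened"),
    ("bug-3", "Resolved"), ("bug-3", "Fixed"),
]

# group events into one state sequence (trace) per bug
traces = defaultdict(list)
for bug_id, state in events:
    traces[bug_id].append(state)

# count how often each process instance occurs
instance_counts = Counter("->".join(t) for t in traces.values())
for instance, freq in instance_counts.most_common():
    print(f"{freq:3d}  {instance}")
```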
Table 1 summarizes the frequency of each process instance in these first 50 bugs. It is clear that there is no
strict process model followed for the resolution of bugs in OpenVZ. However, there exist frequent
transitions that are followed and some rare ones that can either lead to flexible resolutions or complex and
time-consuming loops. Further analysis of all 2186 bugs will lead to solid results and important conclusions
about the process model followed for the resolution of bugs.
5. CONCLUSIONS
Although Process Mining has only been extensively studied for the past few years, it has already helped to
better understand the way systems work and has made possible the extraction of information that can help
improve each process of a system. Process Mining algorithms are applied in event logs that are available in
almost every type of system nowadays. From these event logs, models that correspond to the control-flow,
the organization and the information present in a system can be extracted.
The application of Process Mining algorithms in Open Source software (OSS) projects can lead to the
extraction of processes that show the way developers work and communicate. This leads to very useful
conclusions about the communities of OSS projects as they usually involve a lot of people that work
together.
In contrast to closed software projects, in OSS projects there is a huge amount of process-relevant data
publicly available online. OSS communities are a great opportunity to discover and analyze software
processes. The more data are available in the form of events, the more accurate resulting process models
and their analysis will be. By analyzing the process models of successful projects, elements that led to their
success can be identified and thus, applied to other projects for their improvement.
Each process model comprises activities. The sequence of execution and the discovery of activities that
can be executed in parallel will improve the workflow in a team, and thus new releases of software and/or
bug closures will eventually take less time.
Process mining in event logs can also help identify social networks that are formed among the members of a
community. Since there are a lot of data available, networks from various OSS projects can be compared. As
a result, interesting conclusions might be drawn: for example, the number of project leaders and core
developers that successful projects usually involve, recurring groups of members that are present in
healthy projects and, finally, elements demonstrating that a lack of communication leads to project
failure.
As presented in Figures 1-3, various process instances are present in the bug handling process of the Open
Source project OpenVZ that was examined. It is therefore clear that the bug handling process in this
project behaves as a memoryless stochastic process: the next state depends only on the current state and not on the
sequence of events that preceded it. Further automated analysis of all bugs can lead to very interesting
conclusions. Our goal is to construct a Markov chain that will give the probability of transitioning from one
state to another in the extracted process model of a project.
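A minimal sketch of this Markov-chain estimation, with illustrative sequences mirroring the instances of Figures 1-3:

```python
from collections import Counter

# state sequences observed for individual bugs (illustrative data)
sequences = [
    ["New", "Resolved", "Fixed"],
    ["New", "Patchsent", "Resolved", "Fixed"],
    ["New", "Resolved", "Reopened", "Resolved", "Fixed"],
]

pair_counts = Counter()
state_counts = Counter()
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        pair_counts[(a, b)] += 1   # observed transition a -> b
        state_counts[a] += 1       # total outgoing transitions from a

# maximum-likelihood transition probabilities P(b | a)
P = {(a, b): n / state_counts[a] for (a, b), n in pair_counts.items()}
print(P[("Resolved", "Fixed")])    # here 3 of the 4 transitions out of Resolved
```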
In conclusion, process mining can be used to gain insight into the process of a project. It would be very useful
if researchers could study various projects and then extract a general process specification that leads to
successful and healthy projects. By doing this, smaller projects or projects that appear to be risky could use
this template in order to reengineer their own process model. In the end, the whole OSS community will
benefit, as the improvement of each activity eventually leads to the improvement of each process and each
project.
REFERENCES
[1] Van der Aalst, Wil, et al., 2012. Process mining manifesto. Business process management workshops, Springer Berlin
Heidelberg.
[2] Van Der Aalst, Wil MP, and Schahram Dustdar, 2012. Process mining put into context. Internet Computing, IEEE 16.1:
82-86.
[3] Rubin, Vladimir, et al., 2007. Process mining framework for software processes. Software Process Dynamics and
Agility, Springer Berlin Heidelberg, 169-181.
[4] van Dongen, Boudewijn F., et al., 2005. The ProM framework: A new era in process mining tool support. Applications
and Theory of Petri Nets, Springer Berlin Heidelberg, 444-454.
[5] Verbeek, H. M. W., et al., 2010, Prom 6: The process mining toolkit. Proc. of BPM Demonstration Track 615: 34-39.
[6] Rozinat, Anne, and Willibrordus Martinus Pancratius Aalst, 2006. Decision mining in business processes. Beta,
Research School for Operations Management and Logistics.
[7] Détienne, Françoise, Jean-Marie Burkhardt, and Flore Barcellini, 2006. Open source software communities: current
issues. PPIG Newsletter–September (2006).
[8] Xu, Neng, 2003. An exploratory study of open source software based on public project archives. Diss. Concordia
University.
[9] Jensen, Chris, and Walt Scacchi, 2004. Data mining for software process discovery in open source software
development communities. 96-100.
[10] Jensen, Chris, and Walt Scacchi, 2006. Experiences in discovering, modeling, and reenacting open source software
development processes. Unifying the Software Process Spectrum. Springer Berlin Heidelberg, 449-462.
[11] Van Der Aalst, Wil MP, Hajo A. Reijers, and Minseok Song, 2005. Discovering social networks from event
logs. Computer Supported Cooperative Work (CSCW)14.6, 549-593.
[12] Wahyudin, Dindin, and A. Min Tjoa, 2007, Event-based monitoring of open source software projects. Availability,
Reliability and Security, 2007. ARES 2007. The Second International Conference on. IEEE.
[13] Wahyudin, Dindin, et al., 2006. Introducing "HEALTH" Perspective in Open Source Web-Engineering Software
Projects Based on Project Data Analysis. iiWAS.
Florios K., Mavrotas G.| Generation of the Exact Pareto Set in Multi-objective Traveling Salesman and Set
Covering Problems
Abstract
It is well known that the calculation of the exact Pareto set in Multi-Objective Combinatorial Optimization (MOCO) problems is
one of the most computationally demanding tasks, as most of these problems are NP-hard. In the present work, we use
the improved version of the augmented ε-constraint method (AUGMECON2), a Multi-Objective Mathematical Programming
(MOMP) method which is capable of generating the exact Pareto set in Multi-Objective Integer Programming (MOIP)
problems, to produce all the Pareto optimal solutions (POS) in two popular MOCO problems: the Multi-Objective
Traveling Salesman Problem (MOTSP) and the Multi-Objective Set Covering Problem (MOSCP). The computational
experiment is confined to bi-objective problems found in the literature. The performance of the algorithm is
slightly better than what has been reported in previous works, and it goes one step further by generating the exact Pareto set
for hitherto unsolved problems. The results are provided on a dedicated website and can be useful for benchmarking with
other MOMP methods or even Multi-Objective Meta-Heuristics (MOMH), which can check the performance of their
approximate solutions against the exact solution of MOTSP and MOSCP problems.
KEYWORDS
Multi-objective traveling salesman problem, epsilon constraint, GAMS
1. INTRODUCTION
Multi-Criteria Optimization (MCO) attracts increasing interest mainly for two reasons: (1) the multiple
points of view (expressed as criteria or objective functions) allow the decision maker to make more
balanced decisions through a Multi-Objective Decision Making approach; (2) MCO is a computationally
intensive task that can take advantage of the vast improvement in computational systems and algorithms.
Usually there is no unique optimal solution (optimizing all the criteria simultaneously) but a set of Pareto
optimal solutions (POS) which are mathematically equivalent (the Pareto set). The decision maker must be
involved in order to express his preferences and find the most preferred among the Pareto optimal
solutions (Steuer, 1986). Therefore, MCO methods have to combine optimization with decision support.
Multi-Objective Mathematical Programming (MOMP) deals with the MCO problem when it is formulated as
a Mathematical Programming problem with more than one objective function. From the beginning of the
21st century MOMP entered the area of Multi-Objective Combinatorial Optimization (MOCO) problems
(Ehrgott and Gandibleux, 2000). The basic characteristic of MOCO problems is that the decision variables
are integer (mostly binary) and the relevant problems are most of the time NP-complete even in their single
objective version. In addition, the discrete feasible region allows for the calculation of all the Pareto optimal
solutions, at least theoretically.
In the present work we deal with two of the most challenging MOCO problems: The Multi-Objective
Traveling Salesman Problem (MOTSP) and the Multi-Objective Set Covering Problem (MOSCP). We test the
AUGMECON2 method (Mavrotas and Florios, 2013), a MOMP method which is capable of producing the
exact Pareto set in Multi-Objective Integer Programming (MOIP) problems in some of the well known
192
2nd International Symposium and 24th National Conference on Operational Research
ISBN: 978-618-80361-1-6
Florios K., Mavrotas G.| Generation of the Exact Pareto Set in Multi-objective Traveling Salesman and Set
Covering Problems
benchmarks. Our proposed method solves exactly, for the first time, 16 specific benchmarks of the symmetric
MOTSP with 2 objectives and 100 cities, which were previously only heuristically solved. We publish the
exact Pareto fronts on a website (https://sites.google.com/site/kflorios/motsp) in order to promote
benchmarking of Multi-Objective Metaheuristics (MOMHs). The same is done for 44 MOSCP benchmark
problems found in the literature, some of them never exactly solved before.
The rest of the paper is organized as follows: Section 2 presents the corresponding problem formulations
for MOTSP and MOSCP. Section 3 describes the methodology, mainly the coupling of AUGMECON2 method
with the Branch and Cut and Heuristic (BCH) model for the TSP for the solution of MOTSP problems. The
results are discussed in Section 4. Finally, in Section 5 the basic concluding remarks are discussed.
2. MODEL FORMULATIONS
The model formulations for the Multiobjective Traveling Salesman Problem (MOTSP) and the Multiobjective
Set Covering Problem (MOSCP) are presented.
The MOTSP can be defined as in Lust and Teghem (2010): given N cities and p costs \(c^k_{i,j}\), k=1,…,p, associated
with traveling from city i to city j, the MOTSP aims at finding a tour, i.e. a cyclic permutation π of the N
cities, minimizing

\[
\text{vec}\min\; z_k(\pi) = \sum_{i=1}^{N-1} c^{k}_{\pi(i),\pi(i+1)} + c^{k}_{\pi(N),\pi(1)}, \qquad k = 1, \dots, p \tag{1}
\]
According to the Dantzig, Fulkerson and Johnson, 1954 (DFJ) formulation for the TSP, a MOIP formulation of
MOTSP for p=2 can be derived by extending the classic DFJ formulation:

\[
\min \sum_{i=1}^{n}\sum_{j=1}^{n} c^{1}_{ij} x_{ij}, \qquad \min \sum_{i=1}^{n}\sum_{j=1}^{n} c^{2}_{ij} x_{ij}
\]
\[
\text{s.t.} \quad \sum_{i=1}^{n} x_{ij} = 1, \;\; j = 1, \dots, n, \qquad \sum_{j=1}^{n} x_{ij} = 1, \;\; i = 1, \dots, n \tag{2}
\]

together with the subtour elimination and integrality constraints.
Initially, we tested an approach based on the ε-constraint solution of problem (2) using the DFJ Subtour
Elimination Constraints (SEC). This approach was limited to instances of 40-45 cities. We then searched for the most efficient
IP/MIP formulation for the TSP that is publicly available in a modeling language. We tested models from
the GAMS model library and from Lee and Raffensperger (2006) in AMPL. The most efficient implementation
was found to be bchtsp (Branch and Cut and Heuristic for TSP), available at
http://www.gams.com/modlib/libhtml/bchtsp.htm (GAMS model library). Although the bchtsp model is
developed for the general asymmetric TSP, we used it for the symmetric instances available in the MOTSP
literature. Our approach can be summarized as follows. First, we modified bchtsp so that it solves the
ε-constraint sub-problem of Eq. (3):
\[
\min z_1 = \sum_{i=1}^{n}\sum_{j=1}^{n} c^{1}_{ij} x_{ij}
\quad \text{s.t.} \quad
z_2 = \sum_{i=1}^{n}\sum_{j=1}^{n} c^{2}_{ij} x_{ij},
\quad z_2 + s_2 = \varepsilon_2,
\quad \varepsilon_2 = z_2^{ub} - \eta \, (z_2^{ub} - z_2^{lb}),
\quad x \in S \tag{3}
\]
where S is the set of constraints of model (2), i.e. assignment, integrality and subtour elimination. The
functioning of model (3) is as follows: the second objective z2 is transformed into a constraint with RHS ε2. A
positive slack s2 ensures that the constraint is of the form "≤" (for a 'minimize' objective function). We
calculate the nadir value z2^ub and the ideal value z2^lb of the second objective function. Then, we
standardize the ε-constraint sub-problem by a dimensionless parameter η ∈ [0, 1]. When η=0, the relaxed
problem with ε2 = z2^ub is solved; when η=1, the tight problem with ε2 = z2^lb is solved; for η ∈ (0, 1), the
intermediate sub-problems in between are obtained.
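A minimal sketch of this parameterization (the function and variable names are ours):

```python
# RHS of the z2-constraint in Eq. (3): eta = 0 gives the relaxed problem
# (eps2 = z2_ub, the nadir value), eta = 1 the tight one (eps2 = z2_lb, the ideal).
def epsilon2(z2_ub: float, z2_lb: float, eta: float) -> float:
    return z2_ub - eta * (z2_ub - z2_lb)

# e.g. scanning a grid of eta values, as AUGMECON2 scans its grid points
grid = [epsilon2(100.0, 40.0, k / 10) for k in range(11)]
print(grid)   # 100.0, 94.0, ..., 40.0
```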
In the present paper, a refined form of model (3) is proposed, with an optimal parameterization for η, and
an efficient formulation and solution method for the TSP, namely the bchtsp model. We publish the exact
Pareto fronts and the corresponding tours in the website https://sites.google.com/site/kflorios/motsp. The
benchmarks are taken from the websites of Paquete (9 datasets) http://eden.dei.uc.pt/~paquete/tsp/ and
Lust (10 datasets) https://sites.google.com/site/thibautlust/research/multiobjective-tsp. Three datasets are
common among the two sites, so in total we have 16 datasets.
The MOSCP formulation (4) follows directly from the definitions below:

\[
\min \sum_{j=1}^{n} c^{(1)}_{j} x_j, \qquad \min \sum_{j=1}^{n} c^{(2)}_{j} x_j
\quad \text{s.t.} \quad \sum_{j=1}^{n} t_{ij} x_j \ge 1, \;\; i = 1, \dots, m, \qquad x_j \in \{0, 1\} \tag{4}
\]

The number of constraints is m and the number of variables is n; the coefficients of objective function 1 are c^(1), the coefficients of objective function 2 are c^(2), and the matrix t is defined as follows: for each constraint i=1,…,m, t_ij = 1 if variable j=1,…,n is involved in constraint i, and t_ij = 0 otherwise. Matrix t is a sparse matrix containing 0-1 elements t_ij. Note that the RHS of the constraints of model (4) is 1 and that this is a vector minimization problem. The datasets are taken from http://xgandibleux.free.fr/MOCOlib/MOSCP.html.
In order to couple the AUGMECON2 method with the BCHTSP model, the bchtsp model is altered in order to solve
the ε-constraint sub-problem described in Eq. (3). The ε-constraint sub-problem accepts a dimensionless
parameter η ∈ [0, 1] as described in Section 2.1. Consequently, in the GAMS model 'bchtsp', which is
available at http://www.gams.com/modlib/libhtml/bchtsp.htm, the following operations are added:
a) Input of the cost matrix c^2 for the 2nd objective function.
b) Input of the nadir value z2^ub and the ideal value z2^lb of the 2nd objective function, as obtained from the
individual optimization of both objective functions.
c) Solve bchtsp(η) for a specific η ∈ [0, 1], externally defined by the AUGMECON2 procedure.
d) Return the counter of the Pareto optimal solution (cPOS), the values of the objective functions (z1, z2), the
right-hand side (ε2), the bypass coefficient (b), the counter of the grid point (i), the CPU time in seconds
and the resulting tour, every time subroutine bchtsp(η) is called from AUGMECON2.
The procedure implements AUGMECON2 for MOTSP with p=2 objectives and has been coded in Fortran
with Intel Visual Fortran 11.1. Furthermore, we parallelize the computational process by splitting the do-
while loop into 3 parts, with each one corresponding to a different thread of the CPU. Specifically, in thread 1
we have the iterations for i=1 to int(0.60×r2), where int() denotes the integer part of a number. For thread 2
we have the iterations for i=int(0.60×r2)+1 to int(0.85×r2), and finally for thread 3 we have the iterations for
i=int(0.85×r2)+1 to r2. We have compiled 3 applications, called thread1.exe, thread2.exe and thread3.exe,
for the AUGMECON2-BCHTSP algorithm, and we have always run them in parallel, in order to perform as
many calculations as possible concurrently. Special care needs to be paid to the fact that η must be defined as a
double-precision variable, since truncation errors may lead to non-discovered POS if single precision is used.
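The 3-way split of the grid-point loop can be sketched as follows (Python is used here purely for illustration; the actual implementation is in Fortran):

```python
# r2 is the number of grid points for the second objective; the 60% / 85%
# cut points are the ones quoted in the text above.
def thread_ranges(r2: int):
    c1, c2 = int(0.60 * r2), int(0.85 * r2)
    return [(1, c1), (c1 + 1, c2), (c2 + 1, r2)]   # inclusive 1-based ranges

print(thread_ranges(1000))   # [(1, 600), (601, 850), (851, 1000)]
```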
Table 1 AUGMECON2 statistics using the BCHTSP model for 2-objective TSP (Lust, 10 datasets)

Dataset      |PF*|   MS     Thread1 (h)   Thread2 (h)   Thread3 (h)
kroAB100     3332    3372   47            45            71
kroAC100     2458    2509   36            32            22
kroAD100     2351    2370   15            20            26
kroBC100     2752    2790   29            31            34
kroBD100     2657    2705   25            28            27
kroCD100     2044    2078   9             13            26
euclAB100    1812    1839   19            11            19
clusAB100    3036    3110   15            16            33
randAB100    1707    1718   7             9             25
mixdGG100    1848    1863   15            11            21

(|PF*| = Pareto front size; MS = models solved; CPU time in hours per parallel thread.)
Table 2 AUGMECON2 statistics using the BCHTSP model for 2-objective TSP (Paquete, 9 datasets)

Dataset      |PF*|   MS     Thread1 (h)   Thread2 (h)   Thread3 (h)
euclAB100    1812    1839   19            11            19
euclCD100    2268    2294   23            17            42
euclEF100    2530    2559   13            22            28
randAB100    1707    1718   7             9             25
randCD100    1850    1868   13            15            19
randEF100    1882    1902   11            17            25
mixdGG100    1848    1863   15            11            21
mixdHH100    2108    2137   10            11            22
mixdII100    1883    1906   13            16            20
Figure 1 Visualization of exact Pareto front for kroAB100 and euclAB100 datasets for biobjective TSP
Table 3 Performance of AUGMECON2 for BOSCP benchmarks averaged over types A-D

AUGMECON2
           A          B          C           D           Average
100        8.64       6.26       2.78        1.45        4.78
200        15.41      12.07      5.87        13.68       11.76
400        35.83      52.04      31.79       7.14        31.70
600        70.88      87.11      114.88      150.73      105.90
800        132.59     84.21      339.76      760.60      329.29
1000       2443.53    1571.66    11698.00    10268.00    5597.86
5. CONCLUDING REMARKS
The goal of this paper is to apply the recently proposed improved version of the augmented ε-constraint
method (AUGMECON2), which is suitable for general MOIP problems, to two popular MOCO problems,
namely, the Multi-Objective Traveling Salesman Problem (MOTSP) and the Multi-Objective Set Covering
Problem (MOSCP). To the best of our knowledge our work is the first one that generates the exact Pareto
set for 16 popular MOTSP instances with 2 objectives and 100 cities from the literature. The proposed
method is a combination of a general purpose MOIP method (AUGMECON2), with a Branch-and-Cut-and-
Heuristic model (BCHTSP) available in the GAMS model library. The AUGMECON2-BCHTSP method is able to
effectively calculate the exact Pareto set within 24-72 hours. Regarding the MOSCP, AUGMECON2 succeeded in
solving the previously unsolved benchmarks (201a, 201b) of MOCOlib. In general, the effectiveness of AUGMECON2 is
reflected in the fact that the number of models solved is very close to the cardinality of the Pareto front.
REFERENCES
Dantzig G., Fulkerson R., Johnson S., 1954. Solution of a large-scale traveling-salesman problem, Journal of the
Operations Research Society of America, Vol. 2, No. 4, pp.393-410.
Ehrgott M., Gandibleux X., 2000. A survey and annotated bibliography of multiobjective combinatorial optimization, OR
Spektrum, Vol. 22, No. 4, pp.425-460.
Jaszkiewicz A., 2004. A comparative study of multiple-objective metaheuristics on the bi-objective set covering problem
and the Pareto memetic algorithm, Annals of Operations Research, Vol. 131, No. 1-4, pp.135-158.
Lee J., Raffensperger, J.F., 2006. Using AMPL for teaching the TSP, INFORMS Transactions on Education, Vol. 7, No. 1,
pp.3089-3101.
Lust T., Teghem J., 2010. The multiobjective traveling salesman problem: A survey and a new approach, Studies in
Computational Intelligence, Vol. 272, pp.119-141.
Mavrotas G., Florios K., 2013. An improved version of the augmented ε-constraint method (AUGMECON2) for finding the
exact pareto set in multi-objective integer programming problems, Applied Mathematics and Computation, Vol. 219,
No. 18, pp.9652-9669.
Steuer R.E., 1986. Multiple Criteria Optimization: Theory, Computation and Application, Krieger, Malabar FL.
Ioulianou M., Talattinis K., Stephanides G.| Are There Low-Risk Trading Opportunities in Sport Betting
Exchange Markets?
Abstract
The weak form of efficiency appears in the existing literature as the main characteristic of betting exchange and prediction
markets. Thus, low-risk trading opportunities leading to profitable investments may exist. This work focuses on
investigating a large amount of data in order to detect patterns that offer opportunities for low-risk bet-
trading, via technical analysis as it is used in financial stock markets. The empirical study is based on bet-exchange
markets and specifically on horse racing. The main instrument used in this analysis is the Bollinger Bands tool. Our
results reveal that, before a single game starts, patterns can be identified a priori which lead to secure profitable
investing opportunities.
KEYWORDS
Betting exchange, arbitrage, low-risk trading, prediction models
1. INTRODUCTION
Bet-exchange and prediction markets are relatively new, and research and literature on them is continually
expanding in depth, supporting the notion that the weak form of efficiency holds. Provided this
statement is valid, low-risk trading opportunities leading to profitable investments may be identified.
However, in order to exploit these opportunities, several conditions should be met. The most important
issue is the immediate identification of a valid situation and the instant, on-the-spot exploitation of that
specific opportunity. Moreover, trading strategies, portfolio management and behavioral finance theories
are significant and play a crucial role in the strategy proposed in this paper.
This work focuses on investigating a large data volume and detecting patterns that offer opportunities for
low-risk bet-trading, via technical analysis as it is used in financial stock markets. Our empirical study is
based on bet-exchange markets, specifically in horse racing, a market not yet extensively analyzed by
literature. The main instrument used in the analysis is the "Bollinger bands" tool. Our methodology is based
on behavioral finance, which proposes psychology-based theories to explain market anomalies.
Additionally, recognizing the market's trend is substantially significant. The combination of the above can help
in identifying low-risk trading opportunities.
As soon as an opportunity is detected, by combining several specific trading techniques (arbitrage,
hedging, scalping, backing and laying), in the majority of situations we manage to make a profit. Certainly,
in order to achieve long-term profitable investments, we carefully select and adopt a strategy based on
portfolio management. Our results reveal that before a single game starts there are patterns that can be identified
a priori, leading to secure profitable investing opportunities. Through our empirical study we observe
encouraging results that show positive return of investment. These results could be optimized by using
numerous other techniques, such as dynamic programming, which may introduce a more sophisticated
model.
2. THEORETICAL BACKGROUND
In general, when talking about sport or football or horse racing betting, we mean the process of forecasting
the result of a game and placing bets according to a prediction. Specifically, in order for a bet to exist there
should be two sides: the bookmakers who determine the prices and the players who bet in various sports
based on given odds. Depending on the result of the game, one of the two – the bookmaker or the player –
wins and the other loses.
According to Sharpe and Alexander (1990), arbitrage in finance theory is “the simultaneous purchase and
sale of the same, or essentially similar, security in two different markets for advantageously different
prices”. The efficient market hypothesis relies to a large extent on the assumption that any arbitrage
opportunity is exploited quickly once it appears. Paton et al. (2006) regard it as “the purest form of weak
form inefficiency”, when price differences permit riskless arbitrage.
2.3. Betting
A betting exchange can be described as an electronic marketplace. One of the unique characteristics of
betting exchanges is that they allow anybody to bet "in-play". In-play betting takes place while an event is
actually in progress, and the odds change according to the actual events on the spot of the specific sport.
All betting exchanges provide bets in two forms: “back bets” and “lay bets”. Backing allows a punter to bet
on a selection to win while laying allows a punter to bet on a selection not to win. Thus one can bet that a
specified event will happen (a "back bet") or will not happen (a "lay bet"), depending on what the personal
behavior and expectations are.
Backing is the same method as placing a bet with a standard bookmaker (e.g. Bet365). Laying works in a
completely different manner to backing, since it virtually allows the player to act as the bookmaker (e.g.
Betfair). Prices are influenced by the amount of money backing and laying on a selection. More money in
the back column is a signal that the price will rise, and more money in the lay column is a signal that the
price will fall.
“Bollinger Bands” are bands drawn in and around the price structure on a chart. Their purpose is to provide
relative definitions of high and low: prices near the upper band are high, prices near the lower band are
low. The base of the bands is a moving average that is descriptive of the intermediate-term trend. This
average is known as the middle band and its default length is 20 periods. The width of the bands is
determined by standard deviation (square root of volatility). The data for the volatility calculation is the
same data that was used for calculating the moving average. The upper and lower bands are drawn at a
default distance of two standard deviations from the average.
The most widely accepted tools for analyzing data and taking investment decisions are Technical Analysis,
Fundamental Analysis and Rational Analysis. Technical Analysis is the study of market-related data, usually
using graphs, as an aid to investment decision making; Fundamental Analysis is the study of company-
related data as an aid to investment decision making; and Rational Analysis is the juncture of the sets of
technical and fundamental analysis. Since the most important factor in our approach is the combination of
the above tools, linking them with behavioral finance and trend recognition, we decided to use the Bollinger
Bands tool to combine them and obtain the best possible outcome. Moreover, Bollinger Bands can be
applied to virtually any market or security.
\[
MB = \frac{1}{N} \sum_{j=1}^{N} X_j \tag{1}
\qquad
\sigma = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left( X_j - MB \right)^2} \tag{2}
\]
\[
UB = MB + 2\sigma \tag{3}
\qquad
LB = MB - 2\sigma \tag{4}
\]
where: UB = Upper Band; LB = Lower Band; MB = Middle Band (20-period moving average); Xj =
Observation j; N = Total number of observations; σ = standard deviation.
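A minimal sketch of Eqs. (1)-(4) in code (the input series below is illustrative, not real Betfair odds):

```python
import numpy as np

def bollinger(prices, n=20, k=2.0):
    """20-period middle band, population std over the same window, bands at +/- k sigma."""
    prices = np.asarray(prices, dtype=float)
    mb = np.convolve(prices, np.ones(n) / n, mode="valid")      # middle band (Eq. 1)
    # rolling population standard deviation over the same windows (Eq. 2)
    windows = np.lib.stride_tricks.sliding_window_view(prices, n)
    sigma = windows.std(axis=1)
    return mb - k * sigma, mb, mb + k * sigma                   # LB, MB, UB (Eqs. 3-4)

lb, mb, ub = bollinger(np.linspace(5.0, 6.0, 40) + 0.1 * np.sin(np.arange(40)))
print(lb[-1], mb[-1], ub[-1])
```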
3. METHODOLOGY
The general idea behind our strategy is to exploit the trend or the range of the market by applying the
theory and analysis described in the previous section. Thanks to the ability to collect a large amount of betting
information about all available horse races, by using a combination of trading techniques we manage to
exploit several risk-free arbitrage opportunities. Guaranteed-profit bets are known to appear in the bet-
exchange market and are named differently in the literature: arbitrage, scalps, surebets and risk-free bets.
Using the relative terminology, traders applying these techniques are called arbitrageurs or scalpers (see e.g.
Kotlyar and Smyrnova, 2012). In this paper we focus on the scalping technique.
There are two scalping methods available to scalpers: (a) range trading, and (b) trend trading.
The first one is based on the range of the price. Usually, most horse odds trade between an
upper and a lower level. When the odds keep moving up to an upper level and then reversing, we obtain the
"resistance" points. When the odds repeatedly bounce off the lower end of the trading range, this is known as
"support". Often horse odds trade between levels of support and resistance for some time (seconds to
minutes), which is suitable for scalping since this enables a series of high-confidence scalp trades.
The second method is based on the idea of following the trend of the market. When odds move upwards or
downwards, with high reversal points getting higher and low reversal points getting lower, the odds
are said to be trending. A well-known saying in financial trading is that "the trend is your friend". Scalp
traders can jump on the bandwagon and profit from trends moving in either direction. The key is to
close the trade very quickly when the direction of the trend reverses; it is crucial to get out quickly,
even for a small loss.
By applying scalping-trading techniques, our model is shown to avoid losing trades, with the profits
weighted towards outcomes with which an average investor would be satisfied. However, if we are not "lucky" we will not
make a profit, and handling the variance in our bankroll is the main factor for a long-term investment.
This is where behavioral finance and portfolio management come in.
\[
Hs = \frac{Bp \times St}{Lp} \tag{5}
\qquad
ExPr = Hs - St = St \, \frac{Bp - Lp}{Lp} \tag{6}
\]
where: Hs = Hedge stake; Bp = Back price; Lp = Lay price; St = Initial stake; ExPr = Expected Profit.
4. RECOMMENDED STRATEGIES
How does our strategy actually work? Firstly, we select horse races with high liquidity (mostly horse races
in Great Britain). Then, through Betfair's API, we collect all data regarding odds movements during the fifteen minutes
before the race starts. When all data is collected, the Bollinger bands are created. During this process, by
analyzing the data in real time, we try to recognize the trend and the range of the price. The critical point of
this process is choosing where to enter the market. This is where the Bollinger bands come in. If the price
touches the lines, upper or lower limit, two or more times repeatedly, then we choose one of the
strategies/algorithms we explain next.
The first strategy is called "Lay-Back Strategy". In order to select this strategy, the price must touch the
Upper Bollinger Band two times consecutively. If that happens, we place a lay bet at the current price.
Then, on every single tick we check whether the price is higher than the price at which we placed our lay bet. If it is,
we place a back bet at that price. Otherwise, we continue checking the
price for the next 100 ticks; if the price never meets the condition, the stop-loss mechanism is
triggered (see Figure 1). The second strategy is called "Back-Lay Strategy". In order to choose this strategy,
the price must touch the Lower Bollinger Band two times consecutively. If that happens, we place a back bet at the current
price. Then, on every single tick we check whether the price is lower than the price at which we placed
our back bet (so that the position can be closed at a profit, consistently with the worked example in the Appendix). If it is,
we place a lay bet at that price. Otherwise, we continue checking the price for the next 100
ticks; if the price never meets the condition, the stop-loss mechanism is triggered (see Figure 2).
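The two exit rules can be sketched as follows; this is a simplified illustration in which `prices` stands for the tick-by-tick feed after entry, while the two-touch trigger, stake sizing and actual bet placement are omitted:

```python
MAX_TICKS = 100  # the text checks the price for the next 100 ticks

def lay_back(prices, entry):
    """After a lay at `entry` (two upper-band touches), try to back higher."""
    for price in prices[:MAX_TICKS]:
        if price > entry:          # exit condition of the Lay-Back strategy
            return ("back", price)
    return ("stop_loss", None)

def back_lay(prices, entry):
    """After a back at `entry` (two lower-band touches), try to lay lower."""
    for price in prices[:MAX_TICKS]:
        if price < entry:          # profitable green-up for a back-first trade
            return ("lay", price)
    return ("stop_loss", None)
```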
5. RESULTS - CONCLUSION
Through our empirical study, we found that once the signal is triggered, if we apply the scalping-trading
techniques as explained in this paper, 74% of the Back-Lay transactions and 83% of the Lay-Back transactions are
profitable. One of the most important issues in our strategy is the number of ticks after which the stop-loss function
is triggered. By trying all possible combinations, we found that if the stop-loss function is triggered after 70 ticks,
we obtain the maximum number of profitable transactions (see Table 1).
REFERENCES
Bollinger J., 1992. Using Bollinger Bands, Stocks & Commodities, Vol. 10:2, pp 47-51.
Fama E. F., 1970. Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, Vol. 25, pp. 383-
417.
Franck E. P., Verbeek E. and Nüesch S., 2010. Inter-Market Arbitrage in Betting, Economica, forthcoming.
Jackson D., 1994. Index Betting on Sports, The Statistician, Vol. 43, No. 2, pp. 309-315.
Kotlyar V. Yu and Smyrnova O.V., 2012. Betting Market: Analysis of Arbitrage Situations, Cybernetics and System
Analysis, Vol. 48, No. 6, pp 912-920.
Mintel Intelligence Report, 2001. Online Betting, Mintel International Group Ltd, London.
Osborne E., 2001. Efficient Markets? Don’t bet on it, Journal of Sports Economics, Vol. 2, pp. 50-61.
Paton, D.; Smith, M.; Vaughan Williams, L., 2006. Market Efficiency in Person-to-Person Betting, Economica, Vol.73
(292), pp. 673-689.
Sauer R.D., 1998. The Economics of Wagering Markets, Journal of Economic Literature, XXXVI, pp 2021-2064.
Sharpe W. and Alexander G., 1990. Investments, Prentice Hall, Englewood Cliffs, NJ.
Thaler, R. H., and Ziemba W. T., 1988. Anomalies: Parimutuel Betting Markets: Racetracks and Lotteries, Journal of
Economic Perspectives, Vol. 2, No. 2, pp. 161-174.
http://corporate.betfair.com
http://latest.letscomparebets.com
http://www.sportstradingedge.com
APPENDIX
Table 1 Results based on our model and strategy
Number of Ticks   Lay-Back Opportunities   Lay-Back % success   Back-Lay Opportunities   Back-Lay % success
Full example
In order to understand the strategy proposed in this paper, here is a full example. Imagine that we
are 10 minutes before the start of a horse race. According to the strategy, we select to place a back bet of
€100 at odds of 6.0, as we expect the price to shorten. So our expected profit is €500.
A couple of minutes later, the selection is available to lay at 5.0. Laying €100 at 5.0 gives us a
liability of €400. As a result we have a risk-free bet with a potential €100 profit. However, if our selection
wins we have a profit, but if it loses we gain nothing.
As a consequence, hedging or greening up the bet is the best solution in order to have a profit
regardless of the result of the race. We can calculate the hedge stake using Eq. (5) and Eq. (6):
we divide the potential return on the selection by the current lay price. Backing €100 at a price of 6.0 means our
expected return if we win is €600 (=€100x6.0). When the price reaches 5.0, we divide €600 by the lay price of 5.0
(€600/5=€120), so in total we lay our selection for €120 at a price of 5.0; equivalently, the potential profit of €100
divided by 5 gives the further lay bet of €20 placed on top of the €100 already laid. This locks in a €20
profit.
This is how it works based on the possible outcomes (see Tables 2 and 3):
1. If the selection wins the following happens:
Original Back bet: +€500; Lay Bet: -€400; Second lay bet: -€80; Total Profit: +€20
2. If the selection loses the following happens:
Original Back bet: -€100; Lay Bet : +€100; Second lay bet : +€20; Total Profit : +€20
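The green-up arithmetic of this example can be sketched as follows, using the reconstruction of Eqs. (5)-(6) above (the function name is ours):

```python
def green_up(st: float, bp: float, lp: float):
    """Hedge stake Hs = Bp*St/Lp and profit ExPr = Hs - St, locked on both outcomes."""
    hs = bp * st / lp          # total lay stake needed at the current lay price
    expr = hs - st             # profit regardless of the race result
    return hs, expr

hs, profit = green_up(st=100.0, bp=6.0, lp=5.0)
print(hs, profit)              # 120.0 20.0, matching the worked example

# outcome check: if the selection wins, back profit minus total lay liability;
# if it loses, the lost back stake plus the total lay stake kept
win = 100 * (6.0 - 1) - hs * (5.0 - 1)     # 500 - 480 = 20
lose = -100 + hs                           # -100 + 120 = 20
```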
[Figure 1: flowchart of the Lay-Back strategy; loop while i < 100 checking whether price(t+i) > price(t), then END.]
[Figure 2: flowchart of the Back-Lay strategy; START, place back bet at price(t), loop while i < 100, END.]
Mastorakis K., Siskos E., Siskos Y.| Value Focused Pharmaceutical Strategy Determination with
Multicriteria Decision Analysis Techniques
Abstract
The size of the pharmaceutical market and its contribution, both in national and global level, to the regional, national
and international economic development is widely recognized. This fact signifies that the supported and efficient
decision making in the sector is a matter of paramount importance. This paper refers to a multicriteria assessment
system for portfolio optimization in the Hellenic pharmaceutical market. The evaluation criteria are extracted from
three points of view, namely: i) current market situation, ii) development of the sector over the recent years, and iii)
comparison with other European countries. The overall objective of this research work is the assessment and ranking of
192 therapeutic categories for investment purposes in the Hellenic pharmaceutical market. The ranking of these
categories is obtained through the utilization of an additive value model which is assessed by the ordinal regression
method UTASTAR, implemented in three phases. In the first phase, the decision maker (DM) is asked to rank a sample of
these alternatives, in order to infer an additive value system which should be as close as possible to the DM's ranking and as
robust as possible. In the second phase, all the alternatives are evaluated and a complete ranking is obtained. Finally, in
order to analyze the robustness of the model, given the incomplete determination of the inter-criteria model parameters, a
random weighting sampling technique is utilized to obtain the probability that an alternative maintains its initial
position in the ranking.
KEYWORDS
Multiple Criteria Decision Analysis, Pharmaceutical market strategy, Ordinal regression, Robustness analysis
1. INTRODUCTION
Throughout the last three years Greece has been experiencing an economic crisis and recession on a scale
unprecedented within the Eurozone. Data analysis and the decoding of the messages that this murky economic
environment is signaling constitute the black box that everyone is in search of, in order to optimize the
outcome of a forecast. So here lies the question: what can a company that is willing to invest really do?
Even though attempting to invest in an economic environment with the aforementioned
characteristics is a risk, investments and investors will always emerge when opportunities are likely to
appear. In such economic turmoil opportunities will arise, but the key element that will minimize the risk
factor, in terms of capital and labor force loss, is diversification of the investment portfolio along with in-
depth analysis of the data of the market in which the investments will take place.
One sector that has succeeded in withstanding the economic crisis is the pharmaceutical sector. Having said that,
the sector has lately suffered a considerable downsizing in terms of gross revenue and labor force reduction
(Athanasiadis et al., 2012). Despite the latter, the pharmaceutical sector, which consists of the
companies that manufacture or import drugs or non-pharmaceutical products, is one of the main pillars of
Greek industry and one of the largest employers in the Greek economy. According to the Greek
Pharmaceutical Companies Association, the expenditure on pharmaceutical products represents 2.5% of the
GDP from 2007 onwards, while pharmaceutical products constituted 5% of the gross exports of the
country and more than 6% of the gross imports in 2010 (Athanasiadis et al., 2012).
The contribution of the pharmaceutical sector to the domestic economy is not restricted to the commercial
aspect; it considerably affects the labor factors as well. According to data provided by the Labor Force Division
of the Hellenic Statistics Agency, in 2011 approximately 13,600 persons were employed in the pharmaceutical
manufacturing sector, a number which renders the sector a main player in overall employment
within the Greek economy (Bank of Piraeus, 2011).
The ongoing downturn of the pharmaceutical market since 2009, at a rate of 12% per annum, with the
situation being exacerbated over the last two years with a downturn rate of 18% per annum (see Figure 1), cannot be
interpreted as meaning that companies should halt any investment program. Within this environment,
companies should keep a balanced scorecard and carefully assess their investment targets.
Figure 1: Pharmaceutical Sector - value sales evolution (Data source: IMS Hellas).
The purpose of this paper is to provide appropriate tools and methodologies to assist the managers of a
pharmaceutical company in investing in new pharmaceutical products. The paper proposes an integrated
multicriteria decision aid (MCDA) methodology to extract and imprint the preferences of the decision
makers-managers of a pharmaceutical company.
The study described below is carried out with the involvement of two real experts from the pharmaceutical
market in the role of the decision maker. In the end, a personalized ranking of the possible therapeutic
categories for investment emerges, from which the company will select those top ranked to form its
investment portfolio. At the end of the paper, the robustness of the model and its results is analyzed, to
check for differentiations over the final ranking, supporting therefore further the investment decision of the
company.
Multicriteria Decision Analysis Techniques
analyzed, which the company already has launched abroad or may purchase it from another company and
then import it in the Greek market.
The success or failure of this new launch is affected by many different parameters and criteria that vary
according to the strategy the company is willing to follow. For instance, some would opt for a
strategy of targeting large therapeutic categories with low pricing, relying on their organized production and
brand recognition in order to eliminate the competition head on, while others would prefer to invest in
smaller therapeutic categories where the competition is less dynamic and precise (Jones, 2002).
Besides the scope under which the decision makers set their preferences, the main purpose of this
simulation is the assessment of all 192 existing therapeutic categories (EphMRA, 2012) of level three (ATC-
3s, see Figure 2), through a number of evaluation criteria to be modeled in the following section.
Consequently, their preferences can be prioritized on a scale of importance to select the therapeutic
category they will be interested in investing in.
After reviewing the literature for similar surveys focusing on the evaluation of therapeutic categories, a
problem of confidentiality emerged, since such projects/surveys constitute a main tool of corporate
strategy in real corporate life.
For instance, major consulting groups are involved in the assessment of pharmaceutical categories in order
to provide consistent and credible data to assist their clients to invest in the most profitable way. In Greece,
companies like Boston Consulting Group, McKinsey and especially IMS Consulting Group are the main
players in providing services concerning similar problems with the ones this paper is attempting to address.
the rest of the EU countries. These points of view are further analyzed to form the ten evaluation criteria of the
study, which are presented in Figure 3 and briefly described below.
The indices measuring the criteria along with their evaluation scales are presented in Table 1. The data of all
ten criteria stem from IMS Hellas (see data sources).
g1: % market share in value sales (€) in 2012: It is the value market share that each ATC-3 captures within
the total pharmaceutical market for 2012 calendar year.
g2: % of sales contribution from products launched since 2010: The value sales market share of new
launched products (products launched within the last three years) for each ATC-3 is calculated. It is a way to
calculate how easily a new product launch can capture market share within the category.
g3: # of companies contributing 70% of total value sales: The number of companies that together
contribute 70% of total value sales for this category. In this way the level of concentration of sales
within the category is quantified.
g4: % of volume sales from Generics products: This criterion is a measure to evaluate the penetration of Gx
products within the category. The volume market share is calculated in order to eliminate the price effect
from the original products which are more expensive.
g5: % CAGR in terms of volume sales over the last two years: Volume evolution of the ATC-3 category
within the last two years.
g6: Volume mix effect in 2012 vs. 2011: It is a metric calculating the switch from more expensive to less
expensive products revealing how easily the patients switch their brand of choice.
g7: % Evolution index of Generics in value sales (€) in 2012 vs. 2011: Calculation of the growth of the
category vs. the growth of the total market.
g8: Number of Standard Units per capita in Greece vs. Other European countries: This criterion measures
the deviation in consumption in terms of dosages per capita in Greece vs. Other European countries.
g9: % of difference between average price per Standard Unit in Greece vs. Other European countries: This
criterion measures the differentiation between the average price of the category in Greece vs. Other
European countries.
g10: Difference between the Generics market share (in Standard Units) in Greece vs. Other European
countries: The delta between the penetration of Gx in Greece and Other European countries is calculated.
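Two of these criteria can be computed as in the following sketch (hypothetical figures, not IMS data):

```python
# value sales per ATC-3 category and for the total market (made-up numbers)
atc3_value_2012 = {"N02B": 120e6, "C10A": 95e6}
total_market_2012 = 3.2e9

# g1: % market share in value sales in 2012
g1 = {k: 100 * v / total_market_2012 for k, v in atc3_value_2012.items()}

# g5: % CAGR in terms of volume sales over the last two years
def cagr(vol_start: float, vol_end: float, years: int = 2) -> float:
    return 100 * ((vol_end / vol_start) ** (1 / years) - 1)

print(g1["N02B"], cagr(80e6, 72e6))
```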
Table 1: Evaluation scales of the criteria
Following the discussion with the decision makers, the alternatives selected for evaluation in this study are
the level-three therapeutic categories (ATC-3), and more specifically the ones that exhibited gross sales
above 500,000 euros in 2012. Their preferences revealed that they are mostly interested in investing in
mature categories in terms of Generics penetration, in order to avoid potential instability due to the latest
changeable legislation (Maniadakis, 2010).
The evaluation model takes the form of an additive value function:

$$u(\mathbf{g}) = \sum_{i=1}^{n} p_i\, u_i(g_i), \qquad u_i(g_{i*}) = 0,\;\; u_i(g_i^{*}) = 1,\;\; \sum_{i=1}^{n} p_i = 1$$

where $\mathbf{g} = (g_1, g_2, \ldots, g_n)$ is the performance vector of an action on the $n$ criteria; $g_{i*}$ and $g_i^{*}$ are the least
and most preferable levels of the criterion $g_i$, respectively; $u_i$, $i = 1, 2, \ldots, n$ are non-decreasing marginal
value functions of the performances $g_i$; and $p_i$ is the relative weight of the $i$-th function $u_i$.
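To make the structure of such an additive model concrete, the following minimal sketch (Python; the weights, breakpoints and marginal values are hypothetical illustrations, not the estimates reported in this paper) evaluates the global value of an alternative using piecewise-linear marginal value functions, the form employed by UTASTAR-type methods:

```python
import numpy as np

def marginal_value(breakpoints, values, g):
    """Piecewise-linear marginal value u_i(g_i), assumed non-decreasing
    and normalized so that u_i(worst level) = 0 and u_i(best level) = 1."""
    return float(np.interp(g, breakpoints, values))

def global_value(performance, criteria):
    """Additive value u(g) = sum_i p_i * u_i(g_i)."""
    return sum(c["weight"] * marginal_value(c["breakpoints"], c["values"], g)
               for g, c in zip(performance, criteria))

# Hypothetical two-criterion model (illustrative numbers only)
criteria = [
    {"weight": 0.6, "breakpoints": [0.0, 5.0, 10.0], "values": [0.0, 0.7, 1.0]},
    {"weight": 0.4, "breakpoints": [0.0, 50.0, 100.0], "values": [0.0, 0.3, 1.0]},
]
print(global_value([7.0, 80.0], criteria))  # a global value in [0, 1]
```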
An additive value function is valid in the case of an individual decision maker (DM) if the criteria are
preferentially independent from each other (see Keeney and Raiffa, 1976, and Keeney, 1992, for instance). A
number of different methods may be utilized to obtain the aforementioned additive value system (see
Keeney and Raiffa, 1976, Figueira et al., 2005). In this study the ordinal regression UTASTAR method (Siskos
and Yannacopoulos, 1985) is implemented to assess and construct the additive value system.
Step 1: Express the global value of the reference actions $u[\mathbf{g}(a_k)]$, $k = 1, 2, \ldots, m$, first in terms of marginal values
$u_i(g_i)$, and then in terms of the variables $w_{ij} \geq 0$, for all $i$ and $j$, by means of the expressions:

$$u_i(g_i^1) = 0, \qquad u_i(g_i^j) = \sum_{t=1}^{j-1} w_{it}, \quad j = 2, 3, \ldots, \alpha_i$$

Step 2: Introduce two error functions $\sigma^{+}$ and $\sigma^{-}$ by writing, for each pair of consecutive actions in the
ranking, the analytic expressions:

$$\Delta(a_k, a_{k+1}) = u[\mathbf{g}(a_k)] - \sigma^{+}(a_k) + \sigma^{-}(a_k) - u[\mathbf{g}(a_{k+1})] + \sigma^{+}(a_{k+1}) - \sigma^{-}(a_{k+1})$$

Step 3: Solve the linear program minimizing the total error $F = \sum_{k=1}^{m} \left[\sigma^{+}(a_k) + \sigma^{-}(a_k)\right]$, subject to
$\Delta(a_k, a_{k+1}) \geq \delta$ if $a_k$ is preferred to $a_{k+1}$, $\Delta(a_k, a_{k+1}) = 0$ if the two actions are indifferent, and the
normalization $\sum_i \sum_j w_{ij} = 1$, with all variables non-negative (a compact sketch of this program follows Step 4).

Step 4: Robustness analysis (centroid value function based on multiple characteristic value functions,
robust ordinal regression analysis; see Section 4 below).
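Steps 1-3 can be assembled into a single linear program. The sketch below (Python with scipy) is a simplified illustration, not the paper's implementation: the matrix `U`, which maps the w-variables to global values, the threshold `delta`, and the assumption of a strict ranking with no indifferences are all simplifications introduced here.

```python
import numpy as np
from scipy.optimize import linprog

def utastar_lp(U, delta=0.01):
    """Minimal UTASTAR LP sketch. U[k] holds the coefficients of the
    w-variables in u(a_k); reference actions are assumed pre-sorted from
    best to worst, each strictly preferred to the next (no ties).
    Returns (w, sigma_plus, sigma_minus)."""
    m, nw = U.shape                       # m reference actions, nw w-variables
    n = nw + 2 * m                        # variable layout: [w, sigma+, sigma-]
    c = np.concatenate([np.zeros(nw), np.ones(2 * m)])   # min total error F
    A_ub, b_ub = [], []
    for k in range(m - 1):
        # Delta(a_k, a_{k+1}) >= delta, rewritten as A_ub @ x <= -delta
        row = np.zeros(n)
        row[:nw] = -(U[k] - U[k + 1])
        row[nw + k], row[nw + m + k] = 1.0, -1.0           # +s+_k, -s-_k
        row[nw + k + 1], row[nw + m + k + 1] = -1.0, 1.0   # -s+_{k+1}, +s-_{k+1}
        A_ub.append(row); b_ub.append(-delta)
    # Normalization: the w-variables sum to one
    A_eq = [np.concatenate([np.ones(nw), np.zeros(2 * m)])]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=[1.0], bounds=[(0, None)] * n)
    x = res.x
    return x[:nw], x[nw:nw + m], x[nw + m:]

# Toy use: 3 reference actions, 4 w-variables (hypothetical coefficients)
U = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 0.0]])
w, s_plus, s_minus = utastar_lp(U)
print(w, s_plus.sum() + s_minus.sum())   # weights/steps and total error F*
```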
Figure 4 presents the estimated criteria weights of the model that emerged after the solution of
the linear program of the UTASTAR method, for the centroid calculated from the two rankings.
The final stage of the evaluation process is the implementation of the already constructed model on all 192
alternatives, to achieve their evaluation and ranking. The results of this stage, also known as extrapolation,
which emerge after the implementation of the additive value model, are presented in Table 3. As
expected, the 30 reference alternatives maintain their initial ranks in the final complete ranking, as shown in
Table 2.
Table 2: Reference actions, the joint DMs’ ranking and its confirmative values given by the UTASTAR method
Table 3: Sample of the final ranking of the 192 alternatives
4. ROBUSTNESS ANALYSIS
Having concluded the ranking of the pharmaceutical categories based on the preferences of the two
DMs, some questions arise concerning the robustness of our method. From the methodological point of
view, the UTASTAR linear programming problem described in paragraph 2.3 does not guarantee a single
specific solution (Kadzinski et al., 2012). In fact, there exists an infinite number of evaluation
parameters that are optimally consistent with the set of UTASTAR constraints arising from the DMs’
rankings. All these different parameters, although compatible with the DMs’ rankings over the reference set
of alternatives, may cause differentiations in the final ranking. Consequently, it must be estimated to what
extent the overall hierarchy of the alternatives could be affected by random solutions of the UTASTAR
linear problem.
In this context, a random sampling algorithm was implemented to extract a large number of different
random solutions of the UTASTAR linear programming problem. It is a weighting vector generation
algorithm proposed by Tervonen et al. (2013), who adapted the “Hit and Run” sampling algorithm of Lovász
(1999) to multiple criteria decision analysis problems. The algorithm stops when the desired
number of generated solutions is reached.
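The following minimal sketch conveys the flavor of such a sampler: "Hit and Run" applied to the simplex of weight vectors {w >= 0, sum(w) = 1}. The burn-in length, starting point and sample size are illustrative choices; the actual analysis samples full UTASTAR-consistent parameter sets rather than the bare simplex.

```python
import numpy as np

def hit_and_run_simplex(n, n_samples, burn_in=1000, seed=None):
    """Sketch of a 'Hit and Run' sampler producing weight vectors
    (approximately) uniformly distributed on {w >= 0, sum(w) = 1}."""
    rng = np.random.default_rng(seed)
    w = np.full(n, 1.0 / n)              # start at the barycenter
    out = []
    for it in range(burn_in + n_samples):
        d = rng.normal(size=n)
        d -= d.mean()                    # stay on the hyperplane sum(w) = 1
        d /= np.linalg.norm(d)
        ratios = -w / d                  # w + t*d >= 0 determines the chord
        t_min = ratios[d > 0].max()      # chord endpoint in one direction
        t_max = ratios[d < 0].min()      # chord endpoint in the other
        w = w + rng.uniform(t_min, t_max) * d
        if it >= burn_in:
            out.append(w.copy())
    return np.array(out)

# e.g. 1,000 random weight vectors over the ten criteria of this study
W = hit_and_run_simplex(n=10, n_samples=1000, seed=42)
```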
The weighting sets produced by the algorithm led to the construction of 1,000 individual random rankings
through the implementation of the additive value model. The statistical processing of these rankings is
presented in Figure 5.
The output of the analysis reveals that the position each alternative holds based on the centroid is also its
most frequent position among the 1,000 different random solutions. In addition, differentiations between
ranking positions turn out to be less frequent for alternatives belonging to the top or bottom of the ranking,
whereas greater alternations occur for the alternatives ranked in the middle positions.
5. CONCLUSION
In this paper a solution to the identification of the ideal therapeutic category for investment was
given by implementing the UTASTAR method, based on the feedback of two decision makers. The final
output was a ranking of all therapeutic categories based on the revealed preferences of the two DMs,
providing a stable and robust solution to this problem and a powerful tool for the decision
making process. The ranking obtained, as defined by the DMs’ preferences, can easily be adjusted to
the preferences of different DMs, and in that way different preferences can be incorporated in the model.
The next research steps include the expansion of the proposed approach to the Over-the-Counter and
Personal Care market, the inclusion of more criteria, as well as the involvement of more DMs in the
procedure.
ACKNOWLEDGEMENT
The authors wish to acknowledge financial support from the Ministry of Education, Religious Affairs, Culture
and Sports of Greece and the European Social Fund, under Grant THALIS, MIS 377350: Methodological
approaches for studying robustness in multiple criteria decision making projects.
REFERENCES
[1] Athanasiadis, T., Maniatis, G., Demousis, F., 2012. Pharmaceutical Market in Greece, Foundation for Economic &
Industrial Research, Athens, Greece.
[2] Bank of Piraeus, 2011. Production and trading of pharmaceutical products, Sectorial report, No. 5.
[3] EphMRA, 2012. Anatomical Classification guidelines 2012, Retrieved October 31, 2013, from
http://www.ephmra.org/pdf/ATCGuidelines2012Final%20V2%20revised.pdf.
[4] Figueira, J., Greco, S., Ehrgott, M., Eds., 2005. State-of-the-Art of Multiple Criteria Decision Analysis, Kluwer Academic
Publishers, Dordrecht.
[5] Jacquet-Lagrèze, E., Siskos, J., 1982. Assessing a set of additive utility functions for multi-criteria decision making:
The UTA method, European Journal of Operational Research, Vol. 10, pp. 151-164.
[6] Jones, D., 2002. Pharmaceutical Statistics, Pharmaceutical Press, London, Great Britain.
[7] Kadzinski, M., Greco, S., Slowinski, R., 2012. Selection of a representative value function in robust multiple criteria
ranking and choice, European Journal of Operational Research, Vol. 217, No. 3, pp. 541–553.
[8] Keeney, R.L., 1992. Value-focused thinking: A path to creative decision making, Harvard U.P., London.
[9] Keeney, R.L., Raiffa, H., 1976. Decisions with multiple objectives: Preferences and value trade-offs, John Wiley and
Sons, New York.
[10] Lovász, L., 1999. Hit-and-run mixes fast, Mathematical Programming, Vol. 86, pp. 443–461.
[11] Maniadakis, N., 2010. Study on the Generic products market and the legislation across Europe, Travel Times
Publishing, Athens, Greece.
[12] Roy, B., 1985. Méthodologie multicritère d'aide à la décision, Economica, Paris.
[13] Siskos, Y., Yannacopoulos, D., 1985. UTASTAR: An ordinal regression method for building additive value functions,
Investigação Operacional, Vol. 5, pp. 39–53.
[14] Tervonen, T., van Valkenhoef, G., Baştürk, N., Postmus, D., 2013. Hit-And-Run enables efficient weight generation for
simulation-based multiple criteria decision analysis, European Journal of Operational Research, Vol. 224, No. 3, pp. 552–559.
DATA SOURCES
IMS Health Hellas, Retrieved February 01, 2013, from http://www.imshealth.com/portal/site/imshealth.
Karanikolas P., Kremmydas D., Rozakis S.| The Multiplicity of Goals in Tree-cultivating Farms in Greece
Abstract
The aim of this study is the elicitation of farmers’ goals through a multi-criteria weighted programming model. This is
pursued in the area of Ancient Epidaurus (Peloponnese, Southern Greece), where farming systems comprise various
combinations of tree crops, such as olive, orange and mandarin groves, as well as small vegetable gardens for own
consumption. The analysis is carried out by means of a sample of 70 farms, whose holders have a multitude of on- and
off-farm employment opportunities.
Research findings indicate that in this highly heterogeneous farming context farms aim at achieving multiple goals. Six
different clusters of farms/households were identified, on the basis of those objectives. In each cluster, various sets of
objectives in a hierarchical order were identified, indicating a diversity of strategies followed by farm holders; the usual
pattern comprises three or four different objectives – with one or two prevailing.
The relative importance of the goals reveals that both elements of the composite entity "farm business/farm
household" are represented in the decision making process. The varying hierarchy of objectives across the farms is
accounted for by a series of particular farm and household characteristics.
The identification of sets of hierarchical goals for different types of farms/households goes beyond the conventional,
simplistic hypothesis of profit maximization in all farm entities.
KEYWORDS:
Farmers’ objectives, goal programming, Greece
1. INTRODUCTION
The complex nature of farming activity as well as the incorporation of the farm operation into the farm
household necessitates a thorough examination of economic as well as non-economic values and objectives
in agriculture.
The multiple nature of farmers’ objectives has been studied in the context of farm management, but
recently it has assumed a renewed interest, as agricultural policy’s focus has shifted from productivity
increases and market regulation to public goods provision from agriculture and rural development
concerns. For example, following a behavioral approach, it has been found that different groups of farmers adopt
different viewpoints on decision making, reflecting different values. This has an obvious effect on the application of
incentive-based environmental policies (Pedersen et al., 2012).
This means that we have to go beyond some commonly used simplistic hypotheses, and to include often
conflicting goals, objectives and values in the decision making process. This approach broadens the scope of
the conventional notion of economic rationality which, through profit maximization, allegedly dictates the
economic behavior of farmers.
A strong stimulus for such an investigation comes from a diverse farming context, such as the one
prevailing in Greece, which is characterized by high sectoral and spatial heterogeneity.
The current study aims at the elicitation of farmers’ objectives by means of a multi-criteria weighted
programming model based on observed activities and labor allocation. The proposed methodology is
applied to the area of Ancient Epidaurus, in Argolida (Peloponnesus-South Greece).
The study is divided into four parts. After the presentation of the methodology, the case-study area is
described. This is followed by the results and conclusions.
2. METHODOLOGY
The mainstream economic theory's assumption of the profit maximization goal is often inconsistent with
the observed farmers' behaviour. We adopt the notion of the expected utility maximization goal, where
expected utility is defined as a function of multiple objectives.
In order to identify the multiple objectives and their hierarchy, that is their relative importance, we adopted
an indirect technique that has been proposed by Sumpsi et al. (1996) and has been further elaborated by
Amador et al. (1998) and Gomez-Limon et al. (2003). The methodology is based on weighted goal
programming and has been used in many case studies, like Gomez-Limon and Berbel (2000), Arriaza and
Gomez-Limon (2003). In Greece, this methodology has been applied for evaluating the effects of pricing
water use (e.g. Manos et al., 2007; Latinopoulos, 2008) and for estimating milk supply from sheep farms
(Sintori et al., 2010).
We assume an additive multi-attribute expected utility function of the following form:

$$U_k = \sum_{i} w_i\, u_i(x_{ik}) \qquad (1)$$

where $U_k$ is the total utility level of the $k$-th alternative, $w_i$ is the weight of the $i$-th attribute and $u_i(x_{ik})$ is the
utility added by the $i$-th attribute value for alternative $k$.
The first step of this method is to define a tentative set of objectives and to create the
pay-off matrix by consecutive optimizations of the classical mathematical programming decision model of
the farm. The payoff matrix elements can include either the calculated values of the objectives or the
calculated decision variables per optimized objective. We follow the latter approach, as suggested by
Gomez-Limon et al. (2003).
$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1q} \\ x_{21} & x_{22} & \cdots & x_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nq} \end{pmatrix} \qquad (2)$$

where there are $n$ decision variables and $q$ objectives; $x_{ij}$ is the value of the $i$-th decision variable when the
$j$-th objective is optimized.
$$\sum_{j=1}^{q} w_j x_{ij} = \bar{x}_i, \quad \forall i \qquad (3)$$
where $w_j$ is the weight elicited for the $j$-th objective and $\bar{x}_i$ is the observed value of the $i$-th decision
variable, while $x_{ij}$ is obtained from the above payoff matrix. Since this is usually an over-determined linear
system, there is no exact solution, so we proceed to find the weights that minimize the deviation from
the observed values (L1 criterion):
$$\sum_{j=1}^{q} w_j x_{ij} + n_i - p_i = \bar{x}_i \quad \forall i, \qquad \sum_{j=1}^{q} w_j = 1 \qquad (4)$$

subject to which the weights are obtained by minimizing the sum of relative deviations $\sum_i (n_i + p_i)/\bar{x}_i$
(the L1 criterion). Here $i$ indexes the set of decision variables, $j$ indexes the set of objectives (i.e. the
attributes), and $n_i$ and $p_i$ are the negative and positive deviations of the weighted sum of activities
generated by the $q$ objectives from the observed value $\bar{x}_i$.
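A minimal sketch of this weight-elicitation LP is given below (Python with scipy). It assumes the relative, $\bar{x}$-normalized L1 objective in the spirit of Sumpsi et al. (1996); the payoff matrix and observed values in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def elicit_weights(payoff, observed):
    """Sketch of the weight-elicitation LP: payoff[i, j] is the value of
    decision variable i when objective j is optimized; observed[i] is the
    observed value of decision variable i. Minimizes the sum of relative
    deviations (L1 criterion) subject to the goal constraints (4)."""
    n_vars, n_obj = payoff.shape
    # Variable layout: [w_1..w_q, n_1..n_k, p_1..p_k]
    c = np.concatenate([np.zeros(n_obj), 1.0 / observed, 1.0 / observed])
    A_eq = np.zeros((n_vars + 1, n_obj + 2 * n_vars))
    A_eq[:n_vars, :n_obj] = payoff                        # sum_j w_j x_ij
    A_eq[:n_vars, n_obj:n_obj + n_vars] = np.eye(n_vars)  # + n_i
    A_eq[:n_vars, n_obj + n_vars:] = -np.eye(n_vars)      # - p_i
    A_eq[n_vars, :n_obj] = 1.0                            # sum_j w_j = 1
    b_eq = np.concatenate([observed, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_obj + 2 * n_vars))
    return res.x[:n_obj]                                  # the elicited weights

# Hypothetical 2-variable, 3-objective payoff matrix and observed values
payoff = np.array([[120.0, 80.0, 100.0],
                   [ 10.0, 25.0, 15.0]])
print(elicit_weights(payoff, observed=np.array([100.0, 18.0])))
```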
3. CASE STUDY
3.1. Description
Ancient Epidaurus, seat of the Epidaurus municipality, is a coastal area with a natural port, with 2,000
permanent residents and a total population of 10,000 in the summer period. There are 383 farms in the area
with a total area of 734 ha. The average farm size is 2.44 ha, of which 1.81 ha (about three quarters) comprise olive
trees. The typical farm has a gross profit of 19,274 € on average, and 59% of the families include members
with off-farm employment. Furthermore, off-farm income accounts for two-thirds of the total family
income, with farming activities contributing the remaining one-third. The typical farm household's members
offer 2.6 Annual Work Units (AWU), most of which (86%) are employed in various non-farm activities,
while only 14% are employed on-farm. Furthermore, the typical farm employs 0.54 AWU on average, of
which one third is non-family labor.
The dataset has been derived from extensive fieldwork on a representative sample of 70 farms. Agriculture
in the area consists solely of tree crops (non-irrigated and irrigated olives, oranges and tangerines).
There are no annual crops, except tiny vegetable gardens for own consumption. The data were collected in 2006,
except for the case of olive trees, where the data refer to the average of 2005 and 2006, due to the
problems arising from the olive's alternate-bearing property.
The choice of the decision variables of the model is mainly affected by the particularity of the type of
agricultural activity in the case study area. Generally, in an optimization problem, a farmer owns the land
and has to choose between alternative annual crops in order to maximize an objective function that models
his main objective, usually the gross farm profit. However, in the present case study, the farmer's owned
land and its allocation to the various crops are fixed for long periods, since the average lifespan in the case
study area is 52 years for an olive cultivation, and 53 and 57 years for orange trees and tangerine trees,
respectively. Therefore, in our case, the decision variable for the farm is not the allocation of land in
alternative crops, but: i) the allocation of the family members’ available work units to the various tree crops
and to off-farm activities, and ii) the volume of non-family labor to use in those tree crops. It should also
be noted that the proposed methodology has not been applied to one or a few representative farms
using average values for the objective coefficients – which is usually the case in similar studies – but to all
70 farms, with the detailed farm-level data set employed for all the related parameters.
For the utilization of the methodology the pay-off matrix was determined, and nine initial (tentative)
objectives were grouped into the five new objectives presented in Table 1.

Table 1: The five objectives
Family Employment (FE) (max): The hours of family members' employment in on- and off-farm activities
Farm Variable Costs (FVC) (min): The total variable costs of the farm agricultural activity, plus the variable cost of direct selling to open-air markets in Athens
Gross Farm Profit (GFP) (max): Farm revenue minus variable costs
Household Standard of Living (HSL) (max): The Net Family Labor Income from agriculture, plus the income from off-farm activities of the family members, divided by the equivalent family members
Net Farm Profit (NFP) (max): Farm revenue minus total costs [fixed and variable]
3.2. Results
After iteratively solving the optimization problems corresponding to the aforementioned five objectives,
the payoff matrix is populated. Then the LP model (4) is run, resulting in a set of weights for each individual
farm in the sample. Calculations were performed by means of the GAMS software. Then, using the K-means
algorithm (R software implementation), six clusters of farms were identified, with different combinations of
objectives (Table 2); an illustrative sketch of this clustering step is given below.
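The authors performed the clustering with an R implementation of K-means; the following Python equivalent with scikit-learn is purely illustrative, and the weight vectors it clusters are randomly generated rather than the elicited ones.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical matrix of elicited weights: one row per farm, one column
# per objective (FE, FVC, GFP, HSL, NFP); each row sums to one.
rng = np.random.default_rng(0)
weights = rng.dirichlet(alpha=np.ones(5), size=70)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(weights)
for c in range(6):
    members = weights[kmeans.labels_ == c]
    print(f"cluster {c + 1}: {len(members)} farms, "
          f"mean weights = {members.mean(axis=0).round(2)}")
```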
It seems that the farmers pursue different strategies, comprising combinations of various objectives. We will
briefly outline these differences on the basis of some salient structural characteristics of each cluster of
farms (see Appendix, Table A1).
The five objectives seem to have a varying importance for the decision making process of the farmers in
each cluster. In two out of the six clusters (the 2nd and the 6th) the most important objective is the minimization
of variable costs, whose weight ranges from 44% to 76%. The relative importance of this objective
compared to the maximization of the gross profit has also been stressed in previous studies, such as that of
Piech and Rehman (1993). Farmers in cluster 2 have good reasons to pursue this objective, since they
possess the largest farms (3.9 ha) and devote the highest volume of family labor to direct marketing in open-air
markets in Athens; also, variable costs represent 40% of their total costs, higher than the average, while
they have the highest dependence on farm income for their livelihood. In cluster 6 the minimization of
variable costs is pursued along with the maximization of family employment, as variable costs have the
highest contribution to total costs, the size of the farms is higher than the average, 41% of total farm work
is carried out by hired workers and 89% of family labor is employed outside the farm.
The main objective of farmers in cluster 1 is the maximization of family employment, followed by three
other objectives of minor, though non-negligible, importance. The households here are run by the youngest
holders and offer a high volume of work, 93% of which is in off-farm activities. On the other hand, the
attainment of a standard of living seems to be the prevailing objective in cluster 4, which comprises 18
farms/households; this prioritization could be explained by the highest indexes of equivalent household size,
total family labor and working adults, who devote 92% of their labor time off-farm.
Furthermore, the strategy pursued by farms/households in cluster 3 consists, firstly, of the maximization of net
farm profit and, secondly, of the achievement of a standard of living. These farms have the smallest size, and
they are the only ones that have not secured their long-term economic viability; at the same time, 98% of
family labor is employed in off-farm activities, while almost three quarters of on-farm labor is provided by
hired workers.
Finally, the most commonly used hypothesis in the literature – the maximization of gross farm profit –
seems to be confirmed only in cluster 5, albeit in combination with two other goals. These farms are
run by the oldest holders, and although they have a relatively small size, they utilize the highest volume of
human labor (1.18 AWU), almost entirely provided by family members. Also, they are heavily dependent on
farm income for their livelihood, while their members present the lowest degree of engagement in
activities outside the farm.
4. CONCLUSIONS
This study uses multi-criteria analysis in order to elicit the farmers’ objectives in an area with tree crops.
The decision variables are the allocation of the household members' labor to on-farm and off-farm
activities, as well as the proportion of family and hired labor in the farm activities.
The application of the proposed methodology to all (70) case study farms with detailed farm-level data
resulted in grouping the initial nine objectives into five, i.e. the maximization of family employment and of
gross and net farm profit, the minimization of farm variable costs, and the achievement of a living
standard comparable to that of the broader region. Then, farms were classified into six clusters, on the basis
of various combinations of those objectives. In each cluster, various sets of objectives in a hierarchical order
were identified, indicating a diversity of strategies followed by farm holders; the usual pattern comprises
three or four different objectives – with one or two prevailing.
The relative importance of the goals reveals that both elements of the composite entity "farm
business/farm household" are represented in the decision making process. The varying hierarchy of
objectives across the farms, accounted for by a series of particular farm and household characteristics, is
another important finding.
REFERENCES
Amador, F., Sumpsi, J.M. and Romero, C., 1998. A non-interactive methodology to assess farmers' utility functions: An
application to large farms in Andalusia, Spain, European Review of Agricultural Economics, 25(1), pp. 92-109.
Arriaza, M. and Gómez-Limón J.A., 2003, Comparative performance of selected mathematical programming models,
Agricultural Systems, 77, pp. 155–171.
Gómez-Limón, J.A. and Berbel, J., 2000. Multicriteria analysis of derived water demand functions: a Spanish case study,
Agricultural Systems, 63, pp. 49-72.
Gómez-Limón, J.A., Arriaza, M. and Riesgo, L., 2003. An MCDM analysis of agricultural risk aversion, European Journal of
Operational Research, 151, pp. 569-585.
Latinopoulos, D., 2008. Estimating the potential impacts of irrigation water pricing using multicriteria decision making
modelling: An application to Northern Greece, Water Resources Management, 22, pp. 1761-1782.
Manos B., Bournaris T. et al., 2007, Regional Impact of Irrigation Water Pricing in Greece under Alternative Scenarios of
European Policy: A Multicriteria Analysis, Regional Studies, 40(9), pp. 1055-1068.
Pedersen, A., Nielsen, H., Christensen T., and B. Hasler (2012) Optimising the effect of policy instruments: a study of
farmers' decision rationales and how they match the incentives in Danish pesticide policy, Journal of Environmental
Planning and Management, 55 (1), 1-17.
Sintori, A., Rozakis, S. and Tsiboukas, C., 2010. Utility-derived supply function of sheep milk: The case of Etoloakarnania,
Greece, Food Economics - Acta Agriculturae Scandinavica, Section C, 7(2-4), pp. 87-99.
Sumpsi, J.M., Amador, F. and Romero, C., 1996 On farmers’ objectives: A multi-criteria approach, European Journal of
Operational Research, 96, pp. 64-71.
APPENDIX
Table A.1: Structural Characteristics (average values for each cluster)
Columns (left to right): (1) Cluster; (2) Number of Farms; (3) Farm Head's Age (years); (4) Utilized Agr. Area (ha); (5) Olive Trees, non-irrigated (ha); (6) Olive Trees, irrigated (ha); (7) Orange groves (ha); (8) Mandarin groves (ha); (9) Equivalent Household Size; (10) Working Adults; (11) Farm Labour [family + hired] (AWU); (12) Hired Labour/On-farm Labour; (13) Family Labour in Direct Marketing (AWU); (14) Off-farm Family Labour (AWU); (15) Total Family Labour (AWU); (16) Off-farm/Total Family Labour; (17) Farm Income/Total Family Income; (18) Variable Costs/Total Costs; (19) Revenue/Variable Costs [if >1: farm is viable in the short run]; (20) Revenue/Total Costs [if >1: farm is viable in the long run].
1 6 50 1,5 1,1 0,3 2,1 2,0 0,30 48% 0,11 3,50 3,76 93% 17% 30% 7,0 2,1
2 17 59 3,9 1,8 1,1 0,7 0,2 2,1 2,4 0,84 33% 0,53 1,21 2,31 52% 51% 40% 6,5 2,6
3 7 60 0,8 0,6 0,1 2,2 2,4 0,10 73% 1,29 1,31 98% 9% 18% 4,7 0,8
4 18 55 2,0 1,2 0,5 0,2 2,4 3,1 0,33 50% 0,16 3,59 3,91 92% 22% 36% 6,6 2,4
5 6 65 1,4 0,3 0,2 0,7 0,1 1,8 2,5 1,18 4% 0,11 0,88 2,13 41% 46% 14% 11,9 1,6
6 16 57 2,9 2,1 0,4 0,3 0,1 2,1 2,6 0,45 41% 0,04 2,56 2,87 89% 35% 44% 7,8 3,4
All Farms 70 57 2,4 1,4 0,4 0,5 0,2 2,1 2,6 0,53 33% 0,20 2,31 2,86 81% 33% 35% 7,1 2,5
Valiakos A., Siskos Y.| From Data Envelopment Analysis to Multi-criteria Decision Support: Application to
Agricultural Units Evaluation in Greece
Abstract
Data envelopment analysis (DEA) is a nonparametric method in operational research and economics for the estimation
of production efficiency frontiers. DEA requires general production and distribution assumptions only. However, if these
assumptions are too weak, efficiency levels may be systematically underestimated in small samples. In this paper a
multi-criteria additive value model is proposed, based on a consistent family of criteria composed of DEA's input and
output criteria. A real-world case study is also conducted, dealing with the evaluation of agricultural units in Greece in
the frame of the European Common Agricultural Policy (CAP). The CAP is set and reformed around providing
producers in the primary sector with direct funding via a single payment scheme based on land area, where farmers
obtain 'entitlements'. The total production and/or the total area are taken into account for the financial aid in the form of a
direct payment. However, since production is no longer part of the evaluation, so-called 'sofa farmers' are also financially
aided. The goal of this study is to propose a methodology that supports financial aid as a reward, in view of the 2013-2020
reform of the policy. Industrial tomato production is described and the method is used as a tool for assessing global values.
The production of industrial tomato in Greece, which involves approximately four hundred farmers, is observed and
analyzed. The UTA ordinal regression approach is proposed in order to assess a set of additive value functions. The
preference information used in the UTA method is given in the form of a partial pre-order on a subset of farmers (reference
set). The most representative additive value function is proposed to be taken into consideration in order to financially
aid the productive farmers. Finally, a suggestion is made towards the new policy, rendering the production-based
approach more effective and allocating the direct payments more objectively.
KEYWORDS
Multi-criteria Decision Analysis, Data Envelopment Analysis, Ordinal regression, Robustness, Common Agricultural
Policy, Agricultural policy reform
1. INTRODUCTION
Data envelopment analysis (DEA) is a nonparametric method in operational research and economics for the
evaluation of relative efficiency of Decision Making Units (DMUs). DEA requires general production and
distribution assumptions (Charnes et al., 1978; Banker et al., 1984). However, if these assumptions are too
weak, efficiency levels may be systematically underestimated in small samples. In this paper a multi-criteria
additive value model, based on a consistent family of criteria composed of DEA's input and
output criteria, is proposed as an alternative to DEA. The UTA ordinal regression approach initiated by Jacquet-Lagrèze
and Siskos (1982) is applied in order to assess a set of additive value functions. An illustrative real world
case study is also conducted dealing with the evaluation of agricultural units in Greece in the frame of the
European Common Agricultural Policy (CAP).
Common Agricultural Policy (CAP) is a system of European Union agricultural subsidies and programs which
has evolved significantly since the late 80s. In 2003 (ECC 1997) the first pillar was reformed away
from a production-oriented policy, introducing the Single Payment Scheme (SPS), also known as the Single Farm
Payment (SFP). Farmers receiving the SFP have the flexibility to produce any commodity on their land.
Therefore, over the following years, products were progressively moved from coupled to decoupled direct
payments. Years of reference were introduced for the cultivations in order to calculate individual
entitlements, i.e. the average total area of the fields the farmer cultivated during the years of reference.
The new SFP is subject to "cross-compliance" conditions relating to environmental, food safety and animal
welfare standards (European Union 2009). Hence, this paper aims at providing a new tool for the evaluation of
farmers, to avoid the "sofa farmers" phenomenon (European Commission 2010). The production of industrial
tomato in Greece during the year 2010, when the financial aid was based on the total production, is
observed and analyzed as a case study. A multi-criteria model is proposed in order to financially aid the
productive farmers (decision making units) not only on the basis of their entitlements (decoupled payment), but also
taking into account an evaluation (a combined decoupled-coupled payment).
The rest of the paper unfolds as follows: In section 2, the DEA and UTA multi-criteria models are briefly
outlined. Section 3 presents the case study and the obtained results. Finally, section 4 concludes the paper.
2. DEA AND UTA MULTI-CRITERIA MODELS

A Data Envelopment Analysis (DEA) problem consists of $n$ Decision Making Units (DMUs)
$D = \{d_j\}$, $j = 1, \ldots, n$, having $m$ inputs $X = \{x_i > 0\}$, $i = 1, \ldots, m$, and $s$
outputs $Y = \{y_r > 0\}$, $r = 1, \ldots, s$. For the $k$-th DMU, in order to obtain the efficiency score in classical
DEA, the following BCC LP problem has to be solved:

$$\max\; h_k(u, v) = \sum_{r=1}^{s} u_r y_{rk} \qquad (1)$$

subject to

$$\sum_{i=1}^{m} v_i x_{ik} = 1 \qquad (2)$$

$$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \leq 0, \quad j = 1, \ldots, n \qquad (3)$$

$$\sum_{r=1}^{s} u_r + \sum_{i=1}^{m} v_i = 1 \qquad (4)$$

$$u_r \geq \varepsilon,\quad v_i \geq \varepsilon \quad \forall r, i$$
From the above LP it is clear that each efficiency score is assessed in the best interest of the unit evaluated.
It is also clear that the number of LPs to be solved is equal to the number of evaluated DMUs. In
such a setting the scores are relative and competitive, as altering the dataset will probably alter the
efficiency scores. Therefore, this method cannot be applied here, since efficiency scores from DEA are relative and
strongly dependent on the dataset. Applying DEA to a small sample of the dataset leads to erroneous efficiency
scores when trying to extrapolate the results to the complete dataset. The DEA frontier is formed from efficient
units, and there is a stability issue concerning the results. Furthermore, as the number of input/output
variables increases, so does the number of efficient units. A sketch of a DEA multiplier LP is given below.
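For illustration, the following sketch solves a DEA multiplier LP for one DMU (Python with scipy). For simplicity it implements the classical constant-returns CCR form of Charnes et al. (1978), i.e. constraints (2)-(3) with lower bounds ε, omitting the extra normalization (4) of the formulation above; the data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_score(X, Y, k, eps=1e-6):
    """Sketch of a DEA efficiency score for DMU k (input-oriented
    multiplier form of the constant-returns CCR model, for simplicity).
    X[j, i]: input i of DMU j; Y[j, r]: output r of DMU j."""
    n, m = X.shape
    _, s = Y.shape
    # Variables: [u_1..u_s, v_1..v_m]; maximize u . y_k == minimize -u . y_k
    c = np.concatenate([-Y[k], np.zeros(m)])
    # (3): u . y_j - v . x_j <= 0 for every DMU j
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    # (2): v . x_k = 1 normalizes the score
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m))
    return -res.fun  # h_k in (0, 1]; a score of 1 means efficient

# Hypothetical data: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])
Y = np.array([[10.0], [12.0], [9.0], [15.0]])
print([round(dea_ccr_score(X, Y, k), 3) for k in range(4)])
```

Running the LP once per unit makes the competitive nature of DEA explicit: each score is computed with the weights most favorable to the unit at hand.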
In this paper a multi-criteria additive value model, based on a consistent family of criteria composed of
DEA's input and output criteria, is proposed as an alternative to DEA, as follows.
Suppose we have $n$ units, utilizing $m$ inputs to produce $s$ outputs. Then $A = \{a_1, a_2, \ldots, a_n\}$
is a finite set of alternative actions and the family of criteria is $G = \{g_i^I, g_r^O\}$, $i = 1, \ldots, m$; $r = 1, \ldots, s$.
Each input criterion is a non-increasing real-valued function on $A$, as follows:

$$g_i^I : A \rightarrow [g_{i*}^I, g_i^{I*}] \subset \mathbb{R} \qquad (5)$$

where $[g_{i*}^I, g_i^{I*}]$ is the criterion evaluation scale, $g_{i*}^I$ and $g_i^{I*}$ are the best and the worst levels of the $i$-th
input criterion respectively, and $g_i^I(a)$ is the evaluation of the performance of action $a$ on the $i$-th input
criterion. Each output criterion is a non-decreasing real-valued function on $A$, as follows:

$$g_r^O : A \rightarrow [g_{r*}^O, g_r^{O*}] \subset \mathbb{R} \qquad (6)$$

where $[g_{r*}^O, g_r^{O*}]$ is the criterion evaluation scale, $g_{r*}^O$ and $g_r^{O*}$ are the worst and the best levels of the $r$-th
output criterion respectively, and $g_r^O(a)$ is the evaluation of the performance of action $a$ on the $r$-th output
criterion. The main target of this approach is the assessment of an additive multi-criteria value system,
which is expressed by the following formulae:
$$u(\mathbf{g}) = \sum_{i=1}^{m} p_i u_i(g_i^I) + \sum_{r=1}^{s} p_r u_r(g_r^O) \qquad (7)$$

under the following constraints:

$$u_i(g_i^{I*}) = 0, \quad u_i(g_{i*}^I) = 1, \quad i = 1, \ldots, m \qquad (8)$$

$$u_r(g_{r*}^O) = 0, \quad u_r(g_r^{O*}) = 1, \quad r = 1, \ldots, s \qquad (9)$$

$$\sum_{i=1}^{s+m} p_i = 1, \qquad p_i \geq 0, \quad i = 1, \ldots, s+m \qquad (10)$$

where $u_i$, $i = 1, \ldots, m$ represent the marginal non-increasing value functions, normalized
between 0 and 1, defined on the respective criteria $g_i^I$; $g_i^{I*}$ and $g_{i*}^I$ are respectively the worst and the best
evaluation levels of the criterion (Figure 1, left); $u_r$, $r = 1, \ldots, s$ represent the marginal non-decreasing value
functions, normalized between 0 and 1, defined on the respective criteria $g_r^O$; $g_{r*}^O$ and $g_r^{O*}$ are
respectively the worst and the best evaluation levels of the criterion (Figure 1, right);
$\mathbf{g} = (g_1^I, g_2^I, \ldots, g_m^I, g_1^O, g_2^O, \ldots, g_s^O)$ is the multi-criteria evaluation vector; and $p_i$ is the relative
positive weight of the function $u_i$.

Figure 1: Marginal value functions of an input criterion (left) and an output criterion (right)
A set of additive value functions is assessed via the UTASTAR ordinal regression method (Siskos and Yannacopoulos,
1985; Jacquet-Lagrèze and Siskos, 2001). Firstly, a reference set of DMUs is selected in order to infer the additive
value functions using the UTASTAR method. Afterwards, post-optimality analysis is applied in
order to evaluate the robustness of the additive model and to calculate the most representative additive
value function. Finally, extrapolation to the whole set of actions is made from the consistent preference
model.
3. CASE STUDY

In the year 2010 the direct payment was decoupled for industrial tomato cultivation in Greece. Until 2010,
funding of tomato production was based on the total field area, taking however into consideration the total
production. From that year on, the annual production of this specific cultivation has been constantly decreasing. A
case study of 565 farmers who were active in the Thessaly region, accounting for 90% of the Greek production of
this crop, is conducted on this set of alternative actions. The case study is formed of six evaluation criteria,
two inputs and four outputs, which are presented in Table 1 and are briefly described below.
Input Criteria

$g_1^I$: Production Cost (€): Value of expenses on fertilizers, pesticides, sprayers, other chemicals, plant values
and other expenses during the year.

$g_2^I$: Labor Cost (€): Cost of the hours of operator, family, and hired farm labor (e.g. tilth, planting, carving)
during the year.

Output Criteria

$g_1^O$: Gross Farm Output (€): Gross value of the final farm product (industrial tomato).

Table 1: Evaluation scales of the criteria
Input Criteria: $g_2^I$ (Labor Cost, €): 2,200 - 200,000
Output Criteria: $g_1^O$ (Gross Farm Output, €): 15,000 - 3,000,000; $g_2^O$ (Financial Aid, €): 700 - 100,000; $g_3^O$ (Capital, €): 1,500 - 120,000; $g_4^O$ (Land, Ha): 4 - 350
A subset of twenty-six alternatives out of the 565 was selected to form the reference set. The UTASTAR method
was applied to this subset of twenty-six farmer profiles and a zero error sum was obtained ($F^* = 0$). Post-optimality
analysis was conducted to obtain a robust result. The barycentric solution of the weights from
the analysis is reported in formula (11), while the global values, the final rankings, as well as the
constructed reference set are presented in Table 2.
Table 2: DM’s Ranking evaluation, and reference actions of 26 farmers with global values from UTASTAR method.
DM’s/Sample Ranking | Global Value | Production Cost (€) | Labor Cost (€) | Gross Farm Output (€) | Financial Aid (€) | Capital (€) | Land (Ha)
1 0.4035 68,362.60 88,530.07 1,186,849.00 26,952.86 17,895.97 160.20
2 0.3983 5,955.98 7,153.22 97,893.00 4,090.32 96,716.39 12.80
3 0.3298 30,516.17 34,679.61 586,874.00 21,509.46 42,914.02 78.40
4 0.3256 7,171.50 5,775.98 142,180.00 19,097.24 40,370.73 11.70
5 0.3243 10,563.92 13,811.82 244,347.00 13,419.73 38,855.44 30.90
6-7 0.3218 17,965.31 18,650.63 386,421.00 27,368.21 8,134.05 40.00
6-7 0.3218 5,082.84 4,169.35 100,850.00 4,382.15 34,059.36 8.50
8-9 0.3217 11,614.75 8,279.25 232,295.00 22,012.69 14,994.18 15.90
8-9 0.3217 12,787.26 10,051.00 259,182.00 22,549.36 7,585.51 20.00
10 0.3211 9,453.03 9,510.74 210,067.00 21,990.17 13,029.46 20.00
11 0.3208 19,374.07 16,843.65 385,786.00 11,194.64 9,844.48 35.00
12 0.3206 15,989.84 13,659.32 325,393.00 9,620.05 7,484.52 28.00
13 0.3199 7,023.26 28,730.96 154,087.00 27,292.46 40,843.08 75.10
14 0.3194 2,519.14 3,069.81 49,983.00 17,620.68 1,851.40 7.00
15 0.3192 14,899.30 14,505.27 294,297.00 6,490.65 6,917.73 31.30
16-21 0.3189 11,558.25 21,325.90 204,073.00 20,814.84 36,533.85 44.50
16-21 0.3189 7,814.84 6,937.55 152,868.00 5,751.71 13,413.66 14.60
16-21 0.3189 6,643.94 8,694.34 135,849.00 14,744.20 8,627.85 20.00
16-21 0.3189 8,211.64 9,076.68 170,607.00 7,924.17 13,680.84 20.00
16-21 0.3189 11,058.83 15,768.03 177,232.00 41,844.39 20,718.14 31.00
16-21 0.3189 5,890.17 7,718.46 94,710.00 33,815.83 6,068.83 14.60
22 0.3184 3,163.09 4,468.38 62,580.00 9,581.83 14,963.87 10.50
23 0.3183 22,523.37 24,299.45 415,586.00 13,581.08 19,085.81 54.80
24 0.3163 15,021.37 16,893.56 257,906.00 8,409.97 20,388.46 28.30
25 0.3162 26,400.70 28,525.61 467,139.00 15,687.97 14,154.30 45.50
26 0.3149 21,445.32 29,771.78 383,530.00 19,876.44 7,670.98 55.00
The preference model created from the subset, consistent with the DM’s policy, is extrapolated to the whole set.
The results of the robust ordinal regression (Greco et al. 2008, Greco et al. 2010) emerge after the
implementation of the additive value model. The DEA BCC scores as well as the UTASTAR final global values of all
units are displayed in Figure 2.
Figure 2: (left) DEA scores of all DMUs, (right) Global values - Extrapolation to the whole set of units.
In Figure 2 the DEA scores of all DMUs, as well as the final global values from the UTASTAR method for all units of
the dataset, are presented. In DEA most of the evaluated units receive the maximum score and are regarded as
efficient. In UTASTAR, however, the final global values of the units are obtained from the evaluation of the
criteria, and the ranking of the subset is consistent with the DM’s preference model. Therefore, most of the
global values are less than one. The global value of each farmer is actually an index that can be used in the SPS
in order to prevent "sofa farmers" from receiving EU money.
4. CONCLUSION
In this paper we illustrated the evaluation of units with input and output criteria, using robust ordinal
regression (UTASTAR) as an alternative approach to DEA. The family of criteria consists of input and output
criteria, similarly to DEA. As the input criteria values increase, the global additive value decreases, while the
output criteria values have the opposite effect. Although DEA is a powerful approach, its competitive
nature makes it unsuitable for this application.
Furthermore, the UTASTAR evaluation could provide a necessary tool to evaluate farmers in the single payment
scheme. Finally, a suggestion is made towards the new CAP, rendering the production-based approach
more effective and allocating the direct payments more objectively. Although the payment is decoupled, a
coupled evaluation is taken into consideration for the final payment. The proposed decoupled payment scheme
is likely to eliminate the phenomenon of "sofa farmers", since farmers would be financially aided after being
evaluated. Extreme ranking analysis (Kadzinski et al. 2012) can also be performed to evaluate each farmer's
best and worst possible values. With this approach, the DM is able to distinguish how robust the initial
ranking is, and to classify the global values into ranges.
ACKNOWLEDGEMENT
This study is funded and supported by the Institute of National Funds of Greece, as the first author
holds a financial scholarship.
REFERENCES
Banker R.D., Charnes A., & Cooper W.W., 1984. Some models for estimating technical and scale inefficiencies in data
envelopment analysis. Management Science, Vol. 30, No. 9, pp. 1078–1092.
Charnes A., Cooper W.W., & Rhodes E., 1978. Measuring the efficiency of decision making units. European Journal of
Operational Research, Vol. 2, No. 6, pp. 429–444.
Commission of the European Communities, 1997. Agenda 2000 - Communication: For a Stronger and Wider Union,
COM(97) 2000, Office for Official Publications of the European Communities, Luxembourg.
European Commission, 2010. The CAP Towards 2020: Meeting the Food, Natural Resources and Territorial Challenges of
the Future. Communication from the Commission to the European Parliament, the Council, the European Economic and
Social Committee and the Committee of the regions, Brussels.
European Union, 2009. Council Regulation (EC) No 1782/2003 of 29 September 2003 establishing common rules for
direct support schemes under the common agricultural policy and establishing certain support schemes for farmers and
amending Regulations (EEC) No 2019/93, (EC) No 1452/2001, (EC) No 1453/2001, (EC) No 1454/2001, (EC) 1868/94, (EC)
No 1251/1999, (EC) No 1254/1999, (EC) No 1673/2000, (EEC) No 2358/71 and (EC) No 2529/2001.
Greco S., Mousseau V., Słowiński R., 2008. Ordinal regression revisited: Multiple criteria ranking using a set of additive
value functions, European Journal of Operational Research, Vol. 191, No. 2, pp. 416-436.
Greco S., Słowiński R., Figueira J., Mousseau V., 2010. Robust ordinal regression, Trends in multiple criteria decision
analysis, M. Ehrgott, S. Greco, and J. Figueira (eds.), Springer, Berlin.
Jacquet-Lagrèze E., Siskos J., 1982. Assessing a set of additive utility functions for multicriteria decision making,
European Journal of Operational Research, Vol. 10, No. 2, pp. 151-164.
Jacquet-Lagrèze, E., Siskos, J., 2001. Preference disaggregation: 20 years of MCDA experience, European Journal of
Operational Research, Vol. 130, No. 2, pp. 233-245.
Kadzinski M., Greco S., Slowinski R., 2012. Extreme ranking analysis in robust ordinal regression, Omega, Vol. 40, pp.
488-501.
Siskos Y., Yannacopoulos D., 1985. UTASTAR: An ordinal regression method for building additive value functions.
Investigação Operacional, Vol. 5, No. 1, pp. 39–53.
Keramydas C., Tsolakis N., Vlachos D., Iakovou E.| Revenue Management of Perishable Products with
Dual Sourcing and Emergency Orders
Abstract
Today, dietary needs and globalization highlight the significance of agrifood supply chain management. In addition,
the short life-cycle of agricultural commodities further promotes novel pricing schemas, which are usually related to the
quality level (freshness) of the products. To that effect, in this paper we study ordering strategies for a perishable
product, the price of which is related to its freshness. In this context, we assume a retailer who merchandises a single
product at a given price. However, the retailer has the option to use a more expensive, emergency supplier to boost
revenues when the products from the regular channel have deteriorated. We also assume that consumers
prefer to purchase fresh products at a premium price, while, in case of a fresh product stock-out, they are willing to buy
inferior quality products at a lower price. A profit maximization model (revenue management) is developed to
determine the optimal regular and emergency quantities to be procured. Assuming a newsvendor type analysis, we
provide an analytic expression for the expected total system profit and closed form expressions for the optimal (regular
and emergency) ordering quantities. Based on numerical experimentation, we demonstrate that emergency
replenishments may have, under certain conditions, a pivotal role in the system’s expected total profit. Additional
sensitivity analyses demonstrate the significant effect of market price, procurement cost, and lost sales cost of fresh
and deteriorated products on profits, and the corresponding procurement policies. Finally, a threshold for shifting from
single sourcing to dual sourcing policies is indicated in each case.
KEYWORDS
Agrifood supply chain management, inventory management, perishable products, revenue management.
1. INTRODUCTION
Today, food supply chain management has attracted the interest of both academia and business due to the
associated challenges and opportunities. This interest is further fueled by the globalization of food markets
that involves greater transportation distances and higher competition (Monteiro, 2007; Ahumada and
Villalobos, 2009). This is even more prominent regarding Agrifood Supply Chains (AFSCs) where the
perishability of goods and the underlined food safety concerns introduce new challenges and increase the
complexity of the necessary supply chain management operations (Yu and Nagurney, 2013). Indicatively,
every year roughly ⅓ of edible products is lost or wasted globally (Widodo et al., 2006; FAO, 2011).
To that end, the plethora of agrifood suppliers and the consumers’ diversified dietary preferences add further
complexity to market demand satisfaction and revenue maximization. Findings in the
literature confirm that corporations elaborate alternative purchasing options to improve their performance
in terms of profit, customer service level and environmental impact (e.g. Warburton and Stratton, 2005;
Cachon and Terwiesch, 2009; Rosic and Jammernegg, 2013). This strategy is even more pronounced in the
agrifood sector due to the short product shelf-life and the challenge to develop dynamic pricing policies
closely related to product quality deterioration (Elmaghraby and Keskinocak, 2003).
In this context, we tackle a dual sourcing problem of a single perishable good considering an inexpensive,
regular supplier and a more expensive, emergency supplier. Assuming a newsvendor type analysis, we
provide an analytic expression for the expected total system profit and closed form expressions for the
optimal ordering quantities. Based on numerical experimentation, we demonstrate that emergency
replenishments may have, under certain conditions, a pivotal role in total system revenue.
The rest of the paper is organized as follows. Section 2 briefly summarizes the relevant literature.
The system description is discussed in Section 3, while Section 4 contains the proposed model and the
developed solution algorithm. The implementation of the model on a numerical example is presented in
Section 5. Finally, we sum up in Section 6 with conclusions and future research directions.
2. LITERATURE REVIEW
Inventory management of perishable products has attracted considerable research efforts in the past (e.g.
Nahmias, 1982; Raafat, 1991; Goyal and Giri, 2001; Wee and Law, 2001; Li et al. 2010). A critical constituent
of perishable inventory management refers to the differentiated pricing strategies in order to satisfy
demand and optimize profit (Elmaghraby and Keskinocak, 2003). Thus, revenue management is a key for
retailers to maximize their revenues and optimize their supply chain performance. As consumers prefer to
purchase the freshest produce at a reasonable price (Bodily and Weatherford, 1995; Wilcock et al., 2004),
the offering of price discounts is imperative to allure consumers and satisfy the price-dependent demand in
food markets (Lowe and Preckel, 2004; Lusk and Hudson, 2004; Ahumada and Villalobos, 2009).
There are also cases where, in order to maximize profit, emergency orders for fresh
commodities need to be placed, so as to reap the potential of market demand. Several indicative research efforts
have investigated periodic review inventory systems that include regular and emergency replenishment
strategies with a single period and negligible lead times (Barankin, 1961; Neuts, 1964). Recently, Inderfurth
et al. (2013) approached the dual sourcing problem where a retailer has the option to employ a long-term
supplier and a spot market. Nevertheless, the work that is closest to the present study is that of Tagaras and
Vlachos (2001) who analyzed a periodic review replenishment system with two replenishment modes, while
they examined a heuristic optimization algorithm and an approximate model for significant cost reductions.
3. SYSTEM DESCRIPTION
In the system that we study here, we consider a grocery retailer who merchandises a single perishable
product with a limited shelf-life. For our revenue management analysis we assume that the product shelf-
life is divided into two discrete stages: (i) the “freshness stage” (Stage I), during which the product
maintains all its initial physical and nutritional properties, and (ii) the “deteriorated stage” (Stage II), during
which few of the aforementioned properties have decayed but the product can be still traded by the
retailer. To that end, a fresh product unit is sold at a price p, whereas during the successive, deteriorated
Stage II the respective selling price is lower (p’<p), considering that the product expires at the end of this
Stage (Banerjee and Turner, 2012; Tajbakhsh et al., 2011). The retailer has to satisfy a stochastic demand
that is independent of the availability of fresh products. To this effect, we assume that consumers are
willing to pay a premium price to purchase fresh products (Ferguson and Ketzenberg, 2006). On the other
hand, in case a fresh product stock-out occurs, the rest of the demand is satisfied by the less expensive,
inferior-quality products. Thus, the retailer’s revenues depend on the availability of fresh produce.
In order to satisfy the market demand, the retailer has contracted agreements with two agricultural
producers that can supply the required product quantities. As the two suppliers differ in
terms of distance from the retailer, they pursue different pricing strategies. Specifically, the first (regular)
supplier is located offshore, but is rather cheap and inflexible. The second (emergency) supplier is at a
closer distance from the retailer and can deliver any requested order on short notice, but at a higher cost.
Therefore, the retailer has the option to periodically place orders with the regular supplier, with a period
equal to the shelf-life of the product. However, the emergency supplier may be used to increase the retailer’s
revenue when the products of the regular order become deteriorated during Stage II. Therefore, the
decisions that the grocery retailer has to consider are: (i) the necessity of placing an emergency order, and
(ii) the order sizes for both regular and emergency orders that maximize the expected total profit (ETP) of the
system. The only source of revenue for the retailer is product sales, while the associated costs include the
classic inventory-related costs, i.e. procurement cost and lost sales costs.
4. MODEL DEVELOPMENT
The above system is modelled as follows. Supplier 1 is the regular supplier, while Supplier 2 is the
emergency supplier. The retailer places an order of size Q1 to the regular supplier every SL time units, where
SL is the product’s shelf-life. The shelf-life is split into Stage 1, where the product is fresh, and into Stage 2,
where the product is deteriorated. The market price for the fresh produce is p whereas the deteriorated
products are valued by a lower price p’ (p>p’). Naturally, a product aged more than its shelf-life should be
disposed of, without any revenue for the retailer. The retailer may also place emergency orders during the
Stage 2 of the regular order. Herein, we examine a model that allows for a single emergency order of size Q2
at the beginning of Stage 2. We assume that the procurement cost from Supplier 2, c2, is higher than the
procurement cost from Supplier 1, c1 (c2>c1). We further assume that although the regular replenishment
lead time is positive, the transportation of produce occurs in such a manner that the decaying process is
retarded. Additionally, the lead time of the emergency supplier is considered negligible.
Demands during Stage 1 and Stage 2 of the review period are continuous random variables denoted by X1
and X2, respectively. These are independent and identical distributed (i.i.d.) across the two Stages. All
unsatisfied demand is charged by a lost sales cost, denoted by b. We assume that the customers prefer to
purchase the more expensive fresh products and then, when fresh products are no longer available, they
purchase the less expensive, deteriorated products. Based on this assumption, we also assume that the
emergency ordered products are only sold as fresh. The validity of this assumption can be justified taking
into account that the size of the expensive emergency order is relative small and since these products are
sold with priority as fresh produce during Stage 2, they will be probably sold out by the end of Stage 2. The
few remaining (if any) may be sold during Stage 1 of the consecutive period, but with lower priority
compared to the fresh massive regular order that has just delivered. This combined probability is negligible.
The latter assumption renders possible a single period (newsvendor typed) mathematical analysis, since all
products are ordered and delivered in one period. Table 1 summarizes the model parameters and indices.
Symbol Description
i Supplier index (i = 1: regular supplier, i = 2: emergency supplier)
j Stage of shelf-life (j = 1, 2)
p Retailer's unit price for fresh products
p′ Retailer's unit price for deteriorated products
ci Procurement cost for supplier i (i = 1, 2)
b Lost sales cost
Qi Order quantity for supplier i (i = 1, 2)
Xj Stochastic demand during stage j (j = 1, 2)
μj Mean demand during stage j (μj = E[Xj])
fj(xj) Probability density function of Xj (j = 1, 2)
Fj(xj) Cumulative distribution function of Xj (j = 1, 2)
Decision making in this system involves the determination of the optimal order quantities Q1 and Q2 that
maximize the expected total profit, ETP(Q1, Q2), during a review period:
(Equation 1: ETP(Q1, Q2), the expected total profit over a review period, written as piecewise double integrals of the profit function over the demand densities f1 and f2.)
By applying first-order conditions to Equation 1 we obtain the following closed-form equations for the
determination of the optimal solutions.
\frac{\partial}{\partial Q_1} ETP(Q_1, Q_2) = 0 \Rightarrow

(p + b - c_1) - p F_1(Q_1) + p'(\mu_2 - Q_2) f_1(Q_1) - b F_1(Q_1) F_2(Q_2) + p' Q_2 f_1(Q_1) F_2(Q_2) - p' f_1(Q_1) \int_0^{Q_2} x_2 f_2(x_2)\,dx_2
- b \int_{Q_2}^{Q_1+Q_2} f_2(x_2) F_1(Q_1+Q_2-x_2)\,dx_2 - b(Q_1+Q_2) \int_{Q_2}^{Q_1+Q_2} f_2(x_2) f_1(Q_1+Q_2-x_2)\,dx_2
+ b \int_{Q_2}^{Q_1+Q_2} x_2 f_2(x_2) f_1(Q_1+Q_2-x_2)\,dx_2 + b \int_0^{Q_1} x_1 f_1(x_1) f_2(Q_1+Q_2-x_1)\,dx_1 = 0

Equation 2
and
\frac{\partial}{\partial Q_2} ETP(Q_1, Q_2) = 0 \Rightarrow

(p + b - c_2) - p' F_1(Q_1) - (p + b) F_2(Q_2) + p' F_1(Q_1) F_2(Q_2) - b \int_{Q_2}^{Q_1+Q_2} f_2(x_2) F_1(Q_1+Q_2-x_2)\,dx_2
- b(Q_1+Q_2) \int_{Q_2}^{Q_1+Q_2} f_2(x_2) f_1(Q_1+Q_2-x_2)\,dx_2 + b \int_{Q_2}^{Q_1+Q_2} x_2 f_2(x_2) f_1(Q_1+Q_2-x_2)\,dx_2
+ b \int_0^{Q_1} x_1 f_1(x_1) f_2(Q_1+Q_2-x_1)\,dx_1 = 0

Equation 3
Although it is not possible to prove that Equations 2 and 3 are global optimality conditions, due to the
complexity of the Hessian matrix in the general distribution case, this property can be proven for specific
distributions, as in the numerical example below.
5. NUMERICAL EXAMPLE
To illustrate the applicability of the model as well as a few managerial insights, we implement the model for
a specific numerical example (base case). In this example we assume that the durations of Stage 1 and Stage
2 are equal, and that the demand during each stage follows an exponential distribution with a
mean of 1/λ1 = 1/λ2 = 100 items per Stage. The system parameters are listed below:
c1=6 [€⁄unit] c2=9 [€⁄unit] p=16 [€⁄unit] p’=9 [€⁄unit] b=8 [€⁄unit]
f1(x1) = λ1 e^{−λ1 x1}, f2(x2) = λ2 e^{−λ2 x2}, F1(x1) = 1 − e^{−λ1 x1}, F2(x2) = 1 − e^{−λ2 x2}
Under the exponential demand assumption, the ETP (Equation 1) becomes very simple and is easily proven
to be concave with respect to Q1 and Q2. Then, by applying the first-order conditions, we obtain the optimal
order quantities Q1=198 [units] and Q2=25 [units] that yield an ETP=557.35 [€]. To further examine the
behavior of the model, a sensitivity analysis on the ETP, Q1, and Q2 is conducted with respect to economic
parameters (costs and prices). To that end, three numerical "what-if" analyses are performed in order to
elaborate on the role of the selling prices (analysis 1), the procurement costs (analysis 2), and the lost sales
cost (analysis 3), respectively, in the system's behavior.
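For concreteness, the base case can be reproduced numerically. The following is a minimal Monte-Carlo sketch (in Python) of the selling rules described above: Stage 1 demand is served fresh from Q1; Stage 2 demand is served first from the emergency order Q2 (sold as fresh) and then from the deteriorated leftovers of Q1 at the lower price; unmet demand is penalised at b; and end-of-period leftovers are disposed of. The function name etp and the grid limits are illustrative, not part of the original study; the simulated optimum should land near the reported values Q1 = 198, Q2 = 25.

import numpy as np

rng = np.random.default_rng(0)
p, p_det, b, c1, c2 = 16.0, 9.0, 8.0, 6.0, 9.0   # base-case parameters
n = 200_000                                       # simulated review periods
x1 = rng.exponential(100.0, n)                    # Stage 1 demand, mean 1/λ1 = 100
x2 = rng.exponential(100.0, n)                    # Stage 2 demand, mean 1/λ2 = 100

def etp(q1, q2):
    # Stage 1: fresh sales from the regular order; excess demand is lost.
    fresh1 = np.minimum(x1, q1)
    lost1 = np.maximum(x1 - q1, 0.0)
    # Stage 2: the emergency order is sold as fresh with priority, then the
    # deteriorated leftovers of the regular order are sold at the lower price.
    leftovers = np.maximum(q1 - x1, 0.0)
    fresh2 = np.minimum(x2, q2)
    deteriorated = np.minimum(np.maximum(x2 - q2, 0.0), leftovers)
    lost2 = np.maximum(x2 - q2 - leftovers, 0.0)
    profit = (p * (fresh1 + fresh2) + p_det * deteriorated
              - b * (lost1 + lost2) - c1 * q1 - c2 * q2)
    return profit.mean()

# Coarse grid search around the reported optimum (Q1 = 198, Q2 = 25).
best = max(((etp(q1, q2), q1, q2)
            for q1 in range(150, 251, 2)
            for q2 in range(0, 61, 2)), key=lambda t: t[0])
print(best)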
Sensitivity Analysis 1
Figure 1 depicts the impact of the price of deteriorated products on expected total profit and optimal order
quantities (the optimal Q1 and Q2 are plotted as stacked areas; thus the total order quantity Q1+Q2 during
the period is also depicted). The horizontal axis represents the ratio of the deteriorated produce unit price
divided by the fresh produce unit price, rp=p’⁄p. In general, as the price of the inferior-quality produce units
approaches the fresh units' price, the model indicates the need for higher regular order quantities. The
emergency orders have a significant role when p′ << p. However, there is a threshold value (rp ≈ 70% in this
example), above which emergency sourcing is no longer beneficial, i.e. Q2 = 0. Additionally, above this
threshold, the optimal regular order quantity seems to stabilize (Q1 ≅ constant), as does the rate of increase
of the ETP function. The quantities above the dashed line (average demand) correspond to the fraction
of ordered products that will eventually become obsolete and will thus be disposed of. Finally, the expected
total profit is an increasing function of this ratio (rp).
Sensitivity Analysis 2
Figure 2 depicts the impact of the procurement cost on expected total profit and optimal order quantities.
The horizontal axis represents the ratio of the procurement unit price for Supplier 1 divided by the
procurement unit price for Supplier 2, rc = c1/c2. In general, when the procurement cost of the regular
supplier approaches that of the emergency supplier, the model indicates the need for smaller regular and
bigger emergency orders. This behavior is reasonable, since the fresh, emergently ordered products are
attractive for the customers and can be purchased at almost the same cost as the regular orders.
However, there is a threshold value (rc ≅ 30% in this example) below which emergency sourcing is no
longer attractive and the retailer should shift to the single, regular-sourcing policy, i.e. Q2 = 0. Finally,
we observe that the expected total profit is a decreasing function of rc.
Sensitivity Analysis 3
Figure 3 depicts the impact of the lost sales cost on expected total profit and optimal order quantities. The
horizontal axis represents the ratio of the lost sales/penalty cost divided by the procurement unit cost from
the regular supplier, rb = b/c1. In general, when the lost sales cost increases, the optimal ordering policies
tend to place larger orders with both suppliers in order to shield the retailer against high penalty costs.
Figure 1. Impact of the deteriorated-product price on the expected total profit and optimal order quantities.
Figure 2. Impact of the procurement price on the expected total profit and optimal order quantities.
Figure 3. Impact of the lost sales cost on the expected total profit and optimal order quantities.
6. CONCLUSIONS
In this manuscript we study a supply chain of a perishable product with a more expensive emergency
replenishment option. An analytic model for the determination of the optimal order quantities has been
developed. A sensitivity analysis with respect to specific cost parameters documents that under certain
circumstances the emergency ordering option can increase total revenue and profit. Examining the effect of
demand variability and skewness by employing different demand distributions could be an interesting
extension of this work.
ACKNOWLEDGEMENT
This research has been conducted in the context of the GREEN-AgriChains project, which is funded by
the European Community’s Seventh Framework Programme (FP7-REGPOT-2012-2013-1) under Grant
Agreement No. 316167. All the above reflect only the authors’ views; the European Union is not liable for
any use that may be made of the information contained herein.
REFERENCES
Ahumada O. and Villalobos J.R., 2009. Application of planning models in the agri-food supply chain: a review, European
Journal of Operational Research, Vol. 196, No. 1, pp.1–20.
Banerjee P.K. and Turner T.R., 2012. A flexible model for the pricing of perishable assets, Omega, Vol. 40, No. 5, pp.533–
544.
Barankin E.W., 1961. A delivery-lag inventory model with an emergency provision, Naval Research Logistics Quarterly,
Vol. 8, No. 3, pp.285–311.
Bodily S.E. and Weatherford L.R., 1995. Perishable-asset revenue management: Generic and multiple-price yield
management with diversion, Omega, Vol. 23, No. 2, pp.173–185.
Cachon G. and Terwiesch C., 2009. Matching supply with demand ‒ An introduction to operations management.
McGraw-Hill, Columbus, OH, USA.
Elmaghraby W. and Keskinocak P., 2003. Dynamic pricing in the presence of inventory considerations: Research
overview, current practices, and future directions, Management Science, Vol. 49, No. 10, pp.1287-1309.
FAO, 2011. Global food losses and food waste. Food and Agriculture Organization of the United Nations, Rome.
Ferguson M. and Ketzenberg M.E., 2006. Information sharing to improve retail product freshness of perishables,
Production and Operations Management, Vol. 15, No. 1, pp.57–73.
Goyal S.K. and Giri B.C., 2001. Recent trends in modeling of deteriorating inventory, European Journal of Operational
Research, Vol. 134, pp.1–16.
Inderfurth K., Kelle P. and Kleber R., 2013. Dual sourcing using capacity reservation and spot market: Optimal
procurement policy and heuristic parameter determination, European Journal of Operational Research, Vol. 225, No. 2,
pp.298–309.
Li R., Lan H. and Mawhinney J.R., 2010. A review on deteriorating inventory study, Journal of Service Science and
Management, Vol. 3, pp.117–129.
Lowe T.J. and Preckel P.V., 2004. Decision technologies for agribusiness problems: a brief review of selected literature
and a call for research, Manufacturing & Service Operations Management, Vol. 6, No. 3, pp.201–208.
Lusk J.L. and Hudson D., 2004. Willingness-to-pay estimates and their relevance to agribusiness decision making, Review
of Agricultural Economics, Vol. 26, No. 2, pp.152–169.
Monteiro D.M.S., 2007. Theoretical and empirical analysis of the economics of traceability adoption in food supply
chains, Ph.D. Thesis, University of Massachusetts Amherst, Amherst, MA, USA.
Nahmias S., 1982. Perishable inventory theory: a review, Operations Research, Vol. 30, pp.680–708.
Neuts M.F., 1964. An inventory model with an optional lag time, Journal of the Society for Industrial and Applied
Mathematics, Vol. 12, No. 1, pp.179-185.
Raafat F., 1991. Survey of literature on continuously deteriorating inventory model, Journal of the Operational Research
Society, Vol. 42, pp.27–37.
Rosic H. and Jammernegg W., 2013. The economic and environmental performance of dual sourcing: A newsvendor
approach, International Journal of Production Economics, Vol. 143, No. 1, pp.109-119.
Tagaras G. and Vlachos D., 2001. A Periodic review inventory system with emergency replenishments, Management
Science, Vol. 47, No. 3, pp.415–429.
Tajbakhsh M.M. Lee C. and Zolfaghari S., 2011. An inventory model with random discount offerings, Omega, Vol. 39, No.
6, pp.710–718.
Warburton R.D. and Stratton R., 2005. The optimal quantity of quick response manufacturing for an onshore and
offshore sourcing model, International Journal of Logistics: Research and Applications, Vol. 8, No. 2, pp.125-141.
Wee H.-M. and Law S.-T., 2001. Replenishment and pricing policy for deteriorating items taking into account the time–
value of money, International Journal of Production Economics, Vol. 71, pp.213–220.
Widodo K.H., Nagasawa H., Morizawa K. and Ota M., 2006. A periodical flowering–harvesting model for delivering
agricultural fresh products, European Journal of Operational Research, Vol. 170, No. 1, pp.24–43.
Wilcock A., Pun M., Khanona J., Aung M., 2004. Consumer attitudes, knowledge and behaviour: a review of food safety
issues, Trends in Food Science & Technology, Vol. 15, No. 2, pp.56–66.
Yu M. and Nagurney A., 2013. Competitive food supply chain networks with application to fresh produce, European
Journal of Operational Research, Vol. 224, No. 2, pp.273–282.
Miguez G., Xavier A.E., Maculan N. | Use of Bi-Hyperbolic Activation Function to Optimize the Performance of Artificial Neural Networks
Abstract
One of the most used tools for training artificial neural networks is the backpropagation algorithm. However, in some
practical applications, it may be very slow. This paper proposes a new strategy based on the use of the Bi-hyperbolic
function, which offers more flexibility and a faster evaluation time. The efficiency and the discrimination capacity of the
proposed methodology are shown through a set of computational experiments. Comparisons were done with
traditional problems of the literature.
KEYWORDS
Optimization; backpropagation algorithm; neural networks; activation functions; Bi-hyperbolic function
1. INTRODUCTION
Artificial Neural Networks (ANNs) have been increasingly used for a large group of applications. The
Multilayer Perceptron (MLP) is the most widely used ANN model. The wide range of applications demands an
efficient training algorithm and, according to the literature, the backpropagation algorithm is the most
common. It is computationally efficient and solved the problem of backward propagation of the error for
multilayer ANNs. However, some problems are still found in the use of this algorithm, which limit its
broader application. In particular, it usually takes a long time to process; this slow processing prevents its
use in many practical applications, as Schiffmann et al. (1994) and Otair et al. (2005) have discussed. One
of the factors believed to be responsible for slowing this process is the activation function embedded in the
neurons of the network. The proposal presented in this paper uses a new activation function, the
Bi-hyperbolic function, which shows the necessary characteristics and is faster to compute than other
sigmoid functions, as Xavier (2005) states.
This paper is organized as follows. In Section 2, we briefly describe the structure of an ANN and the
Backpropagation algorithm. In Section 3, we present some computational experiments executed to
compare the performance of the Bi-hyperbolic function against the Logistic function. The convergence and
generalization aspects, as well as the processing time, were evaluated, and the obtained results were
compared with classifiers presented in the literature. Conclusions are drawn in Section 4, where we explain the
characteristics of this work that gave these stimulating results.
2. ARTIFICIAL NEURAL NETWORKS AND BACKPROPAGATION
ANNs work by building connections between the mathematical processing units called neurons. Knowledge is
coded in the network by the strength of the connections between different neurons, called weights, and by
the generation of neuron layers that work in parallel. The computational power of ANNs, Haykin (2001)
explains, derives from its distributed parallel structure and its ability to learn and generalize.
Artificial neurons are the processing units of ANNs and their activation is obtained by applying the weighted
sum of the values of each input to a transfer function that may or may not activate the output.
The activation or transfer function is one of the most important components of the artificial neuron. Its aim
is to limit the valid range of the neuron output signal to a finite interval. The use of a sigmoid function has
been recommended since the early research on backpropagation and neural learning, as can be seen in
Grossberg (1982), Hopfield (1984) and Williams (1986), and, although the most commonly used activation
function found in the literature is the Logistic function, a series of sigmoid-type functions have been used,
as can be seen in Kalman and Kwasny (1992). An important point to be considered when using a sigmoid function is
the saturation phenomenon, which can significantly slow down learning due to a very flat error surface, as
explained by Gori and Tesi (1992) and Schiffmann et al. (1994).
We propose the use of a new activation function, the Symmetric Bi-hyperbolic function, described by
Xavier (2005), which has the desirable properties of being everywhere differentiable, having its image in
the interval [0, 1] and being symmetric. It is defined by:
\varphi(v, \lambda, \tau) = \sqrt{\lambda^2 (v + 1/(4\lambda))^2 + \tau^2} - \sqrt{\lambda^2 (v - 1/(4\lambda))^2 + \tau^2} + 1/2
The existence of two parameters, one more than in other activation functions, gives this function more
flexibility to represent the phenomena modeled with artificial neural networks, and the capacity to
approximate any function more compactly, with fewer neurons.
Through the proper manipulation of these parameters, this function offers the possibility of working with
the saturation phenomenon more conveniently, as well as the possibility of avoiding unwanted local
minima.
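To make the function concrete, the following is a minimal numpy sketch of the Symmetric Bi-hyperbolic function (in the form reconstructed above) and its derivative, side by side with the Logistic function. The parameter names lam and tau and their default values are illustrative choices only.

import numpy as np

def bi_hyperbolic(v, lam=1.0, tau=0.5):
    # Difference of two hyperbola branches, shifted so the image is (0, 1).
    a = 1.0 / (4.0 * lam)
    return (np.sqrt(lam**2 * (v + a)**2 + tau**2)
            - np.sqrt(lam**2 * (v - a)**2 + tau**2) + 0.5)

def bi_hyperbolic_prime(v, lam=1.0, tau=0.5):
    # Derivative used by backpropagation; it stays away from zero longer
    # than the logistic derivative, which mitigates saturation.
    a = 1.0 / (4.0 * lam)
    return (lam**2 * (v + a) / np.sqrt(lam**2 * (v + a)**2 + tau**2)
            - lam**2 * (v - a) / np.sqrt(lam**2 * (v - a)**2 + tau**2))

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

v = np.linspace(-6, 6, 5)
print(bi_hyperbolic(v))   # values in (0, 1), approaching 0 and 1 at the tails
print(logistic(v))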
The capacity of a neural network is affected by: the size and efficiency of the training data; by the
architecture of the network and the number of processors in hidden layers; and by the complexity of the
problem. Heuristics are used, together with a number of trial-and-error tasks, in the definition of the
network architecture. The main objective is to obtain a network that generalizes, rather than
memorizes, the patterns used in training, as explained in Hornik (1989), Hecht-Nielsen (1989) and Stathakis
(2009).
Training an MLP consists of adjusting the weights and thresholds of its units in order to obtain the desired results.
When a pattern is initially introduced to the network it produces an output and, after measuring the error
between the given and the desired output, the weights are adjusted to reduce this distance.
The most used algorithm for training MLP networks is the Backpropagation, which operates in a two-step
sequence. In the first step, a pattern is presented to the input layer of the network. The signal flows through
the network, layer by layer, until an answer is produced by the output layer. In the second step, the given
output is compared with the desired answer for this particular pattern. If it is incorrect, the error is
calculated and, as described by Haykin (2001), propagated from the output to the input layer,
with the connection weights of the inner layers being modified as the error is retro-propagated.
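As an illustration of this two-step sequence (and not of the experimental setup of this paper), the following sketch trains a one-hidden-layer MLP on the toy XOR patterns using the Bi-hyperbolic activation defined above; the network size, learning rate and number of epochs are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)

def act(v, lam=1.0, tau=0.5):          # Bi-hyperbolic activation (see above)
    a = 1.0 / (4.0 * lam)
    return (np.sqrt(lam**2*(v+a)**2 + tau**2)
            - np.sqrt(lam**2*(v-a)**2 + tau**2) + 0.5)

def act_prime(v, lam=1.0, tau=0.5):
    a = 1.0 / (4.0 * lam)
    return (lam**2*(v+a)/np.sqrt(lam**2*(v+a)**2 + tau**2)
            - lam**2*(v-a)/np.sqrt(lam**2*(v-a)**2 + tau**2))

# Toy data: XOR, a classic non-linearly-separable pattern set.
X = np.array([[0,0],[0,1],[1,0],[1,1]], float)
T = np.array([[0],[1],[1],[0]], float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 0.5

for epoch in range(2000):
    # Step 1 (forward): the pattern flows layer by layer to the output.
    v1 = X @ W1 + b1; h = act(v1)
    v2 = h @ W2 + b2; y = act(v2)
    # Step 2 (backward): the output error is retro-propagated and the
    # connection weights of the inner layers are adjusted.
    d2 = (y - T) * act_prime(v2)
    d1 = (d2 @ W2.T) * act_prime(v1)
    W2 -= eta * h.T @ d2; b2 -= eta * d2.sum(0)
    W1 -= eta * X.T @ d1; b1 -= eta * d1.sum(0)

print(np.round(y, 3))   # should approach [0, 1, 1, 0]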
3. COMPARATIVE STUDY
The proposal of this paper is to address the problem of optimizing the efficiency and the convergence rate
of the Backpropagation algorithm using the Bi-hyperbolic function. The Backpropagation architecture used
in this paper is the basic, classical version. To isolate the influence of the activation functions, we avoided
using other forms of optimization in these experiments.
3.1. Experiment I
We used the Wisconsin Breast Cancer Database, organized by Wolberg (1991). It is a well-known base used
for evaluating classifiers. It is built with data from samples obtained through biopsies of human breasts in
suspected cases of malignant tumors.
The holdout method was used varying the number of instances used for training and also the number of
neurons in the hidden layer. The results obtained for the model using the Logistic activation function are
shown in Table 1, while in Table 2 we present the results obtained by the model using the Bi-hyperbolic
activation function.
The comparison of the data from these two tables supports our initial expectation that the new activation
function is not only faster to compute, but also shows better convergence and generalizes better. The
smaller number of hidden neurons also shows that a smaller structure suffices, consuming fewer
computational resources.
The accuracy obtained by the model using the Bi-hyperbolic function with a training set of only 100
instances is better than that obtained by the Logistic function model with a training set of 500 instances,
which indicates a better generalization, even with less training. In this case, the processing time used by the
Logistic function was almost 83 times longer than the time used by the model with the Bi-hyperbolic
function. The number of neurons in the hidden layer was also reduced to less than one third of the number
used in the model with the Logistic activation function. Besides the accuracy, all other measures showed
better results when the Bi-hyperbolic activation function was used.
To compare with results found in the literature we can take, as an example, the work of Polat and Güneş
(2007). Using this same dataset from UCI, they explain that a great variety of methods have been used for
the breast cancer diagnosis problem, reaching accuracies from 94.74% to 98.6%. In their paper they used
Least Squares Support Vector Machines (LSSVM) to diagnose breast cancer. In Table 3 their results are
compared with the ones we obtained under similar training conditions. We obtained better results, even
with a smaller network and less training.
Table 3. Comparison of our results with those found in Polat and Güneş (2007)

TRAINING/TEST (%) |            LSSVM (%)             |        BI-HYPERBOLIC (%)
                  | ACCURACY PRECISION SENS.  SPEC.  | ACCURACY PRECISION SENS.   SPEC.
50 – 50           | 95.89    94.87     93.28  97.30  | 99.71    100.00    100.00  99.55
70 – 30           | 96.59    94.52     95.83  96.99  | 100.00   100.00    100.00  100.00
80 – 20           | 97.81    97.87     95.83  98.88  | 100.00   100.00    100.00  100.00
In a more recent paper, Pandey et al. (2012) used a Modular Neural Network (MNN) and a Genetic
Algorithm (GA). As they explain, they used an MNN for classifying the input data vectors; the outputs
produced are combined by a Fuzzy C-means integrator, and the GA is then used to find the optimal
connection set in each of the six experts of the MNN. The results of this model are presented in Table 4;
along with the results of their model, they showed the results of a Multilayer Perceptron with BPA training
(MLP-BPA), a Fixed Architecture Evolutionary ANN (FAE-ANN), a Variable Architecture Evolutionary ANN
(VAE-ANN) and a Modular Neural Network (MNN). They used 70% of the dataset for training and 30% for
testing. As can be seen, our model offers a better classification accuracy and holds a simpler architecture.
3.2. Experiment II
The Vertebral Column Dataset, distributed by Bache and Lichman (2013), is a biomedical data set in which
the task consists of classifying patients into one of three categories.
We want to evaluate the performance of the standard MLP, using the Bi-hyperbolic activation function and
the backpropagation training algorithm, compared with other classifiers found in the literature.
The experiments were conducted by splitting the original dataset into two sets, one used for training and
the other for testing. The results obtained are presented in Table 5. The output layer has three nodes.
Rocha Neto et al. (2006) reported results from the diagnostics module of a software platform, using the
Vertebral Column Dataset, comprising a preprocessing unit and three classifiers: an MLP-based neural classifier,
a standard Naïve Bayes classifier and a Generalized Regression Neural Network (GRNN). The best results obtained for
these classifiers are presented in Table 6. The training used 70% of the instances and the remaining was used
for testing.
Table 6. Best results reported by Rocha Neto et al. (2006)

CLASSIFIER   EPOCHS  ACCURACY (%)  PRECISION (%)  SENSITIVITY (%)  SPECIFICITY (%)  F-MEASURE (%)
MLP-based    2500    93.55         96.55          93.33            93.94            94.92
Naive Bayes  2500    89.25         88.41          96.83            73.33            92.42
GRNN         2500    91.40         92.86          95.59            80.00            94.20
In a more recent paper, using 80% of the dataset for training, Rocha Neto and Barreto (2009) showed
results obtained with Support Vector Machines (SVM), an MLP classifier and a GRNN, which are presented
in Table 7.
Abdrabou (2012) reported the use of an integrated classifier combining Case-Based Reasoning (CBR)
and an Artificial Neural Network (ANN). The eZ-CBR tool built one ternary classifier, and the ANN used a
momentum term and a maximum of 25,000 epochs. Its architecture comprised 6 neurons in the input
layer, two hidden layers with 6 neurons each, and an output layer with 3 neurons. The accuracy reported
for this experiment was 85%.
Chimieski and Fagundes (2013) reported that the best algorithm they used for Vertebral Column
diagnostics prediction was the Logistic Model Tree, which classified test instances with an accuracy of 85.52%.
Consulting the results in the tables above, we can verify that the MLP classifier with the Bi-hyperbolic
activation function showed the best results. Besides that, the simplicity of this model must be emphasized,
as it requires fewer computational resources and less expertise in classification tools from the end users.
4. CONCLUSIONS
The robustness of the model, characterized by the wide range of parameterizations and topologies that can
be used to obtain the best results, as shown in Miguez et al. (2012), certainly facilitates this modeling
process. These characteristics can be attributed to a higher variation rate of the derivative of the
Bi-hyperbolic function compared to other activation functions in use, given that one of the reasons for slow
convergence is the diminishing value of the derivative of the commonly used activation functions as the
nodes approach saturated values, as explained in Kenue (1991) and Kamruzzaman and Aziz (2002).
The observed results demonstrated the benefits of using the Bi-hyperbolic activation function, confirming
the initial expectation of a greater generalization capacity, faster convergence, higher computational speed
and smaller number of neurons in the architecture of the network.
The architecture of the network that used the Bi-hyperbolic activation function made possible faster
training, lower resource consumption and higher precision in the results obtained.
REFERENCES
Abdrabou, E., 2012. A Hybrid Intelligent Classifier for the Diagnosis of Pathology on the Vertebral Column.
Bioinformatics Using Intelligent and Machine Learning. Artificial Intelligence Methods and Techniques for Business and
Engineering Applications, ITA 2012, Sofia, Bulgaria, pp. 297-310.
Bache, K. & Lichman, M., 2013. UCI Machine Learning Repository, University of California, School of Information and
Computer Science, Irvine, CA, USA.
Chimieski, B. F., Fagundes, R. D. R., 2013. Association and Classification Data Mining Algorithms Comparison over
Medical Datasets, J. Health Inform. April-june; vol. 5, no. 2, pp. 44-51.
Gori, M., Tesi, A., 1992. On the Problem of Local Minima in Backpropagation, IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 14, No. 1, January. pp. 76-86.
Hecht-Nielsen, R., 1989. Theory of the Backpropagation Neural Network. IJCNN International Joint Conference Neural
Networks . pp 593 – 605. Washington, USA.
Hopfield, J.J., 1984. Neurons with graded response have collective computational properties like those of two-state
neurons. Proceedings of the National Academy of Sciences, 81, pp: 3088-3092.
Hornik, K., 1989. Multilayer Feedforward Networks are Universal Approximators, Neural Networks 2, 359-366.
Kalman, B.L., Kwasny, S.C., 1992. Why tanh: choosing a sigmoidal function, Neural Networks, IJCNN, International Joint
Conference, Vol. 4, pp. 578–581.
Kamruzzaman, J. Aziz, S.M., 2002. A Note on Activation Function in Multilayer Feedforward Learning, Neural Networks,
IJCNN’02 Proceedings of the 2002 International Joint Conference. Vol. 1, pp. 519 – 523.
Kenue, S.K., 1991. Efficient Activation Functions for the back-propagation Neural Network, Neural Networks, IJCNN-91-
Seattle, International Joint Conference, vol. 2.
Miguez, G.A., Maculan, N., Xavier, A.E. 2012. Desempenho do Algoritmo de Backpropagation com a Função de Ativação
Bi-Hiperbólica. In: XVI Latin-Ibero-American Conference on Operations Research and XLIV Brazilian Symposium on
Operations Research, Rio de Janeiro.
Otair, M. A., Salameh, W. A., 2005. Speeding Up Back-Propagation Neural Networks. Proceedings of the 2005 Informing
Science and IT Education Joint Conference. Flagstaff, Arizona, USA.
Pandey, B., Jain, T., Kothari, V., Grover, T., 2012. Evolutionary Modular Neural Network Approach for Breast Cancer
Diagnosis, IJCSI International Journal of Computer Science Issues, Vol.9, Issue 1, No 2, pp. 219-225.
Polat, K., Güneş, S., 2007. Breast Cancer Diagnosis Using Least Square Support Vector Machine. Digital Signal Processing,
pp. 694–701.
Rocha Neto, A.R., Barreto G.A., Cortez, P.C., Mota, H., 2006. “SINPATCO: Sistema Inteligente para Diagnóstico de
Patologias da Coluna Vertebral”, XVI Congresso Brasileiro de Automática, Salvador, pp 929-934.
Rocha Neto, A. R., Barreto, G. A., 2009. On the Application of Ensembles of Classifiers to the Diagnosis of Pathologies of
the Vertebral Column: A Comparative Analysis. IEEE Transactions on Latin America 7(4), pp. 487-496.
Rocha Neto, A.R., Sousa, R., Barreto, G.A., Cardoso, J.S., 2011. Diagnostic of Pathology on the Vertebral Column with
Embedded Reject Option. In Proceedings of the 5th Iberian Conference on Pattern Recognition and Image Analysis
(IbPRIA'11), Springer-Verlag, Berlin, Heidelberg, pp. 588-595.
Schiffmann, W., Joost, M., Werner, R., 1994. Optimization of the Backpropagation Algorithm for Training Multilayer
Perceptrons, Technical Report, University of Koblenz, Institute of Physics, Koblenz.
Stathakis, D., 2009. How many hidden layers and nodes? International Journal of Remote Sensing 30(8), pp. 2133–2147.
Taha, I., Ghosh, J., 1996, Characterization of the Wisconsin Breast Cancer Database Using a Hybrid Symbolic-
Connectionist System. Technical Report, University of Texas, Austin.
Xavier, A. E., 2005. Uma Função de Ativação para Redes Neurais Artificiais Mais Flexível e Poderosa e Mais Rápida.
Learning and Nonlinear Models – Revista da Sociedade Brasileira de Redes Neurais (SBRN) 1(5), pp. 276-282.
Williams, R. J. (1986). The Logic of Activation Functions. In D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed
Processing. Cambridge, MA: MIT Press. pp. 423-443.
Wolberg, W.H., 1991, Wisconsin Breast Cancer Database, UCI Machine Learning Repository.
http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Original), Irvine, CA: University of California, School
of Information and Computer Science.
Diareme K.C., Tsiligiridis T. | Personalized Outdoor Routing: Route Planning Techniques and Algorithms
Abstract
Personalized routing for the tourist agenda looks for a set of control points, usually referred to as Points of Interest
(POIs), to be visited, so that the total score, benefit or profit is maximized subject to a constraint on the total travel cost
or time. A score or profit is associated with each POI, and for each pair of POIs a travel cost is specified. The corresponding
tourist planning problem basically coincides with the Team Orienteering Problem with Time Windows (TOPTW),
which is an NP-hard problem that arises in vehicle routing and production scheduling frameworks. Personalized
routing aims not only at planning an optimal route, by means of route generation and customization, but also at facilitating
each user by personalizing their recommendations. Although problems in this category present enormous difficulty to
complete search algorithms, various heuristics and local search algorithms found in the literature can be applied to
create a personalized route especially designed for tourists. Furthermore, as a basis for future work, an algorithm that
solves the OP is presented, where the tourist departs from a specified but not fixed point (origin) and returns to it. The
algorithm uses the TOP methodology to construct an initial route and then the OP in order to enhance it and produce an
optimal one. After the construction of the initial route, heuristics, greedy local search algorithms and optimization
techniques are applied in order to optimize the result. Throughout the design of the algorithm, consideration was
given to a future extension in which the starting and ending points would differ and the algorithm would produce
more than one optimal route, in order to facilitate not a single tourist/team but several groups. The
proposed method is applied to three well-known data sets taken from the literature and is compared with four previously
published algorithms. The algorithm proposed in this paper proved able to give results similar to those of previously
published algorithms in acceptable computing time, and in some instances to outperform them.
KEYWORDS
ROUTING, ALGORITHMS, ORIENTEERING, PERSONALISED ROUTING
1. INTRODUCTION
Tourism is considered to be one of the largest industries in the world. The variety of services provided and
the rapidly increasing volume of information often do not help a person - consumer - tourist to easily find
what he/she is looking for. A visitor may not be able to visit all places, presumably because of the limited
time and/or resources available. When visitors are at a destination, they typically search for information in
the Local Tourist Organizations. There, the staff either provides them with a generic visitor tour or
determines the profile of the tourists and their restrictions. Combining this information with their up-to-
date knowledge about the local attractions and public transportation, they suggest a personalized route for
the tourist agenda. Finally, they fine-tune this route to better fit the tourist's needs. Kramer et al. (2006)
have analyzed the diversity of gathered tourist interest profiles and conclude that they are surprisingly
diverse. This conclusion supports the idea of creating personalized tours instead of proposing generic visitor
tours. Nowadays, geo-informatic systems, along with smart applications, web services and location-based
services, are being developed that take over the role of the Tourist Organizations' staff and, by using various
technologies, try to provide the best possible information, with validity and reliability, in the least time
possible.
2. PERSONALIZED ROUTING
Routing refers to creating a path over a set of POIs, each of which is assigned a score (or cost). In
its simplest form the problem is to maximize (minimize) the sum of the scores (costs) collected by visiting all
POIs only once, while taking other constraints of the problem into consideration. In this sense, routing is
the process of selecting best paths in a network. Personalized routing refers to planning an optimal route
based on the recommendations and restrictions of an individual, but also to facilitating each user by
personalizing their recommendations. It looks for a set of POIs to be visited once, so that the total
score, benefit or profit is maximized subject to a constraint on the total travel cost or time. A score or profit
is associated with each POI, and for each pair of POIs a travel cost is specified. Personalized routing can be
outdoor or indoor. Typical indoor uses include finding optimal routes for impaired persons within buildings
(Candy, 2007; Karimi et al., 2010), focusing on the safest route or the route with the fewest obstacles,
whereas personalized outdoor routing is about finding the shortest route within constraints.
The process is carried out through decision support systems or web services and applications
that can be handled by the user from a computer, cell phone or mobile guide. The input data are usually
information about the POIs available (geographic location (latitude and longitude), scoring system) and the
restrictions of the user, in our case a tourist who is in a given area. The output is the generated route.
One of the early attempts to create a mobile tourist guide was Cyberguide (1995-1997), of the Georgia
Institute of Technology, which provided information based on the position and orientation of the user and
focused on how portable computers could assist in exploring physical spaces and cyberspaces. Since then
other systems were developed, such as Gulliver's Genie (O'Hare G.M.P., O'Grady M.J., 2003) and GUIDE
(Cheverst K. et al., 2000). All of the above take into account the position of the user, provide information
about the POIs located around him and help him choose which points he wants to visit; hence, for the
generation of a personalized route, the user must select by himself the points that interest him (Souffriau W.
et al., 2008). During the last decade, systems and applications have been developed that, instead of
recommending pre-packaged tours or sorting POIs by estimated interest value as recommender systems do,
determine the combination of POIs that maximizes the joint interest (Souffriau W., Vansteenwegen P.,
2010). Examples are the Dynamic Tour Guide (DTG) of ten Hagen et al. (2005), which calculates personal
tours on the fly, and the system of Lee et al. (2009), which allows planning personalized travel routes in Tainan
City, Taiwan. A thorough presentation of related work is beyond this paper, but one should refer for more
information to Souffriau W., Vansteenwegen P. (2010) and Kabassi K. (2010).
Suppose there is a web service that generates personalized routes. The user connects with the service and
the system lets the user know of the POIs he could visit, provided the system knows the location of the
user. Table 1 consists of 12 POIs located in the center of Athens, Greece. For each POI the
geographical coordinates are known, as well as the score profit. The scoring system reflects the
"attractiveness" of a POI and, in a simple form, could be a measure of the tourists arriving in a city and the
number of tourists visiting a certain place. Distances between POIs are given by the Euclidean distance. In
more complex models the data could contain even traffic or weather information. The next step is to
provide the user with a route he could follow based on his constraints. In the simplest form the only
constraint is the time budget (tmax) available to the user. For the route generation we distinguish three
cases: routes where the start and finish points are the same (Figure 3), routes with different start and finish
points (Figure 2), and routes generated for more than one tourist or for groups of tourists (Figure 1).
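As a minimal illustration of these inputs and outputs, the Python sketch below encodes a few hypothetical POIs (not the actual Athens data of Table 1), scores a candidate route whose start and finish coincide, and checks it against the time budget tmax.

import math

# Hypothetical POIs: (name, x, y, score); coordinates and scores are
# illustrative only.
pois = [("Acropolis", 0.0, 0.0, 10), ("Agora", 0.4, 0.3, 7),
        ("Plaka", 0.8, 0.1, 5), ("Syntagma", 1.2, 0.5, 6)]

def dist(a, b):
    # Travel cost between two POIs, given by the Euclidean distance.
    return math.hypot(a[1] - b[1], a[2] - b[2])

def route_score_and_cost(route):
    score = sum(p[3] for p in route[:-1])   # finish repeats the start POI
    cost = sum(dist(route[i], route[i+1]) for i in range(len(route) - 1))
    return score, cost

tmax = 2.5                                   # the user's time budget
route = [pois[0], pois[1], pois[3], pois[0]] # start and finish coincide
score, cost = route_score_and_cost(route)
print(score, round(cost, 3), cost <= tmax)   # feasible only if cost <= tmax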
In the generalized version of the problem, more than one tourist, or several groups of tourists, are in one
place. Each of them wants to take a tour and then return to the starting point and meet the others. Due to
the limited time they have available, they cannot visit all places, so they must follow the route that
provides them with the highest possible profit in the time available. This problem can be modeled as a TOP:
given a fixed amount of time, the team (M members) has to determine M paths from the start point to the
finish point through a subset of locations, with respect to the time constraint, in order to maximize
the total score collected. In the case of a single visitor the problem is similar to the Single-Competitor
Orienteering Problem (OP), a special case of the TOP with only one member. The scoring system is defined
by the satisfaction a user gains by visiting a control point.
Complete search algorithms, e.g. shortest path routing, the A* algorithm and exhaustive search, fail to produce
optimal solutions in acceptable time; there is no known polynomial-time algorithm (Sahni, 1977)
for this category of problems. Hence several heuristics, local search algorithms, meta-heuristics and hyper-
heuristics, such as Tabu search, simulated annealing, Ant Colony Optimization etc., have been developed in
order to find good or simply feasible solutions. We divide the route generation process into two steps,
construction of the initial route and optimization. In the optimization step we can further divide the
procedures and moves used in two categories, those that augment the total score of the route (insert, one
in – zero out, two-insert, replace) and those that lower the cost. Lowering the cost of a route is mainly done
by eliminating cross-overs using swap moves, 2-opt, 3-opt and the Lin–Kernighan heuristic. Diversification
procedures are also used in order to escape the local optima in which heuristics tend to get caught.
Details of the mentioned procedures can be found in Vansteenwegen et al. (2009).
3. THE PROPOSED HEURISTIC AND RESULTS
As a basis for future work, we propose a heuristic that incorporates only three relatively simple moves and
generates an optimal or near-optimal solution for the OP. The heuristic is applied to three well-known
problem sets (Tsiligirides, 1984) that differ in problem size, distribution of the POIs on the ground relative to
each other, complexity of the scoring system and frequency distribution (Keller, 1989). Problem 1
consists of 32 POIs, three score categories and start and finish POIs located relatively centrally; problem 2
consists of 21 POIs, seven score categories and start and finish POIs located on the edge; problem 3, similar
to problem 1, consists of 33 POIs, four score categories and start and finish POIs located relatively centrally.
The results for various tmax values are compared with four algorithmic approaches found in the literature,
those of Tsiligirides, Keller and Golden. Tsiligirides proposed two algorithms to construct the initial route
for the OP, the S-algorithm and the D-algorithm. The stochastic approach (S-algorithm) uses cost/distance
and Monte Carlo techniques to construct 3000 initial routes, from which the best is selected. The D-algorithm
is a deterministic one, where routes are constrained to specified sectors determined by two concentric circles.
Tsiligirides also gives a third algorithm (R-I) to improve the initial route. Keller (1989) introduced the Multi-
objective Vending Problem (MVP), a TSP generalization where each node has a reward and travelling
between nodes incurs a penalty. The MVP examines the trade-off between maximizing the total
score (max nodes) and minimizing the penalty (min nodes) by finding a Pareto optimum; the OP can be
considered a special case of the MVP. Golden (1987) proposed a heuristic to solve the OP based on the
Knapsack problem. The heuristic consists of five steps: construction of initial routes, a route improvement step,
two Knapsack steps and a perturbation step. The results of the four algorithms when applied to the three problem
sets are given in Keller (1989).
Loop = 0
DETERMINE FEASIBLE SET OF POIS
CONSTRUCT ROUTES
SELECT INITIAL ROUTE
REPEAT UNTIL NO BETTER SOLUTION EXISTS AND Loop < 5
    SWAP
    REPLACE
    INSERT
END
SHOW RESULT
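A compact Python sketch of this loop is given below. It illustrates the three moves on a small hypothetical instance and is not the exact implementation evaluated in this paper: POI 0 is the origin (start = finish), distances are Euclidean, and the SWAP move is realised as a 2-opt reversal.

import math, itertools

# Hypothetical instance; the coordinates, scores and tmax are illustrative,
# not one of Tsiligirides' benchmark sets.
coords = [(0, 0), (1, 2), (2, 1), (3, 3), (1, -1), (-2, 1), (2, -2)]
scores = [0, 5, 4, 8, 3, 6, 4]
tmax = 10.0

def d(i, j):
    return math.dist(coords[i], coords[j])

def cost(route):
    return sum(d(route[k], route[k+1]) for k in range(len(route) - 1))

def insert_best(route, free):
    # INSERT move: add the excluded POI with the highest score gain per
    # unit of extra travel cost, if the time budget is still respected.
    best = None
    for p in free:
        for k in range(1, len(route)):
            extra = d(route[k-1], p) + d(p, route[k]) - d(route[k-1], route[k])
            cand = route[:k] + [p] + route[k:]
            if cost(cand) <= tmax:
                ratio = scores[p] / (extra + 1e-9)
                if best is None or ratio > best[0]:
                    best = (ratio, cand, p)
    return best

route, free = [0, 0], set(range(1, len(coords)))
for loop in range(5):                       # Loop < 5, as in the pseudocode
    improved = False
    # SWAP: 2-opt style reversal to shorten the tour (lowers the cost).
    for i, j in itertools.combinations(range(1, len(route) - 1), 2):
        cand = route[:i] + route[i:j+1][::-1] + route[j+1:]
        if cost(cand) < cost(route) - 1e-9:
            route, improved = cand, True
    # REPLACE: exchange an included POI for an excluded one of higher score.
    for k in range(1, len(route) - 1):
        for p in list(free):
            cand = route[:k] + [p] + route[k+1:]
            if scores[p] > scores[route[k]] and cost(cand) <= tmax:
                free.add(route[k]); free.discard(p)
                route, improved = cand, True
                break
    # INSERT: augment the total score while the budget allows it.
    ins = insert_best(route, free)
    while ins:
        _, route, p = ins
        free.discard(p); improved = True
        ins = insert_best(route, free)
    if not improved:
        break

print(route, sum(scores[p] for p in route), round(cost(route), 2))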
4. CONCLUSIONS
As it appears, the algorithm works well, giving results comparable with those (of Tsiligirides, Keller and
Golden) already reported in the literature and discussed above. In this sense the proposed heuristic produces
good, if not optimal, results while using a small number of procedures. Future work will focus on: (a) the
development of a web service that could be used by tourists on a large scale, (b) further improvement of the
algorithm, by using stochastic methods in the local search for the construction of the initial route, (c) extension
of the algorithm to different start and finish POIs, and (d) extension of the algorithm to solve the TOP,
facilitate groups of tourists and take more constraints (traffic, Time Windows etc.) into consideration.
REFERENCES
Candy J., 2007. A Mobile Indoor Location-based GIS Application. Proceedings of the 5th International Symposium
on Mobile Mapping Technology (MMT '07 ), Padua, Italy.
Cheverst K., Davies N., Mitchell K., Friday A., 2000. Experiences of developing and deploying a context-aware tourist
guide: the GUIDE project. Proceedings of the 6th annual international conference on Mobile computing and
networking (MobiCom '00),NY , USA, pp. 20-31.
Golden B. L., Levy L., Vohra R., 1987. The orienteering problem. Naval Research Logistics, Vol 34, No. 3, pp. 307-
318.
ten Hagen K., Kramer R., Hermkes M., Schumann B., Mueller P., 2005. Semantic matching and heuristic search for a
dynamic tour guide. Information and Communication Technologies in Tourism, pp. 149-159.
O' Hare G. M. P., O' Grady M.J., 2003. Gulliver's Genie: a multi-agent system for ubiquitous and intelligent content
delivery. Computer Communications, Volume 26, Issue 11, pp. 1177-1187.
Kabassi K., 2010. Personalizing recommendations for tourists. Telematics and Informatics, Vol. 27, No. 1, pp. 51-66.
Karimi H. A., Ghafourian M., 2010. Indoor Routing for Individuals with Special Needs and Preferences. Transactions
in GIS, Vol. 14, No. 3, pp. 299-329.
Keller C. P., 1989. Algorithms to solve the orienteering problem: A comparison. European Journal of Operational
Research, Vol. 41, pp. 224-231.
Keller C. P, Goodchild M. F., 1988. The multiobjective vending problem: a generalization of the travelling salesman
problem. Environment and Planning B: Planning and Design,Vol. 15, pp. 447-480.
Kramer R., Modsching M., ten Hagen K., 2006. A city guide agent creating and adapting individual sightseeing tours
based on field trial results. International Journal of Computational Intelligence Research, Vol. 2, No. 2, pp. 191–206.
Lee C. S., Chang Y. C., and Wang M. H., 2009. Ontological recommendation multi-agent for Tainan city travel.
Expert Systems with Applications, Vol. 36, No. 3, pp. 6740-6753.
Sahni S., 1977. General Techniques for Combinatorial Approximation. Operations Research, Vol. 25, No. 6, pp. 920-
936.
Souffriau W., Vansteenwegen P., 2010. Tourist Trip Planning Functionalities: State–of–the–Art and Future. Current
Trends in Web Engineering Lecture Notes in Computer Science, Volume 6385, pp. 474-485.
Tsiligirides T., 1984. Heuristic Methods Applied to Orienteering. The Journal of the Operational Research Society,
Vol. 35, No. 9, pp. 797-809.
Vansteenwegen P., Souffriau W., Vanden Berghe G., Van Oudheusden D.,2008. A guided local search metaheuristic
for the team orienteering problem. European Journal of Operational Research, Vol. 196, No. 1, pp. 118-127.
Vansteenwegen P., Souffriau W., Vanden Berghe G., Van Oudheusden D., 2009. Metaheuristics for Tourist Trip
Planning. Metaheuristics in the Service Industry Lecture Notes in Economics and Mathematical Systems, Vol. 624, pp.
15-31.
Thomaidis N., Salagaras C., Vassilidiadis V., Koutras V., Platis A., Dounias G. | Evolutionary Algorithms for Solving Resource Availability Optimisation Problems related to Client Service of Different Priority Classes
Abstract
This paper examines the application of an evolutionary optimisation heuristic (differential evolution) to resource
reservation and scheduling in a service provider with requests of different priority. The formulation of the problem is
based on the principle of maximising the availability of high priority classes while keeping a control on the blocking
probability of less important categories of users. Through a numerical example, we illustrate the practical relevance of
the proposed intelligent optimisation scheme and benchmark its performance against a Monte-Carlo solution-search
method. First experimental results demonstrate the effectiveness and robustness of evolutionary algorithms in
detecting superior resource allocation plans.
KEYWORDS
Resource reservation, blocking probability, differential evolution, intelligent optimisation heuristics.
1. INTRODUCTION
In many service providing systems, such as communication channels, call centers and courier services,
requests are classified into different priority classes. These are specified, for example, according to the
membership fee paid by each customer, the degree of emergency of each call, the revenue management
policy, and so on. Assuming an adequate service capacity, it is practically feasible to satisfy all incoming
requests but an increasing demand can lead to gradual exhaustion of resources. In such a critical state,
priority should be given to requests of high importance, and this can be achieved in practice by reserving a
priori a certain number of system resources for these particular classes. An optimisation technique can then
be used for determining a resource allocation plan (number of servers assigned to each type of client) that
improves system availability with increasing priority.
The purpose of this paper is to demonstrate how intelligent optimisation heuristics can be utilised in
determining availability plans with a view to gradually minimising the service-denial rate for high-priority
classes. We show how this requirement translates into a nonlinear integer-programming problem, which is
solved numerically using the differential evolution algorithm. By means of a case study, we illustrate the
practical difficulties of the particular optimisation setting and how evolutionary techniques can be used to
overcome numerical problems.
The rest of the article is structured as follows: Section 2 discusses the main features of our modelling
framework and Section 3 puts the resource availability planning problem in a mathematical programming
context. Section 4 illustrates the use of differential evolution through a practical example and Section 5
concludes the paper and points to future research directions.
2. MODELLING FRAMEWORK
Consider a system with a finite number of servers N accepting requests from four classes of users that are
ordered with an increasing priority scale (e.g. “zero”, “little”, “medium”, “high”). The goal is to reduce the
possibility that an arriving client has no further access to server resources. This is known in the literature as
the blocking probability (BP) (see e.g. Farago, 2008; Vu and Zukerman, 2002; Haring et al., 2001). In order to
have a control on the blocking probability of each class, we assume that a fixed proportion of system
resources is a priori devoted to a particular group of clients, as described in the sequel. Let g1, g2 and g3 be
the number of servers allocated to Classes 3, 2 and 1 respectively. Class 1 is the group of highest priority,
which is assumed to have access to all system resources indistinguishably. Class-2 users have access to their
own resources but can also occupy resources previously assigned to Classes 3 and 4. Class 4 has no priority
over any other group and, consequently, has no pre-reserved resources.
The evolution of the system in time can be described by a birth-death stochastic process {Z(t), t ≥ 0}, where
the state of the system is the number of idle resources. Figure 1 presents the state transition diagram for
the service providing system. The leftmost state indexed by N represents a situation where no service
requests have arrived to the system and hence none of the servers is occupied. The rightmost state 0
corresponds to the case where all resources are busy serving requests.
Requests from each Class i (i = 1, 2, 3, 4) are assumed to arrive according to a Poisson process with
parameter λclass i. The service time is exponentially distributed with characteristic constant μ, which in
practical applications can be set equal to the weighted sum of the corresponding rate for each class of
users. Under these assumptions, the process {Z(t), t ≥ 0} can be considered as a continuous-time Markov
chain, in which the arrival rate is state-dependent and equal to λ4 = λclass 1 for states 0 to g3 +1, λ3 = λclass 1 +
λclass 2 for states g3 +2 to g3 + g2 +1, λ2 = λclass 1 + λclass 2 + λclass 3 for states g3 + g2 +2 to g3 + g2 + g1 +1 and λ1 =
λclass 1 + λclass 2 + λclass 3 + λclass 4 for states g3 + g2 + g1 +2 to N. The blocking probability of each class can be
computed analytically by resorting to the steady-state distribution \{\pi_i, i = 0, 1, \dots, N\} of Z(t), where
\pi_i = \lim_{t \to \infty} \Pr\{Z(t) = i\}. This distribution can be derived by solving the well-known linear system of
equations \pi Q = 0, \sum_{i \in E} \pi_i = 1, where E = \{0, 1, \dots, N\} and Q = [q_{ij}], (i, j) \in E \times E, is the (N+1) \times (N+1)
transition rate matrix, with q_{ij} denoting the transition rate from state i to state j. Based on the aforestated assumptions
and the resource reservation protocol described at the beginning of this section, one is able to compute the
blocking probability Pb,class(i) of each Class i with reference to the system steady-state probabilities. More
details on the derivation of the analytical formulae are given in Koutras et al. (2013); Koutras and Platis
(2009); Koutras and Platis (2008); Koutras and Platis (2006).
Figure 1. State transition diagram of the service providing system (state-dependent arrival rates λ1–λ4 between neighbouring states and service rate μ).
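The steady-state computation can be sketched numerically as follows; this is a Python illustration assuming a constant aggregate service rate μ between neighbouring states, as the transition diagram suggests, and the trial reservation plan at the bottom reuses the MC allocation reported in Section 4, reading (28, 5, 2, 15) as (g1, g2, g3, unreserved) — an assumption on our part.

import numpy as np

def blocking_probs(N, g, lam_cls, mu):
    # State i = number of idle servers; g = (g1, g2, g3) are the servers
    # reserved for Classes 3, 2 and 1 respectively (see Section 2).
    g1, g2, g3 = g
    lam4 = lam_cls[0]                              # only Class 1 admitted
    lam3 = lam_cls[0] + lam_cls[1]                 # Classes 1-2 admitted
    lam2 = lam_cls[0] + lam_cls[1] + lam_cls[2]    # Classes 1-3 admitted
    lam1 = sum(lam_cls)                            # all classes admitted

    def rate(i):                                   # arrival rate in state i
        if i <= g3 + 1:
            return lam4
        if i <= g3 + g2 + 1:
            return lam3
        if i <= g3 + g2 + g1 + 1:
            return lam2
        return lam1

    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i > 0:
            Q[i, i - 1] = rate(i)   # an admitted arrival occupies a server
        if i < N:
            Q[i, i + 1] = mu        # a service completion frees a server
        Q[i, i] = -Q[i].sum()

    # Solve pi Q = 0 together with the normalisation sum(pi) = 1.
    A = np.vstack([Q.T[:-1], np.ones(N + 1)])
    pi = np.linalg.solve(A, np.r_[np.zeros(N), 1.0])

    cum = np.cumsum(pi)
    # A class is blocked in the states where its arrivals are not admitted.
    return {1: pi[0],
            2: cum[g3 + 1],
            3: cum[g3 + g2 + 1],
            4: cum[g3 + g2 + g1 + 1]}

print(blocking_probs(50, (28, 5, 2), [0.95, 0.9025, 0.8574, 0.8145], 3.8768))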
3. DERIVING AN OPTIMAL RESERVATION PLAN
The problem of obtaining an allocation of resources that is compatible with class priority can be equivalently expressed as minimising the blocking probability of the top-priority class (Class 1) while placing constraints on the system availability for the remaining user groups (Classes 2-4). We thus have an optimisation problem of the following form.
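Written out, with the blocking probability bounds of Table 1 serving as the availability constraints (referred to below as constraints 1.1-1.3), a formulation consistent with the description above reads:

$$
\begin{aligned}
\min_{g_1, g_2, g_3}\ & P_{b,\mathrm{class}(1)}(g_1, g_2, g_3) \\
\text{s.t.}\ & P_{b,\mathrm{class}(2)} \le 0.002, \quad (1.1) \\
& P_{b,\mathrm{class}(3)} \le 0.010, \quad (1.2) \\
& P_{b,\mathrm{class}(4)} \le 0.050, \quad (1.3) \\
& g_1 + g_2 + g_3 \le N, \qquad g_1, g_2, g_3 \in \mathbb{Z}_{\ge 0}.
\end{aligned}
$$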
4. EXPERIMENTAL STUDY
We assume a particular case of a service provider with N = 50 discretised resources and arrival/service rates as specified in Table 1. In the same table, we also provide information on the acceptable limits for the blocking probability of the lower priority classes and the parameter values for the differential evolution (DE) algorithm (for an introduction to differential evolution, see Storn and Price, 1997, and Storn, 2008).
To ensure that DE has not converged to a suboptimal region, we performed 20 independent runs from random initial states and report the best solution found across all repetitions. To evaluate how well DE performs
compared to a purely random exploration of the solution space, we also report results from a Monte-Carlo
(MC)-type optimisation technique. This amounts to randomly generating 10,000 candidate solutions and
keeping the one attaining the lowest value for the objective function.
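To make both optimisers concrete, a minimal DE/rand/1 loop with the population size and crossover rate of Table 1 is sketched below in Python. The dither interval [0.5, 1.0] for the scale factor F, the handling of constraints 1.1-1.3 by an external penalty inside the objective, and the rounding of (g1, g2, g3) to integers are our assumptions rather than details stated in the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def de_rand_1(objective, bounds, pop_size=50, max_gen=500, cr=0.80):
        # DE/rand/1 with binomial crossover and uniform dither on F.
        # objective: maps a real vector to a (penalised) scalar to be minimised.
        lo = np.array([b[0] for b in bounds], float)
        hi = np.array([b[1] for b in bounds], float)
        dim = len(bounds)
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)
        fit = np.array([objective(x) for x in pop])
        for _ in range(max_gen):
            for i in range(pop_size):
                # Three distinct individuals, all different from i.
                a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)
                F = rng.uniform(0.5, 1.0)        # uniform dither on F (assumed interval)
                mutant = pop[a] + F * (pop[b] - pop[c])
                mask = rng.random(dim) < cr      # binomial crossover
                mask[rng.integers(dim)] = True   # inherit at least one coordinate
                trial = np.clip(np.where(mask, mutant, pop[i]), lo, hi)
                f = objective(trial)
                if f <= fit[i]:                  # greedy one-to-one selection
                    pop[i], fit[i] = trial, f
        best = int(fit.argmin())
        return pop[best], fit[best]

    # Usage: objective(x) would round x to an integer plan (g1, g2, g3), evaluate the
    # Class-1 blocking probability and add penalties for violating constraints 1.1-1.3.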
The results of the optimisation exercise in the case of DE are presented in Table 2 (first row). Column 2 reports the lowest value of the blocking probability of Class 1 attained over all twenty DE runs, and columns 3-5 show the corresponding optimal resource reservation policy. The best distribution of resources that MC was able to detect is (28, 5, 2, 15), which results in an almost four times larger blocking probability for Class 1 requests (4.436e-24). Table 3 gives further insight into the results of the Monte-Carlo simulation by reporting additional cut-off points (percentiles) from the distribution of visited solutions. By p100×α we denote the α-th percentile of the distribution, which is an indication of the best outcome in 100×(1−α)% of the trials (p0 is hence the best-ever solution found by Monte Carlo). Cells with dashes represent cases where the lower-edge solution failed to satisfy the subgroup of problem constraints controlling availability (constraints 1.1-1.3). What can be immediately inferred from Table 3 is a quick deterioration in Monte-Carlo performance for increasing values of α. In as many as 99.9% of the MC trials, the method was unable to reduce the blocking probability below 4.874e-23 (see the column labeled “p0.1”), while the best solution reported in 99% of the cases is 5.218e-21 (see the column under “p1”), almost 5000 times inferior to the one reported by DE in only 20 runs. What is worse, the chances that MC finds a feasible solution at all are also quite low: 75% of the sampled resource allocation schemes resulted in unacceptable levels of blocking probability for Class-2 and Class-3 requests (see the last two columns of Table 3), i.e. more than 7,500 of the 10,000 sampled allocations were not even feasible. This result is indicative of the hardness of the optimisation problem.
Table 1. Input parameters, blocking probability constraints and algorithmic settings.

Input parameters
  λclass 1                        0.9500
  λclass 2                        0.9025
  λclass 3                        0.8574
  λclass 4                        0.8145
  μ                               3.8768

Constraints on blocking probability
  Class 2                         0.002
  Class 3                         0.010
  Class 4                         0.050

Algorithmic parameters
  Population size                 50
  Maximum number of generations   500
  Crossover rate                  0.80
  Mutation rate                   0.25
  Population-update strategy      DE/rand/1 (with uniform dither)
Table 2. Experimental results.
To gain more intuition on the optimal resource reservation policy, assume for a moment that the system engineer’s intention was to myopically reduce the chances that a top-priority request is denied service. This requirement translates into a slightly modified problem formulation without constraints 1.1-1.3. Solving this problem with DE, we obtained the allocation scheme depicted in the second row of Table 2. As expected, the optimal value in the “unconstrained” case is much lower than what can be achieved when availability for the lower priority classes is also an issue. Note that, in an attempt to meet tighter targets on system availability, the new reservation policy decommits 15 servers from the top-priority group (the number of resources assigned to Class 1 falls from 45 to 30), out of which 13 are held in reserve (i.e. put in Class 4) and 2 are given to lower priority classes.
5. CONCLUSIONS
The purpose of this paper was to demonstrate the application of evolutionary techniques in resource
availability planning with priority classes. First results from this exercise are indicative of the true potential
of this class of optimisation heuristics for the particular application domain. Future research is directed
towards generalising the modeling framework for an arbitrary number of classes and system servers and
providing clear guidelines to the system designer on how to tackle the associated optimisation problems.
This includes, among other things, helping the decision-maker choose reasonable bounds for the blocking probabilities of the lower priority classes, possibly as a function of the number of classes/servers, the arrival/service rates and other input parameters.
ACKNOWLEDGEMENT
This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national
funds through the Operational Program "Education and Lifelong Learning" of the National Strategic
Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through
the European Social Fund.
REFERENCES
Farago A., 2008. Efficient blocking probability computation of complex traffic flows for network dimensioning,
Computers and Operations Research, Vol. 35, No. 12, pp. 3834–3847.
Haring G., Marie R., Puigjaner R., Trivedi K. S., 2001. Loss Formulas and Their Application to Optimization for Cellular
Networks, IEEE Transactions on Vehicular Technology, Vol. 50, No 3, pp. 664-673.
Vu H.L., Zukerman M., 2002. Blocking probability for priority classes in optical burst switching networks, IEEE
Communications Letters, Vol. 6, No. 5, pp. 214–216.
Koutras V.P., Platis A.N., Salagaras C.S., 2013. Resource Availability Optimization for Green Courier Service, 2013 IFAC
Conference on Manufacturing Modeling, Management, and Control (MIM 2013), St. Petersburg, Russia, pp 1654-1659.
Koutras V.P. and Platis A.N., 2009. Modeling Resource Availability and Optimal Fee for Priority Classes in a Website,
European Safety and Reliability Conference (ESREL 2009), Prague, Czech Republic, pp. 1191-1198.
Koutras V.P., Platis A.N., Gravvanis G.A., 2009. Optimal Server Resource Reservation Policies for Priority Classes of Users
under Cyclic Non-Homogeneous Markov Modeling, European Journal of Operational Research, Vol. 198, pp. 545-556.
Koutras V.P. and Platis A.N., 2008. Guaranteed Resource Availability in a Website, in Martorell et al.(eds) Safety,
Reliability and Risk Analysis: Theory, Methods and Applications, Taylor & Francis Group, London, pp 1525-1532.
Koutras V.P. and Platis A.N., 2006. Resource Availability Optimization for Priority Classes in a Website, 12th IEEE International Symposium on Pacific Rim Dependable Computing (PRDC ’06), Los Alamitos, California, pp. 305-312.
Necula, N., 2009. A Method for Real-Time Emulation of Genetic Algorithm-Optimized Controllers, U.P.B. Sci. Bull. Series
C, Vol. 71, pp. 15-26.
Storn, R., 2008. Differential evolution research – trends and open questions, Advances in Differential Evolution, Vol. 143, Springer, Berlin/Heidelberg, pp. 1-31.
Storn, R. and Price, K., 1997. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over
Continuous Spaces, Journal of Global Optimization, Vol. 11, pp. 341–359.
Thilakawardana Sh. and Tafazolli, R., 2003, Use of Genetic Algorithms in Efficient Scheduling for Multi Service Classes,
The Fifth European Wireless Conference: Mobile and Wireless Systems beyond 3G (European Wireless 2004),
Barcelona, Spain.
Vergados, D.D., 2007. Simulation and Modeling Bandwidth Control in Wireless Healthcare Information Systems,
Simulation, Vol. 83, pp. 347-364.
Vidalis M., Vrisagotis V., Papadopoulos C.| Performance Evaluation of a Three-echelon Supply Chain with
Stochastic Demand, Lost sales, (S, s) Continuous Review Policies and Coxian 2-phase
Replenishment Times
Abstract
This paper deals with a three-echelon supply chain consisting of a retailer, a wholesaler and a manufacturer. The
retailer faces pure Poisson demand. The wholesaler and the retailer follow continuous review inventory policies (Si, si),
i=1,2. The lead times between successive nodes are random and follow the Coxian-2 distribution. Assuming lost sales at the retailer and infinite capacity at the manufacturer, we explore the performance of the supply chain system. The
system is modeled as a continuous time Markov process with discrete space. The structure of the transition matrices of
these specific systems is examined and a computational algorithm is developed to generate them for different values of
system characteristics. The proposed algorithm allows the calculation of performance measures –fill rate, cycle times,
average inventory (WIP)– from the derivation of the steady state probabilities.
KEYWORDS
Three-echelon supply chain, performance measures, Markov analysis, lost sales, continuous review inventory policy.
1. INTRODUCTION
A supply network is essentially a set of inter-connected suppliers and customers, in which every node/customer is a supplier of the next downstream node(s)/customer(s), until the end product reaches the ultimate consumer or end user. The literature on lost-sales inventory systems has grown in recent years. The key characteristics by which this literature can be classified are the review interval (continuous or periodic review), the demand distribution, the lead time (deterministic or stochastic), and the maximum number of outstanding orders. Due to space limitations only a few articles are discussed here.
Hill and Johansen (2006) use policy iteration to explore the behaviour of optimal control policies for lost-sales inventory models, under the constraint that no more than one replenishment order may be outstanding at any time. Demand is discrete and, in the continuous review case, assumed to derive from a compound Poisson process; the lead times are assumed to be fixed. Johansen and Thorstenson (1993) consider a continuous review (r, Q) inventory system with Poisson demand and at most one order outstanding. The replenishment lead time is gamma distributed, and demands not immediately covered by the inventory are lost. Their objective is to determine the optimal values of Q and r that minimize the long-run average cost. Lee and Hong (2003) analyze an (s, S)-controlled stochastic production system with multi-class demands and lost sales. Demands in each class are assumed to arrive according to a Poisson process, and the time required to process an item follows a 2-phase Coxian distribution. The production system is modeled as a continuous time Markov chain, and an efficient algorithm to calculate the steady-state probability distribution of the system is proposed.
The remainder of the paper is organized as follows: Section 2 describes the supply network under consideration, Section 3 gives the formulation of the model, Section 4 provides a few numerical results, and Section 5 concludes the paper and gives a few areas for further research.
2. SYSTEM DESCRIPTION
Our study focuses on a dynamic system and specifically on a supply chain with three members: a retailer, a
wholesaler and a manufacturer. The basic assumptions for the system under consideration are: (1) The
external demand process is pure Poisson with rate λ. (2) All demands that occur during stock-outs are lost.
(3) There can never be more than a single outstanding order. One supporting argument for this assumption
may be that the manufacturer can handle at most one replenishment order at a time, due to distance and
capacity constraints.
Figure 1. The three-echelon supply chain under study (Coxian-2 replenishment channels with rates μi1, μi2 and branching fractions di1, di2).
(4) The wholesaler and the retailer follow continuous review inventory policies (Si, si), i=1,2. When the inventory on hand reaches the level si (reorder point), node i=1,2 places an order of size qi = Si − si with the upstream node in order to reach the order-up-to inventory level Si. If the upstream node does not have enough stock to send, the remaining quantity is lost. (5) After an order is placed, node i serves the downstream demand until its remaining inventory reaches zero or the outstanding order arrives. When an order from the upstream node arrives, node i re-examines the inventory on hand to decide whether a new order needs to be triggered. The lead time L, defined as the active processing and delivery time of a replenishment order between two successive nodes, is stochastic and follows the Coxian-2 distribution: a fraction di1 (0 ≤ di1 ≤ 1) of the orders of downstream node i needs a random service time that is exponentially distributed with rate μi1 (an exponential time T1), while the remaining fraction di2 = 1 − di1 faces an additional delay, also exponentially distributed, with rate μi2 (an additional exponential time T2, so that T1 + T2 is the total lead time). Coxian distributions are popular because their states (or groups of states) can sometimes be given a physical interpretation, and because they are quite flexible: a Coxian distribution can approximate any distribution function with coefficient of variation cv, 0 < cv < ∞. (6) The manufacturer is never starved.
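For illustration, sampling a Coxian-2 lead time exactly as described in assumption (5) takes a few lines of Python; the parameter values in the example below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    def coxian2_lead_time(mu1, mu2, d1, n=1):
        # Every order spends an Exp(mu1) time T1 in the first phase; with
        # probability d2 = 1 - d1 it incurs an additional Exp(mu2) delay T2.
        t = rng.exponential(1.0 / mu1, size=n)
        extra = rng.random(n) >= d1
        t[extra] += rng.exponential(1.0 / mu2, size=int(extra.sum()))
        return t

    # Sanity check of the mean: E[T] = 1/mu1 + (1 - d1)/mu2.
    samples = coxian2_lead_time(mu1=2.0, mu2=1.0, d1=0.6, n=100_000)
    print(samples.mean())   # approximately 0.5 + 0.4 = 0.9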
The performance of this dynamic system is affected by the stochastic processes that take place (i.e., external demand occurrence and lead times) and by the replenishment policies (Si, si). The random nature of the processes involved in the behavior of the system under consideration calls for a stochastic evaluative model in order to obtain suitable performance measures.
The effectiveness of the system, i.e., the customer service level –the percentage of the orders that are fulfilled by the existing inventory– is a function of a) S (the maximum inventory level), b) s (the reorder point), c) the demand rate λ and d) the replenishment time characteristics (μi1, μi2, di1, di2). The main objective of this work is to express the customer service level, i.e., the fill rate or FR, and other performance
measures, such as the average inventory or WIP, and the average flow time or cycle time or CT, as functions
of key system characteristics. The above measures may be combined with the respective costs, e.g., the
average WIP with the holding cost and the fill rate with the lost sales cost. The main contribution of this
work is the presentation of an exact evaluative model for calculating the performance measures of the system in the case of batch production with Coxian replenishment. This model can also be used generatively, to determine the values of the parameters that optimize the behavior of the system for a given objective function.
3. MODEL FORMULATION
The supply network under consideration is analyzed as a 2-dimensional Markov process.
The state of the system at time t is described by the vector (p2t, I2t, p1t, I1t) for 0 ≤ Iit ≤ si, i=1,2, or by the integer Iit alone for si < Iit ≤ Si, where Iit denotes the inventory on hand at node i at time t and pit denotes the phase (1 or 2) of the Coxian-2 lead time of the outstanding order of node i. The possible transitions of the process in an interval of length Δt are the following:
– Occurrence of an external demand: I1t jumps from n to n−1 with probability λ·Δt.
– Shipment arrival at the downstream node i after only one phase of delay: Iit increases by an amount qi (or less, depending on the stock of the upstream node i+1) with probability di1·μi1·Δt.
– Passage of the outstanding order of node i to the additional (2nd) phase of delay: Iit remains unchanged, with probability di2·μi1·Δt.
– Shipment arrival at the downstream node i after two phases of delay: Iit increases by an amount qi (or less, depending on the stock of the upstream node i+1) with probability μi2·Δt.
The above Continuous Time Markov Process with Discrete Space (CTMPDS) is similar to a QBD or skip-free process. This CTMPDS is skip-free to the left but not to the right, because the jumps of the random variable I1t are of size 1 for s1 < I1t ≤ S1 or of size q1 for 0 ≤ I1t ≤ s1.
The total number of states, i.e., the dimension of the transition matrix, for a chain with K=3 members and inventory policies (S1, s1), (S2, s2), is obtained by counting, for each node, the inventory levels both with an outstanding order (in which case the phase of the Coxian-2 lead time is also tracked) and without one. For example, if the retailer follows the (S1=2, s1=1) policy and the wholesaler follows the (S2=3, s2=1) policy, the retailer contributes S1 + s1 + 2 = 2 + 1 + 2 = 5 states and the total number of system states is 20.
The Markovian model of the system was examined and the transition matrix was explicitly derived, together with the steps of an algorithm for calculating the steady-state probabilities of all states of the system. From these probabilities, one may derive various performance measures of interest, such as the average inventory at the retailer, at the wholesaler and in the whole system, as well as the retailer’s fill rate.
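In outline, once a transition rate matrix Q has been generated, the steady-state probabilities solve πQ = 0 together with the normalisation Σi πi = 1. The generic dense solve below (in Python) illustrates the computation; the algorithm of the paper instead exploits the special structure of these matrices.

    import numpy as np

    def steady_state(Q):
        # Steady-state vector of a CTMC with generator Q (rows summing to zero):
        # replace one redundant balance equation by the normalisation condition.
        n = Q.shape[0]
        A = Q.T.copy()
        A[-1, :] = 1.0
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # Example: two states with rates 1.0 (0 -> 1) and 2.0 (1 -> 0).
    Q = np.array([[-1.0, 1.0],
                  [ 2.0, -2.0]])
    print(steady_state(Q))   # [2/3, 1/3]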
4. NUMERICAL RESULTS
This section provides some sample numerical results. The performance measures of the supply chain are affected exclusively by the replenishment policies (Si, si), the external demand rate λ, the replenishment rates (μij, i=1,2; j=1,2) and the branching probabilities (di1, di2).
Figure 2: Evolution of the fill rate
Using the proposed algorithm, we investigate the optimal (Si, si) policies that maximize the fill rate and minimize the WIPsystem, given that the total echelon inventory does not exceed an upper level (here equal to 8 units). Due to space limitations, these numerical results are omitted; only a couple of plots showing the evolution of the fill rate and the average WIP are given in Figures 2 and 3, respectively. From these results some observations were made, which lead to the following conclusions.
5. CONCLUSIONS
This work examines a three-echelon supply chain with Coxian-2 lead times, (Si, si) inventory policies and lost sales. We conducted an exact Markovian analysis of this system. After the calculation of the state probabilities, various performance measures (fill rate, average inventory and average cycle time) were computed.
The main findings of this work may be summarized as follows: (a) Both inventory policies affect the system’s fill rate. That is, to achieve high fill rates, given that the total echelon inventory does not exceed an upper
level, one should set S1 = S2 whenever possible. (b) High reorder points (ROP) do not, in general, give high fill rates; for balanced supply chains the optimal inventory policies are achieved for si = 1 or 2. (c) The S2 parameter tends to increase the WIPsystem. If the WIPsystem is critical, one should set S2 as low as possible; in this case, however, the minimization of the WIPsystem also results in the minimization of the fill rate.
Possible areas for further research include: (i) supply chains with more than three stages; (ii) modeling the replenishment (lead) times by a Coxian distribution with more than two phases; (iii) in the design of the systems treated in this paper, taking explicit care to meet customer service expectations by developing a target objective function of weighted measures of fill rates and cycle times.
ACKNOWLEDGEMENT
The authors Michael Vidalis and Chrissoleon Papadopoulos have been co-financed by the European Union
(European Social Fund – ESF) and Greek national funds through the Operational Program "Education and
Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program:
Thales: Investing in knowledge society through the European Social Fund.
REFERENCES
1. Buchholz, P., 1999. Structured analysis approaches for large Markov chains, Applied Numerical Mathematics, Vol. 31, No. 4, pp. 375-404.
2. Grassmann, W.K. and Heyman, D.P., 1990. Equilibrium distribution of block-structured Markov chains with repeated rows, Journal of Applied Probability, Vol. 27, pp. 557-576.
3. Helber, S., 2005. Analysis of flow lines with Cox-2-distributed processing times and limited buffer capacity, OR Spectrum, Vol. 27, No. 2-3, pp. 221-242.
4. Hill, R.M. and Johansen, S.G., 2006. Optimal and near-optimal policies for lost sales inventory models with at most one replenishment order outstanding, European Journal of Operational Research, Vol. 169, pp. 111-132.
5. Lee, H.-S., 1995. On continuous review stochastic (s, S) inventory systems with ordering delays, Computers & Industrial Engineering, Vol. 28, No. 4, pp. 763-771.
6. Lee, J.E. and Hong, Y., 2003. A stock rationing policy in a (s,S)-controlled stochastic production system with 2-phase Coxian processing times and lost sales, International Journal of Production Economics, Vol. 83, pp. 299-307.
7. Vidalis, M.I. and Papadopoulos, H.T., 1999. Markovian analysis of production lines with Coxian-2 service times, International Transactions in Operational Research, Vol. 6, pp. 495-524.
Kostarelou E., Saharidis G., Liberopoulos G., Pandelis D.| Centralized vs Decentralized Decomposition of
Supply Chain Using Bi-level Schema
Abstract
In supply chain management, good planning must be beneficial both for the whole supply chain and for each participating company. Since in practice each company tends to optimize its own production unit and the centralized strategy cannot be applied in most cases, a qualitative and quantitative comparison of centralized and decentralized strategies is important. In serial supply chains with one factory at every level, the transportation of products and information between two stages is rather simple. When the supply chain becomes more complicated, many difficulties arise; one of the major problems is how to decompose the system. We study a system with two stages in series, where each stage consists of more than one factory. Every factory of a stage supplies all next-stage factories with semi-products. The problem is solved for the deterministic case. An important decision is the decomposition point in the decentralized strategy. The problem under consideration in the decentralized policy is formulated as a bi-level programming problem. The results of the comparison determine the conditions under which the two strategies provide the same optimal solution, allowing one of the two analyses to be bypassed. Moreover, it is important to see how the size and the complexity of a supply chain affect its behavior. We conclude that decentralized planning results in loss of efficiency with respect to centralized planning, regardless of the size and the complexity of the system.
KEYWORDS
Supply chain management, centralized strategy, decentralized strategy, bi-level programming, qualitative analysis.
1. INTRODUCTION
In supply chain systems the aim is to optimize some performance measure, which concerns either the organizations of a single supply chain or a combination of supply chains. Production planning is a complex process that defines which products should be produced and which products should be consumed at each time instant over a time horizon. In an ever-evolving environment, which is complex and increasingly competitive, production planning is a difficult task. In practice, each company tends to optimize its own production unit with no attention to the whole chain. Achieving a chain-wide objective has become more difficult in the last decades due to globalization, as partnerships with suppliers, subcontractors and customers are needed and the production facilities are geographically distributed. Consequently, many difficulties arise in the backward information flow, and centralized systems have proven to be inflexible in cases of unexpected events.
The aim of this study is to analyze and compare the two types of optimization, centralized and decentralized, in order to obtain important qualitative results: to find the profit and the optimal policy of centralized optimization in contrast to decentralized optimization, and to answer whether the size and complexity of a supply chain affect its behavior. The system we study is composed of two factories that work in parallel at some stage of the supply chain and supply all next-stage factories of the chain. The performance of this system is compared with the results for a two-stage serial system studied by Saharidis et al. (2006).
2. BACKGROUND
As shown in Saharidis et al. (2006), decentralized planning yields less efficient results compared to centralized planning. It is, however, difficult to quantify the difference between the two approaches within the context of production planning, production scheduling and control policies. In recent years a few studies have provided both centralized and decentralized analyses, and there exist some interesting real-life applications with centralized versus decentralized analysis in the literature. A case study from the chemical industry is presented by Bassett et al. (1996), where a resource decomposition method is used to reduce the problem’s complexity by dividing the scheduling problem into subproblems based on its process recipes. In Harjunkoski and Grossmann (2001), a decomposition scheme for solving large-scale scheduling problems for steel production is presented. One more real-life application, from the automotive industry, is presented by Gnoni et al. (2003) and involves uncertain multiproduct and multi-period demand.
Saharidis et al. (2006) and Saharidis (2011) consider a supply chain production planning problem involving several enterprises, the final products of which are doors and windows made out of aluminum. The authors compare the two approaches to decision-making and examine the conditions under which they yield the same optimal solution. A stochastic supply chain is examined in Saharidis et al. (2009) and Saharidis (2011), again corresponding to an industry producing windows and doors. This supply chain is composed of two manufacturers that produce a single type of product with random demand. The objective is to minimize the sum of the long-run average holding, backordering, and subcontracting costs. Moreover, qualitative results are obtained from the centralized versus decentralized analysis.
A refinery system is addressed in Shah et al. (2009). The authors present the two optimization approaches (centralized and decentralized) as well as a structural decomposition rule which is generally applicable in this type of system, where production is a continuous process with intermediate stock areas. Finally, Baboli et al. (2011) consider a pharmaceutical downstream supply chain. The authors compare the two approaches for multi-product replenishment policies and seek to identify the best product quantity and replenishment period.
3. MAIN FOCUS
In general, the outcome of centralized optimization is beneficial for the whole supply chain, but full sharing of information is needed among the members of a supply chain for centralized optimization to be applicable. In reality, the ongoing competition between companies makes the application of centralized optimization a difficult task, so a comparison between centralized and decentralized optimization needs to be carried out. The conditions under which the centralized and the decentralized optimization have the same outcome are very important, as are the differences and the variations between the two cases.
In the decentralized case each partner is treated separately, ignoring the overall profit of the supply chain and the performance of the other partners. Pure decentralized optimization presents major difficulties in data sharing among partners for supply chain problems with more than two partners: with no information exchanged among partners, and without taking into consideration the plans and the needs of the same-stage factories, one cannot optimize the plan of each factory separately. In order to overcome this difficulty the partial decentralized case is solved. That is, the problem is formulated in such a way that each partner needs only to acquire partial information about the other partners, without mutual collaboration.
In Saharidis et al. (2006), where products and information are transferred between two serial plants, the setting is rather simple. The decision variables of their problem are the quantities produced and subcontracted in each factory and the inventory of every stage during every time period. They study the deterministic case
where the demand, the costs and the maximum production capacity of every factory are known. In the decentralized case the supply chain is decomposed into two parts. In the centralized case the objective is to optimize the performance of the whole supply chain taking into consideration the material balance and production capacity constraints, the costs and the final demand, whereas in the decentralized case the objective is to optimize the performance of each factory separately. The authors assume that the initial stocks are zero and that there is no final demand during the first time period. By comparing the centralized and the decentralized optimization, the authors analytically proved some qualitative results.
The system considered here is a two-stage supply chain where in each stage two factories work in parallel. Each factory in the second stage has to satisfy its own final demand using semi-products supplied by both first-stage factories. Every factory has an output stock and a sub-contractor who provides additional external capacity if needed.
The major difficulty for problems with more than two partners is in sharing data among the partners. In general, supply chain planning problems inherently exhibit multi-level decision network structures. Bi-level programming problems are mathematical problems that have another parametric optimization problem as part of their constraints. In bi-level programming problems, the partners have perfect information about each other, but the decision problems are non-cooperative, with a hierarchical ordering of the players. The upper-level decision maker, the leader, is the first player, and the lower-level decision maker, the follower, is the second player. Bi-level programming problems favor the leader: the leader finds the reaction of the follower to each feasible production plan and then, among all these reactions, defines its own optimal production plan, while the follower simply reacts to the leader's decision. The results of the bi-level problem and of the pure decentralized case will not be the same; the former will outperform the latter thanks to the partial sharing of data. An interesting issue in bi-level problems is the existence of an equilibrium solution. If a solution of this type exists, the outcome is close to the solution of global optimization, and in some cases identical to it.
There are a few studies in the literature that formulate a supply chain as a Stackelberg non-cooperative game, but they do not provide comparisons between the decentralized bi-level and the centralized strategy. Sadigh et al. (2012) investigate a multi-product manufacturer-retailer supply chain and propose two power scenarios. Kovács et al. (2013) provide a comparison among four different computational approaches: decomposition, integration, co-ordination and bi-level programming.
In the case we study, for the centralized optimization we take into account all the characteristics of the production simultaneously and the system is globally optimized. The decision variables of this problem are, for every time period, the production, the inventory and the quantities subcontracted in each first-stage factory, as well as the production, the inventory and the quantities subcontracted in each second-stage factory, the latter using semi-products from all first-stage factories. We study the deterministic case, in which the costs, the maximum production capacity of every factory, the demand of every second-stage factory, and the minimum production of the second-stage factories using semi-products from every first-stage factory are known. The performance of the system is optimized taking into consideration material balance and production capacity constraints, as well as constraints on the quantities transported between the two stages. We also assume that the initial stocks are zero and that there is no final demand during the first time period.
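As a rough sketch of the shape of this linear program (with hypothetical symbols, since the paper's notation is not reproduced here), let $x_{f,t}$, $w_{f,t}$ and $I_{f,t}$ denote the production, subcontracted quantity and inventory of factory $f$ in period $t$, $d_{k,t}$ the final demand of second-stage factory $k$, and $y_{j,k,t}$ the semi-products shipped from first-stage factory $j$ to second-stage factory $k$:

$$
\begin{aligned}
\min\ & \sum_{t}\sum_{f}\bigl(c_f\, x_{f,t} + s_f\, w_{f,t} + h_f\, I_{f,t}\bigr) \\
\text{s.t.}\ & I_{j,t} = I_{j,t-1} + x_{j,t} + w_{j,t} - \sum_{k} y_{j,k,t} && \text{(first-stage balance)} \\
& I_{k,t} = I_{k,t-1} + x_{k,t} + w_{k,t} - d_{k,t}, \qquad x_{k,t} \le \sum_{j} y_{j,k,t} && \text{(second-stage balance, semi-product supply)} \\
& 0 \le x_{f,t} \le \bar{x}_f, \qquad w_{f,t},\, I_{f,t},\, y_{j,k,t} \ge 0, \qquad I_{f,0} = 0.
\end{aligned}
$$

The decentralized (bi-level) variant described next keeps the same ingredients but splits the objective and constraints between the two stages.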
In the decentralized approach, the major problem is that it is difficult to determine the amounts of products transferred from each first-stage output stock to each second-stage factory. We consider the second-stage factories, which face the final demand, as the “leaders”, and the first-stage factories as the “followers”. Assuming a fixed plan for the second stage, the optimal production plan for the first-stage factories is determined.
Linear formulations are developed for both optimization problems. In order to solve the resulting linear bi-level problem in the decentralized case, the lower-level problem is replaced by its Karush-Kuhn-Tucker (KKT) optimality conditions, and the outcomes of computational experiments for both the local and the global strategy have been compared. The results of the comparison determine the conditions under which the two strategies provide the same optimal solution, allowing one of the two analyses to be bypassed.
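As a generic illustration of this reduction (the matrices below are hypothetical placeholders, not the data of the studied supply chain), a linear bi-level problem of the form

$$
\min_{x, y}\ c^{\top} x + d^{\top} y
\quad \text{s.t.}\quad A x + B y \le b, \qquad
y \in \arg\min_{y' \ge 0}\ \{\, f^{\top} y' : C x + D y' \le g \,\}
$$

is replaced by the single-level program

$$
\begin{aligned}
\min_{x, y, \lambda}\ & c^{\top} x + d^{\top} y \\
\text{s.t.}\ & A x + B y \le b, \qquad C x + D y \le g, \qquad y \ge 0, \qquad \lambda \ge 0, \\
& f + D^{\top} \lambda \ge 0, \qquad \lambda^{\top}\bigl(g - C x - D y\bigr) = 0, \qquad y^{\top}\bigl(f + D^{\top} \lambda\bigr) = 0,
\end{aligned}
$$

whose complementarity conditions are typically linearised with big-M constraints and binary variables before the model is passed to a solver such as CPLEX.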
4. QUALITATIVE RESULTS
We formulate the two cases and encode them using the C++ programming language and the IBM/ILOG CPLEX optimizer. Through the analysis of several case studies, a certain qualitative behavior of the supply chain has been consistently observed. The qualitative results for the serial two-stage system are very useful and form the basis for more complicated systems with more than one factory per stage; additional restrictions are needed, however, for the properties of the serial system to remain valid in the parallel system. In general, the properties of a serial two-stage system carry over to more complicated systems only when a Stackelberg equilibrium solution occurs in the latter. Such an equilibrium solution involves a type of collaboration between leader and follower, and is obtained when the leader accepts a slightly worse solution in order to favor the follower and, eventually, the whole supply chain. This leads to a lower total cost for the supply chain than the one obtained without collaboration. The properties obtained for the parallel system are the following.
1. The overall cost in the case of centralized optimization is lower than or equal to that of the pure decentralized optimization, as well as to that of the partial decentralized optimization.
2. Because bi-level programming problems favor the leader, the production cost of each of the second-stage factories, as well as the total second-stage cost, in the case of partial local optimization is less than or equal to the corresponding cost of the global optimization. If the bi-level problem results in an equilibrium solution, then the outcome is identical to that of the centralized policy.
3. In terms of the follower’s optimal solution, the total production cost in partial decentralized optimization is greater than or equal to that of the centralized optimization. This is not necessarily valid for the production cost of the individual first-stage factories. If in the decentralized case the demand for each first-stage factory equals the demand of the centralized case, then this property is valid for the factories of the first stage; the same applies to the serial two-stage system.
4. The strategy adopted by the system (subcontracting, inventory or mixed strategy) is the same when the difference between the production cost and the subcontracting cost remains constant. However, the difference between the costs of the local and the global optimization is constant only when the total demand for the first-stage factories is the same in both the local and the global strategy case, that is, when an equilibrium exists. Using this property, one is not obliged to change the production plan when the production cost changes, and in some cases one of the two analyses can be avoided.
5. One of the two analyses can also be avoided when the first-stage factories obtain demand curves which are similar to the curves of the final products, that is, when the following conditions hold at the same time:
– the optimal solution of the centralized optimization suggests that the factories of the second stage subcontract the extra demand regardless of the production plan of the first-stage factories,
– the total demand for each first-stage factory in the global case equals the respective demand in the local case,
– the demand for each first-stage factory in the local case, for every time period, is lower than or equal to the maximum of the production capacities of the first-stage factories, the respective demands in the global case and the respective requirements of the second-stage factories,
– the total subcontracted quantity for each first-stage factory in the global case equals the respective quantity in the local case,
– the total stocked quantity for each first-stage factory in the global case equals the respective quantity in the local case.
6. Finally, the centralized optimization provides the same optimal plan and cost as the decentralized optimization when the first-stage factories have the best possible demand curve, that is, when the following hold at the same time:
– the extra demand of the second-stage factories is satisfied from inventory,
– the demand for the first-stage factories is the same in the centralized and the decentralized case, and
– constraints are included which ensure that the demand a first-stage factory must satisfy at any time period is not greater than the maximum of the second-stage factories’ requirements at that time period.
5. CONCLUSIONS
It is known that decentralized planning results in loss of efficiency with respect to centralized planning. In serial two-stage systems with many factories working in parallel at each stage, difficulties arise when data need to be shared among the stages and major questions on production and cost distribution for the supply chain need to be answered. We compare two supply chain systems, a serial and a parallel one, and two types of optimization, centralized and decentralized. A bi-level programming formulation can be used to locally optimize the parallel system. The qualitative results provide useful information that can be used in production planning, and the comparison between the qualitative results of the serial and the parallel system shows whether the size and the complexity of a supply chain affect its behavior.
The study of more complex systems could enrich the conclusions drawn from the qualitative analysis. Moreover, the computation of the exact equilibrium conditions under which the outcomes of the decentralized and the centralized policy coincide would be very useful. Considering additional parameters, such as random processing times and random demand, would bring these models closer to real-life problems. Some variations of the bi-level formulation could be solved as well; one variation would be to exchange the roles of leader and follower. We could consider as leader the first-stage factories, which in addition to production planning would have to determine the price at which the second-stage factories buy their semi-products; as follower we would then consider the second-stage factories, which would try to optimize their production plan for the given selling prices and first-stage production plans at every time period. It is therefore very important to obtain qualitative results from the centralized and decentralized analysis in order to answer major questions about the supply chain’s behavior.
ACKNOWLEDGEMENT
This work was supported by grant MIS 379526 “Odysseus: A holistic approach for managing variability in contemporary global supply chain networks,” which was co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) – Research Funding Program THALES: “Reinforcement of the interdisciplinary and/or inter-institutional research and innovation.”
REFERENCES
Baboli A., Fondrevelle J., Tavakkoli-Moghaddam R. and Mehrabi A., 2011. A replenishment policy based on joint
optimization in a downstream pharmaceutical supply chain: centralized vs. decentralized replenishment, International
Journal of Advanced Manufacturing Technology, Vol. 57, No. 1-4, pp. 367-378.
Bassett M.H., Pekny J. and Reklaitis G., 1996. Decomposition Techniques for the Solution of Large Scale Scheduling
Problems, American Institute of Chemical Engineers Journal, Vol. 42, No. 12, pp. 3373-3387.
Harjunkoski I. and Grossmann I.E., 2001. A decomposition approach for the scheduling of a steel plant production,
Computers and Chemical Engineering, Vol. 25, No. 11-12, pp. 1647-1660.
Gnoni M., Iavagnilio R., Mossa G., Mummolo G. and Leva A.D., 2003. Production Planning of a Multi-site Manufacturing
System by Hybrid Modeling: A Case Study From the Automotive Industry, International Journal of Production Economics,
Vol. 85, No. 2, pp. 251-262.
Kovács A., Egri P., Kis T. and Váncza J., 2013. Inventory control in supply chains: Alternative approaches to a two-stage
lot-sizing problem, International Journal of Production Economics, Vol. 143, No. 2, pp. 385-394.
Sadigh A.N., Mozafari M. and Karimi B., 2012. Manufacturer–retailer supply chain coordination: A bi-level programming
approach, Advances in Engineering Software, Vol. 45, No. 1, pp. 144–152.
Saharidis G.K.D., 2011. Supply Chain Optimization: Centralized vs Decentralized Planning and Scheduling, in Pengzhong Li (Ed.), Supply Chain Management, InTech (Chapter 1). Available from: http://www.intechopen.com/books/supply-chain-management/supply-chain-optimization-centralized-vs-decentralized-planning-and-scheduling.
Saharidis G.K., Dallery Y. and Karaesmen F., 2006. Centralized versus decentralized production planning, RAIRO - Operations Research, Vol. 40, No. 2, pp. 113-128.
Saharidis G.K., Kouikoglou V.S. and Dallery Y., 2009. Centralized and decentralized control policies for a two-stage
stochastic supply chain with subcontracting, International Journal of Production Economics, Vol. 117, No. 1, pp. 117-
126.
Shah N., Saharidis G.K., Jia Z. and Ierapetritou M.G., 2009. Centralized–decentralized optimization for refinery
scheduling, Computers and Chemical Engineering, Vol. 33, No. 12, pp. 2091–2105.
Ponis S., Gayialis S., Tatsiopoulos I., Panayiotou N., Stamatiou D.R., Ntalla A. | Modeling Supply Chain Processes: A Review and Critical Evaluation of Available Reference Models
Abstract
In today’s business environment, supply chains involve a number of autonomous organizations. The nature of supply chain processes, with inter-organizational activities involving different enterprises, calls for their design, analysis, control and evaluation in a well-designed and structured manner. The increasing importance of business processes inevitably puts process models at the epicenter of most efforts for achieving the required interoperability and agility in dynamic supply chains. As a consequence, it is necessary to design or redesign efficient business process models using supply chain process models. This can be achieved by reusing knowledge captured in reference process models. The purpose of this paper is to study current research efforts and pinpoint those appropriate to serve as the basis for the development of a supply chain reference model focusing on demand variability management. In that direction, a detailed literature review of available commercial and academic reference models, followed by a criteria-based screening process, is carried out. After examining the core concept and the basic principles behind each model, and comparing their strengths and weaknesses from a critical and original perspective, the GSCF – Global Supply Chain Forum reference model is selected. Finally, in this paper the Supply Chain Reference Model for Managing Demand Variability (SC REMEDY) is introduced: a decision, knowledge, IT and risk enhanced reference model that is focused on demand variability management.
KEYWORDS
Literature Review, Supply Chain, Modeling, Reference Models, Business Processes, SAP, SCOR, GSCF
1. INTRODUCTION
Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain (Croxton et al., 2001). Although the importance of supply chain relations is widely acknowledged, seamless co-ordination is rarely achieved in practice (Trkman et al., 2007). Supply chain performance can be improved through supply chain integration, and to that purpose reference models provide processes to support integration. In order to build links and relationships between companies, standard processes are necessary. According to Trkman et al. (2007), processes have five levels of maturity (ad hoc, defined, linked, integrated, and extended), and reference models attempt to provide extended processes for optimal performance. Because processes are now viewed as assets requiring
investment and development as they mature, the concept of process maturity is becoming increasingly
important as firms adopt a process view of the organization (Lockamy & McCormack, 2004).
In this paper, we initially present the results of a review and a criteria-based comparative analysis of available supply chain reference models, carried out for the particular needs of a research project aiming to provide a holistic approach for managing demand variability in contemporary supply chain networks. Then the Supply Chain Reference Model for Managing Demand Variability (SC REMEDY) is introduced, a decision, knowledge, IT and risk enhanced reference model that is focused on demand variability management.
SCOR was developed by the Supply Chain Council (http://supply-chain.org/) in 1996 and is commonly cited in the contemporary literature (Stephens, 2001; Huan et al., 2004; Wang et al., 2010; Liu et al., 2013). It is a business process reference model that links process descriptions and definitions with metrics, best practice and technology (Wondergem, 2001). It aims to support communication between supply chain partners and to make supply chain management more effective. SCOR 10.0 describes five basic processes (Plan, Source, Make, Deliver, and Return), implemented at four distinct levels. The first three levels describe standardized elements of the model that can be applied according to organizational needs (generic processes), whereas level four describes the implementation of specific supply chain management practices, thus imposing process instantiation.
Originating from the Information Technology (IT) domain, the SAP R/3 and SAP APO reference models, developed by SAP AG (www.sap.com), describe the two systems’ capabilities from a business perspective and
depict the systems’ functions through graphical methods. They include the components hierarchy, the process models, object models, data models, industry models and group models. The components hierarchy groups the models’ functions according to specific operational criteria and describes the systems through a functional lens (Kallrath & Maindl, 2006; Knolmayer et al., 2002). The process models depict, through graphical representation, the functions and integration of the business processes offered by the SAP R/3 and SAP APO systems.
On the other hand, as mentioned earlier, CPFR is more of an initiative towards implementing a set of business practices than a reference model. Still, its wide acceptance and demand management orientation dictated its inclusion in our study. It was introduced in 1998 by the Voluntary Inter-Industry Commerce Standards Association (VICS), which assembled a committee in order to record best practices and create guidelines to be integrated into the framework. CPFR can integrate all members of a supply chain to jointly develop demand forecasts, production and purchasing plans, and inventory replenishments (Sari, 2008). These efforts resulted, according to Attaran & Attaran (2007), in the framework becoming the third most used methodology for supply chain management improvement.
Turning to more academic and research-oriented efforts, we identified the reference model introduced by Mentzer et al. (2001). Before presenting their model, the authors conduct broad research on supply chain management theories and definitions and present a literature review. They then formulate their own definition of supply chain management and propose a reference model based on it. The reference model is only described at a conceptual level and there have been no further efforts towards its development.
Finally, the research efforts of Verdouw et al. (2011) and Klingebiel (2008) must be noted. The authors of the first paper propose a framework based on the terminology and definition of SCOR 9.0 processes; they introduce the Viable System Model (VSM) and Business Process Modeling Notation (BPMN) logic to the documented processes in order to create a toolkit for modeling the supply chain. In the second paper, the author provides a reference model for build-to-order (BTO) production by focusing on high-level processes in the automotive industry, and then describes the steps needed to instantiate her reference model in other BTO industries. In both efforts, beyond the high-level process description, there is no analysis of the lower-level processes.
Having studied the literature related to supply chain modeling and the existing efforts towards the development and application of supply chain reference models, in the next section we apply a screening process based on predefined criteria and finally select the most suitable model for the needs of the research project briefly presented at the beginning of that section.
Based on the project’s panel of experts and their experience from previous projects, we established a set of fifteen (15) selection criteria in order to assist the process of determining the most suitable model for the research at hand. The list of criteria under study is shown in Table 1.
Next, weights were assigned to each criterion based on the experts’ opinions, and an initial screening was performed, shortlisting the SCOR and GSCF models as those providing the greatest coverage of the criteria list. Then, an internal discussion and thorough examination of each model was initiated. The GSCF has a very broad scope, which is considered both its strength and its weakness (Lambert et al., 2005): on the one hand, there are increased opportunities for gaining value from the supply chain; on the other hand, there are challenges to be faced during its implementation. An organization cannot start implementing one process at a time due to the complex interfaces existing between the eight processes; such a scheme could lead to lower-than-optimum performance. Also, the cross-functional teams described in the model need to be committed to key suppliers and key customers in order to function. GSCF is a better match for organizations with a market orientation, due to its emphasis on resource alignment; it has a more strategic nature and focuses on increasing long-term relationships and value along the supply chain. SCOR, on the other hand, focuses on a narrower scope than the GSCF, namely activities such as purchasing, logistics and manufacturing. This makes it easier to implement in most organizations, since those functions are more likely to be integrated. But there is no consideration of the marketing, finance and research and development functions of the organization, which could potentially lead to lower levels of performance. SCOR also lacks explicit connections between functional strategies. One of its strengths is the set of benchmarking tools it provides, assisted by the Supply Chain Council’s source of data and information. Another strength is the catalogue of best practices offered, aiming at the improvement of transactional efficiency. SCOR is a useful tool for identifying areas of improvement to achieve quick pay-back opportunities and satisfy top management’s desire for cost reductions and asset efficiency (Lambert et al., 2005).
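The screening step just described is, at its core, a weighted-scoring exercise. The short sketch below illustrates it; the criterion names, weights and per-model scores are hypothetical placeholders (the project's actual fifteen criteria are listed in Table 1), not values from the study.

```python
# Hypothetical illustration of the weighted screening step; the real criteria,
# weights and scores are those of Table 1 and the project's expert panel.
CRITERIA_WEIGHTS = {            # expert-assigned weight per criterion
    "process coverage": 0.30,
    "best practices": 0.25,
    "metrics": 0.20,
    "IT coverage": 0.15,
    "market acceptance": 0.10,
}

MODEL_SCORES = {                # 0-10 coverage score per criterion and model
    "SCOR": {"process coverage": 7, "best practices": 9, "metrics": 9,
             "IT coverage": 6, "market acceptance": 9},
    "GSCF": {"process coverage": 9, "best practices": 8, "metrics": 7,
             "IT coverage": 6, "market acceptance": 7},
    "Mentzer": {"process coverage": 5, "best practices": 2, "metrics": 2,
                "IT coverage": 3, "market acceptance": 4},
}

def weighted_score(scores):
    """Sum of criterion scores weighted by the panel's criterion weights."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

shortlist = sorted(MODEL_SCORES, key=lambda m: weighted_score(MODEL_SCORES[m]),
                   reverse=True)[:2]
print(shortlist)  # the two models covering most of the criteria list
```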
The GSCF reference model is process oriented and constitutes a holistic supply chain model that is demand driven, comprehensive and flexible, and meets the aims and particularities of our research. An organization that adopts the GSCF framework is process oriented, as every process that concerns a certain product or service has to function in harmony with the rest of the processes. All activities are important for the organization, but each organization can decide which activities are crucial according to its scope. The model provides a standard set of processes and analytical guidelines for their implementation. It enhances the organization’s cooperation with the rest of the participants of the supply chain, such as suppliers and clients, and signifies the important nodes which need to be tied firmly to the company. Each activity is thoroughly analyzed, encompassing all the processes needed by an organization that aims to increase its agility and capabilities throughout the supply chain.
The GSCF and SCOR 10.0 RMs, both originating from the field of practice, share common characteristics within the criteria set, like the existence of five actors (from the supplier’s supplier to the customer’s customer) in both cases, the existence of best practices, metrics and sub-processes, the level of focus on production, IT coverage, demand forecasting, demand variability management, the level of agility and the degree of market acceptance. Despite these fields of similarity, they are actually two quite different models. The GSCF provides a more comprehensive and extensive set of main processes that are described in verbal fashion followed by basic graphical support, in contrast to the plain verbal fashion of
SCOR. Each GSCF process is thoroughly analyzed and supported by best practices and specialists’ opinions whereas, in SCOR, best practices are weakly connected to the processes. The total configuration of GSCF provides economic value added, making it ideal for organizations with long-term strategies.
Based on the results of the project team’s analysis and internal discussion, the GSCF model was selected as
an initial starting point and reference for the development of our decision, knowledge, IT and risk enhanced
reference model that is focused on demand variability management.
The SC REMEDY model concentrates on demand variability management and rests on three elements: supply chain structure, activities and management features, providing optimal and novel activities that any organization can adopt in order to improve its ability to manage demand variability. Three tiers of the supply chain are described in the model: the supplier, the organization and the client. The SC REMEDY model builds on the strategic and operational discrimination of processes described in GSCF, adopting the practice of cross-functional process management teams and the involvement of IT, and consists of nine major cross-functional activities, as shown in Table 2. Each activity is divided into a strategic and an operational level. The activities run across the length of the supply chain and cut through the functional silos of the organization. The strategic level relates to the definition of the long-term implementation of the activities, while the operational level concerns the short-term implementation.
Currently, the implementation of the SC REMEDY model is a work in progress. Modeling efforts so far include the elaboration of the core value chain diagram of the model, its three (3) supporting organizational charts (one for each implicated business entity, i.e. supplier, organization, client) and the underlying process decomposition into nine (9) function trees (one for each cross-functional activity, see Table 2) and ninety-two EPC (Event-driven Process Chain) diagrams. The modeling effort is complemented with an Application System Diagram, providing a complete map of the IT infrastructural components supporting the supply chain operation.
ACKNOWLEDGEMENT
The research efforts described in this paper are part of the research project “A Holistic Approach for Managing Variability in Contemporary Global Supply Chain Networks” under the research action “Thales - Support of the interdisciplinary and/or inter-institutional research and innovation”, which is implemented under the Operational Programme “Education and Lifelong Learning”, NSRF 2007-2013, and is co-funded by the European Union (European Social Fund) and the Greek Government.
REFERENCES
Attaran, M. & Attaran, S., 2007. Collaborative supply chain management: The most promising practice for building
efficient and sustainable supply chains. Business Process Management Journal, Vol. 13, No. 3, pp.390–404.
Croxton, K.L., Garcia-Dastugue, S.J., Lambert, D.M., Rogers, D.S., 2001. The Supply Chain Management Processes. The
International Journal of Logistics Management, Vol. 12, No. 2, pp.13–36.
Ellram, L. M., Tate, W. L., & Billington, C., 2004. Understanding and managing the services supply chain. Journal of
Supply Chain Management, Vol. 40, No. 4, pp.17-32.
Gayialis, S.P., Ponis, S.T., Tatsiopoulos, I.P. and Stamatiou, D-R.I., 2013. A Knowledge-based Reference Model to Support Demand Management in Contemporary Supply Chains. The 14th European Conference on Knowledge Management, Kaunas University of Technology, Kaunas, Lithuania, 5-6 September, pp. 236-245.
Huan, S. H., Sheoran, S. K., & Wang, G. (2004). A review and analysis of supply chain operations reference (SCOR) model.
Supply Chain Management: An International Journal, Vol. 9, No. 1, pp.23-29.
Kallrath, J., & Maindl, T. I. (2006). Real Optimization with SAP® APO. Springer, Heidelberg, Germany.
Knolmayer, G., Mertens, P., Zeier, A. (2002). Supply Chain Management Based on SAP Systems, Springer-Verlag, Berlin.
Klingebiel, K., 2008. A BTO Reference Model for High-Level Supply Chain Design. In G. Parry & A. Graves, eds. Build To
Order: The Road to the 5-Day Car. Springer London.
Lambert, D.M., García-Dastugue, S.J. & Croxton, K.L., 2005. An Evaluation of process oriented supply Chain Management
Frameworks. Journal of Business Logistics, Vol. 26, No. 1, pp.25–51.
275
2nd International Symposium and 24th National Conference on Operational Research
ISBN: 978-618-80361-1-6
Ponis S., GayIalis S., Tatsiopoulos I., Panayiotou N., Stamatiou D.R., Ntalla A.| Modeling Supply Chain
Processes: A Review and Critical Evaluation of Available Reference Models
Liu, P., Huang, S. H., Mokasdar, A., Zhou, H., & Hou, L. (2013). The impact of additive manufacturing in the aircraft spare
parts supply chain: supply chain operation reference (SCOR) model based analysis. Production Planning & Control,
(ahead-of-print), pp.1-13.
Lockamy, A.I. & McCormack, K., 2004. The development of a supply chain management process maturity model using
the concepts of business process orientation. Supply Chain Management: An International Journal, Vol. 9, No. 4, pp.272–
278.
Mentzer, J.T., DeWitt, W., Keebler, J.S., Min, S., Nix, N.W., Smith, C.D., Zacharia, Z.G., 2001. Defining Supply Chain
Management. Journal of Business Logistics, Vol. 22, No. 2, pp.1–25.
Naslund, D. & Williamson, S., 2010. What is Management in Supply Chain Management? - A Critical Review of Definitions, Frameworks and Terminology. Journal of Management Policy and Practice, Vol. 11, No. 4, pp.11–28.
Ponis S.T., 2006. A Reference Model to Support Knowledge Logistics Management in Virtual Enterprises: A Proposed
Methodology. International Journal of Knowledge, Culture and Change Management, Vol. 5, No. 9, pp. 1-9.
Sari, K., 2008. On the benefits of CPFR and VMI: a comparative simulation study. International Journal Production
Economics, Vol. 113, No. 4, pp.575–586.
Stephens, S., 2001. Supply chain operations reference model version 5.0: a new tool to improve supply chain efficiency
and achieve best practice. Information Systems Frontiers, Vol. 3, No. 4, pp.471-476.
Trkman, P., Stemberger, M.I., Jaklic, J., Groznik, A., 2007. Process approach to supply chain integration. Supply Chain
Management: An International Journal, Vol. 12, No. 2, pp.116–128.
Verdouw, C.N., Beulens, A.J.M, Trienekens, J.H., van der Vorst, J.G.A.J., 2011. A framework for modelling business
processes in demand-driven supply chains. Production Planning & Control, Vol. 22, No. 4, pp.365–388.
Wang, W. Y., Chan, H. K., & Pauleen, D. J., 2010. Aligning business process reengineering in implementing global supply
chain systems by the SCOR model. International Journal of Production Research, Vol. 48, No. 19, pp.5647-5669.
Wondergem, J., 2001. Supply Chain Operations Reference-model Includes all Elements of Demand Satisfaction, Business
Briefing: Global Purchasing and Supply Chain Strategies, pp.27-29.
Bimpos A., Papadopoulos C. | Supply Chain Optimization with Regard to Customer Satisfaction: Solving with GAMS and Comparing with Analytic Hierarchy Process (AHP)
Abstract
The purpose of this study is to develop a general multi-criteria supply chain model. Our main target is to optimize the supply chain with regard to customer satisfaction. The optimization of financial figures such as profit, holding cost, general cost etc. is the other target of the model. Specifically, a supply chain of three echelons is examined, and the model tries to minimize the holding cost in every echelon; through this, profits are increased while customer satisfaction is guaranteed. Two main alternative policies to achieve these targets are presented. The first alternative proposes holding more inventory, but this leads to excess holding cost. The second alternative examines a more flexible inventory management policy, which allows the supply chain to lower the holding cost and also to maintain higher customer satisfaction. The GAMS software was used to solve the mathematical model. Also, a qualitative decision-making method, the Analytic Hierarchy Process, is presented to enrich our results.
KEYWORDS
Customer Satisfaction, Supply Chain Management, GAMS, AHP.
The literature in the area of supply chain optimization is extensive. Here, due to space limitations, only a few studies are mentioned; the interested reader is referred to the Master’s thesis of Bibos (2013) for more references and a systematic classification of the relevant material. Jingshan Li et al. (2004) measured customer satisfaction in the automotive industry with the so-called “Due-Time Performance” (DTP) approach; DTP measures the probability that a customer receives the ordered parts within the proper time period. Also, Jung et al. (2004) examined the impact of inventory management on profit while serving the demand; in effect, they tried to determine how much inventory to keep in order to satisfy the demand while keeping the holding cost low at the same time.
Dong et al. (2005) formulated and examined a multicriteria supply chain model. They maximized the profit while maintaining other targets (such as holding cost and transportation time) at a proper level and satisfying the demand. They measured the time period without shortage.
Jodlbauer (2008) studied a production model and how different targets (e.g., customer satisfaction level) can affect the production program. van Kampen et al. (2010) considered, in a case study, the optimization of safety stock and transportation time based on the customer satisfaction goal.
The main aim of this paper is to create an easily established, general multi-criteria model which fully serves the demand and satisfies customers. The difference in our study is that we first try to guarantee that all customers will get the quantity they order at the promised time, and then we examine the optimization of the financial targets. In other words, we attempt to achieve a complete customer satisfaction level while managing the cost. We examine the holding cost across the supply chain and how we can reduce it to its lowest level. To solve the model we use the GAMS software (CPLEX solver). The Analytic Hierarchy Process is also employed as a complementary qualitative decision-making method. The rest of the paper is structured as follows: in the next Section, the mathematical model is presented and in Section 3, a numerical example is given. Finally, the last Section concludes the paper and suggests a few areas for further research.
The objective function is minimized subject to constraints I-XVIII, grouped into five categories; the constraints of the third category fix the initial conditions (t = 0).
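Although the full formulation is omitted here, its general shape can be sketched. The following is a minimal single-period illustration written with PuLP rather than GAMS/CPLEX; the network (one manufacturer, two distributors, four retailers) and the retailer demands follow the paper, while the holding costs and the simplified flow-balance constraints are assumptions made for this sketch only.

```python
# A minimal single-period sketch in the spirit of the paper's three-echelon
# model. The real model's constraint categories I-XVIII are not reproduced;
# holding costs are illustrative assumptions.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

demand = {"r1": 1000, "r2": 500, "r3": 1200, "r4": 900}
serves = {"d1": ("r1", "r2"), "d2": ("r3", "r4")}
hold = {"m": 1.0, "d1": 2.0, "d2": 2.0,
        "r1": 4.0, "r2": 4.0, "r3": 4.0, "r4": 4.0}

prob = LpProblem("three_echelon_holding_cost", LpMinimize)
inv = {n: LpVariable(f"inv_{n}", lowBound=0) for n in hold}          # stock held
ship_md = {d: LpVariable(f"ship_m_{d}", lowBound=0) for d in serves}
ship_dr = {(d, r): LpVariable(f"ship_{d}_{r}", lowBound=0)
           for d, rs in serves.items() for r in rs}
production = LpVariable("production", lowBound=0)

prob += lpSum(hold[n] * inv[n] for n in hold)                        # objective
prob += production == lpSum(ship_md.values()) + inv["m"]             # flow balance
for d, rs in serves.items():
    prob += ship_md[d] == lpSum(ship_dr[d, r] for r in rs) + inv[d]
    for r in rs:
        prob += ship_dr[d, r] == demand[r] + inv[r]                  # serve demand fully
# the paper's alternatives would add safety-stock lower bounds here,
# e.g. prob += inv["r1"] >= 300  (one standard deviation for retailer 1)

prob.solve()
print(value(prob.objective), production.value())
```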
3. NUMERICAL EXAMPLE
(Due to space limitations, the constant data are not given here.)
We used Microsoft Excel (random number generator) to generate the customer demand data. More specifically, we supposed that the demand follows the normal distribution and that there are four different scenarios (one for each retailer). We used large standard deviations in order to test our model under stress. The four pairs of numbers are (the first number is the mean and the second the standard deviation): (1000, 300), (500, 50), (1200, 400), (900, 400). We examined two alternatives of inventory management: in the first, the retailers keep as much inventory as they need to serve the demand; in the second, most of the inventory is kept at the manufacturer and a small amount of products at the other echelons.
1st Alternative: Following the first scenario (the customers get the products they want at the time they order them), the supply chain relies on the random numbers (these could also be historical or forecast data) of the demand when programming the flow of products. That situation leaves the retailers unprotected against a (very) possible demand variance. A solution to this problem would be to increase the minimum inventory that each echelon maintains. The retailers could hold inventory equal to the average plus one, two or three standard deviations, depending on how far they want to reduce the probability of unsatisfied demand (if a demand variation happens). Unfortunately, even though we manage to satisfy almost all the demand by keeping more and more inventory, at the same time we cause a great increase in the holding cost. A simulation model was developed using Microsoft Excel: we supposed that an increase (based on random data) in demand occurs and counted how many times the demand was not satisfied; 30 scenarios were examined for each retailer.
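The same stress test is easy to reproduce outside Excel. The sketch below simulates normally distributed demand for each retailer and counts how often an inventory level of mean plus k standard deviations fails to cover it; the (mean, standard deviation) pairs and the 30 scenarios per retailer follow the text, while the random seed is arbitrary.

```python
# Stress test of the first alternative: simulate normal demand per retailer and
# count how often stock set at mean + k standard deviations is exceeded.
import random

retailers = [(1000, 300), (500, 50), (1200, 400), (900, 400)]
random.seed(1)  # arbitrary seed, for reproducibility

for k in (1, 2, 3):                          # safety-stock multiplier
    shortfalls = 0
    for mu, sigma in retailers:
        stock = mu + k * sigma
        shortfalls += sum(random.gauss(mu, sigma) > stock for _ in range(30))
    print(f"k={k}: {shortfalls} unsatisfied scenarios out of {4 * 30}")
```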
2nd Alternative: The second alternative is based on IT systems. Customers get their orders within a promised time period. Retailers keep inventory equal to one standard deviation. Likewise, each distributor maintains inventory equal to the sum of one standard deviation of each of the two retailers it serves. Most of the inventory is kept at the manufacturer and equals the sum of the average plus one standard deviation of each of the four retailers. Each time an order is received, the manufacturer starts shipping the products needed to the next echelon in order to reach the final customer. This way, the available inventory in the supply chain equals the average plus three standard deviations for every retailer, so any demand deviation or change can be faced successfully.
Results: Of the two alternatives, the second is better because it offers better customer satisfaction and lower holding cost; therefore the profits increase too. Figure 1 compares the holding cost of the two alternatives.
To enrich the outcome, we also used a qualitative decision-making method, the Analytic Hierarchy Process (AHP). To make the comparison more interesting, we inserted a third alternative, in which the supply chain works as a pure pull system: no inventory is kept, so it takes longer to fulfill an order. The result of this method (shown in Table 1) is that the second alternative is still the best, because it provides higher customer satisfaction with lower holding cost.
Table 1 Normalized pairwise matrix
Table 2 The score of each alternative in each criterion
AHP compares the available options for a decision under a set of criteria. Here, the options are the alternatives and the criteria are the holding cost, the customer satisfaction and the time for order fulfillment. Table 1 shows the final classification of the criteria and Table 2 presents the classification of each alternative under each criterion. In other words, we first rank the criteria (which matter more to us) and then rank the alternatives under the criteria. Finally, we combine these two matrices and obtain the results shown in Table 3, where the best alternative is the one with the biggest value.
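The final combination step can be sketched as a pair of matrix operations; the criteria weights and alternative scores below are hypothetical placeholders standing in for the actual contents of Tables 1 and 2.

```python
# Sketch of the final AHP step: combine criteria weights (normalized pairwise
# matrix, Table 1) with each alternative's score per criterion (Table 2).
import numpy as np

criteria_weights = np.array([0.4, 0.4, 0.2])   # hypothetical Table 1 result
# rows = alternatives 1..3, columns = criteria  (hypothetical Table 2 scores,
# each column normalized to sum to 1, as AHP requires)
alt_scores = np.array([
    [0.2, 0.4, 0.4],   # alt 1: high safety stock
    [0.5, 0.5, 0.4],   # alt 2: IT-coordinated stock
    [0.3, 0.1, 0.2],   # alt 3: pure pull, no stock
])

overall = alt_scores @ criteria_weights        # the Table 3 overall priorities
best = int(np.argmax(overall)) + 1
print(overall, f"-> alternative {best} is preferred")
```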
This study can be extended by adding a transportation model, in order to capture the real conditions of the supply chain even more closely. In addition, a second product could be added, with another supply chain to be combined at the echelon of the distributors. A sensitivity analysis of how many unfulfilled orders would leave our result unchanged would also be useful. Finally, it would be worthwhile to extend the second alternative with the option of direct order fulfillment (for a percentage of orders).
For more information about AHP see: Anderson, D., Sweeney, D., Williams, T., Martin, K., 2008. An Introduction to Management Science: Quantitative Approaches to Decision Making, Thomson, Canada, pp. 732-743.
ACKNOWLEDGEMENT
In this research, Professor Chrissoleon T. Papadopoulos has received a grant from THALES, a project co-
financed by the European Union (European Social Fund – ESF) and Greek national funds through the
Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework
(NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social
Fund.
REFERENCES
Dong et al. (2005), Multitiered Supply Chain Networks: Multicriteria Decision-Making Under Uncertainty, Annals of
Operations Research, Vol. 135, pp. 155–178.
Jodlbauer, H. (2008), A time-continuous analytic production model for service level, work in process, lead time and
utilization, International Journal of Production Research, Vol. 46, pp. 1723–1744.
Jung, J.H. et al. (2004), A simulation based optimization approach to supply chain management under demand
uncertainty, Computers and Chemical Engineering, Vol. 28, pp. 2087–2106.
Li, Jingshan et al. (2004), Random Demand Satisfaction in Unreliable Production–Inventory–Customer Systems, Annals
of Operations Research, Vol. 126, pp. 159–175.
van Kampen, T.J. et al. (2010), Safety stock or safety lead time: coping with unreliability in demand and supply,
International Journal of Production Research, Vol. 48, No. 24, pp. 7463–7481.
Appendix
Table 4 Notation
Ehm H., Ström A., Hansson A., Larmon H., Habib H. | From Moore’s Law to “More Supply Chain”
Hans Ehm, Anna Ström, Anna Hansson, Hannah Larmon, Haris Habib
Infineon Technologies AG, Am Campeon 1-12, 85579 Neubiberg, Germany
Abstract
With the well-known challenges of short ramp-up, long lead times and a volatile market, continuous innovation is vital in the semiconductor market. The importance of the semiconductor industry is undeniable, and thus it is vital that future innovation remains supported. Supply chain management facilitates fast response to uncertain demand and thereby allows effective operations in a truly competitive international market over a longer period of time. This paper aims to bridge the observed gap from innovation ideas to real, concrete project applications for companies such as a semiconductor company. A new supply chain project prioritization model is discussed in this paper, which uses an open innovation approach in order to incorporate external trends and internal expert opinions and to ensure that all knowledge in the semiconductor market is exploited. The process maps high-tech trends to existing roadmap projects in the company and aims to overcome industry-specific challenges. The model uses two stages, scouting and ideation, and ensures that the projects that are implemented are aligned with highly rated trends which can affect the semiconductor industry and companies’ supply chain strategies. Overall, the project prioritization model allows for the correct steering of a company in a truly successful direction. It can be said that this unique innovation handling ensures further development of the supply chain in the semiconductor industry, and will hence pave the way from Moore’s Law to “More Supply Chain”.
KEYWORDS
Semiconductor industry, Moore’s Law, Supply Chain Management, Project Management, Supply Chain Innovation,
Open innovation
1. INTRODUCTION
As the semiconductor industry is a challenging one, with characteristics such as short ramp-up, long lead times and very volatile customer demand, innovation work in areas such as supply chain management is crucial. The authors have, however, observed a literature gap regarding how a semiconductor company can handle innovative ideas, in order to turn the ideas with great potential into real, concrete projects. This paper therefore describes how a semiconductor company can work with project prioritization, in order to ensure that it is focusing on the most relevant and beneficial projects. The special challenges a semiconductor company faces are described first, in order to provide the reader with an understanding of the context for the following discussion. This is followed by descriptions of how important supply chain management is in this industry, and why it is vital to be innovative in this area. The project prioritization model presented later is an open-innovation-inspired model that incorporates market trends and lets a range of different sources determine the importance of each trend. The gap between important trends and current projects can thereafter be filled by initiating new ideas and matching them with the existing project portfolio at the company, thereby ensuring that the company is focusing on the right areas and hence steered in a successful direction.
2. BACKGROUND
In the background of this paper, the semiconductor industry will be introduced and the specific needs and
challenges of this industry will be explained. Furthermore, this section of the paper also motivates why
supply chain management is vital in the semiconductor industry, and why this is an area that needs to be
continuously developed and innovated.
The customer demand in this industry is highly volatile, partly due to the speedy introduction of new
designs, which often lead to short product lifecycles (Katircioglu & Gallego, 2011). Another challenge is that
many customers of semiconductor companies practice lean manufacturing, which requires high levels of available inventory from the suppliers (Katircioglu & Gallego, 2011). The enormous size of the supply chains in
the semiconductor industry, together with the permanent appearance of uncertainty and rapid technology
changes, makes the approaches developed for other industries insufficient in many cases (Mönch, Fowler,
& Mason, 2013).
Moreover, the semiconductor market continuously pushes the technology forward. Until now, the development has followed Moore’s Law, stated in 1965 by Gordon Moore (Grier, 2006). The law first stated that the number of elements in a typical integrated circuit increased at a rate of roughly a factor of two per year; a decade later, however, the time it would take for the number of elements to double was revised from one year to 18 months (Grier, 2006). This has held ever since: processors are faster, memories larger and prices lower every 18 months. The technology has reached the point where circuit features are measured in nanometers (The Economist, 2012), and it can hence be questioned how long this development can continue at the same pace.
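The 18-month doubling translates into simple exponential arithmetic, as the following sketch shows; the starting element count is illustrative.

```python
# Elements per circuit after `months` of Moore's-Law growth with an 18-month
# doubling period; the starting count (an early-1970s processor) is illustrative.
def elements(start, months, doubling_period_months=18):
    return start * 2 ** (months / doubling_period_months)

print(f"{elements(2_300, 0):,.0f}")        # starting point
print(f"{elements(2_300, 12 * 10):,.0f}")  # after ten years of doublings
```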
When supply chain performance is improved by better coordination and planning along the supply chain,
challenges such as the bullwhip effect can be handled and controlled (Lee, Padmanabhan, & Whang, 1997).
Therefore, supply chain management is most often a key strategic issue among the companies who
compete effectively in a truly competitive international market over a longer period of time (Sweeney &
Faulkner, 2007).
implicates tremendous costs (Chesbrough, 2007). Furthermore, even great technology innovations can no
longer assure satisfactory profit due to the shortening product lives (Chesbrough, 2007).
Sweeney and Faulkner (2007) state that the ability of companies to view SCM as strategic is becoming an
increasingly important determinant of business success. Furthermore, they emphasize the importance of
putting appropriate supply chain capabilities into place in advance of the need in a pro-active manner
(Sweeney & Faulkner, 2007). It is hence crucial to work with supply chain management in an innovative and
offensive manner. This is supported by Franks (2000), who claims that for a manufacturing organization to
stay competitive, supply chain innovation is crucial.
3. RESULTS
This section of the paper introduces how a semiconductor company can work with market trends, in order to evaluate and rate them according to their probability of occurrence and their impact on the semiconductor industry. It also describes how these trends can be matched to the existing projects at a company, in order to visualize where the focus might need to be modified. Lastly, a project prioritization model is introduced, which describes a process for prioritizing different projects, with the previously mentioned rated trends used as input.
The weighted trend rating analysis can later be used to evaluate a company’s project portfolio. The projects in a portfolio are usually at different stages of the project life cycle: idea, preparation, running, on hold or implementation. A financial breakdown of projected costs and earnings is usually provided in a project portfolio, as well as a map with a planned timeline of smaller project steps.
Each running or recently implemented project should be assigned to a specific trend. An Excel macro can then be created to analyze the rating results against the current project trends. This allows a clear visualization of the trends considered important by a wide variety of supply chain experts, and of the critical gaps amongst a company’s projects. This is shown in the corresponding figure, where the ratio of existing projects per trend corresponds directly to the size of the respective coloured bubble.
A gap can be defined as an influential trend which lacks a sufficient number of projects within the company’s supply chain. The figure visualises an example of the trends and the ratio of their coverage by projects in an organisation. When gaps are identified, scouting innovation sessions should be held to generate new project ideas.
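The gap analysis performed by such a macro can be sketched in a few lines; the trend names, ratings, project counts and the thresholds defining a "gap" below are invented for illustration.

```python
# Sketch of the gap analysis: compare each trend's weighted rating with the
# share of current projects assigned to it. All values are placeholders.
trend_rating = {"additive mfg": 8.5, "big data": 7.9,
                "nearshoring": 6.2, "3D simulation": 5.1}
projects_per_trend = {"additive mfg": 1, "big data": 6,
                      "nearshoring": 0, "3D simulation": 3}

total_projects = sum(projects_per_trend.values())
for trend, rating in sorted(trend_rating.items(), key=lambda kv: -kv[1]):
    share = projects_per_trend[trend] / total_projects
    gap = rating >= 7.0 and share < 0.15     # influential trend, few projects
    print(f"{trend:14s} rating={rating:4.1f} project share={share:5.1%} gap={gap}")
```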
The first segment of the model represents the scouting phase. The trend rating process described in the previous section serves as input for this phase. An innovation session is held where new project ideas can be generated, combined with trend presentations and ratings. Here, the participants should strive to originate novel project ideas corresponding to important trends that are not yet satisfactorily covered by the existing running projects. The scouting stage allows the company to “find the right things” and to close vital gaps with a supply of potential new projects that can be chosen from and prioritized in the next phases.
The next stage is ideation, where the right projects must be selected for further growth. This can be broken down into two segments: evaluation and selection. In evaluation, the project ideas created during scouting can be assigned to a specific trend and given a “trend label”. First, it is necessary to examine whether the new proposal is similar to an already existing project idea at the company, to counteract duplication of work. If the idea is truly novel, it can be added to the existing pool of project ideas and proceed to the “idea-liking” stage. This is achieved using a “liking” tool, which allows the supply chain community to manifest their support for an idea by clicking on, or “liking”, the corresponding idea. Understandably, the tool incorporates the seniority of the person who has “liked” the idea and their relevance to the topic. Overall, this ensures that project prioritization aligns with vital future trends and with top management and the general supply chain community in the company. A merge between the project owner and any committed “idea liker” can then occur, as they can join forces to tackle an existing problem within
the supply chain network, using the original idea generated through the innovation session. It is in the selection stage that a funnel approach becomes clearly visible, as many project ideas are filtered by trend classification, “likes” and gap-closing potential. Finally, in the last selection step, it is decided which idea will be implemented in order to close the project gap.
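The weighting idea behind the "liking" tool can be illustrated as follows; the weighting scheme (seniority times topic relevance) and the sample likes are assumptions made for this sketch, as the paper does not specify the tool's internal formula.

```python
# Sketch of a weighted "liking" tool: each like counts more when the liker is
# senior and close to the topic. Scheme and sample data are assumptions.
def like_weight(seniority, relevance):
    """seniority and relevance each graded 1 (low) to 5 (high)."""
    return seniority * relevance

idea_likes = {
    "demand-sensing pilot": [(5, 4), (2, 5), (3, 3)],   # (seniority, relevance)
    "blockchain tracking":  [(1, 2), (2, 2)],
}

ranked = sorted(idea_likes.items(),
                key=lambda kv: -sum(like_weight(s, r) for s, r in kv[1]))
for idea, likes in ranked:
    print(idea, sum(like_weight(s, r) for s, r in likes))
```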
4. DISCUSSION
A highly innovative project identification and selection method is described in this paper. This method is believed to increase the effectiveness of the projects implemented, owing to the highly comprehensive method used to identify the fields where new projects are needed. As stated, emphasis is placed on the importance of innovation within a volatile and complex industry such as the semiconductor one. The project selection method is believed to have the potential of extending Moore’s Law. Although Moore’s Law concerns the number of components per integrated circuit function, it can also be translated into a decreasing price per “piece of knowledge”, as shown in Figure 4. Moore’s Law would hence be extended through innovation within the supply chain, since improvements in this field are shown to be crucial for the future development of the semiconductor industry.
Figure 4 Visualization of the extension of Moore’s Law and More than Moore with More Supply Chain
By introducing “More Supply Chain”, new areas of overall cost-improvement possibilities are added to the already existing model of Moore’s Law. The improvements that can be achieved, e.g. with correct project selection in the field of supply chain management, have the potential of extending the decrease in price per piece of knowledge. The continuation to More Supply Chain can be achieved through the correct combination of gap-closing projects, allowing innovation to prevail through an overall improved supply chain.
As the project prioritization model in this paper facilitates innovation within a semiconductor company, it is clear that the model has many benefits for such a company. Firstly, the alignment with external market trends is of great importance for business strategy and essential for sustained success, especially in the semiconductor industry. A further benefit of the process is the generation of innovative and constructive project ideas that close the gaps around crucial trends lacking adequate coverage. The method also takes a different route by prefacing idea generation with a top-trend overview, which focuses the group members’ minds on the direction requiring the most attention and thereby heightens the company’s supply chain performance. The use of the liking tool is also very advantageous, in the sense of incorporating different perspectives on the issues faced within semiconductor organizations and the methods employed to overcome such challenges. Traditionally, projects are thought of by
top management and implemented with little review by the wider supply chain community. Adding likes to a project idea allows the company to bring a wider viewpoint to its project prioritization while closing gaps between upcoming trends and supply chain strategy.
However, the model also has some potential disadvantages. For the trends to remain relevant to both the semiconductor industry and the company’s supply chain strategy, the model requires constant upkeep, with regular trend revisions and trend rating sessions. Although this requires extra effort, it is believed that the process, if followed, will yield far greater rewards in long-term company success. Another shortcoming lies in the possibility that the project ideas generated may not fall into the required missing trend category. The company must also be careful during project implementation to ensure that it does not fall into the easy trap of reverting a once-original idea back to its usual way of operating, thereby failing to address the underlying problems.
5. CONCLUSIONS
The results in this paper show that companies can employ open innovation in a project prioritization process by combining external trends with internal ideas. Through careful alignment, they can develop in the correct direction with the market and retain a forefront position. The model facilitates the extension of Moore’s Law, since further innovation can occur within the supply chain field of a semiconductor company through a detailed project generation and prioritization process. As these projects are aligned with external trends, companies remain aligned with developments in the market and can hence retain a forefront position. It should be noted that companies can experience internal obstacles during the model’s implementation, and may therefore not reach the level of innovation originally aimed for. Nevertheless, this model for supply chain innovation is believed to truly facilitate and strengthen the global position of actors in the semiconductor industry, where supply chain management is a central and important pillar.
REFERENCES
Chesbrough, H. (2007). Business model innovation: it's not just about technology anymore. Strategy & Leadership, 35(6),
12-17.
Grier, D. (2006). The innovation curve [Moore’s law in semiconductor industry]. Computer, 39 (2), pp.8-10.
Katircioglu, K., & Gallego, G. (2011). A Practical Multi-Echelon Inventory Model with Semiconductor Manufacturing
Application. International Series in Operations Research & Management Science , 152, 133-151.
Lee, H. L., Padmanabhan, V., & Whang, S. (1997). Information Distortion in a Supply Chain: The Bullwhip Effect.
Management Science , 43 (4), 546-558.
Lee, H. L., Padmanabhan, V., & Whang, S. (1997). The Bullwhip Effect in Supply Chains. Sloan Management Review , 38
(3), pp. 93-102.
Mönch, L., Fowler, J., & Mason, S. (2013). Production Planning and Control for Semiconductor Wafer Fabrication
Facilities (Vol. 52). New York: Springer.
Sweeney, E., & Faulkner, R. (2007). Supply Chain Management: The Business Model of the 21st Century. In E. Sweeney (Ed.), Perspectives on Supply Chain Management and Logistics - Creating Competitive Organisations in the 21st Century (pp. 307-316). Dublin: Blackhall Publishers.
Dinopoulou V., Diakaki C., Papamichail I., Papageorgiou M. | Public Transport Priority Strategies: Progress and Prospects
Abstract
In the years to come, public transport will be called upon to play a significant role towards achieving the sustainable transport system objective set for the future in Europe and beyond. To this end, the quality, accessibility and reliability of its operations should be improved. In this context, the favorable treatment of public transport means within the road network may make, among other measures, a significant contribution. Such treatment can result from an appropriate design of the road network facilities and/or the employed signal control at the network junctions. To this end, several approaches have been proposed, and it is the aim of this paper to review them, focusing mainly on those attempting to provide priority via appropriate adjustment, in real time, of the junctions’ signal control.
KEYWORDS
Public transport systems; public transport priority; priority strategies
1. INTRODUCTION
The road transport network, used for the mobility of people and goods, consists of the road network and
any existing bicycle and pedestrian paths or spaces, the pedestrians, the public and private transport means
and the terminals. The continuous increase of the urban population and of the mobility needs of people and
goods, as well as of the use of the private vehicle, in combination with the fact that the road transport
system had not been designed considering such a significant increase, have resulted in substantial traffic
and environmental problems. City centers, especially, suffer most from congestion, poor air quality and
noise exposure. To confront the significant challenges and set the roadmap towards a sustainable transport
system by 2050, the European Union (EU) has released a White Paper on Transportation (EC, 2011),
according to which three main goals have been set, which can be summarized as “use less energy, use
cleaner energy and better exploit infrastructure”.
Public transport (PT) can play a significant role in the achievement of the aforementioned goals via
increased ridership, which may be enabled by improving the quality, accessibility and reliability of its
operations. In this context, the favorable treatment of PT means within the road network, which is called
Public Transport Priority (PTP), may make, among other measures, a significant contribution. PTP can be derived as a
result of an appropriate design of the road network facilities and/or the employed signal control at the
network junctions. To this end, several approaches have been proposed, and it is the aim of this paper to
review them, focusing mainly on those attempting to provide priority via appropriate adjustment, in real
time, of the junction’s signal control. Applications of PTP systems worldwide are also presented and
discussed in an effort to identify the current trends and future perspectives in the respective field. The
paper is based on the outcome of an extensive literature review, which is reported in more detail in Diakaki et al. (2013).
As far as the signal-control-based measures are concerned, the level of provided priority may vary for
different types of served PT means. Highway-rail crossings are typically assigned the highest priority that
provides the most aggressive manipulation of the signal controller (Nelson and Bullock, 2000), while
emergency vehicles, such as fire trucks, are assigned a slightly lower priority to allow a signal from a
highway-rail grade crossing to override the emergency vehicle request. Buses, trams and LRTs are generally
assigned an even lower priority, which is often only granted if specific conditions are satisfied; mainly when
the PT vehicle is behind its schedule (Nelson and Bullock, 2000).
Green extension: This method refers to the extension of green time to serve a PT vehicle
approaching the junction towards the end of the green time. It is commonly used where the
detection is relatively close to the priority junction and is subject to constraints like maximum
extension time, minimum green-time for the other stages, etc.
Stage recall: This method refers to the recall of a stage if its signal is red at the estimated arrival time of the PT vehicle. It may involve the (green) truncation of more than one stage, subject to minimum green-time constraints. Like green extension, stage recall is commonly used where the detection is relatively close to the priority junction.
Stage skipping: The previously mentioned methods do not affect the normal stage sequence.
An alternative and stronger form of priority is to omit one or more stages from the normal
stage sequence so as to allow for the service of a priority request as soon as possible.
Stage reordering: An even stronger form of priority is to modify the normal sequence, i.e. to
activate a stage, which is later in the order, before others to serve a received priority request.
Special stage: According to this method, a special stage is allocated to the movements of PT
vehicles and is introduced into the normal sequence at the first available opportunity in order
to serve a received priority request. This might mean that other stages may have to be
truncated to their minimum green times (as in stage recall) or even totally skipped (as in stage
skipping).
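To make the first two methods concrete, the toy decision rule below chooses between extending the current green and recalling the priority stage from a PT vehicle's estimated arrival time; all timings and thresholds are illustrative assumptions, not values from a deployed controller.

```python
# Toy rule combining green extension and stage recall. All timings and the
# decision thresholds are illustrative assumptions.
MAX_EXTENSION = 10          # s, cap on green extension
MIN_GREEN = 7               # s, minimum green for truncated stages

def priority_action(eta, green_end, red_until):
    """eta: estimated PT arrival (s from now); the priority stage is green
    until green_end and then red until red_until."""
    if green_end >= eta:                        # arrives within current green
        return "no action"
    if 0 < eta - green_end <= MAX_EXTENSION:    # just misses the green
        return f"green extension by {eta - green_end}s"
    if eta < red_until:                         # red at arrival: pull stage forward
        return f"stage recall (truncate others to {MIN_GREEN}s minimum green)"
    return "serve in normal sequence"

print(priority_action(eta=6,  green_end=9, red_until=60))   # no action
print(priority_action(eta=14, green_end=9, red_until=60))   # green extension
print(priority_action(eta=35, green_end=9, red_until=60))   # stage recall
```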
The aforementioned methods may be either considered explicitly or they may result as a solution of related
optimization problems. In general, the strategies employed to modify the normal signal control of junctions
in order to provide priority to PT means are classified as fixed-time and real-time strategies.
Fixed-time strategies are in fact fixed-time signal plans especially developed to favor the movements of PT vehicles, by considering factors such as longer green times for stages serving PT vehicles, reduced cycle lengths to reduce delay, stage sequences designed to serve more frequently the stages with high demand of PT vehicles, etc. The best-known example in this class of strategies is the TRANSYT PTP tool, while more recently, Genetic Algorithms (GAs) coupled with micro-simulation tools have been employed to solve appropriately formulated optimization problems and provide the necessary fixed-time plans.
The real-time strategies attempt to overcome the flexibility disadvantages of fixed settings via closed-loop operation. To this end, they require the ability to detect, in real time, PT vehicles approaching signalized junctions via, as a minimum, Selective Vehicle Detectors (SVDs), such as bus detection loops and transponders on buses. The sophistication, and resulting performance, of these strategies may be improved where more advanced systems are available, such as Automatic Vehicle Location (AVL) and Global Positioning Systems (GPS), which provide more detailed PT-vehicle related data in real time. Real-time strategies may be further classified as rule-based versus optimization-based, depending on whether their control decisions are based on a set of identified conditions or on the optimization of an appropriately defined performance index.
Figure 1 Examples of priority methods: (a) green extension; (b) stage recall; (c) stage skipping; (d) stage reordering
Rule-based strategies are triggered by the receipt of particular priority calls and respond to them according
to their underlying rules, by directly modifying the implemented signal control; this modification may be
more or less aggressive depending on the prescribed priority level. Known examples include the PTP
modules that complement known signal control strategies such as SCOOT, SCATS, BALANCE, MOVA,
TRAFCOD and TUC, as well as strategies such as PRIBUSS, BCC-RAPID and SPRUCE that have been developed
specifically for PTP purposes.
A difficulty with the rule-based strategies is that they are not able to respond adequately when multiple
requests arrive simultaneously or in short time intervals, calling for contradictory signal control
modifications. This limitation may be overcome by the employment of optimization-based strategies. The optimization-based strategies attempt to provide priority based on the optimization of some performance criterion, primarily delay (passenger delay, vehicle delay, weighted vehicle delay or a combination thereof). They use
actual observed (both private and public) vehicle arrivals as inputs to traffic models that either evaluate
several alternative timing plans to select the most favorable option, or optimize the actual timing in terms
of stage durations and stage sequences. Common optimization techniques that are employed in this respect
include linear, mixed-integer and dynamic programming; heuristic approaches have also been proposed
aiming at reducing the computational effort that is usually very high due to the typical utilization of binary
decision variables representing the green-red switching of the traffic lights. Known examples of
optimization-based strategies include PRODYN, RHODES/BUSBAND and CAPRI, UTOPIA/SPOT, MOTION,
DARVIN, etc.
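The "evaluate several alternative timing plans" variant can be sketched very simply for a two-stage junction: enumerate candidate green splits and keep the one minimizing a weighted passenger delay. The delay model and every number below are deliberately crude illustrations, not any of the named strategies.

```python
# Crude plan-evaluation sketch for a two-stage junction with a fixed cycle.
CYCLE = 60                                    # s, cycle length
car_rate = {1: 0.4, 2: 0.3}                   # vehicles/s arriving per stage
pax_per_car, pax_per_bus = 1.3, 40
bus_arrival = 25                              # s, predicted bus arrival (stage 1)

def weighted_delay(green1):
    green2 = CYCLE - green1
    # uniform-arrival approximation: total delay ~ rate * red_time**2 / 2
    d1 = car_rate[1] * green2 ** 2 / 2        # stage 1 cars wait out stage 2 red
    d2 = car_rate[2] * green1 ** 2 / 2
    bus_delay = 0 if bus_arrival < green1 else (CYCLE - bus_arrival)
    return pax_per_car * (d1 + d2) + pax_per_bus * bus_delay

best = min(range(15, 46), key=weighted_delay)  # candidate stage-1 green splits
print(best, weighted_delay(best))
```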
Beyond the above classification, PTP strategies may also be distinguished according to criteria, which
address other characteristics and lead to the following classifications:
Local vs. network-wide strategies: The local strategies operate at individual junctions, independently of all other junctions, with the aim of accelerating the PT vehicle passage at those junctions, while the network-wide strategies attempt to improve the progression of PT vehicles within a network.
Reactive vs. proactive strategies: The reactive strategies are typically local, i.e. applied at each junction separately from the others, and react to received PT presence messages from approaching links to enable an accelerated passage, without involving model predictions other than the estimated time of arrival of the PT vehicle at the specific junction. The proactive strategies, on the other hand, attempt to respond to priority requests proactively. They receive a request for service well in advance, perhaps when the PT vehicle is one or more signals upstream, thus having more time to plan for the arriving vehicle, to accommodate multiple requests for service, and to co-ordinate the vehicles’ transfer point of operation.
Unconditional vs. conditional strategies: The unconditional strategies provide priority
regardless of the status of the PT vehicle, i.e. regardless of whether the vehicle really needs to
be treated in a special way at its approach to a signal-controlled junction (e.g. the vehicle may
be in advance of its schedule, thus it does not need any priority treatment), while in conditional
strategies, the decisions are usually made based on the schedule or headway adherence of the
arriving vehicle. This assumes that the signal control system knows the operating status of the
arriving vehicle, which in turn implies that the application of conditional strategies requires
additional real-time information regarding the operating status of a detected PT vehicle.
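A conditional decision rule then reduces to a schedule- or headway-adherence test, as in the sketch below; the thresholds are illustrative assumptions, not values from a deployed system.

```python
# Conditional priority: grant it only when the detected vehicle is behind
# schedule or its headway has stretched. Thresholds are illustrative.
def grant_priority(schedule_deviation_s, headway_ratio,
                   late_threshold_s=60, headway_threshold=1.3):
    """schedule_deviation_s > 0 means the vehicle is late; headway_ratio is
    the actual headway divided by the planned headway."""
    return (schedule_deviation_s > late_threshold_s
            or headway_ratio > headway_threshold)

print(grant_priority(120, 1.0))   # two minutes late -> True
print(grant_priority(-90, 0.9))   # ahead of schedule -> False
```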
adopt facility-design-based measures in an effort to improve their PT operations, thus encouraging modal
shift and promoting the use of PT means.
The reported PTP applications in European cities, as well as in cities in the rest of the world, concern
strategies of different architectures (centralized or decentralized) employing different detection and
communication devices and systems. Despite their differences, however, the vast majority of reported
strategies are based on a local, reactive, conditional, rule-based logic, which favors the movements of PT
vehicles at a higher or lower degree, depending upon the adopted priority levels as well as the availability of
other facility-design-based measures.
The European philosophy towards PTP has been rather aggressive, with provision of higher priority levels and less concern for the potential negative impacts on the rest of the traffic. The bus seems to be the most common PT means, with the length of the bus networks ranging from a few to thousands of kilometers; cities of similar size have considerably different bus network lengths, depending on the presence of other PT means in the city (Kaparias et al, 2010). LRT and tram systems are also very common in European cities (Kaparias et al, 2010).
To improve the performance and efficiency of PT, many European cities employ PT priority measures, mainly of a facility-design-based nature, specifically exclusive bus lanes. Known PTP strategies that are used or have been tested in European cities include:
Beyond the above, several other rule-based strategies have been locally developed and applied in Austria, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Italy, Romania, Sweden and Switzerland. The case of Zurich in Switzerland, where a full PT-oriented philosophy and approach has been developed and adopted since 1970, is perhaps the most noticeable of all reported PTP cases. This philosophy, which resulted in full bus-tram priority via all available means, produced mobility and traffic conditions that enabled a significant modal shift towards PT; it has been reported that approximately 42% of trips in Zurich are made by PT means (Gardner et al, 2009).
Similarly to Europe, several PTP strategies have been reported in other cities in the rest of the world such
as:
In USA and Canada, several other PTP strategies of a rule-based nature have been developed by local or
state traffic/highway departments, with the level of deployment varying considerably from location to
location, and ranging from equipping a few junctions and a limited number of buses, to equipping entire
corridors and to system-wide deployment. PTP applications have also been reported in Japan.
5. CONCLUSIONS
As discussed in the previous sections, several different signal-control based strategies have been developed
and applied worldwide. The relevant literature offers a few examples of fixed-time priority strategies, and
numerous examples of real-time priority strategies, mainly of a rule-based nature. A similar tendency is also
observed in the practical applications of PTP systems, where the vast majority of adopted strategies are
real-time, rule-based.
Rule-based strategies are triggered by the receipt of particular priority calls and respond to them according
to their underlying rules, by directly modifying the implemented signal control. However, they are not able
to respond adequately when multiple requests arrive, simultaneously or in short time intervals, calling for
contradictory signal control modifications. This limitation may be overcome by the employment of
optimization strategies at the expense, however, of the required computational effort that is usually much
higher than that of the rule-based approaches.
For the above reasons, the development of more efficient rule-based strategies remains a prime subject of research and development, calling for novel solutions that will evidently improve public transport operations and promote their use.
ACKNOWLEDGEMENT
This research is implemented through the Operational Program “Education and Lifelong Learning” and is co-
financed by the European Union (European Social Fund) and Greek national funds. The contents of the
paper reflect the views of the authors, who are responsible for the accuracy of the data presented herein.
REFERENCES
Diakaki, C., Dinopoulou, V., Papamichail, I. and Papageorgiou, M., 2013. Public Transport Priority: A State-of-the-Art and
Practice Review. Deliverable 1 of ARCHIMEDES III National Project “Public Transport Priority in real time”, Greece.
EC, 2011. White Paper, Roadmap to a Single European Transport Area - Towards a competitive and resource efficient
transport system. European Union, Brussels, Belgium.
Gardner, K., D’Souza, C., Hounsell, N., Shrestha, B. and Bretherton, D., 2009. Review of Bus Priority at Traffic Signals
around the World. Final Report of Deliverable 1 of UITP Working Group: Interaction of buses and signals at road
crossings, Working Program Bus Committee 2007-2009 - Technical Cluster “Extra-vehicular technology”.
Kaparias, I., Zavitsas, K. and Bell, M.G., 2010. State-of-the-Art of Urban Traffic Management Policies and Technologies. Deliverable 1.2-1.3 of the CONDUITS (Coordination Of Network Descriptors for Urban Intelligent Transport Systems) 7th Framework European project, Contract no 218636.
Nelson E.J. and Bullock D., 2000. Impact of emergency vehicle preemption on signalized corridor operation.
Transportation Research Record: Journal of the Transportation Research Board, Vol. 1727, pp. 1-11.
Milenković M., Bojović N., Glišović N., Nuhodžić R. | Use of SARIMA Models to Assess Rail Passenger Flows: A Case Study of Serbian Railways
Abstract
Passenger flow forecasting is a vital tool for rail passenger operators; it can be used to fine-tune travel behaviors, reduce passenger congestion and enhance service quality. Passenger flow forecasts can also support rail system management tasks such as operation planning and station passenger crowd regulation planning. In this paper, we examine the capabilities of seasonal autoregressive integrated moving average (SARIMA) models to fit and forecast rail passenger flows. We show that, for the given sample data, a SARIMA model could be found that adequately fitted and forecasted the time series of monthly passenger flows on Serbian railways.
KEYWORDS
forecasting; railway; passenger service; SARIMA
1. INTRODUCTION
Forecasting represents an indispensable activity in transportation planning. Accurately forecasting future
demand for transport services is very difficult but is necessary if transport companies are to succeed. In
broad terms, transport forecasting can be defined as the business function in every transportation company
that attempts to predict demand and use of services. Thus, transport demand forecasting is central to the
planning and control function in every transportation company.
Transportation forecasting approaches can be generally divided into two categories: parametric and non-
parametric techniques (Smith et al. 2002; Vlahogianni et al. 2004). Parametric and non-parametric
techniques refer to the functional dependency assumed between independent variables and the dependent
variable. In the traditional parametric techniques, historical average (Smith and Demetsky 1997), smoothing
techniques (Williams et al. 1998), and autoregressive integrated moving average (ARIMA) (Hansen et al.
1999; Lee and Fambro 1999) have been applied to forecast transportation demand.
For the non-parametric techniques, several methods have been used to forecast the transportation demand
such as neural networks (Dougherty 1995; Vlahogianni et al. 2004; Tsai et al. 2009; Alekseev and Seixas
2009), non-parametric regression (Smith et al. 2002; Clark, 2003), Kalman filtering methods (Saab and
Zouein 2001; Wang and Papageorgiou 2007), and Gaussian maximum likelihood (Tang et al. 2003).
Autoregressive integrated moving average (ARIMA) models are time series models that can be used to fit
and forecast univariate data such as railway passenger flows. With ARIMA models data are assumed to be
the output of a stochastic process, generated by unknown causes, from which future values can be predicted as a linear combination of past observations and estimates of current and past random shocks to the system (Box et al. 2008; Prista et al. 2011). In this paper, we report the first detailed application of seasonal ARIMA
models (SARIMA) for fitting and forecasting railway passenger flows. We use data representing monthly passenger flows on all lines of the Serbian railway network, provided by the Statistical Office of the Republic of Serbia. At the time of our analysis, a time series of monthly passenger flows from January 2006 to December 2012 (84 months) was available. On the basis of the fitting and forecasting results, we suggest that SARIMA models should be more widely considered for railway passenger flow forecasting.
The remainder of this paper is organized as follows. Section 2 introduces ARIMA and SARIMA models and describes the Box-Jenkins methodology for their selection. In Section 3, the SARIMA modeling process is applied to derive an efficient model for assessing rail passenger demand in the case of Serbian railways.
Section 4 concludes this paper.
2. ARIMA AND SARIMA MODELS
A general multiplicative seasonal ARIMA model, denoted SARIMA$(p,d,q)\times(P,D,Q)_s$, can be written as
\phi_p(B)\,\Phi_P(B^s)\,(1-B)^d (1-B^s)^D Y_t = \theta_q(B)\,\Theta_Q(B^s)\,\varepsilon_t   (1)
where $s$ is the length of the periodicity (seasonality) and $\varepsilon_t$ is a white noise sequence;
\phi_p(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p   (2)
\Phi_P(B^s) = 1 - \Phi_1 B^s - \Phi_2 B^{2s} - \cdots - \Phi_P B^{Ps}   (3)
are the non-seasonal and seasonal autoregressive (AR) polynomials of order $p$ and $P$, respectively;
\theta_q(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q   (4)
\Theta_Q(B^s) = 1 - \Theta_1 B^s - \Theta_2 B^{2s} - \cdots - \Theta_Q B^{Qs}   (5)
are the non-seasonal and seasonal moving average (MA) polynomials of order $q$ and $Q$, respectively. $(1-B)^d$ is the non-seasonal differencing operator of order $d$, used to eliminate polynomial trends, and $(1-B^s)^D$ is the seasonal differencing operator of order $D$, used to eliminate seasonal patterns. $B$ is the backshift operator, whose effect on a time series $Y_t$ can be summarized as $B^d Y_t = Y_{t-d}$.
ARIMA and SARIMA models are usually fitted by using a sequence of three general steps collectively known
as the Box-Jenkins method: 1) identification of the model; 2) estimation of the model; and 3) a diagnostic
check of the model (Box et al. 2008; Prista et al. 2011). In the identification stage a model structure
$(p,d,q)\times(P,D,Q)_s$ is selected by comparisons of the sample ACF and PACF with theoretical ACF/PACF profiles
of AR, MA and ARMA processes. In the estimation stage, the model structure is fitted to the data and its
parameters are estimated, generally by using conditional sum of squares or maximum likelihood methods.
In the diagnostic check stage, the goodness-of-fit and assumptions for the model are evaluated and if
necessary, the Box-Jenkins procedure is repeated until a suitable model is found. This model is then used to
forecast future values (Box et al. 2008; Prista et al. 2011).
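As an illustration of the three Box-Jenkins steps, the following Python sketch fits the SARIMA structure discussed later in this paper with statsmodels; the synthetic series, the split sizes and all numbers are placeholders, not the paper's data.

```python
# A minimal sketch of the Box-Jenkins steps with statsmodels; the series
# below is synthetic and stands in for the monthly passenger counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = pd.Series(
    600 + 50 * np.sin(2 * np.pi * np.arange(84) / 12) + rng.normal(0, 20, 84),
    index=pd.date_range("2006-01", periods=84, freq="MS"),
)

z = np.log10(y)                         # variance-stabilising transform
fit_part = z.iloc[:75]                  # 75 months to fit, 9 held out

# Step 2: estimation of the identified SARIMA(0,1,1)(0,1,1)_12 structure.
model = SARIMAX(fit_part, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)

# Step 3: diagnostic check - residuals should behave like white noise.
lb = acorr_ljungbox(res.resid, lags=[12])
print(res.summary())
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))

# Forecast the hold-out period, back-transformed to passenger counts.
forecast = 10 ** res.get_forecast(steps=9).predicted_mean
```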
3. SARIMA MODELING OF RAIL PASSENGER FLOWS ON SERBIAN RAILWAYS
To test SARIMA models in forecasting the passenger flows on Serbian railways, we obtained a time series of the total monthly number of rail passengers from the SORS (Statistical Office of the Republic of Serbia, 2013). The data set covers the period from 1 January 2006 to 31 December 2012 (84 monthly values). The time series is presented in Figure 1. We used the first 75 months to fit the SARIMA models and the last 9 months as a hold-out period to evaluate forecasting performance.
Before fitting a SARIMA model, the time series must be checked for violations of the weak stationarity assumption of the model (Brockwell and Davis 2002; Box et al. 2008). In SARIMA models, trend and seasonal nonstationarities are handled directly by the model structure, so that only the nonstationarity of variance needs to be addressed before model fitting. As can be seen from Figure 1, the time series data ($x_t, t = 1, \ldots, 75$) for rail passenger traffic in Serbia exhibit a strong seasonal pattern with a slightly decreasing trend. However, variance-mean plots indicated an increase in variance with the series mean. Therefore, we used a log10 transformation to stabilize the variance of the series, and then used the log-transformed data set ($z_t, t = 1, \ldots, 75$) as input to the SARIMA analysis (Figure 2).
Figure 1: Time series of monthly rail passenger flows (in thousands) in Serbia (January 2006 to December 2012) – Raw data.
Figure 2: Time series of monthly rail passenger flows (in thousands) in Serbia (January 2006 to December 2012) – Log10 transformed data.
Figure 3: Sample autocorrelation function (ACF) and partial autocorrelation function (PACF) of the transformed rail passenger flow data. ACF/PACF plots for the log10-transformed data ($z_t$, far left), the lag-1 differenced series ($\nabla_1 z_t$), the lag-12 differenced series ($\nabla_{12} z_t$), and the lag-1 and lag-12 differenced series ($\nabla_1\nabla_{12} z_t$, far right) are displayed. Horizontal dashed lines represent the 95% confidence limits valid under the null hypothesis of a white noise error structure.
Among the statistical models, SARIMA$(0,1,1)(0,1,1)_{12}$ was selected as the best model, with the lowest normalized BIC of 6.780 and a MAPE of 3.567 (Table 1). The model explained 84.9% of the variance of the series (stationary R-squared). The model parameters were significant (P-value < 0.05). This model has the general form:
(1-B)(1-B^{12})\,z_t = (1 - \theta_1 B)(1 - \Theta_1 B^{12})\,\varepsilon_t
Table 1: Normalized Bayesian information criterion (BIC), mean absolute percentage error (MAPE) and stationary R-squared of SARIMA models.
Diagnostic checks indicated that the SARIMA model was stationary and invertible and did not have redundant parameters. The residuals were white noise (Ljung-Box Q = 7.68, P-value > 0.05), so there is no significant autocorrelation between residuals at different lag times (Figure 4). All tests were performed at a significance level of $\alpha = 0.05$.
Table 2: Forecasts of railway passenger flows in thousands (April 2012 – December 2012). Observed passenger counts ($x_l$), forecasted passenger counts ($\hat{x}_l$), monthly forecast errors ($e_l$) and monthly absolute percent errors ($APE_l$) are displayed for the simple seasonal exponential smoothing model (SSES) and the SARIMA model (SAR).
Figure 5 displays the observed values together with the SARIMA model fit and forecast values. The model gives slightly lower forecasts than the observed values, but the pattern of the model forecasts almost matches that of the observed passenger counts, except for the period May – June 2012. The RMSE during the prediction period (RMSE = 36.94) was 1.6 times the RMSE of the fitting period (RMSE = 22.74).
Seven of the eight forecasts registered negative errors, but the low ME (ME = 26.86) indicated that the underestimation was minor in global terms. The MAPE was 6.35%, reflecting the slightly lower forecasts during the period June – December 2012. The SARIMA model forecasts also registered better performance with respect to SSES, resulting in a 9.4% reduction in RMSE, a 16.3% reduction in ME, and an 8.4% reduction in MAPE.
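The accuracy measures used in this comparison can be reproduced with a few lines of Python; the observed and forecast values below are illustrative placeholders, not the actual Table 2 entries.

```python
import numpy as np

def forecast_errors(observed, forecast):
    """RMSE, mean error (ME) and MAPE, as used to compare SSES and SARIMA."""
    observed = np.asarray(observed, dtype=float)
    e = observed - np.asarray(forecast, dtype=float)   # monthly forecast errors
    rmse = float(np.sqrt(np.mean(e ** 2)))
    me = float(np.mean(e))
    mape = float(np.mean(np.abs(e) / observed) * 100)  # percent
    return rmse, me, mape

# Hypothetical hold-out comparison (values are illustrative only):
obs = [512, 498, 530, 540, 525, 505, 495, 510, 520]
sar = [500, 490, 515, 528, 512, 498, 488, 505, 512]
print(forecast_errors(obs, sar))
```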
4. CONCLUSION
Rail passenger flow forecasting can provide useful information for decision makers of rail passenger
systems. An accurate forecasting model can be applied to support transportation system management such
as operation planning, revenue planning and facility improvement. In this paper, SARIMA models were applied to fit and predict the number of passengers on Serbian railways. Different SARIMA models were tested in order to select an appropriate one. The results of the diagnostic checks indicate that the SARIMA$(0,1,1)(0,1,1)_{12}$ model, known as the "airline" model, is the most appropriate for modeling rail passenger demand on Serbian railways. Further work is needed to evaluate and apply other forecasting methods to the railway passenger time series in order to improve forecast accuracy.
ACKNOWLEDGEMENT
This paper has been performed as the result of research on project MNTR036022 “Critical infrastructure
management for sustainable development in postal, communication and railway sector of Republic of
Serbia”.
REFERENCES
Alekseev K.P.G. and Seixas J.M., 2009. A multivariate neural forecasting modeling for air transport – Preprocessed by
decomposition: A Brazilian application, Journal of Air Transport Management, Vol. 15, pp. 212–216.
Box G.E., Jenkins G.M. and Reinsel G.C., 2008. Time Series Analysis, Forecasting and Control, New Jersey: John Wiley and
Sons.
Brockwell P. and Davis R., 2002. Introduction to time series and forecasting, Springer Verlag.
Chatfield C., 1993. Calculating interval forecasts, J. Bus. Econ. Stat., Vol. 11, pp. 121–135.
Clark S., 2003. Traffic prediction using multivariate nonparametric regression, Journal of Transportation Engineering,
Vol. 129, No. 2, pp. 161–168.
Cryer J.D. and Chan K.S., 2008. Time Series Analysis: with Application in R, New York: Springer.
Dougherty M., 1995. A review of neural networks applied to transport. Transportation Research Part C, Vol. 3, No. 4, pp.
247–260.
Hansen J.V., McDonald J.B. and Nelson R.D., 1999. Time series prediction with genetic-algorithm-designed neural networks: an empirical comparison with modern statistical models, Journal of Computational Intelligence, Vol. 15, No. 3, pp. 171–183.
Lee S. and Fambro D.B., 1999. Application of subset autoregressive integrated moving average model for short-term
freeway traffic volume forecasting, Transportation Research Board 1678, pp. 179–188.
Prista N., Diawara N., Jose Costa M. and Jones C., 2011. Use of SARIMA models to assess data poor fisheries: a case study with a sciaenid fishery off Portugal, Fishery Bulletin, Vol. 109, No. 2, pp. 170-185.
Saab S.S. and Zouein P.P. 2001. Forecasting passenger load for a fixed planning horizon, Journal of Air Transport
Management, Vol. 7, pp. 361–372.
Smith B.L. and Demetsky M.J., 1997. Traffic flow forecasting: comparison of modeling approaches. Journal of
Transportation Engineering, Vol. 123, No. 4, pp. 261–266.
Smith B.L., Williams B.M. and Keith Oswald, R., 2002. Comparison of parametric and nonparametric models for traffic
flow forecasting, Transportation Research Part C, Vol. 10, No. 4, pp. 303–321.
Statistical Office of the Republic of Serbia, 2013. Transport, storage and communications, yearly statistical bulletin,
Republic of Serbia.
Suhartono, 2011. Time Series Forecasting by using Seasonal Autoregressive Integrated Moving Average: Subset,
Multiplicative or Additive Model, Journal of Mathematics and Statistics, Vol. 7, No. 1, pp. 20-27.
Tang Y.F., William H.K. and Pan L.P., 2003. Comparison of four modeling techniques for short-term AADT forecasting in
Hong Kong, Journal of Transportation Engineering, Vol. 129, No. 3, pp. 271–277.
Tsai T-H., Lee C-K and Wei C-H, 2009. Neural network based temporal feature models for short-term railway passenger
demand forecasting, Expert Systems with Applications, Vol. 36, pp. 3728–3736.
Vlahogianni E.I., Golia J.C. and Karlaftis M.G., 2004. Short-term traffic forecasting: overview of objectives and methods,
Transport Reviews, Vol. 24, No. 5, pp. 533–557.
Wang Y. and Papageorgiou M., 2007. Real-time freeway traffic state estimation based on extended Kalman filter: a case study, Transportation Science, Vol. 42, No. 2, pp. 167–181.
Kontorinaki M., Papamichail I., Papageorgiou M., Tyrinopoulos Y., Chrysoulakis J. | Overview of Nonlinear
Programming Methods Suitable for Calibration of Traffic Flow Models
Abstract
The calibration of a macroscopic traffic flow model aims at enabling the model to reproduce, as accurately as possible,
the real traffic conditions on a motorway network. Essentially, this procedure targets the best value for the parameter
vector of the model and this can be achieved using appropriate optimization algorithms. The parameter calibration
problem is formulated as a nonlinear, non-convex least-squares optimization problem, which is known to attain
multiple local minima, for which reason gradient-based solution algorithms are not considered to be an option. The
methodologies that are more appropriate for application to this problem are mainly some meta-heuristic algorithms
which use direct search approaches that allow them to avoid bad local minima. This paper presents an overview of the
most suitable nonlinear programming methods for the calibration procedure of macroscopic traffic flow models. More
specifically, the following six algorithms are described in some detail: Nelder-Mead Algorithm, Pattern Search Methods,
Cross-Entropy Method, Genetic Algorithms, Simulated Annealing and Particle-Swarm Optimization Algorithms.
Furthermore, an application example, where two well-known macroscopic traffic flow models are evaluated through
the calibration procedure, is presented in order to give a flavor of the kind of problems addressed.
KEYWORDS
Traffic flow, calibration, simulation, validation, optimization algorithms.
1. INTRODUCTION
The problem of traffic congestion on urban motorways is becoming increasingly intense. As a result, the corresponding urban areas face financial, environmental and social degradation. Suitable control measures are required to face this situation. Reducing congestion phenomena in urban networks will have a beneficial influence on the delays suffered, the environment (reduction of air and noise pollution), fuel consumption and safety conditions.
The availability of mathematical models that describe the traffic flow phenomena accurately is a major prerequisite for addressing traffic congestion. Over the last decade, a variety of potentially improved traffic models has appeared, targeting a more accurate and detailed delineation of traffic states on motorways (Hoogendoorn and Bovy, 2001). Unfortunately, to the best of the authors' knowledge, computational tools as well as corresponding works that evaluate the accuracy of the proposed models against real traffic data are sparse, and, as a result, their potential practical usefulness remains uncertain.
Section 2 presents the calibration procedure for macroscopic traffic flow models, while Section 3 reviews and analyzes suitable optimization algorithms. The general concepts characterizing each method, the basic steps and, finally, the main advantages and disadvantages are presented. Section 4 demonstrates the
calibration procedure for two well-known macroscopic traffic flow models under recurrent traffic
congestion conditions created due to saturated off-ramps. Finally, the paper is summarized in Section 5.
This work is part of a national project that aims to develop generic software for the validation and
benchmarking of traffic flow models using real data and appropriate optimization methods.
2. CALIBRATION OF MACROSCOPIC TRAFFIC FLOW MODELS
A macroscopic traffic flow model can be written in the general state-space form
x(k+1) = f[x(k), u(k), d(k), \theta]   (1)
where $k$ is the discrete time index; $x$ stands for the state vector; $u$ is the control vector; $d$ is the disturbance vector; and $\theta$ is the parameter vector. Such a model can be used for the simulation of traffic flow on motorway networks. Furthermore, this formulation is extremely useful since it permits the use of widely known methods both for the estimation and short-term prediction of the state and parameter vectors, as well as for the design of control strategies for motorway networks.
The validation procedure of a macroscopic traffic flow model for a motorway network aims to represent the real traffic conditions with sufficient accuracy. This becomes feasible through the proper specification of the model parameters. The estimation of the unknown parameters is not a trivial task, since the equations are typically strongly nonlinear in terms of the parameters and the state variables. The most common approach is to minimize the difference between the model's results and the measurements of the real process, by using a quadratic error output function. Let $y(k)$ be the measured output vector (typically consisting of the flows and average speeds at different places of the network) of a nonlinear system, given by the relationship:
y(k) = g[x(k), u(k), d(k), \theta]   (2)
The problem of estimating the parameters can be formulated as the following least-squares problem: Given the disturbance and control vectors $d(k)$, $u(k)$ (to feed the model (1)-(2)) and the real measured output $y_m(k)$ for $k = 0, 1, \ldots, K$, as well as the initial state $x(0)$, find the parameter vector $\theta$ which minimizes the cost function:
J(\theta) = \sum_{k \in S} [y(k) - y_m(k)]^T Q \, [y(k) - y_m(k)]   (3)
subject to equations (1) and (2), where $Q$ is a positive definite diagonal matrix. The procedure is illustrated in Figure 1. The set $S$ may be a subset comprising only some simulation points, since the simulation time step is usually much smaller than the measurement interval (e.g. 60 s). The model parameters are selected from a closed region, which can be determined taking into account their natural characteristics.
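A minimal sketch of this quadratic output-error cost in Python, assuming a hypothetical simulate(theta) routine that runs the model (1)-(2) and returns the outputs at the measurement instants:

```python
import numpy as np

def calibration_cost(theta, simulate, y_measured, Q):
    """Quadratic output-error cost of the least-squares calibration problem.

    simulate(theta) is assumed (hypothetical interface) to run the traffic
    flow model over the whole horizon and return the model outputs y(k) at
    the measurement instants; y_measured holds the corresponding real
    measurements and Q is a positive definite diagonal weighting matrix.
    """
    y_model = simulate(theta)              # shape: (K, n_outputs)
    residuals = y_model - y_measured
    # sum_k r(k)^T Q r(k) with diagonal Q
    return float(np.sum((residuals @ Q) * residuals))
```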
Nelder-Mead Algorithm
Description: The method uses a simplex, i.e. an $n$-dimensional geometrical shape with $n+1$ vertices. Every vertex corresponds to a potential solution $x_i$, which in turn corresponds to a function value $f(x_i)$ for every $i = 1, \ldots, n+1$. Essentially, the algorithm implements a series of transformations of the initial simplex which aim at reducing the function values at the vertices. To be more specific, the algorithm starts with a set of randomly selected potential solutions (simplex vertices) $x_1, \ldots, x_{n+1}$. At each iteration, the algorithm puts the vertices into ascending order, i.e. $f(x_1) \le f(x_2) \le \cdots \le f(x_{n+1})$, and calculates the centroid of all the vertices excluding the worst vertex $x_{n+1}$. Then, it transforms the simplex according to the procedures of reflection, expansion, contraction and shrinkage. Once one of these transformations has occurred, the algorithm returns the function value which corresponds to the best vertex, and it continues the procedure until one of the termination criteria has been satisfied.
Advantages & Disadvantages: The method often yields significant improvements of the cost function values, and satisfactory results are produced quite fast. Furthermore, it requires only one or two evaluations of the cost function at every iteration, except for the shrinkage case, which is extremely rare in practice. Nonetheless, in some cases the method may perform a large number of iterations without sufficiently improving the value of the cost function. To cope with this problem, restarting the algorithm a few times, each with a small number of iterations, may be a useful heuristic.
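A minimal sketch of running the Nelder-Mead method via SciPy, including the restarting heuristic mentioned above; the toy surrogate cost and all numbers are placeholders for a real calibration cost:

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate standing in for the calibration cost J(theta).
def cost(theta):
    return float(np.sum((theta - np.array([1.5, 0.8, 30.0])) ** 2))

theta0 = np.array([1.0, 1.0, 25.0])   # initial guess inside the feasible region
result = minimize(cost, theta0, method="Nelder-Mead",
                  options={"maxiter": 500, "xatol": 1e-4, "fatol": 1e-6})

# Restart a few times from the incumbent, as suggested in the text.
for _ in range(3):
    result = minimize(cost, result.x, method="Nelder-Mead",
                      options={"maxiter": 200})
print(result.x, result.fun)
```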
Cross-Entropy Method
Description: The algorithm generates a random population of a high number $N$ of potential solutions by using a continuous uniform distribution. Then, the algorithm evaluates these solutions by using the cost function and orders them. A number $N_e$ of the best solutions is selected (where $N_e$ is determined by the user) and the algorithm estimates the probability density function $\hat{f}$ based on the sample of these elite solutions. After this, the distribution function is re-estimated as
f_{new} = \alpha \hat{f} + (1-\alpha) f_{old}
where $\alpha$ is determined by the user and usually takes values between 0.7 and 0.9. Finally, the algorithm generates a new random population of solutions by using the new $f_{new}$. The algorithm continues this process until a termination criterion has been satisfied.
Advantages & Disadvantages: In contrast to other stochastic methods, the selection of solutions in the Cross-Entropy Method is not completely random but is guided by the best solutions of every iteration. It is also robust since, despite its stochastic nature, investigations have shown that the method leads to similar optimal solutions for different algorithm runs (Ngoduy and Maher 2011). The main disadvantage of the method is the large computational effort, due to the fact that the cost function is evaluated for a very large number of potential solutions.
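The following Python sketch illustrates the Cross-Entropy idea under the assumption of a Gaussian sampling density (the uniform distribution is used only for initialization); all parameter values are illustrative:

```python
import numpy as np

def cross_entropy_minimize(cost, lower, upper, n=200, n_elite=20,
                           alpha=0.8, iters=50, seed=0):
    """Cross-Entropy sketch: sample, keep the elite, smooth the density."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mu = rng.uniform(lower, upper)               # initial uniform guess
    sigma = (upper - lower) / 2.0
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(n, lower.size))
        pop = np.clip(pop, lower, upper)
        order = np.argsort([cost(x) for x in pop])
        elite = pop[order[:n_elite]]             # best n_elite solutions
        # Smoothed re-estimation of the sampling density parameters.
        mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
        sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
        if np.all(sigma < 1e-8):
            break
    return mu

best = cross_entropy_minimize(lambda x: np.sum((x - 2.0) ** 2),
                              lower=[-5, -5], upper=[5, 5])
```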
Genetic Algorithms
Description: The basic implementation steps at each iteration are applied to a random population of solutions. These solutions are evaluated through the cost function. The algorithm selects a part of the population members (guided by their quality) that become the parents for the generation of a new population through the processes of crossover and mutation. It evaluates these members and returns the best solution; the procedure is repeated until convergence.
Advantages & Disadvantages: An advantage of genetic algorithms is that they perform the search at many different points simultaneously, and not just at one like other algorithms. The use of probabilistic rules gives flexibility in seeking better solutions. However, genetic algorithms are not effective for problems with a large number of parameters.
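A compact, illustrative GA sketch in Python with quality-guided selection, uniform crossover and Gaussian mutation; the specific operators and parameter values are one possible choice, not the paper's:

```python
import numpy as np

def genetic_minimize(cost, lower, upper, pop_size=60, gens=100,
                     p_mut=0.1, seed=0):
    """GA sketch: rank selection, uniform crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))
    for _ in range(gens):
        fitness = np.array([cost(x) for x in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]        # quality-guided selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(lower.size) < 0.5      # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(lower.size) < p_mut  # Gaussian mutation
            child = np.where(mutate,
                             child + rng.normal(0, 0.1 * (upper - lower)),
                             child)
            children.append(np.clip(child, lower, upper))
        pop = np.vstack([parents, children])
    return pop[np.argmin([cost(x) for x in pop])]
```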
Simulated Annealing
Description: The algorithm starts with a random solution $x$ and determines the initial (maximum) value of the temperature $T$. At each iteration, the algorithm calculates a random neighboring solution $x'$ and, by using the Metropolis criterion (Metropolis et al. 1953), decides whether this solution should be accepted or not. Specifically, the algorithm accepts the solution with a probability equal to 1 if $\Delta f = f(x') - f(x) \le 0$, and with a probability equal to $\exp(-\Delta f / T)$ if $\Delta f > 0$. The value of the parameter $T$ is gradually reduced based on a prespecified cooling function which depends on the iteration number.
306
2nd International Symposium and 24th National Conference on Operational Research
ISBN: 978-618-80361-1-6
Kontorinaki M., Papamichail I., Papageorgiou M., Tyrinopoulos Y., Chrysoulakis J. | Overview of Nonlinear
Programming Methods Suitable for Calibration of Traffic Flow Models
Advantages & Disadvantages: The fact that the algorithm accepts points that reduce the value of the objective function, but also accepts, with some probability, points that increase it, is an essential feature which helps it avoid being trapped in bad local minima. On the other hand, in order to run satisfactorily, the algorithm calls for the specification of appropriate parameters that depend on the problem under consideration. For example, the maximum temperature, the selection of neighboring solutions and the way of reducing the temperature significantly affect the effectiveness of the algorithm and may differ for each application.
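A minimal Python sketch of simulated annealing with the Metropolis acceptance rule and a geometric cooling schedule (one possible choice among the schedules discussed above):

```python
import numpy as np

def simulated_annealing(cost, x0, t_max=1.0, t_min=1e-4,
                        cooling=0.95, steps=100, seed=0):
    """SA sketch with the Metropolis acceptance criterion."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = cost(x)
    best, fbest = x.copy(), fx
    t = t_max
    while t > t_min:
        for _ in range(steps):
            x_new = x + rng.normal(0, t, size=x.size)  # neighbor scale ~ temperature
            f_new = cost(x_new)
            delta = f_new - fx
            # Accept with probability 1 if improving, exp(-delta/t) otherwise.
            if delta <= 0 or rng.random() < np.exp(-delta / t):
                x, fx = x_new, f_new
                if fx < fbest:
                    best, fbest = x.copy(), fx
        t *= cooling                                   # geometric cooling schedule
    return best, fbest
```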
Particle Swarm Optimization
Description: The algorithm considers a population of potential solutions called particles. These particles move in the search space under some rules. The motion of every particle is guided by the best position it has individually found and by the best position found by the whole swarm. In the first step of the algorithm, the particles are distributed at random positions and with random velocities in the search space. Every particle $i$ is characterized by its position $x_i$ and its velocity $v_i$. The algorithm calculates the best individual position $p_i$ and the best position in the swarm $g$ based on the cost function $f$. For every particle $i$, it calculates the particle's velocity as $v_i \leftarrow \omega v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i)$, with a random selection of the values $r_1, r_2 \in [0,1]$, and it updates the position as $x_i \leftarrow x_i + v_i$ ($\omega$, $c_1$, $c_2$ are constant parameters). If $f(x_i) < f(p_i)$, then it updates the best known position of the particle as $p_i = x_i$, while if $f(x_i) < f(g)$, then it updates the best known position of the swarm as $g = x_i$.
Advantages & Disadvantages: Particle Swarm Optimization is a simple method that can be applied to many types of problems. It does not require assumptions about the form of the objective function and can search very large spaces of possible solutions. Also, it converges relatively quickly to a minimum, as the calculations performed for the creation of new solutions are quite simple. The determination of the algorithm's parameters and of the swarm's size has to be suitable for each application; otherwise the algorithm may fail.
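A minimal Python sketch of the velocity and position updates described above; the inertia and acceleration constants are common illustrative values, not prescribed ones:

```python
import numpy as np

def pso_minimize(cost, lower, upper, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO sketch following the velocity/position update rules above."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
    v = rng.uniform(-1, 1, size=(n_particles, dim))          # velocities
    p = x.copy()                                             # personal bests
    fp = np.array([cost(xi) for xi in x])
    g = p[np.argmin(fp)]                                     # swarm best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        fx = np.array([cost(xi) for xi in x])
        improved = fx < fp
        p[improved], fp[improved] = x[improved], fx[improved]
        g = p[np.argmin(fp)]
    return g, float(fp.min())
```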
4. APPLICATION EXAMPLE
This section presents a calibration example for the two most popular macroscopic traffic flow models, the
first-order macroscopic traffic flow model CTM (Daganzo 1995a, 1995b) and the second-order macroscopic
traffic flow model METANET (Messmer and Papageorgiou 1990). The Nelder-Mead algorithm is utilized for
the solution of the calibration problem. The real data utilized are from a short stretch of the Attiki Odos
motorway in Athens, Greece. The traffic data analysis showed that, within this particular motorway stretch,
recurrent traffic congestion is formed during the morning peak hours (8 - 10 a.m.), and originates from a
saturated off-ramp area (29 km).
The measurable model output and the corresponding real traffic data include only the speed values at
selected network locations. Figure 2 illustrates, from left to right, the space-time diagram of real speed
measurements, the model estimation of speed using the optimal parameter values for CTM and the model
estimation of speed using the optimal parameter values for METANET. It is observed that both models
reproduce the real traffic conditions with sufficient accuracy, creating congestion at the right time and
place and for the right duration and extent, with METANET reflecting more realistically the vehicle
acceleration downstream of the congestion creation area, since this model acknowledges the limited
acceleration ability of vehicles. For a more detailed discussion of the models as well as a description of the
network used and the validation procedure see Spiliopoulou et al. (2013).
5. CONCLUSIONS
The work reported in this paper is part of a national project aiming to develop software for the validation
and benchmarking of macroscopic traffic flow models using real data and appropriate optimization
methods for the solution of the parameter calibration problem. A detailed overview of the most suitable nonlinear programming methods was presented, together with a realistic application example.
ACKNOWLEDGEMENT
This research was co-financed by the European Union (European Social Fund - ESF) and by national funds
through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference
Framework (NSRF) - Research Funded Project: ARCHIMEDES III. Investing in society’s knowledge through the
European Social Fund.
REFERENCES
Černý, V. (1985). Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm.
Journal of optimization theory and applications, 45(1), 41-51.
Daganzo, C. F. (1995a). A Finite Difference Approximation of the Kinematic Wave Model of Traffic Flow. Transportation
Research Part Β, Vol. 29, No. 4, pp. 261-276.
Daganzo, C. F. (1995b). The Cell Transmission Model, Part II: Network Traffic. Transportation Research Part Β, Vol. 29,
No. 2, pp. 79-93.
Holland J. H. (1995). Hidden Order: How Adaptation Builds Complexity. Addison-Wesley, Redwood City, CA.
Hoogendoorn, S.P., and Bovy, P.H. (2001). State-of-the-art of vehicular traffic flow modelling. Proceedings of the
Institution of Mechanical Engineers, 215(4), 283-303.
Kennedy, J., and Eberhart, R. (1995). Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948.
Kirkpatrick, S., Gelatt, C.D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671-680.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations
by fast computing machines. The journal of chemical physics, 21, 1087.
Messmer, A., and M. Papageorgiou. METANET: A macroscopic simulation program for motorway networks. Traffic
Engineering & Control, Vol. 31, No. 8-9, 1990, pp.466-470.
Nelder, J. A., and Mead, R. (1965). A simplex method for function minimization. The computer journal, 7(4), 308-313.
Ngoduy, D., and Maher, M. (2011). Cross Entropy Method for a Deterministic Optimal Signalization in an Urban
Network. In Transportation Research Board 90th Annual Meeting (No. 11-1946).
Rubinstein, R. Y., and Kroese, D. P. (2004). The cross-entropy method: a unified approach to combinatorial optimization,
Monte-Carlo simulation and machine learning. Springer.
Spiliopoulou A., Kontorinaki M., Papageorgiou M. and P. Kopelias (2013). Macroscopic traffic flow model validation at
congested freeway off-ramp areas. Transportation Research Part C. Submitted.
Konstantzos G., Saharidis G., Loizidou M., Kolomvos G., Tsoukala V., Bourousian M. | Development of an
Objective Function for the Minimisation of Greenhouse Gas (ghg) emissions From in Port
Truck Operations
*National Technical University of Athens, Department of Chemical Engineering, Unit of Environmental Science and
Technology - 9 Iroon Polytechniou St, Zographou Campus, PC 157 73, Athens, Greece.
Abstract
The main objective of this work is to present the methodology and preliminary results of ongoing work regarding the development of an objective function for the quantification of greenhouse gas (GHG) emissions (expressed as CO2 equivalent emissions) produced by heavy-duty vehicles (HDV) during container transport in ports.
Firstly, a critical review of emission calculation models was performed, and their capabilities and suitability for the estimation of CO2 equivalent production from the operations during the transport of containers to, from, and within marine terminals were assessed. Based on this analysis, COPERT (www.emisia.com/copert) was chosen to be used as a basis for the development of the objective function for the minimisation of GHG emissions. COPERT was selected for several reasons, but mainly because its methodology balances the need for detailed emission calculations on the one hand and the use of few input data on the other. The next step was to analyse COPERT's equations in depth in order to assess and evaluate their limitations. In summary, the emission calculations take into account the mean speed, the type of the engine, the weight and the mileage of the truck, the load, the road gradient, the type of fuel and the travelled kilometers; however, these equations do not take into consideration various attributes that are frequently observed during port operations, such as extensive stop-and-go traffic, idling etc. The following step (in progress) is to fully evaluate and address those limitations by introducing new relevant elements and factors, covering all possible elements contributing to GHG increase.
Future steps include the modification of COPERT's equations, the finalisation of the objective function, as well as its further use for the GHG emissions minimisation problem in port terminals, by considering different appointment systems and other operations management scenarios.
KEYWORDS
Greenhouse Gas (GHG) emissions; CO2 equivalent; emissions minimisation; port terminals; terminal operations;
drayage operations
1. INTRODUCTION
Within a seaport terminal, the main sources of emissions include (i) building use and maintenance, (ii)
ocean-going vessels & harbor crafts, (iii) cargo handling equipment and (iv) heavy-duty vehicles (HDV)
and/or rail locomotives used for the transportation of the containers.
Regarding Greenhouse Gas (GHG) emissions, in the case of the Port of Los Angeles, in 2010 863,801 short tons of CO2 equivalent (CO2 eq.) were emitted by all the aforementioned port operations (excluding building use); approximately 44% of these GHG emissions (378,955 short tons CO2 eq.) were emitted by HDV (Chen, et al., 2013), making container transportation by HDV one of the most polluting elements of port operations.
The main objective of this work is to present the methodology and preliminary results of ongoing work regarding the development of an objective function for the quantification of GHG emissions (expressed as CO2 eq. emissions) produced by HDV during container transport in ports. The function will be based on a holistic approach for the estimation of GHG emissions, considering all pollutants contributing to their increase, namely carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). Furthermore, the model shall reflect European HDV fleet characteristics, e.g. European emission standards, as they are defined in a series of stages reflecting the progressive introduction of increasingly stringent standards. The stages are typically referred to as Conventional, Euro I, Euro II, Euro III, Euro IV and Euro V for HDV standards.
2. METHODOLOGY
The methodological approach involves the following five main steps:
Step 1: Identification of the basic model.
Step 2: Analysis of the selected model’s methodology and equations.
Step 3: Introduction of new elements and factors.
Step 4: Modification of the selected model’s equation and development of a new model.
Step 5: Use of that objective function for the GHG emissions minimisation problem in port
terminals
In brief, the first step of the methodological approach was a critical review of emission calculation models, in order to identify one to be used as a basis for modelling the fleet in port operation (Step 1). The next step was to analyse in depth the equations of the selected model and identify any potential limitations (Step 2). The following step is to fully evaluate and address those limitations by introducing new relevant elements and factors, covering all possible factors contributing to GHG increase (Step 3 - ongoing). Future steps include the modification of COPERT's equations and the finalisation of the objective function (Step 4), as well as its further use for the GHG emissions minimisation problem in port terminals, by considering different appointment systems and other operations management scenarios (Step 5).
As previously stated, this work is still in progress. Next, we present the preliminary results regarding the first 3 steps of the methodology.
COPERT was selected for several reasons, but mainly because its methodology balances the need for detailed emission calculations on the one hand and the use of few input data on the other. Furthermore, COPERT 4 is continually updated and has been selected by the EEA as a basis for the EMEP/EEA air pollutant emission inventory guidebook (http://www.eea.europa.eu/publications/emep-eea-emission-inventory-guidebook-2009). On the other hand, COPERT does not include parameters such as mean positive acceleration or speed variation in the estimation of emission factors. Another possible limitation is that there is no good
information for upcoming vehicle emission standards, such as the new vehicle technologies at Euro 5 for passenger cars and Euro V, VI for HDV. The emission factors now included in COPERT for these technologies are based on emission reductions over Euro IV, derived from the differences in the emission limits between the two technologies. However, this is only a best-guess approach, as the electronic control of recent vehicle technologies makes it difficult to estimate the actual emission performance without actually testing a specific technology.
CO2 emissions
CO2 emissions are estimated based on fuel consumption. More specifically, the mass of CO2 emitted by vehicles of technology k combusting fuel m can be calculated as:
E_{CO2,k,m} = \frac{44.011 \cdot FC_{k,m}}{12.011 + 1.008 \cdot r_{H:C,m} + 16.000 \cdot r_{O:C,m}}   (1)
where,
E_{CO2,k,m}: end-of-pipe CO2 emissions of a vehicle of technology k combusting fuel m for the time period considered (either in mass, mass per kilometer or mass per time);
FC_{k,m}: total fuel consumption of a vehicle of technology k combusting fuel m for the time period considered (either in mass, mass per kilometer or mass per time);
r_{H:C,m}, r_{O:C,m}: hydrogen:carbon and oxygen:carbon ratios of fuel m.
The hydrogen:carbon and oxygen:carbon ratios for different fuel types can be sourced from the literature (Ntziachristos et al., 2012). Regarding fuel consumption, COPERT offers speed-dependent equations for conventional vehicles as well as for the Euro I to Euro V emission standards. These equations cover various combinations of road gradient (-6% to 6%) and load factors (0%, 50%, 100%). COPERT suggests the emission factors be applied within the speed range 6 to 86 km/h (for HDV). Furthermore, distinct functions are provided for Euro V vehicles only, depending on their emission control concept, i.e. exhaust gas recirculation (EGR) or selective catalytic reduction (SCR).
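Equation (1) translates directly into a small Python helper; the diesel H:C and O:C ratios used in the example call are rough illustrative values of the kind tabulated in Ntziachristos et al. (2012):

```python
def co2_from_fuel(fc, r_hc, r_oc):
    """CO2 emissions from fuel consumption (equation (1)).

    fc   : fuel consumption (same basis as the result: mass, mass/km or mass/h)
    r_hc : hydrogen:carbon ratio of the fuel
    r_oc : oxygen:carbon ratio of the fuel
    """
    return 44.011 * fc / (12.011 + 1.008 * r_hc + 16.000 * r_oc)

# Illustrative only: approximate diesel ratios (assumed values, not COPERT's).
print(co2_from_fuel(fc=250.0, r_hc=1.86, r_oc=0.0))  # g CO2/km for 250 g fuel/km
```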
Elevated N2O emissions have been measured in the exhaust of SCR-equipped vehicles (Suzuki, et al., 2008). Based on that, we introduce a 20% increase in total CO2 eq. emissions in the case of HDV with SCR and mileage greater than 200,000 km.
Basic assumptions and limitations
COPERT's methodology is based on various projects and mainly on ARTEMIS (Assessment and Reliability of Transport Emission Models and Inventory Systems). In total, emission tests from 102 HDV were used in ARTEMIS as model input data, representing the most extensive database in Europe. The scientific methods used to physically verify the model calculations were remote sensing studies, tunnel studies, and on-board and laboratory measurements, and most of the engines measured were derived from HDV in use for two months up to 2 years with regular service intervals (Rexeis, et al., 2005). Furthermore, on-board measurements have been performed both in real traffic situations and on an isolated test track. On that basis, the model does not explicitly incorporate real-world conditions (use of retrofitted devices to save fuel and use of air-conditioning (A/C)) or congestion in the modeling process. Thus, in order to get more accurate emission predictions and to achieve correct application in particular situations, it is generally recommended to improve the model by including relative correction factors for the use of retrofitted devices and A/C, as well as a congestion algorithm. Finally, an uncertainty analysis conducted by Kouridis et al. (2009) showed that the most uncertain emission calculations are those for CH4 and N2O. However, despite the relatively larger uncertainty in these two emissions, the uncertainty in total greenhouse gas emissions is dominated by CO2 emissions. Therefore, improving the emission factors of N2O and CH4 would not offer a substantially improved calculation of total GHG emissions. On that basis, corrections will only be applied to the CO2 emission calculations.
In order to address these limitations, we should introduce three distinct operating modes, namely cruise, idling and acceleration/deceleration, and new equations and/or correction factors to cover all modes.
E(t) = \max[E_0,\; f_1 + f_2 v(t) + f_3 v(t)^2 + f_4 a(t) + f_5 a(t)^2 + f_6 v(t) a(t)]   (2)
where
v(t): instantaneous velocity (m/s)
a(t): instantaneous acceleration (m/s^2)
f_i and E_0: constants determined from the regression analysis.
In their paper, Panis et al. (2006) published separate values for each f_i, depending on the pollutant and the type of vehicle (i.e. petrol car, diesel car, HDV and bus).
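A direct Python transcription of equation (2); the coefficient values in the example call are hypothetical, since the real f_i tables are published in Panis et al. (2006):

```python
def instantaneous_emission(v, a, f, e0=0.0):
    """Instantaneous emission rate E(t) of equation (2).

    v  : instantaneous velocity (m/s)
    a  : instantaneous acceleration (m/s^2)
    f  : (f1, ..., f6) regression constants for a pollutant/vehicle type
    e0 : lower bound E0 on the emission rate
    """
    f1, f2, f3, f4, f5, f6 = f
    return max(e0, f1 + f2 * v + f3 * v**2 + f4 * a + f5 * a**2 + f6 * v * a)

# Hypothetical coefficients, for illustration only.
print(instantaneous_emission(v=12.0, a=0.5,
                             f=(0.6, 0.02, 0.001, 0.3, 0.1, 0.05)))
```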
Idling
Idling of the engine has a major impact on its fuel consumption and exhaust emissions. More specifically, CO2 emissions during idling can be as high as 16,500 g/h (Rahman, et al., 2013). CO2 emissions during idle time will be based on equation (1), by introducing a specific fuel consumption rate (FCR) while idling (mass per time). A number of studies have been conducted to quantify idle emissions and fuel consumption from diesel vehicles, and especially from long-haul trucks (e.g. Brodrick, et al., (2002), Lim, (2002), Storey, et al., (2003), Clark, et al., (2005), Zietsman, et al., (2005), Khan, et al., (2006), Gaines, et al., (2008)). In our case we will use the FCR measured by Lim (2002), as presented in Table 1, because these measurements do not involve the operation of accessories (e.g. A/C, cabin lights etc).
Several types of technologies exist that can effectively reduce long-duration idling and hence improve the FCR. The most widely used, as well as their expected effect on fuel consumption, are presented in Table 2.
Table 2 Expected fuel consumption reductions during idle from various technologies/devices
2.3.2. Fuel Efficiency and Emissions Reduction Technologies and A/C Use
HDV can be retrofitted with technologies to reduce emissions and save fuel. Table 3 summarises the most important of them (in terms of the impact that each one can have on fuel economy), as well as the expected effect of their application.
The use of A/C plays a significant role in the FC and emission rates of a vehicle. However, few studies focus on the effect of A/C operation in HDV. USEPA (2010a) provides a range of air conditioning exhaust correction factors. However, these factors were derived from processing data from
LDVs, and no test data were available for other vehicle types (i.e., motorcycles, heavy trucks, etc). A/C factors for LDV range from 1.365 (in idling mode) to 1.127 when cruising at speeds above 80 km/h (USEPA, 2010a). Work regarding the effect of A/C use on the FC and emission rates of HDV is still in progress.
3. FUTURE WORK
As previously stated, apart from the analysis on the effect of A/C use on the FC and emission rates of HDV,
future work includes the modification of COPERT’s equations, the finalisation of the objective function, as
well as its further use for the GHG emissions minimisation problem in port terminals, by considering
different appointment systems and other operations management scenarios.
ACKNOWLEDGEMENT
This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national
funds through the Operational Program “Education and Lifelong Learning” of the National Strategic
Reference Framework (NSRF) – Research Funding Program: THALES - Investing in knowledge society
through the European Social Fund.
REFERENCES
1. Brodrick, C-J, Dwyer, HA, Farshchi, M, Harris, DB, King, FG., 2002. Effects of engine speed and accessory load on idling emissions from heavy-duty diesel truck engines. J Air Waste Manage Assoc, Vol. 52, pp. 1026–31.
2. Chen, Gang, Govindan, Kannan and Golias, Mihalis M., 2013. Reducing truck emissions at container terminals in a low
carbon economy: Proposal of a queueing-based bi-objective model for optimizing truck arrival pattern. Transportation
Research Part E, 2013, Vol. 55, 3–22.
3. Clark, N, Khan, A, Thompson, G., Wayne, W., Gautam, M. and Lyons, D., 2005. Idle Emissions from Heavy-Duty Diesel
Vehicles. s.l. : Center for Alternative Fuels, Engines, and Emissions (CAFEE).
4. Gaines, Linda and Hartman, Christie-Joy, 2008. Energy Use and Emissions Comparison of Idling Reduction Options for
Heavy-Duty Diesel Trucks. 88th Annual Meeting of the Transportation Research Board, Washington, D.C. : Center for
Transportation Research, Argonne National Laboratory, 2008. Paper No. 09-3395.
5. Khan, AS, et al., 2006. Idle emissions from heavy-duty diesel vehicles: review and recent data. J Air Waste Manage Assoc, Vol. 56, pp. 1404–19.
6. Kouridis, C, et al., 2009. Uncertainty Estimates and Guidance for Road Transport Emission Calculations. JRC. EUR 24296
EN - 2010, 2009.
7. Lim, Han, 2002. Study of Exhaust Emissions from Idling Heavy-Duty Diesel Trucks and Commercially Available Idle-
Reducing Devices. s.l. : US Environmental Protection Agency, 2002.
8. Ntziachristos L. et al, 2012. Exhaust emissions from road transport. s.l. : EMEP/EEA emission inventory guidebook 2009,
updated May 2012, 2012.
9. Panis, L.I., Broekx, S. and Liu, R., 2006. Modelling instantaneous traffic emission and the influence of traffic speed limits.
Science of the Total Environment. 2006, Vol. 371, pp. 270-285.
10. Rahman, A.S.M., et al., 2013. Impact of idling on fuel consumption and exhaust emissions and available idle-reduction
technologies for diesel vehicles – A review. Energy Conversion and Management, 2013, Vol. 74, 171–182.
11. Rexeis, M. et al., 2005. Heavy duty vehicle emissions -Final Report for ARTEMIS WP 400. ARTEMIS, 2005, TUG Report.
12. Storey JME, et al., 2003. Particulate matter and aldehyde emissions from idling heavy-duty diesel trucks. Society of Automotive Engineers paper 2003-01-0289.
13. Suzuki, H., Ishii, H., Sakai, K. and Fujimori, K., 2008. Regulated and Unregulated Emission Components Characteristics of
Urea SCR Vehicles. 2008. JSAE Proceedings, Vol. 39 No. 6. November.
14. USEPA, 2010a. MOVES 2010 Highway Vehicle Temperature, Humidity, Air Conditioning, and Inspection and Maintenance
Adjustments . s.l. : EPA-420-R-10-027, 2010.
15. USEPA, 2010b. SMARTWAY - Designing and Implementing a Freight Sustainability Program:Tools, Best Practices, and
Lessons Learned. s.l. : US Environmental Protection Agency, 2010.
16. Zietsman, J, Villa, JC, Forrest, TL and Storey, JM., 2005. Mexican truck idling emissions at the El Paso-Ciudad Juarez
Border Location. s.l. : Southwest University Transportation Center, Texas Transportation Institute, Texas A & M
University, 2005.
Androulaki S., Psarras J., Angelopoulos D. | Multicriteria Decision Support To Evaluate Potential Long-
Term Natural Gas Supply Alternatives: The Case Of Greece
Abstract
This paper proposes a multicriteria model to evaluate long-term natural gas supply alternatives for Greece. A model is
proposed that integrates the multicriteria additive value system in combination with the UTA II disaggregation method,
aiming to support the national energy policy makers to devise favorable strategies, concerning both long-term national
natural gas supplies and infrastructure development; the model proposed captures the multi-dimensional nature of the
problem, as it incorporates the three main points of view of the national energy policy makers: Economics of Supply,
Security of Supply and Cooperativity between Countries. The alternatives’ set has been determined after exhaustive
investigation of all possible existing and future routes, taking into consideration all possible natural gas infrastructure
development projects around Greece. The originality of the current work lies in the fact that it proposes a
methodological and comprehensive way to address national energy planning issues in the sector of natural gas supplies.
The produced ranking shows that noticeable alternatives of gas supply do exist for the case of Greece both in terms of
LNG producing countries and in terms of potential future infrastructure projects that could prove beneficial to Greece.
KEYWORDS
Multicriteria Decision Support, Natural Gas Supply, Energy Policy Decision Making, Greece
1. INTRODUCTION
The present work aims to resolve a real problem of Greece, relevant to the design of an energy strategy in the sector of natural gas supply. The problem is significantly complicated, though very interesting, as it involves the evaluation not only of the potential reachable supply sources but also of future infrastructure projects. The geographical position of Greece, located close to many of the major natural gas producers of the world, can provide a significant advantage to the country, in terms of security of supply, if properly exploited in the framework of the national energy policy (Stournaras et. al., 2011).
A) Mapping of the top hundred richest countries in terms of natural gas reserves (R); for these countries
exports (E), production (P) and consumption (C) data have also been recorded.
B) Based on recorded data from step A, calculation of:
a. Reserves-to-Production ratio (R/P);
b. Production-Consumption (P-C).
C) Designation of countries characterized by:
a. R/P > 20;
b. P-C ≥ 0
based on data from step A and on calculations from step B.
Research on the natural gas transportation infrastructure was then carried out for each alternative country shortlisted in the previous step; within this step, existing infrastructure has been taken into consideration, in terms of natural gas pipelines (both onshore and offshore) and liquefaction and regasification facilities, as well as projects under construction and planned. Countries for which the research did not reveal a possible route were excluded.
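The screening of steps B and C amounts to two derived columns and a filter; a sketch with pandas on hypothetical country records:

```python
import pandas as pd

# Hypothetical country records: reserves R, production P, consumption C
# (the real data sources are those described in step A).
data = pd.DataFrame({
    "country": ["A", "B", "C"],
    "R": [3000.0, 150.0, 900.0],   # reserves
    "P": [100.0, 20.0, 30.0],      # production
    "C": [60.0, 25.0, 10.0],       # consumption
})

data["R/P"] = data["R"] / data["P"]          # step B.a
data["P-C"] = data["P"] - data["C"]          # step B.b
# Step C: keep countries with R/P > 20 and P-C >= 0.
shortlist = data[(data["R/P"] > 20) & (data["P-C"] >= 0)]
print(shortlist["country"].tolist())
```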
Table II presents the criteria under each point of view, as well as their scales and sources.
In Figure I the multicriteria methodology flowchart is presented, while Figure II shows the flowchart of actions to verify the existence of a feasible weighting vector.
Figure I: Multicriteria methodology flowchart. Figure II: Flowchart of actions to verify the existence of a feasible weighting vector.
The alternatives are evaluated by means of an additive value model of the form
U(g) = \sum_{i=1}^{n} w_i u_i(g_i)
where $g = (g_1, g_2, \ldots, g_n)$ is the performance vector of an alternative supply source through a specific corridor on the $n$ criteria; $g_{i*}$ and $g_i^*$ are the least and most preferable levels of criterion $g_i$, respectively; $u_i(g_i)$ are the monotonic marginal value functions of the performances $g_i$; and $w_i$ is the relative weight of the i-th function $u_i$. Thus, for a given alternative $a$, $g(a)$ and $U[g(a)]$ represent the multicriteria vector of performances and the global value of the alternative, respectively.
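A minimal sketch of evaluating this additive value model in Python, with hypothetical piecewise-linear marginal value functions and weights:

```python
import numpy as np

def global_value(performances, weights, marginal_values):
    """Global value U(g) = sum_i w_i * u_i(g_i) of the additive value model.

    marginal_values is a list of piecewise-linear marginal value functions,
    each mapping a criterion performance to [0, 1]; weights sum to 1.
    """
    return float(sum(w * u(g) for g, w, u
                     in zip(performances, weights, marginal_values)))

# Hypothetical three-criterion example (one function per point of view):
u_econ = lambda g: np.interp(g, [0, 50, 100], [0.0, 0.7, 1.0])
u_secu = lambda g: np.interp(g, [0, 5, 10], [0.0, 0.4, 1.0])
u_coop = lambda g: np.interp(g, [0, 1], [0.0, 1.0])

U = global_value([80, 7, 0.6], [0.4, 0.4, 0.2], [u_econ, u_secu, u_coop])
print(U)  # global value of one alternative corridor
```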
Because of the objective difficulty of convincing DMs to externalize tradeoffs between heterogeneous criteria, analysts usually prefer to infer the DM's additive value function from global preference structures, by applying disaggregation or ordinal regression methods (Jacquet-Lagrèze and Siskos, 1982, 2001; Greco et al., 2008, 2010).
In order to take into account the incomplete determination of the inter-criteria model parameters and to obtain a robust evaluation of the alternative corridors, the extreme ranking analysis algorithm (Kadzinski et. al., 2012) has been applied; the algorithm examines each alternative individually and estimates the best and worst rank it can achieve, exploiting mathematical programming techniques. Apart from the above, within the framework of the current work, the stability of the criteria weights has also been studied, with the use of the Average Stability Index (ASI) introduced in Grigoroudis and Siskos (2002) and in Grigoroudis and Siskos (2003).
The marginal value functions of criteria are assessed after a dialogue guided by the analyst, in cooperation
with the expert.
2.3. Results
The results of the ranking, as well as of the extreme ranking analysis, are summarized in Table III.
Table III: Marginal values, global scoring and ranking order of each alternative corridor.
3. CONCLUSIONS
A multicriteria model has been constructed in order to evaluate potential future natural gas supply sources and routes for the case of Greece, a net natural gas importer that places security of supply high among its national energy policy priorities. The DM involved was an executive of the national natural gas energy policy, who contributed his experience to the creation of a coherent model that includes all axes of importance for the national natural gas supply policy.
The ranking produced shows that, focusing on the three preference axes indicated by a national gas supply expert of the Greek natural gas supply company (security of supplies, economy of supplies and cooperativity with supplier countries), significant opportunities do exist for the national energy policy concerning natural gas supplies. These opportunities concern both existing routes and potential future routes.
The Greek national energy policy makers can benefit from this research both in identifying potential suppliers to approach and negotiate with for long-term contracts and in assessing future investments, participation in consortia, and the signing of agreements and Memorandums of Understanding (MoUs).
REFERENCES
Stournaras, Y., Danchev, S., Paratsiokas, N., 2011. Greece As Europe’s Energy Highway: Natural Gas Pipeline Projects
Going Through Greece. Foundation For Economic & Industrial Research.
Energy Information Administration, 2012. "How much does it cost to produce natural gas?" (available at: http://www.eia.gov/tools/faqs/faq.cfm?id=367&t=8)
Durr, C., Coyle, D., Hill, D., Smith, S., 2005. "LNG Technology for the Commercially Minded", Gastech 2005, Bilbao, Spain.
Odumugbo, C.A., 2010. “Natural gas utilisation in Nigeria: Challenges and opportunities”, Journal of Natural Gas Science
and Engineering, 2: 310-316.
REACCESS FP7 Project Report. “Quantification of Socioeconomic Risk & Proposal for an Index of Security of Energy
Supply. Factor analysis methodology applied to the measurement of potential energy driven Risks vector”.
Greco, S., Mousseau, V., Słowiński, R. (2008). Ordinal regression revisited: Multiple criteria ranking using a set of
additive value functions, European Journal of Operational Research, 191 (2), pp. 416-436.
Greco, S., Słowiński, R., Figueira, J., Mousseau, V. (2010). Robust ordinal regression, in: M. Ehrgott, S. Greco, and J.
Figueira (eds.), Trends in multiple criteria decision analysis, Springer, Berlin.
Grigoroudis, E., Siskos, Y., 2002. Preference disaggregation for measuring and analysing customer satisfaction: The MUSA method. European Journal of Operational Research, 143(1): 148-170.
Grigoroudis, E., Siskos, Y., 2003. “MUSA: a Decision Support System for Evaluating and Analyzing Customer Satisfaction”,
Proceedings of the 9th Panhellenic conference in Informatics, in K. Margaritis and I. Pitas, editors, Thessaloniki, Greece,
2003, 113-127.
Jacquet-Lagrèze, E. and Siskos, J., 1982. Assessing a set of additive utility functions for multicriteria decision making,
European Journal of Operational Research, 10 (2): 151-164.
Jacquet-Lagrèze, E. and Siskos, J., 2001. Preference disaggregation: 20 years of MCDA experience, European Journal of
Operational Research, 130: 233-245.
Kadzinski, M., Greco, S., Slowinski, R., 2012. “Extreme ranking analysis in robust ordinal regression”, Omega, 40:488-
501.
http://chartsbin.com/
http://unctadstat.unctad.org/
http://www.indexmundi.com
Sietis A., Angelopoulos D., Doukas H., Psarras J. | The Effect of Wind Power Penetration on the Wholesale Prices in Electricity Markets
Abstract
Through the last decade, Renewable Energy Source (RES) deployment has been increasing rapidly in Europe, and further targets have been set regarding the penetration of renewable energy until 2020. Sceptics argue that the current approach to electricity market development and RES integration is doomed, because it leads to extremely tight situations that threaten the stability of the market. This paper focuses on the investigation of the effects of wind power penetration on the wholesale prices in electricity markets. Specifically, the basic characteristics of an electricity market are demonstrated, and the roles and dynamics of the producers, the retailers, the final consumers and the Transmission System Operators (TSOs) are analyzed. The Merit Order Effect is explained and illustrated via both a theoretical and an empirical approach, in an attempt to evaluate the final total cost of the support scheme for the final consumers regarding wind production and to decide, eventually, whether it is beneficial from a consumer's perspective. Moreover, Negative Electricity Prices, a phenomenon that stems from the integration of wind power, are also explained; the situation is analyzed and the reasons that lead to such ambiguous and controversial situations are put under the microscope. Finally, a review of the solutions currently presented in the literature, aiming either at a short-term relief of the problem or at long-term structural changes, is incorporated in this paper.
KEYWORDS
RES, Merit Order Effect, Negative Electricity Prices, Support Schemes, Price Reduction, Wind Energy
1. INTRODUCTION
This paper focuses on the effect of wind power penetration on electricity markets. The main attention is paid to wind power because of its special characteristics. Hydroelectric power, the type of RES with the biggest share in the European energy mix, has reached grid parity; therefore, support is no longer needed for it to be competitive with the conventional energy sources. Instead, the focus is now placed on wind power, which is the leading RES with very high penetration percentages all over Europe and, especially, in the largest European economies like Germany and Spain (Figure 1). Furthermore, wind power is intermittent, volatile and highly stochastic. All that, combined with the abundance of available wind power, falling investment costs and rising efficiency ratios, creates an intriguing
mixture to be investigated. The main case study, from which examples will be taken, is Germany, not only because of its high wind power penetration ratio but also because of its kind of support scheme, the Renewable Energy Act (EEG). Since 2000, this law has successfully formed the basis for the strong expansion of renewable energies in Germany by establishing a secure financial environment. It offers a combination of options for the support mechanism, with feed-in tariffs still prevailing but losing ground.
There is also an increasing amount of energy exchange between the European countries, which are trying to integrate their electrical networks closely for economic, security-of-supply, technical and other reasons. However, the interconnectors can only handle a limited capacity; therefore, the transmission rights are auctioned to the highest bidder. This may occur via explicit or implicit auctions. In an explicit auction, the transmission rights of the interconnectors are auctioned separately from the power exchange markets of the interconnected countries, hence disregarding the price signals between the markets that imply the amount and the direction of the energy flow. In an implicit auction, on the other hand, the transmission rights are allocated within the auction that takes place for the electrical power itself. The current trend is to integrate the markets as closely as possible in order to increase the economic utilization of the interconnector capacities.
The price of the electrical power exchange, as mentioned before, is settled via auctions, although there are other price-settling mechanisms, such as locational marginal prices or nodal pricing, which have their own economic advantages. The prevailing price-settling mechanism for the electrical power exchange all over the world is
the uniform price auction, which is also used in Germany. This is a double-sided, blind auction between the suppliers and the demanders (the producers and the retailers, respectively), which sets the price for the wholesale market. The price is set by the intersection of the supply and the demand curve, and all purchases and sales are settled at one single price, the "uniform" price. The main advantage of this mechanism is its transparency, which outweighs its other drawbacks of an economic nature.
Each electrical market is operated by a transmission system operator (TSO). In Germany, there are four TSOs (Amprion, TenneT TSO, 50Hertz Transmission, TransnetBW), which have either been legally unbundled from the four main power producers or by now sold off. The TSOs apply a security-constrained economic dispatch. In the aggregated supply curve, the offers are placed in ascending order, and the TSOs dispatch those producers whose offer is lower than or equal to the uniform price, provided that the security of the system is not threatened. When it comes to bottlenecks within the grid infrastructure, the TSOs are obligated to redispatch the power plant operation after the market settlement of a single price zone. This is common for most power markets in Europe. This mechanism gives the producers a strong incentive to bid at their marginal prices, in order to have the best chance of being included in the dispatch without suffering losses: if a producer is included in the dispatch, the mechanism makes sure that he will be paid at least what he offered, and usually more. The supply curve is well known as the merit order curve and is in effect an ascending illustration of the marginal costs of the producers that make up the power plant portfolio.
At this point, it should be mentioned that even though the demand fluctuates, the demand curve is very inelastic. This is not surprising since, except for large consumers, most do not monitor price changes. So, in a case without RES production, equilibrium is reached and a rather high marginal cost plant at the end of the merit order curve sets the price. When RES is added, the supply curve is "shifted" to the right because the overall supply has increased.
In this way, the intersection point between the supply and demand curves changes and the price falls, since the new equilibrium intersects the merit order curve at a lower point, which implies a lower marginal cost power plant (Figure 2). The dual view is that when RES power is introduced in the market, it absorbs a portion of the demand; the lowered residual demand is met by the same existing conventional supply, and thus the price decreases.
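The mechanism can be made concrete with a minimal numerical sketch (the plant portfolio and demand figures below are invented for illustration and do not correspond to any real market):

```python
# Minimal merit-order sketch: the uniform price equals the marginal cost of
# the last plant needed to cover an inelastic demand. All figures invented.

def clearing_price(plants, demand_mw):
    """plants: list of (capacity_mw, marginal_cost_eur_per_mwh) tuples."""
    served = 0.0
    for capacity, cost in sorted(plants, key=lambda p: p[1]):  # merit order
        served += capacity
        if served >= demand_mw:
            return cost  # price set by the marginal (last dispatched) plant
    raise ValueError("demand exceeds total supply")

portfolio = [(3000, 5), (4000, 25), (3000, 45), (2000, 70), (1000, 120)]
demand = 10500  # MW, assumed perfectly inelastic

print(clearing_price(portfolio, demand))                # 70 EUR/MWh
# 2000 MW of wind at ~0 marginal cost shifts the supply curve to the right:
print(clearing_price([(2000, 0)] + portfolio, demand))  # 45 EUR/MWh
```

With the same demand, adding the zero-marginal-cost wind block moves the intersection to a cheaper plant, which is exactly the merit order effect described above.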
The level of the total load is also crucial, since the lower the total load, the smaller the effect of the RES integration becomes, as the supply curve is not exactly linear but rather concave up. Furthermore, the load elasticity affects the impact of RES integration: an increased elasticity of the load curve is negatively correlated with the absolute amount of the price reduction. In addition, the power plant portfolio is very important, because it changes the construction of the supply curve and hence its slopes, which are of utmost relevance for the volume of the merit order effect.
For the same reasons, the fuel costs and the CO2 prices also constitute influencing parameters. Attention must also be paid to the ambiguous effect that these factors have on the volume of the merit order effect; in a following chapter, we investigate how these prices indirectly affect the price reduction caused by the penetration of RES. In this perspective, a reduction of these costs leads to a lower slope of the merit order curve and thus lowers the volume of the price reduction. Finally, the scarcity mark-up at the end of the supply curve is positively correlated with the volume of the price reduction. Nevertheless, further investigation is needed to establish the specific numerical correlation of these effects.
relation to the situation before the RES integration, because this amount of demand, now satisfied by RES, was previously met by these conventional, emission-intensive power sources.
At this point, a brief explanation of the EU Emission Trading Scheme (EU-ETS) is deemed necessary. The EU-ETS has put a cap on emissions (in the 3rd phase there will be a single EU-wide cap, which complicates the analysis, but the basic principle remains). The total allowed emissions are distributed among the electricity industry and the other sectors and are called 'allowances'; they are allocated either for free or through auctioning.
Through the EU-ETS, the CO2 emissions have gained an economic value and respond to the laws of supply and demand. On the one hand, if the allowances need to be bought, then, as wind displaces conventional generation, the demand for them will decrease, so the price of the allowances will decrease as well. At times when wind power is not available, the conventional power plants do operate and, since their actual production costs have declined, they integrate the allowance price reduction into their marginal pricing. That leads to lower market prices. On the other hand, even if the allowances are allocated freely (although in the 3rd phase the auctioning method gradually prevails), the result remains the same. The conventional power producers also take the opportunity costs into consideration when pricing, namely the alternative revenue they would gain if they sold the allowances, since these are tradable; this implies that the marginal price offered must more than compensate for that amount. Through this procedure, if the price of the allowances declines, the marginal prices decline as well, thus lowering the wholesale prices. This phenomenon is illustrated in Figures 3 and 4.
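A hypothetical numerical illustration of this opportunity-cost pricing (all figures are assumed for the example, not taken from any real market): consider a lignite plant with fuel cost $c_f = 30$ €/MWh and emission factor $e = 0.9$ tCO2/MWh, whose rational marginal bid is

$$MC = c_f + e \cdot p_a .$$

A fall of the allowance price $p_a$ from 15 to 10 €/tCO2 lowers the bid from $30 + 0.9 \times 15 = 43.5$ to $30 + 0.9 \times 10 = 39$ €/MWh, regardless of whether the allowances were bought or received for free.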
stems from the fact that beyond a point they do not want, or are not even able, to further reduce their output. There are technical boundaries that prevent those baseload plants from reducing their output below a certain percentage of their nominal output. Also, those plants have huge investment costs but low marginal ones and, consequently, need to be highly utilized to finance their investments (the flexibility of the conventional power plants can be seen in Figure 5). So either they will ramp down or they will produce at those minimum thresholds. This situation, when there are no further opportunities to reduce conventional production, is called a tight market situation. However, the supply must always meet the demand, and a baseload plant endures great damage by ramping down. On the one hand, this is because the procedure itself is really expensive and also reduces the lifetime of the installation. In addition, there is a great opportunity cost, because ramping down and up takes time, and opportunities to earn revenue in the meantime, if prices increase again, are lost. To avoid ramping down, the baseload plants integrate these costs into their negative bids, thus taking a better place in the merit order curve and being included in the dispatch. Figure 6 clearly illustrates this situation.
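A back-of-the-envelope sketch of why such negative bids can be rational (all parameters are invented; actual cycling costs are plant-specific):

```python
# Illustrative sketch: the lowest bid at which staying online still beats a
# shutdown/startup cycle for a baseload plant. All parameters are invented.

def lowest_rational_bid(marginal_cost, cycle_cost, low_price_hours, p_min_mw):
    """Bidding below marginal cost remains rational while the loss per MWh
    at minimum output stays below the avoided cost of a full cycle."""
    avoided_cost_per_mwh = cycle_cost / (low_price_hours * p_min_mw)
    return marginal_cost - avoided_cost_per_mwh

# Nuclear-like unit: 10 EUR/MWh marginal cost, 600,000 EUR per shutdown/
# startup cycle, 6 expected hours of low prices, 1000 MW minimum output.
print(lowest_rational_bid(10, 600_000, 6, 1000))  # -> -90.0 EUR/MWh
```

Under these assumed numbers the plant would rationally keep producing at any price above -90 EUR/MWh rather than ramp down.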
5.1. Implications
As far as the level of the price is concerned, there are two opposing effects which determine the net effect on the final consumers. The first is the wholesale price decrease which occurs due to the wind power penetration, and the second is the support payments that are imposed on the final consumers to finance the support mechanisms that guarantee this penetration. The total net effect can be either an absolute or a relative negative correlation between the wholesale and the retail price. The literature has not yet concluded on the prevailing case, since the outcome depends heavily on the assumptions and parameters of each investigation, which makes it a controversial issue. Bode (2006) created a market model and showed the different net effects in relation to the slope of the merit order curve and the remuneration of the feed-in tariffs (Figures 8, 9).
Figure 8: Net effect - 1st Type (Source: Bode, S., 2006)
Figure 9: Net effect - 2nd Type (Source: Bode, S., 2006)
Nonetheless, there are great concerns for the electricity industry. The increased price volatility, in combination with declined electricity price levels, has created an insecure environment for further investments in the electrical industry. The utilization ratios of the conventional power plants are lowered, and that increases their investment risks. The negative prices are not problematic per se, but they clearly demonstrate a malfunction of the system. Nuclear energy is being phased out, and wind power sources seem unable to offer an alternative option to replace it; the main concerns are energy security and safety, based on the technical stability of the system, due to wind's small capacity credit. In addition, the increasing share of high marginal cost power plants further increases the volatility of the price, thus creating greater positive and negative spikes and putting the market into a vicious circle. Nevertheless, it is beyond doubt that a turn to RES is more than necessary, especially to wind power, which is on the verge of achieving grid parity. There are feasible and implementable solutions that can reform the electricity market, and a long-term scheduling and addressing of the problem is essential.
5.2. Solutions
A brief presentation of short and long term solutions follows.
To conclude, further analysis and investigation of the key parameters and of the proposed solutions are needed in the future.
ACKNOWLEDGEMENT
This work was supported by the Management and Decision Support Systems Laboratory, School of Electrical
and Computer Engineering of the National Technical University of Athens.
REFERENCES
Bode, Sven, and Helmuth-Michael Groscurth. "The Effect of the German Renewable Energy Act (EEG) on the Electricity Price". No. 358. HWWA Discussion Paper, 2006.
Bode, Sven. On the impact of renewable energy support schemes on power prices. No. 4-7. HWWI Research Paper, 2006.
Frondel, Manuel, Ritter, Nolan, Schmidt, Christoph and Vance, Colin. "Economic impacts from the promotion of
renewable energy technologies: The German experience." Energy Policy 38.8 (2010): 4048-4056.
Ketterer, Janina. The Impact of Wind Power Generation on the Electricity Price in Germany. No. 143. Ifo Working Paper,
2012.
Klessmann, Corinna, Christian Nabe, and Karsten Burges. "Pros and cons of exposing renewables to electricity market
risks—A comparison of the market integration approaches in Germany, Spain, and the UK." Energy Policy 36.10 (2008):
3646-3661.
Munksgaard, Jesper, and Poul Erik Morthorst. "Wind power in the Danish liberalised power market—Policy measures,
price impact and investor incentives." Energy Policy 36.10 (2008): 3940-3947.
Nicolosi, Marco. "Wind power integration and power system flexibility–An empirical analysis of extreme events in
Germany under the new negative price regime." Energy Policy 38.11 (2010): 7257-7268.
Rathmann, M., 2007. Do support systems for RES-E reduce EU-ETS-driven electricity prices? Energy Policy 35 (1), 342–
349.
Sáenz de Miera, Gonzalo, Pablo del Río González, and Ignacio Vizcaíno. "Analysing the impact of renewable electricity
support schemes on power prices: The case of wind electricity in Spain." Energy Policy 36.9 (2008): 3345-3359.
Sensfuß, F., 2007. Assessment of the impact of renewable electricity generation on the German electricity sector—an agent-based simulation approach. Dissertation, Universität Karlsruhe (TH), Karlsruhe. VDI, Düsseldorf.
Sensfuß, Frank, Mario Ragwitz, and Massimo Genoese. "The merit-order effect: A detailed analysis of the price effect of
renewable electricity generation on spot market prices in Germany." Energy policy 36.8 (2008): 3086-3094.
Traber, Thure, and Claudia Kemfert. "Impacts of the German support for renewable energy on electricity prices,
emissions, and firms." Energy Journal 30.3 (2009): 155.
Chatzithanasis E., Androulaki S., Psarras J. | Optimization of Medium-term Natural and Liquefied Gas Supply for a Distribution Company
Abstract
In today's rapidly changing world, the development of suitable and specialized decision support systems is needed more than ever. This need is even more conspicuous in the natural gas market since its deregulation and pipeline unbundling. Also, a new player has entered the market: LNG, easily transferred by ships, has changed the rules of the game, and the traditional natural gas market structure has now changed dramatically. Taking these new market conditions into consideration, distribution companies have to adjust their tactical planning. Getting used to their new role, distribution companies focus on the delivery of natural gas from the major pipelines or other LNG sources to the end user. To remain competitive and profitable, distribution companies have to optimize their supply plans based on different and complicated variables, including current market prices.
This paper is not focused on the company's day-to-day operation, but aims to provide a managerial tool for the tactical and strategic decision-making level. This approach meets the need for a simplified analysis of different scenarios at the higher levels of an organization's hierarchy. We try to identify the most influential external variables affecting the supply cost and to find the minimum yearly cost. The benefits offered by this approach for cost minimization are studied and presented in a variety of business scenarios, such as the case where the company can hold some amount of gas in storage, optimized annual delivery plans, and the planning and optimization of all modes of transportation, including pipeline and sea. Scenarios with different types of contracts to manage the natural gas supply are also taken into consideration.
The main purpose of this study is to develop an optimization model for the monthly operation of a natural gas distribution company that holds a monopoly position in a local market. We use linear programming with an objective function that seeks to minimize the yearly gas supply costs. The main goal of this tool is to provide a general view of the company's options and of the basic strategies that can be followed through different scenarios.
KEYWORDS
LNG supply, natural gas, optimization, Decision Support System
1. INTRODUCTION
For the purpose of this study, we are going to deal with a fictional Natural Gas Distribution Company. In
order to proceed with the optimization model for the monthly operation of the company, we have to place
the company in the supply chain. The model that we are going to construct will decide on the minimization of the annual total (supply, transportation, gasification) cost of natural gas. It also supports the decisions for monthly orders from each supplier and assesses the opportunities of the spot market.
As we can see in Figure 1, our company has the role of a middleman between production and the end user. It also has LNG storage capability. The main features of this company are the following:
The company:
Has signed long-term contracts with three different suppliers
Has the ability to order LNG from the Spot Market.
Can use a tank for the LNG storage
Has to pay the Transmission System Operator for the use of the pipelines (Euros/MWh). This price
is constant for the whole year.
Has to pay for the gasification cost of the LNG (Euros/MWh).
The decision for the monthly order schedule is taken at the start of the year and it can be re-
evaluated every month.
Long-term contracts may contain many special terms; for the purpose of this study, we assume the following terms with each supplier:
The company cannot order more quantity than the maximum agreed
The price of natural gas is fixed at the start of the year and is constant throughout the year
The company cannot order less than the minimum quantity agreed
If the annual ordered quantity is less than the minimum quantity, the residual must be paid and becomes available for the next year (pool)
The available quantity (pool) from the previous year's use is called the available make-up quantity. It is a pre-paid quantity from the previous year that can be ordered only if the minimum constraint has already been met, and only during the last quarter of the year. It is treated as an asset for the company.
2. THE MODEL
The model has to be provided with the necessary data in order to proceed to the minimization. Having described above the basic frame in which the company operates, we have to set the appropriate inputs. The model has to be built with the following data given:
Monthly Demand
Natural Gas price agreed with each supplier (Euros/MWh)
Having equipped the model with the above data, it will provide several outputs. Of course, the main goal is to obtain the minimized cost. Alongside the cost, the model will provide a monthly schedule of natural gas orders per supplier (MWh). Another set of outputs will be the total monthly gasification quantity (MWh), as well as the total available make-up quantity used per month per supplier (MWh). Finally, the model will give as an output the available make-up quantity for next year's use.
Finally, to conclude the modeling of the problem, the objective function is set up:

$$U = \sum_{i=1}^{12}(Tr_A + Pr_A)(A_i + Ap_i) + \sum_{i=1}^{12}(Tr_B + Pr_B)(B_i + Bp_i) + \sum_{i=1}^{12}(GaC + Pr_C)\,\alpha C_i + \sum_{i=1}^{12}(GaC + Pr_{Sp})\,\alpha Sp_i - (mk_A)Pr_A - (mk_B)Pr_B$$

Where:
$Pr_A$, $Pr_B$, $Pr_C$ and $Pr_{Sp}$ are the NG prices (Euros/MWh) from each supplier;
$Tr_A$ and $Tr_B$ are the transportation costs (Euros/MWh) from suppliers A and B, respectively;
$GaC$ is the gasification cost (Euros/MWh) for supplier C and the Spot Market;
$\alpha$ is the conversion factor from m³ of LNG to MWh;
$A_i$, $B_i$ are the monthly pipeline order quantities, $C_i$, $Sp_i$ the monthly LNG order quantities, $Ap_i$, $Bp_i$ the pool quantities used, and $mk_A$, $mk_B$ the make-up (pool) quantities of suppliers A and B, whose gas price is credited back since it was pre-paid in the previous year.
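A minimal sketch of how such a model could be coded is given below, assuming the open-source PuLP library. Only the demand-balance and annual contract min/max constraints are kept (the storage tank, the load-count limits and the make-up quantities are omitted for brevity), and the prices and bounds are the illustrative figures used in the scenario that follows:

```python
# Reduced sketch of the supply-cost LP with PuLP (pip install pulp).
# Storage, LNG load counts and make-up quantities are deliberately omitted.
import pulp

months = range(12)
demand = [4.5e6, 4.5e6, 3.5e6] + [3.0e6] * 9            # MWh per month
price  = {"A": 32, "B": 32, "C": 32, "Spot": 28}         # EUR/MWh
extra  = {"A": 0.2, "B": 0.2, "C": 0.22, "Spot": 0.22}   # transport/gasification
qmin   = {"A": 20.0e6, "B": 7.0e6, "C": 5.0e6, "Spot": 0}
qmax   = {"A": 30.0e6, "B": 21.0e6, "C": 18.0e6, "Spot": 5.0e6}  # Spot capped
                                                         # by the ship loads
prob = pulp.LpProblem("gas_supply", pulp.LpMinimize)
q = pulp.LpVariable.dicts("q", (qmin, months), lowBound=0)  # MWh ordered

prob += pulp.lpSum((price[s] + extra[s]) * q[s][m] for s in qmin for m in months)
for m in months:                                   # meet the monthly demand
    prob += pulp.lpSum(q[s][m] for s in qmin) == demand[m]
for s in qmin:                                     # annual contract bounds
    prob += pulp.lpSum(q[s][m] for m in months) >= qmin[s]
    prob += pulp.lpSum(q[s][m] for m in months) <= qmax[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective))                  # minimized yearly cost
```

As expected, such a reduced model fills the cheaper spot quota first and then falls back to the contracted suppliers until their minimum quantities are met.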
2.1.1. Scenarios
In the scenario chosen to be presented, we set the tank capacity equal to 1.000.000 MWh and the LNG loads all equal to 40.000 m³. Also, we set the total number of loads allowed equal to 3, and the available make-up quantity for supplier A from the previous year's use equal to 500.000 MWh. We assume that the other two suppliers had not created any quantity in the previous year. It must be noted that the above numbers are randomly chosen for the sake of the study.
A monthly demand is set using the information drawn from Figure 2.
Figure 2: Natural Gas Daily Demand for the year 2010 in the Greek Market.
Source: www.desfa.gr
From the figure above, the following demand pattern can easily be extracted: demand is high during the first quarter and then follows a steady trend at a lower level. Accordingly, in Table 1 a high demand is set for the first two months of the year, descending thereafter to a steady lower level.
Table 1: Scenario input data (monthly demand, costs, and contract terms).

Month | Demand (MWh)
Jan | 4.5 M
Feb | 4.5 M
Mar | 3.5 M
Apr | 3.0 M
May | 3.0 M
Jun | 3.0 M
Jul | 3.0 M
Aug | 3.0 M
Sep | 3.0 M
Oct | 3.0 M
Nov | 3.0 M
Dec | 3.0 M
Total | 39.5 M

Transportation cost: 0,2 €/MWh; Gasification cost: 0,22 €/MWh.

Supplier | Max (MWh) | Min (MWh) | Available Make-up (MWh) | Price (€/MWh)
Sup. A | 30.0 M | 20.0 M | 500.000 | 32
Sup. B | 21.0 M | 7.0 M | - | 32
Sup. C | 18.0 M | 5.0 M | - | 32
Spot Market | - | - | - | 28
Running the model with the above input gives the outputs shown in Table 3.
Table 3: Model outputs (monthly order schedule, LNG loads and gasification).

Month | Sup. A (MWh) | Pool A (MWh) | Sup. B (MWh) | Pool B (MWh) | Spot (# loads) | Sup. C (# loads) | Total (# loads) | Gas. quantity to demand (MWh) | Tank level (MWh) | Total gasified (MWh)
Jan | - | - | 3.694.800 | - | 3 | - | 3 | 805.200 | - | 805.200
Feb | 1.052.000 | - | 2.642.800 | - | 3 | - | 3 | 805.200 | - | 805.200
Mar | 2.889.600 | - | - | - | 3 | - | 3 | 610.400 | 194.800 | 805.200
Apr | 3.000.000 | - | - | - | - | 3 | 3 | - | 1.000.000 | 805.200
May | 1.194.800 | - | - | - | - | 3 | 3 | 1.805.200 | - | 805.200
Jun | 2.194.800 | - | - | - | - | 3 | 3 | 805.200 | - | 805.200
Jul | 2.389.600 | - | - | - | 3 | - | 3 | 610.400 | 194.800 | 805.200
Aug | - | - | 3.000.000 | - | 3 | - | 3 | - | 1.000.000 | 805.200
Sep | 2.194.800 | - | - | - | 2 | 1 | 3 | 805.200 | 1.000.000 | 805.200
Oct | 1.389.600 | - | - | - | - | 3 | 3 | 1.610.400 | 194.800 | 805.200
Nov | 3.000.000 | - | - | - | - | 3 | 3 | - | 1.000.000 | 805.200
Dec | 694.800 | 500.000 | - | - | - | 3 | 3 | 1.805.200 | - | 805.200
Total | 20.000.000 | 500.000 | 9.337.600 | 0 | 17 | 19 | 36 | 9.662.400 | - | 9.662.400
Several things must be mentioned in this scenario. First of all, because of the high demand, the model chooses to meet the minimum agreed quantities for all three suppliers, so no make-up quantity for next year is created. Also, because of the price difference in the spot market, the model decides to order as much spot LNG as it can, but the infrastructure does not allow such a big order; the infrastructure constraint is imposed on the model through the total number of loads. The 19 ships shown for Supplier C equal 5.000.000 MWh of LNG. The model decides to order the remaining quantity needed to meet the demand from Supplier B. This is an arbitrary choice: if this excess quantity were ordered from Supplier A instead, the result would be the same, as long as the prices are equal.
3. CONCLUSIONS
This is a simplified analysis of a hypothetical market; a more day-to-day approach would require many more constraints and assumptions. But the purpose of this study is to provide a tool for the higher levels of the administration pyramid. There are many expansions that can be proposed for the existing model: the entry of a competitor, an increase of the storage capacity of the tank, etc. Our aim, however, was to provide a framework into which the special characteristics of any specific market under examination can later be incorporated.
REFERENCES
Book
G. Prastakos, 2006. Management Science: Operational Decisions in the Information Society. Ath. Stamoulis Publications, Athens.
Richard Bronson and Govindasami Naadimuthu, 1997. Schaum's Outline of Theory and Problems of Operations Research, McGraw-Hill Companies Inc., 1221 Avenue of the Americas, New York, NY, 10020, USA.
F. Robert Jacobs, Richard B. Chase, 2011. Operations and Supply Chain Management, McGraw-Hill Companies Inc., 1221 Avenue of the Americas, New York, NY, 10020, USA.
WebSites
www.depa.gr
www.desfa.gr
http://www.eia.gov/naturalgas
http://www.window.state.tx.us/specialrpt/energy/nonrenewable/gas.php
Journal
Francisco Torres, 1991. Linearization of mixed integer products. Mathematical Programming 49, 427-428.
DESFA, 2012. Study on the Development of the National Natural Gas Transmission System 2013-2022, July 2012.
O'Neill, R.P., Williard, M., Wilkins, B., Pike, R., 1979. A Mathematical Programming Model for Allocation of Natural Gas. Operations Research, 27(5): 857-873.
Feldman, M.B., 1988. Optimization of gas transmission systems using linear programming. PSIG Annual Meeting, Toronto, Ontario, Canada.
Matzakou I., Siskos E., Askounis D. | Towards a Multicriteria Decision Support System for e-Government Benchmarking in European Union
Abstract
The design, development and implementation of a Decision Support System (DSS) which evaluates and ranks European Union (EU) countries over their e-Government progress is proposed within the scope of this paper. The DSS, unlike other related implementations that are based on a limited number of indicators and do not highlight the multidimensional nature of electronically-provided services, embeds a particular multicriteria evaluation methodology for e-Government benchmarking. The evaluation system consists of four points of view: infrastructures, investments, e-processes, and users' experience. The main objective of the DSS is to assist and support a stakeholder in forming his own personalized e-Government evaluation. In the end, 21 selected EU countries are evaluated and ranked over their e-Government readiness, based on the unique preferences and standards of each stakeholder-decision maker. Their ranking is obtained through an additive value model, assessed by an ordinal regression method, namely UTA II, along with specific mathematical programming techniques implemented within the DSS, in order to estimate each country's score and rank. The DSS is designed to be launched as a web application on the World Wide Web (WWW) and thus be available: to citizens, in order to provide them with information on the status of e-Government readiness at EU level; to decision makers, serving as a tool to support them in the evaluation and ranking of EU countries; to policy makers, for making justified decisions over reforms in the public sector; and generally, to all stakeholders who want a generic and diachronic view of e-Government progress in the EU.
KEYWORDS
E-Government evaluation, Multicriteria Analysis, Decision Support System, Ordinal Regression Approach
It has also been proven that e-Government has presented an unprecedented potential to improve the
responsiveness of governments to the needs of citizens and has thus long been recognized as a key
strategic tool to enable reforms in the public sector (Charalabidis et al., 2010). This evolving environment that e-Government creates can further improve quality of life and contribute towards trust in government and democracy.
Therefore, considering all this beneficial impact of e-Government on society, one of the research questions that can logically be addressed is how one can measure the e-Government level of a country, as well as assess its progress and standing compared to other EU countries. Besides, a focused assessment of e-Government and other related initiatives such as e-Commerce, e-Education, e-Health, and e-Science is essential if a country is to make substantial progress (Ojo et al., 2007). In this direction, the research field of "e-Government Benchmarking" is used as a tool to measure and assess the progress made by an individual country over a period of time and to compare its growth against other countries. Benchmarks
can have a significant practical impact, both political and potentially economic and can influence the
development of e-Government services (Bannister, 2007).
For the purpose of this paper, i.e. to answer the aforementioned research question, a particular
multicriteria evaluation methodology of an individual country’s e-Government progress is proposed,
embedded and implemented in a software system, in the form of a DSS. This methodology is based on the
principles and tools of multicriteria decision analysis (MCDA), where each decision maker (expert,
evaluator, etc.) is able to globally evaluate every country in a personal way over a consistent family of
evaluation criteria. The overall evaluation of multiple countries and their ranking is obtained through the
implementation of an additive value model which is assessed by the decision maker. The additive value
system is assessed and defined with the use of the ordinal regression method UTA II, whose interactive
application process is divided in two phases.
The developed DSS evaluates and ranks twenty-one developed European countries on the latest criteria
data, taking into consideration the knowledge and the preferential data of a senior expert on e-Government
who is going to use the proposed DSS. Apart from this, it is designed to embed various other functionalities
and operations for the users, from simple information presentation, to comparative data analysis, as well as
evaluation, assessment and ranking reports. Notably, it is designed to be launched on the WWW in order to
constitute a reference point for citizens, decision makers, policy makers and any other relative
stakeholders.
Consequently, the continuous need for monitoring and assessing e-Government progress has led to the development of tools and models in this direction. These projects are conducted predominantly by governmental, academic and private organizations and, sometimes, by individual developers/citizens.
However, the concept of e-Government evaluation is very broad and spans across many different levels of
evaluation, from the evaluation of authorities’ websites, to the evaluation of actions, policies and
investment plans for e-Government web services, to the evaluation and comparison of the performance of
countries in the field of e-Government (benchmarking).
The third level of evaluation, namely global e-Government evaluation, is the one that corresponds to the proposed DSS of the present study. Tools and models for this specific purpose have already been implemented by
the United Nations (e-Government Survey, United Nations 2012),
the European Commission (European Commission Benchmark Measurement, European
Commission, 2010),
the Economist’s Digital Economy Ranking (Economist Intelligence Unit, 2010),
the American Brown University (Brookings Institution, West, 2008)
and the private company Accenture (Accenture 2007).
However, these implementations have been proven to have a number of weaknesses, mainly because they
are based on a limited number of indicators and do not highlight the multidimensional nature of
electronically-provided services (Siskos et al., 2013).
For the purposes of the present study, we deliberately omit the analytical descriptions of the evaluation criteria. The reader is referred to the paper of Siskos et al. (2013) for more information and analysis on the subject and the data. The DSS evaluates and ranks twenty-one developed European countries on the latest criteria data.
$$U(\mathbf{g}) = \sum_{i=1}^{n} p_i\, u_i(g_i) \qquad (1)$$
$$\sum_{i=1}^{n} p_i = 1 \qquad (2)$$
$$u_i(g_{i*}) = 0, \quad \text{for } i = 1, 2, \dots, n \qquad (3)$$
$$u_i(g_i^{*}) = 1, \quad \text{for } i = 1, 2, \dots, n \qquad (4)$$

where $\mathbf{g} = (g_1, g_2, \dots, g_n)$ is the performance vector of a country on the $n$ criteria; $g_{i*}$ and $g_i^{*}$ are the least and most preferable levels of the criterion $g_i$, respectively. Also, the $u_i(g_i)$, $i = 1, 2, \dots, n$, are non-decreasing marginal value functions of the performances $g_i$, and $p_i$ is the relative weight of the $i$-th function $u_i$. Thus, for a given country $a$, $\mathbf{g}(a)$ and $U(\mathbf{g}(a))$ represent the multicriteria vector of performances and the global value of the country $a$, respectively.
Both the marginal and the global value functions have the monotonicity property of the true criterion. For instance, in the case of the global value function, given two countries $a$ and $b$, the following properties hold:

$$U(\mathbf{g}(a)) > U(\mathbf{g}(b)) \iff a \succ b \quad \text{(Preference)} \qquad (5)$$
$$U(\mathbf{g}(a)) = U(\mathbf{g}(b)) \iff a \sim b \quad \text{(Indifference)} \qquad (6)$$
Because of the objective difficulty of convincing decision makers to externalize tradeoffs between heterogeneous criteria and to verify the preferential conditions cited above, MCDA practitioners usually prefer to infer the DM's additive value function from global preference structures by applying disaggregation or ordinal regression methods (see Jacquet-Lagrèze and Siskos, 2001; Greco et al., 2008). In this study the two-phase disaggregation method UTA II, introduced by Siskos (1980), is implemented. The first phase consists of the construction of the marginal value functions of the criteria, and the second of the assessment of the criteria weights through the evaluation of a reference set of alternatives by the DM. The methodological frame, along with the implementation of the method, is thoroughly described in Siskos et al. (2013).
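To illustrate how such an additive model scores a country once the marginal value functions and weights have been assessed, the following sketch uses a piecewise-linear form typical of UTA-type methods; the breakpoints, weights and performances are invented for the example and are not the ones elicited in the study:

```python
# Hedged sketch of an additive value model with piecewise-linear marginal
# value functions (UTA-style). All numbers are illustrative, not the study's.
import numpy as np

def marginal_value(x, breakpoints, values):
    """Piecewise-linear u_i: interpolates between assessed breakpoint values."""
    return float(np.interp(x, breakpoints, values))

# Hypothetical criterion scales (worst -> best) and assessed marginal values.
criteria = {
    "infrastructures": dict(breakpoints=[0, 50, 100], values=[0.0, 0.7, 1.0]),
    "e-processes":     dict(breakpoints=[0, 50, 100], values=[0.0, 0.4, 1.0]),
}
weights = {"infrastructures": 0.6, "e-processes": 0.4}  # p_i, summing to 1

def global_value(performances):
    """U(g) = sum_i p_i * u_i(g_i), as in equations (1)-(4)."""
    return sum(weights[c] * marginal_value(performances[c], **criteria[c])
               for c in criteria)

country_a = {"infrastructures": 80, "e-processes": 60}
print(round(global_value(country_a), 3))  # -> 0.736
```

Ranking the countries then amounts to sorting them by their global values, with properties (5) and (6) guaranteeing consistency with the DM's preferences.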
4.1. Users
In order to design, develop and implement the proposed DSS embedding the abovementioned
methodology, its basic requirements were initially defined before taking the according design decisions.
Towards this direction, the proposed DSS was decided to be designed as a software application that would
run online on the WWW, in the form of a website, publicly available to the various categories of users. The
three targeted categories of users were: (i) simple citizens, (ii) decision makers/policy makers and (iii)
administrators. This separation has been made depending on the purpose for which each category uses the
DSS, as well as on the various different functionalities available to each one of these categories. More
specifically, the first category, i.e. citizens, includes all people that are interested in obtaining information
and a generic view of the current situation and the diachronic evolution of e-Government across the various
EU countries. They can obtain information about the various e-Government dimensions of each country,
comparative graphs and charts, as well as relative evaluation and ranking reports. The second category, i.e.
decision makers, is prompted to achieve their own evaluation over e-Government instead of just obtaining
information. These stakeholders are interacting with the DSS, communicate their preferences through the
system and finally arrive at their personal evaluation and ranking of the countries. Administrators, on the other hand, are the users responsible for the proper functioning of the website, for the consideration and management of the users' experience and feedback, as well as for updating the site's statistical data and sources.
Based on the abovementioned categories of users, it was decided to design three different interfaces, each
one embedding different data forms, all based on a master template. Consequently, three interfaces were
designed: (i) for the citizens, for informational, comparative and statistical reasons, (ii) for the decision
makers, including the actual implementation and execution of the multicriteria methodology described
earlier and (iii) for the administrators, which includes all the above-mentioned functionalities, as well as
extended operations for managing and giving access rights to users (citizens or decision makers), entering
data, updating and managing the database, debugging any errors and generally, ensuring the effective
maintenance, good operation and efficient performance of the Website.
Figures 1 and 2: The implemented interfaces for citizens (left) and for decision makers (right)
4.3. Data
Indices and indicators used in benchmarks are generally quantitative in nature, and collectively form a
framework for assessment and ranking. Rankings should be supported by well understood and clarified
frameworks and indices, as well as by transparent computational procedures, in order to maximize their
acceptability by the governments and the scientific community. In the present study, the data taken as input in the DSS and its related processes stem from reliable and recognized sources, such as the statistical office of the European Commission (Eurostat), the International Monetary Fund (IMF), the United Nations' e-Government survey and the European Commission's e-Government benchmarking.
Figures 5, 6 and 7: Steps (iii), (iv) and (v) of the use-case scenario within the DSS
What is more substantial is that the actual DSS, which is embedded in the web application developed,
addresses the e-Government evaluation problem effectively with the aid of multicriteria methods and
techniques. Consequently, it supports, empowers and facilitates the decision maker in the process of
extracting his own evaluation and ranking of the e-Government readiness of various selected EU countries.
Considering the sustainability and future prospects of the information system, it should be noted that,
thanks to its scalable design, it allows the incorporation of more criteria and more countries (non-EU ones
and maybe at a global level), as the multicriteria methodology evolves, as well as the involvement of the preferences of multiple decision makers. Additionally, supplementary information and statistics, as well as more visualization features such as charts, maps and comparative views, are currently under implementation for a future, extended version of the web application. Finally, the system aspires to support user management and remote access in the future, enabling access and data entry or validation by multiple authorized users, and thus offering other entities or organizations the possibility to contribute to the e-Government evaluator by adding or editing information regarding the country they represent.
As a next step, the web application is planned to be launched online, first at a pilot stage, so that initial feedback can be collected from the users (decision makers, citizens, researchers, etc.) and taken into consideration for future implementation within an extended version of the DSS. Then, after the pilot phase and the corresponding incorporation of all the initially intended and user-proposed additional features, the web application is intended to be launched openly and to evolve into an advanced, multi-purpose online Decision Support tool for Governments, Citizens and Businesses.
REFERENCES
Accenture (2007). Leadership in customer service: Delivering on the promise. Retrieved November 10, 2012,
http://www.accenture.com/us-en/Pages/insight-public-leadership-customer-service-delivering-promise.aspx.
Bannister F. (2007). The curse of the benchmark: an assessment of the validity and value of e-Government comparisons,
International Review of Administrative Sciences, 73 (2), pp. 171-188.
Charalabidis Y., Markaki O., Lampathaki F., Matzakou I., Sarantis D. (2010), Towards a Scientific Approach to e-
Government, Proceedings of the Transforming Government Workshop 2010 (tGov10), March 18 – 19 2010, Brunel
University, West London, UK.
Charalabidis, Y. (2012). Governmental Service Transformation through Cost Scenarios Simulation: The eGOVSIM Model.
In E. Kajan, F. Dorloff, & I. Bedini (Eds.) Handbook of Research on E-Business Standards and Protocols: Documents, Data
and Advanced Web Technologies, pp. 791-805.
Codagnone, C. and Undheim, T. A. (2008). Benchmarking e-Government: Tools, theory, and practice, European Journal
of ePractice. Retrieved November 10, 2012, from http://www.epractice.eu/files/4.2_0.pdf.
Commission of the European Communities (2003), The Role of eGovernment for Europe's Future, COM(2003) 567 final,
retrieved from www.eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2003:0567:FIN:EN:PDF in November 2013
Economist Intelligence Unit (2010). Digital economy rankings 2010, Beyond e-readiness. Retrieved August 20, 2013,
from http://www-935.ibm.com/services/us/gbs/bus/pdf/eiu_digital-economy-rankings-2010_final_web.pdf.
European Commission, 2010. Digitizing public services in Europe: Putting ambition into action, 9th benchmark measurement, Directorate General Information Society and Media, Brussels.
Greco, S., Mousseau, V., Słowiński, R. (2008). Ordinal regression revisited: Multiple criteria ranking using a set of
additive value functions, European Journal of Operational Research, 191 (2), pp. 416-436.
Gupta, M.P. and Jana, D. (2003). E-Government evaluation: A framework and case study, Government Information
Quarterly, 20, pp. 365–387.
Heeks, R. (2006). Understanding and measuring e-Government: International benchmarking studies. Retrieved
November 10, 2012, from http://unpan1.un.org/intradoc/groups/public/documents/UN/UNPAN023686.pdf.
Jacquet-Lagrèze, E. and Siskos, J. (2001). Preference disaggregation: 20 years of MCDA experience, European Journal of
Operational Research, 130, pp. 233-245.
Mandl, U., Dierx, A., Ilzkovits, F. (2008). The effectiveness and efficiency of public spending, Economic Papers 301.
Retrieved November 10, 2012, from http://ec.europa.eu/economy_finance/publications/publication11902_en.pdf
Ojo, A., Janowski, T., Estevez, E. (2007). Determining progress towards e-Government - What are the core indicators?
Retrieved November 26, 2013, http://www.iist.unu.edu/newrh/III/1/docs/techreports/report360.pdf
Rorissa, A., Demissie, D., Pardo, T. (2011). Benchmarking e-Government: A comparison of frameworks for computing e-
Government index and ranking, Government Information Quarterly, 28, pp. 354-362.
Siskos, E., Askounis, D., Psarras, J. (2013). Multicriteria decision support for global e-Government evaluation, Omega,
under revision
West, D.M. (2008). Improving Technology Utilization in Electronic Government around the World, 2008, Brookings
Institution.
United Nations (2012). E-government survey 2012, E-government for the people. Retrieved November 10, 2012,
http://www2.unpan.org/egovkb/global_reports/12report.htm
Loukis E., Charalabidis Y., Alexopoulos C. | A Methodology for Determining the Value Generation Mechanism and the Improvement Priorities of Open Government Data Systems
Abstract
Many government agencies worldwide have started making considerable investments in developing information systems that enable opening important data they possess to society, in order to be used for scientific, commercial and political purposes. In order to rationalise and support future decisions concerning the development, upgrade, improvement and management of this new type of information system, it is important to understand better what value such systems create and how, and at the same time to identify the main improvements they require. This paper contributes in this direction by presenting a methodology for determining the value generation mechanism of open government data (OGD) systems, as well as priorities for their improvement. It is based on the estimation, from users' ratings, of a 'value model' of the OGD system under evaluation, which consists of several value dimensions and their corresponding value measures, organized in three 'value layers', together with the relations among them. These three value layers concern value related to the efficiency of the OGD system (= quality of the various capabilities it provides to the users), its effectiveness (= degree of supporting users in achieving their objectives), and the users' future behavioral intentions, respectively. The proposed methodology has been applied successfully to an advanced OGD system developed as part of the European project ENGAGE ('An Infrastructure for Open, Linked Governmental Data Provision towards Research Communities and Citizens'), providing interesting insights and improvement priorities. This first application provides evidence that our methodology can be a useful decision support tool for important OGD system development, upgrade, improvement and management decisions.
KEYWORDS
open government data; public sector information; evaluation; value model; decision support system
1. INTRODUCTION
Many government agencies worldwide have started making considerable investments for developing
information systems that enable opening important data they possess to the society, in order to be used for
scientific, commercial and political purposes [1]-[6]. The future evolutions in this area will rely critically on
the decisions of many government agencies concerning the development of new open government data
(OGD) systems, the upgrade (e.g. of initial small pilot ones, so that they can serve more users and host more
datasets), the functional improvement (e.g. in order to serve better users’ needs) and the management
(e.g. concerning the level of maintenance and technical support) of existing ones. In order to rationalise and
support these future decisions it is important to understand and assess the various types of value that these
OGD systems generate, their value generation mechanism, and at the same time – since this is a relatively
new type of information systems – to identify the main improvements they require. However, there has
been quite limited activity in this direction. A recent study of the OECD on OGD initiatives [1] concludes that
‘So far, little has been done to analyse and prove the impact and accrued value of these initiatives’, and
calls for action in this direction. It also notes that an important barrier for this is the lack of a structured and
comprehensive evaluation methodology. The same study, in order to contribute to filling this gap, proposes
an analytical framework for assessing country-level OGD initiatives, which includes three main assessment
dimensions: strategy and legal-institutional framework, implementation framework, and value creation
(social, political and economic). Also, [7] describes an open data maturity model to be used for
assessing the commitment and capabilities of a government agency in pursuing the principles of open data;
it includes three evaluation domains (each of them divided into several sub-domains consisting of
several individual variables): establishment and legal perspective, technological perspective and citizen-
entrepreneurial perspective. Therefore, though there are some first methodologies for evaluating OGD
initiatives at the level of country and individual government agency, there is no methodology for
evaluating OGD systems, which is the most critical level for value creation from OGD.
In this direction this paper describes and validates a methodology for evaluating OGD infrastructures,
which adopts the ‘value model’ approach to IS evaluation proposed in [10-11]. According to this approach
the evaluation of an IS should include not only the assessment of various measures of generated value (as in
the ‘conventional’ IS evaluation approaches), but also the relations among them; this leads to the
formation of a value model of the IS, which provides highly important advantages: it enables a deeper
understanding of the whole value generation mechanism of the particular IS, and also a rational definition
of its improvement priorities (see section 2 for more details on this approach). The proposed methodology
has been used for the evaluation of an advanced ‘second generation’ OGD e-Infrastructure [8-9] developed
in the European project ENGAGE (for more details see http://www.engagedata.eu/about/).
In the following section 2 the proposed methodology is described, while in section 3 the above application
is presented. Finally, in section 4 the conclusions are summarized and future research directions are
proposed.
2. AN EVALUATION METHODOLOGY
Our methodology for evaluating this advanced second generation of OGD infrastructures was based, on the
one hand, on the above three-layer value model approach [10-11], and on the other hand on:
i) Approaches and frameworks from previous relevant IS research (briefly reviewed in [10-11]) concerning:
IS evaluation (we have included in the methodology both IS ‘efficiency’ and ‘effectiveness’ measures), IS
acceptance (we have included measures of ease of use, usefulness and users’ future intentions), IS success
(our methodology has adopted a layered evaluation approach, and has included measures of both
information and system quality, and also of user satisfaction and individual impact) and e-services
evaluation (our methodology has included measures of both the quality of the capabilities offered to the
users, and the support provided to them for achieving their OGD related objectives).
ii) The results of the analysis of potential users’ requirements conducted as part of the above ENGAGE
project (which, as described in more detail in [8-9], include data search, provision and download
capabilities, data processing capabilities, and also capabilities for communication with other users).
iii) The high level technological aspects proposed in the methodologies for country and government agency
level OGD initiatives’ evaluation proposed in [1] and [7] respectively (such as data completeness, quality,
quantity, format and metadata, search capabilities, user satisfaction, platform availability).
Based on these foundations a value model was defined, which consists of eight value dimensions
and the relations among them, and is shown in the Appendix. We remark that these value dimensions are
organized in three value layers, adopting the structure proposed by [10-11], which correspond to efficiency
(value associated with the capabilities it offers to the users), effectiveness (value associated with the
support provided to the users for achieving their objectives) and future behavior (value associated with
users’ future behavior) respectively. It should be noted that the value dimensions of the first efficiency layer
are independent variables, which are under the direct control of the OGD infrastructure developer, who can
take direct actions for improving them if necessary. In contrast, the value dimensions of the other two
layers are not under the direct control of the infrastructure developer, and are dependent to some extent
on the first level ones.
The presented eight value dimensions were further elaborated, and for each of them a number of individual
value measures were defined (again based on the foundations i) to iii) mentioned in the beginning of this
section). Each of these value measures was then converted to a question to be included in a questionnaire
to be distributed to users of the infrastructure. All these questions have the form of statements, and the
users are asked to enter the extent of their agreement or disagreement with them, answering the question:
“To what extent do you agree with the following statements?”. A five point Likert scale is used to measure
agreement or disagreement with each statement (1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree,
5=Strongly Agree). In Table 1 we can see the questions that correspond to the value measures of each value
dimension.
It should be mentioned that the above value model definition can be adapted based on the capabilities
offered by the particular OGD infrastructure under evaluation, so additional value dimensions can be added
for additional capabilities that might be offered (or some value dimensions might not be used if the
corresponding capabilities are not offered).
The evaluation data collected through this questionnaire are then processed according to the following algorithm:
1. Initially, for each value dimension we examine the internal consistency of its value measures, by
calculating the Cronbach Alpha of the variables corresponding to its value measures [12]. This coefficient
quantifies to what extent a set of variables measure different aspects of the same single uni-dimensional
construct, and is calculated as:
$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_{sum}^2}\right)$$
where $s_i^2$ (i = 1, 2, …, k) denote the variances of the k individual variables, while $s_{sum}^2$ denotes the
variance of the sum of these variables. A widely accepted and used practical ‘rule of thumb’ is that values of
Cronbach Alpha exceeding 0.7 indicate ‘acceptable’ levels of internal consistency of the variables [12].
Therefore if for a value dimension its calculated value of Cronbach Alpha exceeds 0.7, we can conclude that
all its measures have acceptable internal consistency.
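To illustrate step 1, here is a minimal sketch of the internal consistency check, assuming a respondents × measures matrix of 1-5 Likert ratings; the data and variable names below are hypothetical, not taken from the ENGAGE evaluation:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's Alpha for an (n_respondents, k_measures) array of Likert ratings."""
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1)      # s_i^2 of each measure
    total_variance = ratings.sum(axis=1).var(ddof=1)  # s_sum^2 of the row sums
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical answers of five respondents to the three PER measures
per = np.array([[2, 2, 3],
                [1, 2, 2],
                [3, 3, 2],
                [2, 3, 2],
                [2, 2, 3]])
print(f"Cronbach Alpha = {cronbach_alpha(per):.2f}")  # > 0.7 => acceptable consistency
```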
2. For each value dimension an aggregate variable is calculated as the average of its individual measures’
variables.
3. Average ratings are calculated for all value measures and dimensions (using for the latter the aggregate
variables calculated in the previous step); this allows us to identify ‘strengths’ and ‘weaknesses’ (=value
measures and dimensions with high and low average ratings) of the OGD infrastructure.
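Steps 2 and 3 reduce to simple averaging. A brief sketch, again with hypothetical answers and illustrative measure names:

```python
import pandas as pd

# Hypothetical 1-5 answers: one row per respondent, one column per value measure.
answers = pd.DataFrame({
    "PER1": [2, 1, 3, 2, 2], "PER2": [2, 2, 2, 3, 2], "PER3": [3, 2, 2, 2, 3],
    "DPR1": [3, 4, 3, 3, 4], "DPR2": [3, 3, 4, 3, 3],
})
dimensions = {"PER": ["PER1", "PER2", "PER3"], "DPR": ["DPR1", "DPR2"]}

# Step 2: the aggregate variable of a dimension is the row-wise mean of its measures.
aggregates = pd.DataFrame({dim: answers[cols].mean(axis=1)
                           for dim, cols in dimensions.items()})

# Step 3: average ratings flag strengths (high) and weaknesses (low).
print(answers.mean().round(2))     # per value measure
print(aggregates.mean().round(2))  # per value dimension
```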
4. For each aggregate variable – value dimension of the second and third layer, we estimate a regression
having it as dependent variable, and having as independent variables all the aggregate variables – value
dimensions of the previous layers, in order to estimate to what extent this value dimension is affected by
value dimensions of previous layers; this is quantified by the R² coefficient of the regression [13]. If some
value dimensions of the second or third layer are affected only to a small extent by the value dimensions of
the previous layers (e.g. R² < 0.5), this indicates that some important value dimensions have been omitted in
the previous layers, so we have to redefine the value model of the OGD infrastructure.
5. For each value dimension of the first level we calculate its impact on the higher level value dimensions
(of the second and the third layers), using again the aggregate variables calculated in step 2. For this
purpose we can use the corresponding standardized coefficients of the regressions of the above step 4.
However, according to the econometric literature [13], if there are high levels of correlation between the
independent variables of a regression (and this happened in our data), then the estimated regression
coefficients are not reliable measures of the impacts of the independent variables on the dependent
variable (multi-collinearity problem). For this reason we decided to use correlations instead: the correlation
coefficient between a first layer value dimension and a higher layer one is used as the measure of the
former’s impact on the latter. Furthermore, we calculated the average correlation of each first layer value
dimension and measure with all second and third layer value dimensions and measures, as a measure of its impact on higher level
value generation.
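A compact sketch of steps 4 and 5 on synthetic data (sizes, names and coefficients are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(42, 6))   # 42 respondents x 6 first-layer aggregate variables
y = X @ rng.uniform(0.0, 0.3, size=6) + rng.normal(0, 0.3, size=42)  # a higher-layer aggregate

# Step 4: OLS regression of the higher-layer variable on the first-layer ones;
# an R^2 below 0.5 would suggest the value model omits important dimensions.
X1 = np.column_stack([np.ones(len(X)), X])        # add an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ beta
r_squared = 1 - residuals.var() / y.var()

# Step 5: under multi-collinearity the regression coefficients are unreliable,
# so simple correlations serve as impact measures instead.
impacts = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
print(f"R^2 = {r_squared:.2f}; impacts = {[round(c, 2) for c in impacts]}")
```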
6. By combining the average ratings calculated in step 3 with the correlations calculated in step 5 we can
construct a value model of the OGD infrastructure at the level of value dimensions, and also a more
detailed one at the level of value measures. These value models enable a deeper understanding of the
value generation mechanism of the OGD infrastructure, as they visualize on one hand the levels of the
various types of value it generates, and on the other hand how value of one layer is transformed to value of
higher layers, and also the origins of higher layers’ value in the lower layers.
7. Finally the value dimensions and the value measures of the first layer, which are the only ‘independent
variables’ within the control of the OGD infrastructure developer, are classified, based on their average
ratings by users, and then based on their impacts on the value dimensions of the second and the third level,
into four groups: low rating – high impact, low rating – low impact, high rating – high impact and high rating
– low impact. The highest priority should be given to the improvement of the value dimensions and
individual value measures of the first group, which receive low ratings and at the same time have a high
impact on the generation of higher level value; so it is on them that we should focus our scarce human and
financial resources. Furthermore, for each value dimension and value measure we can calculate an
‘Improvement Priority Index’ (IMPR_i), which quantifies the priority that has to be given to it, and is equal to
the product of its average correlation with second and third layer value dimensions (AVCOR_i) and the
distance of its average rating (AVRAT_i) from the highest possible rating (equal to 5 in our case):
$$IMPR_i = AVCOR_i \times (5 - AVRAT_i)$$
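A minimal sketch of this prioritization, using the two dimension-level figures recoverable from Table 1 (the remaining dimensions would be added analogously):

```python
# (AVRAT_i, AVCOR_i) pairs from Table 1; the dictionary is deliberately partial.
dims = {
    "Performance (PER)":                  (2.15, 0.378),
    "Data Processing Capabilities (DPR)": (3.27, 0.688),
}

# IMPR_i = AVCOR_i * (5 - AVRAT_i): a low rating combined with a high impact
# on higher-layer value yields a high improvement priority.
impr = {name: avcor * (5 - avrat) for name, (avrat, avcor) in dims.items()}
for name, value in sorted(impr.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: IMPR = {value:.2f}")
```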
3. APPLICATION
The proposed methodology has been applied for the evaluation of the first version of an advanced second
generation OGD infrastructure developed in the abovementioned ENGAGE project. The evaluation
questionnaire shown in Table 1 was initially tested by three colleagues highly experienced in quantitative
research in the IS domain, who found it clear and understandable, and did not report any important
problems. Then 42 postgraduate students of the University of the Aegean (Greece) and the Delft University
of Technology (The Netherlands) (both partners of the above project) in the IS domain were trained in the
capabilities of this OGD infrastructure (in a two-hour session), and then used it to implement a
representative scenario (in a one-hour session). Immediately after the end of these tasks they all filled in the
questionnaire in paper form. We believe that since all these postgraduate students had some experience in
quantitative IS research, they are satisfactory sources of information concerning various aspects of value of
this OGD infrastructure.
The collected evaluation data were processed using the algorithm described in section 2. Initially we
calculated the Cronbach Alpha values for all dimensions (step 1), and since all of them exceeded the lowest
acceptable value 0.7 we can conclude that they are internally consistent, so we proceeded to the
calculation of the value dimensions’ aggregate variables (step 2). Then we estimated the regression models
of the second and the third layer value dimensions (step 4), and both had R² coefficients higher than 0.5. In
Table 1 we can see for each value dimension and each value measure its average rating (step 3), its
correlations with the two second and third layer dimensions and also the average of these two correlations
(step 5).
Table 1. Questions-Average ratings of value dimensions/measures, and correlations with 2nd and 3rd layer value
dimensions
Code | Statement / value dimension | Avg. rating | Corr. SUO | Corr. FBE | Avg. corr.
EOU6 | The platform supports user account creation in order to personalize views and information shown | 3.44 | 0.220 | 0.213 | 0.217
EOU7 | The platform provides high quality of documentation and online help. | 2.83 | 0.634 | 0.592 | 0.613
PER | Performance | 2.15 | 0.379 | 0.377 | 0.378
PER1 | The platform is always up and available without any interruptions. | 2.10 | 0.363 | 0.371 | 0.367
PER2 | Services and pages are loaded quickly. | 2.15 | 0.310 | 0.328 | 0.319
PER3 | I did not notice any bugs while using the platform. | 2.20 | 0.278 | 0.209 | 0.244
DPR | Data Processing Capabilities | 3.27 | 0.735 | 0.640 | 0.688
DPR1 | The platform provides good capabilities for data enrichment (i.e. adding new elements – fields) | 3.29 | 0.483 | 0.460 | 0.472
DPR2 | The platform provides good capabilities for data cleansing (i.e. detecting and correcting ambiguities in a dataset) | 3.26 | 0.644 | 0.581 | 0.613
DPR3 | The platform provides good capabilities for linking datasets. | 3.17 | 0.599 | 0.652 | 0.626
DPR4 | The platform provides good capabilities for visualization of datasets. | 3.41 | 0.619 | 0.354 | 0.487
SUO | Support for Achieving User-level Objectives | 3.17 | – | 0.624 | –
SUO1 | I think that using this platform enables me to do better research/inquiry and accomplish it more quickly. | 3.27 | – | 0.513 | –
SUO2 | This platform allows drawing interesting conclusions on past government activity. | 3.17 | – | 0.570 | –
SUO3 | This platform allows creating successful added-value electronic services. | 3.07 | – | 0.548 | –
FBE | Future Behaviour | 3.19 | 0.624 | – | –
FBE1 | I would like to use this platform again. | 3.24 | 0.472 | – | –
FBE2 | I will recommend this platform to colleagues. | 3.15 | 0.702 | – | –
Using the average ratings and correlations shown in Table 1, we constructed the value model of the OGD
infrastructure (step 6), at the level of value dimensions, which is shown in the Appendix (while similarly we
can construct a more detailed value model at the level of value measures). This provides a compact
visualization of the value generation mechanism of this OGD infrastructure.
Furthermore, priorities for improvements were identified (step 7). For this purpose we classified the first
layer value dimensions into two groups according to their average rating: a higher ratings group and a lower
ratings group (Table 2). Also, we classified them into two groups according to their impact on (average
correlation with) second and third layers’ value dimensions: a higher impact group and a lower impact
group (Table 3). From these two classifications we can conclude that our highest priority should be given to
the improvement of the data search-download capabilities, since they received low ratings from the users,
and at the same time they have high impact on higher layers’ value generation.
Table 2. Classification of first layer value dimensions according to their average ratings by the users
Lower Ratings Group: Data provision capabilities; Data search-download capabilities; User-level feedback capabilities; Performance
Higher Ratings Group: Ease of use; Data processing capabilities
Table 3. Classification of first layer value dimensions according to their impact on higher level value dimensions
Lower Impact Group: Data provision capabilities; User-level feedback capabilities; Performance
Higher Impact Group: Data search-download capabilities; Data processing capabilities; Ease of use
Finally, we calculated the Improvement Priority Index for all value dimensions, shown in Table 4 in order of
improvement priority; the highest improvement priority should thus be given to the data search-download
capabilities (confirming the conclusion drawn from Tables 2 and 3), mainly due to their high impact on
higher layer value generation, followed by the data processing and data provision capabilities.
4. CONCLUSIONS
In the previous sections of this paper an OGD evaluation methodology has been presented and validated
that adopts a novel approach, based on the estimation of value models of these advanced OGD
infrastructures, which include assessments of both the main types of value they generate and the
relations among them (which are neglected and not exploited by the ‘conventional’ IS evaluation
approaches). It enables not only the identification of strengths and weaknesses of an OGD infrastructure,
but also a deeper understanding of its whole value generation mechanism. It also allows a rational
definition of improvement priorities, which is quite important, as this is a relatively new type of IS.
The first application of the proposed evaluation methodology, for the evaluation of the users’ perspective of
an advanced second generation OGD infrastructure, led to interesting insights into this new type of IS,
especially with respect to its novel features, providing evidence that our methodology can be a useful
decision support tool for important OGD system development, upgrade, improvement and management
decisions. In particular, it has been concluded that the data processing capabilities, a key novel feature of
this new generation of OGD infrastructures, have a very strong impact on the generation of higher level
value, associated with the achievement of fundamental objectives of users, and with their future behaviour.
Another novel feature, the user-level feedback capabilities (concerning rating and commenting on datasets
and also reading other users’ ratings and comments), was found to have considerable impact on higher
level value generation.
Further research is required concerning the application of the proposed methodology for the evaluation of
other advanced second generation OGD infrastructures, after appropriate adaptations. Furthermore,
the above future research should be based on larger and more ‘professional’ users’ groups (more
experienced than the postgraduate students’ group we used in the present study), taking into account all
the main segments targeted by such OGD infrastructures (e.g. professional researchers in the political,
economic, administrative and management sciences, developers of added-value electronic services,
political analysts and journalists).
REFERENCES
[1] Ubaldi, B. (2013), “Open Government Data: Towards Empirical Analysis of Open Government Data Initiatives”. OECD
Working Papers on Public Governance No 22.
[2] Commission of the European Communities (2011), “Communication from the Commission to the European
Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Open data:
An engine for innovation, growth and transparent governance”, COM (2011) 882 Final, Brussels.
[3] Dekkers, M., Polman, F., De Velde, R., De Vries, M. (2006), “MEPSIR – Measuring European Public Sector Information
Resources: Final report of study on exploitation of public sector information – benchmarking of EU framework
Conditions”, Report for the European Commission, June 2006.
[4] Commission of the European Communities (2009), “Communication from the Commission to the European
Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – ICT
Infrastructures for e-Science”, COM (2009) 108 Final, Brussels
[5] Nentwich, M. (2003), “Cyberscience: Research in the Age of the Internet”, Vienna, Austrian Academy of Sciences.
[6] Chan, C. (2013), “From Open Data to Open Innovation Strategies: Creating e-Services Using Open Government
Data”, 46th Hawaii International Conference on System Sciences.
[7] Solar, M., Concha, G., Meijueiro, L. (2013), ‘A Model to Assess Open Government Data in Public Agencies’, in H. J.
Scholl et al. (Eds.) Proceedings of IFIP EGOV Conference 2012, LNCS 7443, pp. 210-221.
[8] Charalabidis, Y., Ntanos, E., Lampathaki, F. (2011). An architectural framework for open governmental data for
researchers and citizens. In M. Janssen, A. Macintosh, J. Scholl, E. Tambouris, M. Wimmer, H. d. Bruijn & Y. H. Tan (Eds.),
Electronic government and electronic participation joint proceedings of ongoing research and projects of IFIP EGOV and
ePart 2011, pp. 77-85.
[9] Zuiderwijk, A., Janssen, M., Jeffery, K. (2013), ‘An e-infrastructure to support the provision and use of open data’,
International Conference for eDemocracy and Open Government 2013 (CEDEM 13), 22-24 May 2013, Krems, Austria.
[10] Pazalos, K., Loukis, E., Nikolopoulos, V. (2012), ‘A Structured Methodology for Assessing and Improving e-Services in
Digital Cities’, Telematics and Informatics Vol. 29, pp. 123-136.
[11] Loukis, E. Pazalos, K. Salagara, A. (2012), “Transforming e-services evaluation data into business analytics using
value models”, Electronic Commerce Research and Applications, 11(2), 129–141.
[12] Boudreau, M., Gefen, D. (2004), ‘Validation Guidelines for IS Positivist Research’, Communications of the
Association for Information Systems, 13, pp. 380-427.
[13] Greene, W. H. (2011), ‘Econometric Analysis - 7th edition’, Prentice Hall Inc., Upper Saddle River, New Jersey.
Appendix
Value Model of an Advanced Second Generation OGD Infrastructure
[Figure: value model at the level of value dimensions. First layer (average ratings): Data Provision Capabilities 3.03, Data Search & Download Capabilities 3.03, User-level Feedback Capabilities 2.97, Ease of Use 3.35, Performance 2.15, Data Processing Capabilities 3.27; second layer: Support for Achieving User Objectives 3.17; third layer: Future Behaviour 3.19. The arrows carry the correlations between first layer and higher layer dimensions (ranging from 0.379 to 0.760), and the correlation 0.624 between Support for Achieving User Objectives and Future Behaviour.]
Kokkinakos P., Petychakis M., Zuiderwijk A., Mouzakitis S., Argyzoudis E., Psarras J.| An investigation of
Open Data Infrastructures characteristics: rationale and end-user perspective
ABSTRACT
The paper at hand aims to investigate the differentiated requirements of Open Data infrastructures from an
end-user perspective and per stakeholder group. The analysis has been performed in the context of the ENGAGE FP7 e-
Infrastructures Project. The first steps of the work included an extensive literature review, the study of relevant use case
scenarios, interviews, workshops, consultations and conversations between the ENGAGE consortium and
representatives of the various end-user groups, as well as brainstorming amongst experts and the ENGAGE
consortium. The reported results show that the end-users’ needs cover a rather wide spectrum: social/community
characteristics (e.g. integration with social channels, private messaging), uploading/storing/downloading features,
statistical analysis/comparison mechanisms/integration mechanisms/visualisations, metadata/high level descriptions,
legal disclaimers, training facilities, applications etc. A comparative cross analysis depicts the common needs of all end-
user groups, as well as specific needs that apply only to particular stakeholder groups. Based on the needs
recognised, suggestions towards open data infrastructures and the open data community were derived. The ENGAGE
project will constitute the initial test-bed of the results of the work performed.
KEYWORDS
Open Data, Infrastructures, End-User, Needs, Stakeholders.
1. INTRODUCTION
Identifying and recording the various end-users as well as their needs is one of the major steps towards an
efficient and effective implementation of every initiative, regardless of whether it is research-oriented or applied.
Thus, in order to help an open governmental data service infrastructure reach the public, an a-priori
identification of the potential end-users (and end-user groups), as well as of their actual needs regarding
both the infrastructure itself and open/public governmental data in general, is more than necessary.
The paper at hand aims to identify, define and present these needs in an integrated and coherent manner.
This paper is organized as follows. The current Chapter provides an introduction to the rest of the
document, while Chapter 2 provides the identification and reporting of the potential end-user base of an
open governmental data platform/infrastructure. Chapter 2 also presents the needs of
each end-user profile related to open/public data and to the relevant infrastructure. Chapter 3 presents a
results-oriented analysis of Chapter 2 and provides suggestions towards any interested stakeholder,
followed by conclusions and directions for future work.
Nevertheless, it can also be taken for granted that each of the end-user categories recognised and
reported in the previous sub-chapter is accompanied both by horizontal needs (needs that are common
amongst all end-user groups) and by individual needs that may not be met in any other stakeholder
category. The identification and reporting of these needs is of high importance, as it gives each
infrastructure a competitive advantage in engaging all possible end-users in an efficient and effective way.
In general, the process behind identifying and reporting the end-user needs that follow is depicted in the
following figure:
[Figure: process for identifying and reporting end-user needs]
The results show that the needs cover a rather wide spectrum: social/ community characteristics,
uploading/ storing/ downloading features, statistical analysis/ comparing mechanisms/ integration
mechanisms/ visualisations, metadata/ high level descriptions, legal disclaimers, training facilities,
applications were just some of the needs identified and reported. The cross analysis performed in Chapter
2.3 showed the common needs of all end-users’ groups, as well as specific needs that apply only to
particular stakeholders’ groups.
Based on the needs recognised, suggestions towards both ENGAGE and the open data community were
derived. Based on the common character that they present (representatives from almost every category of
end-users referred to them), as well as the importance they are reported to have for each end-users’ group,
the suggestions that came out of the analysis performed in the previous chapters are:
Provide a quick, comprehensive, high level description of each dataset; regardless of their profile
and accompanying needs, all kinds of stakeholders wish for such a high level description, in order
to understand quickly whether they are interested in the dataset or not.
Have the datasets available for downloading; this reasonable and expected need comes as a result
of the fact that every stakeholder interested in a dataset wishes for the ability to download it and
deal with it locally/ offline.
Keep/provide historical records regarding each dataset; this information (e.g. who generated the
dataset, what the initial sources were, when it was generated, what the motivation behind its
generation was, etc.) is valuable for judging the reliability of the dataset and is of high importance
to all users (an illustrative record structure is sketched after this list).
Provide the dataset in various/ alternative formats; each end-users’ group may utilise different
tools and/ or operating systems, face particular restrictions etc. Thus, end users wish for the ability
to view and/ or download datasets in different file formats, in order to cope with any situation or
system.
Provide suggestion mechanisms; as it was concluded, all stakeholders wish for suggestions for
relevant datasets in particular, and/ or relevant work in general with the dataset under
consideration. Thus, open data providers are requested to offer suggestion mechanisms.
Provide visualised statistical analysis; all end users have indicated their need for statistical analysis
accompanying all datasets. The statistical analysis refers both to the dataset as a “package” (e.g.
how many times it has been viewed, how many times it has been downloaded, from which
countries, how many comments follow the dataset etc.) and to the content of the package. In
addition, visualisation of the statistical analysis is welcome, as it is more user friendly and
easily understandable.
Allow online processing; all stakeholders wish to contribute to the available datasets in any
possible way (e.g. clean a dataset, enrich a dataset, add metadata to a dataset, etc.). It is requested
that this can be performed online, in order to save time and redundant effort. Of course, the
original/previous dataset should not be lost; proper versioning should be available, in order to
ensure the preservation of all of a dataset’s versions.
Provide comparison and integration mechanisms; this need is as simple as it sounds. End-users wish
for the ability to quickly and easily compare and/or integrate two (or more) datasets, in order to be
aware of the differences between (or among) them and/or take advantage of the combined result.
Provide reporting mechanisms; in case of a false and/ or abusive and/ or broken dataset, end-users
would like to have a fast and reliable way of directly reporting the case to the providers of the
dataset, rather than starting communications via phone or emails.
Provide rating mechanisms; this is probably the most expected (yet not so well established)
feature. End-users need to rate the quality and/or adequacy not only of each dataset, but also of the
various open/public (governmental or not) data providers.
Embed community features in the open/public data platforms; without doubt, offering community
features when dealing with open/public data comes with great added value, and it is a real
need of the end-users too. They ask for the ability to discuss with each other about datasets and
their utilisation, brainstorm on relevant ideas etc.
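To make the metadata-related suggestions above more tangible, the following sketch shows the kind of dataset record they imply; all field names are our own illustrative assumptions, not an actual ENGAGE schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DatasetProvenance:
    generated_by: str            # who generated the dataset
    initial_sources: List[str]   # where the raw data came from
    generated_on: date           # when it was generated
    motivation: str              # why it was generated

@dataclass
class DatasetRecord:
    title: str
    description: str             # quick, comprehensive high-level description
    formats: List[str]           # alternative formats offered for download
    provenance: DatasetProvenance
    versions: List[str] = field(default_factory=list)  # every version is preserved

record = DatasetRecord(
    title="Municipal budgets 2012",
    description="Annual budget figures per municipality.",
    formats=["csv", "json", "rdf"],
    provenance=DatasetProvenance(
        generated_by="Ministry of Finance",
        initial_sources=["municipal annual reports"],
        generated_on=date(2012, 12, 31),
        motivation="transparency initiative",
    ),
)
print(record.title, record.formats)
```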
It is interesting to note that the suggestions towards the open data community derived from the analysis
performed seem to be aligned with policy implications/ recommendations deriving from other research
initiatives (Misuraca, Koussouris, Lampathaki, Kokkinakos, Charalabidis, & Askounis, 2013).
ACKNOWLEDGEMENT
This work has been partly funded by the European Commission through the Project ENGAGE (An
Infrastructure for Open, Linked Governmental Data Provision towards Research Communities and Citizens).
REFERENCES
Misuraca, G., Koussouris, S., Lampathaki, F., Kokkinakos, P., Charalabidis, Y., & Askounis, D. (2013). Deliverable 5.2: Case
studies on specific applications of ICT solutions for policy modelling. CROSSOVER Project.
Tsotsolas N., Alexopoulos S.| Dealing with Robustness in Government Decision-Making using Facilitated
Modelling
ABSTRACT
Policy decisions (e.g. economic, fiscal, development) are often complex and multifaceted and involve many different
stakeholders with different objectives and priorities. Very often decision-makers, when confronted with such problems,
attempt to use intuitive or heuristic approaches to simplify the complexity until the problem seems more manageable.
In this process, important information may be lost, opposing points of view may be discarded, and elements of
uncertainty may be ignored. A crucial issue, when dealing with political decisions, is the radical uncertainty about the
present (e.g. lack or poor quality of information) and also about the future. The latter one addresses the seeming
paradox - how can we be rational in taking decisions today if the most important fact that we know about future
conditions is that they are unknowable? Robustness Analysis is a way of supporting government decision making when
dealing with uncertainties and ignorance. In the present research we discuss the different definitions and approaches of
Robustness Analysis in government decision-making concerning the present and the future as a way to support the
identification of potential robust strategies in policy circles. We also initiate the discussion on how facilitated forms of
MCDA could tackle different aspects associated with government decision making and provide effective support in
dealing with robustness of strategic decisions in designing complex policies with long-term consequences.
KEYWORDS
Robustness Analysis, Government Decision-Making, Facilitated Modelling
1. INTRODUCTION
Government decision-making processes have always been a subject of philosophical enquiry and, for the
last 40 years, an open field of debate in the OR community. Nowadays, this topic is at the
forefront of the discussion because the crisis in Europe has created an urgent need to define
“efficient policy” concepts. The produced policies should be the rational and robust outcome of a collective
decision-making process, based on a well-defined and sound framework of rules and methods.
Policy decisions (e.g. economic, fiscal, development) are often complex and multifaceted and involve many
different stakeholders with different priorities or objectives. Furthermore, policy-making consists of several
sequential actions focusing on the achievement of a specific goal with societal, economic and political
implications, with several feedbacks and loops, so we may describe the whole process as a policy-circle
(Lasswell, 1956). Very often decision-makers, when confronted with such problems, attempt to use
heuristic and intuitive approaches to simplify the complexity until the problem seems more manageable. In
this process, important information may be lost, opposing points of view may be discarded, and elements of
uncertainty may be ignored. In short, there are many reasons to expect that during the evolution of a policy
circle the involved stakeholders will often experience difficulty making informed, thoughtful choices in a
complex decision-making environment involving value trade-offs and uncertainty (McDaniels et al, 1999).
The problem is even more complex if we consider that, according to the principles of sociotechnical design,
specific, and most of the time conflicting, objectives are best met by the joint optimization of technical
and social aspects. Of course, the absence of certainty, the interference of political power and the presence
of complexity shall not be an excuse for inaction in this field.
The well-known area of Multi-criteria Decision Aiding (MCDA) (Roy, 2005; Montibeller and Franco, 2010)
offers techniques designed to deal with situations such as the aforementioned, in which there are multiple
conflicting goals for reaching strategic decisions. Furthermore, the process of creating, evaluating and
implementing strategic political decisions is typically characterised by the consideration of potential
synergies between different options, long term consequences, and the need of key stakeholders to engage
in significant psychological and social negotiation about the strategic decision under consideration. MCDA
can efficiently tackle all these issues. However, the political decision and negotiation process among
members of the governmental committees does not take place in a political vacuum and political conflict is
a reality. Thus, certain adaptations to the methods, tools and processes of MCDA are required if it is to be
effectively applied in such a context (Tsoukias et al., 2013).
In the present research we initiate the discussion concerning the application of robustness analysis as a way
to support the identification of potential robust strategies. We also discuss how facilitated forms of MCDA,
where the model is created and analysed directly with a group of decision-makers in a decision conference,
could tackle different aspects associated with government decision making and provide effective support in
dealing with robustness of strategic decisions in designing complex policies with long-term consequences.
2. GOVERNMENT DECISION-MAKING
Political behaviour is argued to result from interacting political and information-processing mechanisms.
The measures taken in order to meet a certain political goal can create conflict when simultaneously trying
to achieve other goals. Thus, government policy makers seldom seek to maximize a single welfare objective;
typically they are concerned about a bundle of policy objectives, expressed by contributing variables or
indicators, conditional on and constrained by, applicable legislation (André García and Cardenete Flores,
2008). Another important characteristic of the mechanism of political actions is that politics is a game
among forward-looking stakeholders (Lempert and Collins, 2007). As a result, the government’s current
payoffs are equal to the “net present value” of its anticipated future actions (and resulting victories/losses),
not just its present and past policy. In this context actions of government policy makers can be interpreted
as efforts to:
(a) design “efficient” policies (those for which every objective is reached with the minimum loss for
the other relevant objectives) to improve government performance, as measured by well-defined
indicators, while at the same time
(b) maintain a political behaviour true to their “political identity” (ideology, values, interests,
influences).
Political mechanisms are intended to explain how the interests of the participants get balanced,
compromised, suppressed, or are met to varying degrees. Equally important, these mechanisms are
intended to account for whether and how the values and interests of the participants get transformed into
the goals of the political unit as a whole. These political mechanisms, however, are only one part of a full
account of the political decision-making process. Political actors are also problem solvers, who make
decisions by exploring different choices, who plan and execute various actions, who make use of their
knowledge and experience in the pursuit of their interests and goals (Sylvan et al, 1990). This is the area of
political decision making we are studying in this work. Nevertheless, we keep in mind that even the most
systematic operations of a government decision making are not implemented in a political vacuum, and this
reality shall not be absent in the decision models which are proposed by the analysts.
There is an on-going discussion on whether performance-based policies can reach optimum results.
Performance-based government efforts aim to:
clarify the mission and prioritize objectives with an emphasis on the expected results,
develop mechanisms for monitoring and reporting the achievement of those objectives, and
use this information to make decisions about government activities, including making government
more accountable.
The quantitative, decision-analytic framework of the Multi-criteria Decision Aid (MCDA) discipline, to be
presented in more detail in Section 3, offers a wide palette of techniques designed to deal with problems
that involve multiple, conflicting goals, such as the government policy-making objectives. The target of joint
optimization of technical and social aspects indicates the need for a collective, participative sociotechnical
approach informed by both MCDA and “facilitated modelling” (presented in more detail in Section 5),
focusing not only on addressing the challenges involved, but also on exploiting the adaptability and
innovativeness of stakeholders in achieving goals instead of technically over-determining the manner in
which these goals should be attained. It should also be noted that MCDA techniques are particularly
appropriate for serving the need for government accountability, through the measurement of
performance.
A strategic decision has been defined as one that is “important, in terms of the actions taken, the
resources committed, or the precedents it sets”
Strategic decisions are “infrequent decisions made by the top leaders of an organisation that
critically affect organizational health and survival”
The process of creating, evaluating and implementing strategic decisions is typically characterised by
the consideration of high levels of uncertainty, potential synergies between different options, long
term consequences, and the need of key stakeholders to engage in significant psychological and
social negotiation about the strategic decision under consideration
In order to define the “main strategic choices” of an organization Richard (1983) takes into account the
organization’s mode of action and suggests a set of “strategic criteria” that permit an assessment of the
organization’s possibilities for survival and success, and verify the limitations of the economic system. These
criteria can be apportioned into three groups or points of view, according to the organization’s provisional
horizon and the subsystem under study, namely:
• Competitiveness (analysis of the current external environment – a known field).
• Effectiveness (internal analysis of the company – a known field).
• Flexibility (analysis of the future external environment – unknown field that cannot be modelled).
The performance of an organization (or equally of a government) should aim at improving each of the
above three groups of criteria. In other words, an organization is engaged in a strategic path whenever it
chooses to alter the balance of its available resources with the environment. Furthermore, according to the
principles of sociotechnical design, organizational objectives are best met by the joint optimization of the
technical and the social aspects (Cherns, 1976). For the complex, multidimensional process of strategic
decision making in government, see also Option Appraisal: Making informed decisions in government,
2011, National Audit Office, www.nao.org.uk.
3. MULTICRITERIA DECISION AIDING
Complex decision problems need a multicriteria decision analysis (MCDA) approach to be adopted, in order
to take into account all the criteria/options involved in the analytical process of defining the scope of the
decision, to construct a preference model, and to support the decision (see Roy, 1985; Roy and Bouyssou,
1993; Belton and Stewart, 2002; Figueira et al., 2005; Siskos, 2008). A collection of papers dealing with new
trends in multicriteria analysis theory and practice was presented by Zopounidis and Pardalos (2010).
4. ROBUSTNESS ANALYSIS
An essential issue that shall be taken into consideration when implementing a political decision making
process is the need for robustness analysis of the results of this process, given the broad issues and multiple
values being considered (Tsoukias et al, 2013). The stability of a model and/or of a solution should be
assessed and evaluated each time, so that the analyst has a clear picture of the
reliability and robustness of the produced results. Stability and reliability shall be expressed using measures
which are understandable by the analyst and the decision maker, and based on these measures the decision
maker may accept, reject or adapt the proposed decision model. Given that uncertainty is
present and has an influence on every decision-making context, and that it appears in several different ways,
it shall be neither omitted nor neglected. Its importance shall be recognized and it shall be considered in an
appropriate manner. As robustness allows us to experiment with uncertainty, it is necessary to define its
concept and significance and to emphasize its importance in the MCDA field.
Robustness analysis has acquired remarkable importance in recent years. However, there is some
confusion about the different meanings that the term robustness has received. For that reason it is
necessary to consider the different notions behind the word “robustness”, according to Vincke’s approach
(2003):
Robust conclusion – valid in all or most pairs (version, procedure) – dealing with system values and
gap from reality (Roy, 2010)
Robust solution – good in all or most cases– dealing with uncertainty of external environment and
external factors (Kouvelis and Yu, 1997)
Robust decision in dynamic context – keep open as many good plans as possible for the future –
dealing with the unknown future (Rosenhead et al, 1972; Rosenhead, 2003; Haasnoot et al, 2013)
The question is how one can tackle the aforementioned robustness issues. It is believed that
the specification and study of robustness issues are often best achieved in some kind of interactive mode
with those who are faced with the need to decide. That is, the analysis is carried out by, and under the
control of, the relevant policy group, with the assistance of one or more consultants (facilitators).
Furthermore, robustness shall be expressed using measures which are understandable by the analyst and
the decision maker. In this view it is suggested that visual tools may serve this need best.
Robust strategic approaches in political decision-making are commonly expressed by trading some optimal
performance for less sensitivity to assumptions, satisficing over a wide range of futures, and keeping
options open. Relevant research suggests that this often adopted strategy is also usually identified as the
most robust choice. Robust political strategies may be preferable to optimum strategies when the
uncertainty is sufficiently deep and the set of alternative policy options is sufficiently rich (Lempert and
Collins, 2007).
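As a toy illustration of the ‘robust solution’ notion above (in the spirit of Kouvelis and Yu, 1997), the following sketch applies a minimax-regret rule to hypothetical payoffs of three policy options under three future scenarios:

```python
import numpy as np

# Rows: candidate policy options; columns: future scenarios. Payoffs are hypothetical.
payoff = np.array([
    [10.0, 4.0, 6.0],   # option A: best in scenario 1, weak in scenario 2
    [ 7.0, 6.0, 7.0],   # option B: never optimal, but solid everywhere
    [ 5.0, 8.0, 5.0],   # option C: best in scenario 2 only
])

regret = payoff.max(axis=0) - payoff   # loss versus the best option per scenario
worst_regret = regret.max(axis=1)      # each option's worst case over all futures
best = int(worst_regret.argmin())      # the option trading optimality for safety
print(f"Most robust option: {'ABC'[best]} (worst-case regret {worst_regret[best]:.1f})")
```

Option B wins here precisely because it trades some optimal performance for less sensitivity to which future materializes, which is the pattern described above.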
5. FACILITATED MODELLING
Up to now the most usual way to conduct OR consultancy for strategic decision making support in
organizations has been to adopt what is called the “expert mode”, where the operational researcher uses
operational research methods and models that permit an “objective” analysis of the client’s problem
situation, together with the recommendation of optimal solutions. The “expert mode” treats decision
problems as real entities, so the main task of the operational researcher is to represent the real problem
that the client organization is dealing with, avoiding “biases” from different perspectives (Franco and
Montibeller, 2010).
Yet, more often than not, problems are socially constructed, thus the operational researcher has to help a
policy-makers team in negotiating a problem definition that can accommodate their different perspectives.
This process is a participative one, in the sense that participants are able to:
jointly define the situation, structure it, and agree on a focus,
negotiate a shared problem definition by developing a model of organizational objectives,
create, refine and evaluate a portfolio of options/priorities, and
develop action plans for subsequent implementation.
In the case of a policy analysis circle, a participative operational research consultancy process for strategic
decision aid support, other than the “expert mode”, needs to be applied. This process should incorporate the
exploration of the notions of strategic decisions and the decision aid process, the examination of
interconnectedness and long-term consequences as key characteristics of strategic decisions, and the
consideration of the discursive nature of the processes within which strategic decisions are made. Such a
process was proposed by Franco and Montibeller (2010) incorporating “facilitated decision modelling”
(Eden, 1990; Phillips 2007). In facilitated modelling, a management team or a group of policy-makers is
typically placed as responsible for scoping, analysing and solving the problem situation of interest. The
operational researcher acts not only as an analyst, but also as a facilitator to this team. Participants’
interaction with the model reshapes the analysis, and the model analysis reshapes the group discussion.
Facilitated modelling is used as an intervention tool, which requires the operational researcher to carry out
the whole intervention jointly with the client, and enables the accommodation of multiple and differing
positions, possible objectives and strategies among participants (Checkland, 1981; Eden and Ackermann,
2004; Rosenhead and Mingers, 2001; Williams, 2008). As a result, strategic problems frequently require the
facilitated mode, due to their complex social nature and qualitative dimensions, their uniqueness, and the
need to engage a management team in the decision making process (Ackermann and Eden, 2001; Friend
and Hickling, 2005).
Given the nature of the governmental decision-making we think that an adapted approach of a facilitated
MCDA model, based on the ideas of Montibeller and Franco (2010), could efficiently support the following
tasks: defining the problem, scoping participation, tackling uncertainty with future scenarios, considering
multiple objectives, designing and appraising complex strategic options, and finally considering long term
consequences. Special attention shall be paid to how to address issues of robustness at the different stages,
and mostly in scenario planning (Tsoukias et al, 2013). Furthermore, the framework shall describe in detail
the procedure of identifying, prioritizing and using multiple objectives (Keeney, 2013), as well as the
procedure of choosing a multicriteria decision aiding method well adapted to each political decision context
(Roy and Slowinski, 2013).
The types of facilitated modelling that we are including in our approach for political decision making (Figure
1a and Figure 1b) are the following:
Facilitated Problem Structuring: A set of modelling methods collectively known as ‘Soft OR’
methods (tools: Cognitive maps, DFDs, Strategic Options Development and Analysis, Soft Systems
Methodology, Strategic Choice Approach, Organizational Knowledge Management Systems)
Facilitated System Dynamics: Originated in the system dynamics field, it supports the modelling of
systems where dynamics and feedback loops are important in understanding the impact of
decision policies/options over time (tools: Workflow Diagrams, Spiral Method, Adaptive Policy
making, Adaptation Pathways, Dynamic Adaptive Policy Pathways)
Facilitated Decision Analysis: A set of methods that help modelling decisions that involve multiple
objectives and/or uncertainty of outcomes.
Facilitated Robustness Analysis: A set of tools supporting the comprehension of robustness and its
handling
Figure 1a. Facilitated Approach in Political Decision Making (1st part)
Figure 1b. Facilitated Approach in Political Decision Making (2nd part)
7. CONCLUSIONS
Is it a utopia to have an informed government using the scientific tools of OR to produce viable
and robust strategic plans? It is certainly not a straightforward task (Andersson et al, 2012), but efforts are
undertaken in that direction. There are organisations such as GORS (the Government Operational
Research Service in the UK, which supports policy-making, strategy and operations in many different
departments and agencies and employs around 400 analysts; http://www.operational-research.gov.uk/recruitment)
and the RAND Corporation (a non-profit institution that since 1948 has helped improve policy and decision
making through research and analysis, with approximately 1,700 people from more than 50 countries;
http://www.rand.org/) which insist on using scientific tools in the political decision field. If a
government intends to take decisions based on fairness, objectivity and thoroughness, then the proposed
holistic approach has more strengths than entirely non-scientific approaches such as TINA (which stands for
There Is No Alternative; see The Guardian, 4 May 2013,
http://www.theguardian.com/science/life-and-physics/2013/may/04/no-alternative-bayes-penalties-philosophy-thatcher-merkel).
Concluding, we believe that, given the importance of political strategic decision making for the survival of
any political system, further developments in this field could not only bring opportunities for
research on the several challenges we highlighted here, but also have a real impact on MCDA practice.
More studies on the robustness of strategic options under multiple scenarios are required; for example, about
suitable operators and graphical displays for interacting with policy makers. As far as the design of complex
policies is concerned, structuring policies composed of interconnected options is an area almost
unexplored from a decision analysis perspective, and ideas from the field of problem structuring methods
may be relevant for this intent. Furthermore, given the special nature of political decision making, it
would be interesting to assess the impacts of the framework we are suggesting and the effectiveness of
strategy workshops, as well as the overall usefulness of the framework in increasing our understanding of
decision analytical support at the political strategic level.
ACKNOWLEDGEMENT
This research has been co-financed by the European Union (European Social Fund) and Greek national funds
through the Operational Program "Education and Lifelong Learning".
REFERENCES
Ackermann F. and Eden C., 2001. Contrasting single user and networked group decision support systems for strategy
making. Group Decision and Negotiation, Vol. 10, No. 1, pp. 47–66
Andersson A., Grönlund A. and Åström J., 2012. “You can't make this a science!”—Analyzing decision support systems in
political contexts. Government Information Quarterly, Vol. 29, pp. 543–552
André García F.J. and Cardenete Flores M.A., 2008. Economic and environmental efficient policies in an applied general
equilibrium framework. Ekonomiaz: revista vasca de economía, Vitoria-Gasteiz: Servicio Central de Publ. del Gobierno
Vasco, Vol. 67, 1, pp. 72-91. (in Spanish)
Belton V. and Stewart T., 2002. Multiple criteria decision analysis: An integrated approach. Kluwer Academic Publishers,
Dordrecht
Cherns A., 1976. The Principles of sociotechnical design. Human Relations. Vol. 29, No. 8, pp. 783-792
David F.R., 2009. Strategic Management: Concepts and Cases, 12th edition. Prentice Hall, Upper Saddle River
Eden C., 1990. The unfolding nature of group decision support: Two dimensions of skill. In: Eden C. and Radford J. (eds.), Tackling Strategic Problems: The Role of Group Decision Support. Sage, London, pp. 48–52
Eden C. and Ackermann F., 2004. Use of ‘Soft OR’ models by clients: What do they want from them? In: Pidd M., (Ed.)
Systems Modeling: Theory and Practice. Wiley, Chichester, pp. 146–163
Figueira J., Greco S. and Ehrgott M., 2005. Multiple Criteria Decision Analysis: State of the Art Surveys. Kluwer Academic Publishers, Dordrecht
Franco L.A. and Montibeller G., 2010. Facilitated modeling in operational research. European Journal of Operational
Research, Vol. 205, pp. 489-500
Friend J. and Hickling A., 2005. Planning Under Pressure: The Strategic Choice Approach. Third ed. Elsevier
Haasnoot M., Kwakkel J. H., Walker Warren E. and Maat J., 2013. Dynamic adaptive policy pathways: A method for
crafting robust decisions for a deeply uncertain world. Global Environmental Change, Vol. 23, No. 2, pp. 485-498
Keeney R.L., 2013. Identifying, prioritizing, and using multiple objectives. EURO Journal on Decision Processes, Vol. 1,
No. 1-2, pp. 45-67
Kouvelis P. and Yu G., 1997. Robust Discrete Optimisation and Its Applications. Kluwer Academic Publishers, Netherlands
Lasswell H.D., 1956. The decision process: seven categories of functional analysis. University of Maryland Press, College
Park
Lempert R.J. and Collins M.T., 2007. Managing the Risk of Uncertain Threshold Responses: Comparison of Robust, Optimum, and Precautionary Approaches. Risk Analysis, Vol. 27, No. 4, pp. 1009-1026
McDaniels T.L., Gregory R.S. and Fields D., 1999. Democratizing risk management: Successful public involvement in local water management decisions. Risk Analysis, Vol. 19, pp. 497–510
Montibeller G. and Franco A., 2010. Multi-Criteria Decision Analysis for Strategic Decision Making. In: Zopounidis, C. and
Pardalos, P.M. (eds) Handbook of Multicriteria Analysis. Springer New York, pp. 25-48
Phillips L., 2007. Decision conferencing. In: Edwards W., Miles R. Jr,. and von Winterfeldt D., (eds.), Advances in Decision
Analysis: From Foundations to Applications. Cambridge University Press, New York, pp. 375–399
Richard J.L., 1983. Aide à la Décision Stratégique en PME. In: Jacquet-Lagrèze, E. et Siskos, J. (eds.), Méthode de Décision
Multicritère. Hommes et Techniques, Paris, pp. 119–142
Rosenhead J., 2002. Robustness Analysis. Newsletter of the European Working Group “Multiple Criteria Decision Aiding”,
Series 3, No. 6, pp. 6-10
Rosenhead J., Elton M. and Gupta, S.K., 1972. Robustness and optimality as criteria for strategic decisions. Operational
Research Quarterly, Vol. 23, No. 4, pp. 413-430
Rosenhead J. and Mingers J., 2001. A new paradigm of analysis. In: Rosenhead J. and Mingers J. (eds.), Rational Analysis
for a Problematic World Revisited: Problem Structuring Methods for Complexity, Uncertainty, and Conflict. Wiley,
Chichester, pp. 1–19
Roy B., 2005. Paradigms and challenges. In: Figueira J., Greco S. and Ehrgott M. (eds), Multiple Criteria Decision Analysis - State of the Art Surveys. Springer, pp. 3-24
Roy B., 2010. Robustness in operational research and decision aiding: A multi-faceted issue. European Journal of
Operational Research, Vol. 200, pp. 629-638
Roy B. and Bouyssou D., 1993. Aide multicritère à la décision: Méthodes et cas. Economica, Paris
Roy B. and Słowiński R., 2013. Questions guiding the choice of a multicriteria decision aiding method. EURO Journal on Decision Processes, Vol. 1, No. 1-2, pp. 69-97
Siskos Y., 2008. Decision Models. New Technologies Publications, Athens (in Greek)
Sylvan D.A., Goel A. and Chandrasekaran B., 1990. Analyzing political decision making from an information-processing
perspective: JESSE. American Journal of Political Science. Vol. 34, No. 1, pp. 74-123
Tsoukias A., Montibeller G., Lucertini G. and Belton V., 2013. Policy analytics: an agenda for research and practice. EURO
Journal on Decision Processes, Vol. 1, No. 1-2, pp. 115-134
Vincke P., 2003. About Robustness Analysis. Newsletter of the European Working Group “Multiple Criteria Decision
Aiding”, Series 3, No. 8, pp. 7-9
Zopounidis C. and Pardalos P.M., 2010. Handbook of Multicriteria Analysis. Springer, New York
Papastamatiou I., Doukas H., Psarras J. | Heterogeneous management information software for
supporting energy-related decision making problems
Abstract
The alternative energy solutions of a decision-making problem can be either quantitative or qualitative by nature. The complexity of the solution frequently makes the use of non-homogeneous information necessary in order to deal with such problems. Therefore, the use of non-homogeneous information in decision-making problems is not an unusual situation [1,2,3], with proposals that combine numeric expressions of preference, fuzzy preference relations, multiplicative preference relations, utility preferences, preferences expressed by numerical intervals, etc. However, most of the proposals for handling heterogeneity of information in decision making focus on cases where the alternatives are expressed through real values [15,16], through value intervals [17,18] or through linguistic labels [19,20] that refer to a specific set of linguistic terms.
In this paper, an information management tool has been developed that successfully implements the three phases of the Herrera aggregation process [4] for dealing with non-homogeneous contexts, taking as its base the 2-tuple fuzzy linguistic representation model [5]:
(a) unification of the information,
(b) aggregation of the preference values, and
(c) transformation into 2-tuples.
The calculation is carried out in an Excel workbook that makes extensive use of macros written in the Visual Basic for Applications (VBA) programming language. First, the software unifies the heterogeneous information into a specific linguistic domain, a Basic Linguistic Term Set (BLTS), St. Each numerical, interval-valued and linguistic value is expressed by means of a fuzzy set on the BLTS, F(St). The process is carried out in the following order:
Transforming numerical values in [0,1] into F(St)
Transforming linguistic terms into F(St)
Transforming interval values into F(St).
In particular, for the transformation of the linguistic terms, an algorithm has been developed in which the multivector sets S and St are divided into four separate segments, leading to a calculation divided into four different curves; finally, the intersection points for each segment are calculated. In the 2nd phase, the tool aggregates individual performance values: for each alternative, a collective performance value is obtained by aggregating the above fuzzy sets on the BLTS. In the 3rd phase, the tool transforms the fuzzy sets into linguistic 2-tuples. Concerning the calculation of the unification phase (a) for non-homogeneous information, to the best of our knowledge no such mechanism has been proposed in the existing bibliography. The software also offers a variety of additional options: it allows the user to alter the BLTS and all the parameters in order to get multiple results quickly. The features described above assist any user dealing with energy-related decision-making problems by allowing them to unify non-homogeneous information quickly.
KEYWORDS
Decision making problems, heterogeneous information, software for supporting energy-related decision making
problems.
Corresponding author. Email address: [email protected].
1. INTRODUCTION
The linguistic 2-tuple representation model [10] unifies the non-homogeneous input information into a unique domain, in this case a linguistic one called the Basic Linguistic Term Set (BLTS), and expresses the information by means of fuzzy sets over the BLTS.
The 2-tuple model offers two main advantages over other models [11]:
The linguistic domain can be treated as continuous and not discrete as, for example, in the symbolic
model
The linguistic computational model based on linguistic 2-tuples carries out processes of computing
with words easily and without loss of information.
Definition 1. Let $\beta$ be the result of an aggregation of the indices of a set of labels assessed in a linguistic term set $S=\{s_0,\dots,s_g\}$, i.e., the result of a symbolic aggregation operation, $\beta\in[0,g]$. Let $i=\mathrm{round}(\beta)$ and $\alpha=\beta-i$ be two values such that $i\in[0,g]$ and $\alpha\in[-0.5,0.5)$; then $\alpha$ is called a symbolic translation.
Definition 2. Let $S=\{s_0,\dots,s_g\}$ be a linguistic term set and $\beta\in[0,g]$ a value representing the result of a symbolic aggregation operation; then the 2-tuple that expresses the information equivalent to $\beta$ is obtained with the following function:
$\Delta:[0,g]\to S\times[-0.5,0.5)$, $\Delta(\beta)=(s_i,\alpha)$, with $i=\mathrm{round}(\beta)$ and $\alpha=\beta-i$, $\alpha\in[-0.5,0.5)$.
Proposition 1. Let $S=\{s_0,\dots,s_g\}$ be a linguistic term set and $(s_i,\alpha)$ be a 2-tuple. There is always a function $\Delta^{-1}$ such that, from a 2-tuple, it returns its equivalent numerical value $\beta\in[0,g]$:
$\Delta^{-1}: S\times[-0.5,0.5)\to[0,g]$, with $\Delta^{-1}(s_i,\alpha)=i+\alpha=\beta$
Example. A numerical value $\beta=3.25$ is transformed into a 2-tuple on a 7-label linguistic term set, and back again. The arithmetic mean of the labels (L, VL, VH, P) gives $\beta=(2+1+5+6)/4=3.25$; then $\Delta(3.25)=(s_3,0.25)$, with $i=\mathrm{round}(3.25)=3$ and $\alpha=0.25\in[-0.5,0.5)$, and $\Delta^{-1}(s_3,0.25)=3.25$.
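The two functions and the example above can be illustrated with a minimal Python sketch (an illustration only, not the paper's Excel/VBA tool; the label names are the 7-term set N, VL, L, M, H, VH, P used later in the paper):

```python
# Minimal sketch of Delta and Delta^{-1} (Definitions 1-2, Proposition 1).
import math

def delta(beta, labels):
    """Delta(beta) = (s_i, alpha), with i = round(beta) and alpha = beta - i."""
    i = int(math.floor(beta + 0.5))   # round half up, so alpha stays in [-0.5, 0.5)
    return labels[i], beta - i

def delta_inv(label, alpha, labels):
    """Delta^{-1}(s_i, alpha) = i + alpha = beta."""
    return labels.index(label) + alpha

S = ["N", "VL", "L", "M", "H", "VH", "P"]   # s0..s6
beta = (2 + 1 + 5 + 6) / 4                  # arithmetic mean of L, VL, VH, P
print(delta(beta, S))                       # ('M', 0.25), i.e. (s3, 0.25)
print(delta_inv("M", 0.25, S))              # 3.25
```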
Remark 1. From Definitions 1 and 2 and from Proposition 1, it is obvious that the conversion of a linguistic term into a linguistic 2-tuple consists of adding the value 0 as symbolic translation: $s_i\in S\Rightarrow(s_i,0)$.
Two 2-tuples $(s_k,\alpha_1)$ and $(s_l,\alpha_2)$ are compared as follows:
If $k<l$, then $(s_k,\alpha_1)$ is smaller than $(s_l,\alpha_2)$.
If $k=l$, then: if $\alpha_1=\alpha_2$, the two 2-tuples represent the same information; if $\alpha_1<\alpha_2$, then $(s_k,\alpha_1)$ is smaller than $(s_l,\alpha_2)$; and if $\alpha_1>\alpha_2$, then $(s_k,\alpha_1)$ is bigger than $(s_l,\alpha_2)$.
1. Unification. Making the information uniform: the non-homogeneous information will be unified into a specific linguistic domain, called the BLTS, ST (basic linguistic term set). Each numerical, interval-valued and linguistic preference value is expressed by means of a fuzzy set on the BLTS, F(ST). This is achieved through three steps: transforming numerical values in [0,1] into F(ST); transforming linguistic terms into F(ST); and transforming interval values into F(ST).
2. Aggregating individual preference values. For each pair of alternatives, a collective preference value is obtained by aggregating the above fuzzy sets on the BLTS that represent the individual preference values assigned by the experts. Each collective preference value is therefore a fuzzy set on the specific linguistic domain, the BLTS.
3. Transformation into 2-tuple. This step facilitates the exploitation phase, during which the collective preferences are ranked in order to obtain the best solution. Turning the fuzzy sets of the previous step into 2-tuples makes them easier to rank.
In the following paragraphs, the aggregation process’ steps are explained in detail in conjunction with their
implementation by the software.
2.1. Unification
The heterogeneous information needs to be unified into a specific linguistic domain, as mentioned before. The first thing to be done is to choose a BLTS. In order to do so, we need to study the linguistic term set S. If:
S is a fuzzy partition, and
the membership function of each term is triangular, i.e., $s_i=(a_i,b_i,c_i)$,
then S is selected as the BLTS, because these are the necessary and sufficient conditions for transforming values in [0,1] into 2-tuples with no loss of information.
A different, more practical approach is to choose a BLTS with more terms than the linguistic term set (usually 11 or 13 [8]).
Definition 3. [9] The function $\tau$ transforms a numerical value $\vartheta\in[0,1]$ into a fuzzy set on $S_T$:
$\tau:[0,1]\to F(S_T)$, $\tau(\vartheta)=\{(s_k,\gamma_k),\ k=0,\dots,g\}$, where $\gamma_k=\mu_{s_k}(\vartheta)$.
Proposition 2. We consider membership functions $\mu_{s_k}(\cdot)$ for the linguistic labels $s_k\in S_T$ that are represented by a parametric function. A special case are the linguistic assessments whose membership functions are triangular.
Example 1 (figure 4). Let $\vartheta=0.78$ be the numerical value to be transformed into a fuzzy set on $S=\{s_0,\dots,s_4\}$; the semantics of the terms are shown in figure 4.
Following Definition 3, the numerical value 0.78 intersects the BLTS at two consecutive labels, with membership degrees 0.32 and 0.68 respectively.
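A minimal Python sketch of Definition 3 is given below; it assumes uniformly distributed triangular labels, whereas Example 1 uses the paper's own semantics, so the membership degrees it prints differ from the 0.32 and 0.68 above:

```python
# Minimal sketch: numerical value -> fuzzy set on the BLTS (Definition 3),
# assuming uniformly distributed triangular labels on [0, 1].

def triangular_labels(g):
    """Triangular labels s_0..s_g, symmetrically distributed on [0, 1]."""
    step = 1.0 / g
    return [(max((k - 1) * step, 0.0), k * step, min((k + 1) * step, 1.0))
            for k in range(g + 1)]

def mu(theta, abc):
    """Triangular membership function with parameters (a, b, c)."""
    a, b, c = abc
    if theta < a or theta > c:
        return 0.0
    if theta <= b:
        return 1.0 if b == a else (theta - a) / (b - a)
    return 1.0 if c == b else (c - theta) / (c - b)

def tau_numeric(theta, labels):
    """tau(theta) = [mu_{s_k}(theta) for k = 0..g]."""
    return [mu(theta, abc) for abc in labels]

print(tau_numeric(0.78, triangular_labels(4)))   # 5 labels: approx [0, 0, 0, 0.88, 0.12]
```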
The above process is implemented by the software using a calculation tool in Excel, whose sheet is depicted below. The calculation phases are numbered 1 to 5.
Step 1: The user inserts the number of labels for the BLTS and the numerical value to be transformed. This is the only involvement of the user; the rest of the process is automatic and the results appear in step 5.
Step 2: The software defines the upper and the lower limits of the triangle multivectors.
Step 3: Depending on the interval in each triangle, the appropriate mathematical equation is selected.
The function $\tau_{SS_T}$ transforms a linguistic term of S into a fuzzy set on the BLTS: $\tau_{SS_T}(l_i)=\{(s_k,\gamma_k),\ k=0,\dots,g\}$, with $\gamma_k=\max_y\min\{\mu_{l_i}(y),\mu_{s_k}(y)\}$, where $F(S_T)$ is the set of fuzzy sets on $S_T$, and $\mu_{l_i}(\cdot)$ and $\mu_{s_k}(\cdot)$ are the membership functions associated with the terms $l_i$ and $s_k$ respectively. Therefore, the result for every linguistic value of S is a fuzzy set defined in the BLTS, $S_T$.
Example 2 (figure 6). Let the set S be a linguistic triangular scale with 5 labels ($l_0,\dots,l_4$) and $S_t$ a scale with 7 labels ($s_0,\dots,s_6$). The result of the transformation of $l_1$ onto $S_t$ is: tsst(l1) = {(s0, 0.39), (s1, 0.85), (s2, 0.85), (s3, 0.39), (s4, 0), (s5, 0), (s6, 0)}.
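The transformation can be sketched in Python as follows (continuing the previous sketch, whose triangular_labels() and mu() it reuses; the supremum is approximated on a 0.01 grid over the domain, while the tool steps along the membership axis, so the degrees come out as the slightly different values used in chapter 3, namely 0.4 and 0.8):

```python
# Minimal sketch: linguistic term l_i -> fuzzy set on the BLTS,
#   gamma_k = sup_y min(mu_{l_i}(y), mu_{s_k}(y)),
# reusing triangular_labels() and mu() from the previous sketch.

def tau_linguistic(i, g_src, g_blts, step=0.01):
    src = triangular_labels(g_src)      # source scale l_0..l_{g_src}
    blts = triangular_labels(g_blts)    # target BLTS s_0..s_{g_blts}
    grid = [j * step for j in range(int(1 / step) + 1)]
    return [round(max(min(mu(y, src[i]), mu(y, sk)) for y in grid), 2)
            for sk in blts]

print(tau_linguistic(1, 4, 6))   # l1 of a 5-label scale onto a 7-label BLTS:
                                 # [0.4, 0.8, 0.8, 0.4, 0.0, 0.0, 0.0]
```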
This part is more complicated, so only part of its Excel sheet is presented, with its steps numbered 1 to 5.
Step 1: The user inserts the scales of S and St respectively. The calculation is performed automatically and the result is depicted in step 5. To ease comprehension, more details of the steps are depicted in figures 7 to 9.
Figure 7: Transforming the 5-label linguistic scale into 7-label FS in the BLTS
The solving process is structured around an algorithm in which the multivectors of the sets S and St are divided into four separate segments, so that the calculations reduce to four different linear functions.
Step 2: The y axis is divided into equal spaces with a step of 0.01 and, after extensive trial-and-error, the correct intersection points with $\mu_{s_i}$ are found.
Figure 8: Transforming the 5-label linguistic scale into 7-label FS in the BLTS Step 1 & 2
Step 3: All the intersection points of all four graphical representations are aggregated.
The process could end here, but to better aid the user, the results are visualized with the help of Visual Basic in step 5 (figure 9). The code reads the aggregated results and finds the intersection points, which appear in red once the user presses "Execute".
Figure 9: Transforming the 5-label linguistic scale into 7-label FS in the BLTS Step 5
Let I be an interval value in [0,1]. To achieve this transformation, we assume that the interval has a representation as a membership function, namely the square pulse $\mu_I(y)=1$ if $y\in I$ and $\mu_I(y)=0$ otherwise.
Definition 6. Let $S_T=\{s_0,\dots,s_g\}$ be the BLTS. Then the function $\tau_I$ transforms an interval I into a fuzzy set on $S_T$:
$\tau_I(I)=\{(s_k,\gamma_k),\ k=0,\dots,g\}$, with $\gamma_k=\max_y\min\{\mu_I(y),\mu_{s_k}(y)\}$,
where $F(S_T)$ is the set of fuzzy sets defined on $S_T$, and $\mu_I(\cdot)$ and $\mu_{s_k}(\cdot)$ are the membership functions associated with the interval I and the terms $s_k$, respectively.
Example 3 (figure 10). Let $I=[0.7,0.9]$ be the interval value to be transformed into a fuzzy set on $S_T$, with 5 terms symmetrically distributed. The derived fuzzy set is:
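A minimal Python sketch computing this example follows (it continues the previous sketches, reusing triangular_labels() and mu(); the uniform 5-term partition is an assumption, and the grid approximation also covers the vertex check described in Step 3 below):

```python
# Minimal sketch: interval I in [0,1] -> fuzzy set on the BLTS (Definition 6),
# with the interval represented as a square pulse mu_I; reuses
# triangular_labels() and mu() from the previous sketches.

def tau_interval(lo, hi, g_blts, step=0.01):
    blts = triangular_labels(g_blts)
    grid = [j * step for j in range(int(1 / step) + 1)]
    pulse = lambda y: 1.0 if lo <= y <= hi else 0.0   # mu_I of the square pulse
    return [round(max(min(pulse(y), mu(y, sk)) for y in grid), 2)
            for sk in blts]

print(tau_interval(0.7, 0.9, 4))   # I=[0.7, 0.9], 5 terms: [0.0, 0.0, 0.2, 1.0, 0.6]
```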
The calculation of the above in Excel is depicted in figures 11, 12 and 13.
Figure 11: Excel sheet for the interval transformation (input phase). The user enters the two interval endpoints as numerical values in (0…1), here 0.82 and 0.89. For each endpoint the sheet builds, exactly as in the numerical-value sheet (Step 1), the triangular limits (a, b=d, c) of the 7-label BLTS: s0=(0.00, 0.00, 0.17), s1=(0.00, 0.17, 0.33), s2=(0.17, 0.33, 0.50), s3=(0.33, 0.50, 0.67), s4=(0.50, 0.67, 0.83), s5=(0.67, 0.83, 1.00), s6=(0.83, 1.00, 1.00); labels s7 to s9 are not defined. The parameters γi=μsi(θ) are then calculated from (θ−a)/(b−a) for a≤θ≤b and (c−θ)/(c−d) for d≤θ≤c.
Figure 12: Transforming the interval value into FS in BLTS, Steps 1 & 2
Figure 13: Transforming the interval value into FS in BLTS, Steps 3, 4 & 5
Final result (Step 5): the derived fuzzy set is {(s0, 0.00), (s1, 0.00), (s2, 0.00), (s3, 0.00), (s4, 0.08), (s5, 1.00), (s6, 0.34), (s7, 0.00), (s8, 0.00), (s9, 0.00)}.
As in the previous steps, the user inserts the desired linguistic scale and the interval value, and the result appears in step 5.
Step 1: The triangular multivector is calculated and, depending on where the endpoints of the interval fall, the appropriate mathematical formula is applied, in the same way as for the transformation of a numerical value.
Step 2: The intersection points are calculated in the same way, but for both borders of the interval.
Step 3: There is also the possibility that the square pulse contains a vertex. In this case the maximum membership value is 1. To check this, the software examines whether the lower limit lies to the left of a vertex while the upper limit lies to the right of the same vertex; in that case there is definitely a 1, and it is the maximum possible value.
The aggregated fuzzy set on the BLTS is then transformed into a numerical value β through the function χ [4]:
$\chi(F(S_T))=\chi(\{(s_j,\gamma_j),\ j=0,\dots,g\})=\dfrac{\sum_{j=0}^{g} j\,\gamma_j}{\sum_{j=0}^{g}\gamma_j}=\beta$
Therefore, implementing Definition 2, i.e., applying the Δ function to β, we obtain a collective preference relation whose values are expressed by 2-tuples.
For better comprehension, the commands that implement the transformation into 2-tuples are showcased in chapter 3 with a numerical example. However, a short description is given here:
Step 4: The integer part of β is compared to the numbers corresponding to the linguistic values (see the example in chapter 3) and a linguistic value is chosen.
Step 5: The 2-tuple is formed from the linguistic value and the decimal part of β, and appears as the result.
3. NUMERICAL EXAMPLE
Table 1: Alternatives
A1 | Natural Gas
A2 | Heating oil
A3 | Electrical heat pump
A4 | Fireplace
Table 2: Criteria
C1 | Reduction of CO2 emissions | Numerical values
C2 | Quality of heating | Linguistic terms
C3 | Financial savings from fuel cost | Interval values
The performance values for criterion C2 are:
 | A1 | A2 | A3 | A4
C2 | H | H | M | L
Note that the quality of heating is expressed through linguistic terms of a 5-term scale (0=VL, 1=L, 2=M, 3=H, 4=VH).
3.1. Unification
3.1.1. Choosing the BLTS
As mentioned before, a scale larger than the scale of the current linguistic terms must be selected. In this case a scale of 7 is selected, since it is the smallest possible one. The seven linguistic terms of the BLTS are 0=N, 1=VL, 2=L, 3=M, 4=H, 5=VH, 6=P.
Figure 14: Diagram of intersection points between the numerical value and the BLTS
Figure 15: Diagram of intersection points between the linguistic term and the BLTS
tsst(l3)= {(s0,0), (s1, 0), (s2,0), (s3,0.4), (s4,0.8), (s5, 0.8), (s6,0.4)}.
Similarly
tsst(l2)= {(s0,0), (s1, 0.2), (s2,0.6), (s3,1), (s4,0.6), (s5, 0.2), (s6,0)} (for Μ)
tsst(l1)= {(s0,0.4), (s1, 0.8), (s2,0.8), (s3,0.4), (s4,0), (s5, 0), (s6,0)} (for L)
Figure 16: Diagram of intersection points between the interval value and the BLTS
The remaining values are transformed similarly.
To aggregate the data, the arithmetic mean is used. For example, for A1:
tsst(l3)= {(s0,0), (s1, 0), (s2,0), (s3,0.4), (s4,0.8), (s5, 0.8), (s6,0.4)}
The arithmetic mean for the terms s0, s1 and s2 is 0; for s4 it is 0.3, etc. The results of the aggregation are depicted in table 5:
Applying the function χ to each collective fuzzy set yields the corresponding numerical value: $\beta=\dfrac{\sum_{j=0}^{g} j\,\gamma_j}{\sum_{j=0}^{g}\gamma_j}$
The other alternatives are calculated similarly and the result appears on screen. The basic commands of the software are depicted below.
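As a language-neutral illustration of what these commands compute, the aggregation, the function χ and the function Δ can be combined as in the following Python sketch (not the actual VBA macros; delta() is the function from the earlier sketch, and the three input vectors are illustrative placeholders rather than the paper's data for A1):

```python
# Minimal sketch of the exploitation pipeline: element-wise arithmetic mean
# of the unified fuzzy sets, chi to get beta, delta to get the 2-tuple.
# Reuses delta() from the earlier sketch; the inputs below are hypothetical.

def chi(gammas):
    """chi({(s_j, gamma_j)}) = sum(j * gamma_j) / sum(gamma_j) = beta."""
    return sum(j * g for j, g in enumerate(gammas)) / sum(gammas)

def aggregate(fuzzy_sets):
    """Element-wise arithmetic mean of fuzzy sets on the same BLTS."""
    return [sum(col) / len(col) for col in zip(*fuzzy_sets)]

S7 = ["N", "VL", "L", "M", "H", "VH", "P"]
c1 = [0, 0, 0, 0.4, 0.8, 0.8, 0.4]   # unified numerical value (hypothetical)
c2 = [0, 0, 0, 0.4, 0.8, 0.8, 0.4]   # unified linguistic term (hypothetical)
c3 = [0, 0, 0.2, 1.0, 0.6, 0, 0]     # unified interval value  (hypothetical)

beta = chi(aggregate([c1, c2, c3]))
print(delta(beta, S7))               # roughly ('H', 0.15) for these inputs
```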
A1 | A2 | A3 | A4
(H, 0.39) | (M, 0.37) | (L, 0.38) | (M, 0.21)
The user can thus easily decide which solution is better; in this case, A1.
4. CONCLUSIONS
The software showcased in this paper offers a fast, reliable and easy way to solve decision-making problems with heterogeneous information, by transforming it into 2-tuples. Concerning the unification phase, to the best of our knowledge no such mechanism has yet been proposed in the existing bibliography.
The software's greatest asset is that the user is not required to know the mathematical background of the process and only needs to choose the input data and the BLTS.
In addition, the software offers a variety of extra options, as it allows the user to easily alter the BLTS and all the parameters in order to get multiple results.
The features described above assist any user dealing with energy-related decision-making problems by allowing them to unify non-homogeneous information quickly.
REFERENCES
[1] F. Chiclana, F. Herrera, E. Herrera-Viedma, Integrating three representation models in fuzzy multipurpose decision
making based on fuzzy preference relations, Fuzzy Sets and Systems 97 (1998) 33–48
[2] Z.-P. Fan, J. Ma, Q. Zhang, An approach to multiple attribute decision making based on fuzzy preference information
alternatives, Fuzzy Sets and Systems 131 (1) (2002) 101–106.
[3] Q. Tian, J. Ma, O. Liu, A hybrid knowledge and model system for R&D project selection, Expert Systems with Applications 23 (3) (2002) 121–152.
[4] Herrera F, “Managing non-homogeneous information”, European Journal of Operational Research, 2005 166, pp.
115–132
[5] Doukas H, “Modelling of linguistic variables in multicriteria energy policy support”, European Journal of Operational
Research, 2013, 227 (2) , pp. 227-238.
[6] M. Roubens, Fuzzy sets and decision analysis, Fuzzy Sets and Systems 90 (1997) 199–206.
[7] M. Delgado, J.L. Verdegay, M.A. Vila, On aggregation operations of linguistic labels, International Journal of
Intelligent Systems 8 (1993) 351–370.
[8] G.A. Miller, The magical number seven plus or minus two: Some limits on our capacity of processing information,
Psychological Review 63 (1956) 81–97.
[9] F. Herrera, L. Martínez, An approach for combining linguistic and numerical information based on 2-tuple fuzzy representation model in decision-making, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 8 (5) (2000) 539–562.
[10] F. Herrera, L. Martínez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Transactions on Fuzzy Systems 8 (6) (2000) 746–752.
[11] F. Herrera, L. Martínez, The 2-tuple linguistic computational model. Advantages of its linguistic description, accuracy and consistency, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9 (2001) 33–49.
[12] K.J. Arrow, Social Choice and Individual Values, Yale University Press, New Haven, CT, 1963.
[13] S.A. Orlovsky, Decision-making with a fuzzy preference relation, Fuzzy Sets Systems 1 (1978) 155–167.
[14] M. Roubens, Some properties of choice functions based on valued binary relations, European Journal of
Operational Research 40 (1989) 309–321.
[15] J. Kacprzyk, Group decision making with a fuzzy linguistic majority, Fuzzy Sets and Systems 18 (1986) 105–118.
[16] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Transactions on Systems, Man, and Cybernetics 18 (1988) 183–190.
[17] S. Kundu, Min-transitivity of fuzzy leftness relationship and its application to decision making, Fuzzy Sets and
Systems 86 (1997) 357–367.
[18] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Transactions on Systems, Man, and Cybernetics 18 (1988) 183–190.
[19] F. Herrera, E. Herrera-Viedma, Linguistic decision analysis: Steps for solving decision problems under linguistic
information, Fuzzy Sets and Systems 115 (2000) 67–82.
[20] R.R. Yager, An approach to ordinal decision making, International Journal of Approximate Reasoning 12 (1995) 237–
261.
Dermitzakis E., Rokou E., Kirytopoulos K. | Multi-project Multi-mode Resource Constrained Project
Scheduling with Ant Colony Optimization
Abstract
In today’s rapidly evolving management world, the scheduling of multiple projects, where each one’s execution depends on another’s successful completion, is of great importance. This paper presents an ACO algorithm for the multi-mode
resource constrained multi-project scheduling problem (MRCMPSP). The proposed idea is grounded in the concept of
prioritizing the sub-projects’ scheduling based on: a) the number of external (to other sub-projects) relations and b) the
resource requirements as compared to the resource shortage for each resource type and each sub-project. The
implementation is based on two nested ACO algorithms, where the outer ACO algorithm, named MODACO, deals with
the classification and prioritization of the projects to be scheduled as well as the mode selection for each activity of
each project and the inner ACO algorithm, named ALACO, is called by MODACO to perform the activity list optimization
for each project. The proposed method was validated using a considerable number of multi-mode PSPLIB (Kolisch and Sprecher, 1997a) data sets.
KEYWORDS
Project scheduling, multi-project scheduling, multi-mode resource constrained project scheduling, MRCPSP, ant colony
optimization
1. INTRODUCTION
The classical Resource-Constraint Project Scheduling Problem (RCPSP) has been studied and developed
extensively over the last decades (Davis, 1973, Weglarz, 1980, Christofides et al., 1987, Davis et al., 1992,
Ulusoy and Ozdamar, 1994, Icmeli and Erenguc, 1996, Demeulemeester and Herroelen, 1997, Hartmann
and Drexl, 1998, Herroelen et al., 1998). In the RCPSP we have a project of n activities that represent actual work, plus two dummy activities that represent the start and finish of the project. This specific problem is constrained both by time and by resource availability. The time constraints are caused by the relationships between the activities and by the activities' durations. The resource constraints arise from the relation between the availability of a resource type at a given time and the resource demand of all the activities being executed at that time, based on a specific project schedule.
The activity concept as given in the standard RCPSP has been extended by allowing several alternatives (modes) in which an activity can be performed. In the so-called multi-mode resource-constrained project scheduling problem (MRCPSP), each activity can be performed in one out of several modes (Elmaghraby, 1964). Each mode reflects a feasible way to combine an alternative duration and different levels of resource requirements that allow accomplishing the underlying activity. The idea is based on the assumption that by using more resources of the same type, or more efficient types of resources, it is possible to achieve shorter execution times.
The multi-project MRCPSP is a generalization of the multi-mode resource constrained project scheduling
problem (MRCPSP). Although RCPSP and MRCPSP have been treated by many approaches, there are very
few studies that deal with the problem of scheduling multiple projects (Hartmann and Briskorn, 2010). The
common solution method for the multi-project scheduling problem entails the conversion of the multiple
projects problem into a single project problem through merging the network graphs (Confessore et al.,
2007, Franck and Schwindt, 1995, Kumanan et al., 2006, Kurtulus, 1985, Kurtulus and Davis, 1982). This method can lead to cumbersome networks whose solution can be very time consuming.
In situations like ours, where a near-optimal solution is satisfactory and the analytical solution is overly time consuming, solution processes involving evolutionary algorithms, like ant colony optimization (ACO), are a convenient way to handle the problem. ACO takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark favorable paths that should be followed by other members of the colony. In Figure 1 the ants' foraging behavior is shown (Blum, 2005). Initially, all ants are in the nest (Figure 1a) and there is no pheromone in the environment. When the foraging starts (Figure 1b), 50% of the ants take the shorter path (elliptical-shaped ants) and 50% take the longer path (rhomboid-shaped ants). The ants that took the shorter path arrive earlier at the food source (Figure 1c). Therefore, when returning, their probability of taking the shorter path again is higher, and the pheromone trail on the shorter path receives a stronger reinforcement (Figure 1d); hence the probability of taking that path grows. Furthermore, a percentage of the pheromone evaporates at each step, which leaves minimal chances of taking the long path, since the probability of following a specific path is positively related to the amount of pheromone residing on it.
Figure 1: The ants' foraging behaviour between the nest and the food source (panels a to d)
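The positive feedback just described is easy to simulate. The following Python sketch is a toy model with assumed parameter values, not part of the proposed algorithm: choice probability is proportional to pheromone, the deposit is inversely proportional to path length, and a fixed percentage evaporates at every step:

```python
# Toy two-path pheromone model (all parameter values are assumptions).
import random

tau = {"short": 1.0, "long": 1.0}            # initial pheromone on each path
length = {"short": 1.0, "long": 2.0}         # path lengths
RHO, Q = 0.1, 1.0                            # evaporation rate, deposit constant

for _ in range(1000):
    total = tau["short"] + tau["long"]
    path = "short" if random.random() < tau["short"] / total else "long"
    for p in tau:
        tau[p] *= (1 - RHO)                  # evaporation on every path
    tau[path] += Q / length[path]            # shorter path gets a stronger deposit
print(tau)                                   # pheromone concentrates on "short"
```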
In the herein presented problem we have a number of projects that could be partially executed in parallel and that share one or more resource pools with limited availability. There are finish-to-start relations between activities of each sub-project (inner relationships) and between activities of different sub-projects (outer relationships). Activities can be executed in multiple modes using different resource types and/or amounts, which results in different execution times. The idea is to use a nested set of ACO algorithms to search for the optimal way to schedule the full project and the sub-projects. To do so, the sub-projects are prioritized based on the number of outer relationships with other sub-projects and the weighted average of the resource strength of each resource type. The internal ACO simply optimizes the schedule of each sub-project given a specific combination of modes, the corresponding resource demands and activity durations, along with the resource availabilities per type. The external ACO handles the mode optimization for each sub-project, the prioritization of the sub-projects and the scheduling of the sub-projects on the completion of each iteration.
The rest of this paper is organized as follows. In Section 2 the problem setting and the proposed solution method are presented. In Section 3 the experimental results using multi-mode project schedules are
presented for validation purposes. Finally, in Section 4 some concluding remarks about the solution process
and further research steps are presented.
2. PROBLEM SETTING AND PROPOSED SOLUTION METHOD
The solution process is differentiated based on the existence and quantity of outer relationships. As a
result, in the proposed approach two project sets are generated. The first one is a set of independent
projects and the second a set of projects with outer dependencies. The set of projects with outer
dependencies is divided into subsets based on the number of outer dependencies, forming two or three subsets: of low, medium and high dependency. The projects are then ordered and scheduled based on the set to which they belong. This way, all the projects that belong to the high dependency set and have higher resource requirements will be prioritized and scheduled up front, followed by those that still have a high degree of outer dependencies but low resource requirements, and so on. The scheduling process uses an
adapted ACO algorithm for the RCPSP scheduling and the second ACO algorithm acts as a moderator of the
process that handles the mode optimization as well as the scheduling order and therefore the prioritization
of the sets and the randomisation of the projects in the same set, along with the final scheduling of all the
sub-projects.
Figure 6: Flowchart of the proposed solution process (the dependency sets, high, medium and low, are ordered based on ARF/RS; the outer ACO optimization sets the scheduling priority and execution modes, the inner ACO optimization performs the activity list selection, and the process then finishes)
More specifically, the process begins with a preprocessing step, where the outer relationships of each sub-project are counted and three dependency sets (high, medium and low) are generated based on the results, as shown in Figure 6. The size and the min and max number of outer relationships per subset are dynamically calculated based on the multi-project instance being solved. The idea is to have sets of increasing size, going from the high dependency set to the low one, so as to have a relatively small number of sub-projects with high scheduling priority and more sub-projects in the other two sets. Finally, those sub-projects with no outer dependencies are allocated to the set of independent projects and will always be the last to be scheduled. This does not mean that the independent projects will, by default, be the last to be executed, but that they will be scheduled after the sub-projects with outer dependencies have already been scheduled. This is done in order to start the total schedule from those sub-projects that, due to their external relationships, have more complex requirements to accommodate and thus fewer feasible alternative solutions, and then schedule the more flexible independent sub-projects.
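A minimal Python sketch of this preprocessing step is the following (the dictionary layout and the equal-thirds thresholds are assumptions, since the paper calculates the set boundaries dynamically for each instance):

```python
# Minimal sketch: split sub-projects into high/medium/low dependency sets
# plus the independent set, based on the number of outer relations.

def classify(outer_relations):
    """outer_relations: sub-project id -> number of outer relations."""
    dependent = {p: n for p, n in outer_relations.items() if n > 0}
    independent = [p for p, n in outer_relations.items() if n == 0]
    if not dependent:
        return [], [], [], independent
    lo, hi = min(dependent.values()), max(dependent.values())
    third = (hi - lo) / 3 or 1                      # assumed equal-thirds split
    high = [p for p, n in dependent.items() if n > hi - third]
    low = [p for p, n in dependent.items()
           if n <= lo + third and p not in high]
    medium = [p for p in dependent if p not in high and p not in low]
    return high, medium, low, independent

print(classify({"sp1": 5, "sp2": 1, "sp3": 0, "sp4": 3, "sp5": 2}))
# (['sp1'], ['sp4'], ['sp2', 'sp5'], ['sp3'])
```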
The sets are then given as input to the external/moderator ACO where the sub-projects of each set are
internally ordered based on the weighted sum of the resource factor of each sub-project. Then each sub-
project’s modes are selected and fed to the internal ACO algorithm in order to get the corresponding
activity list optimization. Finally, the optimized schedules of the sub-projects are arranged to give the total
schedule and the fitness of the corresponding solution is saved. To generate the total schedule, starting
from the sub-project having the highest priority in the high dependency set and whose predecessors have
already been scheduled, the sub-projects are allocated to the first available time slot that fulfills the
maximal resource requirements for each resource type for the whole duration of the sub-project. The
process is repeated for a predefined number of times and the total best is given back to the project
manager. The result is composed of a pair of vectors, representing the mode list and the activity list, for
each sub-project and a vector of start times for each sub-project, which is the actual total schedule.
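The assembly of the total schedule can be sketched as follows (a simplified Python illustration with a single resource type and hypothetical data, not the actual implementation):

```python
# Minimal sketch: place each sub-project, in priority order, at the earliest
# start where its predecessors have finished and its maximal resource
# requirement fits within the remaining capacity (single resource type).

def assemble(subprojects, capacity, horizon=10_000):
    """subprojects: (id, duration, max_requirement, predecessors), in priority order."""
    usage = [0] * horizon                    # resource usage per time slot
    start, finish = {}, {}
    for pid, dur, req, preds in subprojects:
        t = max((finish[p] for p in preds), default=0)
        while any(usage[t + k] + req > capacity for k in range(dur)):
            t += 1                           # shift right until the slot fits
        start[pid], finish[pid] = t, t + dur
        for k in range(dur):
            usage[t + k] += req
    return start

plan = [("A", 4, 3, []), ("B", 3, 2, ["A"]), ("C", 5, 3, [])]
print(assemble(plan, capacity=4))            # {'A': 0, 'B': 4, 'C': 7}
```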
3. EXPERIMENTAL RESULTS
The suggested method was experimentally verified using the multi-mode datasets from PSPLIB (Kolisch and Sprecher, 1997b). We experimented with the J10, J12 and J15 multi-mode data sets and created multi-project problem instances by randomly combining the given problem instances. The number of projects per instance and the outer dependencies among the projects were either randomly generated or interactively set by the user. The ACO algorithms in this solution process use the serial SGS (Hartmann, 2001) and priority rules (Kurtulus and Davis, 1982) to create and evaluate the projects' schedules.
The only possible comparison of the results was with the optimal results found for the single-project multi-mode data sets. To validate the proposed solution method, the multi-mode PSPLIB datasets were used; the results, as average deviation from optimum, average penalty and frequency of solutions without penalty, are shown in Table 18. The negative values in the average deviation from optimum are caused by the fact that durations lower than the best known were often calculated.
Table 18: Validation results on the single-project multi-mode PSPLIB data sets
Data Set | Average Deviation from Optimum | Frequency of Optimum | Average Penalty | Frequency of Solutions without Penalty
J10 | -2.64% | 94% | 0.25 | 75%
J12 | -8.59% | 98% | 2.11 | 53%
J15 | -0.06% | 86% | 0.40 | 78%
The actual results of the proposed algorithm on the multi-project multi-mode RCPSP are shown in Table 19. The randomly generated multi-project instances are denoted as SP_*. For each total project, the best duration is shown along with the durations of its sub-projects. The very low durations of the sub-projects are caused
by the fact that the resource availability was calculated as the sum of the availabilities of each sub-project,
which led to less rigid resource constraints for the total project.
Table 19: Results on the randomly generated multi-project multi-mode instances
File Name | Number of External Relations | Min Duration | Max Duration | Optimum | Average Deviation from Optimum
SP_0_1 | 0 | 22 | 24 | - | -
j1010_1.mm | | 17 | 17 | 17 | 0.00%
j1010_2.mm | | 22 | 24 | 24 | -7.50%
j1010_3.mm | | 21 | 21 | 21 | 0.00%
j1010_4.mm | | 15 | 15 | 15 | 0.00%
j1010_5.mm | | 22 | 22 | 24 | -8.33%
SP_1_2 | 1 | 87 | 93 | - | -
j1237_6.mm | | 36 | 46 | 48 | -21.67%
j1237_7.mm | | 36 | 42 | 45 | -14.89%
j1237_8.mm | | 28 | 33 | 31 | -5.16%
j1237_9.mm | | 24 | 30 | 26 | 1.92%
j1237_10.mm | | 23 | 32 | 42 | -38.81%
SP_2_2 | 2 | 28 | 34 | - | -
j1010_6.mm | | 13 | 15 | 18 | -21.67%
j1010_7.mm | | 13 | 13 | 15 | -13.33%
j1010_8.mm | | 11 | 15 | 15 | -12.00%
j1010_9.mm | | 10 | 10 | 10 | 0.00%
j1010_10.mm | | 14 | 18 | 17 | -7.65%
SP_3_1 | 3 | 43 | 43 | - | -
j1010_1.mm | | 17 | 17 | 17 | 0.00%
j1010_2.mm | | 22 | 24 | 24 | -5.83%
j1010_3.mm | | 21 | 22 | 21 | 0.48%
j1010_4.mm | | 15 | 16 | 15 | 0.67%
j1010_5.mm | | 22 | 22 | 24 | -8.33%
SP_3_4 | 3 | 117 | 126 | - | -
j1237_6.mm | | 35 | 37 | 48 | -25.42%
j1237_7.mm | | 35 | 39 | 45 | -19.11%
j1237_8.mm | | 28 | 31 | 31 | -5.81%
j1237_9.mm | | 23 | 28 | 26 | -4.23%
j1237_10.mm | | 23 | 24 | 42 | -43.10%
4. CONCLUSIONS
A new approach to the multi-project multi-mode RCPSP with relationships between sub-projects has been proposed. The idea is based on a three-step iterative optimization of the modes of each sub-project, then of the activities, and then of the sub-projects, treated as if they were the activities of a single project with duration equal to the calculated duration of the corresponding sub-project and resource requirements equal to the maximal resource requirements of the activities composing it. The proposed solution uses two nested ACO algorithms to find a near-optimal solution after a predefined number of iterations.
The experiments using multi-mode test cases from PSPLIB gave some very promising results for the multi-project multi-mode RCPSP instances. However, further experimentation on a wider range of cases and with different levels of resource scarcity should follow, to explore the strengths and limitations of the proposed method.
REFERENCES
BLUM, C. 2005. Ant colony optimization: Introduction and recent trends. Physics of Life reviews, 2, 353-373.
CHRISTOFIDES, N., ALVAREZ-VALDES, R. & TAMARIT, J. M. 1987. Project scheduling with resource constraints: A branch
and bound approach. European Journal of Operational Research, 29, 262-273.
CONFESSORE, G., GIORDANI, S. & RISMONDO, S. 2007. A market-based multi-agent system model for decentralized
multi-project scheduling. Annals of Operations Research, 150, 115-135.
DAVIS, E. W. 1973. Project scheduling under resource constraints: Historical review and categorization of procedures. AIIE Transactions, 5, 297-313.
DAVIS, K. R., STAM, A. & GRZYBOWSKI, R. A. 1992. Resource constrained project scheduling with multiple objectives: A
decision support approach. Computers and Operations Research, 19, 657-669.
DEMEULEMEESTER, E. L. & HERROELEN, W. S. 1997. A branch-and-bound procedure for the generalized resource-
constrained project scheduling problem. Operations Research, 45, 201-212.
ELMAGHRABY, S. E. 1964. An algebra for the analysis of generalized activity networks. Management Science, 10, 494-
514.
FRANCK, B. & SCHWINDT, C. 1995. Different resource-constrained project scheduling models with minimal and maximal time-lags.
HARTMANN, S. 2001. Project Scheduling with Multiple Modes: A Genetic Algorithm. Annals of Operations Research,
102, 111-135.
HARTMANN, S. & BRISKORN, D. 2010. A survey of variants and extensions of the resource-constrained project
scheduling problem. European Journal of Operational Research, 207, 1-14.
HARTMANN, S. & DREXL, A. 1998. Project scheduling with multiple modes: A comparison of exact algorithms. Networks,
32, 283-297.
HERROELEN, W., DE REYCK, B. & DEMEULEMEESTER, E. 1998. Resource-constrained project scheduling: A survey of
recent developments. Computers and Operations Research, 25, 279-302.
ICMELI, O. & ERENGUC, S. S. 1996. A branch and bound procedure for the resource constrained project scheduling
problem with discounted cash flows. Management Science, 42, 1395-1408.
KOLISCH, R. & SPRECHER, A. 1997a. PSPLIB - A project scheduling problem library. European Journal of Operational
Research, 96, 205-216.
KOLISCH, R. & SPRECHER, A. 1997b. PSPLIB - A project scheduling problem library: OR Software - ORSEP Operations
Research Software Exchange Program. European Journal of Operational Research, 96, 205-216.
KUMANAN, S., JEGAN JOSE, G. & RAJA, K. 2006. Multi-project scheduling using a heuristic and a genetic algorithm. International Journal of Advanced Manufacturing Technology, 31, 360-366.
KURTULUS, I. 1985. Multiproject scheduling: Analysis of scheduling strategies under unequal delay penalties. Journal of
Operations Management, 5, 291-307.
KURTULUS, I. & DAVIS, E. W. 1982. Multi-Project Scheduling: Categorization of Heuristic Rules Performance.
Management Science, 28, 161-172.
ULUSOY, G. & OZDAMAR, L. 1994. Constraint-based perspective in resource constrained project scheduling.
International Journal of Production Research, 32, 693-705.
1st Version, Copyright © 2014
ISBN 978-618-80361-1-6