Nader Chmeit Thesis
Nader Chmeit
Notre Dame University
Zouk Mosbeh - Lebanon
nbchmeit@ndu.edu.lb
Contents
List of Figures
Abstract 7
Introduction 8
1 The Scheduling Problem 11
1.1 Problem Formulation .......................11
2 Heuristics And Optimization Algorithms 13
2.1 Simulated Annealing ........................ 13
2.1.1 Initial Temperature .................... 16
2.1.2 Equilibrium State ..................... 17
2.1.3 Cooling Schedule ..................... 18
2.1.4 Stopping Condition .................... 19
2.1.5 Performance of the SA Algorithm ............ 20
2.2 Ant Colony Optimization Algorithm .............. 21
2.3 Combinatorial Optimization Problems ............. 23
2.4 The Pheromone Model ...................... 23
2.5 The Double-Bridge Experiment ................. 25
2.6 Ant System (AS) .......................... 26
2.7 Complexity Analysis of Ant Colony Optimization ....... 27
3 Related Work 29
4 SA Applied To The Scheduling Problem 33
4.1 Our Approach Using SA .....................33
4.2 Empirical Results (SA) ......................37
5 ACO Applied To The Scheduling Problem 41
5.1 Our Approach Using ACO ....................41
5.2 Pheromone Update ........................43
Abstract
Introduction
A wide variety of scheduling (timetabling) problems have been described
in the literature of computer science during the last decade. Some of these
problems are: the weekly course scheduling done at schools and universities
[7], examination timetables [37], airline crew and flight scheduling problems
[31], job and machine scheduling [43], and train timetabling problems [49].
Many definitions of the scheduling problem exist. A general definition
was given by A. Wren in 1996 as:
"The allocation, subject to constraints, of given resources to objects being
placed in space and time, in such a way as to satisfy as nearly as possible a
set of desirable objectives" [47].
Another way of looking at the scheduling problem is to consider it as a
timetable consisting of four finite sets:
• a set of events (the exams or meetings to be scheduled),
• a set of time periods,
• a set of resources (such as students and rooms), and
• a set of constraints.
Even though we will present our solution in this research in the context
of the exam scheduling problem, it can be generalized to solve many different
scheduling problems with some minor modifications regarding the variables
related to the problem at hand and the resources available.
An informal definition of the exam scheduling problem is the following:
a combinatorial optimization problem that consists of scheduling a number
of examinations in a given set of exam sessions so as to satisfy a given set of
constraints. As Carter states in [37] the basic challenge faced when solving
this problem is to "schedule examinations over a limited time period so as to
avoid conflicts and to satisfy a number of side-constraints". These constraints
can be split into hard and soft constraints where the hard constraints must
be satisfied in order to produce a feasible or acceptable solution, while the
violation of soft constraints should be minimized since they provide a measure
of how good the solution is with regard to the requirements [8].
The main hard constraints [20] are given below:
1. No student may be required to sit more than one exam in the same period.
2. The seating capacity available in each period must not be exceeded.
3. Every exam must be scheduled exactly once.
As for the soft constraints, they vary between different academic insti-
tutions and depend on their internal rules [11]. The most common soft
constraints are:
1. Minimize the total examination period or, more commonly, fit all exams
within a shorter time period.
2. Increase students' comfort by spacing the exams fairly and evenly across
the whole group of students. It is preferable for students not to sit for
exams occurring in two consecutive periods.
3. Schedule the exams in a specific order, such as scheduling the mathematics
and science-related exams in the morning.
4. Allocate each exam to a suitable room. Lab exams for example, should
be held in the corresponding labs.
Chapter 1
The Scheduling Problem
Let E denote the set of exams to be scheduled, P the set of available periods,
and S the set of students. We write s_ij = 1 when student s_i is enrolled in
exam e_j, and s_ij = 0 otherwise. For each exam e_j and each period p_k we
define the binary decision variable

e_jk = { 1, if exam e_j is held in period p_k
       { 0, otherwise
We shall assume that all exam sessions have the same duration (say one
period of 2 hours). We recap that, given a set of periods, the problem is to
assign each exam to some period in such a way that no student
has more than one exam at the same time, and the room capacity is not
breached. We therefore have to make sure that the equations (constraints)
below are always satisfied [14]:
1. ∀ p_k ∈ P : Σ_{e_j ∈ E} Σ_{s_i ∈ S} s_ij · e_jk ≤ total seating capacity available in period p_k
2. ∀ s_i ∈ S, ∀ p_k ∈ P : Σ_{e_j ∈ E} s_ij · e_jk ≤ 1
3. ∀ e_j ∈ E : Σ_{p_k ∈ P} e_jk = 1
These equations capture only the hard constraints, which are critical for
reaching a correct schedule. They must always evaluate to true, otherwise we
end up with an infeasible schedule. Our aim is therefore to find a schedule
that meets all the hard constraints and adheres as closely as possible to the
soft constraints.
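To make these constraints concrete, the following MATLAB sketch checks them for one candidate assignment. It assumes, as illustrative names not taken from this thesis, a binary enrolment matrix Enrol with Enrol(i, j) = 1 when student s_i takes exam e_j, a binary assignment matrix Assign with Assign(j, k) = 1 when exam e_j is held in period p_k, and a total seating capacity Cap per period.

function feasible = checkHardConstraints(Enrol, Assign, Cap)
% CHECKHARDCONSTRAINTS  Verify the three hard constraints for one candidate
% assignment of exams to periods.
%   Enrol  : (numStudents x numExams) binary enrolment matrix
%   Assign : (numExams x numPeriods) binary exam-to-period assignment
%   Cap    : total seating capacity available in a single period

    % Constraint 3: every exam is assigned to exactly one period.
    examOnce = all(sum(Assign, 2) == 1);

    % Constraint 2: no student sits more than one exam in the same period.
    examsPerStudentAndPeriod = Enrol * Assign;
    noClash = all(examsPerStudentAndPeriod(:) <= 1);

    % Constraint 1: the seating capacity of each period is not exceeded.
    seatsPerPeriod = sum(Enrol, 1) * Assign;   % students seated in each period
    capacityOk = all(seatsPerPeriod <= Cap);

    feasible = examOnce && noClash && capacityOk;
end

Such a check only separates feasible from infeasible schedules; the algorithms in the following chapters additionally count the violations, so that candidate schedules can be compared with one another.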
Chapter 2
Heuristics and Optimization Algorithms

2.1 Simulated Annealing
If we look at a simple form of local search, like the Hill Climbing algorithm, we notice that
it starts with an initial solution usually chosen at random. Then a neighbor
of this solution is generated by some suitable mechanism depending on the
problem we are solving, and the change in the cost of the new solution is
calculated [21]. If a reduction in cost is found, the current solution is re-
placed by the generated neighbor, otherwise the current solution is retained.
The process is repeated until no further improvement can be found in the
neighborhood of the current solution. We give below the Hill Climbing local
search algorithm:
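A minimal sketch of this loop in MATLAB could look as follows; cost and randomNeighbor are hypothetical problem-specific function handles, and the stopping test (maxFails consecutive non-improving samples) approximates "no further improvement can be found in the neighborhood".

function s = hillClimb(s0, cost, randomNeighbor, maxFails)
% HILLCLIMB  Simple hill climbing for a minimization problem.
%   s0             : initial solution, usually chosen at random
%   cost           : handle returning the cost of a solution
%   randomNeighbor : handle returning a random neighbor of a solution
%   maxFails       : stop after this many non-improving neighbors in a row

    s = s0;
    fails = 0;
    while fails < maxFails
        sNew = randomNeighbor(s);
        if cost(sNew) < cost(s)   % a reduction in cost: move to the neighbor
            s = sNew;
            fails = 0;
        else                      % otherwise retain the current solution
            fails = fails + 1;
        end
    end
end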
As shown in the algorithm above, the local search repeatedly generates a
neighbor of the current solution and moves to it only when its cost is lower.
Since we are calculating the change in cost Δ and choosing the solution
with the lowest cost, the above algorithm solves a minimization problem.
It can easily be adapted to a maximization problem by accepting solutions
with higher costs instead.
G. Kendall [29] states that "hill climbing" suffers from the problem of
getting stuck at local minima (or maxima, depending on whether it is a mini-
mization or a maximization problem). Many techniques have been described
in the literature to try to overcome this problem. Simulated Annealing (SA)
does so by occasionally accepting a move to a worse neighbor: at temperature
T, a move that increases the cost by Δ is accepted with probability

p = exp(−Δ/T)    (2.1)
Select an initial solution i and an initial temperature T = T_0
Repeat
    Set repetition counter n = 0
    Repeat
        Generate state j, a neighbor of i
        Calculate Δ = f(j) − f(i)
        If Δ < 0 Then
            i := j
        Else
            Generate a random number x ∈ [0, 1]
            If x < exp(−Δ/T) Then i := j
        End If
        n := n + 1
    Until n = maximum neighborhood moves allowed at each temperature
    Update the temperature using the decrease function α: T := α(T)
Until stopping condition = true.
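Translating the pseudo-code above into MATLAB gives a skeleton like the following; T0, Tmin, alpha and nMax are free parameters of the sketch (not values used in this thesis), and cost and randomNeighbor are the same hypothetical handles as before.

function best = simulatedAnnealing(s0, cost, randomNeighbor, T0, Tmin, alpha, nMax)
% SIMULATEDANNEALING  Basic SA loop with a geometric cooling schedule.
    s = s0;
    best = s0;
    T = T0;
    while T > Tmin
        for n = 1:nMax                        % moves attempted at this temperature
            sNew  = randomNeighbor(s);
            delta = cost(sNew) - cost(s);
            if delta < 0 || rand() < exp(-delta / T)
                s = sNew;                     % accept improving or "lucky" worse moves
            end
            if cost(s) < cost(best)
                best = s;                     % remember the best solution seen so far
            end
        end
        T = alpha * T;                        % cool down, e.g. alpha in [0.8, 0.99]
    end
end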
There are many heuristics used for choosing the initial temperature at
which SA starts [38]. We will next introduce many important concepts in
SA which are crucial for building efficient solutions. These concepts are listed
below:
• Initial Temperature
• Equilibrium State
• Cooling Schedule
• Stopping Condition
2.1.1 Initial Temperature

The starting temperature must be chosen with care: if it is too high, almost
every move is accepted and the search degenerates into a random walk with a
high computational cost; if it is too low, the algorithm cannot escape local
optima.
We have two main methods for finding a suitable starting temperature:
1. the acceptance deviation method, and
2. the tuning for initial temperature method.
Acceptance deviation method
Before explaining how this method works, we recap that the neighbor
solution is accepted with a probability p depending on the energy difference
between the new and the old solution. This selection scheme is called the
Metropolis criterion [38], and therefore SA consists of a series of Metropolis
chains at different decreasing temperatures.
The acceptance deviation method computes the starting temperature using
preliminary experiments as:

T_0 = k · σ    (2.2)
where σ represents the standard deviation of the differences between the values
of the objective function (the energy of the system, i.e. the cost of the solutions),
and
k = −3 / ln(p)    (2.3)
where p denotes the acceptance probability of the next solution.
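Assuming the formulas above (Equations 2.2 and 2.3), the starting temperature can be estimated from a short random walk performed before the actual search. The sketch below reuses the hypothetical cost and randomNeighbor handles; nSamples and the target acceptance probability p are chosen by the user.

function T0 = initialTemperature(s0, cost, randomNeighbor, nSamples, p)
% INITIALTEMPERATURE  Acceptance-deviation estimate: T0 = k*sigma, k = -3/ln(p).
    deltas = zeros(nSamples, 1);
    s = s0;
    for i = 1:nSamples
        sNew = randomNeighbor(s);
        deltas(i) = cost(sNew) - cost(s);   % energy difference of a random move
        s = sNew;                           % random walk over the search space
    end
    sigma = std(deltas);                    % standard deviation of the differences
    k = -3 / log(p);                        % Equation 2.3
    T0 = k * sigma;                         % Equation 2.2
end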
2.1.2 Equilibrium State

Static strategy In this strategy, the number of moves within each iteration
(i.e., at each temperature) is determined before starting the search. We define
a proportion y of the neighborhood size |N(s)| to be explored, so the number
of neighbors generated from a solution s is y × |N(s)|. The larger the ratio y,
the higher the computational cost and the better the results.
2.1.3 Cooling Schedule

Linear approach The temperature is decreased by a constant amount β at
each iteration i:

T_i = T_0 − i × β    (2.4)

Geometric approach The temperature is multiplied by a constant factor
α ∈ (0, 1) at every iteration, T := α × T. It is clear that the higher the value
of α, the longer it will take to decrease the temperature to the stopping
criterion. The best choice is to take α between 0.8 and 0.99 [29].
Logarithmic approach The temperature follows a logarithmic law of the
form T_i = T_0 / log(i + c). This schedule guarantees convergence to the global
optimum in theory, but it is far too slow to be used in practice.
Adaptive Strategy
The adaptive cooling schedule depends on the characteristics of the search
landscape. The temperature decrease rate is dynamic and it depends on some
information obtained during the search. So the adaptive strategy approach
carries out a small number of iterations at high temperatures and a large
number of iterations at low temperatures [44].
2.1.4 Stopping Condition

According to [44], the stopping criterion is not only governed by the tem-
perature T. Sometimes we reach the stopping condition when:
• no improvements in the solutions' cost are found after some pre-determined
number of successive iterations done at several temperature values,
• Upper bounds can be given for the proximity of the probability distri-
bution of the configurations after generating a finite number of tran-
sitions. An upper bound on the quality of the final solution is only
known for the maximum matching problem which is known to be in P.
2.2 Ant Colony Optimization Algorithm

In ACO, each artificial ant incrementally builds a candidate solution;
when it completes a solution, the ant evaluates it and modifies the trail value
on the components used in its solution, which helps in directing the search
of the future ants [19]. There is also a mechanism called trail evaporation
used in ACO. This mechanism decreases all trail levels after each iteration
of the ACO algorithm. Trail evaporation ensures that unlimited accumulation
of trails over some components is avoided, and therefore the chances of getting
stuck in local optima are decreased [33]. Another optional mechanism
that exists in ACO algorithms is daemon actions. "Daemon actions can be
used to implement centralized actions which cannot be performed by single
ants, such as the invocation of a local optimization procedure, or the update
of global information to be used to decide whether to bias the search process
from a non-local perspective" [33].
The pseudo-code for the ACO as described in [10] is shown below:
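The high-level loop of the metaheuristic can be sketched as follows. This is a generic outline in the spirit of [10], not the listing from this thesis; constructSolution, localSearch, updatePheromones and cost are placeholders for the problem-specific procedures discussed in the remainder of this chapter.

function best = acoLoop(nAnts, nIterations, constructSolution, localSearch, updatePheromones, cost)
% ACOLOOP  Generic Ant Colony Optimization skeleton.
    best = [];
    for it = 1:nIterations
        solutions = cell(nAnts, 1);
        for k = 1:nAnts
            solutions{k} = constructSolution();       % each ant builds a full solution
            solutions{k} = localSearch(solutions{k}); % optional daemon action
            if isempty(best) || cost(solutions{k}) < cost(best)
                best = solutions{k};                  % keep the best-so-far solution
            end
        end
        updatePheromones(solutions, best);            % evaporation followed by deposit
    end
end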
These values of the pheromone trails will allow us to model the probabil-
ity distribution of the components of the solution.
The pheromone model of an ACO algorithm is closely related to the model
of a combinatorial optimization problem. Each possible solution component,
or in other words each possible assignment of a value to a variable, defines a
pheromone value [10, 5]. As described above, the pheromone value τ_ij is associated
with the solution component c_ij, which consists of the assignment X_i = v_j,
and the set of all possible solution components is denoted by C.
The artificial ants move from vertex to vertex along the edges of the
construction graph G_C(V, E), incrementally building a partial solution while they deposit
a certain amount of pheromone on the components, that is, either on the
vertices or on the edges that they traverse.
The amount Δτ of pheromone that ants deposit on the components de-
pends on the quality of the solution found. In subsequent iterations, the ants
follow the path with high amounts of pheromone as an indicator to promising
regions of the search space [10].
It is also common (optional) to improve the solutions obtained by the
ants through a local search before updating the pheromone [16]. We then
update the pheromone positively in order to increase the pheromone values
associated with good or promising solutions, and sometimes negatively in
order to decrease those that are associated with bad solutions [16]. The
pheromone update usually consists of two parts: an evaporation step, which
uniformly decreases all pheromone values, and a deposit step, which increases
the pheromone values of the components belonging to the selected solutions.
Solutions are constructed by a colony of m artificial ants from the elements of the set C
[16]. We start from an empty partial solution s^p = ∅ and extend it at each step by adding
a feasible solution component from the set N(s^p) ⊆ C (where
N(s^p) denotes the set of components that can be added to the current partial
solution s^p without violating any of the constraints). It is clear that
the process of constructing solutions can be regarded as a walk through the
construction graph G_C = (V, E).
The choice of a solution component from N(s) is done probabilistically at
each construction step. Before giving the rules controlling the probabilistic
choice of the solution components, we will next describe an experiment that
was run on real ants, called the double-bridge experiment, from which these
probabilistic rules were derived.
P_1 = (m_1 + k)^h / ((m_1 + k)^h + (m_2 + k)^h)    (2.7)

where m_1 and m_2 are the numbers of ants that have previously used the first
and the second branch respectively, and k and h are parameters of the model.

Figure 2.1: Double-bridge Experiment
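Equation 2.7 is straightforward to evaluate; in the small sketch below, k and h are the parameters of the model (values around k ≈ 20 and h ≈ 2 are commonly reported for the real experiment, but they are not taken from this thesis).

function p1 = bridgeChoiceProbability(m1, m2, k, h)
% BRIDGECHOICEPROBABILITY  Probability that the next ant chooses branch 1 of
% the double bridge, given that m1 and m2 ants have already used branches
% 1 and 2 respectively (Equation 2.7).
    p1 = (m1 + k)^h / ((m1 + k)^h + (m2 + k)^h);
end

For example, bridgeChoiceProbability(10, 2, 20, 2) already exceeds 0.6, showing how a small early advantage of one branch is quickly amplified.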
p^k_ij = (τ_ij^α · η_ij^β) / Σ_{c_il ∈ N(s^p)} (τ_il^α · η_il^β),   if c_ij ∈ N(s^p)    (2.9)

and zero otherwise. The parameters α and β control the relative importance
of the pheromone τ_ij versus the heuristic information η_ij, which for the
traveling salesman problem is given by:

η_ij = 1 / d_ij    (2.10)

where d_ij is the distance between cities i and j.
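One common way to implement the probabilistic choice of Equation 2.9 is roulette-wheel selection over the feasible components. In the sketch below, tau and eta are vectors holding τ_ij and η_ij for the candidate moves j out of the current state i, and feasible is a logical mask; the names are illustrative, not taken from this thesis.

function j = chooseNextComponent(tau, eta, alpha, beta, feasible)
% CHOOSENEXTCOMPONENT  Roulette-wheel selection according to Equation 2.9.
%   tau, eta : pheromone and heuristic values of the candidate components
%   feasible : logical mask of the components allowed from the current state
    weights = (tau .^ alpha) .* (eta .^ beta);
    weights(~feasible) = 0;                    % forbidden components get weight 0
    probs = weights / sum(weights);            % assumes at least one feasible weight > 0
    r = rand();
    j = find(cumsum(probs) >= r, 1, 'first');  % index of the selected component
end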
The local pheromone update, applied by an ant as soon as it traverses the edge (i, j), is:

τ_ij = (1 − φ) · τ_ij + φ · τ_0    (2.11)

where φ ∈ (0, 1] is the pheromone decay coefficient, and τ_0 is the initial
value of the pheromone. Local pheromone update decreases the pheromone
concentration on the traversed edges in order to encourage subsequent ants
to choose other edges and, hence, to produce different solutions [17].
The offline pheromone update is applied at the end of each iteration by
only one ant, which can be either the iteration-best or the best-so-far as
shown below:
τ_ij := { (1 − ρ) · τ_ij + ρ · Δτ_ij,   if (i, j) belongs to the best tour,
        { τ_ij,                          otherwise.
The next section describes in more detail the Ant System Algorithm and
briefly presents its complexity bounds.
notation: for some state i, j is any state that can be reached from i; η_ij is the
heuristic information between i and j, calculated depending on the problem
at hand, and τ_ij is the pheromone trail value between i and j.
1. Initialization
   ∀ (i, j) initialize τ_ij and η_ij
2. Construction
   For each ant k (currently in state i) do
      repeat
         choose the next state to move to by means of Equation 2.9
         append the chosen move to the k-th ant's tabu list tabu_k
      until ant k has completed its solution
   end for
3. Trail update
   For each ant k do
      find all the transitions from state i to state j used in its solution
      compute the pheromone contribution Δτ_ij
      update the trail values τ_ij
   end for
4. Terminating condition
   If not (end test) go to step 2
Chapter 3
Related Work
During the past years, many algorithms and heuristics were used in solving
the timetabling problem. The algorithms vary from simple local search algo-
rithms to variations of genetic algorithms and graph representation problems.
Below we discuss some of the recognized techniques which have proved able to
find acceptable solutions to the scheduling problem. The Simulated Annealing
(SA) (I) and Ant Colony Optimization (ACO) (II) algorithms were described
in the previous chapter.
In the Hybrid Approach (V), the idea is to combine more than one al-
gorithm or heuristic and apply them to the same optimization problem in
order to reach a better and more feasible solution. Sometimes the heuristics
are combined into a new heuristic and then the problem is solved using this
new heuristic. In other cases the different heuristics are used in phases, and
every phase consists of applying one of these heuristics to solve a part of the
optimization problem.
Duong T.A. and Lam K.H. presented in [20] a solution method for exami-
nation timetabling, consisting of two phases: a constraint programming phase
to provide an initial solution, and a simulated annealing phase with Kempe
chain neighborhood. They also refined mechanisms that helped to determine
some crucial cooling schedule parameters. In [36] a method using Graph
Coloring was developed for optimizing solutions to the timetabling problem.
The eleven course timetabling test data-sets were introduced by Socha K.
and Sampels M. [41] who applied a Max-Min Ant System (MMAS) with a
construction graph for the problem representation. Socha K. also compared
Ant Colony System (ACS) against a Random Restart Local Search (RRLS)
algorithm and Simulated Annealing (SA) [40]. A comparison of five meta-
heuristics for the same eleven data-sets was presented by Rossi-Doria O. [39];
the approaches included in this study were: the Ant Colony System (ACS),
Simulated Annealing (SA), Random Restart Local Search (RRLS), Genetic
Algorithm (GA) and Tabu Search (TS). The conclusions drawn from the
comparisons done in [39] are the following:
in a particular year.
ACO has also been used for scheduling jobs on computational grids, where the aim is
to achieve high system throughput and to match the application need with
the available computing resources.
J-M. Su and J-Y. Huang used Ant Colony Optimization to solve another
scheduling problem which is the Train Timetabling Problem [49]. The aim
is to determine a periodic timetable for a set of trains that does not violate
track capacities and some other operation constraints.
ACO was also used to solve Fuzzy Job Shop Scheduling Problems [43] used
in complex equipment manufacturing systems to validate the performance of
heuristic algorithms. The idea was to move ants from one machine (nest) to
another machine (food source) depending upon the job flow, thereby opti-
mizing the sequence of jobs.
In the following chapter we will present our solution in the context of the
exam scheduling problem using the Simulated Annealing meta-heuristic.
Chapter 4
SA Applied to the Scheduling Problem

4.1 Our Approach Using SA
We start by building an initial solution for the schedule. All the following
work was implemented using Matlab.
The schedule is represented by an m × n matrix denoted by Sched.
Sched[i, j] holds a set S_ij of exams scheduled on day d_i in period p_j, where
i = 1, 2, ..., m and j = 1, 2, ..., n. Hence the matrix Sched will have the follow-
ing properties:
Periods
         8:00-10:00      11:00-1:00       2:00-4:00        5:00-7:00
Day 1    (E1, E2, E3)    (E7, E8, E9)     (E13, E14, E15)  (E19, E20, E21)
Day 2    (E4, E5, E6)    (E10, E11, E12)  (E16, E17, E18)  (E22, E23, E24)
The exams are denoted by the letter E concatenated with the course code.
For example, if we have a course code CS111, the exam code for this course
will be ECS111. Exam E9 in the table above is scheduled from 11:00 am
to 1:00 pm on day one. As stated above, we have 3 exams in each period of
each day; this is because we have 3 examination rooms available and we wish
to maximize their utilization by always allocating non-scheduled exams to
the empty rooms.
Depending on the number of exams to be scheduled, it frequently happens
that some rooms are not allocated to any exam on the final examination day.
This is expected, since the number of exams might not be a multiple of the
size of the matrix Sched. We accommodate this by adding virtual exams in
the remaining empty cells. These virtual exams have no conflicts whatsoever
with any of the other exams. This is done to allow the algorithm to run in
Matlab, since it is necessary to fill in all matrix cells.
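One possible MATLAB representation of Sched is a cell array of exam-index vectors, padded with virtual (conflict-free) exams so that every cell is full. The sketch below assumes that indices above nExams denote the virtual exams and that the matrix is large enough to hold all real exams; it is an illustration, not the exact code used in this work.

function Sched = buildInitialSchedule(nExams, nDays, nPeriods, nRooms)
% BUILDINITIALSCHEDULE  Fill an nDays-by-nPeriods cell array with exam indices,
% padding the remaining cells with virtual exams.
    nSlots = nDays * nPeriods * nRooms;            % exams the matrix can hold
    exams  = [randperm(nExams), nExams+1:nSlots];  % real exams first, then virtual ones
    Sched  = cell(nDays, nPeriods);
    idx = 1;
    for d = 1:nDays
        for p = 1:nPeriods
            Sched{d, p} = exams(idx:idx+nRooms-1); % nRooms exams per time-slot
            idx = idx + nRooms;
        end
    end
end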
In some other cases, the examination rooms are vast halls, and might hold
more than one exam at one period. To account for this change, we consider
Function: ReturnConflicts(e_a, e_b)
   set counter = 0
   for all s_i ∈ S do
      if s_i,ea = 1 && s_i,eb = 1 then
         counter = counter + 1
      end if
   end for
   return counter
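If the enrolment data is kept in a binary student-by-exam matrix, say Enrol (a hypothetical name, one row per student and one column per exam, including all-zero columns for the virtual exams), the function above reduces to a single vectorized line in MATLAB:

function counter = ReturnConflicts(Enrol, ea, eb)
% RETURNCONFLICTS  Number of students enrolled in both exams ea and eb.
%   Enrol(i, j) = 1 when student s_i takes exam e_j, and 0 otherwise.
    counter = sum(Enrol(:, ea) & Enrol(:, eb));
end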
The cost of a candidate schedule is the sum of:
1. the cost of its hard constraints, obtained by checking for any student
having an exam clash (more than one exam in the same time-slot), and
2. a weighted cost of its soft constraints, obtained by checking for students
having exams in two consecutive time-slots of the same day.
It still remains to explain how to calculate the cost of the hard and soft con-
straints of a solution schedule. The cost of the hard constraints is calculated
in two steps:
1. run the function ReturnConflicts() for every two exams occurring in the
same time-slot of the same day, and
2. add the returned conflict counts together to obtain the total hard-constraint cost.
As for the soft constraints, we need to make sure that we space the exams
fairly and evenly across the whole group of students; thus, we need to check
that a student has no more than one exam in two consecutive time-slots of
the same day. The same procedure will be used to calculate the cost of the
soft constraints, with only one modification: this time the function Return-
Conflicts() is run on all exam pairs occurring in consecutive time-slots of
the same day, and their costs are added together.
To control the relative importance of the hard constraints over the soft con-
straints, we add to the cost of the hard constraints a fraction of the total
cost of the solution's soft constraints, obtained by multiplying it by a decimal
ε ∈ ]0, 1[.
Once we have defined how to calculate the cost of the solution at hand, we
can now use the SA algorithm to iterate over neighbor solutions with the aim
of reaching solutions of lower cost. This is described in the following sections.
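Putting the pieces together, the cost of a schedule could be sketched as below. The conflict counts come from ReturnConflicts applied to every pair of exams sharing a time-slot (same-slot pairs for the hard part, consecutive-slot pairs of the same day for the soft part), and epsilon plays the role of the decimal ε described above. The structure and names are illustrative rather than the exact thesis code.

function c = scheduleCost(Sched, Enrol, epsilon)
% SCHEDULECOST  Hard-constraint cost plus epsilon times the soft-constraint cost.
    [nDays, nPeriods] = size(Sched);
    hard = 0;
    soft = 0;
    for d = 1:nDays
        for p = 1:nPeriods
            % Same time-slot: each unordered pair is seen twice, hence the /2.
            hard = hard + pairConflicts(Enrol, Sched{d, p}, Sched{d, p}) / 2;
            if p < nPeriods   % consecutive time-slots of the same day
                soft = soft + pairConflicts(Enrol, Sched{d, p}, Sched{d, p+1});
            end
        end
    end
    c = hard + epsilon * soft;
end

function n = pairConflicts(Enrol, examsA, examsB)
% Sum of student conflicts over all pairs (a, b) drawn from the two groups.
    n = 0;
    for a = examsA
        for b = examsB
            if a ~= b
                n = n + sum(Enrol(:, a) & Enrol(:, b));
            end
        end
    end
end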
Temperature Decrement
In this research we will use an alternative to the cooling methods described in
Subsection 2.1.3 to decrement the temperature. This method was first sug-
gested by Lundy [32] in 1986, and it consists of performing only one iteration
at each temperature while decreasing the temperature very slowly. The formula
that illustrates this method is the following:

T_{i+1} = T_i / (1 + β · T_i)    (4.2)

where β is a suitably small positive constant.
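In code, this rule is a one-line update inside the SA loop; beta below stands for the small positive constant of Equation 4.2 (the smaller it is, the slower the cooling), and the values shown are purely illustrative.

% One SA iteration per temperature, with the Lundy decrement of Equation 4.2.
T = 100;  Tfinal = 0.1;  beta = 1e-3;
while T > Tfinal
    % ... generate one neighbor and apply the acceptance test of Equation 2.1 here ...
    T = T / (1 + beta * T);   % very slow temperature decrease
end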
Final Temperature
Neighborhood structure
In SA, a neighbor solution s' of s is usually any acceptable solution that can
be reached from s. In the context of the scheduling problem, a neighbor s'
of the current schedule s is another schedule where the exams have been
distributed differently starting from s. This works in practice, but we have
improved it by constraining some schedule configurations which return very
high and impractical costs. This was done by adding these configurations to
a black-list in such a way that whenever such configurations appear during
the running time of the algorithm, they are skipped and a search for new
neighbors with a different configuration is launched. Although this is naturally
controlled in the SA algorithm by the acceptance probability of neighbor
solutions, constraining such high-cost solutions can save many unsatisfactory
iterations and therefore a considerable amount of processing time.
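A neighbor can be generated by swapping two exams between randomly chosen cells of Sched and retrying whenever the move produces a configuration stored in the black-list. The sketch below keeps the black-list in a containers.Map keyed by a string encoding of the schedule; all names are illustrative and the details (for instance how configurations end up in the black-list) are simplified.

function sNew = randomNeighbor(Sched, blacklist)
% RANDOMNEIGHBOR  Swap two exams between two random cells of the schedule,
% skipping configurations that appear in the black-list.
    while true
        sNew = Sched;
        d1 = randi(size(Sched, 1));  p1 = randi(size(Sched, 2));
        d2 = randi(size(Sched, 1));  p2 = randi(size(Sched, 2));
        a  = randi(numel(sNew{d1, p1}));          % position of the first exam
        b  = randi(numel(sNew{d2, p2}));          % position of the second exam
        tmp = sNew{d1, p1}(a);                    % swap the two selected exams
        sNew{d1, p1}(a) = sNew{d2, p2}(b);
        sNew{d2, p2}(b) = tmp;
        if ~isKey(blacklist, scheduleKey(sNew))   % accept only non-black-listed moves
            return;
        end
    end
end

function key = scheduleKey(Sched)
% Encode the schedule as a character key so it can index a containers.Map.
    cells = cellfun(@sort, Sched(:)', 'UniformOutput', false);
    key = mat2str([cells{:}]);
end

The caller would create the black-list once, for example blacklist = containers.Map('KeyType', 'char', 'ValueType', 'logical'), and add the key of any configuration whose cost turns out to be impractically high.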
4.2 Empirical Results (SA)

Figure 4.2: Cost versus temperature for the basic SA run.
We can see that the cost starts at a high value (equal to 78) when the
temperature is near 100, and it drops gradually with the temperature until it
reaches a value equal to 3. At some points of the plot the cost increases even
though the temperature is decreasing. These are exactly the uphill steps
that appear in SA, where worse moves are allowed in order to escape from
local minima. One more thing to notice in the plot is that the probability
of accepting a worse move decreases as the temperature decreases, just
as expected from Equation 2.1.
Another plot is shown in Figure 4.3, where the SA algorithm is started
from the same initial configuration and run within the same range of tem-
peratures, but now using the improved version that constrains infeasible
neighbor configurations.
We can see that the difference between the costs at adjacent temperatures
is narrower than in Figure 4.2. Even though the final cost reached is the
same, the cost now drops more steeply with the temperature. This results
from the fact that many non-useful iterations were avoided due to the
restriction of infeasible schedule configurations.
Figure 4.3: Cost versus temperature with constrained neighbor configurations.
Chapter 5
ACO Applied to the Scheduling Problem

5.1 Our Approach Using ACO
The pheromone values are stored in a matrix PhMatrix that encodes
which exams are feasible to be placed in the same set S_ij of the schedule.
We know that we can have as many exams in each S_ij as there are available
rooms.
PhMatrix is first initialized so that the values τ_ij, i, j ∈ {1, ..., n}, are all
equal to 1. The attractiveness η_ij is defined as follows:
η_ij = 1 / ReturnConflicts(E_i, E_j)    (5.1)
The probability that an ant places exam E_j next in the set it is currently filling is then:

p_ij = (τ_ij · η_ij) / Σ_z (τ_iz · η_iz)    (5.2)
5.2 Pheromone Update

The negative pheromone update is defined as:

τ_ij = (1 − φ) · τ_ij^old    (5.3)

where φ ∈ (0, 1] is the pheromone decay coefficient, and τ_ij^old is the old value
of the pheromone. This negative pheromone update can be applied directly
after the pheromone initialization, between the exams resulting in conflicting
configurations, and therefore before the ants start building their solutions. This
will ensure that for ant k the probability of choosing these exams in the same
set is very low.
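The initialization of PhMatrix together with the negative update of Equation 5.3 for conflicting exam pairs could be sketched as follows; phi is the decay coefficient and Enrol the binary enrolment matrix used in the earlier sketches (names reused for illustration, not taken from the thesis code).

function PhMatrix = initPheromone(Enrol, nExams, phi)
% INITPHEROMONE  Start with tau_ij = 1 everywhere, then apply the negative
% update of Equation 5.3 to every pair of exams sharing at least one student.
    PhMatrix = ones(nExams, nExams);
    for i = 1:nExams
        for j = i+1:nExams
            if any(Enrol(:, i) & Enrol(:, j))           % conflicting pair of exams
                PhMatrix(i, j) = (1 - phi) * PhMatrix(i, j);
                PhMatrix(j, i) = PhMatrix(i, j);        % keep the matrix symmetric
            end
        end
    end
end

With phi close to 1, conflicting exams start with a pheromone value near 0, so the probability of an ant placing them in the same set is very low, as intended.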
Figure 5.1 shows the cost of the schedule at several iterations. At each iteration we start from
a different nest (source) and build a complete solution.
We can see that the cost drops significantly from a value around 1520 to
almost 5 after 100 iterations. Plot 5.1 corresponds to the solution of a schedul-
ing problem instance with a huge number of conflicts between students. We
have also run the ACO algorithm on a different instance having a lower
(though still considerable) number of conflicts between exams, and the results
are plotted in Figure 5.2. The initial solution has a cost equal to 20, and it
drops to 0 after only 5 iterations; an optimal solution was therefore found
in this case.
Figure 5.2: Cost per iteration for the second problem instance.
The results with the lowest cost were recorded at each trial, together with
their corresponding CPU running times. The standard deviation (σ) of every
5 trials was also calculated. All the trials were run on a Core 2 Duo PC with
a 2.0 GHz CPU and 2 GB of RAM. The results are shown in Table 5.1
below.
1. The running times of ACO are better than those of SA in all 15 trials.
SA takes more time to discover and evaluate the neighbor solutions
at each iteration (temperature). ACO uses information from prior
iterations to guide subsequent colonies to new states (neighbors),
which reduces the processing time needed to calculate the cost of such
moves.
2. ACO found the least-cost solution in all 3 timetable sizes, even though
it sometimes led to high-cost solutions compared to those found by
SA. The standard deviations of the timetable costs produced using
SA are lower than those found in the case of ACO, which means that
SA provides tight results where the costs of the solutions are close to
each other, while ACO gives broad results where the difference between
the costs can be high.
3. If we choose to use more ants in ACO, the running time will increase
and we will not get any better results, so we fixed the number of ants
4. When the number of conflicting exams chosen is so high that more
than 70% of the exams have conflicts with each other (not shown in
Table 5.1), the ACO algorithm outperforms the SA algorithm. We had
to greatly increase the number of iterations done at each temperature
(using the static strategy of temperature decrement) to allow the SA
algorithm to explore enough neighbors to find a better move (neighbor).
Chapter 6
Conclusion
Bibliography
[1] Karen I. Aardal, Stan P. M. Van Hoesel, Arie M. C. A. Koster, Carlo
Mannino, and Antonio Sassano. Models and solution techniques for
frequency assignment problems. pages 261-317, 2001.
[4] Masri Binti Ayob and Ghaith Jaradat. Hybrid ant colony systems for
course timetabling problems. In Proceedings of the 2nd conference on
data mining and optimization, Universiti Kebangsaan Malaysia, pages
120-126. IEEE, 2009.
[5] Imed Bouazizi. ARA - the ant colony based routing algorithm for MANETs.
In Proceedings of the 2002 International Conference on Parallel Process-
ing Workshops, ICPPW '02, page 79, Washington, DC, USA, 2002.
IEEE.
[7] Edmund Burke, Kirk Jackson, Jeff Kingston, and Rupert Weare. Au-
tomated university timetabling: the state of the art. The Computer
Journal, 40:565-571, 1997.
[8] Edmund K. Burke, Dave Elliman, Peter H. Ford, and Rupert F. Weare.
Examination timetabling in British universities: a survey. In Selected
papers from the First International Conference on Practice and The-
ory of Automated Timetabling, pages 76-90, London, UK, 1996.
Springer-Verlag.
[40] Krzysztof Socha, Joshua Knowles, and Michael Sampels. A max-min ant
system for the university course timetabling problem. In Proceedings of
the Third International Workshop on Ant Algorithms, ANTS '02, pages
1-13, London, UK, 2002. Springer-Verlag.
[41] Krzysztof Socha, Michael Sampels, and Max Manfrin. Ant algorithms
for the university course timetabling problem with regard to the state-
of-the-art. In Proceedings of the Third European Workshop on Evolutionary
Computation in Combinatorial Optimization (EvoCOP 2003), pages 334-345.
Springer-Verlag, 2003.
[42] Thomas Stützle and Marco Dorigo. ACO algorithms for the quadratic as-
signment problem. In New Ideas in Optimization, pages 33-50. McGraw-
Hill.
[45] Peter J. M. Van Laarhoven, Emile H. L. Aarts, and Jan Karel Lenstra.
Job shop scheduling by simulated annealing. Operations Research,
40(1):113-125, January 1992.
[48] Qinghong Wu, Zongmin Ma, and Ying Zhang. Current status of ant
colony optimization algorithm applications. In Proceedings of the 2010
International Conference on Web Information Systems and Mining -
Volume 02, WISM '10, pages 305-308, Washington, DC, USA, 2010.
IEEE Computer Society.
[49] Jen-Yu Huang. Using ant colony optimization to solve train timetabling
problem of mass rapid transit. Journal of Computer Information
Systems, 2006.