Learning Combinatorial Optimization Algorithms over Graphs
Abstract
1 Introduction
Combinatorial optimization problems over graphs arising from numerous application domains, such
as social networks, transportation, telecommunications and scheduling, are NP-hard, and have thus
attracted considerable interest from the theory and algorithm design communities over the years. In
fact, of Karp’s 21 problems in the seminal paper on reducibility [19], 10 are decision versions of graph
optimization problems, while most of the other 11 problems, such as set covering, can be naturally
formulated on graphs. Traditional approaches to tackling an NP-hard graph optimization problem
have three main flavors: exact algorithms, approximation algorithms and heuristics. Exact algorithms
are based on enumeration or branch-and-bound with an integer programming formulation, but may
be prohibitive for large instances. On the other hand, polynomial-time approximation algorithms are
desirable, but may suffer from weak optimality guarantees or empirical performance, or may not even
exist for inapproximable problems. Heuristics are often fast, effective algorithms that lack theoretical
guarantees, and may also require substantial problem-specific research and trial-and-error on the part
of algorithm designers.
All three paradigms seldom exploit a common trait of real-world optimization problems: instances
of the same type of problem are solved again and again on a regular basis, maintaining the same
combinatorial structure, but differing mainly in their data. That is, in many applications, values of
the coefficients in the objective function or constraints can be thought of as being sampled from the
same underlying distribution. For instance, an advertiser on a social network targets a limited set of
users with ads, in the hope that they spread them to their neighbors; such covering instances need
to be solved repeatedly, since the influence pattern between neighbors may be different each time.
Alternatively, a package delivery company routes trucks on a daily basis in a given city; thousands of
similar optimizations need to be solved, since the underlying demand locations can differ.
[Figure 1: Illustration of the proposed framework: the instance graph is embedded (ReLU layers with parameters Θ; 1st iteration shown), and the greedy policy adds the best node to the partial solution.]
Recently, there has been some seminal work on using deep architectures to learn heuristics for
combinatorial problems, including the Traveling Salesman Problem [37, 6, 14]. However, the
architectures used in these works are generic, not yet effectively reflecting the combinatorial structure
of graph problems. As we show later, these architectures often require a huge number of instances in
order to learn to generalize to new ones. Furthermore, existing works typically use the policy gradient
for training [6], a method that is not particularly sample-efficient. While the methods in [37, 6] can
be used on graphs with different sizes – a desirable trait – they require manual, ad-hoc input/output
engineering to do so (e.g. padding with zeros).
In this paper, we address the challenge of learning algorithms for graph problems using a unique
combination of reinforcement learning and graph embedding. The learned policy behaves like a
meta-algorithm that incrementally constructs a solution, with the action being determined by a graph
embedding network over the current state of the solution. More specifically, our proposed solution
framework is different from previous work in the following aspects:
1. Algorithm design pattern. We will adopt a greedy meta-algorithm design, whereby a feasible
solution is constructed by successive addition of nodes based on the graph structure, and is maintained
so as to satisfy the problem’s graph constraints. Greedy algorithms are a popular pattern for designing
approximation and heuristic algorithms for graph problems. As such, the same high-level design can
be seamlessly used for different graph optimization problems.
2. Algorithm representation. We will use a graph embedding network, called structure2vec
(S2V) [9], to represent the policy in the greedy algorithm. This novel deep learning architecture
over the instance graph “featurizes” the nodes in the graph, capturing the properties of a node in the
context of its graph neighborhood. This allows the policy to discriminate among nodes based on
their usefulness, and generalizes to problem instances of different sizes. This contrasts with recent
approaches [37, 6] that adopt a graph-agnostic sequence-to-sequence mapping that does not fully
exploit graph structure.
3. Algorithm training. We will use fitted Q-learning to learn a greedy policy that is parametrized
by the graph embedding network. The framework is set up in such a way that the policy will aim
to optimize the objective function of the original problem instance directly. The main advantage of
this approach is that it can deal with delayed rewards, which here represent the remaining increase in
objective function value obtained by the greedy algorithm, in a data-efficient way; in each step of the
greedy algorithm, the graph embeddings are updated according to the partial solution to reflect new
knowledge of the benefit of each node to the final objective value. In contrast, the policy gradient
approach of [6] updates the model parameters only once w.r.t. the whole solution (e.g. the tour in
TSP).
The application of a greedy heuristic learned with our framework is illustrated in Figure 1. To
demonstrate the effectiveness of the proposed framework, we apply it to three extensively studied
graph optimization problems. Experimental results show that our framework, a single meta-learning
algorithm, efficiently learns effective heuristics for each of the problems. Furthermore, we show that
our learned heuristics preserve their effectiveness even when used on graphs much larger than the
ones they were trained on. Since many combinatorial optimization problems, such as the set covering
problem, can be explicitly or implicitly formulated on graphs, we believe that our work opens up a
new avenue for graph algorithm design and discovery with deep learning.
activated when S = V . Empirically, inserting a node u in the partial tour at the position which
increases the tour length the least is a better choice. We adopt this insertion procedure as a helper
function for TSP.
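As an illustration, the following is a minimal Python sketch of such an insertion helper, under the assumption that the partial tour is a list of node indices interpreted as a cycle and that `dist(a, b)` returns the edge cost; the function name and signature are ours, not the authors'.

```python
def insert_cheapest_position(tour, node, dist):
    """Insert `node` into the partial `tour` (a list of node ids, treated as a
    cycle) at the position that increases the total tour length the least;
    dist(a, b) returns the cost of edge (a, b)."""
    if len(tour) < 2:
        return tour + [node]
    best_pos, best_increase = None, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        increase = dist(a, node) + dist(node, b) - dist(a, b)  # extra length if inserted between a and b
        if increase < best_increase:
            best_pos, best_increase = i + 1, increase
    return tour[:best_pos] + [node] + tour[best_pos:]
```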
An estimate of the quality of the solution resulting from adding a node to partial solution S will
be determined by the evaluation function Q, which will be learned using a collection of problem
instances. This is in contrast with traditional greedy algorithm design, where the evaluation function
Q is typically hand-crafted, and requires substantial problem-specific research and trial-and-error. In
the following, we will design a powerful deep learning parameterization for the evaluation function,
$\widehat{Q}(h(S), v; \Theta)$, with parameters $\Theta$.
where $\theta_1 \in \mathbb{R}^p$, $\theta_2, \theta_3 \in \mathbb{R}^{p \times p}$ and $\theta_4 \in \mathbb{R}^p$ are the model parameters, and relu is the rectified linear unit ($\mathrm{relu}(z) = \max(0, z)$) applied elementwise to its input. The summation over neighbors is one way of aggregating neighborhood information that is invariant to permutations over neighbors. For simplicity of exposition, $x_v$ here is a binary scalar as described earlier; it is straightforward to extend $x_v$ to a vector representation by incorporating any additional useful node information. To make the nonlinear transformations more powerful, we can add some more layers of relu before we pool over the neighboring embeddings $\mu_u$.
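As a hedged illustration of the embedding update described above, the sketch below performs one round of neighborhood aggregation consistent with the parameter shapes just listed ($\theta_1, \theta_4 \in \mathbb{R}^p$; $\theta_2, \theta_3 \in \mathbb{R}^{p \times p}$), assuming dense adjacency and edge-weight matrices; all variable and function names are ours, not the released implementation.

```python
import numpy as np

def s2v_iteration(mu, x, W, adj, theta1, theta2, theta3, theta4):
    """One round of neighborhood aggregation over all nodes at once.
    mu:  (n, p) current node embeddings
    x:   (n,)   binary node tags (1 if the node is in the partial solution)
    W:   (n, n) edge weights, adj: (n, n) 0/1 adjacency matrix
    theta1, theta4: (p,)   theta2, theta3: (p, p)"""
    relu = lambda z: np.maximum(z, 0.0)
    agg_mu = adj @ mu                               # sum of neighbor embeddings for each node
    agg_w = (adj[:, :, None] * relu(W[:, :, None] * theta4)).sum(axis=1)  # sum over neighbors of relu(theta4 * w(v, u))
    return relu(np.outer(x, theta1) + agg_mu @ theta2.T + agg_w @ theta3.T)
```

Starting from an initial set of embeddings, this update would be applied for $T$ rounds before the embeddings are consumed downstream.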
Once the embedding for each node is computed after $T$ iterations, we will use these embeddings to define the $\widehat{Q}(h(S), v; \Theta)$ function. More specifically, we will use the embedding $\mu_v^{(T)}$ for node $v$ and the pooled embedding over the entire graph, $\sum_{u \in V} \mu_u^{(T)}$, as the surrogates for $v$ and $h(S)$, respectively, i.e.
$$\widehat{Q}(h(S), v; \Theta) = \theta_5^\top \mathrm{relu}\Big(\Big[\theta_6 \sum_{u \in V} \mu_u^{(T)},\ \theta_7 \mu_v^{(T)}\Big]\Big) \qquad (4)$$
where $\theta_5 \in \mathbb{R}^{2p}$, $\theta_6, \theta_7 \in \mathbb{R}^{p \times p}$ and $[\cdot, \cdot]$ is the concatenation operator. Since the embedding $\mu_u^{(T)}$ is computed based on the parameters from the graph embedding network, $\widehat{Q}(h(S), v)$ will depend on a collection of 7 parameters $\Theta = \{\theta_i\}_{i=1}^{7}$. The number of iterations $T$ for the graph embedding computation is usually small, such as $T = 4$.
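For concreteness, the following is a minimal NumPy sketch of equation (4); the function name and argument layout are illustrative, not the authors' code.

```python
import numpy as np

def q_hat(mu_T, v, theta5, theta6, theta7):
    """Score for adding node v, following equation (4).
    mu_T:   (n, p) node embeddings after T iterations
    theta5: (2p,)  theta6, theta7: (p, p)"""
    relu = lambda z: np.maximum(z, 0.0)
    pooled = mu_T.sum(axis=0)                                        # sum_{u in V} mu_u^{(T)}, surrogate for h(S)
    features = np.concatenate([theta6 @ pooled, theta7 @ mu_T[v]])   # [. , .] concatenation
    return float(theta5 @ relu(features))                            # scalar score for node v
```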
The parameters ⇥ will be learned. Previously, [9] required a ground truth label for every input
graph G in order to train the structure2vec architecture. There, the output of the embedding
is linked with a softmax-layer, so that the parameters can be trained end-to-end by minimizing the
cross-entropy loss. This approach is not applicable to our case due to the lack of training labels.
Instead, we train these parameters together end-to-end using reinforcement learning.
4 Training: Q-learning
We show how reinforcement learning is a natural framework for learning the evaluation function $\widehat{Q}$.
The definition of the evaluation function $\widehat{Q}$ naturally lends itself to a reinforcement learning (RL) formulation [36], and we will use $\widehat{Q}$ as a model for the state-value function in RL. We note that we would like to learn a function $\widehat{Q}$ across a set of $m$ graphs from distribution $\mathbb{D}$, $D = \{G_i\}_{i=1}^m$, with
potentially different sizes. The advantage of the graph embedding parameterization in our previous
section is that we can deal with different graph instances and sizes seamlessly.
4.1 Reinforcement learning formulation
We define the states, actions and rewards in the reinforcement learning framework as follows:
1. States: a state $S$ is a sequence of actions (nodes) on a graph $G$. Since we have already represented nodes in the tagged graph with their embeddings, the state is a vector in $p$-dimensional space, $\sum_{v \in V} \mu_v$. It is easy to see that this embedding representation of the state can be used across different graphs. The terminal state $\widehat{S}$ will depend on the problem at hand;
2. Transition: transition is deterministic here, and corresponds to tagging the node $v \in G$ that was selected as the last action with feature $x_v = 1$;
3. Actions: an action $v$ is a node of $G$ that is not part of the current state $S$. Similarly, we will represent actions as their corresponding $p$-dimensional node embedding $\mu_v$, and such a definition is applicable across graphs of various sizes;
4. Rewards: the reward function $r(S, v)$ at state $S$ is defined as the change in the cost function after taking action $v$ and transitioning to a new state $S' := (S, v)$. That is,
$$r(S, v) = c(h(S'), G) - c(h(S), G), \qquad (5)$$
and $c(h(\emptyset), G) = 0$. As such, the cumulative reward $R$ of a terminal state $\widehat{S}$ coincides exactly with the objective function value of $\widehat{S}$, i.e. $R(\widehat{S}) = \sum_{i=1}^{|\widehat{S}|} r(S_i, v_i)$ is equal to $c(h(\widehat{S}), G)$;
5. Policy: based on $\widehat{Q}$, a deterministic greedy policy $\pi(v|S) := \arg\max_{v' \notin S} \widehat{Q}(h(S), v')$ will be used. Selecting action $v$ corresponds to adding a node of $G$ to the current partial solution, which results in collecting a reward $r(S, v)$. A schematic rollout combining these components is sketched below.
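The rollout sketch below ties the components above together; `compute_embeddings`, `q_hat`, `reward`, `is_terminal` and the `graph` object with a `.nodes` attribute are placeholders for the pieces defined above and in Section 3, not the authors' implementation.

```python
def greedy_episode(graph, compute_embeddings, q_hat, reward, is_terminal):
    """Roll out one episode of the greedy meta-algorithm: repeatedly add the
    node with the highest Q_hat score among nodes not yet in the solution."""
    S, total_reward = [], 0.0
    while not is_terminal(S, graph):
        mu = compute_embeddings(graph, S)                # re-embed nodes under the current partial solution
        candidates = [v for v in graph.nodes if v not in S]
        v = max(candidates, key=lambda u: q_hat(mu, u))  # pi(v|S): argmax over nodes not in S
        total_reward += reward(S, v, graph)              # r(S, v) = c(h(S + v), G) - c(h(S), G)
        S.append(v)
    return S, total_reward                               # total_reward equals c(h(S_hat), G)
```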
Table 1 shows the instantiations of the reinforcement learning framework for the three optimization
problems considered herein. We let $Q^*$ denote the optimal Q-function for each RL problem. Our graph embedding parameterization $\widehat{Q}(h(S), v; \Theta)$ from Section 3 will then be a function approximation
model for it, which will be learned via n-step Q-learning.
4.2 Learning algorithm
In order to perform end-to-end learning of the parameters in $\widehat{Q}(h(S), v; \Theta)$, we use a combination of n-step Q-learning [36] and fitted Q-iteration [33], as illustrated in Algorithm 1.
Table 1: Definition of reinforcement learning components for each of the three problems considered.

| Problem | State | Action | Helper function | Reward | Termination |
| MVC | subset of nodes selected so far | add node to subset | None | -1 | all edges are covered |
| MAXCUT | subset of nodes selected so far | add node to subset | None | change in cut weight | cut weight cannot be improved |
| TSP | partial tour | grow tour by one node | Insertion operation | change in tour cost | tour includes all nodes |
We use the term episode to refer to a complete sequence of node additions starting from an empty solution, and until termination; a step within an episode is a single action (node addition).
Standard (1-step) Q-learning updates the function approximator’s parameters at each step of an
episode by performing a gradient step to minimize the squared loss:
$$\big(y - \widehat{Q}(h(S_t), v_t; \Theta)\big)^2, \qquad (6)$$
where $y = \max_{v'} \widehat{Q}(h(S_{t+1}), v'; \Theta) + r(S_t, v_t)$ for a non-terminal state $S_t$. The n-step Q-learning
helps deal with the issue of delayed rewards, where the final reward of interest to the agent is only
received far in the future during an episode. In our setting, the final objective value of a solution is
only revealed after many node additions. As such, the 1-step update may be too myopic. A natural
extension of 1-step Q-learning is to wait n steps before updating the approximator’s parameters, so
as to collect a more accurate estimate of the future rewards. Formally, the update is over the same
squared loss (6), but with a different target, $y = \sum_{i=0}^{n-1} r(S_{t+i}, v_{t+i}) + \max_{v'} \widehat{Q}(h(S_{t+n}), v'; \Theta)$.
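As a small illustration, the n-step target can be assembled as in the sketch below; the `q_hat_max_at` callable stands in for $\max_{v'} \widehat{Q}(h(S_{t+n}), v'; \Theta)$ and, like the function name, is an assumption of this sketch.

```python
def n_step_target(rewards, q_hat_max_at, t, n):
    """n-step target: y = sum_{i=0}^{n-1} r(S_{t+i}, v_{t+i})
                        + max_{v'} Q_hat(h(S_{t+n}), v'; Theta).
    rewards:       per-step rewards collected during the episode
    q_hat_max_at:  callable returning the max Q_hat estimate at a given step
                   (taken to be 0 when S_{t+n} is terminal)."""
    return sum(rewards[t:t + n]) + q_hat_max_at(t + n)
```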
The fitted Q-iteration approach has been shown to result in faster learning convergence when using
a neural network as a function approximator [33, 28], a property that also applies in our setting, as
we use the embedding defined in Section 3.2. Instead of updating the Q-function sample-by-sample
as in Equation (6), the fitted Q-iteration approach uses experience replay to update the function
approximator with a batch of samples from a dataset E, rather than the single sample being currently
experienced. The dataset E is populated during previous episodes, such that at step t + n, the tuple
$(S_t, a_t, R_{t,t+n}, S_{t+n})$ is added to $E$, with $R_{t,t+n} = \sum_{i=0}^{n-1} r(S_{t+i}, a_{t+i})$. Instead of performing
a gradient step in the loss of the current sample as in (6), stochastic gradient descent updates are
performed on a random sample of tuples drawn from E.
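A minimal sketch of such a replay mechanism is shown below; the class layout, method names and capacity are illustrative assumptions rather than the released implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Holds (S_t, a_t, R_{t,t+n}, S_{t+n}) tuples from past episodes and
    serves random minibatches for fitted Q-iteration updates."""
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, n_step_return, next_state):
        self.buffer.append((state, action, n_step_return, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

In training, a tuple is added at step t + n of each episode and a stochastic gradient step on the squared loss (6) is then taken over a sampled minibatch rather than over the single transition currently being experienced.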
It is known that off-policy reinforcement learning algorithms such as Q-learning can be more sample
efficient than their policy gradient counterparts [15]. This is largely due to the fact that policy gradient
methods require on-policy samples for the new policy obtained after each parameter update of the
function approximator.
5 Experimental Evaluation
Instance generation. To evaluate the proposed method against other approximation/heuristic algo-
rithms and deep learning approaches, we generate graph instances for each of the three problems.
For the MVC and MAXCUT problems, we generate Erdős–Rényi (ER) [11] and Barabási–Albert (BA) [1] graphs, which have been used to model many real-world networks. For a given range on the
number of nodes, e.g. 50-100, we first sample the number of nodes uniformly at random from that
range, then generate a graph according to either ER or BA. For the two-dimensional TSP problem,
we use an instance generator from the DIMACS TSP Challenge [18] to generate uniformly random
or clustered points in the 2-D grid. We refer the reader to Appendix D.1 for complete details on
instance generation. We have also tackled the Set Covering Problem, for which the description and
results are deferred to Appendix B.
Structure2Vec Deep Q-learning. For our method, S2V-DQN, we use the graph representations and
hyperparameters described in Appendix D.4. The hyperparameters are selected via preliminary results
on small graphs, and then fixed for large ones. Note that for TSP, where the graph is fully-connected,
we build the K-nearest neighbor graph (K = 10) to scale up to large graphs. For MVC, where
we train the model on graphs with up to 500 nodes, we use the model trained on small graphs as
initialization for training on larger ones. We refer to this trick as “pre-training”, which is illustrated in
Figure D.2.
Pointer Networks with Actor-Critic. We compare our method to an approach based on Recurrent Neural Networks (RNNs) which does not make full use of graph structure [6]. We implement and train their algorithm (PN-AC) for all three problems. The original model only works on the Euclidean TSP problem, where each node is represented by its (x, y) coordinates, and is not designed
for problems with graph structure. To handle other graph problems, we describe each node by its
adjacency vector instead of coordinates. To handle different graph sizes, we use a singular value
decomposition (SVD) to obtain a rank-8 approximation for the adjacency matrix, and use the low-rank
embeddings as inputs to the pointer network.
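The low-rank input construction can be sketched as follows; whether the singular values are folded into the features exactly this way is our assumption, and the function name is illustrative.

```python
import numpy as np

def rank8_node_features(adjacency, rank=8):
    """Build per-node input features for the pointer network from a rank-8
    SVD of the adjacency matrix A ~= U S V^T (one feature row per node)."""
    U, s, _ = np.linalg.svd(adjacency, full_matrices=False)
    r = min(rank, len(s))          # guard for graphs with fewer than 8 nodes
    return U[:, :r] * s[:r]        # scale left singular vectors by singular values
```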
Baseline Algorithms. Besides the PN-AC, we also include powerful approximation or heuristic
algorithms from the literature. These algorithms are specifically designed for each type of problem:
• MVC: MVCApprox iteratively selects an uncovered edge and adds both of its endpoints [30]. We designed a stronger variant, called MVCApprox-Greedy, that greedily picks the uncovered edge with the maximum sum of degrees of its endpoints (a sketch of this variant is given after this list). Both algorithms are 2-approximations.
• MAXCUT: We include MaxcutApprox, which maintains the cut set (S, V \ S) and moves a node
from one side to the other side of the cut if that operation results in cut weight improvement [25].
To make MaxcutApprox stronger, we greedily move the node that results in the largest improvement
in cut weight. A randomized, non-greedy algorithm, referred to as SDP, is also implemented based
on [12]; 100 solutions are generated for each graph, and the best one is taken.
• TSP: We include the following approximation algorithms: Minimum Spanning Tree (MST),
Farthest insertion (Farthest), Cheapest insertion (Cheapest), Closest insertion (Closest), Christofides
and 2-opt. We also add the Nearest Neighbor heuristic (Nearest); see [4] for algorithmic details.
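As referenced in the MVC bullet above, the following is a minimal sketch of the MVCApprox-Greedy variant; here endpoint degrees are taken in the original graph, which is one reasonable reading of the description, and all names are ours.

```python
def mvc_approx_greedy(nodes, edges):
    """Greedy 2-approximation for Minimum Vertex Cover: repeatedly pick the
    uncovered edge whose endpoints have the largest degree sum and add both
    endpoints to the cover."""
    degree = {v: 0 for v in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    cover, uncovered = set(), {tuple(e) for e in edges}
    while uncovered:
        u, v = max(uncovered, key=lambda e: degree[e[0]] + degree[e[1]])
        cover.update((u, v))
        uncovered = {e for e in uncovered if e[0] not in cover and e[1] not in cover}
    return cover
```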
Details on Validation and Testing. For S2V-DQN and PN-AC, we use a CUDA K80-enabled cluster
for training and testing. Training convergence for S2V-DQN is discussed in Appendix D.6. S2V-DQN
and PN-AC use 100 held-out graphs for validation, and we report the test results on another 1000
graphs. We use CPLEX [17] to get optimal solutions for MVC and MAXCUT, and Concorde [3] for
TSP (details in Appendix D.1). All approximation ratios reported in the paper are with respect to the
best (possibly optimal) solution found by the solvers within 1 hour. For MVC, we vary the training
and test graph sizes in the ranges {15–20, 40–50, 50–100, 100–200, 400–500}. For MAXCUT and
TSP, which involve edge weights, we train on up to 200–300 nodes due to limited computational resources. For all problems, we test on graphs of size up to 1000–1200.
During testing, instead of using Active Search as in [6], we simply use the greedy policy. This gives
us much faster inference, while still being powerful enough. We modify existing open-source code to
implement both S2V-DQN (https://github.com/Hanjun-Dai/graphnn) and PN-AC (https://github.com/devsisters/pointer-network-tensorflow). Our code is publicly available at https://github.com/Hanjun-Dai/graph_comb_opt.
[Figure 2: Approximation ratio on 1000 test graphs; panels: (a) MVC BA, (b) MAXCUT BA, (c) TSP random. Note that on MVC, our performance is pretty close to optimal. In this figure, training and testing graphs are generated according to the same distribution.]
Figure 2 shows the average approximation ratio across the three problems; other graph types are in
Figure D.1 in the appendix. In all of these figures, a lower approximation ratio is better. Overall,
our proposed method, S2V-DQN, performs significantly better than other methods. In MVC, the
performance of S2V-DQN is particularly good, as the approximation ratio is roughly 1 and the bar is
barely visible.
The PN-AC algorithm performs well on TSP, as expected. Since the TSP graph is essentially fully-
connected, graph structure is not as important. On problems such as MVC and MAXCUT, where
graph information is more crucial, our algorithm performs significantly better than PN-AC. For TSP, the Farthest and 2-opt algorithms perform as well as S2V-DQN, and slightly better in some cases.
However, we will show later that in real-world TSP data, our algorithm still performs better.
Table 2: S2V-DQN’s generalization ability. Values are average approximation ratios over 1000 test instances. These test results are produced by S2V-DQN algorithms trained on graphs with 50-100 nodes.

| Test Size | 50-100 | 100-200 | 200-300 | 300-400 | 400-500 | 500-600 | 1000-1200 |
| MVC (BA) | 1.0033 | 1.0041 | 1.0045 | 1.0040 | 1.0045 | 1.0048 | 1.0062 |
| MAXCUT (BA) | 1.0150 | 1.0181 | 1.0202 | 1.0188 | 1.0123 | 1.0177 | 1.0038 |
| TSP (clustered) | 1.0730 | 1.0895 | 1.0869 | 1.0918 | 1.0944 | 1.0975 | 1.1065 |
We can see that S2V-DQN achieves a very good approximation ratio. Note that the “optimal” value used in the computation of approximation ratios may not be truly optimal (due to the solver time cutoff at 1 hour), and CPLEX’s solutions typically get worse as problem size grows. This is why we can sometimes obtain a better approximation ratio on larger graphs.
5.3 Scalability & Trade-off between running time and approximation ratio
To construct a solution on a test graph, our algorithm has polynomial complexity of O(k|E|), where k is the number of greedy steps (at most the number of nodes |V|) and |E| is the number of edges. For instance,
on graphs with 1200 nodes, we can find the solution of MVC within 11 seconds using a single GPU,
while getting an approximation ratio of 1.0062. For dense graphs, we can also sample the edges for
the graph embedding computation to save time, a measure we will investigate in the future.
Figure 3 illustrates the approximation ratios of various approaches as a function of running time.
All algorithms report a single solution at termination, whereas CPLEX reports multiple improving
solutions, for which we recorded the corresponding running time and approximation ratio. Figure D.3
(Appendix D.7) includes other graph sizes and types, where the results are consistent with Figure 3.
Figure 3 shows that, for MVC, we are slightly slower than the approximation algorithms but enjoy a
much better approximation ratio. Also note that although CPLEX found the first feasible solution
quickly, it has a much worse ratio; the second improved solution found by CPLEX takes a similar or longer time than our S2V-DQN, but is still of worse quality. For MAXCUT, the observations are still
consistent. One should be aware that sometimes our algorithm can obtain better results than 1-hour
CPLEX, which gives ratios below 1.0. Furthermore, sometimes S2V-DQN is even faster than the
MaxcutApprox, although this comparison is not exactly fair, since we use GPUs; however, we can still see that our algorithm is efficient.

[Figure 3: Time-approximation trade-off for MVC and MAXCUT; panels: (a) MVC BA 200-300, (b) MAXCUT BA 200-300. In this figure, each dot represents a solution found for a single problem instance, for 100 instances. For CPLEX, we also record the time and quality of each solution it finds, e.g. CPLEX-1st means the first feasible solution found by CPLEX.]
5.4 Experiments on real-world datasets
In addition to the experiments on synthetic data, we identified sets of publicly available benchmark
or real-world instances for each problem, and performed experiments on them. A summary of results
is in Table 3, and details are given in Appendix C. S2V-DQN significantly outperforms all competing
methods for MVC, MAXCUT and TSP.
Table 3: Realistic data experiments, results summary. Values are average approximation ratios.

| Problem | Dataset | S2V-DQN | Best Competitor | 2nd Best Competitor |
| MVC | MemeTracker | 1.0021 | 1.2220 (MVCApprox-Greedy) | 1.4080 (MVCApprox) |
| MAXCUT | Physics | 1.0223 | 1.2825 (MaxcutApprox) | 1.8996 (SDP) |
| TSP | TSPLIB | 1.0475 | 1.0800 (Farthest) | 1.0947 (2-opt) |
References
[1] Albert, Réka and Barabási, Albert-László. Statistical mechanics of complex networks. Reviews
of modern physics, 74(1):47, 2002.
[2] Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David,
Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent.
In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016.
[3] Applegate, David, Bixby, Robert, Chvatal, Vasek, and Cook, William. Concorde TSP solver,
2006.
[4] Applegate, David L, Bixby, Robert E, Chvatal, Vasek, and Cook, William J. The traveling
salesman problem: a computational study. Princeton university press, 2011.
[5] Balas, Egon and Ho, Andrew. Set covering algorithms using cutting planes, heuristics, and
subgradient optimization: a computational study. Combinatorial Optimization, pp. 37–60, 1980.
[6] Bello, Irwan, Pham, Hieu, Le, Quoc V, Norouzi, Mohammad, and Bengio, Samy. Neural
combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940,
2016.
[7] Boyan, Justin and Moore, Andrew W. Learning evaluation functions to improve optimization
by local search. Journal of Machine Learning Research, 1(Nov):77–112, 2000.
[8] Chen, Yutian, Hoffman, Matthew W, Colmenarejo, Sergio Gomez, Denil, Misha, Lillicrap,
Timothy P, and de Freitas, Nando. Learning to learn for global optimization of black box
functions. arXiv preprint arXiv:1611.03824, 2016.
[9] Dai, Hanjun, Dai, Bo, and Song, Le. Discriminative embeddings of latent variable models for
structured data. In ICML, 2016.
[10] Du, Nan, Song, Le, Gomez-Rodriguez, Manuel, and Zha, Hongyuan. Scalable influence
estimation in continuous-time diffusion networks. In NIPS, 2013.
[11] Erdős, Paul and Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad.
Sci, 5:17–61, 1960.
[12] Goemans, M.X. and Williamson, D. P. Improved approximation algorithms for maximum
cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):
1115–1145, 1995.
[13] Gomez-Rodriguez, Manuel, Leskovec, Jure, and Krause, Andreas. Inferring networks of
diffusion and influence. In Proceedings of the 16th ACM SIGKDD international conference on
Knowledge discovery and data mining, pp. 1019–1028. ACM, 2010.
[14] Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-
Barwińska, Agnieszka, Colmenarejo, Sergio Gómez, Grefenstette, Edward, Ramalho, Tiago,
Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory.
Nature, 538(7626):471–476, 2016.
[15] Gu, Shixiang, Lillicrap, Timothy, Ghahramani, Zoubin, Turner, Richard E, and Levine,
Sergey. Q-prop: Sample-efficient policy gradient with an off-policy critic. arXiv preprint
arXiv:1611.02247, 2016.
[16] He, He, Daume III, Hal, and Eisner, Jason M. Learning to search in branch and bound algorithms.
In Advances in Neural Information Processing Systems, pp. 3293–3301, 2014.
[17] IBM. CPLEX User’s Manual, Version 12.6.1, 2014.
[18] Johnson, David S and McGeoch, Lyle A. Experimental analysis of heuristics for the STSP. In
The traveling salesman problem and its variations, pp. 369–443. Springer, 2007.
[19] Karp, Richard M. Reducibility among combinatorial problems. In Complexity of computer
computations, pp. 85–103. Springer, 1972.
[20] Kempe, David, Kleinberg, Jon, and Tardos, Éva. Maximizing the spread of influence through a
social network. In KDD, pp. 137–146. ACM, 2003.
[21] Khalil, Elias B., Dilkina, B., and Song, L. Scalable diffusion-aware optimization of network
topology. In Knowledge Discovery and Data Mining (KDD), 2014.
[22] Khalil, Elias B., Le Bodic, Pierre, Song, Le, Nemhauser, George L, and Dilkina, Bistra N.
Learning to branch in mixed integer programming. In AAAI, pp. 724–731, 2016.
[23] Khalil, Elias B., Dilkina, Bistra, Nemhauser, George, Ahmed, Shabbir, and Shao, Yufen.
Learning to run heuristics in tree search. In 26th International Joint Conference on Artificial
Intelligence (IJCAI), 2017.
[24] Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[25] Kleinberg, Jon and Tardos, Eva. Algorithm design. Pearson Education India, 2006.
[26] Lagoudakis, Michail G and Littman, Michael L. Learning to select branching rules in the DPLL
procedure for satisfiability. Electronic Notes in Discrete Mathematics, 9:344–359, 2001.
[27] Li, Ke and Malik, Jitendra. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
[28] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis,
Wierstra, Daan, and Riedmiller, Martin A. Playing Atari with deep reinforcement learning.
CoRR, abs/1312.5602, 2013. URL http://arxiv.org/abs/1312.5602.
[29] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare,
Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al.
Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[30] Papadimitriou, C. H. and Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity.
Prentice-Hall, New Jersey, 1982.
[31] Peleg, David, Schechtman, Gideon, and Wool, Avishai. Approximating bounded 0-1 integer
linear programs. In Theory and Computing Systems, 1993., Proceedings of the 2nd Israel
Symposium on the, pp. 69–77. IEEE, 1993.
[32] Reinelt, Gerhard. TSPLIB—a traveling salesman problem library. ORSA Journal on Computing, 3
(4):376–384, 1991.
[33] Riedmiller, Martin. Neural fitted Q iteration – first experiences with a data efficient neural
reinforcement learning method. In European Conference on Machine Learning, pp. 317–328.
Springer, 2005.
[34] Sabharwal, Ashish, Samulowitz, Horst, and Reddy, Chandra. Guiding combinatorial optimization with UCT. In CPAIOR, pp. 356–361. Springer, 2012.
[35] Samulowitz, Horst and Memisevic, Roland. Learning to solve QBF. In AAAI, 2007.
[36] Sutton, R.S. and Barto, A.G. Reinforcement Learning: An Introduction. MIT Press, 1998.
[37] Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. In Advances in Neural
Information Processing Systems, pp. 2692–2700, 2015.
[38] Zhang, Wei and Dietterich, Thomas G. Solving combinatorial optimization tasks by reinforce-
ment learning: A general methodology applied to resource-constrained scheduling. Journal of
Artificial Intelligence Research, 1:1–38, 2000.