
Practical lesson 4.

Constrained Optimization
Degree in Data Science

Ramón Álvarez-Valdés and Juan Francisco Correcher

April 6, 2022

Introduction
In this lesson, we will work on constrained optimization exercises related to Themes 9, 10 and 11 of the theory. To do this, we will use the function nloptr.

Function nloptr (non-linear optimization with constraints)


The nloptr function is an R interface to NLopt, an open-source library for solving non-linear optimization problems that contains implementations of several algorithms and provides access to others. You need to have the nloptr package installed.
The nloptr function solves the general optimization problem:

$$\min\, f(x) \quad \text{s.t.} \quad g(x) \le 0;\; h(x) = 0;\; lb \le x \le ub$$
where f is the function to minimize, the elements of vector x are the optimization parameters (variables), g
are the inequality constraints, h are the equality constraints, lb are the lower bounds on the variables and ub
the upper bounds. As will be explained later, not all algorithms in NLopt can solve problems with all kinds
of constraints.

Use of the function nloptr


nloptr( x0, eval_f, eval_grad_f = NULL, lb = NULL, ub = NULL, eval_g_ineq = NULL,
        eval_jac_g_ineq = NULL, eval_g_eq = NULL, eval_jac_g_eq = NULL, opts = list(), ... )

Argument          Definition
x0                Vector with the initial values
eval_f            Function that returns the value of the function to minimize
eval_grad_f       Function that returns the value of the gradient of the function to minimize, if the algorithm used requires the gradient
lb                Vector with the lower bounds on the variables (if there is no lower bound, it defaults to minus infinity)
ub                Vector with the upper bounds on the variables (if there is no upper bound, it defaults to infinity)
eval_g_ineq       Function to evaluate the inequality constraints that the solution has to satisfy
eval_jac_g_ineq   Function to evaluate the Jacobian of the inequality constraints, if the algorithm used requires it
eval_g_eq         Function to evaluate the equality constraints that the solution has to satisfy
eval_jac_g_eq     Function to evaluate the Jacobian of the equality constraints, if the algorithm used requires it
opts              List with options. The option algorithm is required, and we will also include the stopping criterion

There are different stopping criteria. We will use:

maxeval: the algorithm stops when the number of function evaluations reaches that value.

Some algorithms that call another algorithm during their execution require the option local_opts, which must contain a list with the local algorithm and its stopping criterion, as in the sketch below.
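As an illustration, a minimal options list, and one with a nested local optimizer, could look like this (the values simply mirror the examples later in this lesson):

# Minimal options: the algorithm and a stopping criterion
options <- list( "algorithm" = "NLOPT_LD_LBFGS", "maxeval" = 1000 )

# AUGLAG-type algorithms also need a local optimizer, given in local_opts
local_options <- list( "algorithm" = "NLOPT_LD_LBFGS", "maxeval" = 1000 )
options_auglag <- list( "algorithm" = "NLOPT_LD_AUGLAG",
                        "maxeval" = 1000,
                        "local_opts" = local_options )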

Available algorithms in nloptr


All names start with NLOPT_. If the next letter is L, the algorithm stops at a local minimum; if it is G, it searches for the global minimum. The second letter indicates whether the method uses derivatives: D means it does, N means it does not. There are many options, but here we will only comment on those that we will use in the practice.
1. When the problem to be minimized is unconstrained, we will use:
a) When derivatives can be used: NLOPT_LD_LBFGS, a version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton algorithm, which we have seen in Theme 6, with low memory requirements.
b) When derivatives cannot be used: NLOPT_LN_NELDERMEAD, which we have seen in Theme 8.
2. When the problem is constrained, we will use the Augmented Lagrangian Method, which we have seen in Theme 11:
a) When derivatives can be used: NLOPT_LD_AUGLAG, which will internally call NLOPT_LD_LBFGS in each iteration.
b) When derivatives cannot be used: NLOPT_LN_AUGLAG, which will internally call NLOPT_LN_NELDERMEAD in each iteration.
The option print_level controls the output displayed during optimization. Possible values:
• 0: Default option. No output.
• 1: Shows the iteration number and the value of the function.
• 2: Also shows the value of the constraints.
• 3: Also shows the value of the variables.

Examples of use of nloptr


1. Unconstrained problem:
Let us consider the Rosenbrock function:
$$f(x) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2$$

The gradient of $f$ is $\nabla f(x) = \left( -400\,x_1 (x_2 - x_1^2) - 2(1 - x_1),\; 200\,(x_2 - x_1^2) \right)$.
library(nloptr)

# Definition of the function:


f_Ros <- function(x) {
  return( 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 )
}

# Definition of the gradient:
grad_Ros <- function(x) {
  return( c( -400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]), 200 * (x[2] - x[1]^2) ) )
}

# Starting point:
x_initial <- c( -1.2, 1 )

# Options (at least the algorithm option has to be used)


# In this case, we will use LBFGS, which searches for a local minimum using derivatives
options <- list("algorithm"="NLOPT_LD_LBFGS", "maxeval"=1000)

# Calling the function


res <- nloptr( x0=x_initial, eval_f=f_Ros, eval_grad_f=grad_Ros, opts=options)

# Printing the result
print(res)

##
## Call:
## nloptr(x0 = x_initial, eval_f = f_Ros, eval_grad_f = grad_Ros,
## opts = options)
##
##
## Minimization using NLopt version 2.7.1
##
## NLopt solver status: 4 ( NLOPT_XTOL_REACHED: Optimization stopped because
## xtol_rel or xtol_abs (above) was reached. )
##
## Number of Iterations....: 55
## Termination conditions: maxeval: 1000
## Number of inequality constraints: 0
## Number of equality constraints: 0
## Optimal value of objective function: 6.57984662182145e-17
## Optimal value of controls: 1 1

2. An example of a constrained problem:


Problem 71 from the Hock-Schittkowski collection:

$$\min\; x_1 x_4 (x_1 + x_2 + x_3) + x_3$$
$$\text{s.t.}\quad x_1 x_2 x_3 x_4 \ge 25;\qquad x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40;\qquad 1 \le x_1, x_2, x_3, x_4 \le 5$$

We rewrite the inequality as $25 - x_1 x_2 x_3 x_4 \le 0$ and the equality as $x_1^2 + x_2^2 + x_3^2 + x_4^2 - 40 = 0$.

Starting point: $x_0 = (1, 5, 5, 1)$.
Optimal solution: $(1.00000000, 4.74299963, 3.82114998, 1.37940829)$.

# Definition of the function
f_Hock <- function( x ) {
  return( x[1]*x[4]*(x[1] + x[2] + x[3]) + x[3] )
}

# Definition of the gradient of the function


grad_Hock <- function( x ) {
  return( c( x[4]*(x[1] + x[2] + x[3]) + x[1]*x[4],
             x[1]*x[4],
             x[1]*x[4] + 1,
             x[1]*(x[1] + x[2] + x[3]) ) )
}

# Definition of the inequality constraints


desig_Hock <- function( x ) {
  constr <- c( 25 - x[1] * x[2] * x[3] * x[4] )
  return( constr )
}

# Definition of the Jacobian of the inequality constraints


grad_desig_Hock <- function(x) {
  return( c( -x[2]*x[3]*x[4],
             -x[1]*x[3]*x[4],
             -x[1]*x[2]*x[4],
             -x[1]*x[2]*x[3] ) )
}

# Definition of the equality constraints


equal_Hock <- function( x ) {
  constr <- c( x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 - 40 )
  return( constr )
}

# Definition of the Jacobian of the equality constraints


grad_equal_Hock <- function(x) {
  return( c( 2*x[1],
             2*x[2],
             2*x[3],
             2*x[4] ) )
}

# Starting point
x_initial <- c( 1, 5, 5, 1 )

# Lower and upper bounds on the variables


lower_b <- c( 1, 1, 1, 1 )
upper_b <- c( 5, 5, 5, 5 )

# Options (in this case, we will use the Augmented Lagrangian Method)
# Local options (the Augmented Lagrangian Method needs a local optimizer,
# in this case a Quasi-Newton method)

opciones_locales <- list( "algorithm" = "NLOPT_LD_LBFGS",
                          "maxeval" = 5000 )

opciones <- list( "algorithm" = "NLOPT_LD_AUGLAG",
                  "maxeval" = 5000,
                  "local_opts" = opciones_locales,
                  "print_level" = 0 )

# Calling the function


res <- nloptr( x0=x_initial,
eval_f=f_Hock,
eval_grad_f = grad_Hock,
lb=lower_b,
ub=upper_b,
eval_g_ineq=desig_Hock,
eval_jac_g_ineq=grad_desig_Hock,
eval_g_eq=equal_Hock,
eval_jac_g_eq=grad_equal_Hock,
opts=opciones
)

# Printing the results
print(res)

##
## Call:
## nloptr(x0 = x_initial, eval_f = f_Hock, eval_grad_f = grad_Hock,
## lb = lower_b, ub = upper_b, eval_g_ineq = desig_Hock, eval_jac_g_ineq = grad_desig_Hock,
## eval_g_eq = equal_Hock, eval_jac_g_eq = grad_equal_Hock,
## opts = opciones)
##
##
## Minimization using NLopt version 2.7.1
##
## NLopt solver status: 4 ( NLOPT_XTOL_REACHED: Optimization stopped because
## xtol_rel or xtol_abs (above) was reached. )
##
## Number of Iterations....: 142
## Termination conditions: maxeval: 5000
## Number of inequality constraints: 1
## Number of equality constraints: 1
## Optimal value of objective function: 17.0140173366013
## Optimal value of controls: 1 4.743174 3.820922 1.37944
NOTE: When there is more than one constraint of the same type (equality or inequality), the functions eval_g_ineq, eval_jac_g_ineq, eval_g_eq and eval_jac_g_eq return the constraints stacked with rbind, one row per constraint.
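For illustration, here is a minimal sketch with two made-up inequality constraints, g1(x) = 25 − x1·x2 ≤ 0 and g2(x) = x1 + x2 − 10 ≤ 0 (these constraints are not from the lesson, only an example of the rbind pattern):

# Two inequality constraints, stacked with rbind (a vector built with c() also works)
eval_g_ineq_2 <- function(x) {
  rbind( 25 - x[1]*x[2],       # g1
         x[1] + x[2] - 10 )    # g2
}

# The Jacobian stacks the gradient of each constraint as a row
eval_jac_g_ineq_2 <- function(x) {
  rbind( c( -x[2], -x[1] ),    # gradient of g1
         c( 1, 1 ) )           # gradient of g2
}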

Problems
Problem 1
The figure shows a network consisting of 4 nodes and 5 arcs, representing a simplified street or road network.

Every minute a flow F enters the network at node 1 and leaves the network at node 4. Calculate the maximum flow per minute that can traverse the network, knowing that the maximum capacities of the arcs are:

$X_1, X_3, X_5 \le 10$
$X_2, X_4 \le 30$

taking into account that at the intermediate nodes the input flow must equal the output flow, and that all vehicles entering at node 1 leave through node 4.
a) Write the model of the problem.
b) Solve the model using nloptr. Since the constraints are linear, their second derivatives are zero, so it may be more appropriate to use methods that do not require derivatives, such as NLOPT_LN_NELDERMEAD, called internally by NLOPT_LN_AUGLAG. Since this method converges slowly, it has to be given a very high limit on the number of evaluations (e.g., 100000); see the options sketch after this problem. As a starting point, $x_0 = (10, 10, 10, 10, 10)$ can be used.
c) Let us now consider the travel time through the network. The time required to travel along an arc depends on the number of vehicles on that arc. The travel time of a vehicle on each arc and the flow of vehicles on that arc follow the relationship:

$$T_i = \frac{X_i}{1 - X_i/80}$$

Calculate the minimum total time used by 80 vehicles to traverse the network. The maximum capacity of each arc will now be 80.
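For part (b), the options suggested in the statement could be set up as follows (this is only the call scaffolding, with illustrative variable names; the objective and constraint functions are left for you to write):

# Derivative-free Augmented Lagrangian with Nelder-Mead as local optimizer,
# with the very high evaluation limit suggested in the statement
local_options <- list( "algorithm" = "NLOPT_LN_NELDERMEAD", "maxeval" = 100000 )
options_flow <- list( "algorithm" = "NLOPT_LN_AUGLAG",
                      "maxeval" = 100000,
                      "local_opts" = local_options )

# Starting point suggested in the statement
x0_flow <- c( 10, 10, 10, 10, 10 )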

Problem 2
Find the square of minimum area containing three squares of sides 2, 4 and 8. To guarantee a correct distribution of the small squares within the large one, the squared distance between the centers of two squares of sides $l_i$ and $l_j$ has to be greater than or equal to $(l_i + l_j)^2 / 2$.
a) Write the model of the problem, assuming that the lower left corner of the large square is at (0, 0).
b) Solve the model using nloptr. Since the function and some constraints are quadratic, but differentiable, the NLOPT_LD_AUGLAG algorithm can be used with NLOPT_LD_LBFGS as a local optimizer. The problem has many local minima and the solution obtained may depend on the initial point. Use the points (1, 1, 2, 2, 4, 4, 14) and (1, 1, 2, 8, 7, 3, 14) as initial solutions and compare the results. Set the lower bound to 0 and the upper bound to 14 for all variables. A sketch of one possible constraint encoding is given after this problem.
c) Draw the obtained solutions. Do you think that any of them will be a global minimum?
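As a hint for parts (a) and (b), here is a minimal sketch of one possible encoding, assuming the seven variables are the centers of the three small squares followed by the side L of the large square, x = (c1x, c1y, c2x, c2y, c3x, c3y, L). This matches the length of the suggested initial points, but it is only one way to formulate the model; with NLOPT_LD_AUGLAG you would also need the corresponding Jacobian, one gradient row per constraint, built with rbind as in the NOTE above.

# Illustrative sketch, assuming x = (c1x, c1y, c2x, c2y, c3x, c3y, L)
sides <- c( 2, 4, 8 )

# Objective: the area of the large square
f_area <- function(x) {
  return( x[7]^2 )
}

# Inequality constraints, all written as g(x) <= 0:
# containment of each small square in [0, L]^2 and pairwise separation of centers
desig_squares <- function(x) {
  g <- c()
  for (i in 1:3) {
    cx <- x[2*i - 1]; cy <- x[2*i]
    g <- c( g,
            sides[i]/2 - cx,          # c_ix >= l_i/2
            cx + sides[i]/2 - x[7],   # c_ix <= L - l_i/2
            sides[i]/2 - cy,          # c_iy >= l_i/2
            cy + sides[i]/2 - x[7] )  # c_iy <= L - l_i/2
  }
  for (i in 1:2) for (j in (i+1):3) {
    d2 <- (x[2*i - 1] - x[2*j - 1])^2 + (x[2*i] - x[2*j])^2
    g <- c( g, (sides[i] + sides[j])^2 / 2 - d2 )  # squared distance >= (li+lj)^2/2
  }
  return( g )
}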
