% Usage: [xopt, fmin, retcode] = nlopt_minimize_constrained
%                                (algorithm, f, f_data, fc, fc_data,
%                                 lb, ub, xinit, stop)
%
% Minimizes a nonlinear multivariable function f(x, f_data{:}), subject
% to nonlinear constraints described by fc and fc_data (see below), where
% x is a row vector, returning the optimal x found (xopt) along with
% the minimum function value (fmin = f(xopt)) and a return code (retcode).
% A variety of local and global optimization algorithms can be used,
% as specified by the algorithm parameter described below. lb and ub
% are row vectors giving the lower and upper bounds on x, xinit is
% a row vector giving the initial guess for x, and stop is a struct
% containing termination conditions (see below).
%
% This function is a front-end for the external routine
% nlopt_minimize_constrained in the free NLopt nonlinear-optimization
% library, which is a wrapper around a number of free/open-source
% optimization subroutines. More details can be found on the NLopt
% web page (ab-initio.mit.edu/nlopt) and also under
% 'man nlopt_minimize_constrained' on Unix.
%
% f should be a handle (@) to a function of the form:
%
%    [val, gradient] = f(x, ...)
%
% where x is a row vector, val is the function value f(x), and gradient
% is a row vector giving the gradient of the function with respect to x.
% The gradient is only used for gradient-based optimization algorithms;
% some of the algorithms (below) are derivative-free and only require
% f to return val (its value). f can take additional arguments (...)
% which are passed via the argument f_data: f_data is a cell array
% of the additional arguments to pass to f. (Recall that cell arrays
% are specified by curly brackets { ... }. For example, pass f_data={}
% for functions that require no additional arguments.)
%
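%
% For example, a simple quadratic objective and its gradient might be
% written as follows (an illustrative sketch; myfunc and the parameter
% a are not part of NLopt):
%
%    function [val, gradient] = myfunc(x, a)
%      val = sum((x - a).^2);     % f(x) = |x - a|^2
%      gradient = 2 * (x - a);    % row vector of partial derivatives
%    end
%
% which would be passed as f = @myfunc with f_data = {a}.
%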
% A few of the algorithms (below) support nonlinear constraints,
% in particular NLOPT_LD_MMA and NLOPT_LN_COBYLA. These (if any)
% are specified by fc and fc_data. fc is a cell array of
% function handles, and fc_data is a cell array of cell arrays of the
% corresponding arguments. Both must have the same length m, the
% number of nonlinear constraints. That is, fc{i} is a handle
% to a function of the form:
%
%    [val, gradient] = fc(x, ...)
%
% (where the gradient is only used for gradient-based algorithms),
% and the ... arguments are given by fc_data{i}{:}.
%
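%
% For example, two inequality constraints fc1(x) <= 0 and
% fc2(x, p) <= 0 (following NLopt's usual convention that a constraint
% value <= 0 means "satisfied"; fc1, fc2, and p are hypothetical names
% for illustration) would be passed as:
%
%    fc = { @fc1, @fc2 };
%    fc_data = { {}, {p} };   % fc1 takes no extra arguments
%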
% If you have no nonlinear constraints, i.e. fc = fc_data = {}, then
% it is equivalent to calling the nlopt_minimize() function, which
% omits the fc and fc_data arguments.
%
% stop describes the termination criteria, and is a struct with a
% number of optional fields:
%     stop.ftol_rel = fractional tolerance on function value
%     stop.ftol_abs = absolute tolerance on function value
%     stop.xtol_rel = fractional tolerance on x
%     stop.xtol_abs = row vector of absolute tolerances on x components
%     stop.minf_max = stop when f < minf_max is found
%     stop.maxeval = maximum number of function evaluations
%     stop.maxtime = maximum run time in seconds
%     stop.verbose = > 0 indicates verbose output
% Minimization stops when any one of these conditions is met; any
% condition that is omitted from stop will be ignored. WARNING:
% not all algorithms interpret the stopping criteria in exactly the
% same way, and in any case ftol/xtol specify only a crude estimate
% for the accuracy of the minimum function value/x.
%
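%
% For example, to stop after at most 500 function evaluations or when
% x changes by less than a relative tolerance of 1e-4 (illustrative
% values only), one could use:
%
%    stop.maxeval = 500;
%    stop.xtol_rel = 1e-4;
%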
% The algorithm should be one of the following constants (name and
% interpretation are the same as for the C function). Names with
% _G*_ are global optimization, and names with _L*_ are local
% optimization. Names with _*N_ are derivative-free, while names
% with _*D_ are gradient-based algorithms. Algorithms:
%
% NLOPT_GD_MLSL_LDS, NLOPT_GD_MLSL, NLOPT_GD_STOGO, NLOPT_GD_STOGO_RAND,
% NLOPT_GN_CRS2_LM, NLOPT_GN_DIRECT_L, NLOPT_GN_DIRECT_L_NOSCAL,
% NLOPT_GN_DIRECT_L_RAND, NLOPT_GN_DIRECT_L_RAND_NOSCAL, NLOPT_GN_DIRECT,
% NLOPT_GN_DIRECT_NOSCAL, NLOPT_GN_ISRES, NLOPT_GN_MLSL_LDS, NLOPT_GN_MLSL,
% NLOPT_GN_ORIG_DIRECT_L, NLOPT_GN_ORIG_DIRECT, NLOPT_LD_AUGLAG_EQ,
% NLOPT_LD_AUGLAG, NLOPT_LD_LBFGS, NLOPT_LD_LBFGS_NOCEDAL, NLOPT_LD_MMA,
% NLOPT_LD_TNEWTON, NLOPT_LD_TNEWTON_PRECOND,
% NLOPT_LD_TNEWTON_PRECOND_RESTART, NLOPT_LD_TNEWTON_RESTART,
% NLOPT_LD_VAR1, NLOPT_LD_VAR2, NLOPT_LN_AUGLAG_EQ, NLOPT_LN_AUGLAG,
% NLOPT_LN_BOBYQA, NLOPT_LN_COBYLA, NLOPT_LN_NELDERMEAD,
% NLOPT_LN_NEWUOA_BOUND, NLOPT_LN_NEWUOA, NLOPT_LN_PRAXIS, NLOPT_LN_SBPLX
%
% For more information on individual algorithms, see their individual
% help pages (e.g. "help NLOPT_LN_SBPLX").
%
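%
% As a sketch of a complete call (myfunc, myconstraint, a, and the
% bound/guess vectors are hypothetical names for illustration), a
% minimization with one constraint might look like:
%
%    [xopt, fmin, retcode] = nlopt_minimize_constrained(NLOPT_LD_MMA, ...
%        @myfunc, {a}, {@myconstraint}, {{}}, lb, ub, xinit, stop);
%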
function [xopt, fmin, retcode] = nlopt_minimize_constrained(algorithm, f, f_data, fc, fc_data, lb, ub, xinit, stop)

opt = stop;
if (isfield(stop, 'minf_max'))
  opt.stopval = stop.minf_max;
end
opt.algorithm = algorithm;
opt.min_objective = @(x) f(x, f_data{:});
opt.lower_bounds = lb;
opt.upper_bounds = ub;
for i = 1:length(fc)
  opt.fc{i} = @(x) fc{i}(x, fc_data{i}{:});
end
[xopt, fmin, retcode] = nlopt_optimize(opt, xinit);