% Usage: [xopt, fopt, retcode] = nlopt_optimize(opt, xinit)
%
% Optimizes (minimizes or maximizes) a nonlinear function under
% nonlinear constraints from the starting guess xinit, where the
% objective, constraints, stopping criteria, and other options are
% specified in the structure opt described below. A variety of local
% and global optimization algorithms can be used, as specified by the
% opt.algorithm parameter described below. Returns the optimum
% function value fopt, the location xopt of the optimum, and a
% return code retcode described below (> 0 on success).
%
% The dimension (n) of the problem, i.e. the number of design variables,
% is specified implicitly via the length of xinit.
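%
% For example, a minimal call might look like the following (the
% quadratic objective and tolerance here are purely illustrative):
%
%      opt.algorithm = NLOPT_LN_NELDERMEAD;  % derivative-free local search
%      opt.min_objective = @(x) sum((x - [1 2]).^2);
%      opt.xtol_rel = 1e-6;
%      [xopt, fopt, retcode] = nlopt_optimize(opt, [0 0]);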
%
% This function is a front-end for the external routine nlopt_optimize
% in the free NLopt nonlinear-optimization library, which is a wrapper
% around a number of free/open-source optimization subroutines. More
% details can be found on the NLopt web page (ab-initio.mit.edu/nlopt)
% and also under 'man nlopt_minimize' on Unix.
%
% OBJECTIVE FUNCTION:
%
% The objective function f is specified via opt.min_objective or
% opt.max_objective for minimization or maximization, respectively.
% opt.min/max_objective should be a handle (@) to a function of the form:
%
%      [val, gradient] = f(x)
%
% where x is a row vector, val is the function value f(x), and gradient
% is a row vector giving the gradient of the function with respect to x.
% The gradient is only used for gradient-based optimization algorithms;
% some of the algorithms (below) are derivative-free and only require
% f to return val (its value).
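%
% For example, a gradient-returning objective (this Rosenbrock-style
% function is illustrative) can be written as:
%
%      function [val, gradient] = myfunc(x)
%        val = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
%        if nargout > 1   % gradient is only requested by *D algorithms
%          gradient = [-2*(1 - x(1)) - 400*x(1)*(x(2) - x(1)^2), ...
%                      200*(x(2) - x(1)^2)];
%        end
%      end
%
% and then passed via opt.min_objective = @myfunc.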
%
% BOUND CONSTRAINTS:
%
% Lower and/or upper bounds for the design variables x are specified
% via opt.lower_bounds and/or opt.upper_bounds, respectively: these
% are vectors (of the same length as xinit, above) giving the bounds
% in each component. An unbounded component may be specified by a
% lower/upper bound of -inf/+inf, respectively. If opt.lower_bounds
% and/or opt.upper_bounds are not specified, the default bounds are
% -inf/+inf (i.e. unbounded), respectively.
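%
% For example, to constrain x(1) to [0,10] while leaving x(2) unbounded
% (the bounds here are illustrative):
%
%      opt.lower_bounds = [0, -inf];
%      opt.upper_bounds = [10, inf];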
%
% NONLINEAR CONSTRAINTS:
%
% Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also
% support arbitrary nonlinear inequality constraints, and some also allow
% nonlinear equality constraints (ISRES and AUGLAG). For these
% algorithms, you can specify as many nonlinear constraints as you wish.
% (The default is no nonlinear constraints.)
%
% Inequality constraints of the form fc{i}(x) <= 0 are specified via opt.fc,
% which is a cell array of function handles (@) of the same form as
% the objective function above (i.e., returning the value and optionally
% the gradient of the constraint function fc{i}, where the gradient is
% only needed for gradient-based algorithms).
%
% Equality constraints of the form h{i}(x) = 0 are specified via opt.h,
% which is a cell array of function handles (@) of the same form as
% the objective function above (i.e., returning the value and optionally
% the gradient of the constraint function h{i}, where the gradient is
% only needed for gradient-based algorithms).
%
% For both inequality and equality constraints, you can supply a
% "tolerance" for each constraint: this tolerance is used for convergence
% tests only, and a point x is considered feasible for purposes of
% convergence if the constraint is violated by at most the given tolerance.
% The tolerances are specified via opt.fc_tol and opt.h_tol, respectively,
% which must be vectors of the same length as opt.fc and opt.h, so
% that opt.fc_tol(i) is the tolerance for opt.fc{i} and opt.h_tol(i)
% is the tolerance for opt.h{i}. These tolerances default to zero; a
% small nonzero tolerance is recommended, however, especially for h_tol.
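%
% For example (the constraints and tolerances below are illustrative;
% the chosen algorithm must support them, e.g. NLOPT_LN_COBYLA for
% inequalities or NLOPT_GN_ISRES for equalities):
%
%      opt.fc = { @(x) x(1)^2 + x(2)^2 - 1, ...  % stay inside the unit disk
%                 @(x) -x(1) };                  % enforce x(1) >= 0
%      opt.fc_tol = [1e-8, 1e-8];
%      opt.h = { @(x) x(1) + x(2) - 1 };         % require x(1) + x(2) = 1
%      opt.h_tol = 1e-8;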
%
% ALGORITHMS:
%
% The optimization algorithm must be specified via opt.algorithm.
%
% The algorithm should be one of the following constants (name and
% interpretation are the same as for the C language interface). Names
% with _G*_ are global optimization, and names with _L*_ are local
% optimization. Names with _*N_ are derivative-free, while names
% with _*D_ are gradient-based algorithms. Algorithms:
%
% NLOPT_GD_MLSL_LDS, NLOPT_GD_MLSL, NLOPT_GD_STOGO, NLOPT_GD_STOGO_RAND,
% NLOPT_GN_CRS2_LM, NLOPT_GN_DIRECT_L, NLOPT_GN_DIRECT_L_NOSCAL,
% NLOPT_GN_DIRECT_L_RAND, NLOPT_GN_DIRECT_L_RAND_NOSCAL, NLOPT_GN_DIRECT,
% NLOPT_GN_DIRECT_NOSCAL, NLOPT_GN_ISRES, NLOPT_GN_MLSL_LDS, NLOPT_GN_MLSL,
% NLOPT_GN_ORIG_DIRECT_L, NLOPT_GN_ORIG_DIRECT, NLOPT_LD_AUGLAG_EQ,
% NLOPT_LD_AUGLAG, NLOPT_LD_LBFGS, NLOPT_LD_LBFGS_NOCEDAL, NLOPT_LD_MMA,
% NLOPT_LD_TNEWTON, NLOPT_LD_TNEWTON_PRECOND,
% NLOPT_LD_TNEWTON_PRECOND_RESTART, NLOPT_LD_TNEWTON_RESTART,
% NLOPT_LD_VAR1, NLOPT_LD_VAR2, NLOPT_LN_AUGLAG_EQ, NLOPT_LN_AUGLAG,
% NLOPT_LN_BOBYQA, NLOPT_LN_COBYLA, NLOPT_LN_NELDERMEAD,
% NLOPT_LN_NEWUOA_BOUND, NLOPT_LN_NEWUOA, NLOPT_LN_PRAXIS, NLOPT_LN_SBPLX
%
% For more information on individual algorithms, see their individual
% help pages (e.g. "help NLOPT_LN_SBPLX").
%
% STOPPING CRITERIA:
%
% Multiple stopping criteria can be specified by setting one or more of
% the following fields of opt. The optimization halts whenever any
% one of the given criteria is satisfied.
%
% opt.stopval: Stop when an objective value of at least stopval is found.
%    That is, stop minimizing when a value <= stopval is found, or stop
%    maximizing when a value >= stopval is found.
%
% opt.ftol_rel: Relative tolerance on function value, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    the function value by less than opt.ftol_rel multiplied by
%    the absolute value of the function.
%
% opt.ftol_abs: Absolute tolerance on function value, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    the function value by less than opt.ftol_abs.
%
% opt.xtol_rel: Relative tolerance on the design variables, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    every component of x by less than opt.xtol_rel multiplied by
%    the absolute value of that component of x.
%
% opt.xtol_abs: Absolute tolerance on the design variables, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    every component of x by less than that component of opt.xtol_abs
%    -- should be a vector of same length as x.
%
% opt.maxeval: Maximum number of function evaluations.
%
% opt.maxtime: Maximum runtime (in seconds) for the optimization.
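%
% For example, to stop on a small relative change in the objective, or
% after at most 10000 evaluations or 60 seconds, whichever comes first
% (the values here are illustrative):
%
%      opt.ftol_rel = 1e-8;
%      opt.maxeval = 10000;
%      opt.maxtime = 60;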
%
% RETURN CODE:
%
% The retcode result is positive upon successful completion, and
% negative for an error. The specific values are:
%
%      generic success code: +1
%           stopval reached: +2
%              ftol reached: +3
%              xtol reached: +4
%           maxeval reached: +5
%           maxtime reached: +6
%      generic failure code: -1
%         invalid arguments: -2
%             out of memory: -3
%          roundoff-limited: -4
%
% LOCAL OPTIMIZER:
%
% Some of the algorithms, especially MLSL and AUGLAG, use a different
% optimization algorithm as a subroutine, typically for local optimization.
% By default, they use MMA or COBYLA for gradient-based or derivative-free
% searching, respectively. However, you can change this by specifying
% opt.local_optimizer: this is a structure with the same types of fields
% as opt (stopping criteria, algorithm, etcetera). The objective function
% and nonlinear constraint parameters of opt.local_optimizer are ignored.
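%
% For example, to run MLSL with BOBYQA as the local subsidiary algorithm
% (the tolerance here is illustrative):
%
%      local_opt.algorithm = NLOPT_LN_BOBYQA;
%      local_opt.xtol_rel = 1e-4;
%      opt.algorithm = NLOPT_GN_MLSL_LDS;
%      opt.local_optimizer = local_opt;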
%
% INITIAL STEP SIZE:
%
% For derivative-free local-optimization algorithms, the optimizer must
% somehow decide on some initial step size to perturb x by when it begins
% the optimization. This step size should be big enough that the value
% of the objective changes significantly, but not too big if you want to
% find the local optimum nearest to x. By default, NLopt chooses this
% initial step size heuristically from the bounds, tolerances, and other
% information, but this may not always be the best choice.
%
% You can modify the initial step by setting opt.initial_step, which
% is a vector of the same length as x containing the (nonzero) initial
% step size for each component of x.
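%
% For example, for a two-dimensional problem (the step sizes are
% illustrative):
%
%      opt.initial_step = [0.1, 0.5];  % initial perturbation of x(1), x(2)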
%
% STOCHASTIC POPULATION:
%
% Several of the stochastic search algorithms (e.g., CRS, MLSL, and
% ISRES) start by generating some initial "population" of random points
% x. By default, this initial population size is chosen heuristically in
% some algorithm-specific way, but the initial population can be changed
% by setting opt.population to the desired initial population size.
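%
% For example (the population size of 200 is illustrative):
%
%      opt.algorithm = NLOPT_GN_ISRES;
%      opt.population = 200;  % start ISRES with 200 random points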
%
% VERBOSE OUTPUT:
%
% If opt.verbose is set to a nonzero value, then nlopt_optimize
% will print out verbose output; for example, it will print the
% value of the objective function after each evaluation.
%
% MORE INFORMATION:
%
% For more documentation, such as a detailed description of all the
% algorithms, see the NLopt home page: http://ab-initio.mit.edu/nlopt