2 .\" Copyright (c) 2007 Massachusetts Institute of Technology
4 .\" Copying and distribution of this file, with or without modification,
5 .\" are permitted in any medium without royalty provided the copyright
6 .\" notice and this notice are preserved.
8 .TH NLOPT_MINIMIZE 3 2007-08-23 "MIT" "NLopt programming manual"
10 nlopt_minimize \- Minimize a multivariate nonlinear function
15 .BI "nlopt_result nlopt_minimize(nlopt_algorithm " "algorithm" ,
18 .BI " nlopt_func " "f" ,
19 .BI " void* " "f_data" ,
20 .BI " const double* " "lb" ,
21 .BI " const double* " "ub" ,
23 .BI " double* " "minf" ,
24 .BI " double " "minf_max" ,
25 .BI " double " "ftol_rel" ,
26 .BI " double " "ftol_abs" ,
27 .BI " double " "xtol_rel" ,
28 .BI " const double* " "xtol_abs" ,
29 .BI " int " "maxeval" ,
30 .BI " double " "maxtime" );
.sp
You should link the resulting program with the linker flags
-lnlopt -lm on Unix.
.fi
.SH DESCRIPTION
.BR nlopt_minimize ()
attempts to minimize a nonlinear function
.I f
of
.I n
design variables using the specified
.IR algorithm .
The minimum function value found is returned in
.IR minf ,
with the corresponding design variable values returned in the array
.I x
of length
.IR n .
The input values in
.I x
should be a starting guess for the optimum.
The inputs
.I lb
and
.I ub
are arrays of length
.I n
containing lower and upper bounds, respectively, on the design variables
.IR x .
The other parameters specify stopping criteria (tolerances, the maximum
number of function evaluations, etcetera) and other information as described
in more detail below.  The return value is an integer code indicating
success (positive) or failure (negative), as described below.
.PP
By changing the parameter
.I algorithm
among several predefined constants described below, one can switch easily
between a variety of minimization algorithms.  Some of these algorithms
require the gradient (derivatives) of the function to be supplied via
.IR f ,
and other algorithms do not require derivatives.  Some of the
algorithms attempt to find a global minimum within the given bounds,
and others find only a local minimum.
.PP
The
.B nlopt_minimize
function is a wrapper around several free/open-source minimization packages,
as well as some new implementations of published optimization algorithms.
You could, of course, compile and call these packages separately, and in
some cases this will provide greater flexibility than is available via the
.B nlopt_minimize
interface.  However, depending upon the specific function being minimized,
the different algorithms will vary in effectiveness.  The intent of
.B nlopt_minimize
is to allow you to quickly switch between algorithms in order to experiment
with them for your problem, by providing a simple unified interface to
these subroutines.
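.PP
For instance, a minimal sketch of a call (the objective
.I myfunc
is a hypothetical user-supplied function of the form described under
OBJECTIVE FUNCTION below, which here ignores its f_data argument):
.sp
.nf
    extern double myfunc(int n, const double *x, double *grad, void *d);

    double x[2]  = { 0.5, 0.5 };     /* starting guess */
    double lb[2] = { -1.0, -1.0 };   /* lower bounds */
    double ub[2] = { 1.0, 1.0 };     /* upper bounds */
    double minf;                     /* minimum value, on output */
    nlopt_result result =
        nlopt_minimize(NLOPT_LN_SUBPLEX, 2, myfunc, NULL, lb, ub,
                       x, &minf,
                       -HUGE_VAL,    /* minf_max: disabled */
                       1e-8, 0.0,    /* ftol_rel; ftol_abs disabled */
                       0.0, NULL,    /* xtol_rel, xtol_abs: disabled */
                       1000,         /* maxeval */
                       0.0);         /* maxtime: disabled */
    if (result < 0) {
        /* ... handle error ... */
    }
.fi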
.SH OBJECTIVE FUNCTION
.BR nlopt_minimize ()
minimizes an objective function
.I f
of the form:
.sp
.BI "      double f(int " "n" ,
.br
.BI "               const double* " "x" ,
.br
.BI "               double* " "grad" ,
.br
.BI "               void* " "f_data" );
.sp
The return value should be the value of the function at the point
.IR x ,
where
.I x
points to an array of length
.I n
of the design variables.  The dimension
.I n
is identical to the one passed to
.BR nlopt_minimize ().
.sp
In addition, if the argument
.I grad
is not NULL, then
.I grad
points to an array of length
.I n
which should (upon return) be set to the gradient of the function with
respect to the design variables at
.IR x .
That is,
.IR grad [i]
should upon return contain the partial derivative df/dx[i],
for 0 <= i < n.
Not all of the optimization algorithms (below) use the gradient information:
for algorithms listed as "derivative-free," the
.I grad
argument will always be NULL and need never be computed.  (For
algorithms that do use gradient information, however,
.I grad
may still be NULL for some calls.)
.sp
The
.I f_data
argument is the same as the one passed to
.BR nlopt_minimize (),
and may be used to pass any additional data through to the function.
(That is, it may be a pointer to some caller-defined data
structure/type containing information your function needs, which you
convert from void* by a typecast.)
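.PP
As an illustration, a sketch of an objective computing
f(x) = scale * (x[0]^2 + ... + x[n-1]^2) along with its gradient; the name
.I myfunc
and the use of
.I f_data
to pass a scale factor are made up for this example:
.sp
.nf
    double myfunc(int n, const double *x, double *grad, void *f_data)
    {
        double scale = *(double *) f_data;  /* caller-defined data */
        double val = 0.0;
        int i;
        for (i = 0; i < n; ++i)
            val += x[i] * x[i];
        if (grad)            /* gradient requested by the algorithm? */
            for (i = 0; i < n; ++i)
                grad[i] = scale * 2.0 * x[i];
        return scale * val;
    }
.fi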
.SH BOUND CONSTRAINTS
Most of the algorithms in NLopt are designed for minimization of functions
with simple bound constraints on the inputs.  That is, the input vectors
x[i] are constrained to lie in a hyperrectangle lb[i] <= x[i] <= ub[i] for
0 <= i < n, where
.I lb
and
.I ub
are the two arrays passed to
.BR nlopt_minimize ().
.PP
However, a few of the algorithms support partially or totally
unconstrained optimization, as noted below, where a (totally or
partially) unconstrained design variable is indicated by a lower bound
equal to -Inf and/or an upper bound equal to +Inf.  Here, Inf is the
IEEE-754 floating-point infinity, which (in ANSI C99) is represented by
the macro INFINITY in math.h.  Alternatively, for older C versions
you may also use the macro HUGE_VAL (also in math.h).
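.PP
For example, a sketch of bounds leaving the second of two variables
unbounded above (assuming a C99 compiler; with older compilers,
substitute HUGE_VAL for INFINITY):
.sp
.nf
    #include <math.h>

    double lb[2] = { 0.0, 0.0 };       /* both bounded below by 0 */
    double ub[2] = { 1.0, INFINITY };  /* x[1] unbounded above */
.fi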
.PP
With some of the algorithms, especially those that do not require
derivative information, a simple (but not especially efficient) way
to implement arbitrary nonlinear constraints is to return Inf (see
above) whenever the constraints are violated by a given input
.IR x .
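.PP
For instance, a crude sketch of this approach; the constraint
x[0] + x[1] <= 1 and the objective are made up for illustration
(grad is ignored, as this trick is intended for derivative-free
algorithms):
.sp
.nf
    double constrained_f(int n, const double *x, double *grad, void *d)
    {
        if (x[0] + x[1] > 1.0)  /* nonlinear constraint violated? */
            return HUGE_VAL;    /* = +Inf: reject this point */
        return x[0] * x[0] + x[1] * x[1];  /* ordinary objective */
    }
.fi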
More generally, there are various ways to implement constraints
by adding "penalty terms" to your objective function, which are
described in the optimization literature.
A much more efficient way to specify nonlinear constraints is to use the
.BR nlopt_minimize_constrained ()
function (described in its own manual page).
.SH ALGORITHMS
The
.I algorithm
parameter specifies the optimization algorithm (for more detail on
these, see the README files in the source-code subdirectories), and
can take on any of the following constant values.  Constants with
.B _G{N,D}_
in their names
refer to global optimization methods, whereas
.B _L{N,D}_
refers to local optimization methods (that try to find a local minimum
starting from the starting guess
.IR x ).
Constants with
.B _{G,L}N_
refer to non-gradient (derivative-free) algorithms that do not require the
objective function to supply a gradient, whereas
.B _{G,L}D_
refers to derivative-based algorithms that require the objective
function to supply a gradient.  (Especially for local optimization,
derivative-based algorithms are generally superior to derivative-free
ones: the gradient is good to have
.I if
you can compute it cheaply, e.g. via an adjoint method.)
.TP
.B NLOPT_GN_DIRECT_L
Perform a global (G) derivative-free (N) optimization using the
DIRECT-L search algorithm by Jones et al. as modified by Gablonsky et
al. to be more weighted towards local search.  Does not support
unconstrained optimization.  There are also several other variants of
the DIRECT algorithm that are supported:
.BR NLOPT_GN_DIRECT ,
which is the original DIRECT algorithm;
.BR NLOPT_GN_DIRECT_L_RAND ,
a slightly randomized version of DIRECT-L that may be better in
high-dimensional search spaces;
.BR NLOPT_GN_DIRECT_NOSCAL ,
.BR NLOPT_GN_DIRECT_L_NOSCAL ,
and
.BR NLOPT_GN_DIRECT_L_RAND_NOSCAL ,
which are versions of DIRECT where the dimensions are not rescaled to
a unit hypercube (which means that dimensions with larger bounds are
given more weight).
.TP
.B NLOPT_GN_ORIG_DIRECT_L
A global (G) derivative-free optimization using the DIRECT-L algorithm
as above, along with
.B NLOPT_GN_ORIG_DIRECT
which is the original DIRECT algorithm.  Unlike
.B NLOPT_GN_DIRECT_L
above, these two algorithms refer to code based on the original
Fortran code of Gablonsky et al., which has some hard-coded
limitations on the number of subdivisions etc. and does not support
all of the NLopt stopping criteria, but on the other hand supports
arbitrary nonlinear constraints as described above.
.TP
.B NLOPT_GD_STOGO
Global (G) optimization using the StoGO algorithm by Madsen et al.  StoGO
exploits gradient information (D) (which must be supplied by the
objective) for its local searches, and performs the global search by a
branch-and-bound technique.  Only bound-constrained optimization
is supported.  There is also another variant of this algorithm,
.BR NLOPT_GD_STOGO_RAND ,
which is a randomized version of the StoGO search scheme.  The StoGO
algorithms are only available if NLopt is compiled with C++ enabled,
and should be linked via -lnlopt_cxx (via a C++ compiler, in order
to link the C++ standard libraries).
.TP
.B NLOPT_LN_SUBPLEX
Perform a local (L) derivative-free (N) optimization, starting at
.IR x ,
using the Subplex algorithm of Rowan et al., which is an improved
variant of the Nelder-Mead simplex algorithm.  (Like Nelder-Mead, Subplex
often works well in practice, even for discontinuous objectives, but
there is no rigorous guarantee that it will converge.)  Subplex is
best for unconstrained optimization, but constrained optimization also
works (both for simple bound constraints via
.I lb
and
.IR ub ,
as well as nonlinear constraints as described above).
.TP
.B NLOPT_LN_PRAXIS
Local (L) derivative-free (N) optimization using the principal-axis
method, based on code by Richard Brent.  Designed for unconstrained
optimization, although bound constraints are supported too (via a
potentially inefficient method).
.TP
.B NLOPT_LD_LBFGS
Local (L) gradient-based (D) optimization using the limited-memory BFGS
(L-BFGS) algorithm.  (The objective function must supply the
gradient.)  Unconstrained optimization is supported in addition to
simple bound constraints (see above).  Based on an implementation by
Luksan et al.
.TP
\fBNLOPT_LD_VAR2\fR, \fBNLOPT_LD_VAR1\fR
Local (L) gradient-based (D) optimization using a shifted limited-memory
variable-metric method based on code by Luksan et al., supporting both
unconstrained and bound-constrained optimization.
.B NLOPT_LD_VAR2
uses a rank-2 method, while
.B NLOPT_LD_VAR1
is another variant using a rank-1 method.
.TP
.B NLOPT_LD_TNEWTON_PRECOND_RESTART
Local (L) gradient-based (D) optimization using an
LBFGS-preconditioned truncated Newton method with steepest-descent
restarting, based on code by Luksan et al., supporting both
unconstrained and bound-constrained optimization.  There are several
other variants of this algorithm:
.B NLOPT_LD_TNEWTON_PRECOND
(same without restarting),
.B NLOPT_LD_TNEWTON_RESTART
(same without preconditioning), and
.B NLOPT_LD_TNEWTON
(same without restarting or preconditioning).
.TP
.B NLOPT_GN_CRS2_LM
Global (G) derivative-free (N) optimization using the controlled random
search (CRS2) algorithm of Price, with the "local mutation" (LM)
modification suggested by Kaelo and Ali.
.TP
\fBNLOPT_GD_MLSL_LDS\fR, \fBNLOPT_GN_MLSL_LDS\fR
Global (G) derivative-based (D) or derivative-free (N) optimization
using the multi-level single-linkage (MLSL) algorithm with a
low-discrepancy sequence (LDS).  This algorithm executes a quasi-random
(LDS) sequence of local searches, with a clustering heuristic to
avoid multiple local searches for the same local minimum.  The local
search uses the derivative/nonderivative algorithm set by
.I nlopt_set_local_search_algorithm
(currently defaulting to
.B NLOPT_LD_MMA
and
.B NLOPT_LN_SUBPLEX
for derivative/nonderivative searches, respectively).  There are also
two other variants, \fBNLOPT_GD_MLSL\fR and \fBNLOPT_GN_MLSL\fR, which use
pseudo-random numbers (instead of an LDS) as in the original MLSL algorithm.
.TP
.B NLOPT_LD_MMA
Local (L) gradient-based (D) optimization using the method of moving
asymptotes (MMA), or rather a refined version of the algorithm as
published by Svanberg (2002).  (NLopt uses an independent free
implementation of Svanberg's algorithm.)  The
.B NLOPT_LD_MMA
algorithm supports both bound-constrained and unconstrained optimization,
and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
via the
.BR nlopt_minimize_constrained ()
function.
.SH STOPPING CRITERIA
Multiple stopping criteria for the optimization are supported, as
specified by the following arguments to
.BR nlopt_minimize ().
The optimization halts whenever any one of these criteria is
satisfied.  In some cases, the precise interpretation of the stopping
criterion depends on the optimization algorithm above (although we
have tried to make them as consistent as reasonably possible), and
some algorithms do not support all of the stopping criteria.
.TP
.B minf_max
Stop when a function value less than or equal to
.I minf_max
is found.  Set to -Inf or NaN (see constraints section above) to disable.
.TP
.B ftol_rel
Relative tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.I ftol_rel
multiplied by the absolute value of the function value.  (If there is
any chance that your minimum function value is close to zero, you
might want to set an absolute tolerance with
.I ftol_abs
as well.)  Disabled if non-positive.
.TP
.B ftol_abs
Absolute tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.IR ftol_abs .
Disabled if non-positive.
.TP
.B xtol_rel
Relative tolerance on design variables: stop when an optimization step
(or an estimate of the minimum) changes every design variable by less
than
.I xtol_rel
multiplied by the absolute value of the design variable.  (If there is
any chance that an optimal design variable is close to zero, you
might want to set an absolute tolerance with
.I xtol_abs
as well.)  Disabled if non-positive.
.TP
.B xtol_abs
Pointer to an array of length
.I n
giving absolute tolerances on design variables: stop when an
optimization step (or an estimate of the minimum) changes every design
variable
.IR x [i]
by less than
.IR xtol_abs [i].
Disabled if non-positive, or if
.I xtol_abs
is NULL.
.TP
.B maxeval
Stop when the number of function evaluations exceeds
.IR maxeval .
(This is not a strict maximum: the number of function evaluations may
exceed
.I maxeval
slightly, depending upon the algorithm.)  Disabled
if non-positive.
.TP
.B maxtime
Stop when the optimization time (in seconds) exceeds
.IR maxtime .
(This is not a strict maximum: the time may
exceed
.I maxtime
slightly, depending upon the algorithm and on how slow your function
evaluation is.)  Disabled if non-positive.
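.PP
For example, a sketch of argument values (the variables stand for the
arguments described above) that stop only after 500 function
evaluations or 10.0 seconds, whichever comes first, with all other
criteria disabled:
.sp
.nf
    nlopt_minimize(algorithm, n, f, f_data, lb, ub, x, &minf,
                   -HUGE_VAL,  /* minf_max: disabled */
                   0.0, 0.0,   /* ftol_rel, ftol_abs: disabled */
                   0.0, NULL,  /* xtol_rel, xtol_abs: disabled */
                   500,        /* maxeval */
                   10.0);      /* maxtime, in seconds */
.fi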
.SH RETURN VALUE
The value returned is one of the following enumerated constants.
.SS Successful termination (positive return values):
.TP
.B NLOPT_SUCCESS
Generic success return value.
.TP
.B NLOPT_MINF_MAX_REACHED
Optimization stopped because
.I minf_max
(above) was reached.
.TP
.B NLOPT_FTOL_REACHED
Optimization stopped because
.I ftol_rel
or
.I ftol_abs
(above) was reached.
.TP
.B NLOPT_XTOL_REACHED
Optimization stopped because
.I xtol_rel
or
.I xtol_abs
(above) was reached.
.TP
.B NLOPT_MAXEVAL_REACHED
Optimization stopped because
.I maxeval
(above) was reached.
.TP
.B NLOPT_MAXTIME_REACHED
Optimization stopped because
.I maxtime
(above) was reached.
.SS Error codes (negative return values):
.TP
.B NLOPT_FAILURE
Generic failure code.
.TP
.B NLOPT_INVALID_ARGS
Invalid arguments (e.g. lower bounds are bigger than upper bounds, an
unknown algorithm was specified, etcetera).
.TP
.B NLOPT_OUT_OF_MEMORY
Ran out of memory.
.SH PSEUDORANDOM NUMBERS
For stochastic optimization algorithms, we use pseudorandom numbers generated
by the Mersenne Twister algorithm, based on code from Makoto Matsumoto.
By default, the seed for the random numbers is generated from the system
time, so that they will be different each time you run the program.  If
you want to use deterministic random numbers, you can set the seed by
calling:
.sp
.BI "            void nlopt_srand(unsigned long " "seed" );
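.PP
For instance, a minimal sketch of seeding for reproducible runs (the
seed value 12345 is arbitrary):
.sp
.nf
    nlopt_srand(12345);  /* fixed seed: deterministic sequence */
    /* ... then call nlopt_minimize() as usual ... */
.fi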
.sp
Some of the algorithms also support using low-discrepancy sequences (LDS),
sometimes known as quasi-random numbers.  NLopt uses the Sobol LDS, which
is implemented for up to 1111 dimensions.
.SH BUGS
Currently the NLopt library is in a pre-alpha stage.  Most algorithms
currently do not support all termination conditions: the only
termination condition that is consistently supported right now is
.IR maxeval .
.SH AUTHORS
Written by Steven G. Johnson.
.PP
Copyright (c) 2007 Massachusetts Institute of Technology.
.SH "SEE ALSO"
.BR nlopt_minimize_constrained (3)