.\" Copyright (c) 2007 Massachusetts Institute of Technology
.\"
.\" Copying and distribution of this file, with or without modification,
.\" are permitted in any medium without royalty provided the copyright
.\" notice and this notice are preserved.
.\"
.TH NLOPT_MINIMIZE_CONSTRAINED 3 2007-08-23 "MIT" "NLopt programming manual"
.SH NAME
nlopt_minimize_constrained \- Minimize a multivariate nonlinear function subject to nonlinear constraints
.SH SYNOPSIS
.nf
.B #include <nlopt.h>
.sp
.BI "nlopt_result nlopt_minimize_constrained(nlopt_algorithm " "algorithm" ,
.BI "                         int " "n" ,
.BI "                         nlopt_func " "f" ,
.BI "                         void* " "f_data" ,
.BI "                         int " "m" ,
.BI "                         nlopt_func " "fc" ,
.BI "                         void* " "fc_data" ,
.BI "                         ptrdiff_t " "fc_datum_size" ,
.BI "                         const double* " "lb" ,
.BI "                         const double* " "ub" ,
.BI "                         double* " "x" ,
.BI "                         double* " "minf" ,
.BI "                         double " "minf_max" ,
.BI "                         double " "ftol_rel" ,
.BI "                         double " "ftol_abs" ,
.BI "                         double " "xtol_rel" ,
.BI "                         const double* " "xtol_abs" ,
.BI "                         int " "maxeval" ,
.BI "                         double " "maxtime" );
.fi
.sp
You should link the resulting program with the linker flags
\-lnlopt \-lm.
.SH DESCRIPTION
.BR nlopt_minimize_constrained ()
attempts to minimize a nonlinear function
.I f
of
.I n
design variables, subject to
.I m
nonlinear constraints described by the function
.I fc
(see below), using the specified
.IR algorithm .
The minimum function value found is returned in
.IR minf ,
with the corresponding design variable values returned in the array
.I x
of length
.IR n .
On input, the array
.I x
should be a starting guess for the optimum.
The parameters
.I lb
and
.I ub
are arrays of length
.I n
containing lower and upper bounds, respectively, on the design variables
.IR x .
The other parameters specify stopping criteria (tolerances, the maximum
number of function evaluations, etcetera) and other information as described
in more detail below. The return value is an integer code indicating success
(positive) or failure (negative), as described below.
By changing the parameter
.I algorithm
among several predefined constants described below, one can switch easily
between a variety of minimization algorithms. Some of these algorithms
require the gradient (derivatives) of the function to be supplied via
.IR f ,
and other algorithms do not require derivatives. Some of the
algorithms attempt to find a global minimum within the given bounds,
and others find only a local minimum. Some of the algorithms can handle
nonlinear constraints, but most of the algorithms only handle the
case where
.I m
is zero (no explicit nonlinear constraints).
The
.B nlopt_minimize_constrained
function is a wrapper around several free/open-source minimization packages,
as well as some new implementations of published optimization algorithms.
You could, of course, compile and call these packages separately, and in
some cases this will provide greater flexibility than is available via the
.B nlopt_minimize_constrained
interface. However, depending upon the specific function being minimized,
the different algorithms will vary in effectiveness. The intent of
.B nlopt_minimize_constrained
is to allow you to quickly switch between algorithms in order to experiment
with them for your problem, by providing a simple unified interface to
these algorithms.
.SH OBJECTIVE FUNCTION
.BR nlopt_minimize_constrained ()
minimizes an objective function
.I f
of the form:
.sp
.BI "      double f(int " "n" ,
.br
.BI "               const double* " "x" ,
.br
.BI "               double* " "grad" ,
.br
.BI "               void* " "f_data" );
.sp
The return value should be the value of the function at the point
.IR x ,
where
.I x
points to an array of length
.I n
of the design variables. The dimension
.I n
is identical to the one passed to
.BR nlopt_minimize_constrained ().
.sp
In addition, if the argument
.I grad
is not NULL, then
.I grad
points to an array of length
.I n
which should (upon return) be set to the gradient of the function with
respect to the design variables at
.IR x .
That is,
.I grad[i]
should upon return contain the partial derivative df/dx[i],
for 0 <= i < n, if
.I grad
is non-NULL.
Not all of the optimization algorithms (below) use the gradient information:
for algorithms listed as "derivative-free," the
.I grad
argument will always be NULL and need never be computed. (For
algorithms that do use gradient information, however,
.I grad
may still be NULL for some calls.)
.sp
The
.I f_data
argument is the same as the one passed to
.BR nlopt_minimize_constrained (),
and may be used to pass any additional data through to the function.
(That is, it may be a pointer to some caller-defined data
structure/type containing information your function needs, which you
convert from void* by a typecast.)
.SH BOUND CONSTRAINTS
Most of the algorithms in NLopt are designed for minimization of functions
with simple bound constraints on the inputs. That is, the input vectors
x[i] are constrained to lie in a hyperrectangle lb[i] <= x[i] <= ub[i] for
0 <= i < n, where
.I lb
and
.I ub
are the two arrays passed to
.BR nlopt_minimize_constrained ().
.sp
However, a few of the algorithms support partially or totally
unconstrained optimization, as noted below, where a (totally or
partially) unconstrained design variable is indicated by a lower bound
equal to -Inf and/or an upper bound equal to +Inf. Here, Inf is the
IEEE-754 floating-point infinity, which (in ANSI C99) is represented by
the macro INFINITY in math.h. Alternatively, for older C versions
you may also use the macro HUGE_VAL (also in math.h).
With some of the algorithms, especially those that do not require
derivative information, a simple (but not especially efficient) way
to implement arbitrary nonlinear constraints is to return Inf (see
above) whenever the constraints are violated by a given input
.IR x .
More generally, there are various ways to implement constraints
by adding "penalty terms" to your objective function, which are
described in the optimization literature.
A much more efficient way to specify nonlinear constraints is described
below, but it is only supported by a small subset of the algorithms.
.SH NONLINEAR CONSTRAINTS
The
.B nlopt_minimize_constrained
function also allows you to specify
.I m
nonlinear constraints via the function
.IR fc ,
where
.I m
is any nonnegative integer. However, nonzero
.I m
is currently only supported by the
.B NLOPT_LD_MMA
and
.B NLOPT_GN_ORIG_DIRECT_L
algorithms below.
.sp
In particular, the nonlinear constraints are of the form
\fIfc\fR(\fIx\fR) <= 0, where the function
.I fc
is of the same form as the objective function described above:
.sp
.BI "      double fc(int " "n" ,
.br
.BI "                const double* " "x" ,
.br
.BI "                double* " "grad" ,
.br
.BI "                void* " "fc_datum" );
.sp
The return value should be the value of the constraint at the point
.IR x ,
where the dimension
.I n
is identical to the one passed to
.BR nlopt_minimize_constrained ().
As for the objective function, if the argument
.I grad
is not NULL, then
.I grad
points to an array of length
.I n
which should (upon return) be set to the gradient of the function with
respect to
.IR x .
(For any algorithm listed as "derivative-free" below, the
.I grad
argument will always be NULL and need never be computed.)
.sp
The
.I fc_datum
argument is based on the
.I fc_data
argument passed to
.BR nlopt_minimize_constrained (),
and may be used to pass any additional data through to the function,
and is used to distinguish between different constraints.
.sp
In particular, the constraint function
.I fc
will be called (at most)
.I m
times for each
.IR x ,
and the i-th constraint (0 <= i < \fIm\fR)
will be passed an
.I fc_datum
argument equal to
.I fc_data
offset by i times
.IR fc_datum_size .
For example, suppose that you have a data structure of type "foo"
that describes the data needed by each constraint, and you store
the information for the constraints in an array "foo data[m]". In
this case, you would pass "data" as the
.I fc_data
parameter to
.BR nlopt_minimize_constrained ,
and "sizeof(foo)" as the
.I fc_datum_size
parameter. Then, your
.I fc
function would be called
.I m
times for each point, and be passed data[0] through data[m-1] in sequence.
.SH ALGORITHMS
The
.I algorithm
parameter specifies the optimization algorithm (for more detail on
these, see the README files in the source-code subdirectories), and
can take on any of the following constant values. Constants with
.B _G{N,D}_
in their names
refer to global optimization methods, whereas
.B _L{N,D}_
refers to local optimization methods (that try to find a local minimum
starting from the starting guess
.IR x ).
Constants with
.B _{G,L}N_
refer to non-gradient (derivative-free) algorithms that do not require the
objective function to supply a gradient, whereas
.B _{G,L}D_
refers to derivative-based algorithms that require the objective
function to supply a gradient. (Especially for local optimization,
derivative-based algorithms are generally superior to derivative-free
ones: the gradient is good to have
.I if
you can compute it cheaply, e.g. via an adjoint method.)
.TP
.B NLOPT_GN_DIRECT_L
Perform a global (G) derivative-free (N) optimization using the
DIRECT-L search algorithm by Jones et al. as modified by Gablonsky et
al. to be more weighted towards local search. Does not support
unconstrained optimization. There are also several other variants of
the DIRECT algorithm that are supported:
.BR NLOPT_GLOBAL_DIRECT ,
which is the original DIRECT algorithm;
.BR NLOPT_GLOBAL_DIRECT_L_RAND ,
a slightly randomized version of DIRECT-L that may be better in
high-dimensional search spaces;
.BR NLOPT_GLOBAL_DIRECT_NOSCAL ,
.BR NLOPT_GLOBAL_DIRECT_L_NOSCAL ,
and
.BR NLOPT_GLOBAL_DIRECT_L_RAND_NOSCAL ,
which are versions of DIRECT where the dimensions are not rescaled to
a unit hypercube (which means that dimensions with larger bounds are
given more weight).
.TP
.B NLOPT_GN_ORIG_DIRECT_L
A global (G) derivative-free optimization using the DIRECT-L algorithm
as above, along with
.B NLOPT_GN_ORIG_DIRECT
which is the original DIRECT algorithm. Unlike
.B NLOPT_GN_DIRECT_L
above, these two algorithms refer to code based on the original
Fortran code of Gablonsky et al., which has some hard-coded
limitations on the number of subdivisions etc. and does not support
all of the NLopt stopping criteria, but on the other hand supports
arbitrary nonlinear constraints as described above.
.TP
.B NLOPT_GD_STOGO
Global (G) optimization using the StoGO algorithm by Madsen et al. StoGO
exploits gradient information (D) (which must be supplied by the
objective) for its local searches, and performs the global search by a
branch-and-bound technique. Only bound-constrained optimization
is supported. There is also another variant of this algorithm,
.BR NLOPT_GD_STOGO_RAND ,
which is a randomized version of the StoGO search scheme. The StoGO
algorithms are only available if NLopt is compiled with C++ enabled,
and should be linked via \-lnlopt_cxx (via a C++ compiler, in order
to link the C++ standard libraries).
.TP
.B NLOPT_LN_SUBPLEX
Perform a local (L) derivative-free (N) optimization, starting at
.IR x ,
using the Subplex algorithm of Rowan et al., which is an improved
variant of the Nelder-Mead simplex algorithm. (Like Nelder-Mead, Subplex
often works well in practice, even for discontinuous objectives, but
there is no rigorous guarantee that it will converge.) Subplex is
best for unconstrained optimization, but constrained optimization also
works (both for simple bound constraints via
.I lb
and
.IR ub ,
as well as nonlinear constraints as described above).
.TP
.B NLOPT_LN_PRAXIS
Local (L) derivative-free (N) optimization using the principal-axis
method, based on code by Richard Brent. Designed for unconstrained
optimization, although bound constraints are supported too (via a
potentially inefficient method).
.TP
.B NLOPT_LD_LBFGS
Local (L) gradient-based (D) optimization using the limited-memory BFGS
(L-BFGS) algorithm. (The objective function must supply the
gradient.) Unconstrained optimization is supported in addition to
simple bound constraints (see above). Based on an implementation by
Ladislav Luksan.
.TP
.B NLOPT_LD_VAR2
Local (L) gradient-based (D) optimization using a shifted limited-memory
variable-metric method based on code by Luksan et al., supporting both
unconstrained and bound-constrained optimization.
.B NLOPT_LD_VAR2
uses a rank-2 method, while
.B NLOPT_LD_VAR1
is another variant using a rank-1 method.
.TP
.B NLOPT_LD_TNEWTON_PRECOND_RESTART
Local (L) gradient-based (D) optimization using an
LBFGS-preconditioned truncated Newton method with steepest-descent
restarting, based on code by Luksan et al., supporting both
unconstrained and bound-constrained optimization. There are several
other variants of this algorithm:
.B NLOPT_LD_TNEWTON_PRECOND
(same without restarting),
.B NLOPT_LD_TNEWTON_RESTART
(same without preconditioning), and
.B NLOPT_LD_TNEWTON
(same without restarting or preconditioning).
.TP
.B NLOPT_GN_CRS2_LM
Global (G) derivative-free (N) optimization using the controlled random
search (CRS2) algorithm of Price, with the "local mutation" (LM)
modification suggested by Kaelo and Ali.
.TP
\fBNLOPT_GD_MLSL_LDS\fR, \fBNLOPT_GN_MLSL_LDS\fR
Global (G) derivative-based (D) or derivative-free (N) optimization
using the multi-level single-linkage (MLSL) algorithm with a
low-discrepancy sequence (LDS). This algorithm executes a quasi-random
(LDS) sequence of local searches, with a clustering heuristic to
avoid multiple local searches for the same local minimum. The local
search uses the derivative/nonderivative algorithm set by
.I nlopt_set_local_search_algorithm
(currently defaulting to
.I NLOPT_LD_MMA
and
.I NLOPT_LN_SUBPLEX
for derivative/nonderivative searches, respectively). There are also
two other variants, \fBNLOPT_GD_MLSL\fR and \fBNLOPT_GN_MLSL\fR, which use
pseudo-random numbers (instead of an LDS) as in the original MLSL algorithm.
.TP
.B NLOPT_LD_MMA
Local (L) gradient-based (D) optimization using the method of moving
asymptotes (MMA), or rather a refined version of the algorithm as
published by Svanberg (2002). (NLopt uses an independent free
implementation of Svanberg's algorithm.) The
.B NLOPT_LD_MMA
algorithm supports both bound-constrained and unconstrained optimization,
and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
as described above.
.SH STOPPING CRITERIA
Multiple stopping criteria for the optimization are supported, as
specified by the following arguments to
.BR nlopt_minimize_constrained ().
The optimization halts whenever any one of these criteria is
satisfied. In some cases, the precise interpretation of the stopping
criterion depends on the optimization algorithm above (although we
have tried to make them as consistent as reasonably possible), and
some algorithms do not support all of the stopping criteria.
.sp
Important: you do not need to use all of the stopping criteria! In most
cases, you only need one or two, and can set the remainder to values where
they do nothing (as described below).
.TP
.B minf_max
Stop when a function value less than or equal to
.I minf_max
is found. Set to -Inf or NaN (see constraints section above) to disable.
.TP
.B ftol_rel
Relative tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.I ftol_rel
multiplied by the absolute value of the function value. (If there is
any chance that your minimum function value is close to zero, you
might want to set an absolute tolerance with
.I ftol_abs
as well.) Disabled if non-positive.
.TP
.B ftol_abs
Absolute tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.IR ftol_abs .
Disabled if non-positive.
.TP
.B xtol_rel
Relative tolerance on design variables: stop when an optimization step
(or an estimate of the minimum) changes every design variable by less
than
.I xtol_rel
multiplied by the absolute value of the design variable. (If there is
any chance that an optimal design variable is close to zero, you
might want to set an absolute tolerance with
.I xtol_abs
as well.) Disabled if non-positive.
.TP
.B xtol_abs
Pointer to an array of length
.I n
giving absolute tolerances on design variables: stop when an
optimization step (or an estimate of the minimum) changes every design
variable x[i] by less than xtol_abs[i].
Disabled if non-positive, or if
.I xtol_abs
is NULL.
.TP
.B maxeval
Stop when the number of function evaluations exceeds
.IR maxeval .
(This is not a strict maximum: the number of function evaluations may
exceed
.I maxeval
slightly, depending upon the algorithm.) Disabled
if non-positive.
.TP
.B maxtime
Stop when the optimization time (in seconds) exceeds
.IR maxtime .
(This is not a strict maximum: the time may
exceed
.I maxtime
slightly, depending upon the algorithm and on how slow your function
evaluation is.) Disabled if non-positive.
.SH RETURN VALUE
The value returned is one of the following enumerated constants.
.SS Successful termination (positive return values):
.TP
.B NLOPT_SUCCESS
Generic success return value.
.TP
.B NLOPT_MINF_MAX_REACHED
Optimization stopped because
.I minf_max
was reached.
.TP
.B NLOPT_FTOL_REACHED
Optimization stopped because
.I ftol_rel
or
.I ftol_abs
was reached.
.TP
.B NLOPT_XTOL_REACHED
Optimization stopped because
.I xtol_rel
or
.I xtol_abs
was reached.
.TP
.B NLOPT_MAXEVAL_REACHED
Optimization stopped because
.I maxeval
was reached.
.TP
.B NLOPT_MAXTIME_REACHED
Optimization stopped because
.I maxtime
was reached.
.SS Error codes (negative return values):
.TP
.B NLOPT_FAILURE
Generic failure code.
.TP
.B NLOPT_INVALID_ARGS
Invalid arguments (e.g. lower bounds are bigger than upper bounds, an
unknown algorithm was specified, etcetera).
.TP
.B NLOPT_OUT_OF_MEMORY
Ran out of memory.
.SH PSEUDORANDOM NUMBERS
For stochastic optimization algorithms, we use pseudorandom numbers generated
by the Mersenne Twister algorithm, based on code from Makoto Matsumoto.
By default, the seed for the random numbers is generated from the system
time, so that they will be different each time you run the program. If
you want to use deterministic random numbers, you can set the seed by
calling:
.sp
.BI "      void nlopt_srand(unsigned long " "seed" );
.sp
Some of the algorithms also support using low-discrepancy sequences (LDS),
sometimes known as quasi-random numbers. NLopt uses the Sobol LDS, which
is implemented for up to 1111 dimensions.
.SH BUGS
Currently the NLopt library is in pre-alpha stage. Most algorithms
currently do not support all termination conditions: the only
termination condition that is consistently supported right now is
.BR maxeval .
.SH AUTHORS
Written by Steven G. Johnson.
.sp
Copyright (c) 2007-2008 Massachusetts Institute of Technology.