.\" Copyright (c) 2007 Massachusetts Institute of Technology
.\"
.\" Copying and distribution of this file, with or without modification,
.\" are permitted in any medium without royalty provided the copyright
.\" notice and this notice are preserved.
.\"
.TH NLOPT_MINIMIZE 3 2007-08-23 "MIT" "NLopt programming manual"
.SH NAME
nlopt_minimize \- Minimize a multivariate nonlinear function
.SH SYNOPSIS
.nf
.B #include <nlopt.h>
.sp
.BI "nlopt_result nlopt_minimize(nlopt_algorithm " "algorithm" ,
.BI "                            int " "n" ,
.BI "                            nlopt_func " "f" ,
.BI "                            void* " "f_data" ,
.BI "                            const double* " "lb" ,
.BI "                            const double* " "ub" ,
.BI "                            double* " "x" ,
.BI "                            double* " "fmin" ,
.BI "                            double " "fmin_max" ,
.BI "                            double " "ftol_rel" ,
.BI "                            double " "ftol_abs" ,
.BI "                            double " "xtol_rel" ,
.BI "                            const double* " "xtol_abs" ,
.BI "                            int " "maxeval" ,
.BI "                            double " "maxtime" );
.fi
You should link the resulting program with the linker flags
\-lnlopt \-lm on Unix.
.SH DESCRIPTION
.BR nlopt_minimize ()
attempts to minimize a nonlinear function
.I f
of
.I n
design variables using the specified
.IR algorithm .
The minimum function value found is returned in
.IR fmin ,
with the corresponding design variable values stored in the array
.I x
of length
.IR n .
The input values in
.I x
should be a starting guess for the optimum.
The parameters
.I lb
and
.I ub
are arrays of length
.I n
containing lower and upper bounds, respectively, on the design variables
.IR x .
The other parameters specify stopping criteria (tolerances, the maximum
number of function evaluations, etcetera) and other information as described
in more detail below. The return value is an integer code indicating success
(positive) or failure (negative), as described below.
.PP
By changing the parameter
.I algorithm
among several predefined constants described below, one can switch easily
between a variety of minimization algorithms. Some of these algorithms
require the gradient (derivatives) of the function to be supplied via
.IR f ,
and other algorithms do not require derivatives. Some of the
algorithms attempt to find a global minimum within the given bounds,
and others find only a local minimum (with the initial value of
.I x
as a starting guess).
.PP
The
.B nlopt_minimize
function is a wrapper around several free/open-source minimization packages.
You could, of course, compile and call these packages separately, and in
some cases this will provide greater flexibility than is available via the
.B nlopt_minimize
interface. However, depending upon the specific function being minimized,
the different algorithms will vary in effectiveness. The intent of
.B nlopt_minimize
is to allow you to quickly switch between algorithms in order to experiment
with them for your problem, by providing a simple unified interface to
them.
.SH OBJECTIVE FUNCTION
.BR nlopt_minimize ()
minimizes an objective function
.I f
of the form:
.sp
.BI " double f(int " "n" ,
.BI "          const double* " "x" ,
.BI "          double* " "grad" ,
.BI "          void* " "f_data" );
.sp
The return value should be the value of the function at the point
.IR x ,
where
.I x
points to an array of length
.I n
of the design variables. The dimension
.I n
is identical to the one passed to
.BR nlopt_minimize ().
.PP
In addition, if the argument
.I grad
is not NULL, then
.I grad
points to an array of length
.I n
which should (upon return) be set to the gradient of the function with
respect to the design variables at
.IR x .
That is, grad[i] should upon return contain the partial derivative
df/dx[i], for 0 <= i < n, if
.I grad
is non-NULL.
Not all of the optimization algorithms (below) use the gradient information:
for algorithms listed as "derivative-free," the
.I grad
argument will always be NULL and need never be computed. (For
algorithms that do use gradient information, however,
.I grad
may still be NULL for some calls.)
.PP
The
.I f_data
argument is the same as the one passed to
.BR nlopt_minimize (),
and may be used to pass any additional data through to the function.
(That is, it may be a pointer to some caller-defined data
structure/type containing information your function needs, which you
convert from void* by a typecast.)
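For example, an objective function of this form, computing its gradient and
receiving caller-defined data through f_data, might look like the following
(a sketch: the struct, field, and function names are illustrative, not part
of NLopt; only the f prototype above is from the interface):

```c
/* Hypothetical data passed through nlopt_minimize()'s f_data pointer. */
typedef struct {
    double a, b; /* target point: minimize (x[0]-a)^2 + (x[1]-b)^2 */
} my_data;

/* Objective matching the nlopt_func prototype above (n = 2). */
double myfunc(int n, const double *x, double *grad, void *f_data)
{
    my_data *d = (my_data *) f_data; /* typecast from void*, as described */
    (void) n;
    if (grad) { /* grad may be NULL for derivative-free algorithms */
        grad[0] = 2.0 * (x[0] - d->a);
        grad[1] = 2.0 * (x[1] - d->b);
    }
    return (x[0] - d->a) * (x[0] - d->a) + (x[1] - d->b) * (x[1] - d->b);
}
```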
.SH BOUND CONSTRAINTS
Most of the algorithms in NLopt are designed for minimization of functions
with simple bound constraints on the inputs. That is, the input vectors
x[i] are constrained to lie in a hyperrectangle lb[i] <= x[i] <= ub[i] for
0 <= i < n, where
.I lb
and
.I ub
are the two arrays passed to
.BR nlopt_minimize ().
.PP
However, a few of the algorithms support partially or totally
unconstrained optimization, as noted below, where a (totally or
partially) unconstrained design variable is indicated by a lower bound
equal to \-Inf and/or an upper bound equal to +Inf. Here, Inf is the
IEEE-754 floating-point infinity, which (in ANSI C99) is represented by
the macro INFINITY in math.h. Alternatively, for older C versions
you may also use the macro HUGE_VAL (also in math.h).
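For instance, bounds for three design variables where x[0] is fully bounded,
x[1] is bounded below only, and x[2] is fully unconstrained could be declared
as follows (the specific values are illustrative):

```c
#include <math.h> /* for HUGE_VAL, the portable +Inf */

/* Bounds for n = 3 design variables:
   x[0] in [0,1]; x[1] >= -1 with no upper bound; x[2] unconstrained. */
double lb[3] = { 0.0, -1.0, -HUGE_VAL };
double ub[3] = { 1.0, HUGE_VAL, HUGE_VAL };
```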
.PP
With some of the algorithms, you may also employ arbitrary nonlinear
constraints on the input variables. This is indicated by returning NaN
(not a number) or Inf (infinity) from your objective function whenever
a forbidden point is requested. See above for how to specify infinity;
NaN is specified by the macro NAN in math.h (for ANSI C99).
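A sketch of this convention, with an illustrative (not NLopt-supplied)
objective that forbids points outside the unit disk by returning NaN:

```c
#include <math.h> /* for NAN */

/* Objective restricted to the unit disk x[0]^2 + x[1]^2 <= 1: returning
   NaN marks a forbidden point, per the convention described above.
   (Both the constraint and the objective here are illustrative.) */
double constrained_f(int n, const double *x, double *grad, void *f_data)
{
    (void) n; (void) f_data;
    if (x[0] * x[0] + x[1] * x[1] > 1.0)
        return NAN;     /* forbidden point */
    if (grad) {         /* gradient of x[0] + x[1] */
        grad[0] = 1.0;
        grad[1] = 1.0;
    }
    return x[0] + x[1]; /* arbitrary objective on the feasible region */
}
```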
.SH ALGORITHMS
The
.I algorithm
parameter specifies the optimization algorithm (for more detail on
these, see the README files in the source-code subdirectories), and
can take on any of the following values:
.TP
.B NLOPT_GLOBAL_DIRECT
Perform a global derivative-free optimization using the DIRECT search
algorithm by Jones et al. Supports arbitrary nonlinear constraints as
described above, but does not support unconstrained optimization.
.TP
.B NLOPT_GLOBAL_DIRECT_L
Perform a global derivative-free optimization using the DIRECT-L
search algorithm by Gablonsky et al., a modified version of the DIRECT
algorithm that is more suited to functions with modest numbers of
local minima. Supports arbitrary nonlinear constraints as described
above, but does not support unconstrained optimization.
.TP
.B NLOPT_LOCAL_SUBPLEX
Perform a local derivative-free optimization, starting at
.IR x ,
using the Subplex algorithm of Rowan et al., which is an improved
variant of the Nelder-Mead simplex algorithm. (Like Nelder-Mead, Subplex
often works well in practice, even for discontinuous objectives, but
there is no rigorous guarantee that it will converge.) Subplex is
best for unconstrained optimization, but constrained optimization also
works (both for simple bound constraints via
.I lb
and
.IR ub ,
as well as nonlinear constraints as described above).
.TP
.B NLOPT_GLOBAL_STOGO
Global optimization using the StoGO algorithm by Madsen et al. StoGO
exploits gradient information (which must be supplied by the
objective) for its local searches, and performs the global search by a
stochastic branch-and-bound technique. Only bound-constrained optimization
is supported.
.TP
.B NLOPT_LOCAL_LBFGS
Local gradient-based optimization using the low-storage BFGS (L-BFGS)
algorithm. (The objective function must supply the gradient.)
Unconstrained optimization is supported in addition to simple bound
constraints (see above).
.SH STOPPING CRITERIA
Multiple stopping criteria for the optimization are supported, as
specified by the following arguments to
.BR nlopt_minimize ().
The optimization halts whenever any one of these criteria is
satisfied. In some cases, the precise interpretation of the stopping
criterion depends on the optimization algorithm above (although we
have tried to make them as consistent as reasonably possible), and
some algorithms do not support all of the stopping criteria.
.TP
.B fmin_max
Stop when a function value less than or equal to
.I fmin_max
is found. Set to \-Inf (see constraints section above) to disable.
.TP
.B ftol_rel
Relative tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.I ftol_rel
multiplied by the absolute value of the function value. (If there is
any chance that your minimum function value is close to zero, you
might want to set an absolute tolerance with
.I ftol_abs
as well.) Disabled if non-positive.
.TP
.B ftol_abs
Absolute tolerance on function value: stop when an optimization step
(or an estimate of the minimum) changes the function value by less
than
.IR ftol_abs .
Disabled if non-positive.
.TP
.B xtol_rel
Relative tolerance on design variables: stop when an optimization step
(or an estimate of the minimum) changes every design variable by less
than
.I xtol_rel
multiplied by the absolute value of the design variable. (If there is
any chance that an optimal design variable is close to zero, you
might want to set an absolute tolerance with
.I xtol_abs
as well.) Disabled if non-positive.
.TP
.B xtol_abs
Pointer to an array of length
.I n
giving absolute tolerances on design variables: stop when an
optimization step (or an estimate of the minimum) changes every design
variable x[i] by less than xtol_abs[i].
Disabled if non-positive, or if
.I xtol_abs
is NULL.
.TP
.B maxeval
Stop when the number of function evaluations exceeds
.IR maxeval .
(This is not a strict maximum: the number of function evaluations may
exceed
.I maxeval
slightly, depending upon the algorithm.) Disabled if non-positive.
.TP
.B maxtime
Stop when the optimization time (in seconds) exceeds
.IR maxtime .
(This is not a strict maximum: the time may
exceed
.I maxtime
slightly, depending upon the algorithm and on how slow your function
evaluation is.) Disabled if non-positive.
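As a sketch of how these arguments combine, the following hypothetical
settings enable only a relative function tolerance and an evaluation cap,
disabling every other criterion as described above (the specific values are
illustrative):

```c
#include <math.h>   /* for HUGE_VAL */
#include <stddef.h> /* for NULL */

/* Only ftol_rel and maxeval are active stopping criteria. */
double fmin_max = -HUGE_VAL; /* disabled: no target function value */
double ftol_rel = 1e-8;      /* stop on 1e-8 relative change in f */
double ftol_abs = 0.0;       /* disabled (non-positive) */
double xtol_rel = 0.0;       /* disabled (non-positive) */
double *xtol_abs = NULL;     /* disabled (NULL) */
int maxeval = 1000;          /* stop after roughly 1000 evaluations */
double maxtime = 0.0;        /* disabled (non-positive) */
```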
.SH RETURN VALUE
The value returned is one of the following enumerated constants.
.SS Successful termination (positive return values):
.TP
.B NLOPT_SUCCESS
Generic success return value.
.TP
.B NLOPT_FMIN_MAX_REACHED
Optimization stopped because
.I fmin_max
(above) was reached.
.TP
.B NLOPT_FTOL_REACHED
Optimization stopped because
.I ftol_rel
or
.I ftol_abs
(above) was reached.
.TP
.B NLOPT_XTOL_REACHED
Optimization stopped because
.I xtol_rel
or
.I xtol_abs
(above) was reached.
.TP
.B NLOPT_MAXEVAL_REACHED
Optimization stopped because
.I maxeval
(above) was reached.
.TP
.B NLOPT_MAXTIME_REACHED
Optimization stopped because
.I maxtime
(above) was reached.
.SS Error codes (negative return values):
.TP
.B NLOPT_FAILURE
Generic failure code.
.TP
.B NLOPT_INVALID_ARGS
Invalid arguments (e.g. lower bounds are bigger than upper bounds, an
unknown algorithm was specified, etcetera).
.TP
.B NLOPT_OUT_OF_MEMORY
Ran out of memory.
.SH PSEUDORANDOM NUMBERS
For stochastic optimization algorithms, we use pseudorandom numbers generated
by the Mersenne Twister algorithm, based on code from Makoto Matsumoto.
By default, the seed for the random numbers is generated from the system
time, so that they will be different each time you run the program. If
you want to use deterministic random numbers, you can set the seed by
calling:
.sp
.BI " void nlopt_srand(unsigned long " "seed" );
.SH BUGS
Currently the NLopt library is in a pre-alpha stage. Most algorithms
currently do not support all termination conditions: the only
termination condition that is consistently supported right now is
.IR maxeval .
.SH AUTHORS
Written by Steven G. Johnson.
.PP
Copyright (c) 2007 Massachusetts Institute of Technology.