.\" .\" Copyright (c) 2007 Massachusetts Institute of Technology .\" .\" Copying and distribution of this file, with or without modification, .\" are permitted in any medium without royalty provided the copyright .\" notice and this notice are preserved. .\" .TH NLOPT_MINIMIZE 3 2007-08-23 "MIT" "NLopt programming manual" .SH NAME nlopt_minimize \- Minimize a multivariate nonlinear function .SH SYNOPSIS .nf .B #include .sp .BI "nlopt_result nlopt_minimize(nlopt_algorithm " "algorithm" , .br .BI " int " "n" , .BI " nlopt_func " "f" , .BI " void* " "f_data" , .BI " const double* " "lb" , .BI " const double* " "ub" , .BI " double* " "x" , .BI " double* " "fmin" , .BI " double " "fmin_max" , .BI " double " "ftol_rel" , .BI " double " "ftol_abs" , .BI " double " "xtol_rel" , .BI " const double* " "xtol_abs" , .BI " int " "maxeval" , .BI " double " "maxtime" ); .sp You should link the resulting program with the linker flags -lnlopt -lm on Unix. .fi .SH DESCRIPTION .BR nlopt_minimize () attempts to minimize a nonlinear function .I f of .I n design variables using the specified .IR algorithm . The minimum function value found is returned in .IR fmin , with the corresponding design variable values stored in the array .I x of length .IR n . The inputs .I lb and .I ub are arrays of length .I n containing lower and upper bounds, respectively, on the design variables .IR x . The other parameters specify stopping criteria (tolerances, the maximum number of function evaluations, etcetera) and other information as described in more detail below. The return value is a integer code indicating success (positive) or failure (negative), as described below. .PP By changing the parameter .I algorithm among several predefined constants described below, one can switch easily between a variety of minimization algorithms. Some of these algorithms require the gradient (derivatives) of the function to be supplied via .IR f , and other algorithms do not require derivatives. Some of the algorithms attempt to find a global minimum within the given bounds, and others find only a local minimum (with the initial value of .I x as a starting guess). .PP The .B nlopt_minimize function is a wrapper around several free/open-source minimization packages. You could, of course, compile and call these packages separately, and in some cases this will provide greater flexibility than is available via the .B nlopt_minimize interface. However, depending upon the specific function being minimized, the different algorithms will vary in effectiveness. The intent of .B nlopt_minimize is to allow you to quickly switch between algorithms in order to experiment with them for your problem, by providing a simple unified interface to these subroutines. .SH OBJECTIVE FUNCTION .BR nlopt_minimize () minimizes an objective function .I f of the form: .sp .BI " double f(int " "n" , .br .BI " const double* " "x" , .br .BI " double* " "grad" , .br .BI " void* " "f_data" ); .sp The return value should be the value of the function at the point .IR x , where .I x points to an array of length .I n of the design variables. The dimension .I n is identical to the one passed to .BR nlopt_minimize (). .sp In addition, if the argument .I grad is not NULL, then .I grad points to an array of length .I n which should (upon return) be set to the gradient of the function with respect to the design variables at .IR x . That is, .IR grad[i] should upon return contain the partial derivative df/dx[i], for 0 <= i < n, if .I grad is non-NULL. 
Not all of the optimization algorithms (below) use the gradient
information: for algorithms listed as "derivative-free," the
.I grad
argument will always be NULL and need never be computed.  (For algorithms
that do use gradient information, however,
.I grad
may still be NULL for some calls.)
.sp
The
.I f_data
argument is the same as the one passed to
.BR nlopt_minimize (),
and may be used to pass any additional data through to the function.
(That is, it may be a pointer to some caller-defined data structure/type
containing information your function needs, which you convert from void*
by a typecast.)
.sp
.SH CONSTRAINTS
Most of the algorithms in NLopt are designed for minimization of functions
with simple bound constraints on the inputs.  That is, the input vectors
x[i] are constrained to lie in a hyperrectangle lb[i] <= x[i] <= ub[i] for
0 <= i < n, where
.I lb
and
.I ub
are the two arrays passed to
.BR nlopt_minimize ().
.sp
However, a few of the algorithms support partially or totally
unconstrained optimization, as noted below, where a (totally or partially)
unconstrained design variable is indicated by a lower bound equal to -Inf
and/or an upper bound equal to +Inf.  Here, Inf is the IEEE-754
floating-point infinity, which (in ANSI C99) is represented by the macro
INFINITY in math.h.  Alternatively, for older C versions you may also use
the macro HUGE_VAL (also in math.h).
.sp
With some of the algorithms, you may also employ arbitrary nonlinear
constraints on the input variables.  This is indicated by returning NaN
(not a number) or Inf (infinity) from your objective function whenever a
forbidden point is requested.  See above for how to specify infinity; NaN
is specified by the macro NAN in math.h (for ANSI C99).
.SH ALGORITHMS
The
.I algorithm
parameter specifies the optimization algorithm (for more detail on these,
see the README files in the source-code subdirectories), and can take on
any of the following values:
.TP
.B NLOPT_GLOBAL_DIRECT
Perform a global derivative-free optimization using the DIRECT search
algorithm by Jones et al.  Supports arbitrary nonlinear constraints as
described above, but does not support unconstrained optimization.
.TP
.B NLOPT_GLOBAL_DIRECT_L
Perform a global derivative-free optimization using the DIRECT-L search
algorithm by Gablonsky et al., a modified version of the DIRECT algorithm
that is more suited to functions with modest numbers of local minima.
Supports arbitrary nonlinear constraints as described above, but does not
support unconstrained optimization.
.TP
.B NLOPT_LOCAL_SUBPLEX
Perform a local derivative-free optimization, starting at
.IR x ,
using the Subplex algorithm of T. Rowan, an improved variant of the
Nelder-Mead simplex algorithm.  (Like Nelder-Mead, Subplex often works
well in practice, even for discontinuous objectives, but there is no
rigorous guarantee that it will converge.)  Subplex is best for
unconstrained optimization, but constrained optimization also works,
both with simple bound constraints (via
.I lb
and
.IR ub )
and with nonlinear constraints as described above.
.TP
.B NLOPT_GLOBAL_STOGO
Global optimization using the StoGO algorithm by Madsen et al.  StoGO
exploits gradient information (which must be supplied by the objective)
for its local searches, and performs the global search by a stochastic
branch-and-bound technique.  Only bound-constrained optimization is
supported.
.TP
.B NLOPT_LOCAL_LBFGS
Local gradient-based optimization using the low-storage BFGS (L-BFGS)
algorithm.  (The objective function must supply the gradient.)
Unconstrained optimization is supported in addition to simple bound
constraints (see above).
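.SH EXAMPLE
The following sketch minimizes the quadratic objective from the
OBJECTIVE FUNCTION section above over the box [-5,5] x [-5,5], using the
derivative-free DIRECT-L algorithm.  The tolerance and iteration-limit
values are arbitrary illustrations, and the conventions assumed here for
disabling criteria (an
.I fmin_max
of -HUGE_VAL, zero tolerances, and a zero
.IR maxtime )
should be checked against your NLopt version; per the BUGS section below,
.B maxeval
is the one stopping criterion that is consistently supported, so it serves
as the primary stopping rule:
.sp
.nf
    #include <stdio.h>
    #include <math.h>
    #include <nlopt.h>

    /* objective from the example above */
    double quadratic(int n, const double *x, double *grad, void *f_data);

    int main(void)
    {
         double lb[2] = { -5.0, -5.0 }; /* lower bounds */
         double ub[2] = { +5.0, +5.0 }; /* upper bounds */
         double x[2] = { 3.0, -2.0 };   /* initial guess; overwritten
                                           with the best point found */
         double fmin;                   /* minimum value found */
         nlopt_result result;

         result = nlopt_minimize(NLOPT_GLOBAL_DIRECT_L, 2,
                                 quadratic, NULL, /* f, f_data */
                                 lb, ub, x, &fmin,
                                 -HUGE_VAL,  /* fmin_max */
                                 1e-8, 0.0,  /* ftol_rel, ftol_abs */
                                 0.0, NULL,  /* xtol_rel, xtol_abs */
                                 500,        /* maxeval */
                                 0.0);       /* maxtime (seconds) */
         if (result < 0)
              fprintf(stderr, "nlopt_minimize failed (code %d)\en",
                      (int) result);
         else
              printf("minimum %g at x = (%g, %g)\en",
                     fmin, x[0], x[1]);
         return result < 0;
    }
.fi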
.SH RETURN VALUE
The value returned is one of the constants of the
.B nlopt_result
enumeration.  (Positive return values indicate successful termination,
while negative return values indicate an error condition.)
.SH PSEUDORANDOM NUMBERS
For stochastic optimization algorithms, we use pseudorandom numbers
generated by the Mersenne Twister algorithm, based on code from Makoto
Matsumoto.  By default, the seed for the random numbers is generated from
the system time, so that they will be different each time you run the
program.  If you want to use deterministic random numbers, you can set the
seed by calling:
.sp
.BI "     void nlopt_srand(unsigned long " "seed" );
.SH BUGS
Currently the NLopt library is in pre-alpha stage.  Most algorithms
currently do not support all termination conditions: the only termination
condition that is consistently supported right now is
.BR maxeval .
.SH AUTHORS
Written by Steven G. Johnson.
.PP
Copyright (c) 2007 Massachusetts Institute of Technology.