set parameters of the optimization, constraints, and stopping
criteria. Here, \fBnlopt_set_ftol_rel\fR is merely an example of a
possible stopping criterion. You should link the resulting program
-with the linker flags -lnlopt -lm on Unix.
+with the linker flags \-lnlopt \-lm on Unix.
.fi
.SH DESCRIPTION
NLopt is a library for nonlinear optimization. It attempts to
.I n
design variables, using the specified
.IR algorithm ,
-possibly subject to linear or nonlinear constraints. The minimum
+possibly subject to linear or nonlinear constraints. The optimum
function value found is returned in \fIopt_f\fR (type double) with the
corresponding design variable values returned in the (double) array
.I x
require the gradient (derivatives) of the function to be supplied via
.IR f ,
and other algorithms do not require derivatives. Some of the
-algorithms attempt to find a global minimum within the given bounds,
-and others find only a local minimum. Most of the algorithms only
+algorithms attempt to find a global optimum within the given bounds,
+and others find only a local optimum. Most of the algorithms only
handle the case where there are no nonlinear constraints. The NLopt
library is a wrapper around several free/open-source minimization
packages, as well as some new implementations of published
.I f
should be of the form:
.sp
-.BI " double f(int " "n" ,
+.BI " double f(unsigned " "n" ,
.br
.BI " const double* " "x" ,
.br
(unconstrained, i.e. a bound of infinity); it is possible to have
lower bounds but not upper bounds or vice versa. Alternatively, the
user can call one of the above functions and explicitly pass a lower
-bound of -HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
+bound of \-HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
variables to make them have no lower/upper bound, respectively.
(HUGE_VAL is the standard C constant for a floating-point infinity,
found in the math.h header file.)
.BI " double " "ub" );
.sp
.SH NONLINEAR CONSTRAINTS
-Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also
-support arbitrary nonlinear inequality constraints, and some also
-allow nonlinear equality constraints (ISRES and AUGLAG). For these
-algorithms, you can specify as many nonlinear constraints as you wish
-by calling the following functions multiple times.
+Several of the algorithms in NLopt (MMA and ORIG_DIRECT) also support
+arbitrary nonlinear inequality constraints, and some also allow
+nonlinear equality constraints (COBYLA, SLSQP, ISRES, and AUGLAG).
+For these algorithms, you can specify as many nonlinear constraints as
+you wish by calling the following functions multiple times.
.sp
In particular, a nonlinear inequality constraint of the form
\fIfc\fR(\fIx\fR) <= 0, where the function
in their names
refer to global optimization methods, whereas
.B _L{N,D}_
-refers to local optimization methods (that try to find a local minimum
+refers to local optimization methods (that try to find a local optimum
starting from the starting guess
.IR x ).
Constants with
al. to be more weighted towards local search. Does not support
unconstrained optimization. There are also several other variants of
the DIRECT algorithm that are supported:
-.BR NLOPT_GLOBAL_DIRECT ,
+.BR NLOPT_GN_DIRECT ,
which is the original DIRECT algorithm;
-.BR NLOPT_GLOBAL_DIRECT_L_RAND ,
+.BR NLOPT_GN_DIRECT_L_RAND ,
a slightly randomized version of DIRECT-L that may be better in
high-dimensional search spaces;
-.BR NLOPT_GLOBAL_DIRECT_NOSCAL ,
-.BR NLOPT_GLOBAL_DIRECT_L_NOSCAL ,
+.BR NLOPT_GN_DIRECT_NOSCAL ,
+.BR NLOPT_GN_DIRECT_L_NOSCAL ,
and
-.BR NLOPT_GLOBAL_DIRECT_L_RAND_NOSCAL ,
+.BR NLOPT_GN_DIRECT_L_RAND_NOSCAL ,
which are versions of DIRECT where the dimensions are not rescaled to
a unit hypercube (which means that dimensions with larger bounds are
given more weight).
.BR NLOPT_GD_STOGO_RAND ,
which is a randomized version of the StoGO search scheme. The StoGO
algorithms are only available if NLopt is compiled with C++ code
-enabled, and should be linked via -lnlopt_cxx instead of -lnlopt (via
+enabled, and should be linked via \-lnlopt_cxx instead of \-lnlopt (via
a C++ compiler, in order to link the C++ standard libraries).
.TP
.B NLOPT_LN_NELDERMEAD
handle nonlinear inequality and equality constraints as suggested by
Runarsson and Yao.
.TP
-\fBNLOPT_GD_MLSL_LDS\fR, \fBNLOPT_GN_MLSL_LDS\fR
-Global (G) derivative-based (D) or derivative-free (N) optimization
-using the multi-level single-linkage (MLSL) algorithm with a
-low-discrepancy sequence (LDS). This algorithm executes a quasi-random
-(LDS) sequence of local searches, with a clustering heuristic to
-avoid multiple local searches for the same local minimum. The local
-search uses the derivative/nonderivative algorithm set by
-.I nlopt_set_local_optimizer
-(currently defaulting to
-.I NLOPT_LD_MMA
-and
-.I NLOPT_LN_COBYLA
-for derivative/nonderivative searches, respectively). There are also
-two other variants, \fBNLOPT_GD_MLSL\fR and \fBNLOPT_GN_MLSL\fR, which use
-pseudo-random numbers (instead of an LDS) as in the original MLSL algorithm.
+\fBNLOPT_G_MLSL_LDS\fR, \fBNLOPT_G_MLSL\fR
+Global (G) optimization using the multi-level single-linkage (MLSL)
+algorithm with a low-discrepancy sequence (LDS) or pseudorandom
+numbers, respectively. This algorithm executes a low-discrepancy
+or pseudorandom sequence of local searches, with a clustering
+heuristic to avoid multiple local searches for the same local optimum.
+The local search algorithm must be specified, along with termination
+criteria/tolerances for the local searches, by
+\fInlopt_set_local_optimizer\fR. (This subsidiary algorithm can be
+with or without derivatives, and determines whether the objective
+function needs gradients.)

.TP
-.B NLOPT_LD_MMA
+\fBNLOPT_LD_MMA\fR, \fBNLOPT_LD_CCSAQ\fR
Local (L) gradient-based (D) optimization using the method of moving
asymptotes (MMA), or rather a refined version of the algorithm as
published by Svanberg (2002). (NLopt uses an independent
-free-software/open-source implementation of Svanberg's algorithm.)
+free-software/open-source implementation of Svanberg's algorithm.) CCSAQ
+is a related algorithm from Svanberg's paper which uses a local quadratic
+approximation rather than the more-complicated MMA model; the two usually
+have similar convergence rates.
The
.B NLOPT_LD_MMA
-algorithm supports both bound-constrained and unconstrained optimization,
-and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
-as described above.
+algorithm supports both bound-constrained and unconstrained
+optimization, and also supports an arbitrary number (\fIm\fR) of
+nonlinear inequality (not equality) constraints as described above.
+.TP
+.B NLOPT_LD_SLSQP
+Local (L) gradient-based (D) optimization using sequential quadratic
+programming and BFGS updates, supporting arbitrary nonlinear
+inequality and equality constraints, based on the code by Dieter Kraft
+(1988), adapted for use by the SciPy project. Note that this algorithm
+uses dense-matrix methods requiring O(\fIn\fR^2) storage and
+O(\fIn\fR^3) time, making it less practical for problems involving
+more than a few thousand parameters.
.TP
.B NLOPT_LN_COBYLA
Local (L) derivative-free (N) optimization using the COBYLA algorithm
of Powell (Constrained Optimization BY Linear Approximations).
The
.B NLOPT_LN_COBYLA
-algorithm supports both bound-constrained and unconstrained optimization,
-and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
-as described above.
+algorithm supports both bound-constrained and unconstrained
+optimization, and also supports an arbitrary number (\fIm\fR) of
+nonlinear inequality/equality constraints as described above.
.TP
.B NLOPT_LN_NEWUOA
Local (L) derivative-free (N) optimization using a variant of the
Local (L) derivative-free (N) optimization using the BOBYQA algorithm
of Powell, based on successive quadratic approximations of the
objective function, supporting bound constraints.
+.TP
+.B NLOPT_AUGLAG
+Optimize an objective with nonlinear inequality/equality constraints
+via an unconstrained (or bound-constrained) optimization algorithm,
+using a gradually increasing "augmented Lagrangian" penalty for
+violated constraints. Requires you to specify another optimization
+algorithm for optimizing the objective+penalty function, using
+\fInlopt_set_local_optimizer\fR. (This subsidiary algorithm can be
+global or local and with or without derivatives, but you must specify
+its own termination criteria.) A variant, \fBNLOPT_AUGLAG_EQ\fR, only
+uses the penalty approach for equality constraints, while inequality
+constraints are handled directly by the subsidiary algorithm (restricting
+the choice of subsidiary algorithms to those that can handle inequality
+constraints).
.SH STOPPING CRITERIA
Multiple stopping criteria for the optimization are supported, as
specified by the functions to modify a given optimization problem
.I stopval
is found: stop minimizing when a value <= \fIstopval\fR is found, or
stop maximizing when a value >= \fIstopval\fR is found. (Setting
-\fIstopval\fR to -HUGE_VAL for minimizing or +HUGE_VAL for maximizing
+\fIstopval\fR to \-HUGE_VAL for minimizing or +HUGE_VAL for maximizing
disables this stopping criterion.)
.TP
.BI "nlopt_result nlopt_set_ftol_rel(nlopt_opt " "opt" ,
.BI " double " tol );
.sp
Set relative tolerance on function value: stop when an optimization step
-(or an estimate of the minimum) changes the function value by less
+(or an estimate of the optimum) changes the function value by less
than
.I tol
-multiplied by the absolute value of the function value. (If there is any chance that your minimum function value is close to zero, you might want to set an absolute tolerance with
+multiplied by the absolute value of the function value. (If there is any chance that your optimum function value is close to zero, you might want to set an absolute tolerance with
.B nlopt_set_ftol_abs
as well.) Criterion is disabled if \fItol\fR is non-positive.
.TP
.BI " double " tol );
.sp
Set absolute tolerance on function value: stop when an optimization step
-(or an estimate of the minimum) changes the function value by less
+(or an estimate of the optimum) changes the function value by less
than
.IR tol .
Criterion is disabled if \fItol\fR is non-positive.
.BI " double " tol );
.sp
Set relative tolerance on design variables: stop when an optimization step
-(or an estimate of the minimum) changes every design variable by less
+(or an estimate of the optimum) changes every design variable by less
than
.I tol
multiplied by the absolute value of the design variable. (If there is
to an array of length
.I n
giving the tolerances: stop when an
-optimization step (or an estimate of the minimum) changes every design
+optimization step (or an estimate of the optimum) changes every design
variable
.IR x [i]
by less than
.TP
.B NLOPT_OUT_OF_MEMORY
Ran out of memory.
+.TP
+.B NLOPT_ROUNDOFF_LIMITED
+Halted because roundoff errors limited progress.
+.TP
+.B NLOPT_FORCED_STOP
+Halted because the user called \fBnlopt_force_stop\fR(\fIopt\fR) on
+the optimization's \fBnlopt_opt\fR object \fIopt\fR from the user's
+objective function.
.SH LOCAL OPTIMIZER
Some of the algorithms, especially MLSL and AUGLAG, use a different
optimization algorithm as a subroutine, typically for local
-optimization. By default, they use MMA or COBYLA for gradient-based
-or derivative-free searching, respectively. However, you can change
-the local search algorithm and its tolerances by calling:
+optimization. You can change the local search algorithm and its
+tolerances by calling:
.sp
.BI " nlopt_result nlopt_set_local_optimizer(nlopt_opt " "opt" ,
.br
.BI " const nlopt_opt " "local_opt" );
.sp
-Here, \fIlocal_opt\fR is another \fBnlopt_opt\fB object whose
+Here, \fIlocal_opt\fR is another \fBnlopt_opt\fR object whose
parameters are used to determine the local search algorithm and
-stopping criteria. (The objective function and nonlinear-constraint
-parameters of \fIlocal_opt\fR are ignored.) The dimension \fIn\fR of
-\fIlocal_opt\fR must match that of \fIopt\fR.
+stopping criteria. (The objective function, bounds, and
+nonlinear-constraint parameters of \fIlocal_opt\fR are ignored.) The
+dimension \fIn\fR of \fIlocal_opt\fR must match that of \fIopt\fR.
.sp
This function makes a copy of the \fIlocal_opt\fR object, so you can
freely destroy your original \fIlocal_opt\fR afterwards.
.SH AUTHORS
Written by Steven G. Johnson.
.PP
-Copyright (c) 2007-2010 Massachusetts Institute of Technology.
+Copyright (c) 2007-2014 Massachusetts Institute of Technology.
.SH "SEE ALSO"
nlopt_minimize(3)