set parameters of the optimization, constraints, and stopping
criteria. Here, \fBnlopt_set_ftol_rel\fR is merely an example of a
possible stopping criterion. You should link the resulting program
with the linker flags \-lnlopt \-lm on Unix.
.fi
.SH DESCRIPTION
NLopt is a library for nonlinear optimization. It attempts to
(unconstrained, i.e. a bound of infinity); it is possible to have
lower bounds but not upper bounds or vice versa. Alternatively, the
user can call one of the above functions and explicitly pass a lower
bound of \-HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
variables to make them have no lower/upper bound, respectively.
(HUGE_VAL is the standard C constant for a floating-point infinity,
found in the math.h header file.)
.BI " double " "ub" );
.sp
.SH NONLINEAR CONSTRAINTS
Several of the algorithms in NLopt (MMA and ORIG_DIRECT) also support
arbitrary nonlinear inequality constraints, and some also allow
nonlinear equality constraints (COBYLA, SLSQP, ISRES, and AUGLAG).
For these algorithms, you can specify as many nonlinear constraints as
you wish by calling the following functions multiple times.
.sp
In particular, a nonlinear inequality constraint of the form
\fIfc\fR(\fIx\fR) <= 0, where the function
al. to be more weighted towards local search. Does not support
unconstrained optimization. There are also several other variants of
the DIRECT algorithm that are supported:
.BR NLOPT_GN_DIRECT ,
which is the original DIRECT algorithm;
.BR NLOPT_GN_DIRECT_L_RAND ,
a slightly randomized version of DIRECT-L that may be better in
high-dimensional search spaces;
.BR NLOPT_GN_DIRECT_NOSCAL ,
.BR NLOPT_GN_DIRECT_L_NOSCAL ,
and
.BR NLOPT_GN_DIRECT_L_RAND_NOSCAL ,
which are versions of DIRECT where the dimensions are not rescaled to
a unit hypercube (which means that dimensions with larger bounds are
given more weight).
.BR NLOPT_GD_STOGO_RAND ,
which is a randomized version of the StoGO search scheme. The StoGO
algorithms are only available if NLopt is compiled with C++ code
enabled, and should be linked via \-lnlopt_cxx instead of \-lnlopt (via
a C++ compiler, in order to link the C++ standard libraries).
.TP
.B NLOPT_LN_NELDERMEAD
handle nonlinear inequality and equality constraints as suggested by
Runarsson and Yao.
.TP
\fBNLOPT_G_MLSL_LDS\fR, \fBNLOPT_G_MLSL\fR
Global (G) optimization using the multi-level single-linkage (MLSL)
algorithm with a low-discrepancy sequence (LDS) or pseudorandom
numbers, respectively. This algorithm executes a low-discrepancy
or pseudorandom sequence of local searches, with a clustering
heuristic to avoid multiple local searches for the same local optimum.
The local search algorithm must be specified, along with termination
criteria/tolerances for the local searches, by
\fInlopt_set_local_optimizer\fR. (This subsidiary algorithm can be
with or without derivatives, and determines whether the objective
function needs gradients.)
.TP
\fBNLOPT_LD_MMA\fR, \fBNLOPT_LD_CCSAQ\fR
Local (L) gradient-based (D) optimization using the method of moving
asymptotes (MMA), or rather a refined version of the algorithm as
published by Svanberg (2002). (NLopt uses an independent
free-software/open-source implementation of Svanberg's algorithm.) CCSAQ
is a related algorithm from Svanberg's paper which uses a local quadratic
approximation rather than the more-complicated MMA model; the two usually
have similar convergence rates.
The
.B NLOPT_LD_MMA
algorithm supports both bound-constrained and unconstrained
optimization, and also supports an arbitrary number (\fIm\fR) of
nonlinear inequality (not equality) constraints as described above.
.TP
.B NLOPT_LD_SLSQP
Local (L) gradient-based (D) optimization using sequential quadratic
programming and BFGS updates, supporting arbitrary nonlinear
inequality and equality constraints, based on the code by Dieter Kraft
(1988) adapted for use by the SciPy project. Note that this algorithm
uses dense-matrix methods requiring O(\fIn\fR^2) storage and
O(\fIn\fR^3) time, making it less practical for problems involving
more than a few thousand parameters.
.TP
.B NLOPT_LN_COBYLA
Local (L) derivative-free (N) optimization using the COBYLA algorithm
of Powell (Constrained Optimization BY Linear Approximations).
The
.B NLOPT_LN_COBYLA
algorithm supports both bound-constrained and unconstrained
optimization, and also supports an arbitrary number (\fIm\fR) of
nonlinear inequality/equality constraints as described above.
.TP
.B NLOPT_LN_NEWUOA
Local (L) derivative-free (N) optimization using a variant of the
Local (L) derivative-free (N) optimization using the BOBYQA algorithm
of Powell, based on successive quadratic approximations of the
objective function, supporting bound constraints.
.TP
.B NLOPT_AUGLAG
Optimize an objective with nonlinear inequality/equality constraints
via an unconstrained (or bound-constrained) optimization algorithm,
using a gradually increasing "augmented Lagrangian" penalty for
violated constraints. Requires you to specify another optimization
algorithm for optimizing the objective+penalty function, using
\fInlopt_set_local_optimizer\fR. (This subsidiary algorithm can be
global or local and with or without derivatives, but you must specify
its own termination criteria.) A variant, \fBNLOPT_AUGLAG_EQ\fR, only
uses the penalty approach for equality constraints, while inequality
constraints are handled directly by the subsidiary algorithm (restricting
the choice of subsidiary algorithms to those that can handle inequality
constraints).
.SH STOPPING CRITERIA
Multiple stopping criteria for the optimization are supported, as
specified by the functions to modify a given optimization problem
.I stopval
is found: stop minimizing when a value <= \fIstopval\fR is found, or
stop maximizing when a value >= \fIstopval\fR is found. (Setting
\fIstopval\fR to \-HUGE_VAL for minimizing or +HUGE_VAL for maximizing
disables this stopping criterion.)
.TP
.BI "nlopt_result nlopt_set_ftol_rel(nlopt_opt " "opt" ,
.SH LOCAL OPTIMIZER
Some of the algorithms, especially MLSL and AUGLAG, use a different
optimization algorithm as a subroutine, typically for local
optimization. You can change the local search algorithm and its
tolerances by calling:
.sp
.BI " nlopt_result nlopt_set_local_optimizer(nlopt_opt " "opt" ,
.br
.sp
Here, \fIlocal_opt\fR is another \fBnlopt_opt\fR object whose
parameters are used to determine the local search algorithm and
stopping criteria. (The objective function, bounds, and
nonlinear-constraint parameters of \fIlocal_opt\fR are ignored.) The
dimension \fIn\fR of \fIlocal_opt\fR must match that of \fIopt\fR.
.sp
This function makes a copy of the \fIlocal_opt\fR object, so you can
freely destroy your original \fIlocal_opt\fR afterwards.
.SH AUTHORS
Written by Steven G. Johnson.
.PP
Copyright (c) 2007-2014 Massachusetts Institute of Technology.
.SH "SEE ALSO"
nlopt_minimize(3)