set parameters of the optimization, constraints, and stopping
criteria. Here, \fBnlopt_set_ftol_rel\fR is merely an example of a
possible stopping criterion. You should link the resulting program
-with the linker flags -lnlopt -lm on Unix.
+with the linker flags \-lnlopt \-lm on Unix.
.fi
.SH DESCRIPTION
NLopt is a library for nonlinear optimization. It attempts to
(unconstrained, i.e. a bound of infinity); it is possible to have
lower bounds but not upper bounds or vice versa. Alternatively, the
user can call one of the above functions and explicitly pass a lower
-bound of -HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
+bound of \-HUGE_VAL and/or an upper bound of +HUGE_VAL for some design
variables to make them have no lower/upper bound, respectively.
(HUGE_VAL is the standard C constant for a floating-point infinity,
found in the math.h header file.)
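+.sp
+For example, a minimal sketch, assuming a two-dimensional
+\fBnlopt_opt\fR object \fIopt\fR has already been created, that bounds
+the first design variable below by zero and leaves the second
+unbounded below:
+.sp
+.nf
+    /* opt: previously created 2-dimensional nlopt_opt (assumed) */
+    double lb[2] = { 0.0, \-HUGE_VAL };  /* no lower bound on x[1] */
+    nlopt_set_lower_bounds(opt, lb);
+.fi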
.BI " double " "ub" );
.sp
.SH NONLINEAR CONSTRAINTS
-Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also
-support arbitrary nonlinear inequality constraints, and some also
-allow nonlinear equality constraints (ISRES and AUGLAG). For these
-algorithms, you can specify as many nonlinear constraints as you wish
-by calling the following functions multiple times.
+Several of the algorithms in NLopt (MMA, COBYLA, SLSQP, and
+ORIG_DIRECT) also support arbitrary nonlinear inequality constraints,
+and some additionally allow nonlinear equality constraints (COBYLA,
+SLSQP, ISRES, and AUGLAG). For these algorithms, you can specify as
+many nonlinear constraints as you wish by calling the following
+functions multiple times.
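+.sp
+For instance, a rough sketch, assuming hypothetical user-defined
+constraint callbacks \fImyconstraint\fR and \fImyequality\fR with
+associated data (the prototypes and conventions for such functions are
+described below):
+.sp
+.nf
+    /* myconstraint, myequality, cdata, edata: hypothetical user code */
+    nlopt_add_inequality_constraint(opt, myconstraint, &cdata, 1e\-8);
+    nlopt_add_equality_constraint(opt, myequality, &edata, 1e\-8);
+.fi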
.sp
In particular, a nonlinear inequality constraint of the form
\fIfc\fR(\fIx\fR) <= 0, where the function
al. to be more weighted towards local search. Does not support
unconstrained optimization. There are also several other variants of
the DIRECT algorithm that are supported:
-.BR NLOPT_GLOBAL_DIRECT ,
+.BR NLOPT_GN_DIRECT ,
which is the original DIRECT algorithm;
-.BR NLOPT_GLOBAL_DIRECT_L_RAND ,
+.BR NLOPT_GN_DIRECT_L_RAND ,
a slightly randomized version of DIRECT-L that may be better in
high-dimensional search spaces;
-.BR NLOPT_GLOBAL_DIRECT_NOSCAL ,
-.BR NLOPT_GLOBAL_DIRECT_L_NOSCAL ,
+.BR NLOPT_GN_DIRECT_NOSCAL ,
+.BR NLOPT_GN_DIRECT_L_NOSCAL ,
and
-.BR NLOPT_GLOBAL_DIRECT_L_RAND_NOSCAL ,
+.BR NLOPT_GN_DIRECT_L_RAND_NOSCAL ,
which are versions of DIRECT where the dimensions are not rescaled to
a unit hypercube (which means that dimensions with larger bounds are
given more weight).
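+.sp
+For example, a brief sketch that selects the locally biased DIRECT-L
+variant for a three-dimensional problem (the finite bound arrays
+\fIlb\fR and \fIub\fR are assumed to be defined elsewhere):
+.sp
+.nf
+    /* lb, ub: user-supplied arrays of length 3 (assumed) */
+    nlopt_opt opt = nlopt_create(NLOPT_GN_DIRECT_L, 3);
+    nlopt_set_lower_bounds(opt, lb);
+    nlopt_set_upper_bounds(opt, ub);
+.fi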
.BR NLOPT_GD_STOGO_RAND ,
which is a randomized version of the StoGO search scheme. The StoGO
algorithms are only available if NLopt is compiled with C++ code
-enabled, and should be linked via -lnlopt_cxx instead of -lnlopt (via
+enabled, and should be linked via \-lnlopt_cxx instead of \-lnlopt (via
a C++ compiler, in order to link the C++ standard libraries).
.TP
.B NLOPT_LN_NELDERMEAD
with or without derivatives, and determines whether the objective
function needs gradients.)
.TP
-.B NLOPT_LD_MMA
+\fBNLOPT_LD_MMA\fR, \fBNLOPT_LD_CCSAQ\fR
Local (L) gradient-based (D) optimization using the method of moving
asymptotes (MMA), or rather a refined version of the algorithm as
published by Svanberg (2002). (NLopt uses an independent
-free-software/open-source implementation of Svanberg's algorithm.)
+free-software/open-source implementation of Svanberg's algorithm.) CCSAQ
+is a related algorithm from Svanberg's paper which uses a local quadratic
+approximation rather than the more-complicated MMA model; the two usually
+have similar convergence rates.
The
.B NLOPT_LD_MMA
-algorithm supports both bound-constrained and unconstrained optimization,
-and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
-as described above.
+algorithm supports both bound-constrained and unconstrained
+optimization, and also supports an arbitrary number (\fIm\fR) of
+nonlinear inequality (not equality) constraints as described above.
+.TP
+.B NLOPT_LD_SLSQP
+Local (L) gradient-based (D) optimization using sequential quadratic
+programming and BFGS updates, supporting arbitrary nonlinear
+inequality and equality constraints, based on the code by Dieter Kraft
+(1988) adapted for use by the SciPy project. Note that this algorithm
+uses dense-matrix methods requiring O(\fIn\fR^2) storage and
+O(\fIn\fR^3) time, making it less practical for problems involving
+more than a few thousand parameters.
.TP
.B NLOPT_LN_COBYLA
Local (L) derivative-free (N) optimization using the COBYLA algorithm
of Powell (Constrained Optimization BY Linear Approximations).
The
.B NLOPT_LN_COBYLA
-algorithm supports both bound-constrained and unconstrained optimization,
-and also supports an arbitrary number (\fIm\fR) of nonlinear constraints
-as described above.
+algorithm supports both bound-constrained and unconstrained
+optimization, and also supports an arbitrary number (\fIm\fR) of
+nonlinear inequality/equality constraints as described above.
.TP
.B NLOPT_LN_NEWUOA
Local (L) derivative-free (N) optimization using a variant of the
.I stopval
is found: stop minimizing when a value <= \fIstopval\fR is found, or
stop maximizing when a value >= \fIstopval\fR is found. (Setting
-\fIstopval\fR to -HUGE_VAL for minimizing or +HUGE_VAL for maximizing
+\fIstopval\fR to \-HUGE_VAL for minimizing or +HUGE_VAL for maximizing
disables this stopping criterion.)
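+.sp
+For example, to stop a minimization as soon as any point with a
+non-positive objective value is found, one might call:
+.sp
+.nf
+    /* opt: previously created nlopt_opt (assumed) */
+    nlopt_set_stopval(opt, 0.0);   /* stop once f(x) <= 0 is reached */
+.fi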
.TP
.BI "nlopt_result nlopt_set_ftol_rel(nlopt_opt " "opt" ,
.SH LOCAL OPTIMIZER
Some of the algorithms, especially MLSL and AUGLAG, use a different
optimization algorithm as a subroutine, typically for local
-optimization. By default, MLSL uses MMA or COBYLA for gradient-based
-or derivative-free searching, respectively. You can change
-the local search algorithm and its tolerances by calling:
+optimization. You can change the local search algorithm and its
+tolerances by calling:
.sp
.BI " nlopt_result nlopt_set_local_optimizer(nlopt_opt " "opt" ,
.br
.SH AUTHORS
Written by Steven G. Johnson.
.PP
-Copyright (c) 2007-2010 Massachusetts Institute of Technology.
+Copyright (c) 2007-2014 Massachusetts Institute of Technology.
.SH "SEE ALSO"
nlopt_minimize(3)