typedef struct {
nlopt_func f;
void *f_data;
+ const double *lb, *ub;
} nlopt_data;
#include "subplex.h"
static double f_subplex(int n, const double *x, void *data_)
{
+ int i;
nlopt_data *data = (nlopt_data *) data_;
+
+ /* subplex does not support bound constraints, but it supports
+ discontinuous objectives so we can just return Inf for invalid x */
+ for (i = 0; i < n; ++i)
+ if (x[i] < data->lb[i] || x[i] > data->ub[i])
+#ifdef INFINITY
+ return INFINITY;
+#else
+ return HUGE_VAL;
+#endif
+
return data->f(n, x, NULL, data->f_data);
}
double xtol_rel, const double *xtol_abs,
int maxeval, double maxtime)
{
+ int i;
nlopt_data d;
d.f = f;
d.f_data = f_data;
+ d.lb = lb;
+ d.ub = ub;
+
+ /* check bound constraints */
+ for (i = 0; i < n; ++i)
+ if (lb[i] > ub[i] || x[i] < lb[i] || x[i] > ub[i])
+ return NLOPT_INVALID_ARGS;
switch (algorithm) {
case NLOPT_GLOBAL_DIRECT:
break;
case NLOPT_LOCAL_SUBPLEX: {
- int iret, i;
+ int iret;
double *scale = (double *) malloc(sizeof(double) * n);
if (!scale) return NLOPT_OUT_OF_MEMORY;
- for (i = 0; i < n; ++i)
- scale[i] = fabs(ub[i] - lb[i]);
+ for (i = 0; i < n; ++i) {
+ if (!my_isinf(ub[i]) && !my_isinf(lb[i]))
+ scale[i] = (ub[i] - lb[i]) * 0.01;
+ else if (!my_isinf(lb[i]) && x[i] > lb[i])
+ scale[i] = (x[i] - lb[i]) * 0.01;
+ else if (!my_isinf(ub[i]) && x[i] < ub[i])
+ scale[i] = (ub[i] - x[i]) * 0.01;
+ else
+	       scale[i] = 0.01 * fabs(x[i]) + 0.0001; /* fabs: keep step positive for x[i] < 0 */
+ }
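The step-size heuristic above can be exercised in isolation. The sketch below restates it as a standalone function (the name initial_scale is purely illustrative, and the standard isinf from math.h stands in for my_isinf; fabs is used in the unbounded case so the step stays positive for negative x):

```c
#include <math.h>

/* Standalone restatement of the initial-step heuristic above: 1% of the
   bounded extent when a finite bound is available, else a small step
   proportional to |x|. */
static double initial_scale(double lb, double ub, double x)
{
    if (!isinf(ub) && !isinf(lb))
        return (ub - lb) * 0.01;        /* both bounds finite */
    else if (!isinf(lb) && x > lb)
        return (x - lb) * 0.01;         /* only the lower bound is finite */
    else if (!isinf(ub) && x < ub)
        return (ub - x) * 0.01;         /* only the upper bound is finite */
    else
        return 0.01 * fabs(x) + 0.0001; /* unbounded: step relative to |x| */
}
```

For example, bounds [0, 10] give a step of 0.1 regardless of x, while a totally unconstrained variable at x = -3 gets a step of 0.0301.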
iret = subplex(f_subplex, fmin, x, n, &d, xtol_rel, maxeval,
fmin_max, !my_isinf(fmin_max), scale);
free(scale);
}
case NLOPT_LOCAL_LBFGS: {
- int iret, i, *nbd = (int *) malloc(sizeof(int) * n);
+ int iret, *nbd = (int *) malloc(sizeof(int) * n);
if (!nbd) return NLOPT_OUT_OF_MEMORY;
for (i = 0; i < n; ++i) {
int linf = my_isinf(lb[i]) && lb[i] < 0;
(That is, it may be a pointer to some caller-defined data
structure/type containing information your function needs, which you
convert from void* by a typecast.)
+.sp
+.SH CONSTRAINTS
+Most of the algorithms in NLopt are designed for minimization of functions
+with simple bound constraints on the inputs. That is, the components
+x[i] of the input vector are constrained to lie in a hyperrectangle
+lb[i] <= x[i] <= ub[i] for 0 <= i < n, where
+.I lb
+and
+.I ub
+are the two arrays passed to
+.BR nlopt_minimize ().
+.sp
+However, a few of the algorithms support partially or totally
+unconstrained optimization, as noted below, where a (totally or
+partially) unconstrained design variable is indicated by a lower bound
+equal to -Inf and/or an upper bound equal to +Inf. Here, Inf is the
+IEEE-754 floating-point infinity, which (in ANSI C99) is represented by
+the macro INFINITY in math.h. Alternatively, with pre-C99 compilers
+you may use the macro HUGE_VAL (also in math.h).
+.sp
+With some of the algorithms, you may also employ arbitrary nonlinear
+constraints on the input variables. This is indicated by returning NaN
+(not a number) or Inf (infinity) from your objective function whenever
+a forbidden point is requested. See above for how to specify infinity;
+NaN is specified by the macro NAN in math.h (for ANSI C99).
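As a concrete illustration of this convention, the hypothetical objective below (not part of NLopt; the name constrained_f and the quadratic are made up for this sketch) imposes the nonlinear constraint x[0]^2 + x[1]^2 <= 1 by returning +Inf whenever a forbidden point is requested:

```c
#include <math.h>

/* Hypothetical objective: minimize (x0 - 0.3)^2 + x1^2 subject to the
   nonlinear constraint x0^2 + x1^2 <= 1, enforced by returning +Inf
   at forbidden points (the convention described above). */
static double constrained_f(int n, const double *x, double *grad, void *data)
{
    (void) n; (void) grad; (void) data;  /* unused in this sketch */
    if (x[0] * x[0] + x[1] * x[1] > 1.0) /* forbidden point requested */
#ifdef INFINITY
        return INFINITY;
#else
        return HUGE_VAL;
#endif
    return (x[0] - 0.3) * (x[0] - 0.3) + x[1] * x[1];
}
```

Any algorithm noted below as supporting arbitrary nonlinear constraints will simply treat such points as worse than any feasible point.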
.SH ALGORITHMS
The
.I algorithm
.TP
.B NLOPT_GLOBAL_DIRECT
Perform a global derivative-free optimization using the DIRECT search
-algorithm by Jones et al., based on the free implementation by Gablonsky
-et al.
+algorithm by Jones et al. See direct/README. Supports arbitrary
+nonlinear constraints as described above.
+.TP
+.B NLOPT_GLOBAL_DIRECT_L
+Perform a global derivative-free optimization using the DIRECT-L
+search algorithm by Gablonsky et al., a modified version of the DIRECT
+algorithm that is more suited to functions with few local minima. See
+direct/README. Supports arbitrary nonlinear constraints as described
+above.
+.TP
+.B NLOPT_LOCAL_SUBPLEX
+Perform a local derivative-free optimization, starting at
+.IR x ,
+using the Subplex algorithm of T. Rowan, which is an improved
+variant of the Nelder-Mead simplex algorithm. (Like Nelder-Mead, Subplex
+often works well in practice, even for discontinuous objectives, but
+there is no rigorous guarantee that it will converge.) Subplex is
+best for unconstrained optimization, but constrained optimization also
+works (both for simple bound constraints via
+.I lb
+and
+.I ub
+as well as nonlinear constraints as described above).
.SH RETURN VALUE
The value returned is one of the following enumerated constants.
(Positive return values indicate successful termination, while negative
up a bit by Steven G. Johnson (stevenj@alum.mit.edu) for use with libctl.
Unfortunately, neither Netlib nor the Subplex source code contain any
-indication of the copyright/license status of the code. Arguably, anything
-posted to Netlib absent such information was intended to be free of
-restrictions. However, it would be nice to have an explicit license!
+explicit indication of the copyright/license status of the code, and I
+was unable to contact the author. However, it is listed as "public
+domain" by the Starlink astronomical software review
+(http://star-www.rl.ac.uk/star/docs/sun194.htx/node151.html).
+
+The original README file describes the algorithm as:
+
+ Subplex is a subspace-searching simplex method for the
+ unconstrained optimization of general multivariate functions.
+ Like the Nelder-Mead simplex method it generalizes, the subplex
+ method is well suited for optimizing noisy objective functions.
+ The number of function evaluations required for convergence
+ typically increases only linearly with the problem size, so for
+ most applications the subplex method is much more efficient than
+ the simplex method.