From 6b08d2cc021f1e724f76cbf895374d1668b7ad0f Mon Sep 17 00:00:00 2001
From: Julien Schueller
Date: Thu, 6 Sep 2018 08:57:18 +0200
Subject: [PATCH] Fix math layout

---
 doc/docs/NLopt_C-plus-plus_Reference.md | 2 +-
 doc/docs/NLopt_Python_Reference.md      | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/docs/NLopt_C-plus-plus_Reference.md b/doc/docs/NLopt_C-plus-plus_Reference.md
index 463deb3..3665707 100644
--- a/doc/docs/NLopt_C-plus-plus_Reference.md
+++ b/doc/docs/NLopt_C-plus-plus_Reference.md
@@ -84,7 +84,7 @@ depending on whether one wishes to minimize or maximize the objective function `
 
 The return value should be the value of the function at the point `x`, where `x` is a vector of length `n` of the optimization parameters (the same as the dimension passed to the constructor).
 
-In addition, if the argument `grad` is not empty $$i.e. `!grad.empty()` or equivalently `grad.size()>0`$$, then `grad` is a vector of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative $\partial f / \partial x_i$, for $0 \leq i < n$, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
+In addition, if the argument `grad` is not empty, i.e. `!grad.empty()` or equivalently `grad.size()>0`, then `grad` is a vector of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative $\partial f / \partial x_i$, for $0 \leq i < n$, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
 
 The `f_data` argument is the same as the one passed to `nlopt_set_min_objective` or `nlopt_set_max_objective`, and may be used to pass any additional data through to the function. (That is, it may be a pointer to some caller-defined data structure/type containing information your function needs, which you convert from `void*` by a typecast.) You can just pass `NULL` for `f_data` if you don't want to pass any additional information. Note that the `nlopt::opt` object does *not* make a copy of whatever is pointed to by your `f_data` pointer; you must not deallocate its contents until *after* you are done calling `nlopt::opt::optimize`. (There is a low-level way to make the nlopt::opt object "take ownership" of the f_data pointer, which is mainly used for wrapping other languages.)
 
diff --git a/doc/docs/NLopt_Python_Reference.md b/doc/docs/NLopt_Python_Reference.md
index 5bdc8a7..ce019f9 100644
--- a/doc/docs/NLopt_Python_Reference.md
+++ b/doc/docs/NLopt_Python_Reference.md
@@ -73,7 +73,7 @@ def f(x, grad):
 
 The return value should be the value of the function at the point `x`, where `x` is a NumPy array of length `n` of the optimization parameters (the same as the dimension passed to the constructor).
 
-In addition, if the argument `grad` is not empty $$i.e. `grad.size>0`$$, then `grad` is a NumPy array of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative $\partial f / \partial x_i$, for $0 \leq i < n$, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
+In addition, if the argument `grad` is not empty, i.e. `grad.size>0`, then `grad` is a NumPy array of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative $\partial f / \partial x_i$, for $0 \leq i < n$, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
 
 Note that `grad` must be modified *in-place* by your function `f`. Generally, this means using indexing operations `grad[...]` `=` `...` to overwrite the contents of `grad`, as described below.
 
@@ -170,7 +170,7 @@ def c(result, x, grad):
 
 `result` is a NumPy array whose length equals the dimensionality *m* of the constraint (same as the length of `tol` above), which upon return should be set *in-place* ([see above](#Assigning_results_in-place.md)) to the constraint results at the point `x` (a NumPy array whose length *n* is the same as the dimension passed to the constructor). Any return value of the function is ignored.
 
-In addition, if the argument `grad` is not empty $$i.e. `grad.size>0`$$, then `grad` is a 2d NumPy array of size *m*×*n* which should (upon return) be set in-place ([see above](#Assigning_results_in-place.md)) to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i,j]` should upon return contain the partial derivative $\partial c_i / \partial x_j$ if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
+In addition, if the argument `grad` is not empty, i.e. `grad.size>0`, then `grad` is a 2d NumPy array of size *m*×*n* which should (upon return) be set in-place ([see above](#Assigning_results_in-place.md)) to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i,j]` should upon return contain the partial derivative $\partial c_i / \partial x_j$ if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
 
 An inequality constraint corresponds to $c_i \le 0$ for $0 \le i < m$, and an equality constraint corresponds to $c_i = 0$, in both cases with tolerance `tol[i]` for purposes of termination criteria.
-- 
2.30.2
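
Note (not part of the patch): a minimal sketch of the objective-function convention that the hunks above document, in Python: return the function value and fill `grad` in-place only when `grad.size > 0`. The quadratic objective, the choice of `LD_MMA`, and all numeric values below are illustrative assumptions, not taken from the patch.

```python
# Sketch only: the callback signature and the grad.size > 0 check follow the
# documented convention; the toy objective and LD_MMA are assumptions.
import numpy as np
import nlopt

def f(x, grad):
    if grad.size > 0:
        # grad[i] must be written in-place to d f / d x_i
        grad[:] = 2.0 * x
    return float(np.dot(x, x))

opt = nlopt.opt(nlopt.LD_MMA, 2)   # n = 2 optimization parameters
opt.set_min_objective(f)
opt.set_lower_bounds([-1.0, -1.0])
opt.set_upper_bounds([1.0, 1.0])
opt.set_xtol_rel(1e-6)
xopt = opt.optimize(np.array([0.5, 0.8]))
print(xopt, opt.last_optimum_value())
```

Reassigning `grad` to a new array (e.g. `grad = 2.0 * x`) would not work: only in-place writes such as `grad[:] = ...` reach NLopt, as the documentation being patched explains.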
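Likewise, a hedged sketch of the vector-valued constraint convention from the `@@ -170,7 +170,7 @@` hunk: `result` (length *m*) and, when non-empty, the *m*×*n* array `grad` are both written in-place, and the constraint is registered with `add_inequality_mconstraint` whose `tol` argument has length *m*. The particular constraints and tolerances are illustrative assumptions.

```python
# Sketch only: result (length m) and grad (m-by-n, when non-empty) are filled
# in-place; the objective, constraints, and tolerances are assumptions.
import numpy as np
import nlopt

def f(x, grad):
    if grad.size > 0:
        grad[:] = 2.0 * x          # d f / d x_i
    return float(np.dot(x, x))

def c(result, x, grad):
    # m = 2 inequality constraints c_i(x) <= 0 in n = 2 variables:
    #   c_0(x) = 0.5  - x[0]   (forces x[0] >= 0.5)
    #   c_1(x) = 0.25 - x[1]   (forces x[1] >= 0.25)
    result[0] = 0.5 - x[0]
    result[1] = 0.25 - x[1]
    if grad.size > 0:
        # grad[i, j] = d c_i / d x_j, written into the 2d m-by-n array
        grad[0, :] = [-1.0, 0.0]
        grad[1, :] = [0.0, -1.0]

opt = nlopt.opt(nlopt.LD_MMA, 2)
opt.set_min_objective(f)
opt.add_inequality_mconstraint(c, np.array([1e-8, 1e-8]))  # tol has length m
opt.set_xtol_rel(1e-6)
xopt = opt.optimize(np.array([1.0, 1.0]))   # roughly [0.5, 0.25] at the optimum
```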