optim {base}    R Documentation

General-purpose Optimization
General-purpose optimization based on Nelder–Mead, quasi-Newton and conjugate-gradient algorithms. It includes an option for box-constrained optimization.
optim(par, fn, gr = NULL,
      method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B"),
      lower = -Inf, upper = +Inf,
      control = list(), hessian = FALSE, ...)
par: Initial values for the parameters to be optimized over.

fn: A function to be minimized, with first argument the vector of parameters over which minimization is to take place. It should return a scalar result.

gr: A function to return the gradient. Not needed for the "Nelder-Mead" method. If it is NULL and it is needed, a finite-difference approximation will be used. It is guaranteed that gr will be called immediately after a call to fn at the same parameter values.

method: The method to be used. See Details.

lower, upper: Bounds on the variables for the "L-BFGS-B" method.

control: A list of control parameters. See Details.

hessian: Logical. Should a numerically differentiated Hessian matrix be returned?

...: Further arguments to be passed to fn and gr (a brief sketch follows this list).
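A minimal sketch of supplying data to fn through the ... argument (the function wss and the vectors y and w below are illustrative only, not part of this page):

## Extra arguments given to optim() are passed on to fn (and gr):
## here the data vectors 'y' and 'w' reach fn via '...'.
wss <- function(par, y, w) sum(w * (y - par)^2)   # weighted sum of squares
y <- c(1.1, 1.9, 3.2, 3.9)
w <- c(1, 1, 2, 2)
optim(0, wss, method = "BFGS", y = y, w = w)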
By default this function performs minimization, but it will maximize
if control$fnscale
is negative.
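For example (a minimal sketch; the quadratic f below is illustrative only), setting fnscale = -1 turns a minimization into a maximization:

## Maximize f(x) = -(x - 2)^2 + 4 by minimizing f(x)/fnscale with fnscale = -1
f <- function(x) -(x - 2)^2 + 4
optim(0, f, method = "BFGS", control = list(fnscale = -1))
## $par is close to 2 and $value close to 4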
The default method is an implementation of that of Nelder and Mead (1965), which uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
The second method is a quasi-Newton method (also known as a variable metric algorithm), specifically that published simultaneously in 1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function values and gradients to build up a picture of the surface to be optimized.
The third method is a conjugate-gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak–Ribière or Beale–Sorenson updates). Conjugate gradient methods will generally be more fragile than the BFGS method, but as they do not store a matrix they may be successful in much larger optimization problems.
The fourth method is that of Byrd et al. (1995), which allows box constraints, that is, each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning.
Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn (except for method "L-BFGS-B", where the values should always be finite).
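A minimal sketch of an objective returning Inf where it is undefined (the function f below is illustrative only; the starting value itself is feasible):

## The objective is undefined for x[1] <= 0, so it returns Inf there;
## the default Nelder-Mead method can still search from a feasible start.
f <- function(x) {
  if (x[1] <= 0) return(Inf)
  (x[1] - log(x[1])) + (x[2] - 1)^2
}
optim(c(2, 0), f)   # minimum near c(1, 1)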
optim
can be used recursively, and for a single parameter
as well as many.
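As a rough sketch of recursive use (the nested Rosenbrock profile below is illustrative, not one of the original examples), fn itself may call optim:

## Outer optimization over x1; for each x1 the inner optim() call
## minimizes the Rosenbrock function over x2 (profiling out x2).
inner <- function(x1)
  optim(1, function(x2) 100 * (x2 - x1^2)^2 + (1 - x1)^2,
        method = "BFGS")$value
optim(-1.2, inner, method = "BFGS")   # outer minimum near x1 = 1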
The control argument is a list that can supply any of the following components; a short usage sketch follows the list.

trace: If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information.

fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.

parscale: A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value.

ndeps: A vector of step sizes for the finite-difference approximation to the gradient, on the par/parscale scale. Defaults to 1e-3.

maxit: The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead".

abstol: The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.

reltol: Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step.

alpha, beta, gamma: Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).

REPORT: The frequency of reports for the "BFGS" method if control$trace is positive. Defaults to every 10 iterations.

type: For the conjugate-gradients method: takes value 1 for the Fletcher–Reeves update, 2 for Polak–Ribière and 3 for Beale–Sorenson.

lmm: An integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.

factr: Controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.

pgtol: Helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.
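A minimal sketch of passing several control components (the quadratic f below is illustrative only):

## Raise the iteration limit, tighten the relative tolerance and
## request trace output every 5 BFGS iterations.
f <- function(x) sum((x - c(1, 2, 3))^2)
optim(rep(0, 3), f, method = "BFGS",
      control = list(maxit = 500, reltol = 1e-10, trace = 1, REPORT = 5))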
optim returns a list with components:

par: The best set of parameters found.

value: The value of fn corresponding to par.

counts: A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.

convergence: An integer code. 0 indicates successful convergence. Error codes are:
    1 indicates that the iteration limit maxit had been reached.
    10 indicates degeneracy of the Nelder–Mead simplex.
    51 indicates a warning from the "L-BFGS-B" method; see component message for further details.
    52 indicates an error from the "L-BFGS-B" method; see component message for further details.

message: A character string giving any additional information returned by the optimizer, or NULL.

hessian: Only if argument hessian is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active.
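A minimal sketch of inspecting the returned list (the quadratic below is illustrative only):

## Fit a simple quadratic and look at the returned components.
f <- function(x) sum((x - c(0.5, -0.5))^2)
res <- optim(c(0, 0), f, method = "BFGS", hessian = TRUE)
res$par          # best parameters found
res$value        # fn at res$par
res$convergence  # 0 indicates successful convergence
res$counts       # numbers of calls to fn and gr
res$hessian      # numerically differentiated Hessian, here about 2 * diag(2)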
The code for methods "Nelder-Mead", "BFGS" and "CG" was based originally on Pascal code in Nash (1990) that was translated by p2c and then hand-optimized. Dr Nash has agreed that the code can be made freely available.
The code for method "L-BFGS-B"
is based on Fortran code by
Zhu, Byrd, Lu-Chen and Nocedal obtained from Netlib.
Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995) A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing 16, 1190–1208.
Fletcher, R. and Reeves, C. M. (1964) Function minimization by conjugate gradients. Computer Journal 7, 148–154.
Nash, J. C. (1990) Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation. Adam Hilger.
Nelder, J. A. and Mead, R. (1965) A simplex algorithm for function minimization. Computer Journal 7, 308–313.
## Rosenbrock Banana function
fr <- function(x) {
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
      200 * (x2 - x1 * x1))
}
optim(c(-1.2, 1), fr)
optim(c(-1.2, 1), fr, grr, method = "BFGS")
optim(c(-1.2, 1), fr, NULL, method = "BFGS", hessian = TRUE)
optim(c(-1.2, 1), fr, grr, method = "CG")
optim(c(-1.2, 1), fr, grr, method = "CG", control = list(type = 2))
optim(c(-1.2, 1), fr, grr, method = "L-BFGS-B")

flb <- function(x)
    sum(c(1, rep(4, length(x) - 1)) * (x - c(1, x[-length(x)])^2)^2)
optim(rep(3, 25), flb, NULL, "L-BFGS-B",
      lower = rep(2, 25), upper = rep(4, 25))