Lines Matching refs:Hessian

44 the code for `FormHessian`, which evaluates the Hessian matrix for
70 gradient, and perhaps the Hessian matrix. The user then invokes TAO to
205 routines that evaluate the gradient vector and Hessian matrix.
328 Hessian evaluation routines. All these routines should return the
334 #### Hessian Evaluation
336 Some optimization routines also require a Hessian matrix from the user.
337 The routine that evaluates the Hessian should have the form
344 second argument is the point at which the Hessian should be evaluated.
345 The third argument is the Hessian matrix, and the sixth argument is a
346 user-defined context. Since the Hessian matrix is usually used in
350 the same as the Hessian matrix. The fifth argument is the flag used to
351 set the Hessian matrix and linear solver in the routine
354 One can set the Hessian evaluation routine by calling the
361 third arguments are, respectively, the Mat object where the Hessian will
364 evaluates the Hessian, and the fifth argument is a pointer to a
370 the Hessian of an objective function. These approximations will slow the
388 The efficiency of the finite-difference Hessian can be improved if the
391 finite-difference approximation by setting the Hessian evaluation
402 correctness of the gradient and/or Hessian evaluation routines. This
410 Hessian evaluation routine need not be conventional matrices; instead,
818 cases, this matrix will be the same as the Hessian matrix. The fifth
862 3. Function, gradient, and Hessian evaluations – Newton-Krylov methods:
867 and the accuracy required in the solution. If a Hessian evaluation
869 trust-region methods will likely perform best. When a Hessian evaluation
894 to obtain a step $d_k$, where $H_k$ is the Hessian of the
896 objective function at $x_k$. For problems where the Hessian matrix
1115 the absolute value of the diagonal of the Hessian matrix, a
1116 limited-memory BFGS approximation to the Hessian matrix, or one of the
1121 of the inverse Hessian. See the PETSc manual for further information on
1149 the minimum eigenvalue $\lambda_1$ of the Hessian matrix is
1185 Hessian matrix will be positive-semidefinite; the perturbation will
1268 to obtain a direction $d_k$, where $H_k$ is the Hessian of
1460 use no preconditioner, the absolute value of the diagonal of the Hessian
1461 matrix, a limited-memory BFGS approximation to the Hessian matrix, or
1530 approximation to the Hessian matrix from a limited number of previous
1538 where $H_k$ is the Hessian approximation obtained by using the
1549 The current iterate and Hessian approximation are updated, and the
1557 type of Hessian approximation used, the number of vectors stored for the
1564 Hessian matrix $H_{0,k}$ through the interface function
1571 the inversion of the user-provided initial Hessian.
1620 $f(x)$. This algorithm does not require any gradient or Hessian
1672 function, gradient, and possibly Hessian information.
1740 diagonal matrix $D_k$ is an approximation of the Hessian inverse
1749 BNK algorithms invert the reduced Hessian using a Krylov iterative
1760 preconditioners are also used as the approximate inverse-Hessian in the
1761 active-set estimation. If neither is available, or if the Hessian
1772 of Newton iterations, in practice it simply trades off the Hessian
1775 problems where the Hessian evaluation is disproportionately more
1784 For problems with indefinite Hessian matrices, the step direction is
1793 trust-region conjugate gradient method is used for the Hessian
1824 calculation with a direct inverse application of the approximate Hessian
1827 positive-definite Hessian approximation. This algorithm is available via
1835 Hessian with a quasi-Newton approximation. The matrix-free forward
1842 guarantee positive-definiteness. The BNLS framework with Hessian
1844 successfully compensate for the Hessian approximation becoming
1917 One can also supply one's own preconditioner, serving as a Hessian
2009 Any other combination of routines is currently not supported. Hessian
2013 certain cases where the augmented Lagrangian’s Hessian may become nearly
2133 Here, $H$ is the Hessian matrix of the KKT system. For
2134 interior-point methods such as PDIPM, the Hessian matrix tends to be
2295 approximation to the reduced Hessian matrix, a positive-definite matrix,
2318 Because the Hessian approximation is positive definite and we know its
2366 limited-memory quasi-Newton approximation to the reduced Hessian matrix
2398 words, the Gauss-Newton method approximates the Hessian of the objective
2440 for evaluating the Hessian of the regularization term.
2802 where the gradient and the Hessian of the objective are both constant.
2809 Therefore, it evaluates the function, gradient, and Hessian only once.
2823 gradient, and Hessian only once. This method also requires the solution
2857 iteration of the TRON algorithm requires function, gradient, and Hessian
2884 optimization. It uses projected gradients to approximate the Hessian,
2885 eliminating the need for Hessian evaluations. The method can be set by
3013 quasi-Newton Hessian approximation from the previous `TaoSolve()`
3172 to evaluate the Hessian matrix or evaluate constraints. TAO may obtain