Adaptive cubic overestimation methods for unconstrained optimization

Coralia Cartis, Nicholas I. M. Gould and Philippe L. Toint

Abstract

An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization, generalizing a method due to Nesterov & Polyak (Math. Programming 108, 2006, pp. 177-205), is proposed. At each iteration of Nesterov & Polyak's approach, the global minimizer of a local cubic overestimator of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is Lipschitz continuous and its Lipschitz constant is available. The twin requirements of global model optimality and the availability of Lipschitz constants somewhat limit the applicability of such an approach, particularly for large-scale problems. However, the promised powerful worst-case theoretical guarantees prompt us to investigate variants in which estimates of the required Lipschitz constant are refined and in which computationally viable approximations to the global model-minimizer are sought. We show that the excellent global and local convergence properties and worst-case iteration complexity bounds obtained by Nesterov & Polyak are retained, and sometimes extended to a wider class of problems, by our ACO approach. Numerical experiments with small-scale test problems from the CUTEr set show superior performance of the ACO algorithm when compared to a trust-region implementation.
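
As a brief illustrative sketch of the idea summarized above (the notation here is ours; the report itself gives the precise algorithm): if the Hessian of the objective is Lipschitz continuous with constant L, Taylor's theorem yields the cubic overestimation bound

\[
  f(x_k + s) \;\le\; f(x_k) + \nabla f(x_k)^T s
  + \tfrac{1}{2}\, s^T \nabla^2 f(x_k)\, s
  + \tfrac{L}{6}\, \|s\|^3 ,
\]

so minimizing the right-hand side guarantees descent. An adaptive variant of this construction avoids the unknown constant L by working with a cubic-regularized model of the form

\[
  m_k(s) \;=\; f(x_k) + \nabla f(x_k)^T s
  + \tfrac{1}{2}\, s^T B_k\, s
  + \tfrac{\sigma_k}{3}\, \|s\|^3 ,
\]

where B_k is a (possibly approximate) symmetric Hessian and the regularization weight \sigma_k > 0 is increased or decreased from iteration to iteration, much as a trust-region radius is, and the model need only be minimized approximately.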

Institution
Oxford University Computing Laboratory
Month
October
Number
NA-07/20
Year
2007