AUTHORS: Issam A. R. Moghrabi
We develop a framework for constructing multi-step quasi-Newton methods that utilize values of the objective function. The model is built via interpolants of the m+1 most recent iterates and gradient evaluations, and possesses two free parameters which introduce an additional degree of flexibility. This permits the interpolating polynomials to exploit function values that are readily available at each iteration. A new algorithm is derived in which function values are incorporated into the update of the inverse Hessian approximation at each iteration, in an attempt to accelerate convergence. The idea of incorporating function values is not new within the context of quasi-Newton methods, but this paper presents a new approach to such algorithms. Several earlier works have shown that including function-value data in the update of the Hessian approximation numerically improves the convergence of Secant-like methods. The numerical performance of the new method is assessed, with promising results.
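To make the multi-step idea concrete, the sketch below shows a two-step (m = 2) inverse-BFGS-style update in which the usual secant pair (s_k, y_k) is replaced by combined vectors built from the two most recent steps. This is a minimal illustration, not the paper's method: the weighting phi (derived here from a hypothetical spacing ratio `delta`) stands in for the free, function-value-based parameter choices that the paper develops, and the specific formula for phi is an assumption for demonstration only.

```python
import numpy as np

def two_step_bfgs_inverse_update(H, s_k, y_k, s_km1, y_km1, delta):
    """Inverse-BFGS update using a two-step (m = 2) modified secant pair.

    Illustrative sketch only. It follows the general multi-step pattern
    r_k = s_k - phi * s_{k-1},  w_k = y_k - phi * y_{k-1},
    with the weight phi built from a hypothetical spacing ratio `delta`;
    the paper's actual function-value-based parameter choices are not
    reproduced here.
    """
    phi = delta**2 / (1.0 + 2.0 * delta)   # assumed weighting, for illustration
    r = s_k - phi * s_km1                  # combined step vector
    w = y_k - phi * y_km1                  # combined gradient-difference vector
    rho = 1.0 / (w @ r)                    # requires the curvature condition w^T r > 0
    I = np.eye(H.shape[0])
    # Standard inverse-BFGS form with (r, w) in place of the one-step (s, y)
    return (I - rho * np.outer(r, w)) @ H @ (I - rho * np.outer(w, r)) \
           + rho * np.outer(r, r)
```

In practice such methods guard the update: if the curvature condition w^T r > 0 fails for the combined pair, the update falls back to the ordinary single-step secant pair (s_k, y_k), which corresponds to taking phi = 0 above.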
KEYWORDS: Unconstrained optimization, quasi-Newton methods, multi-step methods, function value algorithms, nonlinear programming, Newton method