Conference session

A New Memory Gradient Method for Unconstrained Optimization

This paper presents a new memory gradient method for unconstrained optimization problems. The method uses information from the current and several previous iterations to generate the next iterate and introduces additional free parameters, which makes it suitable for solving large-scale unconstrained optimization problems. Global convergence is proved under mild conditions. Numerical experiments show that the algorithm is efficient in many situations.
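For illustration only, the following Python sketch shows a generic multi-step memory gradient iteration of the kind described above: the search direction combines the current negative gradient with the m most recent directions, and the step size comes from an Armijo backtracking line search. The function name memory_gradient, the weight rule beta, and the choice m=3 are assumptions for this sketch, not the paper's specific parameter choices.

import numpy as np

def memory_gradient(f, grad, x0, m=3, max_iter=1000, tol=1e-6):
    """Generic multi-step memory gradient sketch (illustrative, not the paper's scheme)."""
    x = np.asarray(x0, dtype=float)
    history = []                              # m most recent search directions
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = -g                                # start from steepest descent
        for i, d_prev in enumerate(history, start=1):
            # Illustrative weights, scaled so that d remains a descent direction.
            beta = (0.5 ** i) * gnorm / (np.linalg.norm(d_prev) + 1e-12)
            d = d + beta * d_prev
        # Armijo backtracking line search for sufficient decrease.
        alpha, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        history = [d] + history[:m - 1]       # keep only the m most recent directions
    return x

# Example: minimize a simple quadratic, starting from (3, -2).
x_star = memory_gradient(lambda x: x @ x, lambda x: 2 * x, [3.0, -2.0])

Because each stored direction is weighted by at most (0.5)^i times the gradient norm, the combined direction stays a descent direction, so the backtracking line search terminates; this mirrors the general idea of reusing past iteration information while retaining extra parameter freedom.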

Unconstrained optimization; Memory gradient method; Global convergence

Zhiguang Zhang

Department of Mathematics, Dezhou University, Dezhou, Shandong 253023, P.R. China

International conference

The First World Congress on Global Optimization in Engineering & Science (WCGO2009)

Changsha

English

542-547

2009-06-01 (date first posted on the Wanfang platform; not necessarily the paper's publication date)