Superlinear First Order Conjugate Gradient Learning Algorithm
This article introduces a first order superlinear line-search conjugate gradient learning algorithm for training neural networks. The algorithm is built on solid theoretical grounds rather than heuristics. It automatically adapts the learning rate and the momentum term at each iteration of training. This flexibility of the learning rate and momentum term substantially eliminates oscillations and increases convergence speed; furthermore, the algorithm incorporates a mechanism for escaping local minima that do not belong to the solution set. It has linear computational complexity and memory requirements. The performance of the proposed algorithm is extensively evaluated on five data sets and compared to relevant standard first order optimization techniques.
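The abstract does not reproduce the algorithm itself, but the general scheme it describes — a conjugate gradient weight update whose learning rate comes from a line search and whose momentum coefficient is recomputed every iteration — can be sketched as follows. This is a minimal illustrative sketch, not the authors' algorithm: it assumes a Polak-Ribière momentum coefficient and a backtracking (Armijo) line search, and the function names `cg_train`, `f`, and `grad` are hypothetical.

```python
import numpy as np

def cg_train(f, grad, w, iters=100, tol=1e-10):
    """Nonlinear conjugate gradient sketch (NOT the paper's method).

    The backtracking line search adapts the learning rate at every
    iteration; the Polak-Ribiere coefficient beta plays the role of
    an adaptive momentum term on the previous search direction.
    """
    g = grad(w)
    d = -g  # start with the steepest-descent direction
    for _ in range(iters):
        # Safeguard: restart with steepest descent if d is not a
        # descent direction (can happen with inexact line searches).
        if np.dot(g, d) >= 0:
            d = -g
        # Backtracking (Armijo) line search: halve the step until a
        # sufficient-decrease condition holds.
        lr, fw = 1.0, f(w)
        while f(w + lr * d) > fw + 1e-4 * lr * np.dot(g, d):
            lr *= 0.5
        w = w + lr * d
        g_new = grad(w)
        if np.linalg.norm(g_new) < tol:
            break
        # Polak-Ribiere "momentum" coefficient, clipped at 0 (restart).
        beta = max(0.0, np.dot(g_new, g_new - g) / np.dot(g, g))
        d = -g_new + beta * d
        g = g_new
    return w

# Toy example: minimize the convex quadratic f(w) = 0.5 w'Aw - b'w,
# whose unique minimizer solves Aw = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
w_star = cg_train(f, grad, np.zeros(2))
```

On this quadratic the iterates converge to `np.linalg.solve(A, b)`; for neural network training, `f` and `grad` would instead be the training loss and its gradient with respect to the weight vector.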
Peter GÉCZY, Shiro USUI
Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama; Department of Information and Computer Sciences, Toyohashi University of Technology, Hibarigaoka, Toyohashi
Type: International conference
Conference: 8th International Conference on Neural Information Processing (ICONIP 2001)
Location: Shanghai
Language: English
Pages: 91-95
Online date: 2001-11-14 (date the paper was first posted on the Wanfang platform; not the paper's publication date)