Conference Paper

Convergence of a Gradient Algorithm with Penalty for Training Two-layer Neural Networks

In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical findings, numerical experiments are conducted on a linearly separable problem and simulation results are presented.
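As a concrete illustration of the setup the abstract describes, the sketch below trains a two-layer sigmoid feedforward network by batch gradient descent on a squared-error function augmented with a squared penalty term λ‖W‖². The architecture, hyperparameters, and toy linearly separable data are assumptions for illustration, not the authors' exact experiment; the monitored quantities reflect the paper's two claims (weights stay bounded, the penalized error decreases).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable problem: label each point by the sign of x1 + x2.
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Illustrative hyperparameters (assumed, not from the paper).
n_hidden, eta, lam = 4, 0.5, 1e-4
W1 = rng.normal(scale=0.5, size=(2, n_hidden))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_hidden,))     # hidden -> output weights

def penalized_error(W1, W2):
    # E(W) = 0.5 * sum_j (o_j - y_j)^2 + lam * ||W||^2  (squared penalty term)
    out = sigmoid(sigmoid(X @ W1) @ W2)
    return 0.5 * np.sum((out - y) ** 2) + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

loss_init = penalized_error(W1, W2)
N = len(X)
for epoch in range(3000):
    H = sigmoid(X @ W1)                       # hidden-layer activations
    out = sigmoid(H @ W2)                     # network output
    d_out = (out - y) * out * (1.0 - out)     # backprop through output sigmoid
    g2 = H.T @ d_out / N + 2.0 * lam * W2     # gradient w.r.t. W2, incl. penalty
    d_hid = np.outer(d_out, W2) * H * (1.0 - H)
    g1 = X.T @ d_hid / N + 2.0 * lam * W1     # gradient w.r.t. W1, incl. penalty
    W2 -= eta * g2                            # gradient descent update
    W1 -= eta * g1

loss_final = penalized_error(W1, W2)
```

After training, the weights remain finite (bounded) and the penalized error is lower than at initialization, mirroring the boundedness and convergence results stated in the abstract.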

Hongmei Shao, Lijun Liu, Gaofeng Zheng

College of Math. and Comput. Science, China University of Petroleum, Dongying 257061, China; Department of Mathematics, Dalian Nationalities University, Dalian 116605, China; JANA Solutions, Inc., Shiba 1-15-13, Minato-ku, Tokyo 105-0014, Japan

International Conference

2009 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009)

Beijing

English

pp. 2024-2027

2009-08-08 (date the paper first appeared on the Wanfang platform; this is not necessarily the publication date)