Conference Topic

Revisit to Constrained Learning Algorithm

This paper revisits the constrained learning (CL) algorithm proposed by Perantonis et al., an efficient back-propagation (BP) algorithm formed by imposing constraint conditions (the a priori information implicit in the problem) on the conventional BP algorithm. An analysis of the CL algorithm shows that its two learning parameters, a predetermined vector δQ corresponding to the constraint conditions and the bound (δP)² on the sum of squares of the individual weight changes at each iteration, must be selected suitably; otherwise the algorithm fails to converge within a limited time, or even diverges. This paper gives a feasible adaptive method for selecting these two learning parameters. In addition, another possible selection method for δQ is explored. Finally, computer simulation results on several examples verifying the performance of our approach are reported.
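The record reproduces only the abstract, not the CL update equations. As a rough illustration of why δQ and (δP)² must be mutually compatible, the Python sketch below implements a generic constrained-gradient step of the same flavor: each update maximizes the error decrease subject to a prescribed constraint change δQ and a prescribed squared step length (δP)². The least-squares objective, the constraint Φ(w) = Σw − 1, and the adaptive rules for δQ and (δP)² are illustrative assumptions, not the formulation of Perantonis et al. or of this paper; note that a real-valued solution exists only when (δP)²·||∇Φ||² > δQ², which is exactly the kind of compatibility condition between the two parameters that the abstract says must be respected.

# Minimal sketch (not the authors' exact update). One CL-style step picks the
# weight change dw that maximizes the error decrease -grad_E . dw subject to
#   grad_Phi . dw  = delta_Q      (move the constraint Phi toward satisfaction)
#   ||dw||^2       = delta_P_sq   (bounded squared step length (dP)^2)
# and is obtained in closed form with Lagrange multipliers.
import numpy as np

def cl_step(w, grad_E, grad_Phi, delta_Q, delta_P_sq):
    g, f = grad_E, grad_Phi
    a, F, G = f @ g, f @ f, g @ g
    denom = delta_P_sq * F - delta_Q**2
    if denom <= 0:
        # Incompatible parameters: no step of length dP can change Phi by dQ.
        raise ValueError("need (dP)^2 * ||grad_Phi||^2 > (dQ)^2")
    lam = 0.5 * np.sqrt(max(G * F - a**2, 0.0) / denom)  # multiplier of ||dw||^2
    lam = max(lam, 1e-12)                                # guard: g parallel to f
    mu = (2.0 * lam * delta_Q + a) / F                   # multiplier of the Phi constraint
    return w + (mu * f - g) / (2.0 * lam)

# Toy problem: minimize E(w) = 0.5 * ||A w - b||^2 while driving the
# hypothetical constraint Phi(w) = sum(w) - 1 to zero.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((8, 5)), rng.standard_normal(8)
w = np.zeros(5)
for _ in range(1000):
    g = A.T @ (A @ w - b)                    # gradient of E
    phi, f = w.sum() - 1.0, np.ones_like(w)  # Phi and its gradient
    a, F, G = f @ g, f @ f, g @ g
    dQ = -0.1 * phi                          # naive adaptive rule: shrink Phi by 10% per step
    proj = G - a**2 / F                      # squared norm of g projected onto {f . dw = 0}
    dP2 = 4e-4 * proj + 4.0 * dQ**2 / F + 1e-12  # keeps dP2 * ||f||^2 > dQ^2
    w = cl_step(w, g, f, dQ, dP2)
print("E =", 0.5 * np.sum((A @ w - b) ** 2), " Phi =", w.sum() - 1.0)

In this sketch the compatibility condition is maintained by construction; if dP2 were fixed too small relative to dQ, cl_step would have no real solution, mirroring the abstract's warning that poorly chosen parameters prevent convergence.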

Constrained Learning Algorithm; Adaptive Parameter Selection; Feedforward Neural Networks; Linear Equations

De-Shuang Huang

The Key Lab of Agriculture Information Technology, Hefei Institute of Intelligent Machines, Chinese Academy of Sciences, P.O. Box 1130, Hefei, Anhui 230031, China

International Conference

8th International Conference on Neural Information Processing (ICONIP 2001)

Shanghai

English

497-502

2001-11-14 (date the record first went online on the Wanfang platform; not necessarily the paper's publication date)