Global Learning of Neural Networks by Using Hybrid Optimization Algorithm
This paper proposes a global learning method for neural networks based on a hybrid optimization algorithm. The hybrid algorithm combines stochastic approximation with gradient descent. Stochastic approximation is first applied to estimate a starting point biased toward the global minimum, allowing the search to escape local minima; the backpropagation (BP) algorithm, a gradient-descent method, is then applied for high-speed convergence. The proposed method has been applied to 8-bit parity check and 6-bit symmetry check problems. The experimental results show that the proposed method converges better than the conventional approach, namely the BP algorithm with randomized initial weight settings.
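The abstract does not specify the exact stochastic-approximation scheme, so the following is only a minimal sketch of the two-phase idea: a simultaneous-perturbation stochastic approximation (SPSA-style) phase to move the weights toward a globally promising region, followed by ordinary backpropagation for fast local convergence. The network size, hyperparameters, and the 2-bit parity (XOR) task stand in for the paper's 8-bit parity and 6-bit symmetry experiments and are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# 2-bit parity (XOR) data; the paper itself uses 8-bit parity and 6-bit symmetry.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def init_weights():
    # One hidden layer of 4 sigmoid units (an assumed, not reported, topology).
    return [rng.normal(0, 0.5, (2, 4)), rng.normal(0, 0.5, (1, 4)),
            rng.normal(0, 0.5, (4, 1)), rng.normal(0, 0.5, (1, 1))]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x):
    W1, b1, W2, b2 = w
    h = sigmoid(x @ W1 + b1)   # hidden layer
    y = sigmoid(h @ W2 + b2)   # output layer
    return h, y

def loss(w):
    _, y = forward(w, X)
    return float(np.mean((y - T) ** 2))

# Phase 1: SPSA-style stochastic approximation. Each step perturbs all
# weights simultaneously by +/- c, estimates the gradient from two loss
# evaluations, and moves with a decaying gain; the noisy estimate lets
# the search jump out of shallow local minima.
def stochastic_approximation(w, steps=300, a=0.5, c=0.1):
    for k in range(1, steps + 1):
        deltas = [np.where(rng.random(p.shape) < 0.5, -1.0, 1.0) for p in w]
        wp = [p + c * d for p, d in zip(w, deltas)]
        wm = [p - c * d for p, d in zip(w, deltas)]
        g_hat = (loss(wp) - loss(wm)) / (2 * c)
        gain = a / k ** 0.602   # standard SPSA gain decay
        w = [p - gain * g_hat * d for p, d in zip(w, deltas)]
    return w

# Phase 2: plain backpropagation (gradient descent) for fast local
# convergence from the point found in phase 1.
def backprop(w, epochs=2000, lr=1.0):
    W1, b1, W2, b2 = w
    for _ in range(epochs):
        h, y = forward([W1, b1, W2, b2], X)
        dy = (y - T) * y * (1 - y)       # output-layer delta
        dh = (dy @ W2.T) * h * (1 - h)   # hidden-layer delta
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0, keepdims=True)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0, keepdims=True)
    return [W1, b1, W2, b2]

w = init_weights()
w = stochastic_approximation(w)   # global phase
w = backprop(w)                   # local phase
print("final MSE:", loss(w))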
Neural networks; Global learning; Stochastic approximation; Gradient descent; Backpropagation algorithm
Yong-Hyun Cho, Seong-Jun Hong
School of Computer and Information Comm. Eng., Catholic Univ. of Daegu, 330, Kumrakri, Hayangup, Kyungsan, Kyungbuk, 712-702, Korea (South)
International Conference
The 2007 International Conference on Intelligent Systems and Knowledge Engineering (the second conference in the series)
Chengdu
English
1146-1151
2007-10-15 (date first posted on the Wanfang platform; not necessarily the paper's publication date)