An Interpretation of Forward-Propagation and Back-Propagation of DNN
A deep neural network (DNN) is hard to understand because its objective loss function is defined on the last layer, not directly on the hidden layers. To better understand DNNs, we interpret the forward-propagation and back-propagation of a DNN as two network structures, fp-DNN and bp-DNN. We then introduce direct loss functions for the hidden layers of fp-DNN and bp-DNN, which gives a way to interpret fp-DNN as an encoder and bp-DNN as a decoder. Using this interpretation of DNN, we conduct experiments showing that fp-DNN learns to encode discriminant features in its hidden layers under the supervision of bp-DNN. Further, we use bp-DNN to visualize and explain the DNN. Our experiments and analyses show that the proposed interpretation is a good tool for understanding and analyzing DNNs.
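The abstract's central observation, that the loss is defined only at the output while hidden layers receive their supervision through back-propagated gradient signals, can be illustrated with a minimal sketch. The following is not the paper's code, just a generic two-layer network in NumPy: the forward pass (the role of fp-DNN) encodes inputs into hidden features, and the backward pass (the role of bp-DNN) sends a gradient signal back to those hidden features.

```python
import numpy as np

# A minimal, hypothetical 2-layer network (not the paper's implementation).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # batch of 4 inputs, 8 features
y = np.eye(3)[rng.integers(0, 3, 4)]     # one-hot targets, 3 classes
W1 = rng.standard_normal((8, 5)) * 0.1   # hidden-layer weights
W2 = rng.standard_normal((5, 3)) * 0.1   # output-layer weights

# Forward propagation (the "fp-DNN" role): encode input into hidden features h.
z1 = x @ W1
h = np.maximum(z1, 0)                    # ReLU hidden features
logits = h @ W2
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)        # softmax probabilities

# Back-propagation (the "bp-DNN" role): the cross-entropy loss is defined
# only at the output, but the chain rule delivers a gradient signal d_h
# to the hidden layer, acting as its indirect supervision.
d_logits = (p - y) / len(x)              # gradient at the output layer
d_h = d_logits @ W2.T                    # signal reaching the hidden layer
d_z1 = d_h * (z1 > 0)                    # propagated through the ReLU
dW1 = x.T @ d_z1                         # hidden-layer weight gradient
dW2 = h.T @ d_logits                     # output-layer weight gradient
```

Note that the hidden layer never sees the loss directly; `d_h` is the only training signal it receives, which is exactly the quantity the paper reinterprets as the output of a decoder network.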
Keywords: Forward-propagation; Back-propagation; Encoder; Decoder
Guotian Xie Jianhuang Lai
The School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China; Guangdong Key
International conference
Guangzhou
English
3-15
2018-11-23 (date first posted on the Wanfang platform; does not represent the paper's publication date)