A Pipelined Pre-training Algorithm for DBNs
Deep networks have been widely used in many domains in recent years. However, pre-training deep networks with the greedy layer-wise algorithm is time consuming, and the scalability of this algorithm is severely restricted by its inherently sequential nature: only one hidden layer can be trained at a time. To speed up the training of deep networks, this paper focuses on the pre-training phase and proposes a pipelined pre-training algorithm that exploits a distributed cluster, which can significantly reduce the pre-training time with no loss of recognition accuracy. By using a computational cluster, it is more efficient than the greedy layer-wise pre-training algorithm. Finally, we conduct comparative experiments between the greedy layer-wise and the pipelined pre-training algorithms on the TIMIT corpus. The results show that the pipelined pre-training algorithm utilizes a distributed GPU cluster efficiently: we achieve speed-ups of 2.84x and 5.9x with no loss of recognition accuracy when using 4 slaves and 8 slaves, respectively, with parallelization efficiency close to 0.73.
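The idea described in the abstract can be illustrated with a minimal single-machine sketch: each pipeline stage owns one RBM layer, and as soon as a mini-batch has updated layer k, its hidden activations are forwarded to layer k+1, so all layers can train concurrently on different batches (unlike greedy layer-wise training, which finishes one layer before starting the next). The class and function names below are illustrative assumptions, not the paper's actual implementation, and the distributed slaves are simulated here by a sequential loop over stages.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def cd1_update(self, v0):
        # Positive phase, one Gibbs step, then gradient update.
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

def pipelined_pretrain(data_batches, layer_sizes, epochs=1):
    """Pipelined layer-wise pre-training (sequential simulation):
    a mini-batch updates stage k, then its hidden activations are
    immediately forwarded to stage k+1. On a cluster, each stage
    would run on its own slave and process different batches at
    the same time."""
    rbms = [RBM(layer_sizes[i], layer_sizes[i + 1], seed=i)
            for i in range(len(layer_sizes) - 1)]
    for _ in range(epochs):
        for batch in data_batches:
            x = batch
            for rbm in rbms:             # each stage = one pipeline slave
                rbm.cd1_update(x)
                x = rbm.hidden_probs(x)  # forward to the next stage
    return rbms
```

In a real deployment, the inner loop over `rbms` would be unrolled across machines so that while slave k+1 consumes the activations of batch t, slave k is already training on batch t+1; that overlap is the source of the reported speed-up.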
Keywords: deep networks, pre-training, greedy layer-wise, RBM, pipelined
Zhiqiang Ma, Tuya Li, Shuangtao Yang, Li Zhang
College of Information Engineering, Inner Mongolia University of Technology, Hohhot, China
Domestic conference (China)
The 16th China National Conference on Computational Linguistics and the 5th International Symposium on Natural Language Processing Based on Naturally Annotated Big Data
Nanjing
English
1-12
2017-10-13 (date first posted on the Wanfang platform; not necessarily the paper's publication date)