Conference Topic

An Experimental Perspective for Computation-Efficient Neural Networks Training

  Nowadays, driven by the strong demand for computation-efficient neural networks that allow deep learning models to be deployed on inexpensive and widely used devices, many lightweight networks have been proposed, such as the MobileNet series and ShuffleNet. These computation-efficient models are specifically designed for very limited computational budgets, e.g., 10–150 MFLOPs, and can run efficiently on ARM-based devices. They also have a smaller CMR than large networks such as VGG, ResNet, and Inception. However, while such models are quite efficient for inference on ARM, what about inference or training on GPUs? Unfortunately, compact models usually cannot fully utilize a GPU, even though they run fast owing to their small size. In this paper, we present a series of extensive experiments on the training of compact models, covering training on a single host, with GPU and CPU, and in a distributed environment. We then give some analysis and suggestions on the training.
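The MFLOPs budgets mentioned above come from replacing standard convolutions with cheaper operators. As a minimal illustrative sketch (not taken from the paper; all layer sizes below are hypothetical), the following compares the multiply-accumulate (MAC) count of a standard convolution with that of the depthwise-separable convolution used by MobileNet-style networks:

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k conv followed by a 1 x 1 pointwise conv."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    # Hypothetical mid-network layer shape, chosen only for illustration.
    h = w = 14
    c_in = c_out = 256
    k = 3
    std = standard_conv_macs(h, w, c_in, c_out, k)
    sep = depthwise_separable_macs(h, w, c_in, c_out, k)
    print(f"standard: {std} MACs, separable: {sep} MACs, "
          f"reduction: {std / sep:.1f}x")
```

For this layer the separable form needs roughly 1/(1/c_out + 1/k²) ≈ 8.7 times fewer MACs, which is why such models fit tight budgets on ARM; the same small per-layer workload, however, is also what leaves a GPU under-occupied during training.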

Neural networks training; Experiment; Distributed

Lujia Yin, Xiaotao Chen, Zheng Qin, Zhaoning Zhang, Jinghua Feng, Dongsheng Li

Science and Technology on Parallel and Distributed Laboratory, National University of Defense Technology, Changsha, China

International Conference

The 12th Conference on Advanced Computer Architecture (ACA 2018) (2018 National Annual Conference on Computer Architecture)

Yingkou, Liaoning, China

English

168-178

2018-08-10 (date first posted on the Wanfang platform; not necessarily the paper's publication date)