Conference Topic

A Comparable Study on Model Averaging, Ensembling and Reranking in NMT

  Neural machine translation has become a benchmark method in machine translation. Many novel structures and methods have been proposed to improve translation quality. However, such models remain difficult to train and their parameters difficult to tune. In this paper, we focus on decoding techniques that boost translation performance by utilizing existing models. We address the problem at three levels (parameter, word and sentence), corresponding to checkpoint averaging, model ensembling and candidate reranking, none of which requires retraining the model. Experimental results show that the proposed decoding approaches can significantly improve performance over the baseline model.
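Of the three techniques the abstract names, checkpoint averaging is the parameter-level one: the weights of several checkpoints saved during a single training run are averaged element-wise before decoding. A minimal sketch follows, assuming each checkpoint is represented as a plain dict mapping parameter names to lists of floats; the function name and data layout are illustrative, not the authors' implementation.

```python
# Hedged sketch of checkpoint (parameter) averaging, assuming each checkpoint
# is a dict {parameter_name: list_of_floats}. In practice these would be
# framework tensors (e.g. a state_dict), but the arithmetic is the same.
def average_checkpoints(checkpoints):
    """Return a new parameter dict that is the element-wise mean
    of the given checkpoints (all assumed to share the same keys
    and shapes)."""
    n = len(checkpoints)
    averaged = {}
    for name in checkpoints[0]:
        per_ckpt_values = [ckpt[name] for ckpt in checkpoints]
        # zip(*...) walks the parameter element-wise across checkpoints.
        averaged[name] = [sum(vals) / n for vals in zip(*per_ckpt_values)]
    return averaged

# Toy usage: two "checkpoints" with a single two-element weight vector.
ckpt_a = {"w": [1.0, 2.0]}
ckpt_b = {"w": [3.0, 4.0]}
print(average_checkpoints([ckpt_a, ckpt_b]))  # {'w': [2.0, 3.0]}
```

The averaged dict would then be loaded back into the model for decoding; word-level ensembling differs in that several full models are kept and their per-step output probabilities, rather than their weights, are averaged.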

Yuchen Liu, Long Zhou, Yining Wang, Yang Zhao, Jiajun Zhang, Chengqing Zong

National Laboratory of Pattern Recognition, CASIA, University of Chinese Academy of Sciences, Beijing, China

International Conference

2018 International Conference on Natural Language Processing and Chinese Computing (NLPCC 2018)

Hohhot

English

299-308

2018-08-26 (date the paper first appeared on the Wanfang platform; not the paper's publication date)