Conference topic

Otem&Utem: Over- and Under-Translation Evaluation Metric for NMT

  Although neural machine translation (NMT) yields promising translation performance, it unfortunately suffers from over- and under-translation issues [31], the study of which has become a research hotspot in NMT. At present, such studies mainly rely on the dominant automatic evaluation metrics, such as BLEU, to evaluate overall translation quality with respect to both adequacy and fluency. However, these metrics cannot accurately measure how well NMT systems handle the above-mentioned issues. In this paper, we propose two quantitative metrics, Otem and Utem, to automatically evaluate system performance in terms of over- and under-translation, respectively. Both metrics are based on the proportion of mismatched n-grams between the gold reference and the system translation. We evaluate both metrics by comparing their scores with human evaluations, where the values of the Pearson correlation coefficient reveal a strong correlation. Moreover, in-depth analyses of various translation systems indicate some inconsistency between BLEU and our proposed metrics, highlighting the necessity and significance of our metrics.
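The abstract describes Otem and Utem as being based on the proportion of mismatched n-grams between the gold reference and the system translation. The following is a minimal Python sketch of that general idea only; the function names, the per-order proportions, and the toy example are illustrative assumptions and do not reproduce the paper's exact Otem/Utem definitions.

```python
from collections import Counter


def ngram_counts(tokens, n):
    """Count the n-grams of order n in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def mismatch_proportions(reference, hypothesis, max_n=4):
    """Illustrative over-/under-translation proxies (not the paper's exact formulas).

    For each n-gram order n:
      - the 'over' proxy is the proportion of hypothesis n-grams that exceed
        their reference counts (repeated or spurious content);
      - the 'under' proxy is the proportion of reference n-grams that the
        hypothesis fails to cover (dropped content).
    """
    over, under = [], []
    for n in range(1, max_n + 1):
        ref_counts = ngram_counts(reference, n)
        hyp_counts = ngram_counts(hypothesis, n)
        hyp_total = sum(hyp_counts.values())
        ref_total = sum(ref_counts.values())
        # Hypothesis n-grams produced more often than the reference licenses.
        extra = sum(max(c - ref_counts[g], 0) for g, c in hyp_counts.items())
        # Reference n-grams the hypothesis does not reproduce often enough.
        missing = sum(max(c - hyp_counts[g], 0) for g, c in ref_counts.items())
        over.append(extra / hyp_total if hyp_total else 0.0)
        under.append(missing / ref_total if ref_total else 0.0)
    return over, under


if __name__ == "__main__":
    ref = "the cat sat on the mat".split()
    hyp = "the cat cat sat on mat".split()  # one repeated word, one dropped word
    print(mismatch_proportions(ref, hyp, max_n=2))
```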

Evaluation metric; Neural machine translation; Over-translation; Under-translation

Jing Yang, Biao Zhang, Yue Qin, Xiangwen Zhang, Qian Lin, Jinsong Su

Xiamen University, Xiamen, China; Provincial Key Laboratory for Computer Information Processing Technology, Xiamen University, Xiamen, China

International conference

2018 International Conference on Natural Language Processing and Chinese Computing (NLPCC 2018)

Hohhot

English

291-302

2018-08-26 (date the record first went online on the Wanfang platform; does not represent the paper's publication date)