
Generating Textual Entailment Using Residual LSTMs

  Generating textual entailment (GTE) is a recently proposed task that studies how to infer a sentence from a given premise. Current sequence-to-sequence GTE models are prone to producing invalid sentences when faced with sufficiently complex premises. Moreover, the lack of appropriate evaluation criteria hinders research on GTE. In this paper, we conjecture that the underpowered encoder is the major bottleneck in generating more meaningful sequences, and we address this by employing a residual LSTM network. With the extended model, we obtain state-of-the-art results. Furthermore, we propose a novel metric for GTE, namely EBR (Evaluated By Recognizing textual entailment), which can evaluate different GTE approaches in an objective and fair way without human effort while also accounting for the diversity of inferences. Finally, we point out the limitations of adapting a general sequence-to-sequence framework to the GTE setting and offer some proposals for future research, hoping to stimulate further public discussion.
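The abstract does not spell out how the residual LSTM encoder is wired. Below is a minimal, hypothetical PyTorch sketch of one common formulation, in which each stacked LSTM layer adds its input back to its own output (h_l = LSTM_l(h_{l-1}) + h_{l-1}) so that gradients flow more easily through deep stacks; the class name, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualLSTMEncoder(nn.Module):
    """Stacked LSTM encoder with residual (skip) connections between layers.

    Hypothetical sketch: layer l receives the previous layer's output x and
    produces out + x, so the residual add requires matching hidden sizes.
    """

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_layers=3):
        super().__init__()
        assert embed_dim == hidden_dim, "residual add needs matching sizes"
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.layers = nn.ModuleList(
            [nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        for lstm in self.layers:
            out, _ = lstm(x)        # (batch, seq_len, hidden_dim)
            x = out + x             # residual connection around the layer
        return x                    # contextual encoding of the premise


# Toy usage: encode a batch of two premises of length 5.
encoder = ResidualLSTMEncoder(vocab_size=10000)
premises = torch.randint(0, 10000, (2, 5))
print(encoder(premises).shape)      # torch.Size([2, 5, 300])
```

In a full sequence-to-sequence GTE system, the returned encoding would be consumed by a decoder that generates the entailed sentence; that part is omitted here since the abstract gives no detail about it.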

Generating Textual Entailment; Natural Language Generation; Natural Language Processing; Artificial Intelligence

Maosheng Guo, Yu Zhang, Dezhi Zhao, Ting Liu

School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China

Domestic conference

The 16th China National Conference on Computational Linguistics (CCL 2017) and the 5th International Symposium on Natural Language Processing Based on Naturally Annotated Big Data (NLP-NABD 2017)

Nanjing

English

1-10

2017-10-13 (date first posted on the Wanfang platform; not the paper's publication date)