Recognizing Textual Entailment via Multi-task Knowledge Assisted LSTM
Recognizing Textual Entailment (RTE) plays an important role in NLP applications such as question answering and information retrieval. Most previous work either uses classifiers with elaborately designed features and lexical similarity measures, or brings distant supervision and reasoning techniques into the RTE task. However, these approaches are hard to generalize due to the complexity of feature engineering, and are prone to cascading errors and data sparsity. To alleviate these problems, some work uses LSTM-based recurrent neural networks with word-by-word attention to recognize textual entailment. Nevertheless, such work does not make full use of knowledge bases (KBs) to help reasoning. In this paper, we propose a deep neural network architecture called Multi-task Knowledge Assisted LSTM (MKAL), which conducts implicit inference with the assistance of a KB and uses predicate-to-predicate attention to detect entailment between predicates. In addition, our model applies a multi-task architecture to further improve performance. Experimental results show that the proposed method achieves competitive results compared to previous work.
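The word-by-word attention mentioned in the abstract can be sketched as follows. This is an illustrative NumPy implementation of a simple dot-product attention between hypothesis and premise hidden states, not the paper's exact formulation (MKAL's predicate-to-predicate attention and KB assistance are not reproduced here); all function and variable names are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def word_by_word_attention(premise_h, hyp_h):
    """For each hypothesis position, attend over all premise hidden states.

    premise_h: (Lp, d) premise LSTM hidden states
    hyp_h:     (Lh, d) hypothesis LSTM hidden states
    Returns (Lh, Lp) attention weights and (Lh, d) context vectors.
    """
    scores = hyp_h @ premise_h.T                    # (Lh, Lp) dot-product scores
    weights = np.apply_along_axis(softmax, 1, scores)
    context = weights @ premise_h                   # weighted premise summary per hypothesis token
    return weights, context

# Toy example: 5-token premise, 3-token hypothesis, hidden size 8.
rng = np.random.default_rng(0)
Hp = rng.standard_normal((5, 8))
Hh = rng.standard_normal((3, 8))
w, c = word_by_word_attention(Hp, Hh)
```

Each row of `w` is a distribution over premise tokens, so the context vectors in `c` summarize the premise parts most relevant to each hypothesis token; an entailment classifier can then be applied on top of these contexts.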
Lei Sha, Sujian Li, Baobao Chang, Zhifang Sui
Key Laboratory of Computational Linguistics (Ministry of Education), School of Electronics Engineering and Computer Science, Peking University; Collaborative Innovation Center for Language Ability, Xuzhou 221009, China
Domestic conference (China)
The 15th China National Conference on Computational Linguistics (CCL 2016) and the 4th International Symposium on Natural Language Processing based on Naturally Annotated Big Data (NLP-NABD 2016)
Yantai
English
1-14
2016-10-14 (date first posted on the Wanfang platform; not necessarily the paper's publication date)