Conference Topic

Detecting Duplicates with Shallow and Parser-based Methods

Identifying duplicate texts is important in many areas, such as plagiarism detection, information retrieval, text summarization, and question answering. Current approaches are mostly surface-oriented (or use only shallow syntactic representations) and treat each text as a mere list of tokens. In this work, by contrast, we describe a deep, semantically oriented method based on semantic networks derived by a syntactico-semantic parser. Semantically identical or similar semantic networks for each sentence of a given base text are retrieved efficiently through a specialized semantic network index. To detect many kinds of paraphrases, the current base semantic network is varied by applying inferences: lexico-semantic relations, relation axioms, and meaning postulates. Several important phenomena occurring in hard-to-detect duplicates are discussed. The deep approach profits from background knowledge, whose acquisition from corpora such as Wikipedia is explained briefly. The deep duplicate recognizer is combined with two shallow duplicate recognizers in order to guarantee high recall for texts that are not fully parsable. The evaluation shows that, compared with traditional shallow methods, the combined approach preserves recall and increases precision considerably. For the evaluation, a standard corpus of German plagiarisms was extended by four diverse components with an emphasis on duplicates (not just plagiarisms), e.g., news feed articles from different web sources and two translations of the same short story.
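The following is a minimal sketch of the combination idea the abstract describes: a deep recognizer scores sentence pairs when both parse, and a shallow token-overlap recognizer provides a fallback so recall is preserved on unparsable text. All names (`shallow_similarity`, `deep_similarity`, `is_duplicate`, the `parse` callable, the weights and the 0.8 threshold) are illustrative assumptions, not the authors' implementation; the paper combines recognizer scores with an SVM, whereas this sketch uses a fixed weighting.

```python
from typing import Optional

def shallow_similarity(a: str, b: str) -> float:
    """Jaccard overlap of token sets: a typical surface-oriented measure."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    union = ta | tb
    return len(ta & tb) / len(union) if union else 0.0

def deep_similarity(a: str, b: str, parse) -> Optional[float]:
    """Compare semantic networks for two sentences.

    'parse' is a hypothetical stand-in for a syntactico-semantic parser that
    returns a comparable semantic network object, or None when the sentence
    is not fully parsable. Returns None in that case so the caller can fall
    back to a shallow recognizer.
    """
    na, nb = parse(a), parse(b)
    if na is None or nb is None:
        return None
    return na.match_score(nb)  # assumed network-matching method

def is_duplicate(a: str, b: str, parse, threshold: float = 0.8) -> bool:
    """Combined deep/shallow decision for one sentence pair."""
    deep = deep_similarity(a, b, parse)
    if deep is None:
        # Fallback keeps recall high for texts the parser cannot handle.
        return shallow_similarity(a, b) >= threshold
    # Fixed weighting stands in for the paper's SVM-based combination.
    combined = 0.5 * deep + 0.5 * shallow_similarity(a, b)
    return combined >= threshold
```

In this reading, the deep score contributes precision (paraphrases with little token overlap still match via their semantic networks), while the shallow fallback guarantees that unparsable sentences are never silently skipped.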

Duplicate detection; Plagiarism; Semantic networks; Support vector machine; Paraphrases; Entailments

Sven HARTRUMPF, Tim VOR DER BRÜCK, Christian EICHHORN

IICS, FernUniversität in Hagen, Hagen, Germany; Informatik 1, TU Dortmund, Dortmund, Germany

International Conference

The 6th International Conference on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE 2010)

Beijing

English

1-8

2010-08-21 (date first posted on the Wanfang platform; does not necessarily reflect the paper's publication date)