Conference Topic

Progressive Alignment and Discriminative Error Correction for Multiple OCR Engines

This paper presents a novel method for improving optical character recognition (OCR). The method employs the progressive alignment of hypotheses from multiple OCR engines, followed by final hypothesis selection using maximum entropy classification methods. The maximum entropy models are trained on a synthetic calibration data set. Although progressive alignment is not guaranteed to be optimal, the results are nonetheless strong. The synthetic data set used to train or calibrate the selection models is chosen without regard to the test data set; hence, we refer to it as “out of domain.” It is synthetic in the sense that document images have been generated from the original digital text and degraded using realistic error models. Along with the true transcripts and OCR hypotheses, the calibration data contains sufficient information to produce good models of how to select the best OCR hypothesis and thus correct mistaken OCR hypotheses. Maximum entropy methods leverage that information using carefully chosen feature functions to choose the best possible correction. Our method shows a 24.6% relative improvement over the word error rate (WER) of the best-performing of the five OCR engines employed in this work. Relative to the average WER of all five OCR engines, our method yields a 69.1% relative reduction in the error rate. Furthermore, for 52.2% of the documents our method achieves a new minimum WER.
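
The abstract describes a two-stage method: progressive alignment of the word hypotheses produced by multiple OCR engines, followed by per-position selection with a maximum entropy classifier. The sketch below is a hypothetical illustration of the alignment stage only, assuming word-tokenized hypotheses and a simple column-merging scheme; it is not the authors' implementation, and the maximum entropy selection stage is omitted.

```python
from collections import Counter
from difflib import SequenceMatcher

def progressive_align(hypotheses):
    """Progressively merge word-tokenized hypotheses from several OCR engines
    into a column-wise alignment. Column i holds one token (or None for a gap)
    per engine merged so far."""
    columns = [[tok] for tok in hypotheses[0]]
    for n, hyp in enumerate(hypotheses[1:], start=1):
        # Backbone for alignment: the most frequent non-gap token in each column.
        backbone = [Counter(t for t in col if t is not None).most_common(1)[0][0]
                    for col in columns]
        merged = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(
                a=backbone, b=hyp, autojunk=False).get_opcodes():
            cols, toks = columns[i1:i2], hyp[j1:j2]
            for k in range(max(len(cols), len(toks))):
                if k < len(cols):   # existing column: extend with a token or a gap
                    merged.append(cols[k] + [toks[k] if k < len(toks) else None])
                else:               # token with no matching column: open a new column
                    merged.append([None] * n + [toks[k]])
        columns = merged
    return columns

# Example: three hypothetical engine outputs for the same line.
engines = [
    "the quick brown fox".split(),
    "the quick brovvn fox".split(),
    "the qu1ck brown fax".split(),
]
for col in progressive_align(engines):
    print(col)
```

In the paper's full method, a discriminatively trained maximum entropy classifier with carefully chosen feature functions would then select the best token from each aligned column; that selection stage is not sketched here.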

William B. Lund, Daniel D. Walker, Eric K. Ringger

Computer Science Department and the Harold B. Lee Library, Brigham Young University, Provo, Utah 84602, USA; Computer Science Department, Brigham Young University, Provo, Utah 84602, USA

International Conference

11th International Conference on Document Analysis and Recognition (ICDAR)

Beijing

English

764-768

2011-09-01 (date the record first appeared on the Wanfang platform, not the paper's publication date)