Abstract

Whole-book recognition is a document image analysis strategy that operates on the complete set of a book's page images, attempting to improve accuracy by automatic unsupervised adaptation. Our algorithm expects to be given initial iconic and linguistic models, derived from (generally errorful) OCR results and (generally incomplete) dictionaries; then, guided entirely by evidence internal to the test set, it corrects the models, yielding improved accuracy. We have found that successful corrections are often closely associated with disagreements between the models, which can be detected within the test set by measuring the cross entropy between (a) the posterior probability distribution over character classes (the recognition results from image classification alone) and (b) the posterior probability distribution over word classes (the recognition results from image classification combined with linguistic constraints). We report experiments on long passages (up to 180 pages) revealing that: (1) disagreements and error rates are strongly correlated; (2) our algorithm can drive down recognition error rates by nearly an order of magnitude; and (3) the longer the passage, the lower the achievable error rate. We also propose formal models for a book's text, for iconic and linguistic constraints, and for our whole-book recognition algorithm; using these, we rigorously prove sufficient conditions for the whole-book recognition strategy to succeed in the ways illustrated in the experiments.
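To make the disagreement measure concrete, the following is a minimal Python sketch of cross entropy between two posterior distributions over character classes. The variable names (iconic, linguistic) and the sample probabilities are illustrative assumptions, not the paper's notation or data.

    import math

    def cross_entropy(p, q, eps=1e-12):
        # H(p, q) = -sum_x p(x) * log q(x); eps guards against log(0).
        return -sum(p[c] * math.log(q.get(c, 0.0) + eps) for c in p)

    # Hypothetical posteriors for a single character image.
    # iconic: from image classification alone (character classes).
    # linguistic: image evidence combined with word/dictionary constraints.
    iconic = {"a": 0.60, "o": 0.30, "e": 0.10}
    linguistic = {"a": 0.15, "o": 0.75, "e": 0.10}

    # A large cross entropy flags a model disagreement worth correcting.
    print(cross_entropy(iconic, linguistic))

In this sketch, the two distributions concentrate on different characters ("a" vs. "o"), so the cross entropy is high, which is the kind of internal evidence the abstract describes as a cue for model correction.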
