Abstract
Recent large-scale digitization and preservation efforts have made images of original manuscripts, accompanied by transcripts, commonly available. An important challenge, for which no practical system exists, is that of aligning transcript letters to their coordinates in manuscript images. Here we propose a system that directly matches the image of a historical text with a synthetic image created from the transcript for this purpose, rather than attempting to recognize individual letters in the manuscript image using optical character recognition (OCR). Our method matches the pixels of the two images by employing a dedicated dense flow mechanism coupled with novel local image descriptors designed to spatially integrate local patch similarities. Matching these pixel representations is performed using a message passing algorithm. The various stages of our method make it robust to document degradation, to variations between script styles, and to non-linear image transformations. The robustness and practicality of the system are verified by comprehensive empirical experiments.
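To make the overall idea concrete, the sketch below illustrates transcript-to-image alignment by matching a manuscript line against a synthetic rendering of its transcript, rather than recognizing letters. It is not the paper's method: the dense flow, patch-integrating descriptors, and message-passing matcher are replaced by simple column ink profiles aligned with dynamic time warping, and all function names (`column_profile`, `align_columns`) are illustrative.

```python
# Minimal sketch (NOT the paper's algorithm): align a manuscript text-line
# image to a synthetic rendering of its transcript by monotonically warping
# column ink profiles with dynamic time warping. Assumes both inputs are
# binarized H x W arrays with ink pixels equal to 1.
import numpy as np

def column_profile(img):
    """Per-column ink density of a binarized text-line image."""
    return img.mean(axis=0)

def align_columns(manuscript, synthetic):
    """Return, for each synthetic-image column, a matching manuscript column."""
    a, b = column_profile(synthetic), column_profile(manuscript)
    n, m = len(a), len(b)
    cost = np.abs(a[:, None] - b[None, :])          # n x m local column costs
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(
                acc[i - 1, j] if i > 0 else np.inf,                 # skip a synthetic column
                acc[i, j - 1] if j > 0 else np.inf,                 # skip a manuscript column
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,   # match both columns
            )
            acc[i, j] = cost[i, j] + prev
    # Backtrack the warping path, then keep one manuscript column per synthetic column.
    i, j, path = n - 1, m - 1, []
    while i > 0 or j > 0:
        path.append((i, j))
        moves = []
        if i > 0 and j > 0:
            moves.append((acc[i - 1, j - 1], i - 1, j - 1))
        if i > 0:
            moves.append((acc[i - 1, j], i - 1, j))
        if j > 0:
            moves.append((acc[i, j - 1], i, j - 1))
        _, i, j = min(moves)
    path.append((0, 0))
    match = {}
    for i, j in reversed(path):
        match.setdefault(i, j)   # first manuscript column reached for each synthetic column
    return np.array([match[i] for i in range(n)])
```

Since letter bounding boxes in the synthetic rendering are known by construction, such a column mapping would let each transcript letter be projected onto manuscript coordinates; the paper replaces this crude 1D warp with dense pixel-level matching that is robust to degradation and script variation.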
| Original language | English |
| --- | --- |
| Article number | 6628826 |
| Pages (from-to) | 1310-1314 |
| Number of pages | 5 |
| Journal | Proceedings of the International Conference on Document Analysis and Recognition, ICDAR |
| DOIs | |
| State | Published - 2013 |
| Event | 12th International Conference on Document Analysis and Recognition, ICDAR 2013, Washington, DC, United States. Duration: 25 Aug 2013 → 28 Aug 2013 |