Abstract
Extracting text from tabular structures within a compound document image (CDI) is crucial for understanding the document. The main objective is to extract only the useful information, since tabular data encodes relations between the text items in each tuple. Text in an image may have low contrast, varying style, size, alignment, and orientation, and a complex background. This work presents a three-step tabular text extraction process comprising pre-processing, separation, and extraction. The pre-processing step uses a guided image filter to remove various kinds of noise from the image. Improved binomial thresholding (IBT) then separates the text from the background. Finally, the tabular text is recognized and extracted from the CDI using a deep neural network (DNN), whose layer weights are optimized with the Harris Hawks optimization algorithm (HHOA). The extracted text and its associated structure can be used in many ways, including replicating the document in digital form, information retrieval, and text summarization. The proposed process is evaluated comprehensively on the UNLV, TableBank, and ICDAR 2013 image datasets. The complete procedure is implemented in Python, and performance is verified using precision metrics.
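The abstract's first two stages (edge-preserving denoising, then thresholding the text away from the background) can be sketched in plain NumPy. Note the assumptions: the paper's "improved binomial thresholding" and HHOA-tuned DNN are not specified here, so this sketch substitutes a standard self-guided filter (the He et al. formulation) and Otsu's global threshold as stand-ins for the pre-processing and separation steps; the synthetic image, window radius, and `eps` value are illustrative choices, not values from the paper.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)x(2r+1) window, via a summed-area table."""
    pad = np.pad(img, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    k = 2 * r + 1
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / (k * k)

def guided_filter(I, r=2, eps=1e-3):
    """Self-guided edge-preserving smoothing (standard guided-filter form)."""
    mean_I = box_filter(I, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = var_I / (var_I + eps)          # ~1 near edges (kept), ~0 in flat areas
    b = (1 - a) * mean_I
    return box_filter(a, r) * I + box_filter(b, r)

def otsu_threshold(img):
    """Global threshold maximizing between-class variance (stand-in for IBT)."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 cumulative mean
    sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    return np.argmax(sigma_b) / 255.0

# Synthetic "document": a dark text-like stroke on a bright, noisy background.
rng = np.random.default_rng(0)
img = np.clip(0.9 + rng.normal(0, 0.05, (64, 64)), 0, 1)
img[20:24, 8:56] = 0.1

smooth = guided_filter(img)                    # step 1: denoise
t = otsu_threshold(smooth)                     # step 2: pick threshold
binary = (smooth < t).astype(np.uint8)         # 1 = foreground text pixel
```

The separated `binary` mask is what a recognition stage (the paper's DNN) would consume; swapping in the actual IBT rule would only change `otsu_threshold`.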