Abstract
Visually rich documents, such as forms, invoices, receipts, and ID cards, are ubiquitous in daily business and life. These documents convey information through multiple channels, including text content, layout, font size, and text position, and combining these elements can improve information extraction performance. However, previous works have not effectively exploited the cooperation between these rich information sources: text detection and recognition are performed without semantic supervision (e.g., entity name annotations), and text information extraction is performed on serialized plain text alone, ignoring rich visual information. This paper presents a method for extracting information from such documents that integrates textual features with both non-spatial and spatial visual features. The method consists of two main steps and uses three deep neural networks. The first step, Text Reading, employs two CNN models (Lightweight DB and C-PREN) for OCR, built on the state-of-the-art models DB and PREN with two improvements: reducing noise by removing the SE block of DB, and integrating both context and position features in PREN. The second step, Text Information Extraction, uses a graph convolutional network (RGCN) for named entity recognition. Experiments on a self-collected dataset and two public datasets demonstrate that our method improves the performance of the original models and outperforms other state-of-the-art methods.
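To make the second step more concrete, the sketch below illustrates the kind of relational graph-convolution update an RGCN applies over document text segments before tagging them with entity labels. It is a minimal sketch under stated assumptions: the class name, feature dimensions, and the two relation types (e.g., horizontal and vertical neighbors) are illustrative placeholders, not the paper's released implementation.

```python
# Illustrative RGCN-style message passing over text-segment nodes.
# All names, sizes, and relation types below are assumptions for exposition.
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # One weight matrix per relation type, plus a self-loop transform.
        self.rel_weights = nn.Parameter(torch.randn(num_relations, in_dim, out_dim) * 0.01)
        self.self_weight = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: [num_nodes, in_dim] fused textual + visual features per segment
        # adj:        [num_relations, num_nodes, num_nodes] binary adjacency per relation
        out = self.self_weight(node_feats)
        for r in range(adj.shape[0]):
            deg = adj[r].sum(dim=1, keepdim=True).clamp(min=1.0)   # per-node normalization
            out = out + (adj[r] / deg) @ node_feats @ self.rel_weights[r]
        return torch.relu(out)

# Toy usage: 5 text segments, 16-d fused features, 2 hypothetical relation types.
feats = torch.randn(5, 16)
adj = torch.randint(0, 2, (2, 5, 5)).float()
print(RGCNLayer(16, 8, num_relations=2)(feats, adj).shape)  # torch.Size([5, 8])
```

In practice, the updated node embeddings would be passed to a classifier that assigns an entity tag to each text segment; the toy adjacency here would be derived from the spatial layout produced by the Text Reading step.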