Abstract

Recognizing the two-dimensional structure of tables and forms in document images is challenging due to the complexity of document structures and the diversity of layouts. In this paper, we propose a unified graph neural network (GNN) based framework, named Table Structure Recognition Network (TSRNet), to jointly detect and recognize the structures of various tables and forms. First, a multi-task fully convolutional network (FCN) segments primitive regions, such as text segments and ruling lines, from document images. A GNN then classifies and groups these primitive regions into page objects such as tables and cells. Finally, the relationships between neighboring page objects are analyzed by another GNN-based parsing module. The parameters of all modules in the system can be trained end-to-end to optimize overall performance. Experiments on table detection and structure recognition for modern documents on the POD 2017, cTDaR 2019, and PubTabNet datasets, and on template-free form parsing for historical documents on the NAF dataset, show that the proposed method can handle various table/form structures and achieves superior performance.
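The abstract describes a three-stage learnable pipeline: an FCN that produces per-pixel primitive maps, a GNN that classifies primitives and scores whether pairs belong to the same page object, and a second GNN that labels relationships between neighboring page objects. The sketch below illustrates one way such a pipeline could be wired up in PyTorch; it is not the authors' implementation, and all class names, layer sizes, and the dense-adjacency message-passing scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a TSRNet-style pipeline:
# (1) multi-task FCN for primitive maps, (2) GNN for primitive grouping,
# (3) GNN for relation classification between page objects.
import torch
import torch.nn as nn


class PrimitiveFCN(nn.Module):
    """Multi-task FCN: per-pixel maps for text segments and ruling lines."""
    def __init__(self, in_ch=3, hidden=32, num_maps=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(hidden, num_maps, 1)  # one channel per primitive type

    def forward(self, image):
        return self.head(self.backbone(image))


class GraphLayer(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) node features, adj: (N, N) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        msg = adj @ x / deg  # mean of neighbor features
        return torch.relu(self.update(torch.cat([x, msg], dim=-1)))


class GroupingGNN(nn.Module):
    """Classifies primitives and scores whether two primitives share a page object."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.gnn = GraphLayer(dim)
        self.node_cls = nn.Linear(dim, num_classes)
        self.pair_score = nn.Linear(2 * dim, 1)

    def forward(self, x, adj):
        h = self.gnn(x, adj)
        logits = self.node_cls(h)
        # pairwise "same object" scores over all candidate node pairs
        pair = torch.cat([h.unsqueeze(1).expand(-1, h.size(0), -1),
                          h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1)
        return logits, self.pair_score(pair).squeeze(-1)


class RelationGNN(nn.Module):
    """Predicts relationship labels between neighboring page objects."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.gnn = GraphLayer(dim)
        self.rel_cls = nn.Linear(2 * dim, num_relations)

    def forward(self, x, adj):
        h = self.gnn(x, adj)
        pair = torch.cat([h.unsqueeze(1).expand(-1, h.size(0), -1),
                          h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1)
        return self.rel_cls(pair)


if __name__ == "__main__":
    fcn = PrimitiveFCN()
    maps = fcn(torch.randn(1, 3, 256, 256))      # (1, 2, 256, 256) primitive maps
    nodes = torch.randn(6, 64)                   # 6 primitives with 64-d features
    adj = (torch.rand(6, 6) > 0.5).float()
    logits, same_obj = GroupingGNN(64, num_classes=4)(nodes, adj)
    rels = RelationGNN(64, num_relations=3)(nodes, adj)
```

In a full system the graphs would be sparse (e.g. edges between nearby primitive boxes), node features would be pooled from the FCN feature maps so that gradients flow end-to-end as the abstract describes, and the pairwise grouping scores would be thresholded to form connected components corresponding to cells and tables; those details are omitted here for brevity.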
