Abstract

Corporations must screen critical information from large numbers of resumes with heterogeneous formats and content to support managerial decision-making, but traditional manual screening is too inaccurate to meet this demand. We therefore propose ESGNet, a multimodal network model incorporating entity semantic graphs, to accurately extract critical information from Chinese resumes. First, each resume is partitioned into distinct blocks according to its content, and an entity semantic graph is constructed according to entity categories. Associated features within the image and text modalities then interact to capture latent semantic information. Furthermore, we employ a Transformer with multimodal self-attention to establish relationships among the modalities, and we incorporate supervised contrastive learning into the loss function to categorize the feature information. Experimental results on a real Chinese resume dataset demonstrate that ESGNet achieves the best information extraction results on all three indicators compared with other models, with a comprehensive F1 score of 91.65%.
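The abstract does not give the exact form of the supervised contrastive term in the loss, so the following is only an illustrative sketch of a standard supervised contrastive loss (positives are all other samples sharing the anchor's label), written in plain Python over pre-normalized embedding vectors; the function name, temperature value, and list-based representation are assumptions for illustration, not the paper's implementation.

```python
import math

def supcon_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss.

    embeddings: list of equal-length, L2-normalized vectors (plain lists).
    labels:     list of class labels, one per embedding.
    For each anchor, every other sample with the same label is a positive;
    all remaining samples act as negatives in the softmax denominator.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n = len(embeddings)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a positive pair contribute nothing
        # Softmax denominator over all non-anchor samples.
        denom = sum(math.exp(dot(embeddings[i], embeddings[k]) / temperature)
                    for k in range(n) if k != i)
        # Average negative log-likelihood over the anchor's positives.
        loss_i = -sum(
            math.log(math.exp(dot(embeddings[i], embeddings[j]) / temperature) / denom)
            for j in positives
        ) / len(positives)
        total += loss_i
        anchors += 1
    return total / anchors

# Embeddings clustered by label yield a much lower loss than mismatched ones.
clustered = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
mismatched = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
labels = [0, 0, 1, 1]
```

The intuition the paper relies on is exactly what this loss encodes: feature vectors of entities in the same category are pulled together while those of different categories are pushed apart, which sharpens the subsequent categorization of feature information.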
