Abstract

Corporations must screen critical information from large numbers of resumes with varying formats and content to support managerial decision-making. However, traditional manual screening methods lack the accuracy to meet this demand. We therefore propose ESGNet, a multimodal network model incorporating entity semantic graphs, for accurately extracting critical information from Chinese resumes. First, each resume is partitioned into distinct blocks according to its content, and an entity semantic graph is constructed according to entity categories. Associated features within the image and text modalities then interact to capture latent semantic information. Furthermore, we employ a Transformer with multimodal self-attention to establish relationships among modalities, and we incorporate supervised contrastive learning into the loss function to categorize feature information. Experimental results on a real Chinese resume dataset demonstrate that ESGNet outperforms competing models on all three indicators, with the comprehensive F1 score reaching 91.65%.
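The abstract does not spell out its supervised contrastive loss term. As a point of reference, a minimal NumPy sketch of the standard supervised contrastive loss (Khosla et al., 2020), on which such a loss term is typically based, might look like this; the function name, temperature value, and batch layout are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Standard supervised contrastive loss over L2-normalized embeddings.

    features: (N, D) array of embeddings; labels: (N,) integer class labels.
    Samples sharing a label are treated as positives and pulled together;
    all other samples in the batch serve as negatives.
    NOTE: illustrative sketch, not the loss defined in the paper.
    """
    # L2-normalize so dot products are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / tau                              # temperature-scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                # exclude each anchor itself

    # log-softmax over all other samples, shifted by the row max for stability
    sim_max = np.max(np.where(not_self, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # positives: same label, different sample
    pos = (labels[:, None] == labels[None, :]) & not_self
    has_pos = pos.sum(axis=1) > 0                    # skip anchors with no positive
    per_anchor = -(log_prob * pos).sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return per_anchor.mean()
```

Embeddings whose same-class members cluster together yield a lower loss than scattered ones, which is the property the training objective exploits.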
