Abstract

Deep learning has achieved state-of-the-art performance on entity resolution (ER). However, deep models typically require large quantities of accurately labeled training data and cannot easily be tuned toward a target workload. In real scenarios, sufficient training data may not be available; even when it is abundant, its distribution almost certainly differs from that of the target data to some extent. To alleviate this limitation, this paper proposes a novel risk-based adaptive training approach for ER that tunes a deep model toward its target workload according to the workload's particular characteristics. Building on recent advances in risk analysis for ER, the proposed approach first trains a deep model on labeled training data and then fine-tunes it on unlabeled target data by minimizing its misprediction risk. Our theoretical analysis shows that risk-based adaptive training can correct the label status of a mispredicted instance with a fairly good chance. Finally, we empirically validate its efficacy on real benchmark data through a comparative study. Our extensive experiments show that it considerably improves the performance of deep models. Furthermore, in the scenario of distribution misalignment, it similarly outperforms state-of-the-art transfer learning alternatives by considerable margins. Using ER as a test case, we demonstrate that risk-based adaptive training is a promising approach potentially applicable to various challenging classification tasks.
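The two-phase procedure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the classifier is a toy logistic model rather than a deep network, and the misprediction-risk estimate is approximated by mean prediction entropy, a stand-in for the learned risk model the abstract refers to. All function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_source(X, y, epochs=300, lr=0.5):
    """Phase 1: supervised training on labeled (source) pairs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # cross-entropy gradient
    return w

def risk(w, X):
    """Estimated misprediction risk on unlabeled pairs; here simply
    mean prediction entropy (an illustrative stand-in estimator)."""
    p = np.clip(sigmoid(X @ w), 1e-6, 1 - 1e-6)
    return float(np.mean(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def fine_tune_on_risk(w, X_target, epochs=150, lr=0.5):
    """Phase 2: tune toward the target workload by descending the
    gradient of the label-free risk estimate."""
    w = w.copy()
    for _ in range(epochs):
        p = np.clip(sigmoid(X_target @ w), 1e-6, 1 - 1e-6)
        # d(entropy)/dw = X^T [ p(1-p) * log((1-p)/p) ]
        g = X_target.T @ (p * (1 - p) * np.log((1 - p) / p)) / len(X_target)
        w -= lr * g
    return w

# Toy workloads: source pairs and a slightly shifted target distribution,
# mimicking the distribution misalignment scenario.
rng = np.random.default_rng(0)
X_src = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y_src = np.array([1] * 50 + [0] * 50)
X_tgt = np.vstack([rng.normal(1.5, 1, (50, 2)), rng.normal(-1.5, 1, (50, 2))])

w_trained = train_source(X_src, y_src)          # phase 1
w_adapted = fine_tune_on_risk(w_trained, X_tgt)  # phase 2
```

After fine-tuning, the estimated risk on the target workload is lower than that of the purely source-trained model, while the decision boundary learned from the labeled data is preserved.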
