ABSTRACT

In machine reading comprehension (MRC), answers to questions may be found in either text or tables. Previous studies have shown that table-specific pre-trained language models perform well on tables; however, applying such models to input that contains both tables and text can be challenging. To address this issue, we introduce a hybrid reader model that handles both tables and text, using a modified K-Adapter architecture to effectively encode the structured information of tables. During training, knowledge about tabular data is infused into the pre-trained model while its original pre-trained weights are retained, so the model learns to use table information without sacrificing what it acquired during pre-training. With the proposed adapters, our hybrid reader achieves performance comparable or superior to that of a specialized model on the Korean MRC dataset KorQuAD 2.0. Experiments on an additional English MRC dataset further confirm that the proposed model performs on par with an existing model. Our study indicates that employing a single hybrid model instead of two separate models can require fewer computing resources and less time while achieving comparable or superior performance, particularly when the projection and adapter techniques are applied.
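To make the adapter-with-frozen-weights idea concrete, the following PyTorch sketch shows one plausible shape of such a model: a frozen base encoder, a small trainable adapter for tabular knowledge, and a projection that fuses the two representations. This is not the authors' code; the class names (`TableAdapter`, `HybridReader`), the hidden and bottleneck sizes, and the concatenate-then-project fusion step are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of infusing table knowledge
# into a frozen pre-trained encoder via a trainable adapter, in the spirit of
# K-Adapter. Sizes, names, and the fusion step are assumptions.
import torch
import torch.nn as nn


class TableAdapter(nn.Module):
    """Small bottleneck block trained on tabular inputs (hypothetical)."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 128):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.block = nn.TransformerEncoderLayer(
            d_model=bottleneck, nhead=4, batch_first=True
        )
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection leaves the frozen encoder's output intact.
        return hidden_states + self.up(self.block(self.down(hidden_states)))


class HybridReader(nn.Module):
    """Frozen base encoder + trainable table adapter + projection fusion."""

    def __init__(self, base_encoder: nn.Module, hidden_size: int = 768):
        super().__init__()
        self.base = base_encoder
        for p in self.base.parameters():  # retain original pre-trained weights
            p.requires_grad = False
        self.table_adapter = TableAdapter(hidden_size)
        # Projection fusing base and adapter outputs (an assumption here).
        self.project = nn.Linear(2 * hidden_size, hidden_size)
        self.qa_head = nn.Linear(hidden_size, 2)  # start/end logits for MRC

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        base_out = self.base(hidden_states)
        adapted = self.table_adapter(base_out)
        fused = self.project(torch.cat([base_out, adapted], dim=-1))
        return self.qa_head(fused)  # (batch, seq_len, 2)


if __name__ == "__main__":
    # Stand-in encoder; in practice this would be a pre-trained language model.
    encoder = nn.Sequential(nn.Linear(768, 768), nn.GELU())
    model = HybridReader(encoder)
    logits = model(torch.randn(2, 32, 768))
    print(logits.shape)  # torch.Size([2, 32, 2])
```

Because only the adapter, projection, and QA head receive gradients, the base encoder's pre-trained knowledge is preserved, which is the property the abstract highlights.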