Abstract

This study presents three deidentified large medical text datasets, named DISCHARGE, ECHO and RADIOLOGY, which contain 50K, 16K and 378K report–summary pairs derived from MIMIC-III, respectively. We establish strong baselines for automated abstractive summarization on the created datasets with pre-trained encoder-decoder language models, including BERT2BERT, BERTShare, RoBERTaShare, Pegasus, ProphetNet, T5-large, BART and GSUM. Further, building on the BART model, we leverage summaries sampled from the training set as prior-knowledge guidance: the encoder produces additional contextual representations of the guidance, which are then used to enhance the decoder's representations. The experimental results confirm that the proposed method improves both ROUGE scores and BERTScore.
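To make the guidance idea concrete, the following is a minimal, hypothetical sketch of conditioning a pre-trained BART summarizer on a guidance summary sampled from the training set. It is not the authors' implementation: the paper encodes the guidance with the encoder and fuses it into the decoder's representations, whereas this simplified single-encoder variant only prepends the guidance to the source report. The checkpoint name, separator choice, and generation settings are assumptions for illustration.

```python
# Hedged sketch, not the paper's architecture: a single BART encoder sees
# the guidance summary concatenated with the source report, approximating
# the idea of guiding generation with a sampled training-set summary.
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-large"  # assumed checkpoint
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def summarize_with_guidance(report: str, guidance: str, max_new_tokens: int = 128) -> str:
    """Generate a summary of `report`, prepending a guidance summary
    drawn from the training set as extra context for the encoder."""
    text = guidance + " </s> " + report  # separator token choice is an assumption
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(
        **inputs,
        num_beams=4,
        max_new_tokens=max_new_tokens,
        early_stopping=True,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Placeholder strings stand in for a MIMIC-III report/summary pair;
# the real datasets are not reproduced here.
report = "FINDINGS: The cardiac silhouette is mildly enlarged. No focal consolidation..."
guidance = "Mild cardiomegaly without acute cardiopulmonary process."
print(summarize_with_guidance(report, guidance))
```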
