Abstract
Automatic medical report generation is a challenging task because it requires accurately capturing and describing abnormal regions, especially the discrepancies between a patient's image and normal images. In most cases, descriptions of normal regions dominate the report, so existing methods may fail to focus on abnormal regions due to this data bias. We generate medical reports by combining contrastive learning with feature difference to capture and describe abnormal regions effectively. By modeling the discrepancy attributes between the input image and normal images, the method better represents the visual features of abnormal regions and produces more accurate diagnostic reports. Specifically, we propose a feature difference approach that makes the model focus on abnormal regions, and we further apply contrastive learning on top of the feature difference to enhance its visual representation, thereby improving model performance. Experimental results on the IU-Xray and MIMIC-CXR datasets demonstrate the effectiveness of our approach.
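To make the two ideas named above concrete, the following is a minimal sketch, not the authors' implementation: it computes a feature difference between the input image features and a pool of pre-extracted normal-image features, and an InfoNCE-style contrastive loss that pulls the input features toward a positive reference and away from the normal pool. All names (`encoder` outputs, `normal_bank`, the temperature value) are illustrative assumptions; the paper's exact formulation may differ.

```python
# Hedged sketch of feature difference + contrastive learning (assumed shapes/names).
import torch
import torch.nn.functional as F

def feature_difference(image_feats: torch.Tensor,
                       normal_bank: torch.Tensor) -> torch.Tensor:
    """Subtract the mean normal-image feature from the input features.

    image_feats: (B, D) visual features of the input X-rays.
    normal_bank: (N, D) features pre-extracted from normal X-rays.
    Returns a (B, D) difference tensor emphasizing abnormal content.
    """
    normal_proto = normal_bank.mean(dim=0, keepdim=True)        # (1, D)
    return image_feats - normal_proto                           # (B, D)

def contrastive_loss(anchor: torch.Tensor,
                     positive: torch.Tensor,
                     negatives: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: the anchor should be close to its positive
    and far from the negative (e.g. normal-image) features."""
    anchor = F.normalize(anchor, dim=-1)                        # (B, D)
    positive = F.normalize(positive, dim=-1)                    # (B, D)
    negatives = F.normalize(negatives, dim=-1)                  # (K, D)

    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)     # (B, 1)
    neg_sim = anchor @ negatives.T                              # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)      # positive sits at index 0
    return F.cross_entropy(logits, labels)
```

In such a setup, the difference features would be fed to the report decoder so that generation attends to abnormal content, while the contrastive term regularizes the visual encoder; this wiring is an assumption based on the abstract, not a statement of the paper's architecture.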