Abstract
In this study, we developed a similar-text retrieval system using Sentence-BERT (SBERT) for our database of closed medical malpractice claims and investigated its retrieval accuracy. We assigned each case in the database a short Japanese summary of the accident as well as two labels: the category, which mainly indicated the hospital department involved, and the process, which indicated the medical procedure that failed. We evaluated the accuracy of the similar-text retrieval system against the two labels using three different multilabel evaluation metrics. As the encoders of SBERT, we employed two pretrained BERT models, UTH-BERT and NICT-BERT, which were trained on large Japanese corpora, and we performed iterative optimization to train the SBERT models. The accuracies of the similar-text retrieval systems using the trained SBERT models were more than 15 points higher than those of the Okapi BM25 system and the pretrained SBERT system.
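The retrieval step described above can be sketched as ranking stored cases by cosine similarity between sentence embeddings. The snippet below is a minimal illustration, not the authors' implementation: the toy 3-dimensional vectors and case identifiers are hypothetical stand-ins for the SBERT embeddings of the Japanese case summaries.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, case_vecs, top_k=2):
    """Return the IDs of the top_k cases most similar to the query."""
    ranked = sorted(case_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [case_id for case_id, _ in ranked[:top_k]]

# Hypothetical 3-d vectors standing in for SBERT sentence embeddings.
cases = {
    "case_A": [0.9, 0.1, 0.0],
    "case_B": [0.1, 0.9, 0.0],
    "case_C": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]
print(retrieve(query, cases))  # -> ['case_A', 'case_C']
```

In the actual system, the query vector would be the SBERT embedding of a new incident summary, and retrieval quality would then be scored against the category and process labels of the returned cases.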