Abstract

Question answering (QA) systems have emerged as an alternative to information retrieval systems. We conducted a study to evaluate the efficiency of QA systems as terminological sources for physicians, specialized translators, and users in general. To this end, we analysed the performance of two open-domain and two restricted-domain QA systems on a collection of 150 definitional questions drawn from WebMed. We studied the sources each system used to retrieve its answers and then applied a range of evaluation measures (MRR, FHS, TRR, precision, recall) to assess answer quality across the four systems: MedQA, START, QuALiM and HONqa. Despite the limitations these systems exhibited, the results confirm that all four QA systems are valid and useful for obtaining definitional medical information, in that they offer coherent and precise answers.
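As an aside for readers unfamiliar with the metrics named above, the following sketch shows the standard definitions of MRR, precision, and recall (these formulas are the conventional ones, not taken from the paper; the function names and the example data are illustrative assumptions):

```python
# Standard IR evaluation metrics (illustrative sketch, not the paper's code).

def mean_reciprocal_rank(first_correct_ranks):
    """MRR: average of 1/rank of the first correct answer per question.
    A rank of None means the system returned no correct answer (scored 0)."""
    return sum(0.0 if r is None else 1.0 / r
               for r in first_correct_ranks) / len(first_correct_ranks)

def precision_recall(retrieved_correct, retrieved_total, relevant_total):
    """Precision: fraction of retrieved answers that are correct.
    Recall: fraction of all correct answers that were retrieved."""
    return retrieved_correct / retrieved_total, retrieved_correct / relevant_total

# Hypothetical example: over 3 questions, the first correct answer appears
# at rank 1, rank 2, and never, giving MRR = (1 + 0.5 + 0) / 3 = 0.5.
print(mean_reciprocal_rank([1, 2, None]))  # 0.5
```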
