Abstract

The goal of our research is to design improved interfaces for medical expert systems. Previously, we explored the use of graphical techniques to improve clinicians' acceptance of the user interface. Now that devices that accept spoken input are available, we wish to design interfaces that take advantage of this potentially more natural modality for interaction. To understand how clinicians might want to speak to a medical decision-support system, we carried out an experiment that simulated the availability of a spoken interface to the ONCOCIN medical expert system. ONCOCIN provides therapy advice for patients on complex cancer therapy protocols, based on a description of the patient's current medical status and laboratory-test values. In the experiment, we had oncologists present a clinical case while observing the ONCOCIN flowsheet display. A project member listened to the presentation and filled in values on the flowsheet, while also introducing purposeful misunderstandings of the input. The results suggest that each individual developed a stereotypical grammar for communicating with the program. Our experience with the purposeful miscommunications suggests particular ways to tailor requests for repetition based on the part of the utterance that was not understood.
