Abstract

Applying Artificial Intelligence (AI) to cancer drug recommendation can advance the development of personalized cancer therapy. However, most current AI drug recommendation methods cannot provide explainable inferences: their prediction procedures are black boxes, which makes it difficult to earn the trust of doctors and patients. With explainable inference, the key steps of the recommendation procedure can be located easily, facilitating model adjustment for wrong predictions and model generalization to new drugs and samples. In this paper, we analyze the necessity of developing explainable AI drug recommendation and propose an evaluation metric called the traceability rate. The traceability rate is calculated as the proportion of correct predictions that are traceable along the knowledge graph among all ground truths. We further conduct an experiment on a benchmark drug response dataset using the traceability rate as the evaluation metric, and the results show a trade-off between model performance and explainability. Therefore, explainable AI drug recommendation still demands further improvement to meet the requirements of clinical personalized therapy.
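As described above, the traceability rate is the fraction of all ground-truth cases for which the prediction is both correct and traceable along the knowledge graph. The following is a minimal sketch of that computation, assuming each ground-truth case has already been evaluated into two hypothetical flags, `correct` and `traceable` (these names are illustrative, not from the paper):

```python
# Minimal sketch of the traceability-rate metric described in the abstract.
# Assumes each ground-truth recommendation has been evaluated into two
# hypothetical flags: `correct` (prediction matches the ground truth) and
# `traceable` (a supporting path exists in the knowledge graph).

from dataclasses import dataclass
from typing import Iterable


@dataclass
class EvaluatedCase:
    correct: bool    # prediction matches the ground-truth drug response
    traceable: bool  # prediction can be traced along the knowledge graph


def traceability_rate(cases: Iterable[EvaluatedCase]) -> float:
    """Proportion of correct, knowledge-graph-traceable predictions
    among all ground-truth cases."""
    cases = list(cases)
    if not cases:
        return 0.0
    traced_correct = sum(1 for c in cases if c.correct and c.traceable)
    return traced_correct / len(cases)


# Example: 3 of 4 predictions are correct, but only 2 of those are traceable,
# so the traceability rate is 2 / 4 = 0.5.
cases = [
    EvaluatedCase(correct=True, traceable=True),
    EvaluatedCase(correct=True, traceable=False),
    EvaluatedCase(correct=True, traceable=True),
    EvaluatedCase(correct=False, traceable=False),
]
print(traceability_rate(cases))  # 0.5
```

Note that the denominator is the total number of ground truths, so the metric penalizes both incorrect predictions and correct predictions that lack a traceable path in the knowledge graph.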
