Abstract

Thanks to advances in instrumentation and analysis methods, particularly deep convolutional neural networks (CNNs), imaging is increasingly used as an analytical method in the pharmaceutical industry, providing information about a sample or process. CNNs are capable of extracting image features to perform tasks such as classifying particulate samples, including protein aggregates in liquid formulations as well as amorphous solid dispersions (ASDs) and crystals in the small-molecule space. Using image-based characterization of ASDs as an example, we highlight aspects of the interpretability and explainability of AI/machine learning image analysis models and how they relate to stakeholders' confidence. Using attribution and saliency maps obtained from Integrated Gradients and Class Activation Maps, we study the mechanisms behind the models' decision-making in order to build more robust models and identify failure modes after deployment. We propose good practices for future applications of deep learning image classification systems to enhance confidence and trust.
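
To make the attribution step concrete, below is a minimal sketch (not the authors' code) of computing Integrated Gradients attributions for a CNN image classifier, assuming a PyTorch model and the Captum library. The ResNet-18 backbone, target class index, and random input tensor are hypothetical stand-ins for an ASD classification model and a micrograph.

    # Minimal sketch: Integrated Gradients for a CNN image classifier.
    # The model, input, and target class here are hypothetical placeholders.
    import torch
    import torchvision.models as models
    from captum.attr import IntegratedGradients

    # Hypothetical classifier: a pretrained ResNet-18 standing in for an
    # ASD image classification model.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    # Hypothetical input: one 224x224 RGB image (e.g., a micrograph of an
    # ASD sample), already normalized.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    # Integrated Gradients accumulates gradients along a straight-line
    # path from a baseline (here, an all-zero image) to the input.
    ig = IntegratedGradients(model)
    baseline = torch.zeros_like(image)
    attributions, delta = ig.attribute(
        image,
        baselines=baseline,
        target=0,            # hypothetical class index, e.g., "crystalline"
        n_steps=50,
        return_convergence_delta=True,
    )

    # Per-pixel attribution map; large magnitudes mark image regions that
    # drove the predicted class score.
    saliency = attributions.squeeze(0).abs().sum(dim=0)
    print(saliency.shape, float(delta))

A Class Activation Map could be obtained analogously, for example with Captum's LayerGradCam applied to the model's final convolutional layer, which weights feature maps by their pooled gradients to localize class-relevant regions.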
