Abstract
With the increasing accuracy of machine learning (ML) methods and the growing availability of data, their potential use in medical and clinical applications has attracted considerable interest. However, the main hurdle to the translational use of ML methods is their lack of explainability, especially when non-linear methods are used. Explainable (i.e. human-interpretable) methods can provide insights into disease mechanisms and, equally importantly, promote clinician-patient trust, in turn aiding wider societal acceptance of ML methods. Here, we empirically test a method for engineering complex yet interpretable representations of base features through the evolution of a context-free grammar (CFG). We show that, combined with a simple ML algorithm, the evolved features yield higher accuracy on several benchmark datasets, and we then apply the approach to a real-world problem: diagnosing Alzheimer’s disease (AD) from magnetic resonance imaging (MRI) data. We further demonstrate high performance on a hold-out dataset for the prognosis of AD.
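To make the core idea concrete, the sketch below shows how a CFG can generate human-readable candidate feature expressions from base features. This is a minimal illustration under assumed names (the grammar rules, operators, and variables `x1`–`x3` are all hypothetical), not the authors' implementation; it covers only the derivation step, not the evolutionary search or the downstream ML model.

```python
import random

# Illustrative grammar: non-terminals map to lists of possible expansions.
# Terminals are base feature names or operator symbols.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"],
               ["<func>", "(", "<expr>", ")"],
               ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<func>": [["log"], ["sqrt"]],
    "<var>":  [["x1"], ["x2"], ["x3"]],
}

def derive(symbol="<expr>", max_depth=4, rng=random):
    """Randomly expand a non-terminal into a readable feature expression."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: emit as-is
    # Past the depth limit, drop self-recursive rules so derivation terminates.
    rules = [r for r in GRAMMAR[symbol] if max_depth > 0 or symbol not in r]
    rule = rng.choice(rules)
    return "".join(derive(s, max_depth - 1, rng) for s in rule)

rng = random.Random(0)
# A population of interpretable candidate features, e.g. "log(x2)*x1".
candidates = [derive(rng=rng) for _ in range(5)]
```

In a grammatical-evolution setting, expressions like these would be scored by the accuracy of a simple classifier trained on them, with the best candidates mutated and recombined over generations; because each feature remains a readable formula over the base features, the resulting representation stays interpretable.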