Abstract

Theoretical advances in machine learning have been reflected in many research implementations, including in safety-critical domains such as medicine. However, this has not translated into a large number of practical applications used by domain experts. This bottleneck is in significant part due to the lack of interpretability of the non-linear models derived from data. This lecture will review five broad categories of interpretability in machine learning: nomograms, rule induction, fuzzy logic, graphical models and topographic mapping. Links between the different approaches will be drawn around the common theme of designing interpretability into the structure of machine learning models, then using the armoury of advanced analytical methods to achieve generic non-linear approximation capabilities.
