Abstract

A challenge for deploying deep learning research in the real world is the availability of techniques that explain a model's predictions, particularly in light of potential legal requirements to give an account of algorithmic outcomes for certain use-cases. Convolutional neural networks (CNNs) are regularly proposed as an effective method for Android malware detection at scale. However, there is a lack of studies that actually explain the predictions of these classifiers or assess how well their internal importance measures correlate with state-of-the-art explainability methods. In this paper we present two research contributions that begin to address this issue. Firstly, we propose a novel method to identify locations in an Android app's opcode sequence that our CNN deems important and that appear to contribute to malware detection. Secondly, we compare the locations highlighted by the CNN with those considered important by the state-of-the-art explainability method LIME. Using the publicly available Drebin benchmark dataset, our results show that, for eight different malware families, when location importance is averaged across all samples in each family, the locations in an opcode sequence considered most malicious by our CNN match closely with those considered most malicious by LIME. This gives confidence that the CNN is able to focus its attention on patterns of malicious opcodes in an Android app. As well as helping to validate the use of CNNs for Android malware detection, our method may benefit the wider field of Android malware analysis by highlighting opcode sequence locations worthy of investigation.
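As a point of reference for the comparison described above, the sketch below shows how the open-source LIME library's text explainer can be applied to a classifier that consumes space-separated opcode strings. This is a minimal illustration under assumed names, not the paper's implementation: predict_proba is a hypothetical stand-in for a trained CNN, and the sample opcode sequence is invented purely for demonstration.

# Minimal illustrative sketch (not the paper's method): explaining an
# opcode-sequence classifier with LIME's text explainer.
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(opcode_sequences):
    # Hypothetical classifier_fn standing in for a trained CNN: maps a list of
    # space-separated opcode strings to an (n, 2) array of
    # [P(benign), P(malware)] scores.
    scores = np.array([
        seq.count("invoke-virtual") / max(len(seq.split()), 1)
        for seq in opcode_sequences
    ])
    return np.vstack([1.0 - scores, scores]).T

# Split on whitespace so hyphenated opcode names stay intact as single tokens.
explainer = LimeTextExplainer(class_names=["benign", "malware"],
                              split_expression=r"\s+")

sample = "invoke-virtual move-result const-string invoke-virtual return-void"
explanation = explainer.explain_instance(sample, predict_proba, num_features=5)

# Opcode tokens ranked by their estimated contribution to the malware class.
for token, weight in explanation.as_list():
    print(f"{token}: {weight:+.3f}")

The token weights returned by as_list() are the kind of per-location importance scores that can then be compared against importance derived from the CNN itself, as the abstract describes.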