With over 55 million people worldwide affected by dementia and nearly 10 million new cases reported annually, Alzheimer's disease is a prevalent and challenging neurodegenerative disorder. Despite significant advances in machine learning techniques for Alzheimer's disease detection, the widespread adoption of deep learning models raises concerns about their explainability. In the context of Alzheimer's disease detection, the lack of explainability of deep learning models for online handwriting analysis is a critical gap in the literature. This paper addresses this challenge by interpreting the predictions of a Convolutional Neural Network applied to multivariate time series derived from online handwriting, recorded while subjects drew continuous loop series on a graphical tablet. Our explainability methods reveal distinct motor behavior characteristics in healthy individuals and in those diagnosed with Alzheimer's disease: healthy subjects exhibited consistent, smooth movements, while Alzheimer's patients showed erratic patterns marked by abrupt stops and direction changes. These findings underscore the critical role of explainability in translating complex models into clinically relevant insights. Our research contributes to earlier diagnosis by providing reliable insights to stakeholders involved in patient care and intervention strategies. By bridging the gap between machine learning predictions and clinical insights, our work fosters a more effective and understandable application of advanced models to Alzheimer's disease assessment.