• Use of logic approaches, taking advantage of their explainability and expressiveness to design ethical, explainable and justified medical systems.
• Use of argumentation theory in Medical Informatics, overviewing existing approaches in the literature.
• Argumentation Systems for Medical Informatics.

Artificial Intelligence algorithms are powerful in performing accurate predictions, but they are often considered black boxes because they provide no explanation of how outputs are derived from inputs or why a decision is taken. There is therefore an urgent need for completely transparent and eXplainable Artificial Intelligence (XAI), as also recognized by the explicit inclusion of the right to explanation in the General Data Protection Regulation (GDPR). There has been much study on diagnosis, decision support, and interpretability, and there is significant interest in the development of Explainable AI in the realm of medicine. Interpretability in the medical field is not just an intellectual curiosity but a key factor: medical choices impact the lives of patients and involve risk and responsibility for clinicians. This proposal investigates the benefit of using logic approaches for eXplainable AI by evidencing how their natural characteristics of explainability and expressiveness help in the design of ethical, explainable and justified intelligent systems. More specifically, the paper focuses on the use of argumentation theory in Medical Informatics by overviewing existing approaches in the literature. The overview categorizes approaches, on the basis of the specific purpose the argumentation is used for, into the following categories: Argumentation for Medical Decision Making, Argumentation for Medical Explanations, and Argumentation for Medical Dialogues.
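To make the notion of argumentation concrete, the following is a minimal sketch of a Dung-style abstract argumentation framework computing the grounded extension, the most skeptical set of collectively acceptable arguments. It is a generic illustration, not an implementation of any specific system from the surveyed literature, and the medical argument names are hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of a Dung abstract argumentation
    framework as the least fixed point of the characteristic function:
    an argument is acceptable w.r.t. a set S if every attacker of it
    is itself attacked by some argument in S."""
    # Map each argument to the set of arguments attacking it.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            # Unattacked arguments are acceptable vacuously (all() on empty).
            if all(attackers[b] & extension for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable


# Hypothetical clinical example: an allergy concern attacks a treatment
# recommendation, but a test result attacks (defeats) the allergy concern.
args = {"give_aspirin", "allergy_risk", "allergy_ruled_out"}
atts = {("allergy_risk", "give_aspirin"),
        ("allergy_ruled_out", "allergy_risk")}

print(grounded_extension(args, atts))
# → {'give_aspirin', 'allergy_ruled_out'}
```

Here the framework justifies the recommendation: "allergy_ruled_out" is unattacked, it defends "give_aspirin" against "allergy_risk", and the sequence of attacks and defenses itself forms a human-readable explanation, which is precisely the explainability property the paper highlights.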