This paper investigates applications of explainable AI (XAI) in healthcare, where XAI aims to bring transparency, fairness, accuracy, generality, and comprehensibility to the results produced by AI and ML algorithms in decision-making systems. The black-box nature of AI and ML systems remains a challenge in healthcare, and interpretable AI and ML techniques can potentially address this issue. Here we critically review previous studies on the interpretability of ML and AI methods in medical systems. Throughout the paper we describe various XAI methods, including Layer-wise Relevance Propagation (LRP), Uniform Manifold Approximation and Projection (UMAP), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), ANCHOR, Contextual Importance and Utility (CIU), Training Calibration-based Explainers (TraCE), Gradient-weighted Class Activation Mapping (Grad-CAM), t-distributed Stochastic Neighbor Embedding (t-SNE), NeuroXAI, and the Explainable Cumulative Fuzzy Class Membership Criterion (X-CFCMC), together with the medical conditions to which these methods have been applied. The paper also discusses how AI and ML technologies can transform healthcare services. We summarize the usability and reliability of the presented methods, drawing on studies of XGBoost for mediastinal cysts and tumors, a 3D brain tumor segmentation network, and the TraCE method for medical image analysis. Overall, this paper aims to contribute to the growing field of XAI in healthcare and to provide insights for researchers, practitioners, and decision-makers in the healthcare industry. Finally, we discuss the performance of XAI methods applied in medical healthcare systems. A brief description of the implemented method is provided in the methodology section.