Amidst the coronavirus disease 2019 (COVID-19) pandemic, researchers are exploring innovative approaches to improve diagnostic accuracy. One avenue is applying deep learning models to lung X-ray images for COVID-19 diagnosis, complementing existing tests such as reverse transcription polymerase chain reaction (RT-PCR). However, trusting these models, often viewed as black boxes, remains a challenge. To address this, six explainable artificial intelligence (XAI) techniques, namely local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP), integrated gradients, SmoothGrad, gradient-weighted class activation mapping (Grad-CAM), and Layer-CAM, are applied to interpret four transfer learning models: VGG16, ResNet50, InceptionV3, and DenseNet121. The goal is to understand how these models work and the rationale behind their predictions. Validating the resulting explanations with medical experts is difficult due to time and resource constraints, as well as the scarcity of annotated X-ray datasets. To mitigate this, a voting mechanism that aggregates the different XAI methods across the models is proposed. This approach highlights regions of lung infection and can reduce biases arising from any individual model's architecture. If successful, this research could pave the way toward an automated system for annotating infection regions, strengthening confidence in predictions and aiding the development of more effective diagnostic tools for COVID-19.
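The abstract does not specify how the voting mechanism combines the XAI outputs. One plausible realization, sketched below under stated assumptions, is to normalize each method's saliency map, binarize it, and keep only pixels flagged by a majority of methods; the function `vote_regions` and its `threshold`/`min_votes` parameters are illustrative names, not taken from the paper.

```python
import numpy as np

def normalize(heatmap):
    """Rescale a saliency map to the [0, 1] range."""
    h = heatmap.astype(float)
    rng = h.max() - h.min()
    return (h - h.min()) / rng if rng > 0 else np.zeros_like(h)

def vote_regions(heatmaps, threshold=0.5, min_votes=None):
    """Combine saliency maps from several XAI methods/models by majority vote.

    Each map is normalized and binarized at `threshold`; a pixel is kept
    only if at least `min_votes` maps flag it (default: strict majority).
    """
    masks = [normalize(h) >= threshold for h in heatmaps]
    if min_votes is None:
        min_votes = len(masks) // 2 + 1  # strict majority of methods
    votes = np.sum(masks, axis=0)        # per-pixel count of agreeing maps
    return votes >= min_votes            # boolean consensus mask

# Toy example: three 2x2 "saliency maps" standing in for LIME/SHAP/Grad-CAM outputs
maps = [np.array([[0.90, 0.10], [0.80, 0.20]]),
        np.array([[0.70, 0.20], [0.10, 0.30]]),
        np.array([[0.95, 0.05], [0.90, 0.10]])]
consensus = vote_regions(maps)  # True where at least 2 of 3 maps agree
```

In an actual pipeline, `maps` would hold the heatmaps produced for one X-ray by each XAI method (resized to a common resolution), and the consensus mask would serve as the candidate annotation of the infected region.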