Interpretability is just as important as accuracy for complex models, particularly deep learning models. Explainable artificial intelligence (XAI) approaches have been developed to address this need. The XAI literature for spectroscopy mainly emphasizes the analysis of individual features, with limited application of zone-level analysis. Individual-feature methods, such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), have limitations stemming from their dependence on perturbations: they measure how a model responds to abrupt changes in individual feature values. While this can identify the most impactful features, replacing feature values with zero or with their expected values introduces abrupt shifts that may not represent realistic spectra. The result can be interpretations that are mathematically and computationally valid but neither physically realistic nor intuitive to humans.

Our proposed method does not rely on perturbations of individual features. Instead, it targets "spectral zones" and directly estimates the effect of group perturbations on a trained model. Because the method operates on an already trained model, factors such as sample size, hyperparameter selection, and other training-related considerations are not its primary focus. To achieve this, we developed modified versions of LIME and SHAP that perform group perturbations, improving the explainability and realism of the explanations while reducing noise in the interpretability plots. We also employed an efficient approach for determining spectral zones in complex spectra with indistinct boundaries; alternatively, users can define the zones themselves using domain-specific knowledge.
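To make the zone-wise (group) perturbation idea concrete, the sketch below perturbs contiguous wavelength regions as whole groups, replacing each zone with the corresponding region of a reference spectrum, and records the change in a trained model's prediction. This is a minimal illustrative sketch under assumed names and shapes, not the modified LIME/SHAP implementation described here: the function `zone_importance`, the background-replacement strategy, the toy model, and the evenly spaced zones are all hypothetical.

```python
import numpy as np

def zone_importance(model_predict, spectrum, zones, background, n_repeats=32, seed=None):
    """Estimate how perturbing whole spectral zones changes a trained model's output.

    model_predict : callable mapping an (n_samples, n_wavelengths) array to predictions
    spectrum      : 1-D array, the spectrum being explained
    zones         : list of (start, stop) index pairs defining contiguous zones
    background    : 2-D array of reference spectra used to replace a zone, so the
                    perturbed input stays spectrum-like instead of being zeroed out
    """
    rng = np.random.default_rng(seed)
    base = model_predict(spectrum[None, :])[0]
    importances = np.zeros(len(zones))

    for z, (start, stop) in enumerate(zones):
        diffs = []
        for _ in range(n_repeats):
            # Replace the entire zone at once (a group perturbation) with the
            # same region taken from a randomly chosen background spectrum.
            ref = background[rng.integers(len(background))]
            perturbed = spectrum.copy()
            perturbed[start:stop] = ref[start:stop]
            diffs.append(base - model_predict(perturbed[None, :])[0])
        # Average prediction change when this zone is disturbed as a group.
        importances[z] = np.mean(diffs)
    return importances


# Toy usage with a linear "model" whose response depends on one region only.
rng = np.random.default_rng(0)
n_wl = 200
weights = np.zeros(n_wl)
weights[50:70] = 1.0                       # only this wavelength region matters
model_predict = lambda X: X @ weights
spectrum = rng.normal(size=n_wl)
background = rng.normal(size=(100, n_wl))
zones = [(i, i + 20) for i in range(0, n_wl, 20)]   # evenly spaced zones for illustration
print(zone_importance(model_predict, spectrum, zones, background))
```

In this toy example, only the zone covering indices 50 to 70 receives a large importance score, which mirrors the intent of zone-level explanations: the effect of a whole spectral region is assessed at once, rather than attributing importance to individual wavelengths through abrupt single-feature substitutions.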