In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. However, traditional feature selection (FS) methods often fail to enhance ML model performance and can even degrade it. To address these issues, we propose an innovative method that uses eXplainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated Local Interpretable Model-agnostic Explanations (LIME) to capture specific localized patterns. The proposed SHAP and LIME-aggregated feature selection (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, benefit significantly from the SLA-FS framework, while k-nearest neighbors (KNN) lags behind. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter and wrapper methods as well as all prior methods evaluated on this dataset, showcasing its potential.