Smart farming is undergoing a transformation through the integration of machine learning (ML) and artificial intelligence (AI) to improve crop recommendations. Despite these advances, a critical gap remains: opaque ML models fail to explain their predictions, creating a trust deficit among farmers. This research addresses that gap by applying explainable AI (XAI) techniques, focusing specifically on crop recommendation in smart farming. An experiment was conducted on a crop recommendation dataset, applying XAI methods including Local Interpretable Model-agnostic Explanations (LIME), Diverse Counterfactual Explanations (DiCE, via the dice_ml library), and SHapley Additive exPlanations (SHAP). These methods were used to generate local and counterfactual explanations, enhancing model transparency in line with the General Data Protection Regulation (GDPR), which mandates a right to explanation. The results demonstrated the effectiveness of XAI in making ML models more interpretable and trustworthy. For instance, local explanations from LIME provided insight into individual predictions, while counterfactual scenarios from DiCE suggested alternative crops that could be cultivated. Feature importance from SHAP gave a global perspective on the factors influencing the model's decisions. The study's statistical analysis indicated that integrating XAI increased farmers' understanding of the AI system's recommendations, potentially reducing food insufficiency by enabling cultivation of alternative crops on the same land.
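To make the workflow concrete, the sketch below (not code from the paper) shows how such local, global, and counterfactual explanations could be generated with the named libraries for a typical crop-recommendation classifier. The feature names (N, P, K, temperature, humidity, ph, rainfall), the CSV filename, the random-forest model, and the chosen counterfactual target class are assumptions made purely for illustration.

```python
# Minimal sketch, assuming a tabular crop-recommendation dataset with the
# columns listed below and a string "label" column naming the crop.
import pandas as pd
import shap
import dice_ml
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("Crop_recommendation.csv")  # hypothetical path
features = ["N", "P", "K", "temperature", "humidity", "ph", "rainfall"]
le = LabelEncoder()
X = df[features]
y = pd.Series(le.fit_transform(df["label"]), name="label")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Any probabilistic classifier works; a random forest is used here for illustration.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# --- LIME: local explanation of a single recommendation ---
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=features,
    class_names=list(le.classes_), mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5)
print(lime_exp.as_list())  # feature contributions for this one prediction

# --- SHAP: global view of which features drive the model ---
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# --- DiCE (dice_ml): counterfactual "what would have to change" scenarios ---
d = dice_ml.Data(dataframe=pd.concat([X_train, y_train], axis=1),
                 continuous_features=features, outcome_name="label")
m = dice_ml.Model(model=model, backend="sklearn")
dice_exp = dice_ml.Dice(d, m, method="random")
# desired_class=1 is an arbitrary alternative crop index, chosen for illustration.
cfs = dice_exp.generate_counterfactuals(
    X_test.iloc[[0]], total_CFs=3, desired_class=1)
cfs.visualize_as_dataframe()
```

In this kind of pipeline, the LIME output supports per-field explanations for a single recommendation, the SHAP summary supports the global feature-importance view described above, and the DiCE counterfactuals correspond to the alternative-crop scenarios the study reports.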