Abstract

The demand for explainable sentiment analysis has intensified, emphasizing the need for models that are both accurate and interpretable. This research introduces the Multi-Aspect Framework for Explainable Sentiment Analysis (MAFESA), a model that integrates aspect extraction, sentiment prediction, and explainability. Using Latent Dirichlet Allocation (LDA) for aspect extraction and hierarchical neural networks for sentiment prediction, MAFESA outperforms state-of-the-art baseline models on benchmark datasets including IMDB Movie Reviews, Amazon Product Reviews, and Twitter Sentiment Analysis. An explainability module built around techniques such as LIME provides insight into the model’s decision-making, making predictions transparent and trustworthy. Performance evaluations, supported by an ablation study, cross-validation, and statistical significance tests, attest to MAFESA’s robustness and generalizability. A qualitative analysis further demonstrates the model’s ability to discern aspect-level nuances and to deliver clear explanations for its sentiment predictions. This research thus provides a framework that balances predictive accuracy with interpretability.
