The use of artificial intelligence (AI) systems has increased significantly in the past few years. An AI system is expected to provide accurate predictions, but it is also crucial that its decisions are humanly interpretable, i.e., anyone should be able to understand and comprehend the results it produces. AI systems are now deployed even for simple decision support and are accessible to the common user at their fingertips. This growth in AI usage has come with its own limitation: interpretability. This work applies explainability methods such as local interpretable model-agnostic explanations (LIME) to interpret the results of various black-box models. The conclusion is that the bidirectional long short-term memory (LSTM) model is superior for sentiment analysis. The operation of a random forest classifier, a black-box model, is examined using explainable artificial intelligence (XAI) techniques such as LIME. The use of LIME reveals that the features relied on by the random forest model for classification are not entirely correct. These insights can be used to enhance model performance, which raises the trustworthiness and legitimacy of AI systems.
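To make the LIME idea concrete, the following is a minimal, self-contained sketch of how a local, model-agnostic explanation can be obtained for a text classifier: the input sentence is perturbed by randomly masking words, the black box is queried on each perturbation, and each word's local importance is estimated by comparing the model's average output when the word is kept versus dropped. The toy keyword-based scorer here is purely illustrative (it stands in for the paper's random forest or LSTM), and the function and variable names are hypothetical, not from the original work; a real application would use the `lime` library against the trained model.

```python
import random

# Toy black-box sentiment scorer. This is an illustrative stand-in for a
# trained model (e.g. a random forest or LSTM), NOT the paper's model.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "boring", "awful"}

def black_box(words):
    """Return a sentiment score in (0, 1) for a list of words."""
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return 1.0 / (1.0 + 2.718281828 ** (-score))  # logistic squashing

def lime_style_attribution(text, n_samples=500, seed=0):
    """Simplified LIME-style attribution: mask words at random, query the
    black box on each perturbed sentence, and score each word by the mean
    model output when it is present minus when it is absent."""
    rng = random.Random(seed)
    words = text.split()
    kept = {w: [] for w in words}     # model outputs when word w was kept
    dropped = {w: [] for w in words}  # model outputs when word w was masked
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        y = black_box([w for w, keep in zip(words, mask) if keep])
        for w, keep in zip(words, mask):
            (kept if keep else dropped)[w].append(y)
    return {
        w: sum(kept[w]) / max(len(kept[w]), 1)
           - sum(dropped[w]) / max(len(dropped[w]), 1)
        for w in words
    }

weights = lime_style_attribution("the plot was great but the pacing was boring")
for word, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {weight:+.3f}")
```

Words that push the prediction toward the positive class receive positive weights ("great"), words pulling it negative receive negative weights ("boring"), and neutral filler words land near zero; inspecting these weights is exactly how one can notice, as the paper reports, that a classifier is relying on features that are not entirely correct.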