Humans are social creatures who communicate through an assortment of spoken languages. Deaf and mute individuals communicate in a comparable manner through sign language, but many hearing people do not understand it. As a result, there is a need for a system that facilitates communication between the hearing and hard-of-hearing communities. This research presents a real-time Indian Sign Language (ISL) recognition system for 24 dynamic signs using the Mediapipe framework and an LSTM network. The proposed method trains an LSTM to differentiate between signs using a dataset of 24 dynamic gestures. To build the dataset, the pre-trained Holistic model from the Mediapipe framework is used as a feature extractor. The results demonstrate that the proposed approach achieves 97% test accuracy.
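As a minimal sketch of the feature-extraction step described above, the snippet below flattens per-frame landmarks into one feature vector per frame and stacks frames into a sequence suitable as LSTM input. The landmark counts follow the standard Mediapipe Holistic output (33 pose landmarks with visibility, 468 face and 21 per-hand landmarks); the `flatten_keypoints` helper and the 30-frame sequence length are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate one frame's Holistic landmarks into a single vector.

    Components Mediapipe failed to detect (e.g. an occluded hand) are
    replaced with zeros, a common convention in keypoint pipelines.
    Dimensions assume standard Holistic output: 33x4 pose (x, y, z,
    visibility), 468x3 face, 21x3 per hand -> 1662 values per frame.
    """
    p = pose.flatten() if pose is not None else np.zeros(33 * 4)
    f = face.flatten() if face is not None else np.zeros(468 * 3)
    lh = left_hand.flatten() if left_hand is not None else np.zeros(21 * 3)
    rh = right_hand.flatten() if right_hand is not None else np.zeros(21 * 3)
    return np.concatenate([p, f, lh, rh])

# Stack frames into a fixed-length clip for the LSTM.
SEQ_LEN = 30  # assumed frames per gesture clip
rng = np.random.default_rng(0)  # stand-in for real Mediapipe landmarks
frames = [
    flatten_keypoints(
        rng.random((33, 4)), rng.random((468, 3)),
        rng.random((21, 3)), None,  # right hand not detected this frame
    )
    for _ in range(SEQ_LEN)
]
sequence = np.stack(frames)  # shape (30, 1662): one LSTM input sample
```

An LSTM classifier would then consume batches of such `(frames, features)` sequences and emit one of the 24 sign classes per clip.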