Abstract: This study introduces a real-time system for recognizing hand poses and gestures from Indian Sign Language (ISL) using grid-based (control-point) features. The primary aim is to bridge communication barriers between hearing- and speech-impaired individuals and the broader society. Existing solutions often struggle with either accuracy or real-time performance, whereas our system performs well on both, accurately identifying ISL hand gestures. In addition to its recognition capabilities, the system offers a 'Learning Portal' through which users can efficiently learn and practice ISL, ASL, and other sign languages, enhancing its accessibility and effectiveness. Notably, the system operates solely on smartphone camera input, eliminating the need for external hardware such as gloves or specialized sensors and thus ensuring user-friendliness. Key techniques include hand detection via the MediaPipe and cvzone modules, and grid-based feature extraction, which transforms hand poses into concise feature vectors. These features are then compared against a TensorFlow-based database for classification and accurate translation.
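The grid-based feature extraction mentioned above could be sketched roughly as follows. This is an illustrative example only, not the paper's actual implementation: the grid size, the occupancy-count encoding, and the synthetic landmark coordinates are assumptions, and in practice the (x, y) points would come from a hand detector such as MediaPipe, which returns 21 landmarks per hand.

```python
def grid_features(landmarks, grid_size=4):
    """Map normalized (x, y) landmarks onto a grid_size x grid_size
    occupancy grid and return it flattened as a feature vector.

    Hypothetical sketch: the real system's grid layout and encoding
    may differ.
    """
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    # Normalize to the hand's bounding box so the resulting feature
    # vector is invariant to the hand's position and scale in frame.
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    w = (max_x - min_x) or 1.0
    h = (max_y - min_y) or 1.0

    grid = [0] * (grid_size * grid_size)
    for x, y in landmarks:
        # Clamp to the last cell so points on the max edge stay in range.
        col = min(int((x - min_x) / w * grid_size), grid_size - 1)
        row = min(int((y - min_y) / h * grid_size), grid_size - 1)
        grid[row * grid_size + col] += 1
    return grid

# Synthetic landmarks for illustration (MediaPipe would supply 21 points).
points = [(0.10, 0.20), (0.15, 0.25), (0.80, 0.90), (0.50, 0.55)]
print(grid_features(points, grid_size=2))  # → [2, 0, 0, 2]
```

A fixed-length vector like this can then be fed to a TensorFlow classifier for gesture prediction, since every hand pose maps to the same number of features regardless of hand size or position.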