Abstract
In the digital era, effective and inclusive communication is essential, particularly in video conferencing platforms widely adopted post-pandemic for professional and social interactions. However, these platforms often lack accessibility features for individuals with hearing and speech impairments. This research presents a lightweight, real-time sign language translator designed for seamless integration with popular video conferencing platforms such as Google Meet, Microsoft Teams, Zoom, and Discord. Utilizing deep learning algorithms, image processing techniques, and virtual camera technology, the system translates sign language into written captions displayed in real time. By fostering accessibility and inclusivity, this innovation empowers deaf and mute individuals to participate actively in virtual meetings, bridging communication gaps and advancing equal opportunities for all.
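To illustrate the pipeline described above (webcam capture, sign recognition, caption overlay, virtual camera output), the following is a minimal sketch assuming OpenCV and the pyvirtualcam library, with a virtual camera backend such as OBS Virtual Camera installed. The `predict_sign` function is a hypothetical stand-in for the paper's deep-learning classifier, not the authors' actual model.

```python
# Minimal sketch: overlay recognized-sign captions on webcam frames and feed
# them to a virtual camera that video conferencing apps can select as input.
# Assumes OpenCV, pyvirtualcam, and a virtual camera backend are installed.
import cv2
import pyvirtualcam


def predict_sign(frame):
    # Hypothetical placeholder: a real implementation would run the trained
    # sign language model here and return the recognized word or phrase.
    return "HELLO"


def main():
    cap = cv2.VideoCapture(0)  # physical webcam
    width, height, fps = 640, 480, 20

    with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (width, height))

            # Recognize the current sign and overlay it as a caption.
            caption = predict_sign(frame)
            cv2.putText(frame, caption, (10, height - 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)

            # pyvirtualcam expects RGB frames; OpenCV delivers BGR.
            cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            cam.sleep_until_next_frame()

    cap.release()


if __name__ == "__main__":
    main()
```

In a setup like this, the conferencing platform simply treats the virtual camera as another webcam, which is what allows the translator to work with Google Meet, Microsoft Teams, Zoom, or Discord without any platform-specific integration.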