In modern society, effective communication is crucial for social interaction, personal growth, and professional success. Yet individuals with hearing and speech impairments often encounter significant obstacles in expressing themselves and understanding others, leading to communication barriers and social isolation. Sign language serves as a vital means of communication for the Deaf and hard-of-hearing community, allowing them to convey thoughts, ideas, and emotions through gestures and expressions. This research paper introduces a "Sign Language Detection and Recognition using Deep Learning" system to address these challenges. Utilizing Convolutional Neural Networks (CNNs) and image-processing techniques such as Histogram of Oriented Gradients (HOG), our system detects and interprets sign language gestures in real time. Additionally, it features a two-way communication interface that facilitates seamless interaction between individuals with different communication modalities, enabling inclusive communication between Deaf, hard-of-hearing, and hearing individuals. Through a comprehensive methodology encompassing data collection, preprocessing, model training, and system integration, our research aims to advance technology-driven solutions that promote inclusivity, equality, and dignity for individuals with hearing and speech impairments. By empowering effective communication and full participation in social, educational, and professional contexts, this research endeavors to create a more inclusive and accessible world for all members of society.
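To make the HOG component of the pipeline concrete, the sketch below computes a simplified Histogram of Oriented Gradients descriptor for a grayscale gesture patch using only NumPy. This is an illustrative reimplementation, not the paper's actual code: the function name, cell size, and bin count are assumptions, and a production system would typically use a library routine such as `skimage.feature.hog` instead.

```python
import numpy as np

def hog_features(image, cell_size=8, n_bins=9):
    """Simplified HOG descriptor (illustrative; parameters are assumptions).

    image: 2-D grayscale array whose sides are multiples of cell_size.
    Returns a flat vector of L2-normalised per-cell orientation histograms.
    """
    # Image gradients via central differences (row, then column direction)
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned gradient orientation in degrees, folded into [0, 180)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys = slice(cy * cell_size, (cy + 1) * cell_size)
            xs = slice(cx * cell_size, (cx + 1) * cell_size)
            # Vote each pixel's gradient magnitude into its orientation bin
            bins = (orientation[ys, xs] / bin_width).astype(int) % n_bins
            np.add.at(hist[cy, cx], bins.ravel(), magnitude[ys, xs].ravel())
    # L2-normalise each cell's histogram for illumination robustness
    norm = np.linalg.norm(hist, axis=-1, keepdims=True)
    return (hist / np.maximum(norm, 1e-6)).ravel()

# Example: a 32x32 synthetic patch with a smooth vertical intensity ramp
patch = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))
feat = hog_features(patch)
print(feat.shape)  # 4x4 cells x 9 bins -> (144,)
```

In the full system described above, such a descriptor (or the raw image) would be fed to a CNN classifier that maps each gesture to its corresponding sign.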