This work focuses on techniques for detecting (sign) hand gestures and discusses their merits and limitations under various circumstances. Hand segmentation theory and a hand detection system are used to construct a hand gesture recognition pipeline in Python with OpenCV. The use of hand gestures as a natural interface motivates research on gesture representations, taxonomies, recognition methods and algorithms, and software platforms and frameworks, all of which are briefly covered in this work. All processing is performed on webcam input using Keras and TensorFlow. Growing public acceptance and funding for multinational projects emphasize the need for sign language technology, and the demand for computer-based solutions for deaf people is significant in the current age of technology. Researchers have been tackling this problem for quite some time, and the results are promising. This work also presents a comprehensive review of vision-based sign recognition methodologies, emphasizing that factors beyond an algorithm's recognition accuracy must be considered when predicting its success in real-world applications. The project matches sign language actions captured by webcam against dataset images spanning various categories of sign gestures on which the model has already been trained, and it applies a neural network to compare the captured actions with the dataset images. The coding language used is Python 3.10.
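As a minimal illustrative sketch of such a pipeline (not the exact implementation described above), the following Python code captures webcam frames with OpenCV, crops a fixed region of interest for the hand, and classifies the gesture with a Keras model; the model file name, input size, and gesture label list are assumptions made for the example.

# Minimal sketch: webcam sign-gesture prediction with OpenCV + Keras.
# Assumptions (not from this work): a trained model "sign_model.h5",
# 64x64 RGB input, and a hypothetical label list GESTURE_LABELS.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

GESTURE_LABELS = ["hello", "thanks", "yes", "no"]  # hypothetical gesture categories
model = load_model("sign_model.h5")                # assumed pre-trained model

cap = cv2.VideoCapture(0)                          # open the default webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[100:300, 100:300]                  # fixed region of interest for the hand
    img = cv2.resize(roi, (64, 64))                # match the assumed model input size
    img = img.astype("float32") / 255.0            # normalize pixel values to [0, 1]
    probs = model.predict(np.expand_dims(img, 0), verbose=0)[0]
    label = GESTURE_LABELS[int(np.argmax(probs))]  # highest-probability gesture
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, label, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Sign gesture recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()

In practice, the fixed crop would be replaced by the hand segmentation and detection stage described above, so that the classifier receives only the segmented hand region.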