People who lack the sense of hearing and the ability to speak face undeniable communication barriers in their lives. People with hearing and speech impairments communicate among themselves and with others using sign language, whose signs are formed from hand shapes and movements. Sign language is essentially unknown to the larger portion of the human population, which uses spoken and written language for communication. It is therefore necessary to develop technological tools for the interpretation of sign language. Much research has been carried out on recognizing sign language for most global languages, but there is still scope for developing tools and techniques for the sign languages of local dialects. Using machine learning techniques, this work attempts to establish a system for identifying hand gestures from American Sign Language. A combination of two-dimensional and three-dimensional images of Assamese gestures has been used to prepare a dataset. The MediaPipe framework has been applied to detect hand landmarks in the images. The results reveal that the method implemented in this work is effective for recognizing the alphabets and gestures of sign language. This method could also be tried and tested for recognizing the signs and gestures of various other local languages of India.
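As an illustration of the kind of pipeline described above, the sketch below shows how the 21 (x, y, z) hand landmarks that a detector such as MediaPipe Hands produces could be converted into a translation- and scale-invariant feature vector for a gesture classifier. The function name and the normalization scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Turn 21 (x, y, z) hand landmarks into a 63-dimensional
    feature vector that is invariant to hand position and size.

    `landmarks` is a (21, 3) array of image-relative coordinates,
    matching the layout produced by MediaPipe Hands (landmark 0
    is the wrist). This is a hypothetical preprocessing step,
    not the method used in the paper.
    """
    pts = np.asarray(landmarks, dtype=float)
    # Translate so the wrist (landmark 0) sits at the origin:
    # the gesture should not depend on where the hand is in frame.
    pts = pts - pts[0]
    # Scale by the largest wrist-to-landmark distance so hand size
    # and camera distance do not affect the features.
    scale = np.linalg.norm(pts, axis=1).max()
    if scale > 0:
        pts = pts / scale
    return pts.flatten()
```

The resulting fixed-length vector can then be fed to any standard classifier (for example, a nearest-neighbour or random-forest model) trained on labelled gesture examples.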