Sign language is a means of communication based on hand gestures and body movements. Its recognition has been one of the most challenging research problems of recent years, and it plays an increasingly important role due to the widespread adoption of digital technologies. With advances in deep learning and computer vision, researchers have developed a variety of automatic sign language recognition methods capable of interpreting body movement. The aim of this study is to examine existing sign language recognition systems around the world. These works fall mainly into two groups, sensor-based systems and vision-based systems, with some approaches combining the two. Studies have shown that sensor-based tracking is more resource-intensive and harder to implement than traditional vision-based methods. This study finds that many sign languages exist worldwide, most of which lack publicly available databases, and that dynamic gesture recognition systems still require further research to improve their results. Based on this work, several recommendations have been formulated that can help improve the quality of future systems: creating studies and databases for under-resourced sign languages, using them to achieve acceptable accuracy in dynamic gesture detection, and ensuring that such systems operate in real time while using few computational resources.