This report outlines the development of an air gesture keyboard that uses machine learning to help visually impaired individuals enter text more easily. The system removes the need for a physical keyboard, instead employing a handheld remote that translates hand gestures into text using an accelerometer, an Arduino board, and switches. Alternative input methods are needed because existing text-entry technologies poorly serve visually impaired users. The proposed air gesture keyboard captures and interprets hand motions in the air, enabling users to input text without physical contact with a keyboard. A literature review was conducted to examine related research on hand gesture recognition using various techniques.
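The core pipeline described above (accelerometer samples classified into gestures, which are then mapped to characters) could be sketched as follows. This is a minimal illustration only: the tilt threshold, axis conventions, and letter assignments are assumptions for demonstration, not the report's actual design.

```cpp
#include <cassert>
#include <string>

// Hypothetical tilt threshold in raw accelerometer units (assumption;
// a real device would calibrate this per sensor).
const int TILT_THRESHOLD = 200;

enum Gesture { NEUTRAL, LEFT, RIGHT, UP, DOWN };

// Classify a raw (x, y) accelerometer sample into a coarse gesture
// by comparing each axis against the tilt threshold.
Gesture classify(int x, int y) {
    if (x < -TILT_THRESHOLD) return LEFT;
    if (x >  TILT_THRESHOLD) return RIGHT;
    if (y >  TILT_THRESHOLD) return UP;
    if (y < -TILT_THRESHOLD) return DOWN;
    return NEUTRAL;
}

// Illustrative gesture-to-character mapping; the actual layout would
// be defined by the project (e.g., learned by the ML model).
std::string toText(Gesture g) {
    switch (g) {
        case LEFT:  return "a";
        case RIGHT: return "b";
        case UP:    return "c";
        case DOWN:  return "d";
        default:    return "";  // neutral position emits nothing
    }
}
```

On an Arduino, `classify` would be fed by `analogRead` on the accelerometer pins inside `loop()`, with a switch press confirming each character.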