Human-Computer Interaction (HCI) has been transformed by the advent of Artificial Intelligence (AI) and Machine Learning (ML), which have reshaped how users engage with computing systems. This paper introduces AirCanvas, a hands-free digital interaction tool that uses air gestures for intuitive, seamless computer control. The system applies image processing techniques, specifically OpenCV for visual data analysis and MediaPipe for accurate hand gesture recognition, enabling users to manipulate virtual environments without physical touch. Because a standard webcam serves as the sole sensor, AirCanvas offers an accessible gesture-based solution that requires no specialized external hardware such as motion sensors or gloves. Users can perform tasks such as virtual drawing, cursor control, and presentation navigation with simple hand gestures in 3D space. Gesture recognition is powered by deep learning models trained on large datasets of hand movements, yielding robust performance across diverse lighting and environmental conditions. Applications of AirCanvas range from interactive art creation to assistive technology, where people with mobility impairments can benefit from hands-free control. In robotics, the tool can enable more natural human-robot interaction; in gaming, it can support new forms of immersive, gesture-driven gameplay. Furthermore, the tool's open-source nature invites customization and enhancement, fostering innovation and collaboration across industries. As human-computer interaction continues to evolve, AirCanvas represents a significant step toward making technology more intuitive, engaging, and accessible for all users.
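To illustrate the kind of gesture logic the abstract describes, the following is a minimal sketch of how a "pen down" pinch gesture for virtual drawing might be detected from MediaPipe-style hand landmarks. Only the landmark indices (thumb tip = 4, index fingertip = 8) come from MediaPipe Hands; the function name, the `0.05` threshold, and the landmark list format are illustrative assumptions, not the paper's actual implementation.

```python
import math

# MediaPipe Hands landmark indices: 4 = thumb tip, 8 = index fingertip.
THUMB_TIP, INDEX_TIP = 4, 8

def is_pinch(landmarks, threshold=0.05):
    """Return True when the thumb and index fingertips are close enough
    to count as a 'pen down' pinch gesture.

    `landmarks` is a list of 21 (x, y) tuples in normalized [0, 1]
    image coordinates, as produced by a MediaPipe-style hand tracker.
    The threshold is a hypothetical tuning value.
    """
    tx, ty = landmarks[THUMB_TIP]
    ix, iy = landmarks[INDEX_TIP]
    return math.hypot(tx - ix, ty - iy) < threshold

# Example: a hand whose thumb and index fingertips nearly touch.
hand = [(0.0, 0.0)] * 21
hand[THUMB_TIP] = (0.50, 0.50)
hand[INDEX_TIP] = (0.52, 0.51)
print(is_pinch(hand))  # fingertip distance ≈ 0.022 < 0.05, so True
```

In a full pipeline, each webcam frame would be read with OpenCV, passed to the hand tracker, and the fingertip position would drive drawing or cursor movement whenever the pinch is active.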