Abstract

The robotics domain has specific design requirements that demand the close integration of planning, sensing, control and modeling, and the robot must take into account the interactions between itself, its task and its surrounding environment. Considering these fundamental configurations, the main motive is to design a system with user-friendly interfaces capable of controlling embedded robotic systems by natural means. While earlier works have focused primarily on issues such as manipulation and navigation, this proposal presents a conceptual and intuitive approach to man-machine interaction that provides secure live biometric authorization of user access while interacting intelligently with the control station to navigate advanced gesture-controlled wireless robotic prototypes or mobile surveillance systems along desired directions through required displacements. The interaction is based on tracking real-time 3-dimensional face motions, using skin tone segmentation and maximum-area selection among the segmented face-like blobs, or on directing the system with voice commands through real-time speech recognition. The implementation requires designing a user interface for wireless communication between the control station and the prototypes, either by accessing the internet over an encrypted Wi-Fi Protected Access (WPA) link via an HTML web page for face-motion control, or with the help of natural voice commands such as “Trace 5 squares”, “Trace 10 triangles” and “Move 10 meters”, evaluated on an iRobot Create over Bluetooth connectivity using a Bluetooth Access Module (BAM). Such an implementation can prove highly effective for designing systems that aid the elderly and assist the physically challenged.
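The abstract names skin tone segmentation with maximum-area blob selection as the face-tracking mechanism but does not include source code. The following is a minimal sketch of that idea in Python with OpenCV; the YCrCb skin-tone thresholds, the camera pipeline and the control mapping are all illustrative assumptions, not the paper's actual implementation:

    import cv2
    import numpy as np

    # Illustrative YCrCb skin-tone bounds; the paper's actual thresholds are not given.
    SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)
    SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)

    def largest_skin_blob(frame):
        """Segment skin-tone pixels and return the bounding box of the
        largest (maximum-area) face-like blob, or None if nothing is found."""
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        box = largest_skin_blob(frame)
        if box is not None:
            x, y, w, h = box
            # The box centre relative to the frame centre would be mapped to
            # left/right/forward displacement commands for the prototype.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("face tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()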

Highlights

  • In today’s age, the robotics industry has been developing many new trends to increase the efficiency, accessibility and accuracy of its systems in order to automate the processes involved in task completion

  • The user interface involves the human-computer interaction between the user and the control station, which has been accomplished in two different simulation modes: the first with the help of voice command control, by speech recognition, and the second with the help of real-time face detection and tracking by skin tone extraction and analysis

  • If no face is available even after the cache count reaches 10, an alert message “No Face Present” appears on the command window and the video acquisition stops after a while (sketched below)
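The paper gives no code for this timeout, but the counter logic in the last highlight is simple enough to sketch. The threshold of 10 comes from the highlight; the hypothetical largest_skin_blob() detector is reused from the sketch after the abstract, and everything else is an assumption:

    import cv2

    NO_FACE_LIMIT = 10  # the cache count of 10 quoted in the highlight

    cap = cv2.VideoCapture(0)
    no_face_count = 0
    while no_face_count < NO_FACE_LIMIT:
        ok, frame = cap.read()
        if not ok:
            break
        if largest_skin_blob(frame) is None:  # detector from the earlier sketch
            no_face_count += 1                # count grows while no face is seen
        else:
            no_face_count = 0                 # reset as soon as a face reappears
    else:
        print("No Face Present")              # alert on the command window
    cap.release()                             # video acquisition stops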

Summary

Introduction

In today’s age, the robotics industry has been developing many new trends to increase the efficiency, accessibility and accuracy of its systems in order to automate the processes involved in task completion. Beyond controlling a robotic system through physical or electronic devices, applying recent gesture control methods to embedded robotic systems provides a rich and intuitive form of interaction, one that mainly involves image processing and machine learning algorithms for application development. It requires some hardware and software interfacing with the system for gesture acquisition and corresponding control signal generation. On its own, however, this method does not prove effective, as it is highly prone to environmental noise and may yield inefficient outcomes unless speech recognition is also taken into account.
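The paper does not specify how recognized utterances such as “Trace 5 squares” or “Move 10 meters” are mapped to robot commands. The following is a hypothetical sketch of that parsing step; the regular-expression grammar, the parse_command() name and the tuple format are assumptions, and the speech recognizer itself is outside its scope:

    import re

    # Hypothetical grammar for the command phrases quoted in the abstract.
    TRACE_RE = re.compile(r"trace\s+(\d+)\s+(square|triangle)s?", re.IGNORECASE)
    MOVE_RE = re.compile(r"move\s+(\d+)\s+meters?", re.IGNORECASE)

    def parse_command(text):
        """Map a recognized utterance to an (action, count, shape/unit) tuple,
        or None if the utterance matches no known command."""
        m = TRACE_RE.search(text)
        if m:
            return ("trace", int(m.group(1)), m.group(2).lower())
        m = MOVE_RE.search(text)
        if m:
            return ("move", int(m.group(1)), "meters")
        return None

    print(parse_command("Trace 5 squares"))  # ('trace', 5, 'square')
    print(parse_command("Move 10 meters"))   # ('move', 10, 'meters')

The resulting tuple would then be serialized to the prototype, e.g. to the iRobot Create over the Bluetooth Access Module mentioned in the abstract.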
