Abstract

This paper describes the development of a convolutional neural network for the control of a home monitoring robot (FumeBot). The robot is fitted with a Raspberry Pi for on-board control, and a Raspberry Pi camera provides the data feed for the neural network. A wireless connection between the robot and a graphical user interface running on a laptop allows for diagnostics and development of the neural network. The neural network, running on the laptop, was trained using a supervised training method. The robot was put through a series of obstacle courses to test its robustness, and the tests demonstrated that the controller had learned to navigate the obstacles to a reasonable level. The main problem identified in this work was that the neural controller had no memory of its past actions or of past states of the world, which resulted in obstacle collisions. Options to rectify this issue are suggested.
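The abstract does not include implementation code; the following is a minimal sketch of the kind of controller it describes, assuming a Keras-style CNN that maps a single camera frame to one of a few directional-key classes recorded from the operator during supervised data collection. The input size, class set, layer sizes, and the name build_controller are illustrative assumptions, not the authors' architecture.

    # Minimal sketch (not the authors' exact architecture): a small CNN that
    # maps a camera frame to a directional-key class, trained with supervised
    # labels recorded from the human operator's keystrokes.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 3  # e.g. forward, left, right (assumed label set)

    def build_controller(input_shape=(80, 160, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 5, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

A softmax over a small set of discrete key classes matches the supervised set-up described in the paper, where the directional keys pressed by the operator serve as labels for the corresponding camera frames.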

Highlights

  • A range of home-use robots are currently commercially available, including robotic vacuum cleaners [1,2] and lawn mowers [3,4]

  • The accuracy of the convolutional neural network (CNN) model was calculated using samples withheld from the training set (a sketch of such an evaluation follows this list)

  • The robot was placed in the same obstacle course with the obstacles in the middle of the corridor, as shown in Figure 4a, and the model was used to predict which directional keys were required to avoid the obstacle in front of it
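The held-out accuracy check mentioned in the highlights could be implemented as in the sketch below. The file names, array shapes, and 80/20 split are assumptions for illustration, and build_controller refers to the hypothetical model sketch given after the abstract.

    # Hypothetical evaluation on withheld samples: a fraction of the recorded
    # frame/keystroke pairs is kept out of training and used only to measure
    # the accuracy of the trained model.
    import numpy as np
    from sklearn.model_selection import train_test_split

    frames = np.load("frames.npy")  # (N, 80, 160, 1) camera frames (assumed)
    keys = np.load("keys.npy")      # (N,) integer direction labels (assumed)

    X_train, X_test, y_train, y_test = train_test_split(
        frames, keys, test_size=0.2, stratify=keys, random_state=0)

    model = build_controller(input_shape=X_train.shape[1:])
    model.fit(X_train, y_train, epochs=10, batch_size=64, validation_split=0.1)

    loss, accuracy = model.evaluate(X_test, y_test)
    print(f"Accuracy on withheld samples: {accuracy:.3f}")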


Summary

Introduction

A range of home-use robots are currently commercially available, including robotic vacuum cleaners [1,2] and lawn mowers [3,4]. In addition, effective monitoring and remote communication with the user are required [6]. To develop monitoring and surveillance robots for practical implementation in a home environment, a range of issues must be resolved. These include working in cluttered environments [7], the need to perform surveillance in conjunction with simultaneous localization and mapping [8], object recognition and tracking [9], integration of static and mobile surveillance systems [10], and simplifying the computing requirements whilst maintaining fully autonomous monitoring capabilities [11]. Making such robots truly autonomous, so that they can act in different scenarios and environments without any human intervention, is a challenging task.

