Abstract
Autonomous transport vehicles (ATVs) are among the most substantial components of the smart factories of Industry 4.0. They are primarily used to transfer goods or to perform certain navigation tasks in the factory autonomously. Recent developments in computer vision allow vehicles to visually perceive their environment and the objects within it. While numerous applications exist for smart traffic networks in outdoor environments, there is a lack of applications and datasets for autonomous transport vehicles in indoor industrial environments. Smart factories contain essential safety and direction signs, and these signs play an important role in safety; their detection by ATVs is therefore crucial. In this study, a visual dataset containing important indoor safety signs is created to simulate a factory environment. The dataset has been used to train several fast-responding, popular deep learning object detection methods: Faster R-CNN, YOLOv3, YOLOv4, SSD, and RetinaNet. These methods can be executed in real time to enhance the visual understanding of the ATV, which in turn helps the agent navigate safely and reliably in smart factories. The trained network models were compared in terms of accuracy on the created dataset, and YOLOv4 achieved the best performance among all tested methods.
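The abstract itself contains no code; as an illustration only, accuracy comparisons between object detectors of this kind conventionally rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The sketch below is not from the paper; all function names are my own, and boxes are assumed to be `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_true_positives(predictions, ground_truths, threshold=0.5):
    """Greedily match each prediction to an unused ground-truth box.

    A match at IoU >= threshold counts as a true positive; each
    ground-truth box may be matched at most once.
    """
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= threshold:
                matched.add(i)
                tp += 1
                break
    return tp
```

Metrics such as mean average precision (mAP), commonly reported when comparing detectors like those in this study, build on exactly this per-box matching step, aggregated across confidence thresholds and classes.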
Published in: Turkish Journal of Electrical Engineering & Computer Sciences