Abstract

Visual perception is essential for understanding the phenomena and environments of the world. Pervasively deployed devices such as cameras are key to dynamic status monitoring and to object detection and recognition. Visual sensing environments built on a single camera or multiple cameras must therefore handle huge amounts of high-resolution images, videos, and other multimedia. In this paper, to promote smart processing and fast detection in visual environments, we propose a deep transfer learning strategy for real-time target detection in situations where acquiring large-scale data is complicated and challenging. By employing transfer learning and pre-training the network on established datasets, we not only achieve outstanding performance in target localization and recognition but also significantly reduce the time needed to train a deep model. In addition, the original clustering method in the You Only Look Once (YOLOv3) detection model, k-means, is sensitive to the initial cluster centers when estimating the initial widths and heights of the predicted bounding boxes, which makes processing large-scale data extremely time-consuming. To handle these problems, an improved clustering method, mini batch k-means++, is incorporated into the detection model to improve clustering accuracy. We demonstrate sustained outperformance in three typical applications of vision-based sensing environments: digital pathology, smart agriculture, and remote sensing.
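As a rough illustration of the clustering idea (a minimal NumPy sketch, not the authors' implementation), mini batch k-means++ combines two ingredients: k-means++ seeding, which spreads the initial centers out to avoid sensitivity to initialization, and mini-batch updates, which move centers using small random batches instead of the full dataset so that large-scale data can be processed quickly. The synthetic (width, height) data below is made up for illustration; the number of clusters is 9, matching the YOLOv3 anchor count.

```python
import numpy as np

def kmeanspp_init(points, k, rng):
    """k-means++ seeding: pick the next center with probability
    proportional to squared distance from the nearest chosen center."""
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min(
            ((points[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers, dtype=float)

def mini_batch_kmeans(points, k, batch_size=64, iters=100, seed=0):
    """Mini-batch k-means with k-means++ seeding and per-center
    learning rates (a sketch of the standard mini-batch scheme)."""
    rng = np.random.default_rng(seed)
    centers = kmeanspp_init(points, k, rng)
    counts = np.zeros(k)
    for _ in range(iters):
        batch = points[rng.choice(len(points), size=batch_size)]
        # assign each batch point to its nearest center
        d2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for x, c in zip(batch, labels):
            counts[c] += 1
            eta = 1.0 / counts[c]  # per-center learning rate decays with use
            centers[c] = (1 - eta) * centers[c] + eta * x
    return centers

# Cluster synthetic (width, height) pairs into 9 anchor shapes.
rng = np.random.default_rng(1)
wh = rng.uniform(10, 300, size=(500, 2))
anchors = mini_batch_kmeans(wh, k=9)
```

Because every center is a convex combination of data points, the resulting anchor shapes stay within the range of the observed box widths and heights.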

Highlights

  • Vision is a significant and basic way to acquire information and explore the essence of the real world

  • To reduce the computational load and improve the accuracy of generating anchor boxes for customized data, and to improve the performance and inference speed of target localization and recognition, we draw on advanced deep learning methods and propose a mini batch k-means++ method and a transfer learning strategy for real-time object detection using the You Only Look Once (YOLOv3) model

  • After testing the two values, we find no significant difference between them, so we follow the original setting and set k to 9


Summary

INTRODUCTION

Vision is a significant and basic way to acquire information and explore the essence of the real world. Using unmanned aerial vehicles (UAVs), drones, or satellites equipped with cameras for field monitoring allows people to inspect and gather geological information or surface features in the actual situation, or to undertake daily land surveys by checking the status of oil tanks and pipelines and by raising alarms for and locating wildfires in unmanned areas or distant forests, thereby minimizing people's exposure to hazardous zones in industrial and remote environments [7]. These vision-based applications are developing toward smart and digital processing, in which a camera is configured to capture high-resolution images or videos at a relatively high frame rate (frames per second, FPS). To reduce the computational load and improve the accuracy of generating anchor boxes for customized data, and to improve the performance and inference speed of target localization and recognition, we draw on advanced deep learning methods and propose a mini batch k-means++ method and a transfer learning strategy for real-time object detection using the You Only Look Once (YOLOv3) model.
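When YOLO-family detectors cluster ground-truth box shapes into anchors, the conventional distance is 1 − IoU rather than Euclidean distance, so that large and small boxes are weighted comparably. The NumPy sketch below illustrates that convention; the box and anchor (width, height) values are hypothetical, chosen only for illustration.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (width, height) pairs, assuming boxes and anchors
    share the same top-left corner -- the YOLO anchor-clustering convention."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

# Distance used when clustering box shapes into anchors: d = 1 - IoU.
boxes = np.array([[30.0, 60.0], [120.0, 90.0]])     # illustrative (w, h) pairs
anchors = np.array([[25.0, 55.0], [130.0, 100.0]])  # hypothetical anchors
dist = 1.0 - iou_wh(boxes, anchors)
nearest = dist.argmin(axis=1)  # each box's best-matching anchor
```

Here the small box matches the small anchor and the large box matches the large anchor, regardless of the absolute pixel differences involved.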

