Abstract

Improving productivity and efficiency in the agricultural sector calls for the development of autonomous vehicles. Here, the detection and classification of objects plays a major role, which can be achieved using camera sensors and convolutional neural networks (CNNs). To train CNNs to perform correctly, good datasets are required. However, especially in agriculture, datasets that contain relevant objects and are labeled (i.e., annotated with ground truth information) are not only scarce but also difficult to generate, as this entails a high cost in resources and human labor. Therefore, we propose a different approach: using 3D simulation technology to generate relevant simulated sensor data that are implicitly labeled, offering a cost-efficient solution for training neural networks. In this contribution, we assess the viability of training a CNN with simulated sensor data by comparing the achieved performance to that of a network trained with real sensor data. In addition, we evaluate the benefits of combining simulated data with real data for training CNNs, including complementary as well as Transfer Learning approaches. Finally, we show that using simulated sensor data for training CNNs is viable yet less accurate than using comparable real datasets, and we propose ways to improve simulations in this regard. To this end, we analyze various simulation factors in terms of their impact on CNN performance and highlight further benefits of using simulated scenarios in general.
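To illustrate the kind of Transfer Learning setup referred to above, the following sketch pretrains a classification CNN on simulated images and then fine-tunes it on a smaller set of real images. It is only a minimal illustration: the paper's abstract does not specify a framework, network architecture, or hyperparameters, so the PyTorch/torchvision usage, the ResNet-18 backbone, the dataset folder paths, and all training settings are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed preprocessing; input size and normalization depend on the chosen backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-folder per object class, for simulated and real images.
sim_data = datasets.ImageFolder("data/simulated", transform=preprocess)
real_data = datasets.ImageFolder("data/real", transform=preprocess)
num_classes = len(sim_data.classes)

def train(model, loader, epochs, lr):
    """Generic supervised training loop (cross-entropy classification)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: train the CNN from scratch on the (implicitly labeled) simulated dataset.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = train(model, DataLoader(sim_data, batch_size=32, shuffle=True), epochs=10, lr=1e-2)

# Stage 2 (Transfer Learning): fine-tune the same network on the real dataset
# with a lower learning rate, so the simulated-data features are adapted rather than overwritten.
model = train(model, DataLoader(real_data, batch_size=32, shuffle=True), epochs=5, lr=1e-3)
```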
