Abstract

Manual plant phenotyping is slow, error-prone, and labor-intensive. In this letter, we present an automated robotic system for fast, precise, and noninvasive measurements using a new deep-learning-based next-best-view planning pipeline. Specifically, we first use a deep neural network to estimate a set of candidate voxels for the next scan. We then cast rays from these voxels to determine the optimal viewpoints. We empirically evaluate our method in simulation and in real-world robotic experiments with up to three robotic arms to demonstrate its efficiency and effectiveness. One advantage of the new pipeline is that it extends easily to a multi-robot system in which multiple robots move simultaneously according to the planned motions. Our multi-robot system significantly outperforms the single-robot baseline in flexibility and planning time, making high-throughput phenotyping practical.
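
The abstract outlines a two-stage pipeline: a network proposes candidate voxels, and rays cast from those voxels score candidate viewpoints. Below is a minimal, hedged sketch of the ray-casting selection step only, assuming a dense boolean occupancy grid and a fixed set of candidate camera positions; all names (`ray_is_clear`, `next_best_view`, the grid layout) are illustrative and not the authors' actual implementation.

```python
import numpy as np

def ray_is_clear(occupancy, start, end, step=0.5):
    """Return True if the segment from start to end (voxel coordinates)
    does not pass through an occupied voxel."""
    direction = end - start
    n_steps = max(int(np.linalg.norm(direction) / step), 1)
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False)[1:]:
        idx = tuple(np.round(start + t * direction).astype(int))
        if occupancy[idx]:
            return False
    return True

def next_best_view(occupancy, candidate_voxels, viewpoints):
    """Score each candidate viewpoint by how many candidate voxels it can
    see along an unobstructed ray, and return the highest-scoring one."""
    best_view, best_score = None, -1
    for view in viewpoints:
        score = sum(
            ray_is_clear(occupancy, np.asarray(view, float), np.asarray(v, float))
            for v in candidate_voxels
        )
        if score > best_score:
            best_view, best_score = view, score
    return best_view, best_score

if __name__ == "__main__":
    grid = np.zeros((32, 32, 32), dtype=bool)
    grid[10:20, 10:20, 10:20] = True            # a solid block standing in for the plant
    candidates = [(15, 15, 21), (15, 21, 15)]   # voxels flagged by the (hypothetical) network
    views = [(15, 15, 30), (30, 15, 15), (15, 30, 15)]
    print(next_best_view(grid, candidates, views))
```

In a multi-robot extension, each arm could be assigned one of the top-scoring viewpoints rather than only the single best, which is consistent with the simultaneous motion described in the abstract.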
