Abstract
In this paper, an experimental study of real-time modeling with Kinect Fusion and 3D Scan was carried out, comparing the two methods in terms of visual effect and the accuracy of the acquired point cloud data. Cauliflower, pitaya, and carrot, whose size, shape, and surface roughness all differ, were selected as the research crops, and the Kinect V2 (Kinect for Windows V2) developed by Microsoft was selected as the sensor. The acquisition environment was designed and processed to remove most of the noise points around the scene. The experimental results show that, with the Kinect Fusion method, the visual effect is best and the modeling time is shortest when the distance between the object and the Kinect is 83.5 cm, VMIW is 200, and VVPM is 512. According to the global fluctuations of the three crops' point cloud data at different angles, the point cloud data obtained with the Kinect Fusion method are more accurate, and the details of the objects more complete, than those obtained with the 3D Scan method. However, as surface roughness increases, the capability of the modeling method decreases. The proposed modeling method based on the Kinect V2 can serve as a reference for other crops.
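The abstract compares methods by the "global fluctuation" of the acquired point clouds but does not define the metric here. A minimal sketch of one plausible interpretation, assuming fluctuation is measured as the spread of points about a least-squares plane fitted to the cloud (the function name and metric choice are illustrative assumptions, not the paper's stated method):

```python
import numpy as np

def global_fluctuation(points):
    """Assumed proxy metric: standard deviation of point distances
    from the best-fit plane through the cloud's centroid."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # normal of the least-squares plane through the centroid.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = centered @ normal  # signed distance of each point to the plane
    return distances.std()

# A flat synthetic cloud should fluctuate less than a noisier copy of it.
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(0, 1, (500, 2)), np.zeros(500)])
noisy = flat + rng.normal(0, 0.01, flat.shape)
print(global_fluctuation(flat) < global_fluctuation(noisy))  # True
```

Under this reading, a lower fluctuation value for a scan of a smooth reference region would indicate less sensor noise, which is consistent with the abstract's claim that accuracy degrades as surface roughness increases.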