Abstract
Point cloud data is hard to obtain and time-consuming to label. Self-supervised methods can exploit unlabelled data, but they still require large amounts of it. The key to self-supervised learning lies in the design of the pretext task. In this work, we propose a new self-supervised pretext task for the few-shot learning scenario to further alleviate the data-scarcity problem. Our self-supervised method learns by training a network to restore the original point cloud from a down-sampled version of it. Although this point up-sampling pretext task, as a kind of reconstruction task, ensures that the learned representation contains sufficient information, it cannot guarantee that the representation is discriminative. We therefore introduce a Mutual Information Estimation and Maximization task to increase the discriminability of the learned representation. Classification and segmentation results show that our method learns effective features and improves the performance of downstream models.
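To make the up-sampling pretext task concrete, the sketch below shows one plausible reading of it: a toy PointNet-style encoder ingests a randomly down-sampled cloud, a decoder attempts to restore the full-resolution cloud, and a symmetric Chamfer distance serves as the reconstruction loss. This is a minimal illustrative assumption, not the authors' implementation; all module names, layer sizes, and the choice of Chamfer loss are hypothetical, and the auxiliary Mutual Information Maximization objective described in the abstract is omitted.

```python
# Hypothetical sketch of the up-sampling pretext task (not the paper's code).
import torch
import torch.nn as nn

class UpsamplePretext(nn.Module):
    def __init__(self, n_out=1024, feat_dim=256):
        super().__init__()
        # PointNet-style shared MLP: per-point features, then max pooling
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )
        # Decoder maps the global feature back to a full-resolution cloud
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, n_out * 3),
        )
        self.n_out = n_out

    def forward(self, pts):                    # pts: (B, M, 3), with M < n_out
        f = self.encoder(pts.transpose(1, 2))  # per-point features (B, feat_dim, M)
        g = f.max(dim=2).values                # global feature (B, feat_dim)
        out = self.decoder(g).view(-1, self.n_out, 3)
        return out, g                          # g is the learned representation

def chamfer(a, b):
    # Symmetric Chamfer distance between clouds a: (B, N, 3) and b: (B, M, 3)
    d = torch.cdist(a, b)                      # pairwise distances (B, N, M)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# Toy training step: down-sample, restore, compare against the original cloud.
full = torch.rand(8, 1024, 3)                  # batch of full-resolution clouds
idx = torch.randperm(1024)[:256]               # random down-sampling to 1/4
sparse = full[:, idx, :]

model = UpsamplePretext()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
restored, feat = model(sparse)
loss = chamfer(restored, full)
loss.backward()
opt.step()
```

In this reading, the global feature `feat` would be the representation passed to downstream classification or segmentation heads, with the MI estimation task added as a second loss term on that same feature.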