Abstract

Existing sketch-based 3D model retrieval methods often treat their input as static data and extract features with convolutional neural networks, ignoring the dynamic attributes of the input; this discards useful information and limits retrieval accuracy. To address this problem, an end-to-end sketch-based 3D model retrieval method with joint spatiotemporal feature embedding is proposed. First, a sketch is represented as a dynamic drawing sequence that reflects the temporal information of the drawing process, while a 3D model is represented as a multi-view sequence that reflects the positional relationships among views. Second, an end-to-end dual-stream network is constructed, comprising a static spatial feature extraction stream and a dynamic temporal feature extraction stream; combined with triplet-center metric learning, it establishes a joint spatiotemporal feature embedding of the cross-domain data, fully capturing the static and dynamic features contained in sketches and 3D models and reducing the gap between the two domains. Finally, experiments are carried out on the standard public datasets SHREC 2013 and SHREC 2014; the proposed method achieves higher retrieval accuracy than existing work, verifying its feasibility and effectiveness.
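The triplet-center metric learning mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact loss formulation, so the following assumes the standard triplet-center loss: each embedding is pulled toward its own class center and pushed away from the nearest other-class center by at least a margin. The function name, the margin value, and the use of Euclidean distance are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def triplet_center_loss(features, labels, centers, margin=1.0):
    """Illustrative triplet-center loss (assumed formulation).

    For each embedding f with class label y, the per-sample loss is
        max(0, margin + ||f - c_y|| - min_{j != y} ||f - c_j||),
    i.e. the distance to the own-class center c_y should be smaller
    than the distance to the nearest other-class center by `margin`.
    Returns the mean loss over the batch.
    """
    total = 0.0
    num_classes = len(centers)
    for f, y in zip(features, labels):
        d_pos = np.linalg.norm(f - centers[y])             # own-class center
        d_neg = min(np.linalg.norm(f - centers[c])         # nearest rival center
                    for c in range(num_classes) if c != y)
        total += max(0.0, margin + d_pos - d_neg)
    return total / len(features)

# Toy example: two class centers on a line.
centers = np.array([[0.0, 0.0], [10.0, 0.0]])
good = np.array([[0.1, 0.0], [9.9, 0.0]])   # close to their own centers
print(triplet_center_loss(good, [0, 1], centers))  # 0.0: margin satisfied
```

In the paper's setting, the same loss would be applied jointly to sketch and 3D model embeddings so that both domains share class centers, which is what drives the cross-domain gap down.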
