With the continuous improvement of image-processing capabilities, the three-dimensional (3D) model, which contains rich information, is becoming the fourth major type of multimedia data (in addition to sound, image, and video). Because 3D models have a wide range of applications, quickly and effectively retrieving the correct target model from massive data has become a key issue. To date, many 3D model retrieval approaches have been proposed, among which view-based methods achieve satisfactory performance. In the 3D model retrieval task, the main challenges are mining the latent relationships among all views of a 3D model, adaptively fusing the different views, and extracting discriminative features; in most existing solutions, however, these issues are handled separately rather than in an end-to-end network architecture. To address these issues, we propose a novel and effective multi-level view associative convolution network (MLVACN) for view-based 3D model retrieval, in which the relationship exploration among multiple view images, the fusion of different views, and discriminative feature learning are realized in a unified end-to-end framework. Specifically, we design a group association layer and a block association layer to study the latent relationships among different views at the view level and the block level, respectively. Moreover, a weight fusion layer is designed to adaptively fuse the different views of a 3D model. These three layers are embedded into the MLVACN. Finally, a pairwise discrimination loss function is proposed to learn discriminative features of the 3D model.
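The adaptive fusion of per-view features described above can be illustrated with a minimal NumPy sketch of softmax-weighted view pooling. The function name, array shapes, and directly supplied importance scores are illustrative assumptions for exposition only; in MLVACN the fusion weights would be produced by the learned weight fusion layer, not passed in by hand.

```python
import numpy as np

def fuse_views(view_features, fusion_weights):
    """Adaptively fuse per-view features into one 3D-model descriptor.

    view_features:  (n_views, dim) array of per-view embeddings.
    fusion_weights: (n_views,) unnormalized importance scores
                    (hypothetical stand-in for the learned weights
                    of a weight fusion layer).
    """
    # Softmax-normalize the scores so the view weights sum to 1.
    w = np.exp(fusion_weights - fusion_weights.max())
    w = w / w.sum()
    # Weighted sum over the view axis yields the fused descriptor.
    return (w[:, None] * view_features).sum(axis=0)
```

With equal scores this reduces to mean pooling of the views; unequal scores let the model emphasize the more informative views.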
Extensive experimental results on three 3D model retrieval datasets, ModelNet40, ModelNet10, and ShapeNetCore55, demonstrate that MLVACN outperforms state-of-the-art methods in terms of mAP. On ModelNet40, the mAP of MLVACN is improved by 13.25%, 7.75%, 3.95%, and 0.61% compared with the MVCNN, GVCNN, PVNet, and MLVCNN methods, respectively.
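For reference, the mAP metric used in these comparisons can be computed as follows; this is the standard definition of mean average precision over ranked retrieval lists, not code from the paper.

```python
def average_precision(relevance):
    """Average precision for one ranked retrieval list.

    relevance: sequence of 0/1 flags in ranked order (1 = relevant).
    """
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_relevance):
    """Mean of per-query average precision over all queries."""
    return sum(average_precision(r) for r in all_relevance) / len(all_relevance)
```

Each query model's ranked results contribute one average-precision value, and mAP is the mean over all queries in the test set.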