Estimating the 6D pose and size of objects is crucial for vision-based robotic grasping. Most existing algorithms still require the 3D CAD model of the target object to match against the observed points, and they cannot predict the object's size, which significantly limits their generalizability. In this paper, we introduce category priors and extract high-dimensional abstract features from both the observed point cloud and the prior to predict the deformation matrix that maps the prior to the reconstructed point cloud, as well as the dense correspondences between the reconstructed and observed point clouds. Furthermore, we propose a staged geometric correction and dense correspondence refinement mechanism to improve regression accuracy. In addition, a novel lightweight attention module further integrates the extracted features and uncovers latent correlations between the observed point cloud and the category prior. Finally, the object's translation, rotation, and size are obtained by mapping the reconstructed point cloud to a normalized canonical coordinate system. Extensive experiments demonstrate that our algorithm outperforms existing methods in terms of both performance and accuracy on commonly used benchmarks for this task. Additionally, we deploy the algorithm in robotic-arm grasping simulations, further validating its effectiveness.
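
The last step summarized above, recovering translation, rotation, and size from the correspondence between observed points and their normalized canonical coordinates, is commonly realized by fitting a similarity transform, for example with the Umeyama algorithm. The sketch below illustrates only this general idea under that assumption; the function, variable names, and toy data are hypothetical and not the paper's implementation.

```python
# Minimal sketch (assumption: pose/size are recovered via a Umeyama-style
# similarity fit between canonical and observed coordinates; names are hypothetical).
import numpy as np

def similarity_transform(canonical_pts, observed_pts):
    """Least-squares s, R, t such that observed ~= s * R @ canonical + t."""
    mu_c = canonical_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    xc = canonical_pts - mu_c
    xo = observed_pts - mu_o

    # Cross-covariance between observed and canonical points; its SVD gives R.
    cov = xo.T @ xc / canonical_pts.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # guard against a reflection
    R = U @ S @ Vt

    var_c = (xc ** 2).sum() / canonical_pts.shape[0]
    s = np.trace(np.diag(D) @ S) / var_c  # isotropic scale
    t = mu_o - s * R @ mu_c
    return s, R, t

# Toy usage with synthetic correspondences: each observed point is paired with
# its canonical coordinate; object size follows from the scaled canonical extent.
canonical = np.random.rand(500, 3) - 0.5
gt_R, _ = np.linalg.qr(np.random.randn(3, 3))
if np.linalg.det(gt_R) < 0:
    gt_R[:, 0] *= -1  # ensure a proper rotation for the toy ground truth
observed = 0.2 * canonical @ gt_R.T + np.array([0.1, -0.3, 0.8])
s, R, t = similarity_transform(canonical, observed)
size = s * (canonical.max(axis=0) - canonical.min(axis=0))  # estimated object size
```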