An object’s six-degree-of-freedom (6DoF) pose is important in many fields. Existing pose estimation methods typically detect correspondences between two-dimensional (2D) image points and three-dimensional (3D) model points, and estimate the pose directly with Perspective-n-Point (PnP) algorithms. However, this approach ignores the spatial association between pixels, making it difficult to obtain high-precision results. To bring deep-learning-based pose estimation to real-world scenarios, we aim to design a method robust enough for more complex scenes. We therefore introduce a method for 3D object pose estimation from color images based on farthest point sampling (FPS) and the object’s 3D bounding box. The method detects the 2D projections of 3D feature points with a convolutional neural network, matches them with the 3D model of the object, and then uses the PnP algorithm to recover the object pose from the resulting 2D-3D correspondences. Because the bounding box is a global property of the object, the approach remains effective even under partial occlusion or in complex environments. In addition, we propose a heatmap suppression method based on weighted coordinates that further improves the prediction accuracy of the feature points and, in turn, the accuracy of the pose solved by the PnP algorithm. Compared with other algorithms, our method achieves higher accuracy and better robustness: it yields 93.8% on the ADD(-S) metric on the LINEMOD dataset and 47.7% on the Occlusion LINEMOD dataset. These results show that our method is more effective than existing methods for pose estimation of large objects.
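As a minimal illustration of the weighted-coordinate idea mentioned above, the sketch below decodes a keypoint from a predicted heatmap as the probability-weighted centroid of the heatmap (a soft-argmax) rather than the single peak pixel, which recovers sub-pixel coordinates. The function name and the Gaussian test heatmap are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_keypoint(heatmap):
    """Decode a 2D keypoint from a single-channel heatmap as its
    probability-weighted centroid (soft-argmax). Illustrative sketch,
    not the paper's exact heatmap suppression procedure."""
    h, w = heatmap.shape
    # Shift to non-negative values and normalize to a probability map.
    p = heatmap - heatmap.min()
    p = p / p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected (x, y) coordinate under the normalized heatmap.
    return np.array([(xs * p).sum(), (ys * p).sum()])

# Example: a synthetic Gaussian heatmap centered at a sub-pixel location.
cx, cy, sigma = 20.3, 10.7, 2.0
ys, xs = np.mgrid[0:64, 0:64]
hm = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
kp = weighted_keypoint(hm)  # close to (20.3, 10.7)
```

The recovered sub-pixel keypoints can then be paired with the sampled 3D model points and passed to a standard PnP solver.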