Automated robots are emerging as a solution for labor-intensive fruit orchard management. Three-dimensional (3D) reconstruction of tree branches is a fundamental requirement for robots to perform tasks such as pruning and fruit harvesting. Current branch sensing methods often rely on planar segmentation, which provides limited 3D information, or on computationally expensive point cloud segmentation, neither of which may be suitable for natural orchards with occluded tree branches. This study proposes a novel scheme that reconstructs occluded branches from RGB-D (Red-Green-Blue-Depth) images by integrating point clouds converted from planar segmentation masks and depth images. The proposed approach extends existing 2D branch sensing techniques to 3D by leveraging multi-view information. Two deep learning models, DeepLabV3+ and Pix2pix, are employed separately to generate the segmentation masks, and Fast Global Registration (FGR) is used to register the multi-view point clouds. The results demonstrate that the output point clouds contain at least 24 % more corresponding points after FGR. Furthermore, the time cost per hundred corresponding points is reduced by 85 % and 69 % when using the DeepLabV3+-based and Pix2pix-based schemes, respectively, compared to the PointNet++ approach. These findings indicate that the proposed scheme significantly improves the sensing of occluded branches in terms of output richness and computational efficiency, making it applicable to natural orchard working spaces.
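To make the pipeline concrete, the following is a minimal sketch of the two core steps described above: back-projecting a branch segmentation mask and its depth image into a point cloud, and registering two such views with Fast Global Registration. It assumes the Open3D library; the camera intrinsics, file names, and voxel/threshold values are illustrative placeholders, not the paper's actual implementation or parameters.

```python
# Hypothetical sketch: mask + depth -> point cloud, then multi-view FGR registration.
# Intrinsics, file names, and thresholds are assumed values for illustration only.
import numpy as np
import open3d as o3d


def mask_depth_to_cloud(mask, depth, intrinsic, depth_scale=1000.0):
    """Keep only depth pixels inside the branch mask and back-project them to 3D."""
    depth_masked = np.where(mask > 0, depth, 0).astype(np.float32)
    depth_img = o3d.geometry.Image(np.ascontiguousarray(depth_masked))
    return o3d.geometry.PointCloud.create_from_depth_image(
        depth_img, intrinsic, depth_scale=depth_scale, depth_trunc=3.0)


def preprocess(pcd, voxel_size):
    """Downsample and compute the FPFH features that FGR matches between views."""
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return down, fpfh


if __name__ == "__main__":
    # Assumed RGB-D camera calibration; replace with the real sensor intrinsics.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 615.0, 615.0, 320.0, 240.0)
    voxel_size = 0.005  # 5 mm working resolution (assumed)

    clouds, features = [], []
    for view in ("view0", "view1"):  # hypothetical per-view mask/depth files
        mask = np.load(f"{view}_branch_mask.npy")   # binary branch segmentation mask
        depth = np.load(f"{view}_depth.npy")        # depth image in millimetres
        pcd = mask_depth_to_cloud(mask, depth, intrinsic)
        down, fpfh = preprocess(pcd, voxel_size)
        clouds.append(down)
        features.append(fpfh)

    # Fast Global Registration between the two masked branch point clouds.
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        clouds[0], clouds[1], features[0], features[1],
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=voxel_size * 1.5))
    print("Estimated view0 -> view1 transform:\n", result.transformation)
```

In practice, the segmentation masks would come from the DeepLabV3+ or Pix2pix model rather than pre-saved arrays, and more than two views can be registered pairwise or incrementally in the same way.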