The goal of this work is the development of a deep-learning model for use with the novel "Meat Factory Cell" platform, enabling an industrial robot within the system to identify and successfully grip the limbs of a whole pig carcass. The proposed system consists of three main components: (1) a U-Net-based deep learning model that predicts heatmaps encoding the probability distribution of gripping points and key points on the limbs in RGB-D images; (2) a post-processing step that extracts key points from the heatmaps and projects them into 3D space using a pinhole camera model; and (3) a gripper orientation estimation step, which uses the predicted limb key points to define the gripper orientation in 3D space. The proposed system demonstrates high precision and robustness in estimating gripping points on pig limbs on a test dataset covering two gripping definitions, Norwegian and Danish, which account for variation in the slaughter process between two European countries. The Norwegian definition gives mAP(0.5…0.95) = 0.971, mAR(0.5…0.95) = 0.982, and a mean distance error of 13 mm, while the Danish definition gives mAP(0.5…0.95) = 0.985, mAR(0.5…0.95) = 0.995, and a mean distance error of 14 mm. The model was further validated during experimental trials at the Meat Factory Cell test facility at the Norwegian University of Life Sciences (Ås, Norway) with whole pig carcasses (n = 25).
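As an illustration of components (1)–(2), the following minimal Python sketch shows how a key point could be taken from a predicted heatmap and back-projected into 3D camera coordinates with a pinhole camera model. The function names, image size, and camera intrinsics (fx, fy, cx, cy) are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def keypoint_from_heatmap(heatmap):
    """Return the (u, v) pixel location of the heatmap maximum,
    i.e. the most probable gripping / key point location."""
    v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return u, v

def backproject_keypoint(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into 3D camera
    coordinates using the standard pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Illustrative usage with a synthetic heatmap and assumed intrinsics.
heatmap = np.zeros((480, 640))
heatmap[215, 320] = 1.0                     # peak at pixel (u=320, v=215)
u, v = keypoint_from_heatmap(heatmap)
point_3d = backproject_keypoint(u, v, depth_m=1.2,
                                fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point_3d)                             # [x, y, z] in metres
```

In the described pipeline, such 3D key points would then feed the gripper orientation estimation step; the exact procedure used in the paper is not shown here.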