Reducing the risk of disease-related malnutrition requires routine monitoring of nutritional intake in hospitalised patients. There remains a clear need for a more reliable and fully automated method of estimating nutrient intake, as this could improve data accuracy while reducing both participant burden and health costs. In this work, we propose a novel artificial intelligence (AI) based system that estimates nutrient intake efficiently by processing only a pair of RGB-Depth (RGB-D) images. The system comprises a novel multi-task contextual network for food segmentation, a few-shot learning-based classifier for food recognition, and an algorithm for 3D surface construction. Together, these components enable fully automatic estimation of the nutrient intake of each meal through the sequential segmentation, recognition, and estimation of the consumed food volume. A new database containing images and nutrient information for 322 meals has also been built specifically for the development and evaluation of the system. Experimental results demonstrate that the estimated nutrient intake is strongly correlated (> 0.91) with the ground truth and exhibits small mean relative errors (< 20%), outperforming existing approaches to nutrient intake assessment.
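The sequential pipeline described above (segmentation, then recognition, then volume estimation, then nutrient computation) can be sketched as follows. This is only an illustrative outline: every function, food class, density value, and nutrient figure here is a placeholder standing in for the paper's actual components (the multi-task contextual segmentation network, the few-shot classifier, and the 3D surface-construction algorithm), not the real implementation.

```python
# Illustrative sketch of the sequential intake-estimation pipeline.
# Each stage is a simplified stand-in for the learned components
# described in the abstract; values below are hypothetical.

# Hypothetical nutrient content per 100 g and density (g/cm^3) per food class.
NUTRIENTS_PER_100G = {
    "rice": {"kcal": 130, "protein_g": 2.7},
    "chicken": {"kcal": 165, "protein_g": 31.0},
}
DENSITY_G_PER_CM3 = {"rice": 0.9, "chicken": 1.0}


def segment_and_recognise(rgb, depth):
    """Stand-in for the segmentation network plus few-shot classifier:
    returns (region_id, food_class) pairs for the items on the plate."""
    return [("region_a", "rice"), ("region_b", "chicken")]  # placeholder


def estimate_volume_cm3(region_id, depth):
    """Stand-in for 3D surface construction from the depth map."""
    return {"region_a": 150.0, "region_b": 80.0}[region_id]  # placeholder


def nutrient_intake(rgb, depth):
    """Segment, recognise, estimate volume, then sum nutrients per meal."""
    totals = {"kcal": 0.0, "protein_g": 0.0}
    for region_id, food in segment_and_recognise(rgb, depth):
        volume = estimate_volume_cm3(region_id, depth)
        grams = volume * DENSITY_G_PER_CM3[food]   # volume -> weight
        for key, per_100g in NUTRIENTS_PER_100G[food].items():
            totals[key] += per_100g * grams / 100.0
    return totals


print(nutrient_intake(rgb=None, depth=None))
```

Estimating intake (rather than the meal's content alone) would additionally require subtracting the volume remaining after consumption, which this sketch omits.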