The future of personalised medicine lies in the development of increasingly sophisticated digital twins, in which patient-specific data are fed into predictive computational models that support clinicians' decisions on the best therapies or courses of action to treat a patient's condition. Building these personalised models from image data requires segmenting the geometry of interest, estimating intermediate or missing slices, reconstructing the surface, generating a volumetric mesh, and mapping the relevant data onto the reconstructed three-dimensional volume. A wide range of tools, based on both classical and artificial-intelligence methodologies, address the difficulties of each stage, but they typically have to be combined across different software packages in a multistep process. In this work, we develop an all-in-one approach wrapped in a Python library called im2mesh that automates the whole workflow, starting from reading a clinical image and ending with the generation of a 3D finite element mesh carrying the interpolated patient data. We demonstrate this workflow on a patient-specific neuroblastoma tumour. The main advantages of our tool are its straightforward use and its easy integration into broader pipelines.
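For illustration only, the sketch below walks through the same pipeline stages using generic open-source Python tools (SimpleITK, SciPy, scikit-image, meshio); it is not the im2mesh API, whose interface is not described here, and the input file name, threshold, and parameters are hypothetical placeholders. Volumetric tetrahedral meshing is only indicated in a comment, since it would require an external mesher.

```python
# Illustrative sketch only: reproduces the stages named in the abstract
# (segmentation, slice interpolation, surface reconstruction, meshing,
# data mapping) with generic tools, NOT with the im2mesh API itself.
# File names, the threshold, and all parameters are hypothetical.
import numpy as np
import SimpleITK as sitk
from scipy import ndimage
from scipy.interpolate import RegularGridInterpolator
from skimage.measure import marching_cubes
import meshio

# 1) Read the clinical image (array in (z, y, x) order, spacing in mm).
image = sitk.ReadImage("patient_scan.nii.gz")            # hypothetical input
volume = sitk.GetArrayFromImage(image).astype(float)
spacing = np.array(image.GetSpacing())[::-1]             # reorder to (z, y, x)

# 2) Segment the geometry of interest (a simple threshold stands in for the
#    classical or AI-based segmentation methodologies discussed in the paper).
mask = ndimage.binary_fill_holes(volume > 100.0)

# 3) Estimate intermediate slices: upsample along the slice axis so the
#    through-plane resolution matches the in-plane resolution.
zoom_factors = (spacing[0] / spacing[1], 1.0, 1.0)
mask_iso = ndimage.zoom(mask.astype(float), zoom_factors, order=1) > 0.5
volume_iso = ndimage.zoom(volume, zoom_factors, order=1)
spacing_iso = np.array([spacing[1], spacing[1], spacing[2]])

# 4) Reconstruct the surface with marching cubes (vertices in physical mm).
verts, faces, _, _ = marching_cubes(mask_iso.astype(float), level=0.5,
                                    spacing=tuple(spacing_iso))

# 5) Map the relevant image data onto the mesh nodes by interpolation.
axes = [np.arange(n) * s for n, s in zip(volume_iso.shape, spacing_iso)]
sampler = RegularGridInterpolator(axes, volume_iso, bounds_error=False,
                                  fill_value=0.0)
node_data = sampler(verts)

# 6) Export the surface mesh with the interpolated data attached; a volumetric
#    (tetrahedral) finite element mesh would then be generated from this
#    surface with a dedicated mesher such as gmsh or TetGen.
mesh = meshio.Mesh(points=verts, cells=[("triangle", faces)],
                   point_data={"intensity": node_data})
meshio.write("patient_mesh.vtk", mesh)
```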