The segmentation and analysis of coronary arteries from intravascular optical coherence tomography (IVOCT) are important aspects of diagnosing and managing coronary artery disease. Current image processing methods are hindered by the time needed to generate expert-labelled datasets and the potential for bias during the analysis. Therefore, automated, robust, unbiased and timely geometry extraction from IVOCT, using image processing, would be beneficial to clinicians. With clinical application in mind, we aim to develop a model with a small memory footprint that is fast at inference time without sacrificing segmentation quality. Using a large IVOCT dataset of 12,011 expert-labelled images from 22 patients, we construct a new deep learning method based on capsules that automatically produces lumen segmentations. Our dataset contains images with both blood and light artefacts (22.8 %), as well as metallic (23.1 %) and bioresorbable stents (2.5 %). We split the dataset into training (70 %), validation (20 %) and test (10 %) sets and rigorously investigate design variations with respect to upsampling regimes and input selection. We show that our developments lead to a model, DeepCap, that is on par with state-of-the-art machine learning methods in terms of segmentation quality and robustness, while using as little as 12 % of the parameters. This enables DeepCap to achieve per-image inference times up to 70 % faster on GPU and up to 95 % faster on CPU compared to other state-of-the-art models. DeepCap is a robust automated segmentation tool that can aid clinicians in extracting unbiased geometrical data from IVOCT.
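As a rough illustration of the kind of efficiency comparison summarised above (parameter count and per-image inference time on GPU versus CPU), the sketch below counts trainable parameters and times forward passes for an arbitrary segmentation network. The framework (PyTorch), the toy model, the image size and the timing protocol are all illustrative assumptions; this is not the DeepCap implementation.

```python
# Hypothetical sketch: comparing model size and per-image inference speed,
# mirroring the parameter-count and GPU/CPU timing comparison in the abstract.
# Framework (PyTorch) and all names below are illustrative assumptions.
import time
import torch
import torch.nn as nn


def count_parameters(model: nn.Module) -> int:
    """Number of trainable parameters (basis for a '12 % of the parameters'-style comparison)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


def mean_inference_time(model: nn.Module, device: str,
                        n_images: int = 100, image_size: int = 256) -> float:
    """Mean per-image forward-pass time in seconds on the given device."""
    model = model.to(device).eval()
    x = torch.randn(1, 1, image_size, image_size, device=device)  # one IVOCT-like frame
    with torch.no_grad():
        for _ in range(10):  # warm-up iterations before timing
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_images):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_images


if __name__ == "__main__":
    # Placeholder model standing in for a lumen-segmentation network.
    toy_model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1), nn.Sigmoid(),
    )
    print(f"parameters: {count_parameters(toy_model):,}")
    print(f"CPU s/image: {mean_inference_time(toy_model, 'cpu'):.4f}")
    if torch.cuda.is_available():
        print(f"GPU s/image: {mean_inference_time(toy_model, 'cuda'):.4f}")
```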