Photoacoustic tomography (PAT) is a non-destructive, non-ionizing, and rapidly expanding hybrid biomedical imaging technique, yet it struggles to produce clear images when data are acquired with a limited number of detectors or a limited angular view. As a result, reconstructions suffer from significant streak artifacts and low image quality. The integration of deep learning (DL), specifically convolutional neural networks (CNNs), has recently demonstrated strong performance across various PAT tasks. This work introduces a post-processing CNN architecture named residual-dense UNet (RDUNet) to address the streak artifacts in reconstructed PA images. The framework combines the benefits of residual and dense blocks to form high-resolution reconstructed images. The network is trained with two different types of datasets to learn the relationship between the reconstructed images and their corresponding ground truths (GTs). In the first protocol, RDUNet (identified as RDUNet I) was trained on heterogeneous simulated images featuring three distinct phantom types. In the second protocol, RDUNet (referred to as RDUNet II) was trained on a heterogeneous composition of 81% simulated data and 19% experimental data; the motivation is to allow the network to adapt to diverse experimental challenges. The RDUNet algorithms were validated through numerical and experimental studies involving single-disk, T-shape, and vasculature phantoms, and their performance was compared with the widely used backprojection (BP) algorithm and the traditional UNet. This study shows that RDUNet can substantially reduce the required number of detectors from 100 to 25 for simulated test images and to 30 for experimental scenarios.
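To make the architectural idea concrete, the snippet below is a minimal sketch of what a residual-dense building block (the kind of block RDUNet reportedly combines within a UNet backbone) might look like in PyTorch. The class name, layer count, growth rate, and 1x1 fusion convolution are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Hypothetical residual-dense block: densely connected conv layers
    followed by a local residual (skip) connection. This is a generic
    sketch, not the authors' exact RDUNet block."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            # Dense connectivity: each layer sees all previous feature maps.
            in_ch += growth
        # 1x1 conv fuses the concatenated features back to the input width.
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual connection around the dense stack.
        return x + self.fusion(torch.cat(features, dim=1))

# Usage example on a feature map from a reconstructed PA image patch.
block = ResidualDenseBlock(channels=64)
x = torch.randn(1, 64, 128, 128)
y = block(x)  # output shape matches the input: (1, 64, 128, 128)
```

In a post-processing setup of this kind, blocks like the one above would replace plain convolutions at each UNet scale, with the artifact-laden BP reconstruction as the network input and the GT image as the training target.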