Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of the ultrasound (US) waves generated by thermoelastic expansion following light absorption. The quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns the efficient integration of OA and pulse-echo US measurements using the same transducer array. A common approach to this hybridization is to use standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual images to artifact-free (ground truth) images. However, acquiring ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly trained on simulated data. This approach, however, fails to transfer the learned features between the two domains, resulting in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of i) a domain adaptation network that reduces the domain gap between simulated and experimental signals and ii) a sides prediction network that completes the missing signals in limited-view OA datasets acquired from a human forearm with a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without requiring ground truth signals from full tomographic acquisitions.
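A minimal sketch of how such a two-stage signal-domain pipeline could be organized, assuming PyTorch. All module names, channel counts, layer choices, and the (batch, 1, detectors, samples) sinogram layout below are illustrative assumptions, not the authors' published implementation; the abstract specifies only that a domain adaptation stage precedes a sides prediction stage operating on raw signals.

```python
# Illustrative sketch only: architecture details are assumptions inferred
# from the abstract, not the authors' SDAN implementation.
import torch
import torch.nn as nn


class DomainAdapter(nn.Module):
    """Maps simulated OA signals toward the experimental signal distribution."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        # Residual correction keeps the adapted signal close to the input.
        return sinogram + self.net(sinogram)


class SidesPredictor(nn.Module):
    """Predicts the missing lateral detector channels of a limited-view sinogram."""

    def __init__(self, measured: int = 128, missing_per_side: int = 64):
        super().__init__()
        self.missing_per_side = missing_per_side
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # A single head collapses the measured-detector axis and emits one
        # output channel per absent detector element (both sides at once).
        self.head = nn.Conv2d(32, 2 * missing_per_side, kernel_size=(measured, 1))

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(sinogram)            # (b, 32, measured, t)
        sides = self.head(feats).squeeze(2)       # (b, 2*missing, t)
        left, right = sides.split(self.missing_per_side, dim=1)
        # Stitch predicted side channels onto the measured aperture.
        return torch.cat(
            [left.unsqueeze(1), sinogram, right.unsqueeze(1)], dim=2
        )                                         # (b, 1, measured + 2*missing, t)


# Usage with hypothetical dimensions: 128 measured detectors, 1024 time samples.
adapter, sides = DomainAdapter(), SidesPredictor()
x = torch.randn(2, 1, 128, 1024)                  # limited-view sinograms
completed = sides(adapter(x))                     # (2, 1, 256, 1024)
```

The completed sinogram would then feed a conventional tomographic reconstruction, so that artifact reduction happens entirely in the signal domain without ever requiring full-view ground truth images.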