Deep regression 2D-3D ultrasound registration for liver motion correction in focal tumour thermal ablation.

Abstract

Liver tumour ablation procedures require accurate placement of the needle applicator at the tumour centroid. The lower-cost and real-time nature of ultrasound (US) has advantages over computed tomography for applicator guidance; however, in some patients, liver tumours may be occult on US and tumour mimics can make lesion identification challenging. Image registration techniques can aid in interpreting anatomical details and identifying tumours, but their clinical application has been hindered by the tradeoff between alignment accuracy and runtime performance, particularly when compensating for liver motion due to patient breathing or movement. Therefore, we propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion. Specifically, our approach can correlate imbalanced 2D and 3D US image features and use continuous 6D rotation representations to enhance the model's training stability. The dataset was divided into 2388, 196, and 193 image pairs for training, validation, and testing, respectively. Our approach achieved a mean Euclidean distance error of and a mean geodesic angular error of , with a runtime of per 2D-3D US image pair. These results demonstrate that our approach can achieve accurate alignment and clinically acceptable runtime, indicating potential for clinical translation.
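The continuous 6D rotation representation mentioned in the abstract (introduced by Zhou et al.) avoids the discontinuities of Euler angles and quaternions by predicting two 3-vectors and orthogonalizing them into a rotation matrix. A minimal NumPy sketch of the general technique (the function name is ours, not from the paper):

```python
import numpy as np

def rotation_from_6d(x):
    """Map a 6D vector to a 3x3 rotation matrix via Gram-Schmidt
    orthogonalization, as in the continuous 6D representation of
    Zhou et al. (CVPR 2019)."""
    a1, a2 = x[:3].astype(float), x[3:].astype(float)
    b1 = a1 / np.linalg.norm(a1)        # first column: normalize
    a2 = a2 - np.dot(b1, a2) * b1       # remove component along b1
    b2 = a2 / np.linalg.norm(a2)        # second column
    b3 = np.cross(b1, b2)               # third column completes the frame
    return np.stack([b1, b2, b3], axis=1)
```

Because the map is continuous over its 6D input, gradients behave better during training than with angle- or quaternion-valued outputs, which is the stability benefit the abstract alludes to.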

Similar Papers
  • Research Article
  • Cited by 5
  • 10.1002/mp.13548
Three-dimensional therapy needle applicator segmentation for ultrasound-guided focal liver ablation.
  • May 6, 2019
  • Medical physics
  • Derek J Gillies + 6 more

Minimally invasive procedures, such as microwave ablation, are becoming first-line treatment options for early-stage liver cancer due to lower complication rates and shorter recovery times than conventional surgical techniques. Although these procedures are promising, one reason preventing widespread adoption is inadequate local tumor ablation leading to observations of higher local cancer recurrence compared to conventional procedures. Poor ablation coverage has been associated with two-dimensional (2D) ultrasound (US) guidance of the therapy needle applicators and has stimulated investigation into the use of three-dimensional (3D) US imaging for these procedures. We have developed a supervised 3D US needle applicator segmentation algorithm using a single user input to augment the addition of 3D US to the current focal liver tumor ablation workflow with the goals of identifying and improving needle applicator localization efficiency. The algorithm is initialized by creating a spherical search space of line segments around a manually chosen seed point that is selected by a user on the needle applicator visualized in a 3D US image. The most probable trajectory is chosen by maximizing the count and intensity of threshold voxels along a line segment and is filtered using the Otsu method to determine the tip location. Homogeneous tissue mimicking phantom images containing needle applicators were used to optimize the parameters of the algorithm prior to a four-user investigation on retrospective 3D US images of patients who underwent microwave ablation for liver cancer. Trajectory, axis localization, and tip errors were computed based on comparisons to manual segmentations in 3D US images. Segmentation of needle applicators in ten phantom 3D US images was optimized to median (Q1, Q3) trajectory, axis, and tip errors of 2.1 (1.1, 3.6)°, 1.3 (0.8, 2.1) mm, and 1.3 (0.7, 2.5) mm, respectively, with a mean±SD segmentation computation time of 0.246±0.007s. 
Use of the segmentation method with a 16-patient in vivo 3D US dataset resulted in median (Q1, Q3) trajectory, axis, and tip errors of 4.5 (2.4, 5.2)°, 1.9 (1.7, 2.1) mm, and 5.1 (2.2, 5.9) mm based on all users. Segmentation of needle applicators in 3D US images during minimally invasive liver cancer therapeutic procedures could provide a utility that enables enhanced needle applicator guidance, placement verification, and improved clinical workflow. A semi-automated 3D US needle applicator segmentation algorithm used in vivo demonstrated localization of the visualized trajectory and tip with less than 5° and 5.2 mm errors, respectively, in less than 0.31 s. This offers the ability to assess and adjust needle applicator placements intraoperatively to potentially decrease the observed liver cancer recurrence rates associated with current ablation procedures. Although optimized for deep and oblique-angle needle applicator insertions, this proposed workflow has the potential to be altered for a variety of image-guided minimally invasive procedures to improve localization and verification of therapy needle applicators intraoperatively.
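The trajectory error reported above is the angle between the estimated and manually segmented needle axes. A hedged sketch of how such a metric is commonly computed (our illustration, not code from the paper):

```python
import numpy as np

def trajectory_error_deg(axis_est, axis_ref):
    """Angle in degrees between two needle-axis direction vectors,
    ignoring sign (an axis has no preferred orientation)."""
    a = np.asarray(axis_est, float)
    b = np.asarray(axis_ref, float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    cosang = abs(np.dot(a, b))          # abs(): compare axes, not rays
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

The `clip` guards against floating-point values marginally outside [-1, 1], which would otherwise make `arccos` return NaN.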

  • Research Article
  • Cited by 13
  • 10.1364/ao.55.004024
Speckle-reduction algorithm for ultrasound images in complex wavelet domain using genetic algorithm-based mixture model.
  • May 13, 2016
  • Applied Optics
  • Muhammad Shahin Uddin + 5 more

Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.

  • Research Article
  • Cited by 16
  • 10.1118/1.4903945
Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images.
  • Jan 1, 2015
  • Medical Physics
  • Chijun Weon + 4 more

Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, existing techniques are limited in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient's body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As the feature(s) for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with diffuse liver disease, the authors propose its novel use. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer.
The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.
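The real-time 2D matching step above relies on a gradient-based similarity measure. One common form is normalized cross-correlation of gradient magnitudes; the sketch below is an illustrative stand-in, and the paper's exact measure may differ:

```python
import numpy as np

def gradient_correlation(img_a, img_b):
    """Normalized cross-correlation of image gradient magnitudes,
    a simple gradient-based similarity between two 2D images."""
    def grad_mag(im):
        gy, gx = np.gradient(im.astype(float))  # row, column derivatives
        return np.hypot(gx, gy)
    ga, gb = grad_mag(img_a), grad_mag(img_b)
    ga -= ga.mean()
    gb -= gb.mean()
    denom = np.sqrt((ga ** 2).sum() * (gb ** 2).sum())
    return (ga * gb).sum() / denom if denom > 0 else 0.0
```

Gradient-based measures are popular for US-to-MR/CT matching because edge locations correspond across modalities even when raw intensities do not.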

  • Abstract
  • 10.1016/j.ijrobp.2021.07.536
Self-Supervised Learning-Based High-Resolution Ultrasound Imaging for Prostate Brachytherapy
  • Oct 22, 2021
  • International Journal of Radiation Oncology*Biology*Physics
  • X Yang + 10 more


  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-319-19387-8_53
Fast Registration of Intraoperative Ultrasound and Preoperative MR Images Based on Calibrations of 2D and 3D Ultrasound Probes
  • Jan 1, 2015
  • Fang Chen + 2 more

During intraoperative-ultrasound-guided intervention, ultrasound (US) is often registered with other high-quality preoperative images such as computed tomography (CT) or magnetic resonance (MR) to improve navigation accuracy. However, real-time registration is difficult to achieve due to the differences in image modality and dimensionality. To solve this problem, we use a preoperative 3D US image collected with a calibrated 3D probe to simplify 2D US to 3D MR registration into two more easily achieved steps: intra-modal 2D-3D US registration and preoperative 3D US to 3D MR registration. To achieve fast intraoperative 2D US and preoperative 3D US registration, we exploit the calibration results of the 2D and 3D US probes to obtain a near-optimal registration transform. Intraoperatively, only an automatic local adjustment is then needed, which makes real-time registration possible. To achieve effective calibrations, we design an improved calibration phantom and propose a warm-start iterative closest points (ICP) method.
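The warm-start ICP idea above, beginning from a calibration-derived transform and refining locally, can be illustrated with a minimal point-to-point ICP (our own sketch under simplified assumptions, not the chapter's implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def warm_start_icp(src, dst, R0, t0, iters=10):
    """Refine an initial transform (R0, t0) by alternating
    nearest-neighbour matching and rigid fitting; a good warm
    start keeps the matching step local and reliable."""
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (for clarity, not speed)
        idx = np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = rigid_fit(src, dst[idx])
    return R, t
```

In practice a k-d tree replaces the brute-force matching, and the warm start from probe calibration is what makes the local refinement converge quickly.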

  • Research Article
  • Cited by 48
  • 10.1016/j.media.2016.06.038
Automatic segmentation approach to extracting neonatal cerebral ventricles from 3D ultrasound images.
  • Jul 9, 2016
  • Medical Image Analysis
  • Wu Qiu + 6 more


  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-319-24574-4_11
Automatic 3D US Brain Ventricle Segmentation in Pre-Term Neonates Using Multi-phase Geodesic Level-Sets with Shape Prior
  • Jan 1, 2015
  • Wu Qiu + 7 more

Pre-term neonates born with a low birth weight (<1500 g) are at increased risk for developing intraventricular hemorrhage (IVH). 3D ultrasound (US) imaging has been used to quantitatively monitor the ventricular volume in IVH neonates, instead of the 2D US typically used clinically, which relies on linear measurements from a single slice and visual estimates to determine ventricular dilation. To translate 3D US imaging into the clinical setting, an accurate segmentation algorithm would be desirable to automatically extract the ventricular system from 3D US images. In this paper, we propose an automatic multi-region segmentation approach for delineating lateral ventricles of pre-term neonates from 3D US images, which makes use of a multi-phase geodesic level-sets (MP-GLS) segmentation technique via a variational region competition principle and a spatial shape prior derived from pre-segmented atlases. Experimental results using 15 IVH patient images show that the proposed GPU-implemented approach is accurate in terms of the Dice similarity coefficient (DSC), the mean absolute surface distance (MAD), and the maximum absolute surface distance (MAXD). To the best of our knowledge, this paper reports the first study on automatic segmentation of the ventricular system of premature neonatal brains from 3D US images.

  • Research Article
  • Cited by 7
  • 10.1109/tuffc.2022.3180980
Actuated Reflector-Based 3-D Ultrasound Imaging With Synthetic Aperture Focusing.
  • Jul 29, 2022
  • IEEE transactions on ultrasonics, ferroelectrics, and frequency control
  • Yichuan Tang + 3 more

Three-dimensional (3D) ultrasound (US) imaging addresses the limited field-of-view (FOV) of conventional two-dimensional (2D) US imaging by providing 3D viewing of the anatomy. 3D US imaging has been extensively adapted for diagnosis and image-guided surgical intervention. However, conventional approaches to implement 3D US imaging require either expensive and sophisticated 2D array transducers or external actuation mechanisms to move a one-dimensional array mechanically. Here, we propose a 3D US imaging mechanism using an actuated acoustic reflector instead of the sensor elements for volume acquisition with significantly extended 3D FOV, which can be implemented with simple hardware and a compact size. To improve image quality on the elevation plane, we implemented the synthetic aperture focusing (SAF) method according to the diagonal geometry of the virtual element array in the proposed imaging mechanism for elevation beamforming. We first evaluated the proposed imaging mechanism and SAF with simulated point targets and cyst targets. Results on point targets suggested improved image quality on the elevation plane, and results on cyst targets demonstrated a potential to improve 3D visualization of human anatomy. We built a prototype imaging system with a 3D FOV of 38 mm (lateral) by 38 mm (elevation) by 50 mm (axial) and collected data in imaging experiments with phantoms. Experimental data showed consistency with simulation results. The SAF method improved quantification of cyst volume in the breast-mimicking phantom compared with no elevation beamforming. These results suggest that the proposed 3D US imaging mechanism could potentially be applied in clinical scenarios.

  • Research Article
  • Cited by 11
  • 10.1002/mp.14946
Self-supervised learning for accelerated 3D high-resolution ultrasound imaging.
  • Jun 2, 2021
  • Medical physics
  • Xianjin Dai + 9 more

Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images are highly desired from sparsely acquired two-dimensional (2D) images. This study aims to develop a deep learning-based algorithm to reconstruct high-resolution (HR) 3D US images reliant only on the acquired sparsely distributed 2D images. We propose a self-supervised learning framework using cycle-consistent generative adversarial networks (cycleGAN), where two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained through down-sampling the original US images along the two axes, respectively. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to original HR US images, cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models. The proposed method was assessed on two different datasets: one is automatic breast ultrasound (ABUS) images from 70 breast cancer patients; the other was collected from 45 prostate cancer patients. By applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) value of 0.90±0.15, a peak signal-to-noise ratio (PSNR) value of 37.88±0.88 dB, and a visual information fidelity (VIF) value of 0.69±0.01, which significantly outperforms bicubic interpolation. Similar performances have been achieved using the enhancement factor of 5 in these breast cases and using the enhancement factors of 5 and 10 in the prostate cases. We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images.
Significant improvement on through-plane resolution has been achieved by only using the acquired 2D images without any external atlas images. Its self-supervision capability could accelerate HR US imaging.
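The self-supervision trick above, down-sampling HR in-plane slices along one axis to mimic the coarse through-plane spacing, yields paired training data without any external atlas. A hedged sketch of that pair-generation step (names and shapes are our illustration, not the paper's code):

```python
import numpy as np

def make_lr_pairs(volume, factor):
    """Build paired (LR, HR) 2D slices for self-supervised training by
    down-sampling each HR in-plane slice along one axis, mimicking
    the sparse through-plane sampling."""
    pairs = []
    for k in range(volume.shape[0]):
        hr = volume[k]              # an in-plane HR slice
        lr = hr[::factor, :]        # keep every `factor`-th row
        pairs.append((lr, hr))
    return pairs
```

A second set of pairs down-sampled along the other in-plane axis would train the second cycleGAN model, matching the two-model design described in the abstract.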

  • Research Article
  • 10.1093/pch/21.supp5.e86
Quantitative 3D and 2D Head Ultrasound to Determine Thresholds for Intervention In Preterm Neonates with Posthemorrhagic Ventricular Dilation
  • Jun 1, 2016
  • Paediatrics &amp; Child Health
  • J Kishimoto + 5 more

BACKGROUND: Preterm neonates with intraventricular hemorrhage (IVH) often acquire posthemorrhagic ventricle dilation (PHVD), which, when severe, can lead to neurological impairment. Cranial 2D ultrasound (US) images are used for the diagnosis and monitoring of PHVD; however, there is no consensus on the use of 2D US images to guide treatment. This can lead to delays in interventions, and the potential for brain injury. We have developed a 3D US system that has been shown to accurately detect changes in ventricle volumes (VV). OBJECTIVES: We investigate the utility of using 3D and 2D US measurements to determine thresholds for treatment of neonates with PHVD and to predict the need for further treatments. DESIGN/METHODS: Neonates were imaged twice weekly in accordance with a protocol approved by the research ethics board. 3D US images were manually segmented to obtain VV. 2D measurements included ventricle index, anterior horn widths, third ventricle width, and largest thalamo-occipital distance. The rate of change for each measurement was calculated. Decisions to perform ventricular taps (VTs) to relieve intracranial pressure were made independently by neurosurgeons who were blinded to study images. Receiver operating characteristic (ROC) curves were generated using the sensitivity and specificity of the rates of change of sonographic parameters in predicting the need for VT. For each parameter, the optimal threshold for intervention was estimated from the area under the ROC, and positive and negative predictive values (PPV, NPV) were calculated. Additionally, we investigated whether US measurements predicted the need for multiple interventions. RESULTS: 23 neonates with PHVD were enrolled; 8 required interventions.
The best predictor of the need for initial intervention was the rate of change in VV when a threshold of >2.04 cm3/day was used within the first three weeks of life (NPV and PPV of 1), and this measurement was able to determine whether a patient would then require further interventions when a threshold of -0.04 cm3/day was used at imaging time points after the first intervention (NPV and PPV of 1). 2D measurements were less sensitive and/or less specific (sensitivity of 88-57%, specificity of 100-79%, PPV of 0.88-0.57 and NPV of 0.93-0.79). CONCLUSION: 3D US VV can predict the requirement for interventional ventricular tap in neonates with IVH, and can identify patients that have resolving PHVD following initial intervention, with higher sensitivity and specificity than 2D US measurements. These findings show promise for early classification of neonates using 3D US for prediction of interventional therapy, potentially aiding in timely management of these patients.
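The threshold evaluation described above reduces to counting how a rate-of-change cutoff splits treated and untreated patients. A hedged sketch of those standard definitions (our illustration; the study's data are not reproduced here):

```python
def threshold_metrics(values, labels, threshold):
    """Sensitivity, specificity, PPV and NPV of classifying
    value > threshold as 'needs intervention'. `labels` holds the
    ground-truth intervention decisions."""
    tp = sum(v > threshold and y for v, y in zip(values, labels))
    fp = sum(v > threshold and not y for v, y in zip(values, labels))
    fn = sum(v <= threshold and y for v, y in zip(values, labels))
    tn = sum(v <= threshold and not y for v, y in zip(values, labels))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}
```

Sweeping `threshold` over the observed rates of change and plotting sensitivity against 1 - specificity yields the ROC curve from which the optimal cutoff is read.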

  • Abstract
  • Cited by 1
  • 10.1016/j.annemergmed.2017.07.364
394 Brain Imaging Using a Novel Three-Dimensional Ultrasound System
  • Sep 18, 2017
  • Annals of Emergency Medicine
  • J.S Broder + 6 more


  • Conference Article
  • Cited by 2
  • 10.1117/12.2581749
Automatic deep learning-based segmentation of neonatal cerebral ventricles from 3D ultrasound images
  • Feb 15, 2021
  • Zachary Szentimrey + 3 more

In comparison to two-dimensional (2D) ultrasound (US), three-dimensional (3D) US imaging is a more sensitive alternative for monitoring the size and shape of neonatal cerebral lateral ventricles. It can be used when following posthemorrhagic ventricular dilatation after intraventricular hemorrhage (IVH), which is bleeding inside the lateral ventricles of the brain in preterm infants. Tracking ventricular dilatation is important in neonates as it can cause increased intracranial pressure, leading to neurological damage. However, manually segmenting 3D US images is time-consuming and tedious due to poor image contrast and the complex shape of cerebral ventricles. In this paper, we describe an automated segmentation method based on the U-Net model for the segmentation of 3D US images that may contain one or both ventricle(s). We trained and tested two models, a 3D U-Net and a slice-based 2D U-Net, on a total of 193 3D US images (105 one-ventricle and 88 two-ventricle images). To mitigate the class imbalance of the object vs. background, we augmented the images through rotation and translation. As a benchmark comparison, we also trained a U-Net++ model and compared the results with the original U-Net. When all the images were used in a single U-Net model, the 3D U-Net and 2D U-Net yielded a Dice similarity coefficient (DSC) of 0.67±0.16 and 0.76±0.09, respectively. When two 2D U-Net models were trained separately, they yielded a DSC of 0.82±0.09 and 0.74±0.07 for one-ventricle and two-ventricle images, respectively. Compared to the best previous fully automated method, the proposed 2D U-Net method achieved a comparable DSC when using all images but a 0.05 higher DSC when using only one-ventricle images.
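The Dice similarity coefficient used above measures overlap between a predicted and a reference binary mask; a minimal sketch of the standard definition:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    seg = np.asarray(seg, bool)
    ref = np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 2.0 * inter / total if total else 1.0  # two empty masks agree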

  • Research Article
  • Cited by 23
  • 10.1118/1.3056458
Nonrigid registration of three‐dimensional ultrasound and magnetic resonance images of the carotid arteries
  • Jan 12, 2009
  • Medical Physics
  • Nuwan D Nanayakkara + 6 more

Atherosclerosis at the carotid bifurcation can result in cerebral emboli, which in turn can block the blood supply to the brain causing ischemic strokes. Noninvasive imaging tools that better characterize arterial wall, and atherosclerotic plaque structure and composition may help to determine the factors which lead to the development of unstable lesions, and identify patients at risk of plaque disruption and stroke. Carotid magnetic resonance (MR) imaging allows for the characterization of carotid vessel wall and plaque composition, the characterization of normal and pathological arterial wall, the quantification of plaque size, and the detection of plaque integrity. On the other hand, various ultrasound (US) measurements have also been used to quantify atherosclerosis, carotid stenosis, intima-media thickness, total plaque volume, total plaque area, and vessel wall volume. Combining the complementary information provided by 3D MR and US carotid images may lead to a better understanding of the underlying compositional and textural factors that define plaque and wall vulnerability, which may lead to better and more effective stroke prevention strategies and patient management. Combining these images requires nonrigid registration to correct the nonlinear misalignments caused by relative twisting and bending in the neck due to different head positions during the two image acquisition sessions. The high degree of freedom and large number of parameters associated with existing nonrigid image registration methods causes several problems including unnatural plaque morphology alteration, high computational complexity, and low reliability. Thus, a "twisting and bending" model was used with only six parameters to model the normal movement of the neck for nonrigid registration. The registration technique was evaluated using 3D US and MR carotid images at two field strengths, 1.5 and 3.0 T, of the same subject acquired on the same day. 
The mean registration error between the segmented carotid artery wall boundaries in the target US image and the registered MR images was calculated using a distance-based error metric after applying a "twisting and bending" model based nonrigid registration algorithm. An average registration error of 1.4 ± 0.3 mm was obtained for 1.5 T MR and 1.5 ± 0.4 mm for 3.0 T MR, when registered with 3D US images using the nonrigid registration technique presented in this paper. Visual inspection of segmented vessel surfaces also showed a substantial improvement of alignment with this nonrigid registration technique compared to rigid registration.
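A common distance-based error metric of the kind used above is the symmetric mean closest-point distance between the two boundary point sets; this sketch is an illustrative stand-in, not the paper's exact metric:

```python
import numpy as np

def mean_boundary_distance(pts_a, pts_b):
    """Symmetric mean closest-point distance between two boundary
    point sets (N x d and M x d arrays)."""
    # pairwise distances, brute force for clarity
    d = np.linalg.norm(pts_a[:, None] - pts_b[None], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Averaging both directions keeps the metric symmetric, so neither boundary is privileged as "the" reference.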

  • Research Article
  • Cited by 13
  • 10.1109/jbhi.2021.3085019
A Deep Learning Localization Method for Measuring Abdominal Muscle Dimensions in Ultrasound Images.
  • May 31, 2021
  • IEEE Journal of Biomedical and Health Informatics
  • Alzayat Saleh + 5 more

Health professionals extensively use Two-Dimensional (2D) Ultrasound (US) videos and images to visualize and measure internal organs for various purposes including evaluation of muscle architectural changes. US images can be used to measure abdominal muscle dimensions for the diagnosis and creation of customized treatment plans for patients with Low Back Pain (LBP); however, they are difficult to interpret. Due to high variability, skilled professionals with specialized training are required to take measurements to avoid low intra-observer reliability. This variability stems from the challenging nature of accurately finding the correct spatial location of measurement endpoints in abdominal US images. In this paper, we use a Deep Learning (DL) approach to automate the measurement of abdominal muscle thickness in 2D US images. By treating the problem as a localization task, we develop a modified Fully Convolutional Network (FCN) architecture to generate blobs of coordinate locations of measurement endpoints, similar to what a human operator does. We demonstrate that using the TrA400 US image dataset, our network achieves a Mean Absolute Error (MAE) of 0.3125 on the test set, which almost matches the performance of skilled ultrasound technicians. Our approach can facilitate next steps for automating the process of measurements in 2D US images, while reducing inter-observer as well as intra-observer variability for more effective clinical outcomes.

  • Conference Article
  • Cited by 6
  • 10.1109/isbi.2010.5490329
Sensorless and real-time registration between 2D ultrasound and preoperative images of the liver
  • Jan 1, 2010
  • Duhgoon Lee + 4 more

Synchronization between real-time ultrasound (US) and preoperative images can provide much information for US-guided intervention. For this synchronization, we present a real-time registration system between the two images of the liver without the aid of sensors. In this system, we first generate a 4D preoperative image, composed of multiple 3D images along the respiratory cycle, by considering their local deformation. In the intraoperative stage, we obtain the pose information of a pose-fixed 3D US transducer by using several 3D US images. We then acquire 2D US images and find their corresponding images in real time from the 4D preoperative image. The registration is done by comparing a gradient-based similarity measure between a 2D US image and generated 2D preoperative image candidates. Visual assessment of the registration results confirms the feasibility of the proposed system for image guidance.
