End-to-end memory-efficient reconstruction for cone beam CT.

Cone beam computed tomography (CBCT) plays an important role in many medical fields nowadays. Unfortunately, the potential of this imaging modality is hampered by lower image quality compared to conventional CT, and producing accurate reconstructions remains challenging. A lot of recent research has been directed towards reconstruction methods relying on deep learning, which have shown great promise for various imaging modalities. However, practical application of deep learning to CBCT reconstruction is complicated by several issues, such as the exceedingly high memory costs of deep learning methods when working with fully 3D data. Additionally, deep learning methods proposed in the literature are often trained and evaluated only on data from a specific region of interest, thus raising concerns about possible lack of generalization to other regions. In this work, we aim to address these limitations and propose LIRE: a learned invertible primal-dual iterative scheme for CBCT reconstruction, wherein we employ a U-Net architecture in each primal block and a residual convolutional neural network (CNN) architecture in each dual block. Memory requirements of the network are substantially reduced while preserving its expressive power through a combination of invertible residual primal-dual blocks and patch-wise computations inside each of the blocks during both the forward and backward pass. These techniques enable us to train on data with isotropic 2 mm voxel spacing, clinically relevant projection count and detector panel resolution on current hardware with 24 GB of video random access memory (VRAM). Two LIRE models, for the small and the large field-of-view (FoV) settings, were trained and validated on a set of 260 + 22 thorax CT scans and tested using a set of 142 thorax CT scans plus an out-of-distribution dataset of 79 head and neck CT scans. For both settings, our method surpasses the classical methods and the deep learning baselines on both test sets. On the thorax CT set, our method achieves a peak signal-to-noise ratio (PSNR) of 33.84 ± 2.28 for the small FoV setting and 35.14 ± 2.69 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. On the head and neck CT set, our method achieves a PSNR of 39.35 ± 1.75 for the small FoV setting and 41.21 ± 1.41 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. Additionally, we demonstrate that LIRE can be finetuned to reconstruct high-resolution CBCT data with the same geometry but 1 mm voxel spacing and higher detector panel resolution, where it outperforms the U-Net baseline as well. Learned invertible primal-dual schemes with additional memory optimizations can be trained to reconstruct CBCT volumes directly from the projection data with clinically relevant geometry and resolution. Such methods can offer better reconstruction quality and generalization compared to classical deep learning baselines.
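
The memory saving that makes this feasible comes from the additive-coupling structure of invertible residual blocks: because a block's input can be recovered exactly from its output, intermediate activations do not have to be cached for backpropagation. The sketch below is a minimal NumPy illustration of that idea under generic assumptions; the split into two channel groups and the residual functions F and G are placeholders, not the actual LIRE sub-networks.

```python
import numpy as np

def make_residual_fn(seed):
    """Placeholder residual function (stands in for the U-Net / CNN sub-networks)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((4, 4)) * 0.1
    return lambda x: np.tanh(x @ w)

F, G = make_residual_fn(0), make_residual_fn(1)

def forward(x1, x2):
    # Additive coupling: each half is updated using the other half only,
    # so the mapping is exactly invertible.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Recover the inputs from the outputs; no activations need to be stored.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.default_rng(2).standard_normal((2, 8, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(x1, r1), np.allclose(x2, r2))  # True True
```

In a deep learning framework, the same coupling structure lets the backward pass recompute activations from the block outputs on the fly instead of storing them, which is what keeps the fully 3D memory footprint within the 24 GB budget mentioned above.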

Development and benchmarking of a dose rate engine for raster-scanned FLASH helium ions.

Radiotherapy with charged particles at high dose and ultra-high dose rate (uHDR) is a promising technique to further increase the therapeutic index of patient treatments. Dose rate is a key quantity for predicting the so-called FLASH effect at uHDR settings. However, recent works introduced varying calculation models to report dose rate, which is sensitive to the delivery method, scanning path (in active beam delivery), and beam intensity. This work introduces an analytical dose rate calculation engine for raster-scanned charged particle beams that is able to predict dose rate from the irradiation plan and recorded beam intensity. The importance of standardized dose rate calculation methods is explored here. Dose is obtained with an analytical pencil beam algorithm, using pre-calculated databases for integrated depth dose distributions and lateral penumbra. Dose rate is then calculated by combining dose information with the respective particle fluence (i.e., time information) using three dose rate calculation models (mean, instantaneous, and threshold-based). Dose rate predictions for all three models are compared to uHDR helium ion beam (145.7 MeV/u, range in water of approximately 14.6 cm) measurements performed at the Heidelberg Ion Beam Therapy Center (HIT) with a diamond-detector prototype. Three scanning patterns (scanned or snake-like) and four field sizes are used to investigate the dose rate differences. Dose rate measurements were in good agreement with in silico distributions generated by the engine introduced here. Relative differences in dose rate were below 10% for varying depths in water, from 2.3 to 14.8 cm, as well as laterally in a near-Bragg-peak area. In the entrance channel of the helium ion beam, dose rates were predicted within 7% on average for varying irradiated field sizes and scanning patterns. Large differences in absolute dose rate values were observed for the different calculation methods. For raster-scanned irradiations, the deviation between mean and threshold-based dose rate at the investigated point was found to increase with the field size, up to 63% for a 10 mm × 10 mm field, while no significant differences were observed for snake-like scanning paths. This work introduces the first dose rate calculation engine benchmarked to instantaneous dose rate, enabling dose rate predictions for physical and biophysical experiments. Dose rate is greatly affected by varying particle fluence, scanning path, and calculation method, highlighting the need for a consensus among the FLASH community on how to calculate and report dose rate in the future. The engine introduced here could help provide the necessary details for the analysis of the sparing effect and uHDR conditions.
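
The gap reported between calculation models is easy to see once the definitions for a single point are written out: the mean dose rate divides the total dose by the full delivery time, whereas a threshold-based dose rate only counts the time during which the accumulated dose rises from a small threshold d to D - d. The snippet below is a schematic comparison on a synthetic dose-accumulation curve; the threshold value and the toy delivery timeline are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

def mean_dose_rate(t, d_cum):
    """Total dose divided by total delivery time."""
    return d_cum[-1] / (t[-1] - t[0])

def threshold_dose_rate(t, d_cum, d_thresh=0.1):
    """Threshold-based dose rate at a point: dose delivered between crossing
    d_thresh and (D_total - d_thresh), divided by the time between those crossings."""
    d_total = d_cum[-1]
    t_start = np.interp(d_thresh, d_cum, t)            # invert cumulative dose -> time
    t_end = np.interp(d_total - d_thresh, d_cum, t)
    return (d_total - 2 * d_thresh) / (t_end - t_start)

# Toy raster-scan timeline: the point receives nearly all of its dose in a fast
# initial sweep, followed by a long, low-dose tail while distant spots are scanned.
t = np.array([0.0, 0.05, 0.10, 2.00])      # s
d_cum = np.array([0.0, 2.0, 3.9, 4.0])     # Gy (cumulative)

print(f"mean dose rate:      {mean_dose_rate(t, d_cum):.1f} Gy/s")   # ~2.0 Gy/s
print(f"threshold dose rate: {threshold_dose_rate(t, d_cum):.1f} Gy/s")  # ~39 Gy/s
```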

Single-shot quantitative x-ray imaging using a primary modulator and dual-layer detector.

Conventional x-ray imaging and fluoroscopy have limitations in quantitation due to several challenges, including scatter, beam hardening, and overlapping tissues. Dual-energy (DE) imaging, with its capability to quantify the area density of specific materials, is well-suited to address such limitations, but only if the dual-energy projections are acquired with perfect spatial and temporal alignment and corrected for scatter. In this work, we propose single-shot quantitative imaging (SSQI) by combining the use of a primary modulator (PM) and a dual-layer (DL) detector, which enables motion-free DE imaging with scatter correction in a single exposure. The key components of our SSQI setup are a PM and a DL detector, where the former enables scatter correction for the latter while the latter enables beam hardening correction for the former. The SSQI algorithm allows simultaneous recovery of two material-specific images and two scatter images using four sub-measurements from the PM encoding. The concept was first demonstrated using a simulation of chest x-ray imaging for a COVID-19 patient. For validation, we set up the SSQI geometry on our tabletop system and imaged acrylic and copper slabs with known thicknesses (acrylic: 0-22.5 cm; copper: 0-0.9 mm), estimated scatter with our SSQI algorithm, and compared the material decomposition (MD) for different combinations of the two materials with ground truth. Second, we imaged an anthropomorphic chest phantom containing contrast in the coronary arteries and compared the MD with and without SSQI. Lastly, to evaluate SSQI in dynamic applications, we constructed a flow phantom that enabled dynamic imaging of iodine contrast. Our simulation study demonstrated that SSQI led to accurate scatter correction and MD, particularly for smaller focal blur and finer PM pitch. In the validation study, we found that the root mean squared error (RMSE) of the SSQI estimation was 0.13 cm for acrylic and 0.04 mm for copper. For the anthropomorphic phantom, direct MD resulted in incorrect interpretation of contrast and soft tissue, while SSQI successfully distinguished them quantitatively, reducing the RMSE in material-specific images by 38%-92%. For the flow phantom, SSQI was able to perform accurate dynamic quantitative imaging, separating contrast from the background. We demonstrated the potential of SSQI for robust quantitative x-ray imaging. The integration of SSQI is straightforward with the addition of a PM and an upgrade to a DL detector, which may enable its widespread adoption, including in techniques such as radiography and dynamic imaging (i.e., real-time image guidance and cone-beam CT).
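
The material-decomposition step underlying these results can be illustrated with a simplified two-material model: with scatter removed, the log-transmission seen by the top and bottom detector layers is approximately linear in the acrylic and copper path lengths, so the two thicknesses follow from a small linear solve per pixel. The sketch below assumes known effective attenuation coefficients and ignores beam hardening, which the actual SSQI algorithm handles through the dual-layer spectral model, so it is only a schematic of the decomposition, not the method in the paper.

```python
import numpy as np

# Assumed effective linear attenuation coefficients (1/cm) of acrylic and copper
# as seen by the low-energy (top) and high-energy (bottom) detector layers.
# These numbers are illustrative placeholders, not calibrated values.
A = np.array([[0.25, 65.0],    # top layer:    [mu_acrylic, mu_copper]
              [0.21, 20.0]])   # bottom layer: [mu_acrylic, mu_copper]

def decompose(log_top, log_bottom):
    """Solve A @ [t_acrylic, t_copper] = [-ln(I/I0)_top, -ln(I/I0)_bottom]
    for the material path lengths at one pixel."""
    rhs = np.array([log_top, log_bottom])
    return np.linalg.solve(A, rhs)

# Forward-simulate one pixel with 10 cm acrylic and 0.05 cm (0.5 mm) copper,
# then recover the thicknesses from the two log-measurements.
t_true = np.array([10.0, 0.05])
log_meas = A @ t_true
t_est = decompose(*log_meas)
print(t_est)   # ~[10.0, 0.05]
```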

Improvement of LED-based photoacoustic imaging using lag-coherence factor (LCF) beamforming.

Owing to its portability, affordability, and energy efficiency, LED-based photoacoustic (PA) imaging is becoming increasingly popular compared to its laser-based alternative, mainly for superficial vascular imaging applications. However, this technique suffers from low SNR and thereby limited imaging depth. As a result, the visual image quality of LED-based PA imaging is not optimal, especially in sub-surface vascular imaging applications. The combination of a linear ultrasound (US) probe and LED arrays is the most common implementation in LED-based PA imaging, which is currently being explored for different clinical imaging applications. Traditional delay-and-sum (DAS) is the most common beamforming algorithm in linear array-based PA detection. Side-lobes and reconstruction-related artifacts make the DAS performance unsatisfactory and too poor for clinical implementation. In this work, we explored a new weighting-based image processing technique for LED-based PA imaging to yield improved image quality compared to traditional methods. We propose a lag-coherence factor (LCF), which is fundamentally based on the spatial auto-correlation of the detected PA signals. In LCF, the numerator contains a lag-delay-multiply-and-sum (DMAS) beamformer instead of a conventional DAS beamformer. A spatial auto-correlation operation is performed between the detected US array signals before using the DMAS beamformer. We evaluated the new method on both tissue-mimicking phantom (2D) and human volunteer imaging (3D) data acquired using a commercial LED-based PA imaging system. Our novel correlation-based weighting technique improved LED-based PA image quality when combined with a conventional DAS beamformer. Both phantom and human volunteer imaging results confirmed that introducing LCF improved image quality and that this method can reduce side-lobes and artifacts compared to the DAS and coherence-factor (CF) approaches. Signal-to-noise ratio, generalized contrast-to-noise ratio, contrast ratio, and spatial resolution were evaluated and compared with conventional beamformers to assess the reconstruction performance quantitatively. Results show that our approach offered image quality enhancement, with an average signal-to-noise ratio and spatial resolution improvement of around 20% and 25%, respectively, compared with the conventional CF-based DAS algorithm. Our results demonstrate that the proposed LCF-based algorithm performs better than the conventional DAS and CF algorithms by improving signal-to-noise ratio and spatial resolution. Therefore, our new weighting technique could be a promising tool to improve the performance of LED-based PA imaging and thus accelerate its clinical translation.
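
For orientation, the conventional quantities the LCF is compared against can be written compactly for a single image pixel: DAS sums the delayed channel signals, and the coherence factor (CF) is the ratio of coherent to incoherent energy across the aperture, used as a pixel-wise weight. The sketch below implements only these two baseline operations on already-delayed channel samples; the lag-DMAS numerator that defines the LCF itself is not reproduced here.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum value for one pixel from already-delayed channel samples."""
    return delayed.sum()

def coherence_factor(delayed):
    """CF = |coherent sum|^2 / (N * incoherent sum of squares), in [0, 1]."""
    n = delayed.size
    coherent = np.abs(delayed.sum()) ** 2
    incoherent = n * np.sum(np.abs(delayed) ** 2)
    return coherent / incoherent if incoherent > 0 else 0.0

rng = np.random.default_rng(0)
n_channels = 64
signal = 1.0                                    # in-phase component at a true absorber
noise = 0.8 * rng.standard_normal(n_channels)   # channel noise / side-lobe energy

on_target = signal + noise    # coherent across the aperture
off_target = noise            # incoherent only

for name, ch in [("on-target", on_target), ("off-target", off_target)]:
    print(f"{name}: DAS={das(ch):7.2f}  CF-weighted={coherence_factor(ch) * das(ch):7.2f}")
```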

Dual energy CT reconstruction using the constrained one-step spectral image reconstruction algorithm.

The constrained one-step spectral CT image reconstruction method (cOSSCIR) has been developed to estimate basis material maps directly from spectral CT data, using a model of the polyenergetic x-ray transmissions and incorporating convex constraints into the inversion problem. This 'one-step' approach has been shown to stabilize the inversion in the case of photon-counting CT, and may provide similar benefits to dual-kV systems that utilize integrating detectors. Since the approach does not require the same rays to be acquired for every spectral measurement, cOSSCIR can be applied to dual-energy protocols and systems used clinically, such as fast and slow kV-switching systems and dual-source scanning. The purpose of this study is to investigate the use of cOSSCIR applied to dual-kV data, using both registered and unregistered spectral acquisitions, specifically slow and fast kV-switching imaging protocols. For this application, cOSSCIR is investigated using inverse crime simulations and dual-kV experiments. This study is the first demonstration of cOSSCIR on the dual-kV reconstruction problem. An integrating detector model was developed for the purpose of reconstructing dual-kV data, and an inverse crime study was used to validate the detector model within the cOSSCIR framework using a simulated pelvic phantom. Experiments were also used to evaluate cOSSCIR on the dual energy problem. Dual-kV data were obtained from a physical phantom containing analogs of adipose, bone, and liver tissues, with the aim of recovering the material coefficients in the bone and adipose basis material maps. cOSSCIR was applied to acquisitions where all rays performed both spectral measurements (registered) and to fast and slow kV-switching acquisitions (unregistered). cOSSCIR was also compared to two image-domain decomposition approaches, since image-domain methods are the conventional approach for decomposing unregistered spectral data. Simulations demonstrate the application of cOSSCIR to the dual-kV inversion problem by successfully recovering the material basis maps on ideal data, while further showing that unregistered data present a more challenging inversion problem. In our experimental reconstructions, the recovered basis material coefficient errors were found to be less than 6.5% in the bone, adipose, and liver regions for both registered and unregistered protocols. Similarly, the errors were less than 4% in the 50 keV virtual mono-energetic images, and the recovered material decomposition vectors nearly overlap their corresponding ground-truth vectors. Additionally, a preliminary two-material decomposition study of iodine quantification recovered an average concentration of 9.2 mg/mL from a 10 mg/mL experimental iodine analog. Using our integrating detector and spectral models, cOSSCIR is capable of accurately recovering material basis maps from dual-kV data for both registered and unregistered acquisitions. The material decomposition quantification compares favorably to the image-domain approaches, and our results were not affected by the imaging protocol. Our results also suggest the extension of cOSSCIR to iodine quantification using two-material decomposition.
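
The 'one-step' formulation rests on a polyenergetic transmission model that maps basis-material line integrals directly to the expected measurement in each spectral channel, so material maps can be estimated without an intermediate sinogram-domain decomposition. A minimal sketch of that forward model for one ray is shown below; the spectra and basis attenuation curves are random placeholders, and the real cOSSCIR algorithm additionally imposes convex constraints and solves the full inverse problem.

```python
import numpy as np

n_energy = 100                               # energy bins (illustrative)
energies = np.linspace(20, 120, n_energy)    # keV

rng = np.random.default_rng(0)
# Placeholder effective spectra for the low- and high-kV measurements and
# placeholder basis-material attenuation curves (1/cm), for illustration only.
spectra = {"80kV": rng.random(n_energy), "140kV": rng.random(n_energy)}
mu_basis = np.stack([5.0 * energies ** -1.5,    # 'bone-like' basis
                     0.5 * energies ** -0.5])   # 'adipose-like' basis

def expected_transmission(line_integrals, spectrum):
    """Polyenergetic model for one ray:
    I/I0 = sum_E S(E) * exp(-sum_m mu_m(E) * p_m), with S normalized to 1."""
    s = spectrum / spectrum.sum()
    attenuation = np.exp(-(line_integrals @ mu_basis))   # shape (n_energy,)
    return float(s @ attenuation)

p = np.array([2.0, 15.0])   # bone and adipose path lengths along the ray (cm)
for kv, s in spectra.items():
    print(kv, expected_transmission(p, s))
```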

Quality control for digital tomosynthesis in the ECOG-ACRIN EA1151 TMIST trial.

The Tomosynthesis Mammography Imaging Screening Trial (TMIST), EA1151, conducted by the Eastern Cooperative Oncology Group (ECOG)/American College of Radiology Imaging Network (ACRIN), is a randomized clinical trial designed to assess the effectiveness of digital breast tomosynthesis (TM) compared to digital mammography (DM) for breast cancer screening. Equipment from multiple vendors is being used in the study. For the findings of the study to be valid and to capture the true capacities of the two technology types, it is important that all equipment is operated within appropriate parameters with regard to image quality and dose. A harmonized QC program was established by a core physics team. Since there are over 120 trial sites, a centralized, automated QC program was chosen as the most practical design. This report presents results of the weekly QC testing program. A companion paper will review quality monitoring based on data from the headers of the patient images. Study images are collected centrally after de-identification using the "TRIAD" application developed by the ACR. The core physics team devised and implemented a minimal set of quality control (QC) tests to evaluate the tomosynthesis and 2D mammography systems. Weekly, monthly, and annual testing is performed by the site mammography technologists, with images submitted directly to the physics core. The weekly physics QC tests are described: SDNR of a low-contrast mass object, artifact spread, spatial resolution, tracking of technical factors, and in-slice noise power spectra. As of December 31, 2022 (5 years), 145 sites with 411 machines had submitted QC data. A total of 136,742 TMIST participant screening imaging studies had been performed. The 5th and 95th percentile mean glandular doses for a single tomosynthesis exposure to a 4.0 cm thick PMMA phantom ("standard breast phantom") were 1.24 and 1.68 mGy, respectively. The largest sources of QC non-conformance were operator error, not following the QC protocol exactly, unreported software updates, and preventive maintenance activities that affected QC setpoints. Noise power spectra were measured; however, standardization of performance targets across machine types and software revisions was difficult. Nevertheless, for each machine type, test measurement results were very consistent when the protocol was followed. Deviations in test results were mostly related to software and hardware changes. Most systems performed very consistently. Although this is a harmonized program using identical phantoms and testing protocols, it is not appropriate to apply universal threshold or target metrics across the machine types because the systems have different non-linear reconstruction algorithms and image display filters. It was found to be more useful to assess pass/fail criteria in terms of relative deviations from baseline values established when a system is first characterized and after equipment is changed. Generally, systems that needed repair failed suddenly, but in retrospect, for a few cases, drops in SDNR and increases in mAs were observed prior to tube failure. TMIST is registered as NCT03233191 at ClinicalTrials.gov.
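
Two of the weekly checks described here reduce to very simple computations: the SDNR of the low-contrast mass object from the mean signal, mean background, and background noise, and a pass/fail decision based on the relative deviation of each week's value from the machine's own baseline rather than a universal threshold. A schematic version is sketched below; the ±10% tolerance and the ROI values are made-up examples, not the trial's actual action levels.

```python
import numpy as np

def sdnr(signal_roi, background_roi):
    """Signal-difference-to-noise ratio of a low-contrast object."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

def check_against_baseline(value, baseline, tolerance=0.10):
    """Pass/fail relative to the machine's own baseline value
    (the 10% tolerance is an illustrative placeholder)."""
    deviation = (value - baseline) / baseline
    return abs(deviation) <= tolerance, deviation

rng = np.random.default_rng(0)
signal_roi = 120 + 3 * rng.standard_normal((20, 20))      # pixels inside the mass object
background_roi = 100 + 3 * rng.standard_normal((20, 20))  # adjacent background pixels

weekly_sdnr = sdnr(signal_roi, background_roi)
ok, dev = check_against_baseline(weekly_sdnr, baseline=6.5)
print(f"SDNR={weekly_sdnr:.2f}, deviation from baseline={dev:+.1%}, pass={ok}")
```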

A preliminary study of dynamic interactive simulation and computational CT scan of the ideal alveolus model.

While the development of CT imaging techniques has advanced the understanding of in vivo organs, the resolution of CT images and their static nature have gradually become barriers to microscopic tissue research. Previous research used the finite element method to study the airflow and gas exchange in the alveolus and acinus, showing the fate of inhaled aerosols and examining the diffusive, convective, and sedimentation mechanisms. Our study combines these techniques with CT scan simulation to study the mechanisms of respiratory movement and its imaging appearance. We use 3D fluid-structure interaction simulation to study the movement of an ideal alveolus under regular and forced breathing, as well as diseased alveoli with different tissue elasticities. Additionally, we use the Monte Carlo algorithm within the OpenGATE platform to simulate computational CT images of the dynamic process at different designated resolutions. These resolutions show the relationship between the kinematic model of the human alveolus and its imaging appearance. The results show that the alveolus and its wall thickness can be seen with an image resolution smaller than 15.6 μm. With ordinary CT resolution, the alveolus is represented by four voxels. This is a preliminary study of the imaging appearance of a dynamic alveolus model. This technique will be used to study the imaging appearance of dynamic bronchial tree and lung lobe models in the future.

Technical note: Evaluation of deep learning-based synthetic CTs' clinical readiness for dose- and NTCP-driven head and neck adaptive proton therapy.

Adaptive proton therapy workflows rely on accurate imaging throughout the treatment course. Our centre currently utilizes weekly repeat CTs (rCTs) for treatment monitoring and plan adaptations. However, deep learning-based methods have recently been shown to successfully correct CBCT images, which suffer from severe imaging artifacts, and to generate high-quality synthetic CT (sCT) images that enable CBCT-based proton dose calculations. The aim of this work was to compare daily CBCT-based sCT images to the planning CTs (pCT) and rCTs of head and neck (HN) cancer patients, to investigate the dosimetric accuracy of CBCT-based sCTs in a scenario mimicking actual clinical practice. Data from 56 HN cancer patients previously treated with proton therapy were used to generate 1962 sCT images, using a previously developed and trained deep convolutional neural network. Clinical IMPT treatment plans were recalculated on the pCT, weekly rCTs, and daily sCTs. The dosimetric accuracy of sCTs was compared to same-day rCTs and the initial planning CT. As a reference, rCTs were also compared to pCTs. The dose difference between sCTs and rCTs/pCT was quantified by calculating the D98 difference for target volumes and the Dmean difference for organs-at-risk. To investigate the clinical relevance of possible dose differences, NTCP values were calculated for dysphagia and xerostomia. For target volumes, only minor dose differences were found for sCT versus rCT and sCT versus pCT, with dose differences mostly within ±1.5%. Larger dose differences were observed in OARs, where a general shift towards positive differences was found, with the largest difference in the left parotid gland. Delta NTCP values for grade 2 dysphagia and xerostomia were within ±2.5% for 90% of the sCTs. Target doses showed high similarity between rCTs and sCTs. Further investigations are required to identify the origin of the dose differences at the OAR level and their relevance in clinical decision making.
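
The dose metrics compared here are simple functions of the dose values inside a structure mask: D98 is the dose received by at least 98% of the structure volume (the 2nd percentile of the voxel doses) and Dmean is their mean. A small sketch of how such differences between an sCT-based and a same-day rCT-based dose recalculation could be tabulated is given below; the dose arrays and mask are synthetic stand-ins, not patient data.

```python
import numpy as np

def d98(dose, mask):
    """Dose received by at least 98% of the masked volume (2nd percentile)."""
    return np.percentile(dose[mask], 2)

def dmean(dose, mask):
    return dose[mask].mean()

rng = np.random.default_rng(0)
shape = (40, 40, 40)
mask = np.zeros(shape, dtype=bool)
mask[10:30, 10:30, 10:30] = True                          # toy target volume

dose_rct = 54.0 + 0.5 * rng.standard_normal(shape)        # Gy, 'reference' recalculation
dose_sct = dose_rct + 0.2 * rng.standard_normal(shape)    # Gy, 'sCT' recalculation

d98_diff = 100 * (d98(dose_sct, mask) - d98(dose_rct, mask)) / d98(dose_rct, mask)
dmean_diff = 100 * (dmean(dose_sct, mask) - dmean(dose_rct, mask)) / dmean(dose_rct, mask)
print(f"D98 difference:   {d98_diff:+.2f}%")
print(f"Dmean difference: {dmean_diff:+.2f}%")
```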

Quantitative analysis of aortic Na[18F]F uptake in macrocalcifications and microcalcifications in PET/CT scans.

Currently, computed tomography (CT) is used for risk profiling of (asymptomatic) individuals by calculating coronary artery calcium scores. Although this score is a strong predictor of major adverse cardiovascular events, the method has limitations. Sodium [18F]fluoride (Na[18F]F) positron emission tomography (PET) has shown promise as an early marker of atherosclerotic progression. However, evidence on Na[18F]F as a marker for high-risk plaques is limited, particularly regarding its presentation in clinical PET/CT. In addition, the relationship between microcalcifications visualized by Na[18F]F PET and macrocalcifications detectable on CT is unknown. The aim was to establish a match/mismatch score in the aorta between macrocalcified plaque content on CT and microcalcification Na[18F]F PET uptake. Na[18F]F PET/CT scans acquired in our centre in 2019-2020 were retrospectively collected. The aorta of each low-dose CT was manually segmented. Background measurements were placed in the superior vena cava. The vertebrae were automatically segmented using an open-source convolutional neural network, dilated by 10 mm, and subtracted from the aortic mask. Per patient, calcium and Na[18F]F hotspot masks were retrieved using an in-house developed algorithm. Three match/mismatch analyses were performed: a population analysis, a per-slice analysis, and an overlap score. To generate a population image of calcium and Na[18F]F hotspot distribution, all aortic masks were aligned. Then, a heatmap of calcium HU and Na[18F]F uptake on the surface was obtained by outward projection of HU and uptake values from the centerline. In each slice of the aortic wall of each patient, the calcium mass score and target-to-blood-pool ratios (TBR) were calculated within the calcium masks, in the aortic wall excluding the calcium masks, and in the aortic wall in slices without calcium. For the overlap score, three volumes were identified in the calcium and Na[18F]F masks: the PET-only volume (PET+/CT-), the CT-only volume (PET-/CT+), and the overlapping volume (PET+/CT+). A Spearman's correlation analysis with Bonferroni correction was performed on the population image, assessing the correlation between all HU and Na[18F]F vertex values. In the per-slice analysis, a paired Wilcoxon signed-rank test was used to compare TBR values within each slice, while an ANOVA with post-hoc Kruskal-Wallis test was employed to compare TBR values between slices. p-values < 0.05 were considered significant. In total, 186 Na[18F]F PET/CT scans were included. A moderate positive exponential correlation was observed between total aortic calcium mass and total aortic TBR (r=0.68, p<0.001). A strong positive correlation (r=0.77, p<0.0001) was observed between CT values and Na[18F]F values on the population image. Significantly higher TBR values were found outside calcium masks than inside calcium masks (p<0.0001). TBR values in slices where no calcium was present were significantly lower than those outside and inside calcium (both p<0.0001). On average, only 3.7% of the mask volumes were overlapping. Na[18F]F uptake in the aorta behaves similarly to macrocalcification detectable on CT. Na[18F]F uptake values are also moderately correlated with calcium mass scores (match). Higher uptake values were found just outside the macrocalcification masks rather than inside them (mismatch). Also, only a small percentage of the Na[18F]F uptake volumes overlapped with the calcium volumes (mismatch).
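
The overlap analysis reported at the end reduces to boolean set operations between the two voxel masks, and TBR is the mean uptake in a region divided by the mean blood-pool uptake. The sketch below shows that bookkeeping on synthetic masks; the uptake values and mask positions are illustrative only.

```python
import numpy as np

def overlap_fractions(pet_mask, ct_mask):
    """Fractions of the combined hotspot/calcium volume that are PET-only,
    CT-only, and overlapping (PET+/CT+)."""
    union = np.count_nonzero(pet_mask | ct_mask)
    return {
        "PET+/CT-": np.count_nonzero(pet_mask & ~ct_mask) / union,
        "PET-/CT+": np.count_nonzero(ct_mask & ~pet_mask) / union,
        "PET+/CT+": np.count_nonzero(pet_mask & ct_mask) / union,
    }

def tbr(uptake, region_mask, bloodpool_mask):
    """Target-to-blood-pool ratio: mean uptake in the region over mean blood-pool uptake."""
    return uptake[region_mask].mean() / uptake[bloodpool_mask].mean()

rng = np.random.default_rng(0)
shape = (30, 30, 30)
uptake = 1.0 + 0.1 * rng.standard_normal(shape)
uptake[5:10, 5:10, 5:10] += 2.0                       # synthetic Na[18F]F hotspot

pet_mask = np.zeros(shape, dtype=bool)
pet_mask[5:10, 5:10, 5:10] = True                     # PET hotspot mask
ct_mask = np.zeros(shape, dtype=bool)
ct_mask[8:13, 8:13, 8:13] = True                      # adjacent macrocalcification mask
blood = np.zeros(shape, dtype=bool)
blood[20:25, 20:25, 20:25] = True                     # blood-pool reference region

print(overlap_fractions(pet_mask, ct_mask))
print(f"TBR inside calcium mask: {tbr(uptake, ct_mask, blood):.2f}")
```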

Dose reduction in sequence scanning 4D CT imaging through respiratory signal-guided tube current modulation: A feasibility study.

Respiratory signal-guided 4D CT sequence scanning such as the recently introduced Intelligent 4D CT (i4DCT) approach reduces image artifacts compared to conventional 4D CT, especially for irregular breathing. i4DCT selects beam-on periods during scanning such that data sufficiency conditions are fulfilled for each couch position. However, covering entire breathing cycles during beam-on periods leads to redundant projection data and unnecessary dose to the patient during long exhalation phases. We propose and evaluate the feasibility of respiratory signal-guided dose modulation (i.e., temporary reduction of the CT tube current) to reduce the i4DCT imaging dose while maintaining high projection data coverage for image reconstruction. The study is designed as an in-silico feasibility study. Dose down- and up-regulation criteria were defined based on the patients' breathing signals and their representative breathing cycle learned before and during scanning. The evaluation (including an analysis of the impact of the dose modulation criteria parameters) was based on 510 clinical 4D CT breathing curves. Dose reduction was determined as the fraction of the downregulated dose delivery time to the overall beam-on time. Furthermore, under the assumption of a 10-phase 4D CT and amplitude-based reconstruction, beam-on periods were considered negatively affected by dose modulation if the downregulation period covered an entire phase-specific amplitude range for a specific breathing phase (i.e., no appropriate reconstruction of the phase image possible for this specific beam-on period). Corresponding phase-specific amplitude bins are subsequently denoted as compromised bins. Dose modulation resulted in a median dose reduction of 10.4% (lower quartile: 7.4%, upper quartile: 13.8%, maximum: 28.6%; all values corresponding to a default parameterization of the dose modulation criteria). Compromised bins were observed in 1.0% of the beam-on periods (72 / 7370 periods) and affected 10.6% of the curves (54/510 curves). The extent of possible dose modulation depends strongly on the individual breathing patterns and is weakly correlated with the median breathing cycle length (Spearman correlation coefficient 0.22, p<0.001). Moreover, the fraction of beam-on periods with compromised bins is weakly anti-correlated with the patient's median breathing cycle length (Spearman correlation coefficient -0.24; p<0.001). Among the curves with the 17% longest average breathing cycles, no negatively affected beam-on periods were observed. Respiratory signal-guided dose modulation for i4DCT imaging is feasible and promises to significantly reduce the imaging dose with little impact on projection data coverage. However, the impact on image quality remains to be investigated in a follow-up study.
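
The headline quantity, the per-scan dose reduction, is simply the fraction of beam-on time spent with the tube current downregulated. A toy computation of that fraction from interval lists is sketched below; the interval values are invented for illustration, and the logic for detecting compromised amplitude bins is not reproduced.

```python
def total_duration(intervals):
    """Sum of (start, end) interval lengths in seconds."""
    return sum(end - start for start, end in intervals)

# Illustrative beam-on periods for a few couch positions and the sub-intervals
# in which the tube current was downregulated (e.g., long exhale phases).
beam_on = [(0.0, 5.2), (7.0, 12.6), (14.0, 18.9)]
downregulated = [(3.8, 5.2), (11.4, 12.6)]

dose_reduction = total_duration(downregulated) / total_duration(beam_on)
print(f"dose reduction: {dose_reduction:.1%}")
```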
