A Comparative Evaluation of Microimpedance Tomography Reconstruction Algorithms for in Vitro Imaging

Abstract

This paper presents the development of a novel miniature electrical impedance tomography (EIT) system made out of glass, along with the training, validation, and testing of an accompanying open-source machine learning image reconstruction model. Our 1-dimensional convolutional neural network (1D-CNN) models were uniquely benchmarked, both qualitatively and quantitatively, using synthetic and experimental data, against well-established image reconstruction methods: the one-step Gauss–Newton method and the total variation reconstruction method. Image reconstruction results obtained using our 1D-CNN show significant advantages over these traditional methods, achieving an up to 5-fold reduction in mean square error on synthetic data. These results were replicated for two common excitation/measurement modes and extended to objects with varying conductivity and quantities. The superior EIT reconstruction capabilities of our 1D-CNN were further validated experimentally across a similarly broad range of parameters, achieving an average positional accuracy of 147 μm and an average dimensional resolution of 70 μm. To demonstrate potential applications in in vitro monitoring, we used our platform to observe zebrafish development through three distinct phases, from embryo to larvae, showcasing our platform's compatibility with biological imaging.
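For reference, the mean-square-error figure used to benchmark the reconstructions above can be sketched in a few lines of Python; the helper name and toy conductivity maps below are invented for illustration and are not the paper's data:

```python
def mean_square_error(reference, reconstruction):
    """Pixel-wise mean square error between two flattened images."""
    assert len(reference) == len(reconstruction)
    return sum((r - p) ** 2 for r, p in zip(reference, reconstruction)) / len(reference)

# Toy flattened conductivity maps: a several-fold MSE reduction, as reported
# for the 1D-CNN versus a traditional solver, looks like this.
truth = [0.0, 1.0, 1.0, 0.0]
gauss_newton = [0.2, 0.7, 0.8, 0.3]
cnn = [0.1, 0.9, 0.9, 0.1]
print(mean_square_error(truth, gauss_newton))  # ≈ 0.065
print(mean_square_error(truth, cnn))           # ≈ 0.010
```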

Similar Papers
  • Research Article
  • Cited by 31
  • 10.1107/s160057752000017x
Limited angle tomography for transmission X-ray microscopy using deep learning.
  • Feb 13, 2020
  • Journal of Synchrotron Radiation
  • Yixing Huang + 3 more

In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited angle data suffers from artifacts because of missing data. In this work, deep learning is applied to limited angle reconstruction in TXMs for the first time. Because sufficient real data for training are difficult to obtain, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10⁻³ µm⁻¹ in the FBP reconstruction to 1.21 × 10⁻³ µm⁻¹ in the U-Net reconstruction and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-square denoising of measured projections, the RMSE and SSIM are further improved to 1.16 × 10⁻³ µm⁻¹ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nanoscale imaging in biology, nanoscience and materials science.
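The RMSE and SSIM metrics quoted above can be illustrated with a minimal pure-Python sketch. Note that this computes a single global SSIM window, whereas the standard SSIM averages over local windows, and the toy arrays are invented for illustration:

```python
import math

def rmse(x, y):
    """Root-mean-square error between two flattened images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over whole images (real SSIM averages local windows)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

truth = [0.1, 0.5, 0.9, 0.4]
fbp   = [0.3, 0.4, 0.7, 0.6]    # coarser reconstruction
unet  = [0.15, 0.5, 0.85, 0.45] # closer to truth
print(rmse(truth, unet) < rmse(truth, fbp))                # True
print(ssim_global(truth, unet) > ssim_global(truth, fbp))  # True
```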

  • Research Article
  • Cited by 14
  • 10.1088/1674-1056/ac0dab
Deep learning for image reconstruction in thermoacoustic tomography
  • Feb 1, 2021
  • Chinese Physics B
  • Qiwen Xu + 2 more

Microwave-induced thermoacoustic tomography (TAT) is a rapidly developing noninvasive imaging technique that integrates the advantages of microwave imaging and ultrasound imaging. While an image reconstruction algorithm is critical for TAT, current reconstruction methods often create significant artifacts and are computationally costly. In this work, we propose a deep learning-based end-to-end image reconstruction method to achieve direct reconstruction from the sinogram data to the initial pressure density image. We design a new network architecture, TAT-Net, to map the sinogram domain to the image domain with high accuracy. For scenarios where realistic training data are scarce or unavailable, we use the finite element method (FEM) to generate synthetic data, where the domain gap between the synthetic and realistic data is resolved through a signal processing method. The TAT-Net trained with synthetic data is evaluated through both simulations and phantom experiments and achieves competitive performance in artifact removal and robustness. Compared with other state-of-the-art reconstruction methods, the TAT-Net method can reduce the root mean square error to 0.0143, and increase the structural similarity and peak signal-to-noise ratio to 0.988 and 38.64, respectively. The results obtained indicate that TAT-Net has great potential for improving image reconstruction quality and enabling fast quantitative reconstruction.
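The PSNR and RMSE figures above are linked through the image dynamic range. A minimal sketch, assuming for illustration a unit-range image (which need not match the paper's actual scaling, so the number below differs slightly from the reported 38.64 dB):

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB from a mean square error."""
    return 10 * math.log10(max_val ** 2 / mse)

# For a unit-range image, the paper's RMSE of 0.0143 would correspond to:
print(round(psnr(0.0143 ** 2), 2))  # ≈ 36.89 dB
```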

  • Conference Article
  • Cited by 1
  • 10.1109/ithings-greencom-cpscom-smartdata-cybermatics50389.2020.00057
Image Reconstruction of IoT based on Parallel CNN
  • Nov 1, 2020
  • Chunyan Zeng + 2 more

As an effective signal acquisition and reconstruction scheme, compressed sensing (CS) is widely used in measurement and reconstruction for the Internet of Things (IoT). CS can recover images from fewer measurements compared with traditional signal acquisition and reconstruction methods. Recently, many deep learning-based CS methods have been proposed for image reconstruction, achieving better performance than traditional CS reconstruction methods. However, these methods usually divide the image into blocks and utilize a random measurement matrix for block measurement, which ignores the correlation between blocks. Furthermore, some existing deep learning-based reconstruction methods only adopt simple channel convolutional neural networks (CNNs) to complete image reconstruction, which does not make full use of the CNN's representation ability. To solve these problems, we propose a novel image measurement and reconstruction framework to achieve high-quality reconstruction. In the measurement part, we use a convolutional layer instead of a random measurement matrix to directly acquire all measurements, which provides more information for subsequent image reconstruction and removes the block effect. In the reconstruction part, this paper first uses a deconvolution layer to obtain an initial reconstructed image with the same dimensions as the input image. Then we employ multiple parallel CNNs to obtain multiple kinds of feature information. The multiple parallel CNNs include dilated convolution kernels with different receptive fields to increase the network's receptive field, which can capture more image structural information for reconstruction. The results show that image reconstruction performance is greatly improved compared with existing state-of-the-art methods.
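The block-measurement step that the paper replaces with a learned convolutional layer is, in classic block CS, a random-matrix product y = Φx with fewer measurements than samples. A minimal sketch (the helper name and toy signal are illustrative only):

```python
import random

random.seed(0)

def measure(signal, m):
    """Classic CS measurement y = Phi @ x with a random Gaussian Phi.
    The paper replaces this random Phi with a learned convolutional layer."""
    n = len(signal)
    phi = [[random.gauss(0, 1 / m ** 0.5) for _ in range(n)] for _ in range(m)]
    return [sum(p * x for p, x in zip(row, signal)) for row in phi]

x = [0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 1.5, 0.0]  # sparse 8-sample block
y = measure(x, m=3)                            # only 3 measurements for 8 samples
print(len(y))  # 3
```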

  • Research Article
  • Cited by 3
  • 10.1007/s12149-020-01562-8
Comparison of Alzheimer's disease patients and healthy controls in the easy Z-score imaging system with differential image reconstruction methods using SPECT/CT: verification using normal database of our institution.
  • Jan 4, 2021
  • Annals of nuclear medicine
  • Makoto Ohba + 9 more

The easy Z-score imaging system (eZIS) analysis is used for the diagnosis of dementia by cerebral blood flow on single photon emission computed tomography (SPECT). Differences in the acquisition and reconstruction conditions in SPECT may affect the eZIS analysis results. The present study aimed to construct our institutional normal database (NDB) and Alzheimer's disease (AD)-specific volumes of interest (VOIs) in eZIS analysis, and to compare the differential diagnostic ability between healthy controls (HC) and patients with AD in the image reconstruction filtered back projection (FBP) and ordered subset expectation maximization (OSEM) methods. An NDB was constructed at our institution from 30 healthy individuals using the FBP and OSEM reconstruction methods. We divided 51 HC and 51 AD patients into two groups, one for AD disease-specific VOI construction (HC, AD) and the other for NDB verification (HC, AD); image reconstruction was performed using FBP and OSEM. The areas of reduced blood flow in AD patients were compared with those of HC using the two types of image reconstruction methods. We used the AD disease-specific VOI and NDB from each reconstruction method in eZIS analysis and compared the differential diagnostic ability for HC and AD with the different reconstruction methods. Comparing the areas of reduced blood flow in AD patients using the different image reconstruction methods, OSEM showed decreased blood flow in the medial region of the temporal lobes compared to FBP. Comparing the differential diagnostic ability for HC and AD using eZIS, the Severity, Extent, and Ratio showed higher values in the analysis performed using OSEM image reconstruction compared to FBP. With 99mTc-ECD SPECT, eZIS analysis equipped with our institutional AD-specific VOI and NDB using OSEM image reconstruction could distinguish HC from AD better than eZIS analysis using FBP image reconstruction.
This study is registered in UMIN Clinical Trials Registry (UMIN-CTR) as UMIN study ID: UMIN000042362.
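The Z-score at the heart of eZIS compares each patient voxel against the normal database (NDB) mean and standard deviation. A minimal sketch with invented voxel values; the sign convention here (positive values indicating reduced flow relative to the NDB) follows common eZIS usage but is an assumption of this example:

```python
def z_score_map(patient, ndb_mean, ndb_sd):
    """Voxel-wise Z-score: how many NDB standard deviations below normal."""
    return [(m - p) / s for p, m, s in zip(patient, ndb_mean, ndb_sd)]

# Toy voxels: normal-database mean/SD versus one patient's perfusion values.
ndb_mean = [50.0, 48.0, 52.0]
ndb_sd = [5.0, 4.0, 6.0]
patient = [40.0, 46.0, 52.0]
print(z_score_map(patient, ndb_mean, ndb_sd))  # [2.0, 0.5, 0.0]
```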

  • Conference Article
  • 10.1109/fskd.2018.8687120
Reconstruction of Fine Grayscale Image with Low Sampling Rate Based on the Salience Analysis
  • Jul 1, 2018
  • Tianhao Wang + 1 more

This paper introduces a new method for image reconstruction, aimed mainly at fine grayscale image reconstruction and based on visual salience analysis. Because texture and detail vary considerably within a single image, the features of individual pixels should be exploited. In this paper, an algorithm is proposed to capture this kind of feature and use it to assist image reconstruction. For two-dimensional fine grayscale image reconstruction, the LC+ASPL algorithm obtains better results. LC denotes luminance contrast, which is used as a criterion to describe the salience of a single pixel. ASPL is a weighted image reconstruction algorithm derived from the Smoothed Projected Landweber method. For data sampling, the LC feature of the image is analyzed in advance; then, according to the analysis results and block compressed sensing (BCS) theory, the sampling data are chosen selectively: higher weights are given to highly salient areas and lower weights to the matching areas. During image reconstruction, the salient regions of the image are locked for further analysis and processing. It has been shown that this method of image measurement and reconstruction can reconstruct 2D fine grayscale images more accurately and reproduce details more clearly with less data. This method can be applied to medical image reconstruction, such as CT and MRI reconstruction: reducing the sampled data shortens the scanning time for the patient, and fine reconstruction helps doctors diagnose disease. This method can also provide theoretical guidance for fine image reconstruction.

  • Research Article
  • 10.1360/n972017-00818
A global-shared and low-exchange parallel method of high resolution solar image reconstruction
  • Nov 24, 2017
  • Chinese Science Bulletin
  • Hui Deng + 7 more

High resolution image reconstruction plays an important role in solar physics research, but high resolution solar observation has long been severely hindered by the huge volume of observation data, slow reconstruction speed and other factors. To handle the huge volume of quasi real-time solar observation data and cope with the comparable computing burden of high resolution reconstruction of solar images, a number of advanced ground-based solar telescopes in China and abroad have adopted speckle masking, a reconstruction algorithm that can be parallelized, to reconstruct high resolution images, with good results. However, as the volume of solar telescope observation data keeps increasing, the current efficiency still cannot meet the demand for solar observation data processing. This paper addresses the demands of China's new solar telescopes such as the NVST (the 1 m New Vacuum Solar Telescope) and ONSET (Optical and Near-Infrared Solar Eruption Tracer). By measuring the computing time of each module of the Triple-Spectral method, the paper concludes that data exchange performance is the key bottleneck affecting reconstruction performance. On this basis, the paper puts forward a universal method of global-shared, low-exchange parallel high resolution image reconstruction. This method takes the Triple-Spectral as the core algorithm for image reconstruction and uses the message passing interface (MPI) and a shared memory mechanism, allowing the reconstruction computing processes to read and write data in the shared memory at high speed after algorithm optimization.
The shared memory, created for storage of the image data and the image reconstruction results respectively according to the size of the image, is subsequently mapped into each process, giving each process independent access to read the data to be processed and to store the reconstruction results. While computing the image reconstruction, each child process does not use MPI communication to obtain the sub-block image data. Instead, it reads the associated data from the shared memory according to the sub-block image numbering. After the sub-block image reconstruction is done, each child process stores the reconstruction results directly into the shared memory according to the sub-block image numbering, instead of sending the results to the host process via MPI communication. In this way the communication between processes is reduced, the communication time saved and the data exchange efficiency improved. The experimental results show that on one 16-core PC server, this method takes only about 12.4 s to reconstruct a 100-frame ONSET 1660×1660 pixel image and 5.6 s for a 100-frame NVST 1024×1024 pixel image. Good efficiency is achieved in the reconstruction of solar telescope data for ONSET and NVST with different apertures, proving that the method has both efficiency and universality. The parallel combination of the Triple-Spectral and K-T has greatly reduced the communication and data exchange in the course of image reconstruction, saved communication time and improved reconstruction efficiency. Moreover, achieving good efficiency on a single server makes the method flexible to deploy and substantially reduces equipment cost. With the aid of the research outcome of this paper, it is expected that the remaining puzzles in high resolution reconstruction for NVST and ONSET can be tackled and the data storage burden brought down.
Its fulfillment of real-time high resolution reconstruction has laid a solid foundation for follow-up research.

  • Conference Article
  • 10.1109/icpr.2010.586
Paired Transform Slice Theorem of 2-D Image Reconstruction from Projections
  • Aug 1, 2010
  • Serkan Dursun + 2 more

This paper discusses the paired transform-based method of reconstruction of 2-D images from their projections. The complete set of basis functions of the 2-D discrete paired transform is defined by specific directions, i.e. the transform is directional and can be calculated from the projection data. A simple formula is presented for image reconstruction without calculating the 2-D discrete Fourier transform in the case when the image size is L^r × L^r, where L is prime. The image reconstruction is described by the discrete model that is used in the series expansion methods of image reconstruction. The proposed method of reconstruction has been implemented and successfully applied to modeled images on a Cartesian grid of sizes up to 256×256.

  • Research Article
  • 10.1118/1.4889136
MO‐C‐18A‐01: Advances in Model‐Based 3D Image Reconstruction
  • May 29, 2014
  • Medical Physics
  • G Chen + 3 more

Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient‐specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task‐specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization‐based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task‐based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. 
Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model‐based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model‐based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model‐based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  • Research Article
  • Cited by 8
  • 10.3390/bioengineering10030358
Synthesizing Complex-Valued Multicoil MRI Data from Magnitude-Only Images.
  • Mar 14, 2023
  • Bioengineering (Basel, Switzerland)
  • Nikhil Deveshwar + 4 more

Despite the proliferation of deep learning techniques for accelerated MRI acquisition and enhanced image reconstruction, the construction of large and diverse MRI datasets continues to pose a barrier to effective clinical translation of these technologies. One major challenge is in collecting the MRI raw data (required for image reconstruction) from clinical scanning, as only magnitude images are typically saved and used for clinical assessment and diagnosis. The image phase and multi-channel RF coil information are not retained when magnitude-only images are saved in clinical imaging archives. Additionally, preprocessing used for data in clinical imaging can lead to biased results. While several groups have begun concerted efforts to collect large amounts of MRI raw data, current databases are limited in the diversity of anatomy, pathology, annotations, and acquisition types they contain. To address this, we present a method for synthesizing realistic MR data from magnitude-only data, allowing for the use of diverse data from clinical imaging archives in advanced MRI reconstruction development. Our method uses a conditional GAN-based framework to generate synthetic phase images from input magnitude images. We then applied ESPIRiT to derive RF coil sensitivity maps from fully sampled real data to generate multi-coil data. The synthetic data generation method was evaluated by comparing image reconstruction results from training Variational Networks either with real data or synthetic data. We demonstrate that the Variational Network trained on synthetic MRI data from our method, consisting of GAN-derived synthetic phase and multi-coil information, outperformed Variational Networks trained on data with synthetic phase generated using current state-of-the-art methods. Additionally, we demonstrate that the Variational Networks trained with synthetic k-space data from our method perform comparably to image reconstruction networks trained on undersampled real k-space data.

  • Research Article
  • Cited by 35
  • 10.1007/s12194-024-00780-3
Deep learning-based PET image denoising and reconstruction: a review
  • Feb 6, 2024
  • Radiological physics and technology
  • Fumio Hashimoto + 5 more

This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.

  • Research Article
  • Cited by 1
  • 10.3389/fmars.2023.1093665
ESPC-BCS-Net: A network-based CS method for underwater image compression and reconstruction
  • Feb 3, 2023
  • Frontiers in Marine Science
  • Zhenyue Li + 2 more

The Internet of Underwater Things (IoUT) is a typical energy-limited and bandwidth-limited system whose technical bottleneck is the asymmetry between the massive demand for information access and the limited communication bandwidth. Therefore, storing and transmitting high-quality underwater images is a challenging task. The best solution is to effectively compress the data measured by cameras before transmission to reduce storage, and to reconstruct them with minimal error. Compressed sensing (CS) theory breaks through the Nyquist sampling theorem and has been widely used to reconstruct sparse signals accurately. For adaptively sampling underwater images and improving reconstruction performance, we propose ESPC-BCS-Net, which combines the advantages of CS and deep learning. ESPC-BCS-Net consists of three parts: Sampling-Net, ESPC-Net, and BCS-Net. The parameters (e.g. sampling matrix, sparse transforms, shrinkage thresholds, etc.) in ESPC-BCS-Net are learned end-to-end rather than hand-crafted. The Sampling-Net achieves adaptive sampling by replacing the sampling matrix with a convolutional layer. The ESPC-Net implements image upsampling, while the BCS-Net is used for image reconstruction. The efficient sub-pixel layer of ESPC-Net effectively avoids blocking artifacts. The visual and quantitative evaluation of the experimental results shows that underwater image reconstruction still performs well when the CS ratio is 0.1, and the PSNR of the reconstructed underwater images is above 29.

  • Research Article
  • Cited by 57
  • 10.1002/mp.14170
AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse-data CT.
  • Apr 30, 2020
  • Medical Physics
  • Gaoyu Chen + 11 more

Sparse-data computed tomography (CT) arises frequently, for example in breast tomosynthesis, C-arm CT, on-board four-dimensional cone-beam CT (4D CBCT), and industrial CT. However, sparse-data image reconstruction remains challenging due to highly undersampled data. This work develops a data-driven image reconstruction method for sparse-data CT using deep neural networks (DNN). The new method, called AirNet, is designed to incorporate the benefits of the analytical reconstruction method (AR), the iterative reconstruction method (IR), and DNNs. It is built upon fused analytical and iterative reconstruction (AIR), which synergizes AR and IR via the optimization framework of modified proximal forward-backward splitting (PFBS). By unrolling PFBS into IR updates of CT data fidelity and DNN regularization with residual learning, AirNet utilizes AR such as FBP during the data fidelity step, introduces dense connectivity into the DNN regularization, and learns PFBS coefficients and DNN parameters that minimize the loss function during the training stage; AirNet with trained parameters can then be used for end-to-end image reconstruction. A CT atlas of 100 prostate scans was used to validate AirNet in comparison with state-of-the-art DNN-based postprocessing and image reconstruction methods. The validation loss in AirNet had the fastest decreasing rate, owing to the fast convergence inherited from AIR. AirNet was robust to noise in projection data and to content differences between the training set and the images to be reconstructed. The impact of image quality on radiotherapy treatment planning was evaluated for both photon and proton therapy, and AirNet achieved the best treatment plan quality, especially for proton therapy. For example, with limited-angle data, the maximal target dose for AirNet was 109.5% in comparison with the ground truth 109.1%, while it was significantly elevated to 115.1% and 128.1% for FBPConvNet and LEARN, respectively.
A new image reconstruction method, AirNet, is developed for sparse-data CT image reconstruction. AirNet achieved the best image reconstruction quality, both visually and quantitatively, among all methods under comparison for all sparse-data scenarios (sparse-view and limited-angle), and provided the best photon and proton treatment plan quality based on sparse-data CT.

  • Research Article
  • Cited by 5
  • 10.1007/s11227-020-03367-y
Classification and recognition of computed tomography images using image reconstruction and information fusion methods
  • Jun 29, 2020
  • The Journal of Supercomputing
  • Pengzhi Li + 5 more

In this paper, we propose a diagnosis and classification method for hydrocephalus computed tomography (CT) images using deep learning and image reconstruction methods. The proposed method constructs pathological features that differ from those of healthy tissue, aiming to improve the accuracy of pathological image identification and diagnosis. Identification of pathological features from CT images is an essential subject for the diagnosis and treatment of diseases. However, it is difficult to accurately distinguish pathological features owing to the variability of appearances, fuzzy boundaries, heterogeneous densities, and the varied shapes and sizes of lesions. Some studies have reported that the ResNet network has better classification and diagnosis performance than other methods and broad application prospects in the identification of CT images. We use an improved ResNet network as a classification model together with our proposed image reconstruction and information fusion methods. First, we evaluate a classification experiment using the hydrocephalus CT image datasets. Through comparative experiments, we found that gradient features play an important role in the classification of hydrocephalus CT images. The classification effect of CT images with small information entropy is excellent in the evaluation of hydrocephalus CT images. A reconstructed image containing two channels of gradient features and one channel of LBP features is very effective in classification. Second, we apply our proposed method in classification experiments on CT images of colonography polyps for evaluation. The experimental results are consistent with the hydrocephalus classification evaluation, showing that the method is universal and suitable for classifying CT images in these two applications for the diagnosis of diseases.
The original features of CT images are not ideal characteristics for classification, and the reconstructed image and information fusion methods greatly improve CT image classification for pathological diagnosis.

  • Research Article
  • Cited by 3
  • 10.1117/1.jbo.29.s1.s11516
Spatiotemporal image reconstruction to enable high-frame-rate dynamic photoacoustic tomography with rotating-gantry volumetric imagers.
  • Jan 19, 2024
  • Journal of biomedical optics
  • Refik Mert Cam + 5 more

Dynamic photoacoustic computed tomography (PACT) is a valuable imaging technique for monitoring physiological processes. However, current dynamic PACT imaging techniques are often limited to two-dimensional spatial imaging. Although volumetric PACT imagers are commercially available, these systems typically employ a rotating measurement gantry in which the tomographic data are sequentially acquired as opposed to being acquired simultaneously at all views. Because the dynamic object varies during the data-acquisition process, the sequential data-acquisition process poses substantial challenges to image reconstruction associated with data incompleteness. The proposed image reconstruction method is highly significant in that it will address these challenges and enable volumetric dynamic PACT imaging with existing preclinical imagers. The aim of this study is to develop a spatiotemporal image reconstruction (STIR) method for dynamic PACT that can be applied to commercially available volumetric PACT imagers that employ a sequential scanning strategy. The proposed reconstruction method aims to overcome the challenges caused by the limited number of tomographic measurements acquired per frame. A low-rank matrix estimation-based STIR (LRME-STIR) method is proposed to enable dynamic volumetric PACT. The LRME-STIR method leverages the spatiotemporal redundancies in the dynamic object to accurately reconstruct a four-dimensional (4D) spatiotemporal image. The conducted numerical studies substantiate the LRME-STIR method's efficacy in reconstructing 4D dynamic images from tomographic measurements acquired with a rotating measurement gantry. The experimental study demonstrates the method's ability to faithfully recover the flow of a contrast agent with a frame rate of 10 frames per second, even when only a single tomographic measurement per frame is available. 
The proposed LRME-STIR method offers a promising solution to the challenges of enabling 4D dynamic imaging using commercially available volumetric PACT imagers. By enabling accurate spatiotemporal image reconstructions, this method has the potential to significantly advance preclinical research and facilitate the monitoring of critical physiological biomarkers.

  • Research Article
  • 10.1109/icorr66766.2025.11062970
Evaluating Convolution Neural Network Architecture for Neural Drive Decoding from High-Density Surface Electromyography.
  • May 12, 2025
  • IEEE ... International Conference on Rehabilitation Robotics : [proceedings]
  • Jirui Fu + 2 more

Prior studies demonstrated encouraging results in applying convolutional neural network (CNN) models with one-dimensional (1D CNN) or three-dimensional (3D CNN) convolutional layers to decode the neural drive to muscles from high-density surface electromyography (HD-sEMG) signals. However, the impact of the dimensionality (1D or 3D) of the convolutional layers on the performance of deep CNN models on the same dataset has yet to be investigated. This study assesses the performance of 3D CNNs and 1D CNNs in extracting the neural drive as a cumulative spike train (CST) under various window sizes and step sizes, which are critical parameters in decoding neural drives. An experimental HD-sEMG dataset, sourced from the gastrocnemius medialis muscle of three participants, alongside the corresponding neural drive decoded using the convolution kernel compensation (CKC) algorithm, was employed to train and validate the 1D and 3D CNN models. We compared the F1 score and correlation coefficient between the CST from CKC and those from both the 1D and 3D CNN models, revealing that the 1D CNN performs more effectively with larger sliding window sizes (80 or 120 samples), with a peak F1 score of 0.84 and a correlation of 0.94. In contrast, the 3D CNN achieves its peak F1 score (0.83) and correlation (0.92) with smaller sliding window sizes (20 or 40 samples), indicating reduced latency when using the 3D CNN to decode neural drives. Both models experience a performance decline as the step size increases. Furthermore, this research evaluates the computational cost of the 1D and 3D CNN models, finding that the 3D CNN model requires significantly more computational resources (938G FLOPs) than the 1D CNN model (60G FLOPs). The results elucidate significant distinctions between CNN architectures and identify optimal parameters and model selection for precise and real-time neural drive decoding.
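The F1 score and correlation coefficient used to compare the CNN-decoded cumulative spike trains against the CKC reference can be sketched as follows; the binary toy spike trains are invented for illustration:

```python
import math

def f1_score(reference, predicted):
    """F1 on binary spike trains (1 = spike in that sample)."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    fn = sum(r and (not p) for r, p in zip(reference, predicted))
    return 2 * tp / (2 * tp + fp + fn)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

ckc = [1, 0, 0, 1, 0, 1, 0, 0]  # reference spike train from CKC
cnn = [1, 0, 0, 1, 0, 0, 0, 1]  # decoded train: one missed, one spurious spike
print(f1_score(ckc, cnn))  # ≈ 0.667
```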
