Articles published on Total variation minimization
666 Search results
- Research Article
- 10.1007/s11547-025-02157-x
- Jan 6, 2026
- La Radiologia medica
- João Mendes + 4 more
Triple-negative breast cancer (TNBC) is the most aggressive molecular subtype of breast cancer (BC). TNBC lacks targeted treatment options, which results in poor clinical outcomes. TNBC lesions usually present benign characteristics on mammograms, complicating their early diagnosis. This retrospective multicenter study presents a convolutional neural network (CNN) model to distinguish TNBC from benign lesions on 566 mammograms (277 benign/289 TNBC), acquired at three different institutions across the UK. Each mammogram had its quality enhanced using a combination of total variation minimization filtering and contrast-limited adaptive histogram equalization (CLAHE). The proposed model achieved a test set AUC of 0.984, with a sensitivity and specificity of 94.2% and 91.9%, respectively. Explainability with Grad-CAM was applied to the test set, revealing that the model was using not only lesion characteristics but also tumor microenvironment regions to make predictions. The same test set was analyzed by an expert radiologist who achieved a sensitivity of 71% and a specificity of 60%. The comparison of results between the developed model and the expert radiologist highlights the model's performance and underscores its potential as a complementary diagnostic tool. This model might help in the task of TNBC early diagnosis, potentially diminishing the number of false negatives.
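A minimal sketch of the enhancement step described above (TV-minimization filtering followed by CLAHE), using scikit-image; the weight, clip limit, and file name are illustrative assumptions rather than the study's actual settings:

```python
# Hypothetical preprocessing sketch: TV-minimization denoising followed by CLAHE,
# as described in the abstract. Parameter values and the file name are assumptions.
import numpy as np
from skimage import io, util
from skimage.restoration import denoise_tv_chambolle
from skimage.exposure import equalize_adapthist

def enhance_mammogram(path: str) -> np.ndarray:
    img = util.img_as_float(io.imread(path, as_gray=True))   # load as grayscale in [0, 1]
    smoothed = denoise_tv_chambolle(img, weight=0.1)          # total variation minimization filtering
    enhanced = equalize_adapthist(smoothed, clip_limit=0.02)  # CLAHE contrast enhancement
    return enhanced

# enhanced = enhance_mammogram("mammogram.png")  # hypothetical input file
```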
- Research Article
- 10.12732/ijam.v38i5s.378
- Oct 8, 2025
- International Journal of Applied Mathematics
- Ravi Krishan Pandey
Introduction: This paper introduces a novel hybrid approach to computed tomography (CT) image reconstruction, designed to enhance medical imaging techniques. The study compares the performance of this innovative method with established algorithms, including back projection, simultaneous algebraic reconstruction (SAR), and simultaneous algebraic reconstruction iteration (SART) coupled with a total variation minimization algorithm. The evaluation utilizes the NIH-AAPM-Mayo Clinic CT Grand Challenge dataset, ensuring robust and relevant results. Two key performance metrics are used for comparison: the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). Objectives: To enhance the quality of filtered back projection (FBP) images in low-dose imaging. Methods: A hybrid model is proposed and compared with SART variants. It combines FBP with a modified CNN featuring three 2D convolution layers (32, 64, and 128 filters of size 3×3) with ReLU activation, an input shape of 150×150, and a batch size of 8. Three 2D max-pooling layers with 2×2 kernels are included. The output is flattened and passed through a dense layer (128 units, ReLU activation), followed by a dropout layer (0.5) to reduce overfitting. The final dense layer uses softmax activation. The model is compiled with categorical cross-entropy as the loss function and ADAM as the optimizer. Training occurs on a machine with an NVIDIA GeForce RTX GPU (6 GB memory). Results: The hybrid algorithm achieved an SSIM value of 0.7916, indicating superior structural fidelity in reconstructed images. Additionally, it demonstrated a PSNR of 19.0424 dB, confirming its effectiveness in producing higher-quality images. Conclusions: The findings have crucial implications for medical imaging, promoting safer practices that reduce radiation exposure by enhancing image quality. This balance between quality and safety highlights the significance of the research, which could revolutionize diagnostic methods in healthcare. Overall, it represents a pivotal advancement in hybrid CT reconstruction, paving the way for further innovations in medical imaging technologies.
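The network described in the Methods can be sketched directly in Keras; the single grayscale input channel and the number of output classes are assumptions the abstract does not state:

```python
# Sketch of the CNN architecture described in the abstract (TensorFlow / Keras).
# The grayscale input channel and NUM_CLASSES are assumptions, not stated in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # hypothetical

model = models.Sequential([
    layers.Input(shape=(150, 150, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, batch_size=8)  # batch size 8 per the abstract
```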
- Research Article
- 10.1007/s10444-025-10254-8
- Aug 21, 2025
- Advances in Computational Mathematics
- Martin Alkämper + 2 more
Abstract Based on previous work, we extend a primal-dual semi-smooth Newton method for minimizing a general $L^1$-$L^2$-$TV$ functional over the space of functions of bounded variation by adaptivity in a finite element setting. For automatically generating an adaptive grid, we introduce indicators based on a-posteriori error estimates. Further, we discuss data interpolation methods on unstructured grids in the context of image processing and present a pixel-based interpolation method. The efficiency of our derived adaptive finite element scheme is demonstrated on image inpainting and the task of computing the optical flow in image sequences. In particular, for optical flow estimation, we derive an adaptive finite element coarse-to-fine scheme which allows resolving large displacements and speeds up the computing time significantly.
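For orientation, a generic $L^1$-$L^2$-$TV$ functional of the kind named above can be written as follows, with data $f$, weights $\alpha_1, \alpha_2, \beta \ge 0$, and total variation $|Du|(\Omega)$; the exact functional treated in the paper may include further terms (e.g., for optical flow):

```latex
% Generic L^1-L^2-TV functional over BV(\Omega); the paper's exact form may differ.
\min_{u \in BV(\Omega)} \;
  \alpha_1 \int_{\Omega} |u - f| \, dx
  \;+\; \frac{\alpha_2}{2} \int_{\Omega} (u - f)^2 \, dx
  \;+\; \beta \, |Du|(\Omega)
```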
- Research Article
- 10.3390/axioms14080605
- Aug 1, 2025
- Axioms
- Gengsheng L Zeng
In compressed sensing, it is believed that ℓ0-norm minimization is the best way to enforce a sparse solution. However, the ℓ0 norm is difficult to implement in a gradient-based iterative image reconstruction algorithm. Total variation (TV) norm minimization is considered a proper substitute for ℓ0-norm minimization. This paper points out that the TV norm is not powerful enough to enforce a piecewise-constant image. This paper uses limited-angle tomography to illustrate the possibility of using the ℓ0 norm to encourage a piecewise-constant image. However, one of the drawbacks of the ℓ0 norm is that its derivative is zero almost everywhere, making a gradient-based algorithm useless. Our novel idea is to replace the zero value of the norm derivative with a zero-mean random variable. Computer simulations show that the proposed norm minimization outperforms the TV minimization. The novelty of this paper is the introduction of some randomness in the gradient of the objective function when the gradient is zero. The quantitative evaluations indicate the improvements of the proposed method in terms of the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR).
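The randomized-gradient idea can be illustrated in a few lines of NumPy; the noise scale and the surrounding reconstruction loop are placeholder assumptions, not the paper's implementation:

```python
# Illustrative sketch: wherever the derivative of the sparsity-enforcing term is
# exactly zero, substitute a small zero-mean random value so that a gradient-based
# solver can still move. sigma and the enclosing loop are assumptions.
import numpy as np

def randomized_gradient(grad, sigma=1e-3, rng=None):
    """Replace exactly-zero gradient entries with zero-mean random values."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(loc=0.0, scale=sigma, size=grad.shape)
    return np.where(grad == 0, noise, grad)

# Schematic use inside a gradient-based reconstruction loop:
# x = x - step * randomized_gradient(objective_gradient(x))  # objective_gradient is a placeholder
```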
- Research Article
- 10.1142/s0219530525500150
- May 21, 2025
- Analysis and Applications
- Xinling Liu + 4 more
Over the past three decades, total variation (TV) has successfully been applied in image processing, compressed sensing, and many other fields. For the problem of tensor recovery from noisy compressed measurements, TV minimization has been shown to provide good approximations to tensors such as hyperspectral images and videos, even if the number of measurements is far less than the ambient dimension. By combining the recently developed transformed L1 function with TV, this paper explores transformed total variation (TTV) minimization for recovering a tensor [Formula: see text]. Specifically, it is an extension of a recent work specially designed for two-dimensional image recovery, which has been shown to provide a robust recovery guarantee and outperform TV minimization in image recovery tasks. However, this extension is challenging because tensors have more complicated structures, which renders some algebraic tools designed for two-dimensional images infeasible. Based on the restricted isometry property (RIP), we demonstrate that TTV minimization recovers [Formula: see text] from [Formula: see text] linear measurements, and an error bound composed of its best [Formula: see text]-term approximation to its gradient tensor and the noise level is derived, which is optimal up to a logarithmic factor [Formula: see text]. Furthermore, the restricted isometry condition is also improved compared with that of TV minimization.
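For reference, the standard transformed-L1 penalty with parameter $a > 0$ is shown below, together with one plausible way to build a transformed total variation from it by applying the penalty to the entries of the gradient tensor; the paper's exact definition may differ in details:

```latex
% Standard transformed-L1 (TL1) penalty and a plausible TTV built from it;
% the paper's exact definition may differ.
\mathrm{TL1}_a(x) = \frac{(a+1)\,|x|}{a + |x|}, \qquad
\mathrm{TTV}_a(\mathcal{X}) = \sum_{i} \mathrm{TL1}_a\!\bigl( [\nabla \mathcal{X}]_i \bigr)
```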
- Research Article
- 10.2140/pjm.2025.335.53
- Mar 24, 2025
- Pacific Journal of Mathematics
- Samer Dweik
Weighted total variation minimization problem with mixed Dirichlet–Neumann boundary conditions
- Research Article
- 10.1007/s11075-025-02044-6
- Mar 19, 2025
- Numerical Algorithms
- Thomas Jacumin + 1 more
Abstract In this paper, we propose an adaptive finite difference scheme in order to numerically solve total variation type problems for image processing tasks. The automatic generation of the grid relies on indicators derived from a local estimation of the primal-dual gap error. This process leads in general to a non-uniform grid, for which we introduce an adjusted finite difference method. Further, we quantify the impact of the grid refinement on the respective discrete total variation. In particular, it turns out that a finer discretization may lead to a higher value of the discrete total variation for a given function. To compute a numerical solution on non-uniform grids we derive a semi-smooth Newton algorithm in 2D for scalar and vector-valued total variation minimization. We present numerical experiments for image denoising and the estimation of motion in image sequences to demonstrate the applicability of our adaptive scheme.
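The remark that refinement can increase the discrete total variation is easy to reproduce numerically; the oscillatory test function and grid sizes below are arbitrary choices for illustration:

```python
# Small illustration: a finer grid can yield a larger discrete total variation
# for the same underlying function (a coarse grid misses the oscillations).
import numpy as np

def discrete_tv_1d(values: np.ndarray) -> float:
    """Sum of absolute differences of neighbouring samples (1D discrete TV)."""
    return float(np.sum(np.abs(np.diff(values))))

f = lambda x: np.sin(20 * np.pi * x)           # oscillatory test function on [0, 1]

coarse = f(np.linspace(0.0, 1.0, 11))          # 11 samples: aliases the oscillations
fine = f(np.linspace(0.0, 1.0, 1001))          # 1001 samples: resolves them

print(discrete_tv_1d(coarse), discrete_tv_1d(fine))  # the fine-grid value is much larger
```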
- Research Article
- 10.5802/ojmo.39
- Mar 13, 2025
- Open Journal of Mathematical Optimization
- Axel Flinth + 2 more
We propose an adaptive refinement algorithm to solve total variation regularized measure optimization problems. The method iteratively constructs dyadic partitions of the unit cube based on (i) the resolution of discretized dual problems and (ii) the detection of cells containing points that violate the dual constraints. The detection is based on upper-bounds on the dual certificate, in the spirit of branch-and-bound methods. The interest of this approach is that it avoids the use of heuristic approaches to find the maximizers of dual certificates. We prove the convergence of this approach under mild hypotheses and a linear convergence rate under additional non-degeneracy assumptions. These results are confirmed by simple numerical experiments.
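A schematic of the dyadic refinement loop (here in one dimension) might look as follows; the upper bound on the dual certificate is a placeholder, since in the paper it comes from the discretized dual problem:

```python
# Schematic dyadic refinement in the spirit described above: cells whose upper
# bound on the dual certificate exceeds the constraint level are split in two.
# `certificate_upper_bound` is a placeholder for the bound derived in the paper.
from typing import Callable, List, Tuple

Cell = Tuple[float, float]  # an interval [a, b] (1D stand-in for a cell of the unit cube)

def refine(cells: List[Cell],
           certificate_upper_bound: Callable[[Cell], float],
           level: float = 1.0) -> List[Cell]:
    new_cells: List[Cell] = []
    for (a, b) in cells:
        if certificate_upper_bound((a, b)) > level:   # possible dual-constraint violation
            mid = 0.5 * (a + b)
            new_cells.extend([(a, mid), (mid, b)])    # dyadic split
        else:
            new_cells.append((a, b))                  # keep the cell unchanged
    return new_cells
```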
- Research Article
- 10.1177/08953996241313121
- Feb 4, 2025
- Journal of X-ray science and technology
- Shunli Zhang + 3 more
Computed tomography (CT) is capable of generating detailed cross-sectional images of scanned objects non-destructively, and it has become an increasingly vital tool for 3D modelling of cultural relics. Compressed sensing (CS)-based CT reconstruction algorithms, such as the algebraic reconstruction technique (ART) regularized by total variation (TV), enable accurate reconstructions from sparse-view data, which consequently reduces both scanning time and costs. However, the implementation of ART-TV is considerably slow, particularly in cone-beam reconstruction. In this paper, we propose an efficient and high-quality scheme for cone-beam CT reconstruction based on the traditional ART-TV algorithm. Our scheme employs Joseph's projection method for the computation of the system matrix. By exploiting the geometric symmetry of the cone-beam rays, we are able to compute the weight coefficients of the system matrix for two symmetric rays simultaneously. We then employ multi-threading technology to speed up the ART reconstruction, and utilize graphics processing units (GPUs) to accelerate the TV minimization. Experimental results demonstrate that, for a typical reconstruction of a 512 × 512 × 512 volume from 60 views of 512 × 512 projection images, our scheme achieves a speedup of 14× compared to a single-threaded CPU implementation. Furthermore, higher-quality ART-TV reconstructions are obtained using Joseph's projection than with the traditional Siddon's projection.
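The ART-TV alternation that the paper accelerates can be sketched as follows; the ART/SART update is left as a placeholder, and the step sizes and iteration counts are illustrative assumptions:

```python
# Schematic ART-TV alternation: one algebraic update pass for data consistency,
# followed by a few gradient-descent steps on the (smoothed) total variation of
# the volume. `art_update` is a placeholder for the projector-based update
# (Joseph's method in the paper); only the TV-descent step is spelled out here.
import numpy as np

def tv_gradient(vol: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    grads = np.gradient(vol)                                 # finite-difference gradient per axis
    norm = np.sqrt(sum(g ** 2 for g in grads)) + eps         # smoothed gradient magnitude
    div = sum(np.gradient(g / norm, axis=i) for i, g in enumerate(grads))
    return -div                                              # gradient of the smoothed TV term

def art_tv(vol, projections, art_update, n_iters=20, tv_steps=10, tv_step_size=0.1):
    for _ in range(n_iters):
        vol = art_update(vol, projections)                   # data-consistency pass (placeholder)
        for _ in range(tv_steps):
            vol = vol - tv_step_size * tv_gradient(vol)      # TV minimization steps
    return vol
```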
- Research Article
- 10.1080/10589759.2025.2457575
- Jan 31, 2025
- Nondestructive Testing and Evaluation
- A Mercy Latha + 3 more
ABSTRACT The terahertz (THz) imaging technique has a wide range of applications, from industrial non-destructive testing (NDT) to various biomedical applications. Although the capabilities of terahertz radiation in defect detection are well proven, its practical limitations have been associated with long image acquisition times. Hence, to improve the image acquisition speed, the compressive sensing technique has been employed. However, the challenge is identifying the optimal modulation mask for compressive sensing, as the image reconstruction metrics depend heavily on the mask. Hence, a systematic investigation of different modulation masks has been carried out to reconstruct THz images of glass fibre-reinforced polymer (GFRP) composites using TVAL3 (Total Variation minimization by Augmented Lagrangian and ALternating direction Algorithm). The image reconstruction quality has been studied using the mean square error, peak signal-to-noise ratio, and structural similarity index. From the results, it can be noticed that with a 0.3 sampling ratio, reliable reconstruction of the THz image is possible, saving 70% of the image acquisition time. Further, the discrete cosine transform (DCT) mask is ideal for high frame rate NDT in low and medium noise scenarios, whereas the Bernoulli basis offers high resistance to noise, yielding the best image reconstruction results in high-noise cases and outperforming the DCT.
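The two mask families compared above can be generated and applied as compressive-sensing measurement matrices in a few lines; the image size, sampling ratio, and row-selection rule are illustrative assumptions, and the TVAL3 reconstruction itself is not reproduced here:

```python
# Illustrative generation of DCT and Bernoulli measurement masks and the
# corresponding compressed measurements y = Phi @ x. Sizes and the choice of
# the lowest-frequency DCT rows are assumptions for demonstration.
import numpy as np
from scipy.fft import dct

def dct_masks(n_pixels, n_measurements):
    basis = dct(np.eye(n_pixels), axis=0, norm="ortho")    # full DCT basis matrix
    return basis[:n_measurements, :]                        # keep the lowest-frequency rows (assumption)

def bernoulli_masks(n_pixels, n_measurements, seed=0):
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(n_measurements, n_pixels))

n = 32 * 32                                   # flattened image size (example)
m = int(0.3 * n)                              # 0.3 sampling ratio, as reported above
x = np.random.default_rng(1).random(n)        # stand-in for the flattened THz image
y_dct = dct_masks(n, m) @ x                   # compressed measurements with the DCT mask
y_bernoulli = bernoulli_masks(n, m) @ x       # compressed measurements with the Bernoulli mask
```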
- Research Article
- 10.1021/acs.analchem.4c04718
- Jan 21, 2025
- Analytical chemistry
- Harsshit Agrawaal + 1 more
Glow discharge optical emission spectrometry (GDOES) allows fast and simultaneous multielemental analysis directly from solids and depth profiling down to the nanometer scale, which is critical for thin-film (TF) characterization. Nevertheless, operating conditions for the best limits of detection (LODs) are compromised in lieu of the best sputtering crater shapes for depth resolution. In addition, the fast transient signals from ultra-TFs do not permit the optimal sampling statistics of bulk analysis such that LODs are further compromised. Furthermore, commercial GDOES instruments rely on a slit-based light dispersion that favors high spectral resolution at the expense of light throughput. Here, a new technique called glow discharge optical emission coded aperture spectrometry (GOCAS) is shown to allow both a higher spectral resolution and higher light throughput by using a coded aperture (CA) with multiple thin slits at the spectrograph's entrance to measure the convoluted spectra and compressed sensing (CS) algorithms to recover the deconvoluted spectra from the full field of view. The effects of CA characteristics on spectral reconstruction fidelity were studied and showed the best fidelity for smaller slits, 50% transmittance, and wider CA with a higher number of slits. In addition, Shearlet enhanced snapshot compressive imaging on GPU (SeSCI-GPU) showed the best performance of the CS algorithms studied, including SeSCI-CPU, two-step iterative shrinkage/thresholding (TwIST), and alternating direction method of multipliers total variation minimization (ADMM-TV). Moreover, GOCAS is shown to be very robust against increasing detector Gaussian noise. Finally, standard reference materials are used to show up to ∼30× improved S/N and an order-of-magnitude improved LODs, at the fastest acquisition times (fraction of a ms), which has the potential to be transformative for depth profiling of nanostructured materials.
- Research Article
- 10.1107/s1600577524010956
- Jan 1, 2025
- Journal of synchrotron radiation
- Erik Malm + 1 more
Coherent diffractive imaging experiments often collect incomplete datasets containing regions that lack any measurements. These regions can arise because of beamstops, gaps between detectors, or, in tomography experiments, a missing wedge of data due to a limited sample rotation range. We describe practical and effective approaches to mitigate reconstruction artifacts by bringing uniqueness back to the phase retrieval problem. This is accomplished by looking for a solution that both matches the data and has minimum total variation, which essentially sets the unconstrained modes to reduce oscillations within the reconstruction. Two algorithms are described. The first algorithm assumes that there is an accurate estimate of the phase and can be used for pre- and post-processing. The second algorithm attempts to simultaneously minimize the total variation and recover the phase. We demonstrate the utility of these algorithms with numerical simulations and, experimentally, on a large, three-dimensional dataset.
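The first algorithm's idea, keeping the measured Fourier data fixed while driving the unmeasured modes toward a minimum-total-variation image, can be sketched as follows; the step size, iteration count, and use of a smoothed TV gradient are illustrative assumptions:

```python
# Sketch: fill the unmeasured Fourier modes (beamstop, detector gaps, missing wedge)
# by gradient descent on a smoothed total variation, re-imposing the measured data
# after each step. Parameters are illustrative assumptions.
import numpy as np

def tv_gradient(img, eps=1e-8):
    gx, gy = np.gradient(img)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    return -(np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1))

def fill_missing_modes(f_measured, measured_mask, n_iters=200, step=0.1):
    f = np.where(measured_mask, f_measured, 0.0)             # start: unmeasured modes at zero
    for _ in range(n_iters):
        img = np.real(np.fft.ifft2(f))                       # current real-space estimate
        img = img - step * tv_gradient(img)                  # reduce total variation
        f = np.fft.fft2(img)
        f = np.where(measured_mask, f_measured, f)           # re-impose the measured data
    return np.real(np.fft.ifft2(f))
```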
- Research Article
- 10.33545/2707661x.2025.v6.i1a.109
- Jan 1, 2025
- International Journal of Communication and Information Technology
- Saif Al-Deen Sabah Mahmood + 1 more
Denoising Medical Images: A review of total variation minimization methods
- Research Article
- 10.1515/nanoph-2024-0238
- Nov 28, 2024
- Nanophotonics
- Mees Dieperink + 4 more
Abstract The optical cross sections of plasmonic nanoparticles are intricately linked to their morphologies. Accurately capturing this link could allow determination of particles’ shapes from their optical cross sections alone. Electromagnetic simulations bridge morphology and optical properties, provided they are sufficiently accurate. This study examines key factors affecting simulation precision, comparing common methods and detailing the impacts of meshing accuracy, dielectric function selection, and substrate inclusion within the boundary element method. To support the method’s complex parameterization, we develop a workflow incorporating reconstruction, meshing, and mesh simplification, to enable the use of electron tomography data. We analyze how choices of reconstruction algorithm and image segmentation affect simulated optical cross sections, relating these to shape errors minimized during data processing. Optimal results are obtained using the total variation minimization (TVM) reconstruction method with Otsu thresholding and light smoothing, ensuring reliable, watertight surface meshes through the marching cubes algorithm, even for complex shapes.
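The post-reconstruction steps named above (light smoothing, Otsu thresholding, marching cubes) can be sketched with scikit-image; the TVM reconstruction itself is represented by a placeholder volume, and the parameters are illustrative assumptions:

```python
# Sketch of the segmentation and meshing steps named in the abstract. The TVM
# tomographic reconstruction is not reproduced; `reconstruction` is a placeholder
# 3D volume, and sigma is an illustrative smoothing parameter.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

def particle_surface_mesh(reconstruction: np.ndarray):
    smoothed = gaussian_filter(reconstruction, sigma=1.0)     # light smoothing
    level = threshold_otsu(smoothed)                          # Otsu threshold for segmentation
    verts, faces, _normals, _values = marching_cubes(smoothed, level=level)
    return verts, faces                                       # triangle surface mesh

# verts, faces = particle_surface_mesh(tvm_reconstruction)    # tvm_reconstruction: placeholder volume
```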
- Research Article
- 10.1088/1361-6560/ad8c93
- Nov 12, 2024
- Physics in Medicine & Biology
- Jeremy E Hallett + 4 more
Purpose. Cherenkov imaging during radiotherapy provides a real-time visualization of beam delivery on patient tissue, which can be used dynamically for incident detection or to review a summary of the delivered surface signal for treatment verification. Very few photons form the images, and one limitation is that the noise level per frame can be quite high, and mottle in the cumulative processed images can cause mild overall noise. This work focused on removing or suppressing noise via image postprocessing. Approach. Images were analyzed for peak signal-to-noise ratio and the spatial frequencies present, and several established noise/mottle reduction algorithms were chosen based upon these observations. These included total variation minimization (TV-L1), the non-local means filter (NLM), block-matching 3D (BM3D), the alpha (adaptive) trimmed mean (ATM), and bilateral filtering. Each was applied to images acquired using a BeamSite camera (DoseOptics) of the signal from 6X photons from a TrueBeam linac delivering dose at 600 MU min−1, incident on an anthropomorphic phantom and a tissue slab phantom in various configurations and beam angles. The denoised images were tested for PSNR, noise power spectrum (NPS), and image sharpness. Results. The average peak signal-to-noise ratio (PSNR) increase was 17.4% for TV-L1. NLM denoising increased the average PSNR by 19.1%, BM3D processing increased it by 12.1%, and the bilateral filter increased the average PSNR by 19.0%. Lastly, the ATM filter resulted in the lowest average PSNR increase of 10.9%. Of all of these, the NLM and bilateral filters produced improved edge sharpness with, generally, the lowest NPS curve. Conclusion. For cumulative image Cherenkov data, NLM and the bilateral filter yielded optimal denoising, with the TV-L1 algorithm giving comparable results. Single video frame Cherenkov images exhibit much higher noise levels compared to cumulative images. Noise suppression for these frame rates will likely require a different processing pipeline involving these filters incorporated with machine learning.
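Three of the denoisers named above can be applied and scored with scikit-image as sketched below; Chambolle TV is used here as a stand-in for TV-L1, BM3D and ATM are omitted, and all filter parameters are illustrative assumptions:

```python
# Apply three denoisers to a noisy frame and score them with PSNR (scikit-image).
# Images are assumed to be float arrays in [0, 1]; parameters are illustrative.
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means, denoise_bilateral
from skimage.metrics import peak_signal_noise_ratio

def benchmark_denoisers(noisy: np.ndarray, reference: np.ndarray) -> dict:
    results = {
        "tv": denoise_tv_chambolle(noisy, weight=0.1),                       # TV stand-in for TV-L1
        "nlm": denoise_nl_means(noisy, h=0.05, patch_size=5, patch_distance=6),
        "bilateral": denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=3),
    }
    return {name: peak_signal_noise_ratio(reference, img, data_range=1.0)
            for name, img in results.items()}
```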
- Research Article
- 10.62411/jcta.11488
- Oct 6, 2024
- Journal of Computing Theories and Applications
- Md Al-Imran + 4 more
Image denoising is a fundamental challenge in image processing, where the objective is to remove noise while preserving critical image features. Traditional denoising methods, such as Wavelet, Total Variation (TV) minimization, and Non-Local Means (NLM), often struggle to maintain the topological integrity of image features, leading to the loss of essential structures. This study proposes a Cubical Persistent Homology-Based Technique (CPHBT) that leverages persistence barcodes to identify significant topological features and reduce noise. The method selects filtration levels that preserve important features like loops and connected components. Applied to digit images, our method demonstrates superior performance, achieving a Peak Signal-to-Noise Ratio (PSNR) of 46.88 and a Structural Similarity Index Measure (SSIM) of 0.99, outperforming TV (PSNR: 21.52, SSIM: 0.9812) and NLM (PSNR: 22.09, SSIM: 0.9822). These results confirm that cubical persistent homology offers an effective solution for image denoising by balancing noise reduction and preserving critical topological features, thus enhancing overall image quality.
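A cubical persistence barcode of the kind used above can be computed with the gudhi package, assuming its CubicalComplex interface as sketched; selecting features by lifetime is shown only schematically and is not the authors' exact CPHBT procedure:

```python
# Minimal sketch of a cubical persistence barcode for a grayscale image, assuming
# the gudhi package accepts a 2D array of filtration values as sketched here.
# The lifetime threshold below is an arbitrary illustrative value.
import numpy as np
import gudhi

def persistence_barcode(image: np.ndarray):
    cc = gudhi.CubicalComplex(top_dimensional_cells=image)   # sublevel-set cubical filtration
    return cc.persistence()                                   # list of (dimension, (birth, death))

# Example: keep only features (connected components, loops) that persist long enough.
# barcode = persistence_barcode(noisy_digit)
# significant = [(d, (b, de)) for d, (b, de) in barcode if de - b > 0.1]  # 0.1 is an assumption
```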
- Research Article
- 10.3233/xst-240111
- Sep 27, 2024
- Journal of X-ray science and technology
- Yu He + 3 more
Due to the incomplete projection data collected by limited-angle computed tomography (CT), severe artifacts are present in the reconstructed image. Classical regularization methods, such as total variation (TV) minimization and ℓ0 minimization, are unable to suppress artifacts at the edges perfectly. Most existing regularization methods are single-objective optimization approaches, stemming from scalarization methods for multiobjective optimization problems (MOP). To further suppress the artifacts and effectively preserve the edge structures of the reconstructed image, this study presents a multiobjective optimization model that incorporates both a data fidelity term and the ℓ0-norm of the image gradient as objective functions. It employs an iterative approach different from traditional scalarization methods, using the maximization of structural similarity (SSIM) values to guide optimization rather than minimizing the objective function. The iterative method involves two steps. First, the simultaneous algebraic reconstruction technique (SART) optimizes the data fidelity term, with SSIM and the simulated annealing (SA) algorithm providing guidance; a degraded solution is accepted with a certain probability, and guided image filtering (GIF) is introduced to further preserve image edges when the degraded solution is rejected. Second, the result of the first step is integrated into the second objective function as a constraint; ℓ0 minimization is used to optimize the ℓ0-norm of the image gradient, and SSIM, the SA algorithm, and GIF guide the optimization by improving the SSIM value, as in the first step. Visual inspection and the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and SSIM values indicate that our approach outperforms other traditional methods. The experiments demonstrate the effectiveness of our method and its superiority over other classical methods in artifact suppression and edge detail restoration.
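The SSIM-guided simulated-annealing acceptance step described above can be sketched generically as a Metropolis rule; the temperature schedule and the exact acceptance form are assumptions:

```python
# Schematic SSIM-guided simulated-annealing acceptance: a candidate whose SSIM is
# worse than the current solution's is still accepted with a temperature-dependent
# probability. The Metropolis form and temperature handling are assumptions.
import math
import random

def accept_candidate(ssim_current: float, ssim_candidate: float, temperature: float) -> bool:
    if ssim_candidate >= ssim_current:
        return True                                            # improvement: always accept
    delta = ssim_current - ssim_candidate                      # degradation in SSIM
    return random.random() < math.exp(-delta / temperature)    # accept degradation probabilistically
```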
- Research Article
- 10.1016/j.sigpro.2024.109706
- Sep 11, 2024
- Signal Processing
- Xinling Liu + 4 more
Guaranteed matrix recovery using weighted nuclear norm plus weighted total variation minimization
- Research Article
- 10.1016/j.asoc.2024.111909
- Jun 25, 2024
- Applied Soft Computing
- Burhan Ul Haque Sheikh
Mitigating adversarial threats in deep CT image diagnosis models via a dual-stage inference-time defense
- Research Article
- 10.1007/s10278-023-00919-5
- Jun 17, 2024
- Journal of imaging informatics in medicine
- Burhan Ul Haque Sheikh + 1 more
Deep learning has significantly advanced the field of radiology-based disease diagnosis, offering enhanced accuracy and efficiency in detecting various medical conditions through the analysis of complex medical images such as X-rays. This technology's ability to discern subtle patterns and anomalies has proven invaluable for swift and accurate disease identification. The relevance of deep learning in radiology has been particularly highlighted during the COVID-19 pandemic, where rapid and accurate diagnosis is crucial for effective treatment and containment. However, recent research has uncovered vulnerabilities in deep learning models when exposed to adversarial attacks, leading to incorrect predictions. In response to this critical challenge, we introduce a novel approach that leverages total variation minimization to combat adversarial noise within X-ray images effectively. Our focus narrows to COVID-19 diagnosis as a case study, where we initially construct a classification model through transfer learning designed to accurately classify lung X-ray images encompassing no pneumonia, COVID-19 pneumonia, and non-COVID pneumonia cases. Subsequently, we extensively evaluated the model's susceptibility to targeted and un-targeted adversarial attacks by employing the fast gradient sign method (FGSM). Our findings reveal a substantial reduction in the model's performance, with the average accuracy plummeting from 95.56% to 19.83% under adversarial conditions. However, the experimental results demonstrate the exceptional efficacy of the proposed denoising approach in enhancing the performance of diagnosis models when applied to adversarial examples. Post-denoising, the model exhibits a remarkable accuracy improvement, surging from 19.83% to 88.23% on adversarial images. These promising outcomes underscore the potential of denoising techniques to fortify the resilience and reliability of AI-based COVID-19 diagnostic systems, laying the foundation for their successful deployment in clinical settings.
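The attack-then-denoise pipeline described above can be sketched with TensorFlow and scikit-image; the model, epsilon, and TV weight are illustrative assumptions rather than the authors' exact configuration:

```python
# Sketch of the defense idea: craft an FGSM perturbation, then apply total
# variation minimization to the perturbed X-ray before classification.
# The model, epsilon, and TV weight are illustrative assumptions.
import tensorflow as tf
from skimage.restoration import denoise_tv_chambolle

def fgsm_perturb(model, image, label, epsilon=0.01):
    """image: (1, H, W, C) float tensor in [0, 1]; label: one-hot (1, num_classes)."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)          # FGSM step
    return tf.clip_by_value(adversarial, 0.0, 1.0)

def tv_defense(adversarial, weight=0.1):
    # Denoise the adversarial image with TV minimization before re-classification.
    denoised = denoise_tv_chambolle(adversarial.numpy()[0], weight=weight, channel_axis=-1)
    return denoised[None, ...]                                  # restore the batch dimension
```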