Declouding of Satellite Images for Crop Growth Monitoring Via Unrolling of Gradient Graph Laplacian Regularizer
Spectral images periodically captured by satellites are often obscured by clouds. For crop monitoring, cloud removal ("declouding") in satellite images is important, so that crop growth at the field level can be estimated from restored images. In this paper, we adopt a graph signal processing (GSP) approach to satellite image declouding that captures neighboring-pixel correlations persisting over time. We first assume an atmospheric scattering model (ASM) for image formation, where the observation $\mathbf{y}$ is the product of a piecewise constant (PWC) target image $\mathbf{x}$ and a piecewise planar (PWP) transmission map $\mathbf{t}$. To decompose $\mathbf{y}$ back into $\mathbf{x}$ and $\mathbf{t}$, we formulate respective quadratic programming (QP) problems, solved alternately: an unconstrained QP with the graph Laplacian regularizer (GLR) as prior to compute $\mathbf{x}$, and a box-constrained QP with the gradient graph Laplacian regularizer (GGLR) as prior to compute $\mathbf{t}$. For efficient optimization, we compute $\mathbf{x}$ in the unconstrained QP via conjugate gradient (CG) without matrix inversion, while we compute $\mathbf{t}$ in the constrained QP via proximal gradient descent (PGD). We unroll iterations of our alternating algorithm into neural layers for end-to-end data-driven parameter optimization, resulting in an interpretable, algorithm-specific feed-forward network. Experimental results show that our unrolled network outperforms model-based and pure deep-learning schemes in declouded image quality, both objectively and subjectively.
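The unconstrained GLR subproblem above has a closed-form optimality condition that CG can solve matrix-inverse-free. A minimal 1-D sketch (illustrative only, not the authors' implementation; the path graph, toy signal, and weight `mu` are assumptions):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

def path_graph_laplacian(n):
    """Combinatorial Laplacian L = D - W of an n-node path graph."""
    w = np.ones(n - 1)
    W = diags([w, w], [-1, 1])
    D = diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W

# GLR-regularized restoration: min_x ||y - x||^2 + mu * x^T L x.
# Setting the gradient to zero gives the sparse linear system
# (I + mu*L) x = y, solved matrix-inverse-free by CG.
n, mu = 64, 5.0
rng = np.random.default_rng(0)
y = np.sign(np.linspace(-1, 1, n)) + 0.2 * rng.standard_normal(n)  # noisy PWC signal
L = path_graph_laplacian(n)
A = identity(n) + mu * L
x_hat, info = cg(A, y)  # info == 0 on convergence
```

Since the minimizer satisfies $\mu\,\hat{\mathbf{x}}^T \mathbf{L} \hat{\mathbf{x}} \le \mu\,\mathbf{y}^T \mathbf{L} \mathbf{y}$, the recovered signal is guaranteed to be no rougher (in the GLR sense) than the noisy input.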
- Conference Article
2
- 10.1109/mmsp.2018.8547124
- Aug 1, 2018
Depth information is widely used in many real-world applications. However, due to limitations of depth sensing technology, the captured depth map in practice usually has much lower resolution than its color image counterpart. In this paper, we propose to jointly exploit an internal smoothness prior and an external gradient consistency constraint in the graph domain for depth super-resolution. On one hand, a new graph Laplacian regularizer is proposed to preserve the inherent piecewise smooth characteristic of depth, which has desirable filtering properties. On the other hand, inspired by the observation that the gradient of depth is zero except at edges separating regions, we introduce a graph gradient consistency constraint to enforce that the graph gradient of depth is close to the thresholded gradient of the guidance image. Finally, the internal and external regularizations are cast into a unified optimization framework, which can be efficiently solved by ADMM. Experimental results demonstrate that our method outperforms the state of the art with respect to both objective and subjective quality evaluations.
- Conference Article
- 10.69997/sct.126733
- Jul 1, 2025
- Systems and Control Transactions
Anomaly detection is a key technique for maintaining process suitability and safety; however, the quality of process data often deteriorates due to missing or noisy values caused by sensor malfunctions. Such data imperfections may obscure real faults. If anomaly detection models are too sensitive to such abnormal data, they may produce false positives resulting in unnecessary alarms, which can obstruct detection of true process faults. Thus, deteriorating process data quality may affect process performance and safety. We propose a new anomaly detection method that uses graph Laplacian regularization as a loss function accounting for data-specific temporal relationships. Graph Laplacian regularization is a mathematical tool used in image processing and denoising to smooth data. We assume that successive process data that are temporally close have similar values and maintain temporal dependencies among variables. In this study, Laplacian regularization imposes significant penalties when the outputs of neighboring samples lose smoothness, under the assumption that neighboring samples keep similar relationships. Such temporal dependencies can be expressed as a graph structure and extracted with the Nearest Correlation (NC) method. To demonstrate the usefulness of the proposed anomaly detection method, we applied it to an anomaly detection problem in a vinyl acetate monomer (VAM) process. The results show that the model with graph Laplacian regularization achieved higher performance than the model without it in some fault scenarios, confirming that the proposed method is effective for anomaly detection.
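The smoothness penalty described above is the quadratic form $\mathbf{x}^T \mathbf{L} \mathbf{x}$ over a temporal graph. A minimal illustration (the chain adjacency and toy output signals are assumptions for exposition, not the paper's NC-derived graph):

```python
import numpy as np

def laplacian_from_adjacency(W):
    """Combinatorial graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def glr_loss(x, W):
    """Graph Laplacian regularization x^T L x: penalizes outputs that
    differ across connected (temporally neighboring) samples."""
    return float(x @ laplacian_from_adjacency(W) @ x)

# Chain graph linking each sample to its temporal neighbor.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.array([0.0, 0.1, 0.2, 0.3, 0.4])  # temporally smooth outputs
rough = np.array([0.0, 1.0, 0.0, 1.0, 0.0])   # oscillating outputs
# glr_loss(smooth, W) is small; glr_loss(rough, W) is heavily penalized.
```

Since $\mathbf{x}^T \mathbf{L} \mathbf{x} = \sum_{(i,j)} w_{ij}(x_i - x_j)^2$, adding this term to a model's training loss pushes predictions on temporally adjacent samples toward similar values.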
- Research Article
4
- 10.1109/lgrs.2022.3143302
- Jan 1, 2022
- IEEE Geoscience and Remote Sensing Letters
Sparse unmixing (SU) aims to express the observed image signatures as a linear combination of pure spectra known a priori, and has become a very popular technique with promising results in analyzing hyperspectral images (HSIs) over the past ten years. In SU, utilizing the spatial–contextual information allows for more realistic abundance estimation. To make full use of the spatial–spectral information, in this letter, we propose a pointwise mutual information (PMI)-based graph Laplacian (GL) regularization for SU. Specifically, we construct the affinity matrices via PMI by modeling the association between neighboring image features through a statistical framework, and then we use them in the GL regularizer. We also adopt a double reweighted $\ell_1$ norm minimization scheme to promote the sparsity of fractional abundances. Experimental results on simulated and real datasets prove the effectiveness of the proposed method and its superiority over competing algorithms in the literature.
- Research Article
6
- 10.1007/s13042-019-01059-5
- Jan 18, 2020
- International Journal of Machine Learning and Cybernetics
The presence of inflection points or rapid variation on the data manifold makes it difficult for second-order-derivative-based graph Laplacian and Hessian regularization techniques to accurately approximate the marginal distribution parameters. Moreover, functions generally over-fit seen unlabeled instances due to a lack of extrapolation power, which biases graph-Laplacian-regularized solutions towards constant functions. Hessian regularization addresses this by admitting a generic function based on the function's divergence in more than one direction. However, due to the presence of inflection points in dense regions, such functions remain unpenalized by Hessian manifold regularization. We propose a Jerk-based manifold regularization (JR) for dense, oscillating manifolds and manifolds with inflection points. JR approximates the rate of change of curvature of the underlying manifold, which appropriately identifies unpenalized geodesic-deviating functions and accurately penalizes them. It also helps identify the optimal function in the presence of inflection points. Extensive experiments on synthetic and real-world datasets show that the proposed JR technique approximates accurate and generic input-space geometrical constraints and outperforms existing state-of-the-art manifold regularization techniques by a significant margin.
- Research Article
- 10.1093/jigpal/jzae025
- Mar 22, 2024
- Logic Journal of the IGPL
Multi-Task Learning tries to improve the learning process of different tasks by solving them simultaneously. A popular Multi-Task Learning formulation for SVM is to combine common and task-specific parts. Other approaches rely on using a Graph Laplacian regularizer. Here we propose a combination of these two approaches that can be applied to L1, L2 and LS-SVMs. We also propose an algorithm to iteratively learn the graph adjacency matrix used in the Laplacian regularization. We test our proposal with synthetic and real problems, both in regression and classification settings. When the task structure is present, we show that our model is able to detect it, which leads to better results, and we also show it to be competitive even when this structure is not present.
- Research Article
283
- 10.1109/tip.2017.2651400
- Jan 11, 2017
- IEEE Transactions on Image Processing
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
- Conference Article
1
- 10.1145/3338533.3366582
- Dec 15, 2019
High-quality depth information has been increasingly used in many real-world multimedia applications in recent years. Due to the limitations of depth sensors and sensing technology, however, the captured depth map usually has low resolution and black holes. In this paper, inspired by the geometric relationship between the surface normals of a 3D scene and their distance from the camera, we observe that a surface normal map can provide additional spatial geometric constraints for depth map reconstruction, since a depth map is a special image with spatial information, which we call a 2.5D image. To exploit this property, we propose a novel surface-normal-guided depth recovery method, which uses surface normal data and observed depth values to estimate missing or interpolated depth values. Moreover, to preserve the inherent piecewise smooth characteristic of depth maps, a graph Laplacian prior is applied to regularize the inverse problem of depth map recovery, and a graph Laplacian regularizer (GLR) is proposed. Finally, the spatial geometric constraint and graph Laplacian regularization are integrated into a unified optimization framework, which can be efficiently solved by conjugate gradient (CG). Extensive quantitative and qualitative evaluations against state-of-the-art schemes show the effectiveness and superiority of our method.
- Conference Article
25
- 10.1109/apsipa.2014.7041627
- Dec 1, 2014
Image denoising is the most basic inverse imaging problem. As an under-determined problem, appropriate definition of image priors to regularize it is crucial. Among recently proposed priors for image denoising are: i) the graph Laplacian regularizer, where a given pixel patch is assumed to be smooth in the graph-signal domain; and ii) the self-similarity prior, where image patches are assumed to recur throughout a natural image in non-local spatial regions. In our first contribution, we demonstrate that the graph Laplacian regularizer converges to a continuous-domain functional counterpart, and that careful selection of its features can lead to a discriminant signal prior. In our second contribution, we redefine patch self-similarity in terms of patch gradients and argue that the new definition results in a more accurate estimate of the graph Laplacian matrix, and thus better image denoising performance. Experiments show that our algorithm based on the graph Laplacian regularizer and gradient-based self-similarity can outperform non-local means (NLM) denoising by up to 1.4 dB in PSNR.
- Conference Article
1
- 10.1109/infocomtech.2017.8340598
- Nov 1, 2017
Graph-based semi-supervised learning (SSL) methods are a natural way of representing and processing data, but the inclusion of infinitely many unlabeled points leads to a dense matrix, which precludes generalization. Moreover, a major problem of the graph Laplacian regularization method is its inability to scale: the graph Laplacian computation load does not allow exploiting all the information contained in the unlabeled data in SSL. In this paper, we address the scalability issue that ails graph Laplacian and iterated graph Laplacian regularization in graph-based semi-supervised learning via parallelization using the MapReduce approach. MapReduce is a programming model for processing large data sets by distributing parallel computations and data storage across a cluster of machines. By splitting data into small chunks, the algorithm mimics processing small sparse data matrices in place of a dense matrix. Experimental results show that, without a dip in accuracy, we are able to use resources more efficiently by load balancing.
- Conference Article
3
- 10.1109/siprocess.2016.7888259
- Aug 1, 2016
Removing noise while preserving image features such as edges and textures is a challenging problem in image denoising. Because it is an under-determined problem, defining appropriate image priors to regularize it plays an important role. A recently popular image prior is the graph Laplacian regularizer, which can exploit the local geometric structure of the image. Introducing a graph Laplacian matrix term and a dictionary learning term, in this paper we propose a new model to restore the original image. The objective consists of a data fidelity term, a graph Laplacian regularizer term, and a sparse representation term. To solve this non-convex model, we propose an alternating minimization method via Lagrangian optimization. In addition, we choose the eigenvectors of the normalized graph Laplacian matrix as the initial dictionary for sparse coding. Experimental results demonstrate that the proposed model outperforms BF and NLM in terms of both objective measurements and perceptual quality.
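The dictionary initialization mentioned above amounts to taking the eigenvectors of the normalized graph Laplacian (the graph Fourier basis) as initial atoms. A toy sketch (the 4-node graph is an assumption; real use would build the graph from image patches):

```python
import numpy as np

def normalized_laplacian(W):
    """Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Toy 4-node graph; the orthonormal eigenvectors of L serve as the
# initial dictionary atoms for sparse coding.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(W)
eigvals, dictionary = np.linalg.eigh(L)  # columns of `dictionary` are atoms
```

Because the Laplacian is symmetric, the atoms form an orthonormal basis, a convenient starting point before dictionary-learning updates adapt them to the data.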
- Research Article
79
- 10.1016/j.knosys.2023.110521
- Mar 29, 2023
- Knowledge-Based Systems
Hessian-based semi-supervised feature selection using generalized uncorrelated constraint
- Conference Article
80
- 10.1109/icassp.2015.7178380
- Apr 1, 2015
Image denoising is an under-determined problem, and hence it is important to define appropriate image priors for regularization. One recent popular prior is the graph Laplacian regularizer, where a given pixel patch is assumed to be smooth in the graph-signal domain. The strength and direction of the resulting graph-based filter are computed from the graph's edge weights. In this paper, we derive the optimal edge weights for local graph-based filtering using gradient estimates from non-local pixel patches that are self-similar. To analyze the effects of the gradient estimates on the graph Laplacian regularizer, we first show theoretically that, given that graph-signal $\mathbf{h}^D$ is a set of discrete samples of a continuous function $h(x, y)$ in a closed region $\Omega$, the graph Laplacian regularizer $(\mathbf{h}^D)^T \mathbf{L} \mathbf{h}^D$ converges to a continuous functional $S_\Omega$ integrating the gradient norm of $h$ in metric space $\mathbf{G}$, i.e., $(\nabla h)^T \mathbf{G}^{-1} (\nabla h)$, over $\Omega$. We then derive the optimal metric space $\mathbf{G}^*$: one that leads to a graph Laplacian regularizer that is discriminant when the gradient estimates are accurate, and robust when the gradient estimates are noisy. Finally, having derived $\mathbf{G}^*$, we compute the corresponding edge weights to define the Laplacian $\mathbf{L}$ used for filtering. Experimental results show that our image denoising algorithm using the per-patch optimal metric space $\mathbf{G}^*$ outperforms non-local means (NLM) by up to 1.5 dB in PSNR.
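One common way to realize gradient-dependent edge weights of the kind discussed above is a Gaussian kernel on gradient differences. The sketch below is a generic illustration (the kernel form, `sigma`, and toy gradients are assumptions, not the paper's optimal metric space):

```python
import numpy as np

def edge_weights_from_gradients(grads, sigma=0.5):
    """Gaussian-kernel edge weights w_ij = exp(-||g_i - g_j||^2 / (2 sigma^2))
    on per-pixel gradient estimates: pixels with similar gradients get
    strong edges, so the resulting Laplacian filters along, not across,
    image edges."""
    d2 = ((grads[:, None, :] - grads[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

# Three pixels: two in a flat region (similar gradients), one at an edge.
g = np.array([[0.0, 0.0], [0.05, 0.0], [1.0, 0.8]])
W = edge_weights_from_gradients(g)
# The flat-region pair (0, 1) is far more strongly connected than (0, 2).
```

Averaging gradient estimates over self-similar non-local patches, as the paper proposes, would reduce the noise in `grads` before the weights are computed.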
- Book Chapter
1
- 10.1007/978-3-031-08751-6_39
- Jan 1, 2022
The stock market is a complex network that consists of individual stocks exhibiting various financial properties and different data distributions. For stock prediction, it is natural to build separate models for each stock while also considering the complex hidden correlations among a set of stocks. We propose a federated multi-task stock predictor with financial graph Laplacian regularization (FMSP-FGL). Specifically, we first introduce a federated multi-task framework with graph Laplacian regularization to fit separate but related stock predictors simultaneously. Then, we investigate the problem of graph Laplacian learning, which represents the dynamic associations among stocks. We show that the proposed optimization problem with financial Laplacian constraints captures both the inter-series correlation between each pair of stocks and the relationship within the same stock cluster, which helps improve predictive performance. Empirical results on two popular stock indexes demonstrate that the proposed method outperforms baseline approaches. To the best of our knowledge, this is the first work to utilize the advantage of the graph Laplacian in multi-task learning for financial data to predict multiple stocks in parallel.
Keywords: Federated learning, Multi-task learning, Graph learning, Stock prediction
- Conference Article
3
- 10.1109/camsap.2017.8313136
- Dec 1, 2017
This paper presents a new algorithm for the joint restoration of depth and intensity (DI) images constructed using a gated SPAD-array imaging system. The three-dimensional (3D) data consists of two spatial dimensions and one temporal dimension, and contains photon counts (i.e., histograms). The algorithm is based on two steps: (i) construction of a graph connecting patches of pixels with similar temporal responses, and (ii) estimation of the DI values for pixels belonging to homogeneous spatial classes. The first step is achieved by building a graph representation of the 3D data, while giving special attention to the computational complexity of the algorithm. The second step is achieved using a Fisher scoring gradient descent algorithm while accounting for the data statistics and the Laplacian regularization term. Results on laboratory data show the benefit of the proposed strategy, which improves the quality of the estimated DI images.
- Research Article
48
- 10.1109/tnb.2017.2690365
- Mar 31, 2017
- IEEE Transactions on NanoBioscience
In modern molecular biology, a central and difficult problem is identifying characteristic genes from gene expression data. The traditional reconstruction-error-minimization model, principal component analysis (PCA), as a matrix decomposition method uses a quadratic error function, which is known to be sensitive to outliers and noise. Hence, it is necessary to design a robust PCA method for when outliers and noise exist. In this paper, we develop a novel PCA method enforcing a P-norm on the error function and a graph Laplacian regularization term for the matrix decomposition problem, called PgLPCA. The heart of the method's design for reducing the effect of outliers and noise is a new error function based on a non-convex proximal P-norm. In addition, the Laplacian regularization term is used to find the internal geometric structure in the data representation. To solve the minimization problem, we develop an efficient optimization algorithm based on the augmented Lagrange multiplier method. The method is used to select characteristic genes and cluster samples from explosively growing biological data, achieving higher accuracy than the compared methods.