A Novel Approach to Large-Scale Dynamically Weighted Directed Network Representation.

Abstract

A dynamically weighted directed network (DWDN) is frequently encountered in various big data-related applications, such as the terminal interaction pattern analysis system (TIPAS) concerned in this study. It consists of large-scale dynamic interactions among numerous nodes. As the involved nodes increase drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant DWDN High-Dimensional and Incomplete (HDI). An HDI DWDN, in spite of its incompleteness, contains rich knowledge regarding the involved nodes' various behavior patterns. To extract such knowledge from an HDI DWDN, this paper proposes a novel Alternating direction method of multipliers (ADMM)-based Nonnegative Latent-factorization of Tensors (ANLT) model. It adopts three-fold ideas: a) building a data density-oriented augmented Lagrangian function for efficiently handling an HDI tensor's incompleteness and nonnegativity; b) splitting the optimization task in each iteration into an elaborately designed subtask series, where each subtask is solved based on the previously solved ones following the ADMM principle to achieve fast convergence; and c) theoretically proving that convergence is guaranteed under its efficient learning scheme. Experimental results on six DWDNs from real applications demonstrate that the proposed ANLT outperforms state-of-the-art models significantly in both computational efficiency and prediction accuracy.
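The paper's exact data-density-oriented update rules are not reproduced on this page, but the ADMM splitting that handles a nonnegativity constraint can be illustrated on the nonnegative least-squares subproblem that arises when one latent factor is updated with the others held fixed. The following is a minimal, hypothetical sketch (all names are ours, not the paper's):

```python
import numpy as np

def nnls_admm(A, b, rho=1.0, n_iter=100):
    """Solve min ||Ax - b||^2 s.t. x >= 0 via the ADMM splitting x = z, z >= 0.

    This textbook splitting handles the nonnegativity constraint that
    nonnegative latent-factorization models impose on each latent factor.
    """
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # primal, split copy, scaled dual
    AtA, Atb = A.T @ A, A.T @ b
    M = AtA + rho * np.eye(n)                        # SPD system matrix, fixed across iterations
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # x-update: regularized least squares
        z = np.maximum(0.0, x + u)                   # z-update: projection onto x >= 0
        u += x - z                                   # scaled dual (multiplier) update
    return z

# Toy usage: recover a nonnegative vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.maximum(0.0, rng.standard_normal(10))
x_hat = nnls_admm(A, A @ x_true + 0.01 * rng.standard_normal(50))
```

A full NLFT model applies this pattern per latent factor and sums the data-fidelity term only over the observed tensor entries, which is where a data density-oriented scheme enters.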

References (showing 10 of 71 papers)
  • Cited by 15
  • 10.1109/tkde.2018.2872602
Modeling Large-Scale Dynamic Social Networks via Node Embeddings
  • Oct 1, 2019
  • IEEE Transactions on Knowledge and Data Engineering
  • Aakas Zhiyuli + 3 more

  • Open Access
  • Cited by 9
  • 10.1109/tetc.2014.2330517
An Overlay-Based Data Mining Architecture Tolerant to Physical Network Disruptions
  • Sep 1, 2014
  • IEEE Transactions on Emerging Topics in Computing
  • Katsuya Suto + 5 more

  • Open Access
  • Cited by 73
  • 10.1109/tpami.2019.2906603
Social Anchor-Unit Graph Regularized Tensor Completion for Large-Scale Image Retagging.
  • Mar 25, 2019
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Jinhui Tang + 4 more

  • Open Access
  • Cited by 13464
  • 10.1561/2200000016
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
  • Jan 1, 2010
  • Foundations and Trends® in Machine Learning
  • Stephen Boyd

  • Cited by 49
  • 10.1016/j.neucom.2017.10.040
Effects of preprocessing and training biases in latent factor models for recommender systems
  • Nov 3, 2017
  • Neurocomputing
  • Ye Yuan + 2 more

  • Cited by 6
  • 10.1109/tetci.2019.2951813
Efficient Search and Lookup in Unstructured P2P Overlay Networks Inspired by Swarm Intelligence
  • Dec 5, 2019
  • IEEE Transactions on Emerging Topics in Computational Intelligence
  • Vesna Sesum-Cavic + 2 more

  • Cited by 208
  • 10.1109/jas.2017.7510817
An online fault detection model and strategies based on SVM-grid in clouds
  • Mar 1, 2018
  • IEEE/CAA Journal of Automatica Sinica
  • Peiyun Zhang + 2 more

  • Cited by 334
  • 10.1109/tase.2010.2052042
A Shapley Value-Based Approach to Discover Influential Nodes in Social Networks
  • Jan 1, 2011
  • IEEE Transactions on Automation Science and Engineering
  • Ramasuri Narayanam + 1 more

  • Cited by 27
  • 10.1016/j.neucom.2019.08.026
A momentum-incorporated latent factorization of tensors model for temporal-aware QoS missing data prediction
  • Aug 13, 2019
  • Neurocomputing
  • Qingxian Wang + 3 more

  • Cited by 405
  • 10.1109/tkde.2021.3056502
Learning Dynamics and Heterogeneity of Spatial-Temporal Graph Data for Traffic Forecasting
  • Nov 1, 2022
  • IEEE Transactions on Knowledge and Data Engineering
  • Shengnan Guo + 4 more

Citations (showing 10 of 167 papers)
  • Research Article
  • Cited by 82
  • 10.1109/tnnls.2022.3226301
Context-Aware Poly(A) Signal Prediction Model via Deep Spatial-Temporal Neural Networks.
  • Jun 1, 2024
  • IEEE transactions on neural networks and learning systems
  • Yanbu Guo + 4 more

Polyadenylation [Poly(A)] is an essential process during messenger RNA (mRNA) maturation in biological eukaryote systems. Identifying Poly(A) signals (PASs) at the genome level is the key to understanding the mechanism of translation regulation and mRNA metabolism. In this work, we propose a deep dual-dynamic context-aware Poly(A) signal prediction model, called multiscale convolution with self-attention networks (MCANet), to adaptively uncover spatial-temporal contextual dependence information. Specifically, the model automatically learns and strengthens informative features along the temporal-wise and spatial-wise dimensions. The identity connectivity propagates contextual feature maps of Poly(A) data through direct connections from previous layers to subsequent layers. Then, a fully parametric rectified linear unit (FP-RELU) with dual-dynamic coefficients is devised to make training of the model easier and enhance the generalization ability. A cross-entropy loss (CL) function is designed to make the model focus on samples that are easy to misclassify. Experiments on different Poly(A) signals demonstrate the superior performance of the proposed MCANet, and an ablation study shows the effectiveness of the network design for the feature learning and prediction of Poly(A) signals.

  • Conference Article
  • 10.1109/msn60784.2023.00089
A Well-Designed Regularization Scheme for Latent Factorization of High-Dimensional and Incomplete Water-Quality Tensors from Sensor Networks
  • Dec 14, 2023
  • Xuke Wu + 3 more

  • Research Article
  • Cited by 6
  • 10.1109/tnse.2023.3246427
Distributed H∞-Consensus Estimation for Random Parameter Systems Over Binary Sensor Networks: A Local Performance Analysis Method
  • Jul 1, 2023
  • IEEE Transactions on Network Science and Engineering
  • Fei Han + 4 more

This paper deals with the distributed H∞-consensus estimation problem for a class of discrete time-varying random parameter systems over binary sensor networks, where the statistical information of the random parameter matrix is characterized by a generalized covariance matrix known a priori. As a binary sensor can only provide one bit of information according to a given threshold, an indicator variable is introduced so as to extract functional information (from the sensor output) that can be employed to estimate the system state. With the introduced indicator variable, a distributed estimator is constructed for each binary sensor with guaranteed H∞-consensus performance constraint on the estimation error dynamics over a finite horizon. By means of a local performance analysis method, indicator-variable-dependent conditions are established for the existence of the desired distributed estimators whose gains are calculated by solving a set of recursive linear matrix inequalities. Finally, the applicability and effectiveness of the developed distributed estimation scheme are demonstrated through a numerical example.

  • Conference Article
  • 10.1109/icnsc55942.2022.10004071
A Novel Block Transmission Model in Blockchain Networks
  • Dec 15, 2022
  • Peiyun Zhang + 2 more

In a blockchain network, instability of the block transmission process can reduce the speed of block transmission. If blocks cannot be accepted by nodes and saved on the blockchain in time, the ledgers stored by different nodes may become inconsistent, reducing the security of the blockchain network. However, when nodes transmit blocks, they often encounter the problems of overly large blocks and insufficient bandwidth, which result in slow block transmission and low efficiency. To solve these problems, this paper proposes a block transmission model that encodes units into packets. Based on the model, the corresponding encoding and decoding processes are designed. The proposed method is compared with two state-of-the-art methods: Velocity and Kadcast. Experimental results show that the proposed method performs better than its peers in terms of block synchronization time, block transmission success ratio, and packet retransmission ratio.

  • Open Access
  • Conference Article
  • 10.1109/isas61044.2024.10552611
An ADRC-Incorporated Stochastic Gradient Descent Algorithm for Latent Factor Analysis
  • May 7, 2024
  • Jinli Li + 1 more

High-dimensional and incomplete (HDI) matrices contain many complex interactions between numerous nodes. A stochastic gradient descent (SGD)-based latent factor analysis (LFA) model is remarkably effective in extracting valuable information from an HDI matrix. However, such a model commonly suffers from slow convergence because a standard SGD algorithm computes the stochastic gradient from the current learning error alone, without considering the historical and future states of the learning error. To address this critical issue, this paper innovatively proposes an ADRC-incorporated SGD (ADS) algorithm that refines the instance learning error with its historical and future states, following the principle of an ADRC controller. With it, an ADS-based LFA model is further achieved for fast and accurate latent factor analysis on an HDI matrix. Empirical studies on two HDI datasets demonstrate that the proposed model outperforms state-of-the-art LFA models in terms of computational efficiency and accuracy for predicting the missing data of an HDI matrix.
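For orientation, the standard SGD-based LFA baseline that this abstract builds on (without the ADRC refinement, whose exact filter the abstract does not specify) can be sketched as follows; variable names are illustrative:

```python
import numpy as np

def sgd_lfa(triples, n_rows, n_cols, rank=5, lr=0.01, lam=0.05, epochs=50):
    """Plain SGD latent factor analysis over the observed entries of an HDI matrix.

    triples: iterable of (i, j, value) for the known entries only.
    Returns factor matrices P, Q such that P[i] @ Q[j] approximates entry (i, j).
    """
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_rows, rank))
    Q = 0.1 * rng.standard_normal((n_cols, rank))
    for _ in range(epochs):
        for i, j, v in triples:
            err = v - P[i] @ Q[j]                    # current instance learning error
            P[i] += lr * (err * Q[j] - lam * P[i])   # regularized stochastic gradient steps
            Q[j] += lr * (err * P[i] - lam * Q[j])
    return P, Q
```

The ADS idea replaces the raw `err` above with a controller-filtered version that also reflects its historical and future states.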

  • Conference Article
  • 10.1109/icnsc55942.2022.10004094
Accurate Occupational Pneumoconiosis Staging with Imbalanced Data
  • Dec 15, 2022
  • Kaiguang Yang + 4 more

Occupational pneumoconiosis (OP) staging is a vital task concerning the lung health of a subject. The staging result of a patient depends on the staging standard and the patient's chest X-ray, making staging essentially an image classification task. However, the distribution of OP data is commonly imbalanced, which largely reduces the accuracy of classification models built under the assumption that data follow a balanced distribution, causing inaccurate staging results. To achieve accurate OP staging, this work proposes an OP staging model able to handle imbalanced data. The proposed model adopts a gray level co-occurrence matrix (GLCM) to extract texture features of chest X-rays and implements classification with a weighted broad learning system (WBLS). Empirical studies on six data cases provided by a hospital indicate that the proposed model achieves better OP staging than state-of-the-art classifiers on imbalanced data.

  • Conference Article
  • Cited by 2
  • 10.1109/icnsc55942.2022.10004082
Highly-Accurate Robot Calibration Based on Plane Constraint via Integrating Square-Root Cubature Kalman filter and Levenberg-Marquardt Algorithm
  • Dec 15, 2022
  • Tinghui Chen + 2 more

In the field of modern industrial manufacturing, industrial robots are indispensable intelligent automatic mechanical equipment for advanced industrial production. However, due to long-term mechanical wear and structural deformation, their absolute positioning accuracy is low, which greatly hinders the development of the manufacturing industry. Calibrating the kinematic parameters of the robot is an effective way to address this. However, the main measuring equipment, such as laser trackers and coordinate measuring machines, is expensive and requires specially trained personnel to operate. Additionally, measurement noises generated by extensive environmental factors during the measurement process affect the calibration accuracy of the robot. Based on these observations, this work does the following: a) developing a robot calibration method based on plane constraint to simplify measurement steps; b) employing a square-root cubature Kalman filter (SCKF) algorithm to reduce the influence of measurement noises; c) proposing a novel algorithm for identifying kinematic parameters based on the SCKF and Levenberg-Marquardt (LM) algorithms to achieve high calibration accuracy; and d) adopting a dial indicator as the measuring equipment to slash costs. Extensive experiments verify the effectiveness of the proposed calibration algorithm and experimental platform.

  • Book Chapter
  • 10.1007/978-3-662-72243-5_14
Fourier-Enhanced Adaptive Manifold Latent Feature Analysis for Spatiotemporal Signal Recovery
  • Oct 4, 2025
  • Yuting Ding + 3 more

  • Research Article
  • Cited by 1
  • 10.1007/s10489-023-04686-2
Regularized label relaxation-based stacked autoencoder for zero-shot learning
  • Jun 27, 2023
  • Applied Intelligence
  • Jianqiang Song + 4 more

  • Research Article
  • Cited by 29
  • 10.1109/jas.2023.123474
Proximal Alternating-Direction-Method-of-Multipliers-Incorporated Nonnegative Latent Factor Analysis
  • Jun 1, 2023
  • IEEE/CAA Journal of Automatica Sinica
  • Fanghui Bi + 4 more

High-dimensional and incomplete (HDI) data subject to the nonnegativity constraints are commonly encountered in a big data-related application concerning the interactions among numerous nodes. A nonnegative latent factor analysis (NLFA) model can perform representation learning to HDI data efficiently. However, existing NLFA models suffer from either slow convergence rate or representation accuracy loss. To address this issue, this paper proposes a proximal alternating-direction-method-of-multipliers-based nonnegative latent factor analysis (PAN) model with two-fold ideas: 1) adopting the principle of alternating-direction-method-of-multipliers to implement an efficient learning scheme for fast convergence and high computational efficiency; and 2) incorporating the proximal regularization into the learning scheme to suppress the optimization fluctuation for high representation learning accuracy to HDI data. Theoretical studies verify that PAN converges to a Karush-Kuhn-Tucker (KKT) stationary point of its nonnegativity-constrained learning objective with its learning scheme. Experimental results on eight HDI matrices from real applications demonstrate that the proposed PAN model outperforms several state-of-the-art models in both estimation accuracy for missing data of an HDI matrix and computational efficiency.
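Schematically, the proximal regularization described above amounts to adding a damping term to each ADMM subproblem (our notation, not the paper's exact update):

```latex
x^{k+1} \;=\; \arg\min_{x}\; \mathcal{L}_{\rho}\bigl(x,\, z^{k},\, u^{k}\bigr) \;+\; \frac{\lambda}{2}\,\bigl\|x - x^{k}\bigr\|_{2}^{2},
```

where $\mathcal{L}_{\rho}$ is the augmented Lagrangian. The extra term penalizes large jumps between consecutive iterates, which suppresses optimization fluctuation at the cost of slightly more conservative steps.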

Similar Papers
  • Conference Article
  • Cited by 5
  • 10.1109/case49439.2021.9551506
Discovering Hidden Pattern in Large-scale Dynamically Weighted Directed Network via Latent Factorization of Tensors
  • Aug 23, 2021
  • Hao Wu + 2 more

A dynamically weighted directed network (DWDN) is frequently encountered in various big data-related applications like a terminal interaction pattern analysis system (TIPAS) concerned in this study. It consists of large-scale dynamic interactions among numerous entities. Moreover, as the involved entities increase drastically, it becomes impossible to observe their full interactions at each time span, making a corresponding DWDN high-dimensional and incomplete. However, it contains vital knowledge regarding involved entities' behavior patterns. To extract such knowledge from DWDN, this paper proposes a novel Alternating direction method of multipliers (ADMM)-based Nonnegative Latent-factorization of Tensors (ANLT) model. It adopts two novel ideas: a) building a data density-oriented augmented Lagrangian function for efficiently handling a tensor's incompleteness and nonnegativity; and b) splitting an optimization task in each iteration into an elaborately designed subtask series where each one is solved based on the previously solved ones following the ADMM principle to achieve fast model convergence. Experimental results on two large-scale DWDNs from a real TIPAS demonstrate that the proposed ANLT model outperforms state-of-the-art models significantly in both computational efficiency and prediction accuracy when addressing missing link prediction on DWDN.

  • Research Article
  • 10.1118/1.4957359
MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning
  • Jun 1, 2016
  • Medical Physics
  • X Liu + 2 more

Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in a graph form of ADMM. A pre-iteration operation for the projection of a point to a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, LBFGS performed the best and was 3–5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8–100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also used significantly less computer memory.
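The "projection of a point to a graph" highlighted above has a closed form in the standard graph-form setup min f(y) + g(x) s.t. y = Ax; here is a hedged NumPy sketch of that single step (notation ours, not the paper's):

```python
import numpy as np

def project_to_graph(A, c, d):
    """Project the point (c, d) onto the graph {(x, y) : y = A x}.

    Minimizing ||x - c||^2 + ||y - d||^2 subject to y = A x gives the
    normal equations (I + A^T A) x = c + A^T d.
    """
    n = A.shape[1]
    x = np.linalg.solve(np.eye(n) + A.T @ A, c + A.T @ d)
    return x, A @ x
```

Because I + AᵀA is fixed across iterations, it can be factored once and reused, which is one plausible source of the speedups reported above.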

  • Book Chapter
  • 10.1007/978-981-19-8934-6_5
ADMM-Based Nonnegative Latent Factorization of Tensors
  • Jan 1, 2023
  • Hao Wu + 2 more

An HDI dynamic network contains a multitude of knowledge regarding involved nodes' various behavior patterns. Accurately representing such an HDI dynamic network is of the essence for effectively extracting this knowledge. Therefore, this chapter presents a novel Alternating direction method of multipliers (ADMM)-based Nonnegative Latent-factorization of Tensors (ANLT) model. It adopts two-fold ideas: (a) building a data density-oriented augmented Lagrangian function to efficiently handle the incompleteness and non-negativity of an HDI tensor; and (b) dividing the optimization task in each iteration into a skillfully designed subtask series, where each subtask is solved based on the previously solved ones following the principle of ADMM to achieve fast convergence. Empirical studies on six dynamic networks of different sizes demonstrate that, compared with several state-of-the-art models, the proposed ANLT model achieves significant gains in prediction accuracy and computational efficiency when predicting missing links of an HDI dynamic network.

  • Research Article
  • Cited by 12
  • 10.1007/s12532-020-00192-5
Managing randomization in the multi-block alternating direction method of multipliers for quadratic optimization
  • Sep 23, 2020
  • Mathematical Programming Computation
  • Krešimir Mihić + 2 more

The Alternating Direction Method of Multipliers (ADMM) has gained a lot of attention for solving large-scale and objective-separable constrained optimization. However, the two-block variable structure of the ADMM still limits the practical computational efficiency of the method, because one big matrix factorization is needed at least once even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure of the decision variables in the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to be convergent. On the other hand, two positive developments have been made: first, if in each cyclic loop one randomly permutes the updating order of the multiple blocks, then the method converges in expectation for solving any system of linear equations with any number of blocks. Secondly, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness into the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM) where the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM and show when random assembling helps and when it hurts, and develop a criterion to guarantee that it converges almost surely. Secondly, using the theoretical guidance on RAC-ADMM, we conduct multiple numerical tests on solving both randomly generated and large-scale benchmark quadratic optimization problems, which include continuous, and binary graph-partition and quadratic assignment, and selected machine learning problems. Our numerical tests show that the RAC-ADMM, with a variable-grouping strategy, could significantly improve the computation efficiency on solving most quadratic optimization problems.

  • Research Article
  • Cited by 758
  • 10.1109/tci.2016.2629286
Plug-and-Play ADMM for Image Restoration: Fixed-Point Convergence and Applications
  • Mar 1, 2017
  • IEEE Transactions on Computational Imaging
  • Stanley H Chan + 2 more

Alternating direction method of multiplier (ADMM) is a widely used algorithm for solving constrained optimization problems in image restoration. Among many useful features, one critical feature of the ADMM algorithm is its modular structure, which allows one to plug in any off-the-shelf image denoising algorithm for a subproblem in the ADMM algorithm. Because of the plug-in nature, this type of ADMM algorithms is coined the name “Plug-and-Play ADMM.” Plug-and-Play ADMM has demonstrated promising empirical results in a number of recent papers. However, it is unclear under what conditions and by using what denoising algorithms would it guarantee convergence. Also, since Plug-and-Play ADMM uses a specific way to split the variables, it is unclear if fast implementation can be made for common Gaussian and Poissonian image restoration problems. In this paper, we propose a Plug-and-Play ADMM algorithm with provable fixed-point convergence. We show that for any denoising algorithm satisfying an asymptotic criteria, called bounded denoisers, Plug-and-Play ADMM converges to a fixed point under a continuation scheme. We also present fast implementations for two image restoration problems on superresolution and single-photon imaging. We compare Plug-and-Play ADMM with state-of-the-art algorithms in each problem type and demonstrate promising experimental results of the algorithm.
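A minimal sketch of the Plug-and-Play ADMM loop for the simplest fidelity term (Gaussian denoising), with a Gaussian filter standing in for the plugged-in denoiser; the paper's bounded-denoiser conditions and continuation scheme are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, sigma2=0.01, rho=1.0, n_iter=30):
    """Plug-and-Play ADMM for min ||x - y||^2 / (2 * sigma2) + prior(x).

    The proximal step of the (implicit) prior is replaced by an
    off-the-shelf denoiser, here a toy Gaussian filter.
    """
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        x = (y / sigma2 + rho * (z - u)) / (1.0 / sigma2 + rho)  # closed-form fidelity prox
        z = gaussian_filter(x + u, sigma=1.0)                    # plugged-in denoising step
        u += x - z                                               # scaled dual update
    return x
```

Swapping `gaussian_filter` for any stronger denoiser is the "plug-in" modularity the abstract describes; the paper's contribution is characterizing when this loop still converges.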

  • Research Article
  • Cited by 1
  • 10.1016/j.ins.2024.121641
Privacy-preserving and communication-efficient stochastic alternating direction method of multipliers for federated learning
  • Nov 14, 2024
  • Information Sciences
  • Yi Zhang + 6 more

  • Research Article
  • Cited by 35
  • 10.1007/s10107-019-01423-x
On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming
  • Aug 26, 2019
  • Mathematical Programming
  • Liang Chen + 3 more

In this paper, we show that for a class of linearly constrained convex composite optimization problems, an (inexact) symmetric Gauss–Seidel based majorized multi-block proximal alternating direction method of multipliers (ADMM) is equivalent to an inexact proximal augmented Lagrangian method. This equivalence not only provides new perspectives for understanding some ADMM-type algorithms but also supplies meaningful guidelines on implementing them to achieve better computational efficiency. Even for the two-block case, a by-product of this equivalence is the convergence of the whole sequence generated by the classic ADMM with a step-length that exceeds the conventional upper bound of $$(1+\sqrt{5})/2$$ , if one part of the objective is linear. This is exactly the problem setting in which the very first convergence analysis of ADMM was conducted by Gabay and Mercier (Comput Math Appl 2(1):17–40, 1976), but, even under notably stronger assumptions, only the convergence of the primal sequence was known. A collection of illustrative examples are provided to demonstrate the breadth of applications for which our results can be used. Numerical experiments on solving a large number of linear and convex quadratic semidefinite programming problems are conducted to illustrate how the theoretical results established here can lead to improvements on the corresponding practical implementations.
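For reference, the classic two-block ADMM iteration discussed above, written with the scaled dual variable and the dual step-length whose conventional bound the paper relaxes:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_{2}^{2},\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_{2}^{2},\\
u^{k+1} &= u^{k} + \gamma\,\bigl(Ax^{k+1} + Bz^{k+1} - c\bigr),
\qquad \gamma \in \Bigl(0,\; \tfrac{1+\sqrt{5}}{2}\Bigr).
\end{aligned}
```

The paper's result concerns exactly the case where $\gamma$ may exceed the upper end of this classical interval when one part of the objective is linear.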

  • Conference Article
  • 10.1109/robio49542.2019.8961736
Fast ADMM ℓ1-minimization by applying SMW formula and multi-row simultaneous estimation for Light Transport Matrix acquisition
  • Dec 1, 2019
  • Naoya Chiba + 2 more

The Light Transport Matrix (LTM) is a fundamental expression of the light propagation of the projector-camera system. The matrix includes all the characteristics of light rays transferred from the projector to the camera, and it is used for scene relighting, understanding the light path, and 3D measurement. Especially, LTM enables robust 3D measurement even if the scene includes metallic or semi-transparent objects; thus it is already used for robot vision. The LTM is often estimated by ℓ1 minimization because the LTM has a huge number of elements. ℓ1 minimization methods, which utilize the Alternating Direction Method of Multipliers (ADMM), can reduce the number of observations. In addition, a powerful extended ADMM ℓ1 minimization method named Saturation ADMM, which can estimate the LTM under saturated conditions, also exists. In the study presented in this paper, we reduce the computational cost of ADMM ℓ1 minimization by applying the Sherman-Morrison-Woodbury (SMW) formula. Furthermore, we propose multi-row simultaneous LTM estimation, which is a new method to improve the computational efficiency. The contribution of this paper is to propose the use of these two methods to speed up LTM estimation and demonstrate that our methods reduce the computational cost in theory and the calculation time in practice. Experiments indicate that our method accelerates ADMM ℓ1 minimization by up to 4.64 times, and Saturation ADMM ℓ1 minimization by up to 2.54 times compared to the original methods.
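The Sherman-Morrison-Woodbury (SMW) formula exploited above inverts a low-rank update cheaply: (A + UCV)⁻¹ = A⁻¹ − A⁻¹U(C⁻¹ + VA⁻¹U)⁻¹VA⁻¹. A quick numerical check of the identity (toy sizes, our code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5                                  # SMW pays off when k << n
A = np.diag(rng.uniform(1.0, 2.0, n))          # base matrix with a cheap inverse
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.eye(k)

A_inv = np.diag(1.0 / np.diag(A))              # O(n) inverse of the diagonal part
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)  # only a k-by-k inverse
smw = A_inv - A_inv @ U @ small @ V @ A_inv
assert np.allclose(smw, np.linalg.inv(A + U @ C @ V))    # matches the direct inverse
```

When A⁻¹ is cheap and k is small, this replaces an n-by-n solve with a k-by-k one, which is the source of the speedup the paper exploits inside its ADMM iterations.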

  • Research Article
  • Cited by 170
  • 10.1109/tsp.2013.2295055
Decentralized Dynamic Optimization Through the Alternating Direction Method of Multipliers
  • Mar 1, 2014
  • IEEE Transactions on Signal Processing
  • Qing Ling + 1 more

This paper develops the application of the alternating direction method of multipliers (ADMM) to optimize a dynamic objective function in a decentralized multi-agent system. At each time slot, agents in the network observe local functions and cooperate to track the optimal time-varying argument of the sum objective. This cooperation is based on maintaining local primal variables that estimate the value of the optimal argument and auxiliary dual variables that encourage proximity with neighboring estimates. Primal and dual variables are updated by an ADMM iteration that can be implemented in a distributed manner whereby local updates require access to local variables and the most recent primal variables from adjacent agents. For objective functions that are strongly convex and have Lipschitz continuous gradients, the distances between the primal and dual iterates to their corresponding time-varying optimal values are shown to converge to a steady state gap. This gap is explicitly characterized in terms of the condition number of the objective function, the condition number of the network that is defined as the ratio between the largest and smallest nonzero Laplacian eigenvalues, and a bound on the drifts of the optimal primal variables and the optimal gradients. Numerical experiments corroborate theoretical findings and show that the results also hold for non-differentiable and non-strongly convex primal objectives.
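A minimal sketch of the consensus-ADMM pattern underlying such methods, shown on the toy problem of agents agreeing on the average of their local data. The paper's formulation is fully decentralized and time-varying; this simplified version uses a central averaging step, and all names are ours:

```python
import numpy as np

def consensus_admm(local_data, rho=1.0, n_iter=50):
    """Agents minimize sum_i (x_i - a_i)^2 subject to consensus x_i = z.

    Each x-update touches only agent-local data; the z-update plays the
    role of neighbor communication. The fixed point is the average of a_i.
    """
    a = np.asarray(local_data, dtype=float)
    x, u, z = a.copy(), np.zeros_like(a), 0.0
    for _ in range(n_iter):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)  # local primal updates
        z = np.mean(x + u)                           # consensus step
        u += x - z                                   # local dual updates
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges to the average, 3.0
```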

  • Conference Article
  • Cited by 16
  • 10.1109/bigdata47090.2019.9005716
Differentially Private Robust ADMM for Distributed Machine Learning
  • Dec 1, 2019
  • Jiahao Ding + 5 more

To embrace the era of big data, there has been growing interest in designing distributed machine learning to exploit the collective computing power of the local computing nodes. Alternating Direction Method of Multipliers (ADMM) is one of the most popular methods. This method applies iterative local computations over local datasets at each agent and computation results exchange between the neighbors. During this iterative process, data privacy leakage arises when performing local computation over sensitive data. Although many differentially private ADMM algorithms have been proposed to deal with such privacy leakage, they still have to face many challenging issues such as low model accuracy over strict privacy constraints and requiring strong assumptions of convexity of the objective function. To address those issues, in this paper, we propose a differentially private robust ADMM algorithm (PR-ADMM) with Gaussian mechanism. We employ two kinds of noise variance decay schemes to carefully adjust the noise addition in the iterative process and utilize a threshold to eliminate the too noisy results from neighbors. We also prove that PR-ADMM satisfies dynamic zero-concentrated differential privacy (dynamic zCDP) and a total privacy loss is given by $ (\epsilon, \delta)$-differential privacy. From a theoretical point of view, we analyze the convergence rate of PR-ADMM for general convex objectives, which is $\mathcal{O}(1 /K)$ with K being the number of iterations. The performance of the proposed algorithm is evaluated on real-world datasets. The experimental results show that the proposed algorithm outperforms other differentially private ADMM based algorithms under the same total privacy loss.
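The core privatization step in such algorithms, perturbing a locally computed iterate with calibrated Gaussian noise before sharing it with neighbors, can be sketched as follows; sensitivity handling, the neighbor-thresholding rule, and the zCDP accounting are simplified away, and all names are hypothetical:

```python
import numpy as np

def privatize_update(theta_local, sensitivity, sigma, rng):
    """Add Gaussian noise to a local ADMM iterate before it is shared.

    sigma is derived from the privacy budget; a larger sensitivity or a
    tighter budget requires proportionally more noise.
    """
    return theta_local + rng.normal(0.0, sigma * sensitivity, size=theta_local.shape)

def decayed_sigma(sigma0, decay, k):
    """Noise variance decay schedule: inject less noise as iterates stabilize."""
    return sigma0 * decay**k
```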

  • Research Article
  • Cited by 1
  • 10.3390/math12010043
An Adaptive Low Computational Cost Alternating Direction Method of Multiplier for RELM Large-Scale Distributed Optimization
  • Dec 22, 2023
  • Mathematics
  • Ke Wang + 4 more

In a class of large-scale distributed optimization, the calculation of RELM based on the Moore–Penrose inverse matrix is prohibitively expensive, which hinders the formulation of a computationally efficient optimization model. Attempting to improve the model’s convergence performance, this paper proposes a low computing cost Alternating Direction Method of Multipliers (ADMM), where the original update in ADMM is solved inexactly with approximate curvature information. Based on quasi-Newton techniques, the ADMM approach allows us to solve convex optimization with reasonable accuracy and computational effort. By introducing this algorithm into the RELM model, the model fitting problem can be decomposed into a set of subproblems that can be executed in parallel to achieve efficient classification performance. To avoid the storage of expensive Hessian for large problems, BFGS with limited memory is proposed with computational efficiency. And the optimal parameter values of the step-size search method are obtained through Wolfe line search strategy. To demonstrate the superiority of our methods, numerical experiments are conducted on eight real-world datasets. Results on problems arising in machine learning suggest that the proposed method is competitive with other similar methods, both in terms of better computational efficiency as well as accuracy.

  • Research Article
  • Cited by 3
  • 10.3103/s0146411618010078
Improving Medical CT Image Blind Restoration Algorithm Based on Dictionary Learning by Alternating Direction Method of Multipliers
  • Jan 1, 2018
  • Automatic Control and Computer Sciences
  • Yunshan Sun + 4 more

In this paper, the medical CT image blind restoration is translated into two sub problems, namely, image estimation based on dictionary learning and point spread function estimation. A blind restoration algorithm optimized by the alternating direction method of multipliers for medical CT images was proposed. At present, the existing methods of blind image restoration based on dictionary learning have the problem of low efficiency and precision. This paper aims to improve the effectiveness and accuracy of the algorithm and to improve the robustness of the algorithm. The local CT images are selected as training samples, and the K-SVD algorithm is used to construct the dictionary by iterative optimization, which is beneficial to improve the efficiency of the algorithm. Then, the orthogonal matching pursuit algorithm is employed to implement the dictionary update. Dictionary learning is accomplished by sparse representation of medical CT images. The alternating direction method of multipliers (ADMM) is used to solve the objective function and realize the local image restoration, so as to eliminate the influence of point spread function. Secondly, the local restoration image is used to estimate the point spread function, and the convex quadratic optimization method is used to solve the point spread function sub problems. Finally, the optimal estimation of point spread function is obtained by iterative method, and the global sharp image is obtained by the alternating direction method of multipliers. Experimental results show that, compared with the traditional adaptive dictionary restoration algorithm, the new algorithm improves the objective image quality metrics, such as peak signal to noise ratio, structural similarity, and universal image quality index. The new algorithm optimizes the restoration effect, improves the robustness of noise immunity and improves the computing efficiency.

  • Conference Article
  • Cited by 8
  • 10.1145/3340531.3411860
Towards Plausible Differentially Private ADMM Based Distributed Machine Learning
  • Oct 19, 2020
  • Jiahao Ding + 4 more

The Alternating Direction Method of Multipliers (ADMM) and its distributed version have been widely used in machine learning. In the iterations of ADMM, model updates using local private data and model exchanges among agents impose critical privacy concerns. Despite some pioneering works to relieve such concerns, differentially private ADMM still confronts many research challenges. For example, the guarantee of differential privacy (DP) relies on the premise that the optimality of each local problem can be perfectly attained in each ADMM iteration, which may never happen in practice. The model trained by DP ADMM may have low prediction accuracy. In this paper, we address these concerns by proposing a novel (Improved) Plausible differentially Private ADMM algorithm, called PP-ADMM and IPP-ADMM. In PP-ADMM, each agent approximately solves a perturbed optimization problem that is formulated from its local private data in an iteration, and then perturbs the approximate solution with Gaussian noise to provide the DP guarantee. To further improve the model accuracy and convergence, an improved version IPP-ADMM adopts sparse vector technique (SVT) to determine if an agent should update its neighbors with the current perturbed solution. The agent calculates the difference of the current solution from that in the last iteration, and if the difference is larger than a threshold, it passes the solution to neighbors; or otherwise the solution will be discarded. Moreover, we propose to track the total privacy loss under the zero-concentrated DP (zCDP) and provide a generalization performance analysis. Experiments on real-world datasets demonstrate that under the same privacy guarantee, the proposed algorithms are superior to the state of the art in terms of model accuracy and convergence rate.

  • Conference Article
  • Cited by 6
  • 10.1109/cdc.2017.8264213
A customized ADMM for rank-constrained optimization problems with approximate formulations
  • Dec 1, 2017
  • Chuangchuang Sun + 1 more

This paper proposes a customized Alternating Direction Method of Multipliers (ADMM) algorithm to solve the Rank-Constrained Optimization Problems (RCOPs) with approximate formulations. Here RCOP refers to an optimization problem whose objective and constraints are convex except a (nonconvex) matrix rank constraint. We first present an approximate formulation for the RCOP with high accuracy by selecting an appropriate parameter set. Then a general ADMM frame is employed to solve the approximated problem without requiring singular value decomposition in each subproblem. The new formulation and the customized ADMM algorithm greatly enhance the computational efficiency and scalability. While ADMM has been extensively investigated for convex optimization problems, its convergence property is still open for nonconvex problems. Another contribution of this paper is to prove that the proposed ADMM globally converges to a stationary point of the approximate problem of RCOP. Simulation examples are provided to demonstrate the feasibility and efficiency of the proposed method.
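For contrast with the SVD-free formulation above: the standard ADMM treatment of a rank constraint projects onto the rank-k matrices with a truncated SVD in every iteration, which is exactly the per-subproblem cost the paper avoids. A sketch of that baseline projection step:

```python
import numpy as np

def project_rank_k(M, k):
    """Best rank-k approximation of M (Eckart-Young), via truncated SVD.

    A standard ADMM for rank-constrained problems would run this in its
    z-update each iteration -- the step the paper's formulation removes.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```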

  • Conference Article
  • Cited by 3
  • 10.2118/213022-ms
Distributed Agent Optimization for Large-Scale Network Models
  • May 15, 2023
  • Zhenyu Guo + 2 more

Optimization of production networks is key for managing efficient hydrocarbon production as part of closed-loop asset management. Large-scale surface network optimization is a challenging task that involves high nonlinearity with numerous constraints. In existing tools, the computational cost of solving the surface network optimization can exponentially increase with the size and complexities of the network using traditional approaches involving nonlinear programming methods. In this study, we accelerate the large-scale surface network optimization by using a distributed agent optimization algorithm called alternating direction method of multipliers (ADMM). We develop and apply the ADMM algorithm for large-scale network optimization with over 1000 wells and interconnecting pipelines. In the ADMM framework, a large-scale network system is broken down into many small sub-network systems. Then, a smaller optimization problem is formulated for each sub-network. These sub-network optimization problems are solved in parallel using multiple computer cores so that the entire system optimization will be accelerated. A large-scale surface network involves many inequality and equality constraints, which are effectively handled by using augmented Lagrangian method to enhance the robustness of convergence quality. Additionally, proxy or hybrid models can also be used for pipe flow and pressure calculation for every network segment to further speed up the optimization. The proposed ADMM optimization method is validated by several synthetic cases. We first apply the proposed method to surface network simulation problems of various sizes and complexities (configurations, fluid types, pressure regimes, etc.), where the pressure for all nodes and fluxes in all links will be calculated with a specified separator pressure and reservoir pressures. High accuracy was obtained from the ADMM framework compared with a commercial simulator. Next, the ADMM is applied to network optimization problems, where we optimize the pressure drop across a surface choke for every well to maximize oil production. In a large-scale network case with over 1000 wells, we achieve 2X – 3X speedups in computation time with reasonable accuracy from the ADMM framework compared with benchmarks. Finally, we apply the proposed method to a field case, and validate that the ADMM framework properly works for the actual field applications. A novel framework for surface network optimization was developed using the distributed agent optimization algorithm. The proposed framework provides superior computational efficiency for large- scale network optimization problems compared with existing benchmark methods. It enables more efficient and frequent decision-making of large-scale petroleum field management to maximize the hydrocarbon production subject to numerous system constraints.

More from: IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Research Article
  • 10.1109/tpami.2025.3630635
Towards Visual Grounding: A Survey.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Linhui Xiao + 4 more

  • Research Article
  • 10.1109/tpami.2025.3630339
DELTA: Deep Low-Rank Tensor Representation for Multi-Dimensional Data Recovery.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Guo-Wei Yang + 4 more

  • Research Article
  • 10.1109/tpami.2025.3630577
Variational Bayesian Semi-supervised Keyword Extraction.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Yaofang Hu + 3 more

  • Research Article
  • 10.1109/tpami.2025.3630505
Large-scale Logo Detection.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Sujuan Hou + 6 more

  • Research Article
  • 10.1109/tpami.2025.3630673
A Survey of Graph Neural Networks in Real World: Imbalance, Noise, Privacy and OOD Challenges.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Wei Ju + 12 more

  • Research Article
  • 10.1109/tpami.2025.3630317
Large-Scale Omnidirectional Person Positioning.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Lu Yang + 5 more

  • Research Article
  • 10.1109/tpami.2025.3630242
SPAN: Learning Similarity between Scene Graphs and Images with Transformers.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Yuren Cong + 3 more

  • Research Article
  • 10.1109/tpami.2025.3630185
Sparse-PGD: A Unified Framework for Sparse Adversarial Perturbations Generation.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Xuyang Zhong + 1 more

  • Research Article
  • 10.1109/tpami.2025.3630605
Graph Quality Matters on Revealing the Semantics behind the Data in Physical World.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Jielong Yan + 3 more

  • Research Article
  • 10.1109/tpami.2025.3630209
Dynamic Bit-Wise Semantic Transformer Hashing for Multi-Modal Retrieval.
  • Nov 7, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Wentao Tan + 6 more
