Abstract

Graph learning (GL) aims to uncover direct relationships between the nodes of a network and thus infer the graph topology from data. Many GL algorithms proposed in the graph signal processing literature rely on the smoothness of graph signals over the learned graph. However, input graph signals may be contaminated by outliers, for example due to sensor failures or temporarily faulty information records, and existing techniques are highly vulnerable to such corruption. The goal is therefore to infer a graph topology that is as insensitive as possible to this kind of data corruption. To this end, exploiting the sparse nature of outlier data, we propose a new approach for robustifying GL algorithms by incorporating an L1-norm or squared L1-norm term into the objective function of smoothness-based GL methods, which yields a non-convex minimization problem. A novel iterative minimization method is introduced to solve the resulting problem, and the convergence of the algorithm is established despite its non-convex nature. In simulations, the high performance of the proposed algorithm is demonstrated in the presence of a considerably large amount of outliers.
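The abstract does not spell out the paper's algorithm, but the general idea of pairing a graph-smoothness term with an L1 outlier penalty can be sketched as follows. Assuming signals sit in the rows/columns of a matrix `X` and a (here fixed) graph Laplacian `L`, a sparse outlier matrix `O` can be estimated by proximal gradient descent (ISTA) on `tr((X-O)^T L (X-O)) + lam * ||O||_1`; the function name, the fixed-Laplacian simplification, and the choice of ISTA are illustrative assumptions, not the paper's method, which additionally learns the graph itself.

```python
import numpy as np

def soft_threshold(A, tau):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def estimate_outliers(X, L, lam, n_iter=200):
    """Sketch (not the paper's algorithm): estimate a sparse outlier matrix O
    by proximal gradient (ISTA) on
        min_O  tr((X - O)^T L (X - O)) + lam * ||O||_1,
    i.e. the cleaned signals X - O should be smooth on the graph with
    (symmetric, PSD) Laplacian L, while O itself stays sparse.
    """
    # Gradient of the smoothness term in O is -2 L (X - O); its Lipschitz
    # constant is 2 * lambda_max(L), which fixes a safe step size.
    step = 1.0 / (2.0 * np.linalg.eigvalsh(L).max() + 1e-12)
    O = np.zeros_like(X)
    for _ in range(n_iter):
        grad = -2.0 * L @ (X - O)                     # gradient step
        O = soft_threshold(O - step * grad, step * lam)  # l1 proximal step
    return O
```

In the full problem the Laplacian is a variable too, so a scheme like this would alternate the update above with a standard smoothness-based graph update; that alternation is what makes the joint problem non-convex.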
