Abstract

Graph learning (GL) is a tool for discovering direct relationships between the nodes of a network and, hence, for inferring the graph topology from data. Many GL algorithms have recently been proposed in the field of graph signal processing, based on the smoothness of graph signals on the learned graph. However, the input graph signals may be contaminated by outliers, for example due to sensor failures or temporarily faulty information records, and existing techniques are very vulnerable to such outliers. The goal is therefore to infer a graph topology that is as insensitive as possible to this kind of data corruption. To this end, exploiting the sparse nature of outlier data, we propose a new approach for robustifying GL algorithms by incorporating L1-norm or squared-L1-norm terms into the objective function of smoothness-based GL methods, which yields a non-convex minimization problem. A novel iterative minimization method is introduced to solve the resulting non-convex problem, and its convergence is established despite the non-convexity. In simulations, the high performance of the proposed algorithm is demonstrated in the presence of a considerably large number of outliers.
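The abstract's idea can be illustrated with a minimal sketch: model the observed signals as clean signals plus a sparse outlier matrix, fit edge weights to the smoothness of the cleaned signals, and shrink the outlier estimate with the L1 proximal operator (soft-thresholding). This is not the paper's algorithm; the function names, the exponential weight heuristic, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(Z, tau):
    # Proximal operator of the L1 norm: shrinks entries toward zero,
    # which promotes a sparse outlier estimate.
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def robust_graph_learn(X, lam=0.5, step=0.01, n_iter=50):
    """Illustrative alternating scheme (not the paper's method):
    (1) update edge weights from the smoothness of the outlier-cleaned
        signals, (2) take a proximal-gradient step on the sparse
        outlier matrix O under the penalty lam * ||O||_1."""
    n, m = X.shape              # n nodes, m graph signals
    O = np.zeros_like(X)        # sparse outlier estimate
    for _ in range(n_iter):
        S = X - O               # outlier-cleaned signals
        # Pairwise squared distances between the nodes' signal rows.
        D = np.square(S[:, None, :] - S[None, :, :]).sum(axis=2)
        # Heuristic weight update: large distance -> small edge weight
        # (a Gaussian kernel with a data-driven bandwidth).
        W = np.exp(-D / (np.median(D) + 1e-12))
        np.fill_diagonal(W, 0.0)
        # Combinatorial graph Laplacian of the current weights.
        L = np.diag(W.sum(axis=1)) - W
        # Gradient of the smoothness term tr(S^T L S) w.r.t. O is
        # -2 L S; descend, then apply the L1 proximal step.
        O = soft_threshold(O + step * 2.0 * (L @ S), lam * step)
    return W, O
```

The alternation mirrors the abstract's iterative minimization: the smoothness term couples the weights to the cleaned signals, while the L1 term keeps the outlier matrix sparse, so large isolated corruptions are absorbed by O rather than distorting the learned topology.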

