Abstract

Inferring causal relationships is a key task in data science. Learning causal structures in the form of directed acyclic graphs (DAGs) is a widely adopted approach to uncovering causal relationships; nonetheless, it is challenging owing to the exponential search space. A recent approach formulates structure learning as a continuous constrained optimization problem that learns a causal relation matrix, and nonlinear variants of it can uncover nonlinear causal relationships. However, the nonlinear variant that includes an ℓ1 penalty in its optimization objective may not effectively eliminate false predictions. In this paper, we investigate two defects of the model: the ℓ1 penalty cannot effectively make the relation matrix sparse and therefore introduces false predictions, and the acyclicity constraint cannot detect large cycles within the margin of identification error and therefore cannot guarantee that the inferred causal relationships are acyclic. Based on a theoretical and empirical analysis of these defects, we propose the normalized ℓ1 penalty, which replaces the original ℓ1 penalty with a normalized first-order matrix norm, and a constraint based on eigenvalues that substitutes for the original acyclicity constraint. We compare the resulting model, NEC, with three baseline models and show considerable performance improvement, and we conduct further experiments to demonstrate the effectiveness of the normalized ℓ1 penalty and the eigenvalue constraint.
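To make the two constraints concrete, below is a minimal sketch contrasting the standard NOTEARS-style acyclicity measure, h(W) = tr(exp(W ∘ W)) − d, with an eigenvalue-based check via the spectral radius of W ∘ W (which is zero exactly when the weighted graph is acyclic). The spectral-radius form is an illustrative assumption of the kind of eigenvalue constraint the paper proposes, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W: np.ndarray) -> float:
    """Standard continuous acyclicity measure from the NOTEARS line of work:
    h(W) = tr(exp(W * W)) - d, where * is the elementwise (Hadamard) product.
    h(W) == 0 exactly when the graph encoded by W has no directed cycles."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

def spectral_radius(W: np.ndarray) -> float:
    """Largest absolute eigenvalue of W * W. For the nonnegative matrix
    W * W this is zero iff the graph is acyclic -- an eigenvalue-based
    alternative check (illustrative; the paper's exact constraint may differ)."""
    return float(np.max(np.abs(np.linalg.eigvals(W * W))))

# An acyclic graph (strictly upper-triangular weights) vs. a 2-cycle.
W_dag = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
W_cyc = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
```

For `W_dag`, both measures are zero; for `W_cyc`, both are strictly positive, so either can serve as an equality constraint driving the learned relation matrix toward a DAG.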

