Abstract
Spurred by the ability of causal structure learning (CSL) to reveal cause–effect relationships, significant research effort has gone into improving the scalability of CSL algorithms across artificial intelligence applications. Far less attention has been paid to their stability and interpretability. This work therefore proposes a self-correction mechanism that embeds domain knowledge into CSL, improving stability and accuracy even in low-dimensional but high-noise settings by guaranteeing a meaningful output. The proposed algorithm is benchmarked against several classic and influential CSL algorithms on synthetic and real-world datasets. It achieves superior accuracy on the synthetic data, and on the real-world data it interprets the learned causal structure as human preferences for investment, coinciding with domain-expert analysis.
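To make the idea concrete, the sketch below is a hypothetical illustration (not the paper's actual algorithm) of one common way domain knowledge can serve as a self-correction step on a learned causal graph: expert-specified required and forbidden edges overwrite the data-driven adjacency matrix. The function name, argument names, and toy data are all assumptions made for this sketch.

import numpy as np

def self_correct(adj, required_edges, forbidden_edges):
    # Hypothetical self-correction pass: enforce expert knowledge on a
    # learned causal adjacency matrix, where adj[i, j] == 1 means an
    # edge i -> j was learned from data.
    corrected = adj.copy()
    for i, j in forbidden_edges:
        corrected[i, j] = 0   # remove edges the expert rules out
    for i, j in required_edges:
        corrected[i, j] = 1   # add edges the expert insists on
        corrected[j, i] = 0   # and fix their orientation
    return corrected

# Toy usage: 3 variables, with one spurious edge (2 -> 0) learned from
# noisy data that the expert forbids.
learned = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]])
print(self_correct(learned,
                   required_edges={(0, 1)},
                   forbidden_edges={(2, 0)}))

In this framing, the correction guarantees the output never contradicts the supplied domain knowledge, which is one plausible reading of the abstract's "guaranteeing a meaningful output" even when noisy data misleads the purely data-driven search.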