Abstract

Symbolic regression (SR) aims to uncover the underlying mathematical expressions that describe a given set of observed data. Existing SR methods fall into two categories: learning-from-scratch and learning-with-experience. Compared with learning-from-scratch, learning-with-experience achieves results comparable to several benchmarks while requiring significantly less time to obtain expressions. However, learning-with-experience models generalize poorly to unseen data distributions and lack any rectification tool beyond constant optimization, which offers limited performance. In this study, we propose a Symbolic Network-based Rectifiable Learning Framework (SNR) with the ability to correct errors. SNR adopts a Symbolic Network (SymNet) to represent an expression, and the encoding of SymNet provides supervised information, together with numerous self-generated expressions, for training a policy net (PolicyNet). The trained PolicyNet offers prior knowledge to guide effective searches. Incorrectly predicted expressions are then revised via a rectification mechanism, which endows SNR with broader applicability. Experimental results demonstrate that the proposed method achieves the highest average coefficient of determination on self-generated datasets compared with other state-of-the-art methods and yields more accurate results on public datasets.
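
The evaluation metric reported above is the coefficient of determination (R²). As a point of reference only, the minimal Python sketch below computes R² between ground-truth targets and the predictions of a candidate expression; the toy data and the function name are illustrative assumptions and are not taken from the paper.

    import numpy as np

    def r2_score(y_true, y_pred):
        """Coefficient of determination: 1 - SS_res / SS_tot."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
        return 1.0 - ss_res / ss_tot

    # Toy example (illustrative only): compare a slightly inaccurate
    # recovered expression against the true target y = x**2 + x.
    x = np.linspace(-1.0, 1.0, 50)
    y_true = x**2 + x
    y_pred = x**2 + 0.95 * x
    print(r2_score(y_true, y_pred))  # close to, but less than, 1.0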
