Abstract

Incremental extreme learning machine (IELM), convex incremental extreme learning machine (C-IELM), and other variants of the extreme learning machine (ELM) algorithm provide low-computational-complexity techniques for training single-hidden-layer feedforward networks (SLFNs). However, the original IELM and C-IELM consider only the faultless network situation. This paper investigates the performance of IELM and C-IELM under multiplicative weight noise, where both the input weights and the output weights are contaminated by noise. In addition, we propose two incremental fault-tolerant algorithms, namely weight deviation tolerant IELM (WDT-IELM) and weight deviation tolerant convex IELM (WDTC-IELM). The performance of the two proposed algorithms is better than that of the two original ELM algorithms. Moreover, the convergence properties of the proposed algorithms are presented.
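To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of the standard IELM training rule, together with a simulation of the multiplicative weight noise model described above, in which each weight w is perturbed to w(1 + δ) with zero-mean random δ. The data set, activation function, node count, and noise level σ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumption: the paper's experiments differ).
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ielm_train(X, y, n_nodes=50):
    """IELM: add random hidden nodes one at a time.

    Each step draws a random input weight a and bias b, then sets the
    new node's output weight to the least-squares optimum for the
    current residual e (the incremental rule beta = <e, h> / <h, h>).
    """
    e = y.copy()                       # residual error
    A, B, beta = [], [], []
    for _ in range(n_nodes):
        a = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = sigmoid(X @ a + b)         # new node's output on the data
        bk = (e @ h) / (h @ h)         # optimal output weight
        e = e - bk * h                 # update the residual
        A.append(a); B.append(b); beta.append(bk)
    return np.array(A), np.array(B), np.array(beta)

def ielm_predict(X, A, B, beta):
    H = sigmoid(X @ A.T + B)           # hidden-layer outputs
    return H @ beta

def noisy_predict(X, A, B, beta, sigma):
    """Multiplicative weight noise: each weight w becomes w * (1 + delta),
    with delta drawn i.i.d. from N(0, sigma^2), applied to both the
    input weights and the output weights."""
    An = A * (1 + sigma * rng.standard_normal(A.shape))
    bn = beta * (1 + sigma * rng.standard_normal(beta.shape))
    H = sigmoid(X @ An.T + B)
    return H @ bn

A, B, beta = ielm_train(X, y)
mse_clean = np.mean((ielm_predict(X, A, B, beta) - y) ** 2)
mse_noisy = np.mean((noisy_predict(X, A, B, beta, sigma=0.2) - y) ** 2)
print(mse_clean, mse_noisy)   # weight noise typically degrades the fit
```

Because the plain IELM objective ignores weight deviations, its trained output weights can be large and thus sensitive to this perturbation; the fault-tolerant variants modify the training objective to account for the noise statistics.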
