Abstract

A novel model, the Self-Training-Transductive-Learning Broad Learning System (STTL-BLS), is proposed for image classification. The model consists of two key blocks: the Feature Block (FB) and the Enhancement Block (EB). The FB uses the Proportion of Large Values Attention (PLVA) technique and an encoder for feature extraction, and multiple FBs are cascaded to learn discriminative features. The EB strengthens feature learning and prevents underfitting on complex datasets. In addition, an architecture combining characteristics of the Broad Learning System (BLS) with gradient descent is designed for STTL-BLS, enabling the model to leverage the advantages of both BLS and Convolutional Neural Networks (CNNs). Moreover, a training algorithm (STTL) combining self-training and transductive learning is presented to improve the model's generalization ability. Experimental results demonstrate that the proposed model outperforms all compared BLS variants in accuracy and performs comparably to, or even better than, deep networks: STTL-BLS improves average accuracy by 14.82 percentage points over the other models on small-scale datasets and by 12.95 percentage points on large-scale datasets. Notably, the proposed model has low time complexity, achieving the shortest testing time among all compared models on the small-scale datasets: its average testing time is 46.4 s shorter than that of the other models. STTL-BLS is thus a valuable additional solution for image classification on both small- and large-scale datasets. The source code for this paper can be accessed at https://github.com/threedteam/sttl_bls.
