The broad learning system (BLS) is an efficient incremental learning algorithm, but it has some drawbacks: the number of hidden-layer nodes must be tuned manually during training, and the two random mappings introduce considerable uncertainty. To address these problems, a double-kernel broad learning system (DKBLS) is proposed that exploits the optimization ability of kernel functions and eliminates the uncertainty of random mapping through an additive kernel strategy. To further reduce the computational cost and training time of DKBLS, a double-kernel Bayesian approximation broad learning system with dropout (Dropout-DKBLS) is also proposed. Ablation experiments show that the output accuracy of Dropout-DKBLS does not decrease even when nodes are dropped. Function approximation experiments show that DKBLS and Dropout-DKBLS are robust and can accurately predict noisy data. Regression and classification experiments on multiple datasets compare the proposed methods with recent kernel-based learning methods; the results show that both DKBLS and Dropout-DKBLS achieve good regression and classification performance. A further comparison of training times shows that Dropout-DKBLS reduces the computational cost while maintaining output accuracy.
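The abstract does not give the DKBLS formulation itself, but the core idea of replacing random feature/enhancement mappings with an additive combination of two kernels can be sketched as follows. This is a minimal illustrative example, not the authors' method: the specific kernels (RBF and polynomial), the ridge-regression output layer, and all names such as `AdditiveKernelRidge` are assumptions chosen for demonstration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel from pairwise squared Euclidean distances
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    # Polynomial kernel used as the second member of the additive kernel
    return (X @ Y.T + c) ** degree

class AdditiveKernelRidge:
    """Illustrative additive (double) kernel output layer solved by ridge
    regression; a stand-in for the kernel-mapping idea, not DKBLS itself."""

    def __init__(self, gamma=1.0, degree=2, lam=1e-3):
        self.gamma, self.degree, self.lam = gamma, degree, lam

    def fit(self, X, Y):
        self.X_train = X
        # Additive kernel: sum of two base kernels on the training data
        K = rbf_kernel(X, X, self.gamma) + poly_kernel(X, X, self.degree)
        n = K.shape[0]
        # Output weights via the regularised solution (K + lam*I)^{-1} Y
        self.W = np.linalg.solve(K + self.lam * np.eye(n), Y)
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X_train, self.gamma) \
            + poly_kernel(X, self.X_train, self.degree)
        return K @ self.W

# Toy usage: regress a noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))
model = AdditiveKernelRidge(gamma=0.5, lam=1e-2).fit(X, Y)
print(model.predict(np.array([[0.0], [1.5]])))
```

Because the feature mapping here is deterministic (two fixed kernels rather than two random projections), repeated training on the same data yields the same output weights, which is the kind of uncertainty reduction the abstract attributes to the additive kernel strategy.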