Abstract

The Convolutional Tsetlin Machine (CTM), a variant of the Tsetlin Machine (TM), represents patterns as straightforward AND-rules to address the high computational complexity and lack of interpretability of Convolutional Neural Networks (CNNs). The CTM has shown competitive performance on the MNIST, Fashion-MNIST, and Kuzushiji-MNIST pattern classification benchmarks, both in terms of accuracy and memory footprint. In this paper, we propose the Convolutional Regression Tsetlin Machine (C-RTM), which extends the CTM to support continuous-output problems in image analysis. The C-RTM identifies patterns in images using the convolution operation, as in the CTM, and then maps the identified patterns to a real-valued output, as in the Regression Tsetlin Machine (RTM). The C-RTM thus unifies the two approaches. We evaluated the performance of the C-RTM on 72 artificial datasets, with and without noise in the training data. Our empirical results show that the C-RTM performs competitively against two standard CNNs. Additionally, the interpretability of the sub-patterns identified by C-RTM clauses is analyzed and discussed.
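To make the described pipeline concrete, the following Python sketch outlines C-RTM inference at a conceptual level: clauses are AND-rules over binarized patch features, a clause fires if it matches at least one patch position (as in the CTM), and the resulting vote count is mapped to the output range (as in the RTM). This is a simplified illustration under stated assumptions, not the paper's implementation; the names extract_patches, crtm_predict, threshold, y_min, and y_max are hypothetical, and training via Tsetlin automata feedback is omitted entirely.

import numpy as np

# Conceptual sketch of C-RTM inference (not the authors' implementation).
# Assumes binarized 2-D input images and fixed, already-learned clauses.

def extract_patches(image, patch_size):
    """Slide a window over a binarized 2-D image and return flattened patches."""
    h, w = image.shape
    ph, pw = patch_size
    patches = []
    for i in range(h - ph + 1):
        for j in range(w - pw + 1):
            patches.append(image[i:i + ph, j:j + pw].ravel())
    return np.array(patches)

def clause_output(patch, include_mask, negate_mask):
    """A clause is an AND over included literals (patch features and their negations)."""
    literals = np.concatenate([patch, 1 - patch])
    included = np.concatenate([include_mask, negate_mask])
    return int(np.all(literals[included == 1] == 1))

def crtm_predict(image, clauses, patch_size, y_min, y_max, threshold):
    """Count clauses that fire on at least one patch position (CTM-style convolution)
    and map the vote count to a real value in [y_min, y_max] (RTM-style regression)."""
    patches = extract_patches(image, patch_size)
    votes = 0
    for include_mask, negate_mask in clauses:
        fired = any(clause_output(p, include_mask, negate_mask) for p in patches)
        votes += int(fired)
    # Simplified RTM-style normalization: votes relative to threshold T scale the output range.
    return y_min + (y_max - y_min) * min(votes, threshold) / threshold

# Hypothetical usage: predict on a random 8x8 binary image with two random clauses.
rng = np.random.default_rng(0)
img = rng.integers(0, 2, size=(8, 8))
clauses = [(rng.integers(0, 2, 9), rng.integers(0, 2, 9)) for _ in range(2)]
print(crtm_predict(img, clauses, patch_size=(3, 3), y_min=0.0, y_max=1.0, threshold=2))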
