Abstract

In this paper, we propose a real-time hardware naive Bayes classifier (NBC) implemented on a field-programmable gate array (FPGA). We first use a logarithm transformation based on a look-up table and a float-to-fixed-point process to simplify the calculations in the naive Bayes classification algorithm. These methods completely eliminate floating-point multiplication and division operations. Based on the simplified algorithm, we design a hardware architecture that includes both a training part and an inference part. A novel logarithm look-up table with very few entries works together with a shifter to compute the logarithm of any number. The accelerator contains several processing element (PE) arrays in which all PEs run in parallel, which speeds up the classification process remarkably. The experiments show that the proposed accelerator has much better real-time efficiency than a general-purpose processor, several hardware Bayes classifiers, and convolutional neural network (CNN) accelerators. It outperforms the NBC and semi-NBC accelerators and consumes far fewer on-chip resources than many CNN accelerators: its utilization of LUTs, FFs, and BRAMs is only 10%, 0.05%, and 2% of that of CNN accelerators on average. Experimental results over five datasets of different magnitudes show that the accelerator has almost no loss of classification accuracy compared with an ARM Cortex-A9 processor; the deviation in classification accuracy is only 0.39% on average. Moreover, it improves the performance of the training phase and the inference phase by factors of about 7.9×10^4 and 8.3×10^4 on average, respectively.
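To illustrate the core simplification the abstract describes, the sketch below shows log-domain naive Bayes inference with fixed-point logarithms: products and divisions of floating-point probabilities become integer additions. All model values, the scale factor, and the toy feature layout are hypothetical illustrations, not the paper's actual parameters.

```python
import math

SCALE = 1 << 8  # fixed-point scale factor (an assumption, not from the paper)

def to_fixed_log(p):
    """Quantize log2(p) to a signed fixed-point integer."""
    return int(round(math.log2(p) * SCALE))

# Toy model: 2 classes, 2 binary features (hypothetical trained probabilities)
log_prior = [to_fixed_log(0.6), to_fixed_log(0.4)]
log_lik = [  # log_lik[c][f][v] = log2 P(feature f = v | class c)
    [[to_fixed_log(0.7), to_fixed_log(0.3)],
     [to_fixed_log(0.2), to_fixed_log(0.8)]],
    [[to_fixed_log(0.4), to_fixed_log(0.6)],
     [to_fixed_log(0.9), to_fixed_log(0.1)]],
]

def classify(x):
    # argmax over classes of a sum of fixed-point log terms: additions only,
    # no floating-point multiplication or division at inference time
    scores = [log_prior[c] + sum(log_lik[c][f][x[f]] for f in range(len(x)))
              for c in range(len(log_lik))]
    return scores.index(max(scores))

print(classify([1, 0]))  # → 1
```

In hardware, the per-class additions are independent, which is what makes the parallel PE arrays mentioned in the abstract effective.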

Highlights

  • Artificial Intelligence & Internet of Things (AIOT) [28] has gained wide attention with the rapid development of 5G communication

  • We use a logarithm transformation based on a look-up table (LUT) and a float-to-fixed-point process to simplify the naive Bayes classifier (NBC) algorithm

  • We design an NBC-specific accelerator comprising a controller, a training part and an inference part
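The training part suggested by the highlights and the "counting PE" / "probability LUT" components can be sketched as follows: NBC training reduces to counting class and feature occurrences, and the division in P = count/total becomes a subtraction in the log domain. This is a minimal sketch of the general counting idea; the function names, fixed-point width, and data layout are assumptions, not the paper's design.

```python
import math
from collections import Counter

SCALE = 1 << 8  # assumed fixed-point scale

def fixed_log2(n):
    """Fixed-point log2 of a positive count."""
    return int(round(math.log2(n) * SCALE))

def train_priors(labels):
    """Log-domain class priors: log2(count/total) = log2(count) - log2(total),
    so the probability 'division' costs only a subtraction."""
    total = len(labels)
    return {c: fixed_log2(n) - fixed_log2(total)
            for c, n in Counter(labels).items()}

priors = train_priors(["a", "a", "b", "a"])
print(priors)
```

The same count-log-minus-total-log trick applies to the conditional likelihood tables, which is why counting adders plus a logarithm LUT can replace floating-point dividers entirely.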


Summary

INTRODUCTION

Artificial Intelligence & Internet of Things (AIOT) [28] has gained wide attention with the rapid development of 5G communication. Motivated by the above, we design a hardware NBC accelerator that includes both a training part and an inference part. It completely avoids floating-point multiplications and divisions through shift operations and a logarithm transformation based on a novel logarithm look-up table (LUT). Finally, a set of experiments demonstrates that our design has much better real-time performance than a software NBC, several hardware Bayes classifiers, and CNN accelerators. It has almost no accuracy loss compared with a general-processor implementation and outperforms the NBC and semi-NBC accelerators.
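The combination of a shifter and a small logarithm LUT can be understood as follows: writing x = m·2^k with m in [1, 2), the integer part k comes from the position of the leading one (a shifter in hardware) and log2(m) comes from a table over a few mantissa bits, so the table stays tiny no matter how large x is. The sketch below illustrates this general technique; the table size and fixed-point widths are assumptions, not the paper's actual LUT format.

```python
import math

FRAC_BITS = 8                      # fixed-point fractional bits (assumed)
LUT_BITS = 4                       # index by 4 mantissa bits -> only 16 entries
LOG_LUT = [round(math.log2(1 + i / (1 << LUT_BITS)) * (1 << FRAC_BITS))
           for i in range(1 << LUT_BITS)]

def fixed_log2(x):
    """Approximate log2 of a positive integer as a fixed-point value."""
    k = x.bit_length() - 1         # shifter: position of the leading one bit
    # top LUT_BITS bits of the mantissa below the leading one
    frac = ((x - (1 << k)) << LUT_BITS) >> k
    return (k << FRAC_BITS) + LOG_LUT[frac]

print(fixed_log2(1024) / (1 << FRAC_BITS))  # → 10.0
```

Because the LUT only covers [1, 2), 16 entries suffice here for roughly two fractional digits of accuracy; a hardware table can trade entry count against precision in the same way.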

RELATED WORKS
HARDWARE DESIGN ORIENTED NBC ALGORITHM
LOGARITHM TRANSFORMATION
LOGARITHM LUT
COUNTING ADDER-LUT AND PROBABILITY LUT
COUNTING PE
EXPERIMENTS
Findings
CONCLUSION