Abstract

Tree-based machine learning models, such as decision trees and random forests, are low-complexity solutions that provide efficient predictions for a wide range of applications. These models are particularly interesting for energy-constrained platforms since they can be implemented with simple logical operations. Tree-based accelerators are also intrinsically resilient to errors, a property that can be leveraged to boost energy efficiency with approximate computing techniques. The key operations in these models are comparisons to constants, making comparators excellent candidates for approximation. This paper presents C2PAx, a technique for approximating comparisons to constants that reduces the area and energy of tree-based accelerators. The method consists of finding alternative constants that reduce circuit area while preserving prediction performance. It is also shown that the selection of the constant parameters directly influences both hardware complexity and model accuracy, demanding cross-layer optimization. To this end, we extend an existing framework that generates VLSI tree-based accelerators, inserting our approximation proposal so that the constant parameters that maximize energy efficiency can be selected at the cost of minor accuracy drops. Simulation results demonstrate that C2PAx outperforms the Don't Care logic approximation technique when accuracy and energy are jointly considered. C2PAx trades accuracy for significant reductions in VLSI area, power, delay, and energy consumption compared to precise models.
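To make the idea of "finding alternative constants that reduce circuit area" concrete, the sketch below shows one plausible selection heuristic. It is an illustration only, not the paper's actual C2PAx algorithm: it assumes that the popcount (number of set bits) of an integer threshold is a rough proxy for the hardware cost of a compare-to-constant circuit, and it ignores the accuracy side of the cross-layer optimization, which the real method co-optimizes with a model-evaluation loop.

```python
def popcount(x: int) -> int:
    # Number of set bits in the constant; assumed here as a crude
    # proxy for comparator area, since zero bits can simplify the
    # compare-to-constant logic during synthesis.
    return bin(x).count("1")

def approximate_constant(c: int, max_delta: int) -> int:
    """Return a constant within +/- max_delta of c that minimizes the
    assumed hardware-cost proxy (popcount), breaking ties in favor of
    the candidate closest to the original threshold c."""
    best = c
    for cand in range(max(c - max_delta, 0), c + max_delta + 1):
        # Lexicographic comparison: lower popcount first, then
        # smaller deviation from the original constant.
        if (popcount(cand), abs(cand - c)) < (popcount(best), abs(best - c)):
            best = cand
    return best

# Example: a decision-node threshold of 127 (0b1111111, seven set bits)
# can be shifted to 128 (0b10000000, one set bit) if a deviation of
# up to 2 is tolerated by the model's accuracy budget.
print(approximate_constant(127, 2))  # → 128
```

In a full cross-layer flow, each candidate constant would additionally be scored by re-evaluating the tree model on a validation set, and only candidates whose accuracy drop stays within a user-defined budget would be kept.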
