Abstract

Training machine learning models on edge devices is constrained by power consumption and computing cost. This paper introduces a hardware-oriented training method called ExtraFerns for a unique subset of decision tree ensembles, which significantly decreases memory access and optimizes each tree in parallel. ExtraFerns combines the advantages of extraTrees and randomFerns: like extraTrees, it generates nodes by randomly selecting attributes and randomly generating thresholds; like randomFerns, it builds ferns, i.e., decision trees whose nodes are identical at each depth. In contrast to other ensemble methods, which rely on greedy optimization, ExtraFerns attempts global optimization of each fern. Experimental results show that ExtraFerns requires only 4.3% and 4.1% of the memory access for model training, with accuracy drops of only 3.0% and 1.2% compared with randomForest and extraTrees, respectively. This paper also proposes applying lightweight random projection to ExtraFerns as a preprocessing step, which improved accuracy by up to 2.0% on image datasets.
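To make the fern structure concrete, the sketch below (Python/NumPy, written for this summary rather than taken from the paper) builds an ensemble of ferns with randomly chosen attributes and thresholds, preceded by a plain Gaussian random projection as one possible form of the "lightweight random projection" preprocessing. All function names are hypothetical, and the paper's global optimization of each fern and its hardware-oriented memory layout are omitted.

```python
import numpy as np

def random_projection(X, out_dim, rng):
    # Plain Gaussian random projection (an assumed form of the paper's
    # "lightweight random projection" preprocessing step).
    R = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
    return X @ R

def leaf_codes(X, attrs, thresholds):
    # A fern applies the SAME (attribute, threshold) test at every node
    # of a given depth, so a sample's leaf index is simply the binary
    # code of its `depth` test outcomes.
    bits = (X[:, attrs] > thresholds).astype(int)
    return bits @ (1 << np.arange(len(attrs)))

def build_fern(X, y, depth, n_classes, rng):
    # Attributes and thresholds are drawn at random, as in extraTrees;
    # the shared per-level tests come from randomFerns. The paper's
    # global optimization of each fern is not reproduced here.
    attrs = rng.integers(0, X.shape[1], size=depth)
    lo, hi = X[:, attrs].min(axis=0), X[:, attrs].max(axis=0)
    thresholds = rng.uniform(lo, hi)
    hist = np.ones((2 ** depth, n_classes))  # per-leaf class counts, add-one smoothed
    np.add.at(hist, (leaf_codes(X, attrs, thresholds), y), 1.0)
    return attrs, thresholds, hist / hist.sum(axis=1, keepdims=True)

def predict(ferns, X):
    # Average the leaf class distributions over all ferns, then argmax.
    p = sum(probs[leaf_codes(X, a, t)] for a, t, probs in ferns)
    return p.argmax(axis=1)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xp = random_projection(X, 8, rng)
ferns = [build_fern(Xp, y, depth=4, n_classes=2, rng=rng) for _ in range(20)]
print("training accuracy:", (predict(ferns, Xp) == y).mean())
```

Because every node at a given depth shares one test, a fern of depth D stores only D (attribute, threshold) pairs instead of 2^D - 1 distinct nodes, which is what enables the reduced memory access the abstract reports.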
