Abstract

As the new generation of smart sensors evolves towards high-sampling acquisition systems, the amount of information to be handled by learning algorithms has been increasing. Graphics Processing Unit (GPU) architectures provide a greener, low-energy alternative for mining big data, harnessing the power of thousands of processing cores in a single chip and opening a wide range of possible applications. Here, we design a novel evolutionary computing GPU parallel function evaluation mechanism, in which different parts of a time series are evaluated by different processing threads. Applying a metaheuristic fuzzy model to low-frequency data for household electricity demand forecasting, the results suggest that the proposed GPU learning strategy is scalable as the number of training rounds increases.
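The core idea of the evaluation mechanism, partitioning a time series so that each processing thread scores its own segment and the partial results are combined into one fitness value, can be illustrated with a small sketch. The paper targets GPU threads; this hypothetical Python analogue uses a thread pool as a stand-in, with a simple linear candidate model and a squared-error fitness standing in for the fuzzy model, all of which are assumptions for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sse(chunk, params):
    """Squared-error fitness for one chunk of the series (one 'thread')."""
    a, b = params  # hypothetical linear candidate: y_hat = a * t + b
    return sum((y - (a * t + b)) ** 2 for t, y in chunk)

def evaluate(series, params, n_workers=4):
    """Split the series into n_workers chunks, score them in parallel,
    and sum the partial fitness values into one scalar."""
    indexed = list(enumerate(series))           # keep time index with value
    size = -(-len(indexed) // n_workers)        # ceiling division
    chunks = [indexed[i:i + size] for i in range(0, len(indexed), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(lambda c: partial_sse(c, params), chunks)
        return sum(partials)

# Usage: a synthetic demand series that the candidate fits exactly.
series = [2.0 * t + 1.0 for t in range(100)]
print(evaluate(series, (2.0, 1.0)))  # perfect candidate -> 0.0
```

On a GPU the same decomposition would map each chunk to a CUDA thread or block, with the partial errors combined by a reduction instead of a Python `sum`.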
