Abstract

This paper investigates the speed improvements available when using a graphics processing unit (GPU) for the evaluation of individuals in a genetic programming (GP) environment. An existing GP system is modified to enable parallel evaluation of individuals on a GPU device. Several issues related to implementing GP on a GPU are discussed, including how to perform tree-based GP on a device without recursion support, as well as the effect that proper memory layout can have on speed increases when using CUDA-enabled NVIDIA GPU devices. The specific GP implementation is designed to evolve stock trading strategies using technical analysis indicators. The second goal of this research is to investigate the possible improvement in performance when training individuals on a larger number of stocks and training days. This increased training size (nearly 100,000 training points) is enabled by the speedups realized through GPU evaluation. Several scenarios were used to test various speed optimizations of GP evaluation on the GPU device, with a peak speedup factor of over 600 compared to sequential evaluation on a 2.4 GHz CPU. It is also found that increasing the number of stocks and the length of the training period can result in higher out-of-training testing profitability.
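As a rough illustration of the recursion-free tree evaluation mentioned above, one common approach is to flatten each GP tree into postfix (reverse Polish) order on the host and evaluate it on the device with an explicit stack, one thread per training point. The kernel below is a minimal sketch under those assumptions; the opcode set, the names (`evalIndividual`, `Instr`), and the variable-major input layout are illustrative, not taken from the paper's implementation.

```cuda
// Illustrative sketch only: evaluate a postfix-linearized GP tree without
// recursion, one thread per training point. Opcodes and layout are assumptions.
#include <cuda_runtime.h>

enum Op { OP_CONST, OP_VAR, OP_ADD, OP_SUB, OP_MUL, OP_GT };

struct Instr {
    int   op;     // opcode from Op
    float value;  // constant value, or variable index when op == OP_VAR
};

#define MAX_STACK 64

__global__ void evalIndividual(const Instr *prog, int progLen,
                               const float *inputs, int numVars,
                               float *results, int numPoints)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= numPoints) return;

    float stack[MAX_STACK];
    int sp = 0;

    for (int i = 0; i < progLen; ++i) {
        Instr ins = prog[i];
        switch (ins.op) {
        case OP_CONST:
            stack[sp++] = ins.value;
            break;
        case OP_VAR:
            // Variable-major layout: consecutive threads read consecutive
            // addresses, giving coalesced access (the memory-layout concern
            // the abstract refers to).
            stack[sp++] = inputs[(int)ins.value * numPoints + tid];
            break;
        case OP_ADD: { float b = stack[--sp]; stack[sp - 1] += b; break; }
        case OP_SUB: { float b = stack[--sp]; stack[sp - 1] -= b; break; }
        case OP_MUL: { float b = stack[--sp]; stack[sp - 1] *= b; break; }
        case OP_GT:  { float b = stack[--sp];
                       stack[sp - 1] = (stack[sp - 1] > b) ? 1.0f : 0.0f; break; }
        }
    }
    results[tid] = stack[sp - 1];  // value at the root of the tree
}
```

Since every thread executes the same instruction stream, the program can live in constant or shared memory, and divergence is limited to the per-opcode branches; this is one plausible way to keep many thousands of training points busy on the device, not a description of the system evaluated in the paper.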
