Abstract

Domain-specific constraints can be exploited to implement compiler optimizations that are not otherwise feasible. Compilers for neural network learning algorithms can achieve near-optimal colocality of data and processes and near-optimal load balancing over processors, even for dynamically irregular problems. This is impossible for general programs, but restricting programs to the neural algorithm domain allows domain-specific properties to be exploited: the operations performed by neural algorithms are broadcasts, reductions, and object-local operations only; the load distribution is regular with respect to the (perhaps irregular) network topology; and changes of network topology occur only occasionally. A language, compilation techniques, and a compiler implementation on the MasPar MP-1 are described, and quantitative results for the effects of the various optimizations used in the compiler are shown. Conservative experiments with weight-pruning algorithms yield performance improvements of 27 percent from load balancing and 195 percent from data locality, both compared to unoptimized versions. Two other optimizations, connection allocation and selecting the number of replicates, speed programs up by about 50 percent and 100 percent, respectively. This work can be viewed as a case study in exploiting domain-specific information; some of the principles presented here may apply to other domains as well.
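The restriction that the abstract names (broadcasts, reductions, and object-local operations as the only operation classes) can be made concrete with a small illustration. The sketch below is not the paper's language or compiler; it is a minimal Python model, with hypothetical names (`Node`, `Connection`, `forward_step`), of how one forward step of a neural algorithm decomposes into exactly these three operation classes.

```python
# Illustrative sketch only: the three operation classes named in the abstract
# (broadcast, reduction, object-local), modeled on a toy connection/node
# structure. All names here are hypothetical, not taken from the paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:
    weight: float
    source_output: float = 0.0   # value broadcast from the source node

@dataclass
class Node:
    incoming: List[Connection] = field(default_factory=list)
    output: float = 0.0

def forward_step(nodes_in: List[Node], nodes_out: List[Node]) -> None:
    # Broadcast: each source node's output is copied onto the connections
    # that lead out of it (here, connection i of every destination node).
    for i, src in enumerate(nodes_in):
        for dst in nodes_out:
            dst.incoming[i].source_output = src.output

    # Object-local operation on each connection (weight * input), followed by
    # a reduction (sum) over each node's incoming connections, followed by an
    # object-local operation on the node (the activation function).
    for dst in nodes_out:
        total = sum(c.weight * c.source_output for c in dst.incoming)  # reduction
        dst.output = max(0.0, total)                                   # object-local

# Usage (hypothetical): a fully connected 3-to-2 layer.
nodes_in = [Node(output=v) for v in (0.5, -1.0, 2.0)]
nodes_out = [Node(incoming=[Connection(weight=0.1) for _ in nodes_in]) for _ in range(2)]
forward_step(nodes_in, nodes_out)
```

Because every step is one of these three patterns and the load is regular with respect to the network topology, a compiler can, in principle, place connection and node objects so that broadcasts and reductions stay local and processors stay balanced, which is the property the paper exploits.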
