Abstract

The field of Deep Learning has benefited greatly from the availability of accelerators such as Graphics Processing Units (GPUs) and of open source deep learning frameworks that can exploit these GPUs. As GPUs become faster, a key aspect of system design is being able to supply them with data at a rate that keeps them busy; a balanced system design is therefore important. In this paper, we investigate these issues with experiments on a distributed deep learning system called Phalanx. Phalanx is a data-parallel distributed deep learning system that uses Caffe as its basic learning engine on each node. These nodes run on GPUs and use InfiniBand for communication.
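To make the data-parallel scheme the abstract describes concrete, the following is a minimal sketch of synchronous data-parallel SGD with gradient averaging over an allreduce. The mpi4py transport and the toy linear model are assumptions standing in for Phalanx's InfiniBand communication layer and its per-node Caffe engine; this illustrates the general technique, not the paper's actual implementation.

```python
# Sketch of synchronous data-parallel SGD (assumed setup, not Phalanx's code):
# every rank trains an identical model replica on its own data shard, and
# gradients are summed across ranks with an allreduce before each update.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

# Each rank generates its own synthetic data shard (stand-in for a real input pipeline).
rng = np.random.default_rng(rank)
X = rng.normal(size=(256, 10))
w_true = np.arange(10, dtype=np.float64)
y = X @ w_true + 0.01 * rng.normal(size=256)

w = np.zeros(10)   # identical initial weights on every replica
lr = 0.1

for step in range(100):
    # Local gradient of mean-squared error on this rank's shard
    # (a real system would get this from its per-node learning engine).
    grad = 2.0 * X.T @ (X @ w - y) / len(y)

    # Sum gradients across all ranks over the interconnect, then average,
    # so every replica applies exactly the same update.
    total = np.empty_like(grad)
    comm.Allreduce(grad, total, op=MPI.SUM)
    w -= lr * (total / world)

if rank == 0:
    print("learned weights:", np.round(w, 2))
```

Run with, e.g., `mpirun -n 4 python data_parallel_sketch.py`; the allreduce is where a fast interconnect such as InfiniBand matters, since its cost grows with model size and node count.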
