Abstract

Scaling up scientific data analysis and machine learning algorithms for data-driven discovery is a grand challenge we face today. Despite the growing need for analysis from science domains that generate ‘Big Data’ from instruments and simulations, building high-performance analytical workflows of data-intensive algorithms has been daunting because: (i) the ‘Big Data’ hardware and software architecture landscape is constantly evolving, (ii) newer architectures impose new programming models, and (iii) the data-parallel kernels of analysis algorithms and their performance facets on different architectures are poorly understood. To address these problems, we have: (i) identified scalable data-parallel kernels of popular data analysis algorithms, (ii) implemented ‘Mini-Apps’ of those kernels using different programming models (e.g., MapReduce, MPI), and (iii) benchmarked and validated the performance of the kernels on diverse architectures. In this paper, we discuss two of those Mini-Apps and show the execution of principal component analysis built as a workflow of the Mini-Apps. We show that Mini-Apps enable scientists to (i) write domain-specific data analysis code that scales on most HPC hardware and (ii) analyze data, often with more than a 10x speed-up, that is 100 times larger than what today's off-the-shelf desktops and workstations can handle.
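To illustrate the kind of data-parallel kernel composition described above, the following is a minimal, hypothetical sketch (not the paper's actual Mini-App code) of a distributed PCA workflow using MPI: each rank holds a shard of the samples, the global mean and scatter matrix are accumulated with reductions, and the eigendecomposition of the resulting covariance is computed locally. The function name, shapes, and data are illustrative assumptions.

```python
# Hypothetical sketch of a data-parallel PCA kernel in the spirit of the
# Mini-App approach: per-rank partial sums are combined with MPI Allreduce,
# then the small d x d covariance is eigendecomposed on every rank.
import numpy as np
from mpi4py import MPI

def distributed_pca(local_rows, n_components, comm=MPI.COMM_WORLD):
    """local_rows: (n_local, d) array holding this rank's shard of samples."""
    n_local, d = local_rows.shape

    # Global sample count and mean via reductions over all ranks.
    n_total = comm.allreduce(n_local, op=MPI.SUM)
    local_sum = local_rows.sum(axis=0)
    global_sum = np.empty_like(local_sum)
    comm.Allreduce(local_sum, global_sum, op=MPI.SUM)
    mean = global_sum / n_total

    # Accumulate the scatter matrix of centered rows, then reduce it.
    centered = local_rows - mean
    local_scatter = centered.T @ centered
    global_scatter = np.empty_like(local_scatter)
    comm.Allreduce(local_scatter, global_scatter, op=MPI.SUM)
    cov = global_scatter / (n_total - 1)

    # Covariance is symmetric, so eigh applies; keep the top components.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[order], eigvecs[:, order], mean

if __name__ == "__main__":
    rng = np.random.default_rng(MPI.COMM_WORLD.Get_rank())
    shard = rng.normal(size=(1000, 8))          # this rank's synthetic data shard
    vals, vecs, mu = distributed_pca(shard, n_components=2)
    if MPI.COMM_WORLD.Get_rank() == 0:
        print("top eigenvalues:", vals)
```

The communication cost here depends only on the feature dimension d, not on the number of samples, which is why such covariance-style kernels scale well when data is partitioned by rows across ranks.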
