Abstract

The effective utilization at scale of complex machine learning (ML) techniques for HEP use cases poses several technological challenges, most notably in the implementation of dedicated end-to-end data pipelines. A solution to these challenges is presented, which enables training neural network classifiers with tools from the Big Data and data science ecosystems, integrated with the software and platforms common in the HEP environment. In particular, Apache Spark is exploited for data preparation and feature engineering, with the corresponding (Python) code run interactively on Jupyter notebooks. Key integrations and libraries that make Spark capable of ingesting data stored in ROOT format and accessed via the XRootD protocol are described and discussed. Training of the neural network models, defined using the Keras API, is performed in a distributed fashion on Spark clusters using BigDL with Analytics Zoo, and also using TensorFlow, notably for distributed training on CPU and GPU resources. The implementation and the results of the distributed training are described in detail in this work.
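
To make the pipeline concrete, the sketch below illustrates the two steps named in the abstract: ingesting ROOT data over XRootD into a Spark DataFrame (via the spark-root data source) and training a Keras-style binary classifier in a distributed fashion with BigDL and Analytics Zoo. This is a minimal illustration, not the authors' actual code; the XRootD URL, column names, and network size are placeholder assumptions.

    # Minimal sketch: ROOT ingestion via the spark-root data source, then
    # distributed training with the Analytics Zoo Keras-style API.
    # Paths, column names and model size are illustrative placeholders.
    import numpy as np
    from pyspark.sql import SparkSession
    from bigdl.util.common import Sample
    from zoo.common.nncontext import init_nncontext
    from zoo.pipeline.api.keras.models import Sequential
    from zoo.pipeline.api.keras.layers import Dense

    sc = init_nncontext("hep-pipeline")   # SparkContext with the BigDL engine initialized
    spark = SparkSession(sc)

    # Read ROOT files over the XRootD protocol into a Spark DataFrame.
    df = (spark.read
          .format("org.dianahep.sparkroot")
          .load("root://eos.example.org//store/events.root"))  # placeholder URL

    # Convert each row into a BigDL Sample of (features, label) ndarrays.
    train_rdd = df.rdd.map(lambda row: Sample.from_ndarray(
        np.array(row["hlf_features"], dtype=np.float32),       # placeholder column
        np.array([row["label"]], dtype=np.float32)))

    # Define a small binary classifier with the Keras-style API and train it
    # in data-parallel fashion across the Spark executors.
    model = Sequential()
    model.add(Dense(64, activation="relu", input_shape=(14,)))  # 14 input features (assumed)
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(train_rdd, batch_size=256, nb_epoch=12, distributed=True)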

Highlights

  • High energy physics (HEP) experiments like those at the Large Hadron Collider (LHC) are paramount examples of “big-data” endeavors: chasing extremely rare physics processes requires producing, managing and analyzing large amounts of complex data

  • Physics data analysis benefits to a large extent from modern Machine Learning (ML) techniques, which are revolutionizing each processing step, from physics object reconstruction to parameter estimation and signal selection

  • Among the most popular analytics engines for big data processing, Spark enables interactive analysis and data exploration through its mature data processing engine and API for distributed data processing, integrates with cluster resource managers, and provides ML libraries that can train all common classifiers and regressors on large datasets in a distributed fashion


Summary

Introduction

High energy physics (HEP) experiments like those at the Large Hadron Collider (LHC) are paramount examples of “big-data” endeavors: chasing extremely rare physics processes requires producing, managing and analyzing large amounts of complex data. Physics data analysis benefits to a large extent from modern Machine Learning (ML) techniques, which are revolutionizing each processing step, from physics object reconstruction (feature engineering) to parameter estimation (regression) and signal selection (classification). In this scope, Apache Spark [1] represents a very promising tool to extend the traditional HEP approach, combining in a single system powerful means for both sophisticated data engineering and machine learning. Among the most popular analytics engines for big data processing, Spark enables interactive analysis and data exploration through its mature data processing engine and API for distributed data processing, integrates with cluster resource managers, and provides ML libraries that can train all common classifiers and regressors on large datasets in a distributed fashion.
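
As a minimal illustration of the last point, the following sketch trains one of MLlib's standard classifiers on a Spark cluster; the dataset path and feature columns are placeholder assumptions, not the pipeline described in this work.

    # Minimal sketch: distributed training of a standard classifier with
    # Spark MLlib. The input path and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import GBTClassifier

    spark = SparkSession.builder.appName("hep-mllib-sketch").getOrCreate()

    # Engineered features previously written out by the data-preparation step.
    df = spark.read.parquet("hdfs:///datasets/hep/features.parquet")

    # Assemble the per-event feature columns into a single vector column.
    assembler = VectorAssembler(inputCols=["pt", "eta", "phi", "mass"],
                                outputCol="features")
    train = assembler.transform(df)

    # Gradient-boosted trees; fitting is parallelized across the executors.
    gbt = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50)
    model = gbt.fit(train)
    model.write().overwrite().save("hdfs:///models/hep_gbt")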


