Abstract

Measurements in Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors feature large, high-fidelity event images. Deep learning techniques have been extremely successful in photograph classification tasks, but their application to these event images is challenging due to the large size of the events, more than two orders of magnitude larger than images found in classical challenges such as MNIST or ImageNet. This leads to extremely long training cycles, which slow down the exploration of new network architectures and hyperparameter scans to improve the classification performance. We present studies of scaling an LArTPC classification problem on multiple architectures, spanning multiple nodes. The studies are carried out on simulated events in the MicroBooNE detector.

Highlights

  • The MicroBooNE detector is a Liquid Argon Time Projection Chamber (LArTPC) at the Fermi National Accelerator Laboratory (Fermilab)

  • Our report focuses on the data recorded by the 8256 wires arranged in three readout planes at -60, 60, and 90 degrees with respect to the neutrino beam, with 2400 wires in each of the two induction planes (U and V) and 3456 wires in the collection plane (Y)

  • Tracks and showers result from the charge that drifts to the readout planes, where it is digitized
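The wire counts quoted in the highlights can be cross-checked with a line of arithmetic (a trivial sketch; the plane labels and counts are taken directly from the highlight above):

```python
# Two induction planes (U, V) with 2400 wires each, plus the collection
# plane (Y) with 3456 wires, should account for all 8256 readout wires.
plane_wires = {"U": 2400, "V": 2400, "Y": 3456}
total = sum(plane_wires.values())
print(total)  # 8256
```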


Summary

Introduction

The MicroBooNE detector is a Liquid Argon Time Projection Chamber (LArTPC) at the Fermi National Accelerator Laboratory (Fermilab). The purpose of this note is to describe a tool that allows the use of multiple GPUs to train deep learning models to classify simulated events in the MicroBooNE detector, and to evaluate its performance. It is structured as follows: Section 2 describes the samples used in this study, Section 3 gives a brief description of the network used to categorize the samples, Section 4 gives an overview of the MaTEx tool that underpins our scaling studies, and Section 6 describes our measurements in detail. The evaluation and improvement of the physics performance of different network architectures is left for future studies.

