Abstract

Measurements in Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors, such as the MicroBooNE detector at Fermilab [1], feature large, high-fidelity event images. Deep learning techniques have been extremely successful in classifying photographs, but their application to LArTPC event images is challenging due to the large size of the events. Events in these detectors are typically two orders of magnitude larger than images found in classical challenges, such as recognition of handwritten digits in the MNIST database or object recognition in the ImageNet database. Ideally, training would occur on many instances of the entire event data, rather than on many instances of cropped regions of interest from the event data. However, such efforts lead to extremely long training cycles, which slow the exploration of new network architectures and the hyperparameter scans needed to improve classification performance. We present studies of scaling a LArTPC classification problem on multiple architectures, spanning multiple nodes, carried out on simulated events in the MicroBooNE detector. Optimizing the networks or extracting physics from the results is beyond the scope of this study. Institutional computing at Pacific Northwest National Laboratory and the SummitDev machine at Oak Ridge National Laboratory’s Leadership Computing Facility were used. To our knowledge, this is the first deployment of state-of-the-art convolutional neural networks for particle physics, and their attendant computing techniques, on DOE Leadership Class Facilities. We expect the benefits to accrue particularly to the Deep Underground Neutrino Experiment (DUNE) LArTPC program, the flagship US High Energy Physics (HEP) program for the coming decades.

Highlights

  • Use of convolutional networks to analyze time projection chamber data is often performed on cropped data because of large image sizes

  • A simplified convolutional neural network (CNN) model was developed for testing

  • Results show that increasing the number of GPUs for both dense (JishNet) and sparse (SCNet) CNNs successfully decreased training time on large images in large datasets
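The scaling claim in the last highlight can be illustrated with a toy model of ideal data-parallel training: each GPU processes an equal shard of the dataset, so epoch time scales roughly as 1/N plus a fixed synchronization cost. All numbers below are made up for illustration; they are not measurements from this study.

```python
# Toy model of data-parallel training. Ideal epoch wall time falls as
# 1/n_gpus, plus a fixed per-epoch synchronization overhead that caps
# the achievable speedup. Numbers are illustrative, not measured.

def epoch_time(base_seconds, n_gpus, sync_overhead=0.0):
    """Idealized epoch wall time with n_gpus workers."""
    return base_seconds / n_gpus + sync_overhead

base = 3600.0  # hypothetical single-GPU epoch time, in seconds
for n in (1, 2, 4, 8, 16):
    t = epoch_time(base, n, sync_overhead=10.0)
    print(f"{n:2d} GPUs: {t:8.1f} s  (speedup {base / t:4.1f}x)")
```

The fixed `sync_overhead` term is why measured speedups flatten below the ideal linear curve as GPU counts grow.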


Summary

Introduction

Use of convolutional networks to analyze time projection chamber (TPC) data is often restricted to cropped data because of large image sizes. Training and inference on uncropped TPC data is desirable, to minimize the loss of physics information before training. The high fidelity and large size of the image data require scaling computing resources from single GPUs to tens and hundreds of GPUs.

1.1. The MicroBooNE Detector and data format used

This work formats its simulated data following the MicroBooNE experiment [1]. MicroBooNE is a 170-tonne liquid argon time projection chamber (LArTPC) built to study neutrino physics. Its readout consists of 2 induction planes with 2400 wires each and 1 collection plane with 3456 wires. Each readout window spans 4.8 ms (2.2× the TPC drift time), corresponding to 9600 digitizations per wire.
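To put these image sizes in context, a short sketch (arithmetic only, using the wire and digitization counts quoted above; the MNIST and ImageNet sizes are the standard 28×28 and 224×224 formats) compares a single-plane event image against the classical benchmarks:

```python
# Illustrative arithmetic using the readout numbers quoted in the text.
# A single-plane event image has one sample per wire per digitization.

wires = 3456        # collection-plane wires (induction planes: 2400 each)
ticks = 9600        # digitizations per 4.8 ms readout window

event_pixels = wires * ticks
mnist_pixels = 28 * 28          # MNIST handwritten-digit image
imagenet_pixels = 224 * 224     # common ImageNet crop, per channel

print(f"LArTPC plane image: {wires} x {ticks} = {event_pixels:,} pixels")
print(f"vs MNIST:    {event_pixels / mnist_pixels:,.0f}x more pixels")
print(f"vs ImageNet: {event_pixels / imagenet_pixels:,.0f}x more pixels")

# Digitization rate implied by the text: 9600 samples over 4.8 ms
rate_mhz = ticks / 4.8 / 1000   # samples per ms -> MHz
print(f"implied digitization rate: {rate_mhz:.1f} MHz")
```

The collection-plane image alone carries roughly 33 million samples, two to three orders of magnitude more pixels than an ImageNet crop, which is what drives the memory and training-time costs motivating this scaling study.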

Methods
Results
Conclusion
