Abstract

The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment based in the USA, expected to start taking data in 2029. DUNE aims to precisely measure neutrino oscillation parameters by detecting neutrinos from the LBNF beamline (Fermilab) at the Far Detector, located 1,300 kilometres away at the Sanford Underground Research Facility in South Dakota. The Far Detector will consist of four 17 kt cryogenic Liquid Argon Time Projection Chamber detectors, each producing more than 1 TB/s of data. The main requirements for the data acquisition (DAQ) system are the ability to run continuously for extended periods, with a 99% up-time requirement, and the functionality to record both beam neutrinos and low-energy neutrinos from the explosion of a neighbouring supernova, should one occur during the lifetime of the experiment. The key challenges are the high data rates the detectors generate and the deep underground environment, which places constraints on power and space. To overcome these challenges, DUNE plans to use a highly optimised C++ software suite and a server farm of about 110 nodes, located close to the detector 1.5 kilometres underground, continuously running about two hundred multicore processes. A further thirty nodes at the surface will run around two hundred processes simultaneously. DUNE is studying the use of the Kubernetes framework to manage containerised workloads, taking advantage of its resource definitions and high up-time services to run the DAQ system. Progress in deploying these systems on the prototype DUNE experiments at the CERN Neutrino Platform is reported.
