Abstract

Many decades ago Patrick Suppes argued rather convincingly that theoretical hypotheses are not confronted with the direct, raw results of an experiment; rather, they are typically compared with models of data. What exactly is a data model, however? And how do the interactions of particles at the subatomic scale give rise to the huge volumes of data that are then moulded into a polished data model? The aim of this paper is to answer these questions by presenting a detailed case study of the construction of data models at the LHCb experiment for testing Lepton Flavour Universality in rare decays of B-mesons. The close examination of scientific practice at LHCb leads to the following four main conclusions: (i) raw data in their pure form are practically useless for the comparison of experimental results with theory, and processed data are in some cases epistemically more reliable; (ii) real and simulated data are involved in the co-production of the final data model and cannot be easily distinguished; (iii) theory-ladenness emerges at three different levels, depending on the scope and the purpose for which background theory guides the overall experimental process; and (iv) the overall process of acquiring and analysing data in high energy physics is too complicated to be fully captured by a generic methodological description of experimental practice.

Highlights

  • The constantly growing integration of science and technology over the last decades has brought science into the new ‘era of big data’

  • The LHCb experiment is designed to profit from the enormous production rate of b quarks in proton-proton collisions at the Large Hadron Collider (LHC), which occur at a rate of around 3 × 10¹¹ per fb⁻¹ (see the yield sketch after this list). The LHCb detector collects about 25% of the b quarks produced in these collisions and provides the necessary data for making precise measurements of various observables related to the rare B-decays

  • The description of the four stages in High-Energy Physics (HEP) data modelling and the following remarks on the two distinctions between raw/processed data and real/simulated data bring us to the end of our discussion
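To make the quoted figures concrete, the short Python sketch below (not taken from the paper) simply combines the production rate of about 3 × 10¹¹ b quarks per fb⁻¹ with the roughly 25% LHCb acceptance quoted in the highlight above to give a rough yield estimate; the integrated-luminosity value used in the example is an illustrative assumption, not a number from the paper.

    # Back-of-the-envelope yield estimate using the figures quoted in the highlight above.
    # The 9 fb^-1 example value is an illustrative assumption, not a number from the paper.

    B_QUARKS_PER_INV_FB = 3e11   # b quarks produced per fb^-1 of proton-proton collisions
    LHCB_ACCEPTANCE = 0.25       # approximate fraction of produced b quarks within the LHCb acceptance

    def b_quarks_in_acceptance(integrated_luminosity_inv_fb):
        """Rough number of b quarks inside the LHCb acceptance for a given integrated luminosity (fb^-1)."""
        return B_QUARKS_PER_INV_FB * LHCB_ACCEPTANCE * integrated_luminosity_inv_fb

    if __name__ == "__main__":
        lumi = 9.0  # fb^-1 (assumed example value)
        print(f"~{b_quarks_in_acceptance(lumi):.1e} b quarks in acceptance for {lumi} fb^-1")

For 1 fb⁻¹ the same arithmetic gives roughly 7.5 × 10¹⁰ b quarks within the acceptance, which conveys why even heavily filtered LHCb datasets are enormous.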



Introduction

Advanced methods of data collection often result in enormous datasets, calling for ever more sophisticated methods of data analysis in order to enable the comparison of experimental results with theoretical hypotheses. As will become evident, the very nature of experimental high-energy physics (HEP) makes the interpretation of raw, unprocessed data impossible, and the only way to achieve progress in the field is by collecting and analysing large volumes of processed data. This suggests that a clear distinction between raw data and processed data cannot be applied in the context of large-scale HEP experiments.

Stretching the hierarchy of models account
B-anomalies and Lepton Flavour Universality
Data processing at the LHCb
The three levels of theory-ladenness
The LHCb trigger system
Constructing data models for the R_K ratio
Selection criteria
Efficiency calculations
Data fits
Uncertainty calculations
Two dubious distinctions
Findings
Concluding remarks
