Abstract

This paper presents a data compression algorithm for data produced by the digitization of signals from particle physics detectors. Modern detectors in high energy physics experiments produce very large amounts of data; efficient data reduction directly at the front end is therefore required to limit the volume of data to be transmitted and stored. The compression algorithm proposed in this paper is a lossless method based on a combination of vector quantization and Huffman coding. It is designed to be implemented in a new front-end ASIC for the amplification and digitization of signals in TPC detectors. The performance of the method is evaluated using a set of data recorded with the ALICE TPC from cosmic rays at CERN, and is compared with simpler arithmetic coding and Huffman coding. The results show that a compression factor of 2 can be obtained on relevant data, a reduction better than that predicted by the simple entropy of the data set used. An implementation proposal is given, and the memory requirements for an ASIC implementation are estimated.
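
As a rough illustration of the idea only (not the paper's actual algorithm), the Python sketch below groups ADC samples into fixed-length vectors, indexes each distinct vector in a small codebook, and Huffman-codes the resulting index stream; because every distinct vector keeps its own index, the scheme remains lossless. The vector length, the codebook construction, and the bit layout are illustrative assumptions, not details taken from the paper.

    # Illustrative toy only: every distinct sample vector becomes its own
    # codebook entry, so the scheme stays lossless. The paper's actual
    # codebook construction and bit-level format are not reproduced here.
    from collections import Counter
    import heapq

    def build_huffman(freqs):
        """Return a prefix-code table {symbol: bitstring} from symbol frequencies."""
        heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(freqs.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate single-symbol case
            return {next(iter(freqs)): "0"}
        tiebreak = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
            tiebreak += 1
        return heap[0][2]

    def compress(samples, vec_len=4):
        """Group ADC samples into vectors, index them in a codebook,
        and Huffman-code the index stream (all steps lossless)."""
        vectors = [tuple(samples[i:i + vec_len])
                   for i in range(0, len(samples), vec_len)]
        codebook = {v: i for i, v in enumerate(dict.fromkeys(vectors))}
        indices = [codebook[v] for v in vectors]
        table = build_huffman(Counter(indices))
        bitstream = "".join(table[i] for i in indices)
        return bitstream, table, codebook

    if __name__ == "__main__":
        adc = [12, 13, 12, 13, 12, 13, 12, 13, 40, 41, 39, 40, 12, 13, 12, 13]
        bits, table, book = compress(adc)
        raw_bits = len(adc) * 10            # assuming 10-bit ADC words
        print(f"raw: {raw_bits} bits, coded: {len(bits)} bits "
              f"(codebook of {len(book)} vectors not counted)")

In a real front-end implementation the codebook and code table would be fixed in advance or transmitted separately; the sketch omits this and only counts the coded index stream.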
