Abstract

As part of the LAA project at CERN [1], we have studied the suitability of fine-grain parallel computers for the data-acquisition systems of experiments at high energy particle accelerators. Running feature extraction algorithms on such computer architectures may make it possible to build triggers or to compact fine-grain data, reducing event rates and/or data volumes. Our goal was to gain a clear understanding of the problems involved in implementing trigger algorithms on parallel structures, and to assess the difficulties and limitations of embedding such structures in data acquisition systems. To this end, we have defined a set of representative benchmarks and have used them to evaluate several commercially available parallel processor systems, both in hardware implementations and in simulation. Overviews of the benchmarks and of the architectures studied are presented, together with the numerical results. In conclusion, for suitable algorithms on partial detector data, the parallel systems studied outperform other forms of commercially available computers by large factors. They will be serious competitors to custom-designed processors for triggering and data compaction tasks in future experiments that have to deal with high-intensity beams. Applications will be found in clustering, track finding, calorimetry, and in particle identification devices such as TRDs or RICH counters.
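To make the notion of a feature-extraction trigger algorithm concrete, the sketch below shows a minimal local-maximum cluster finder over a calorimeter-style grid, the kind of task such benchmarks exercise. It is an illustration only: the grid size, threshold, and 3x3 neighbourhood are hypothetical choices, not taken from the paper.

```python
THRESHOLD = 5.0  # minimum cell energy (arbitrary units) to seed a cluster


def find_clusters(grid):
    """Return (row, col, summed 3x3 energy) for each cell that is a local
    maximum above THRESHOLD.  Each cell is examined independently of the
    others, which is why this loop body maps naturally onto one processing
    element per cell on a fine-grain parallel machine."""
    rows, cols = len(grid), len(grid[0])
    clusters = []
    for r in range(rows):
        for c in range(cols):
            energy = grid[r][c]
            if energy < THRESHOLD:
                continue
            # Gather the 3x3 neighbourhood, clipped at the grid edges.
            neighbourhood = [grid[i][j]
                             for i in range(max(r - 1, 0), min(r + 2, rows))
                             for j in range(max(c - 1, 0), min(c + 2, cols))]
            if energy == max(neighbourhood):  # local maximum -> cluster seed
                clusters.append((r, c, sum(neighbourhood)))
    return clusters


if __name__ == "__main__":
    calo = [[0.0] * 5 for _ in range(5)]
    calo[2][2] = 9.0   # one energetic hit
    calo[2][3] = 2.0   # some energy leakage into a neighbouring cell
    print(find_clusters(calo))  # → [(2, 2, 11.0)]
```

On a fine-grain architecture the per-cell test and neighbourhood sum would run concurrently across all cells, so only the list of surviving seeds, rather than the full grid, needs to be read out.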
