Abstract

This final report is presented by Langston University (LU) for the project entitled Langston University High Energy Physics (LUHEP) under the direction of principal investigator (PI) and project director Professor Joel Snow. The project encompassed high energy physics research performed at hadron colliders. The PI is a collaborator on the DZero experiment at Fermi National Accelerator Laboratory (FNAL) in Batavia, IL, USA and the ATLAS experiment at CERN in Geneva, Switzerland, and was a collaborator throughout the entire project period, from April 1, 1999 until May 14, 2012. Both experiments seek to understand the fundamental constituents of the physical universe and the forces that govern their interactions. In 1999, as a member of the Online Systems group for Run 2, the PI developed a cross-platform, Python-based Graphical User Interface (GUI) application for monitoring and control of EPICS-based devices for control room use. This served as a model for other developers to enhance and build on for further monitoring and control tasks written in Python. Subsequently the PI created and developed a cross-platform C++ GUI utilizing a networked client-server paradigm and based on ROOT, the object-oriented analysis framework from CERN. The GUI served as a user interface to the Examine tasks running in the DZero control room, which monitored the status and integrity of data taking for Run 2. The PI developed the histogram server/control interface to the GUI client for the Examine processes. The histogram server was built from the ROOT framework and was integrated into the DZero framework used for online monitoring programs and offline analysis. The PI developed the first implementation of displaying histograms dynamically generated by ROOT in a web browser. The PI's work resulted in several talks and papers at international conferences and workshops. The PI established computing software infrastructure at LU and the University of
Oklahoma (OU) to do analysis of DZero production data and produce simulation data for the experiment. Eventually this included the FNAL SAM data grid system, the SAMGrid (SG) infrastructure, and the Open Science Grid software stacks for computing and storage elements. At the end of 2003 Snow took on the role of global Monte Carlo production coordinator for the DZero experiment, a role he continues to hold. In January 2004 Snow started working with the SAMGrid development team to help debug, deploy, and integrate SAMGrid with DZero Monte Carlo production. Snow installed and configured SG execution and client sites at LUHEP and OUHEP, and an SG scheduler site at LUHEP. The PI developed a Python-based GUI (DAJ) that acts as a front end for job submission to SAMGrid. The GUI interfaces to the DZero Monte Carlo (MC) request system, which uses SAM to manage MC requests by the physics analysis groups. DAJ significantly simplified SG job submission and was deployed in DZero in an effort to increase the user base of SG. The following year saw the advent of SAMGrid job submission to the Open Science Grid (OSG) and the LHC Computing Grid (LCG) through a forwarding mechanism. The PI oversaw the integration of these grids into the existing production infrastructure. The PI developed an automatic MC request processing system (Automc) capable of operating without user intervention (other than obtaining grid credentials) and able to submit to any number of sites on various grids. The system manages production at all but two sites. The system was deployed at Fermilab and remains in operation there today. The PI's work in distributed computing resulted in several talks at international conferences. UTA, OU, and LU were chosen as the collaborating institutions that form the Southwest Tier 2 Center (SWT2) for ATLAS.
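The networked client-server histogram monitoring described in the Run 2 work above can be illustrated with a minimal, stdlib-only Python sketch. The real system was built on ROOT's C++ networking and GUI classes, which are not reproduced here; all names and the JSON payload below are hypothetical stand-ins for the actual histogram protocol:

```python
# Illustrative sketch of the client-server monitoring pattern: a server
# publishes histograms produced by an Examine-like task, and a GUI client
# polls it to redraw displays. Names and payload format are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical in-memory store standing in for histograms filled by a
# monitoring task during data taking.
HISTOGRAMS = {
    "track_pt": {"bins": [3, 7, 12, 5, 1], "title": "Track pT"},
}

class HistogramHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a named histogram as JSON, e.g. GET /track_pt
        name = self.path.lstrip("/")
        hist = HISTOGRAMS.get(name)
        if hist is None:
            self.send_error(404)
            return
        body = json.dumps(hist).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging in this sketch.
        pass

def fetch_histogram(port, name):
    # Client side: a GUI would poll like this and redraw the histogram.
    with urlopen(f"http://127.0.0.1:{port}/{name}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), HistogramHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    hist = fetch_histogram(server.server_address[1], "track_pt")
    print(hist["title"], sum(hist["bins"]))  # -> Track pT 28
    server.shutdown()
```

Serving histograms over HTTP as structured data is also the simplest way to see how dynamically generated histograms could be rendered in a web browser, as the PI's early implementation did with ROOT.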
During the project period the PI contributed to the online and offline software infrastructure through his work with the Run 2 online group, and played a major role in Monte Carlo production for DZero. During the part of the project period in which the PI served as MC production coordinator, MC production increased dramatically. In the first year of the PI's tenure as production coordinator, production was 159M events and 6.7 TB of data; during the last year of the project period it was 2,342M events and 262 TB of data, a factor of 15 increase in events and 39 in data volume. The increase came from improvements in computer hardware and networks, from the use of grid technology on diverse resources, and from increased automation and efficiency of the production process. LUHEP developed and deployed the automatic MC request processing system in use at FNAL. The complementary strategies of automation and grid production served DZero well. Fermilab recognized LUHEP's contribution to DZero by appointing the PI a guest scientist for the last six years of the project period, allowing him to devote full time to research activities.
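The automatic MC request processing system mentioned above can be sketched as a polling loop: pick up pending requests, choose a grid site, and submit without user intervention. This is a minimal illustrative sketch with hypothetical names, not the Automc implementation; in particular the round-robin site choice stands in for whatever scheduling policy the real system used:

```python
# Illustrative sketch (hypothetical names) of the automatic request-processing
# pattern: poll pending MC requests, choose a grid site, and submit jobs with
# no user intervention beyond holding valid grid credentials.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Request:
    req_id: int
    events: int
    status: str = "pending"
    site: Optional[str] = None

@dataclass
class AutoProcessor:
    sites: List[str]                        # grid sites available for submission
    submit: Callable[["Request", str], bool]  # pluggable submission backend
    _next: int = 0

    def pick_site(self) -> str:
        # Trivial round-robin stand-in for a real load-aware scheduler.
        site = self.sites[self._next % len(self.sites)]
        self._next += 1
        return site

    def run_once(self, queue: List[Request]) -> int:
        # Process every pending request; return how many were submitted.
        submitted = 0
        for req in queue:
            if req.status != "pending":
                continue
            site = self.pick_site()
            if self.submit(req, site):
                req.status, req.site = "submitted", site
                submitted += 1
        return submitted

if __name__ == "__main__":
    queue = [Request(1, 100000), Request(2, 250000)]
    proc = AutoProcessor(sites=["OSG-A", "LCG-B"], submit=lambda r, s: True)
    print(proc.run_once(queue))         # -> 2
    print([r.site for r in queue])      # -> ['OSG-A', 'LCG-B']
```

The pluggable `submit` callable reflects the design point in the text: because the processor is agnostic to the submission backend, the same loop can drive any number of sites across different grids (here OSG and LCG site names are placeholders).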
