Abstract
Software testing and benchmarking are key components of the software development process. Nowadays, continuous integration (CI) is considered good practice in large software projects. The key idea of CI is to have developers integrate their work as they produce it, instead of performing the integration only at the end of each software module's development. In this paper, we extend previous work on a benchmark suite for the YAP Prolog system and propose a fully automated test bench environment for Prolog systems, named Yet Another Prolog Test Bench Environment (YAPTBE), aimed at assisting developers in the development and CI of Prolog systems. YAPTBE is based on a cloud computing architecture and relies on the Jenkins framework, together with a new Jenkins plugin, to manage the underlying infrastructure. We present the key design and implementation aspects of YAPTBE and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks.
Highlights
We present the key design and implementation aspects of Yet Another Prolog Test Bench Environment (YAPTBE) and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks
In the early years of software development, it was a well-known rule of thumb that in a typical software project, approximately 50% of the elapsed time and more than 50% of the total cost were spent in testing the components of the software under development
We developed a first benchmark suite framework based on the continuous integration (CI) and black-box approaches to support the development of the YAP Prolog system [10]
Summary
In the early years of software development, it was a well-known rule of thumb that in a typical software project, approximately 50% of the elapsed time and more than 50% of the total cost were spent in testing the components of the software under development. We developed a first benchmark suite framework based on the CI and black-box approaches to support the development of the YAP Prolog system [10]. This framework was very important, mainly to ensure YAP's correctness in the context of several improvements and new features added to its tabling engine [19,20]. The framework handles the comparison of the outputs obtained by running benchmarks for specific Prolog queries and, when using tabled evaluation, of the answers stored in the table space. It also supports the different dialects of the B-Prolog and XSB Prolog systems. We outline some conclusions and indicate directions for further work.
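As an illustration of the kind of benchmark involved, the sketch below shows a minimal tabled Prolog program of the sort such a framework could run. The edge/2 graph and the run/0 entry point are hypothetical, not taken from the actual suite; the table/1 directive, however, is the standard tabling declaration accepted by YAP, XSB and B-Prolog. With tabling, the evaluation terminates even though the graph is cyclic, and a harness can compare both the printed answers and the contents of the table space against a stored reference output.

    % Tabling memoizes path/2, so evaluation terminates on cyclic graphs
    % where plain SLD resolution would loop forever.
    :- table path/2.

    edge(a, b).
    edge(b, c).
    edge(c, a).    % this cycle makes untabled evaluation non-terminating

    path(X, Z) :- edge(X, Z).
    path(X, Z) :- path(X, Y), edge(Y, Z).

    % Hypothetical entry point a harness would call: it writes a canonical,
    % sorted list of answers so that successive runs, and runs on different
    % Prolog systems, can be compared textually against a reference output.
    run :-
        findall(X-Y, path(X, Y), Answers),
        sort(Answers, Sorted),
        write(Sorted), nl.

Printing the answers in sorted, canonical form is what makes a simple textual diff sufficient for the comparison step, since answer order is not guaranteed to be the same across systems or runs.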