Abstract

Modern sensors deployed in most Industry 4.0 applications are intelligent, meaning that they exhibit sophisticated behavior, usually due to embedded software, and offer network connectivity. For that reason, calibrating an intelligent sensor currently involves more than measuring physical quantities. Because the behavior of modern sensors depends on embedded software, a comprehensive assessment of such sensors necessarily demands the analysis of that software. Interlaboratory comparisons, in turn, are comparative analyses of a body of labs involved in such assessments. While interlaboratory comparison is a well-established practice in fields related to the physical, chemical and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of accredited software analysis and testing labs is no trivial task: software is intangible, its requirements accommodate some ambiguity, inconsistency or information loss, and software testing and analysis are highly human-dependent activities. In the present work, we investigate whether performing interlaboratory comparisons for software assessment through quantitative performance measurement is feasible. We evaluated each lab's competence in software code analysis activities using two quantitative metrics (code coverage and mutation score). Our results demonstrate the feasibility of establishing quantitative comparisons among accredited software analysis and testing laboratories. One of these rounds was registered as formal proficiency testing in the database—the first registered proficiency testing focused on code analysis.
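The two metrics used to compare labs can be stated compactly. The sketch below is an illustrative definition of line coverage and mutation score, not the paper's actual tooling; the function names and the example figures are hypothetical.

```python
# Illustrative definitions of the two metrics named in the abstract.
# All names and numbers here are hypothetical examples.

def line_coverage(executed_lines: set, executable_lines: set) -> float:
    """Fraction of executable lines exercised by the test suite."""
    return len(executed_lines & executable_lines) / len(executable_lines)

def mutation_score(killed_mutants: int, total_mutants: int) -> float:
    """Fraction of injected faults (mutants) that the test suite detects."""
    return killed_mutants / total_mutants

# Hypothetical lab result: the suite covers 45 of 60 executable lines
# and kills 27 of the 36 mutants generated for the code under analysis.
cov = line_coverage(set(range(45)), set(range(60)))
score = mutation_score(27, 36)
print(f"coverage = {cov:.2f}, mutation score = {score:.2f}")
# → coverage = 0.75, mutation score = 0.75
```

In practice these values come from coverage and mutation-testing tools run against each lab's test suite; comparing the resulting numbers across labs is what makes the interlaboratory comparison quantitative.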

Highlights

  • Conformity assessment is a fundamental activity for industry and society [1]

  • For testing laboratories involved in conformity assessment activities, accreditation is a declaration that the lab fulfills the requirements of ISO/IEC 17025 (or an equivalent national standard), together with recognition of its technical capacity to perform the set of tests within its scope

  • Overview: the research method used in our study aims at (1) acquiring and organizing an initial set of evidence on recent proficiency testing methods; (2) where possible, identifying which evidence relates to sensor/Information and Communication Technology (ICT)/software-based products; (3) capturing insights that could be reused or adapted in the context of software conformity assessment; and (4) organizing a set of steps that permit performing rounds of interlaboratory comparisons


Introduction

Conformity assessment is a fundamental activity for industry and society [1]. Based on activities such as technical standardization, metrology, testing, calibration, certification and accreditation—jointly known as the national quality infrastructure—it is possible to provide assurance that products, processes, people and management systems meet standard technical requirements. In countries where a solid national quality infrastructure is established, the activities related to conformity assessment are carried out by bodies that meet the requirements described in the ISO/IEC 17000 family of standards. One of these standards applies, in particular, to calibration and testing laboratories: to have their calibrations and tests nationally and internationally recognized, these laboratories must meet the requirements of the current ISO/IEC 17025 and be accredited by an official national accreditation body [2,3]. The role of accredited laboratories is to carry out tests demonstrating that a given product meets a set of requirements normally associated with a regulation or a technical standard, such as ISO standards [16,19].

