Abstract

While software in industries such as aviation has a good safety record, little is known about whether standards for software in other safety-critical applications “work” — or even what that means. Safe use of software in safety-critical applications requires well-founded means of determining whether the software is fit for such use. It is often implicitly argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard “works,” such reliance is an experiment, and without carefully collecting assessment data, that experiment is unplanned. To help “plan” the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on our workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to “work,” and key assessment strategies and study techniques. Finally, we conclude with a discussion of the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome these barriers.
