Safety-critical cyber-physical systems require evidence that they are indeed safe. In practice, such evidence consists of the results of system tests. Unfortunately, tests can only demonstrate the presence of software errors, not their absence, and can in practice cover only a tiny fraction of a system's state space. Valiant efforts to formally verify program correctness have been either excruciatingly difficult (theorem proving) or incomplete (static analysis, model checking, SAT or SMT solving). The BLESS Methodology was created specifically so that engineers in industry can formally verify software that controls machines. It transforms programs that control machines, annotated with assertions to form proof outlines, into deductive proofs that every possible program execution conforms to its specification. To the extent that a cyber-physical system's specification has been validated to express system safety and performance, a deductive proof can be a convincing argument to a person that the program is correct. This paper uses a simple safety-critical system to argue that behavior correctness proof under the BLESS Methodology is a convincing verification artifact in addition to customary testing.
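As a rough illustration of the proof-outline idea only (this is not BLESS notation; the function, constants, and bounds below are hypothetical), a minimal sketch in C annotates a control routine with Hoare-style assertions so that each program point carries a claim intended to hold on every execution, and the claims together imply the postcondition:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical bounds on an actuator command; illustrative only. */
#define MIN_RATE 0
#define MAX_RATE 100

/* Clamp a commanded rate into [MIN_RATE, MAX_RATE].
 * The comments form a proof outline: assertions at program points
 * that, taken together, justify the postcondition for every execution. */
int clamp_rate(int commanded)
{
    /* precondition: true (any int commanded) */
    int rate = commanded;
    if (rate < MIN_RATE) {
        rate = MIN_RATE;
        /* here: rate == MIN_RATE */
    } else if (rate > MAX_RATE) {
        rate = MAX_RATE;
        /* here: rate == MAX_RATE */
    }
    /* postcondition: MIN_RATE <= rate <= MAX_RATE */
    assert(MIN_RATE <= rate && rate <= MAX_RATE);
    return rate;
}

int main(void)
{
    printf("%d %d %d\n", clamp_rate(-5), clamp_rate(42), clamp_rate(250));
    return 0;
}
```

In a deductive verification setting, such assertions are discharged as proof obligations for all inputs rather than checked at run time on particular test cases.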