Abstract

When one is interested in assuring the safety, security, or functional correctness of a computing system, the formal verification of its operating system (OS) is one of the obvious places to start. The operating system has privileged access to hardware and is therefore able to undermine any assurance that might have been derived independently for other parts of the system. This was recognised early, and a number of projects set out to formally verify the functional correctness of operating systems in the late 1970s and early 1980s. These pioneering efforts included UCLA Secure Unix, the PSOS project, and later Bevier's small kernel KIT. Operating system verification turned out to be a hard nut to crack, and none of these initial efforts produced a formally verified, realistic operating system or OS kernel. OS verification is hard because the flaws one is interested in uncovering often occur in the implementation layer: operating systems are commonly implemented in low-level languages such as C that are hard to reason about, and convenient abstractions such as virtual memory, message passing, and memory allocation are services implemented by the OS itself and therefore cannot be assumed. In recent years there has been renewed interest in the formal analysis and verification of operating systems, both in the OS research community and in the formal verification community. Formal verification techniques and proof assistants have advanced dramatically in the past 30 years, as has our understanding of language semantics.
