Abstract

Modern society faces a fundamental problem: the reliability of the complex, evolving software systems on which it critically depends cannot be guaranteed by established, non-mathematical computer engineering techniques such as informal prose specification and ad hoc testing. The situation is worsening: modern companies move fast, leaving little time for code analysis and testing; the behaviour of concurrent and distributed programs cannot be adequately assessed using traditional testing methods; users of mobile applications often neglect to apply software fixes; and malicious users increasingly exploit even simple programming errors, causing major security disruptions. Building trustworthy, reliable software is becoming harder and harder to achieve, while new business and cybersecurity challenges make it ever more critically important. The challenge is to bring program specification and verification to the heart of the software design process. Most code validation rests on the outdated assumptions that a company's internal, unpublished procedures are robust and that the developer is not malicious. High-grade industry players, such as aerospace companies, do use sophisticated internal processes and tools to establish some degree of trust in their code, which can then be certified by government bodies such as the UK National Cyber Security Centre. We should, however, be able to do better than this. Software should be judged on fundamental scientific principles, with proper answers to questions such …
