Submitted by Rachel Gardner on 26 November 2024
Systems software – low-level software such as operating systems and hypervisors – is vital to keeping our computers and mobile phones running.
If an operating system goes wrong, the consequences range from irritating (such as an app crashing) to serious: being unable to turn on your computer. And if errors in an operating system enable a virus or malware to infect it, what follows can be far-reaching, leading to security breaches, leaks of confidential information or financial losses.
That's why we really need to find better ways to check that systems software is behaving as it should. So says Dr Christopher Pulte, a Senior Research Associate in our department. He is just starting an independent research programme in this area after being awarded a coveted £1.83m Royal Society University Research Fellowship.
The Royal Society makes these awards to exceptional researchers to tackle key scientific questions. Christopher will be using his to advance new principles and tools for verifying the correctness of systems software.
If errors in an operating system allow a virus or malware to infect it, it can lead to security breaches, leaks of confidential information or financial losses.
Dr Christopher Pulte
He aims to test how well these new principles and tools work by using them to verify key, security-critical components in widely used open-source systems software. "I'm especially interested in correctness proof," he explains, "because it offers the highest level of assurance – a mathematical proof of correctness – that software is working properly."
But why do we need to do maths to ensure that software is working correctly? Why can't we just test it?
"Well, that's typically what happens in practice," Christopher says. "Software developers will run the software on examples and check that it behaves properly. But while it might work well on those particular examples, it doesn’t guarantee that you will find all the errors in the software. So if we want a high level of assurance – and for systems software we should – then we have to find different ways to validate it."
Open problems in systems software verification
Hence his interest in developing new techniques to mathematically analyse systems software and prove it correct. Although there has been a lot of progress in this area over the last twenty years, verifying systems software remains very challenging.
One open problem of particular interest is the mutual dependence between the systems software and the programming language it is written in.
"Systems software relies on functionality provided by the language – but whether the language can provide this depends on the correctness of the systems software," Christopher says. And this has made it challenging to develop sound reasoning principles for mathematical verification.
"You want to mathematically reason about the systems software at the level of the programming language and show that the systems software is correct. But the systems software and the programming language mutually depend on each other because if the systems software is broken, this can break the programming language, and vice versa. We don’t have the theoretical machinery for handling this mutual dependency. So that’s the main problem I’m looking at."
Testing doesn't guarantee that you will find all the errors in the software. So if we want a high level of assurance, then we have to find different ways to validate it.
Dr Christopher Pulte
Another issue he'll grapple with is that systems software interacts directly with the hardware beneath it – its purpose, after all, is to manage access to hardware resources – and hardware behaviour is extremely complicated.
Difficulties of reasoning about hardware behaviour
"For the mathematical reasoning to be meaningful, it has to consider how the hardware behaves. Previous mathematical proofs have often relied on simplified assumptions – for example, that the hardware executes code step by step. In reality, in order to speed up the operation, hardware can execute code out of order and speculatively, and this makes it very difficult to reason about it."
He is hopeful that he can overcome these issues and develop reasoning principles and tools for verifying systems software. A final challenge will be making them practical enough that they could be used on realistic software – at least by experts.
The long-term goal of his research is a future where better programming languages and verification tools can be employed during software development to avoid many errors before deployment.
This, he says, would "radically improve the reliability and security of our software systems."