I am interested in the testing and verification of machine learning systems, and, conversely, in the use of machine learning to test and verify other systems.
My current research leverages generative models to automatically construct useful test cases. Specifically, I have been focusing on so-called adversarial examples as a type of test input for evaluating the robustness of neural networks. Generative machine learning makes it possible to create unrestricted adversarial examples, which are not constrained to the rather contrived threat model that has dominated the literature so far. In Adaptive Generation of Unrestricted Adversarial Inputs, we describe a method with a number of advantages over prior work on unrestricted adversarial examples, chiefly that it cannot be easily mitigated by standard adversarial training.
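To make the contrast concrete, the "contrived threat model" that dominates the literature is typically a norm-bounded perturbation of a fixed input, as in the fast gradient sign method (FGSM) of Goodfellow et al. The sketch below illustrates that conventional setting on a toy logistic classifier with an analytic gradient; the classifier, weights, and epsilon are illustrative stand-ins, not the models or method from the paper. Unrestricted adversarial examples, by contrast, are generated from scratch and need not lie within any such epsilon-ball around an existing input.

```python
import numpy as np

# Toy setup: a linear logistic classifier and one clean input.
# These are illustrative assumptions, not part of the described method.
rng = np.random.default_rng(0)
w = rng.normal(size=10)   # classifier weights
x = rng.normal(size=10)   # a clean input
y = 1.0                   # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x_in):
    # Binary cross-entropy of the toy classifier on (x_in, y).
    p = sigmoid(w @ x_in)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Gradient of the loss w.r.t. the input (analytic for this linear model).
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: step by epsilon in the sign of the gradient, i.e. an
# L-infinity-bounded perturbation -- the conventional threat model.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)
```

Adversarial training hardens a model against exactly this kind of epsilon-bounded perturbation, which is why inputs generated outside that ball are harder to defend against with the standard recipe.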
In ongoing work, I am generalising this approach to generate tests for other kinds of systems where white-box gradients are not available. I am also investigating the extent to which adversarial training on generated unrestricted adversarial inputs yields a more robust classifier.
After completing my undergraduate degree in computer science at the University of Cambridge in 2016, I spent two years teaching computer science at a school in Birmingham through the Teach First Leadership Development programme. I began studying for a PhD under Profs. Daniel Kroening and Tom Melham in October 2018.