Democratising Safe and Robust AI through Public Challenges in Bayesian Deep Learning
Probabilistic approaches to deep learning, such as Bayesian Deep Learning, are already in use in industry and academia. In medical applications they address problems of AI safety and robustness by identifying when an AI system is effectively guessing at random. However, major obstacles stand in the way of wider adoption: practitioners need expert knowledge in AI to build safe and robust AI tools into their applications, while AI researchers need expert knowledge of the downstream applications themselves to identify gaps in current methodology.
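To make the idea of "identifying when an AI system is guessing at random" concrete, the sketch below is purely illustrative and not part of the proposal: it uses Monte Carlo dropout, one common approximate-inference technique in Bayesian Deep Learning, with a toy PyTorch classifier. All names and parameters here are hypothetical.

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """A hypothetical classifier with dropout, usable for MC dropout."""
    def __init__(self, n_in=16, n_hidden=64, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Dropout(p=0.2),  # kept active at test time for MC dropout
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predictive_entropy(model, x, n_samples=50):
    """Average the softmax over stochastic forward passes; entropy near
    log(n_classes) suggests the model is guessing at random on that input."""
    model.train()  # keep dropout on; in practice, freeze batch norm etc.
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ]).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

model = ToyClassifier()
x = torch.randn(4, 16)
print(predictive_entropy(model, x))
```

In a safety-critical setting, inputs whose predictive entropy is high can be flagged for referral to a human expert rather than decided automatically.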
To address these obstacles, this project aims to build new AI challenges that assess specifically for safety and robustness, derived from real-world industrial applications of AI. As the community competes on these public challenges, the contributed models will form an open-source toolbox of well-tested safe AI tools, lowering the expertise barrier that keeps practitioners from deploying safe and robust AI in industry.
The project will set the course for a community-driven effort leading to a self-sustaining ecosystem. Practitioners will use open-source tools tested for safety and robustness against metrics derived from downstream industry applications such as medical imaging. AI researchers will publish new results on the project's benchmarks, built on these same metrics, to compare new AI tools against established ones, and will contribute their tools as baselines to the toolbox for other researchers to compare against and for industry to use.
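As one example of the kind of safety-oriented metric such benchmarks might use (an assumption for illustration, not the project's definition): in medical imaging, models are often scored by how accuracy improves as the most uncertain cases are referred to a human expert. A minimal sketch:

```python
import numpy as np

def accuracy_vs_referral(labels, predictions, uncertainties, retain_fractions):
    """Accuracy on the most confident cases after referring the most uncertain
    ones to an expert; a common metric in this space (illustrative only)."""
    order = np.argsort(uncertainties)  # most confident first
    labels, predictions = labels[order], predictions[order]
    accs = []
    for frac in retain_fractions:
        n = max(1, int(frac * len(labels)))  # number of cases retained
        accs.append((labels[:n] == predictions[:n]).mean())
    return np.array(accs)

# Toy usage with random data (hypothetical values):
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
predictions = rng.integers(0, 2, size=1000)
uncertainties = rng.random(1000)
print(accuracy_vs_referral(labels, predictions, uncertainties, [0.5, 0.7, 0.9, 1.0]))
```

A model with well-calibrated uncertainty should show accuracy rising as more uncertain cases are referred, which is what makes such metrics a useful proxy for safety in downstream applications.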
The public challenges will form a bridge between practitioners and AI researchers, opening up new research opportunities for the AI community and pushing the field to develop new, safe and robust AI tools available to all. This effort to democratise safe and robust AI, in alignment with the UK's strategic plan set out by Hall and Pesenti, will put the UK at the forefront of AI on the world stage.