Description: Machine learning comprises a range of methods for classifying or predicting real-world phenomena from data. When ML
models are used to predict employee success or set insurance prices, they can have significant effects on people’s lives.
Understandably, in such circumstances people may want an explanation of why a system has given them a negative score. Various
methods have been proposed to make ML models interpretable, including systematically perturbing the inputs to observe the
effect on the output, or computing power indices (e.g. Shapley values) for each input. However, few of these techniques have been translated into
user-friendly explanation interfaces. In this project, you would explore ways to improve explanation systems. This could involve
generating novel explanation methods, translating existing ones into graphical or natural-language formats, and/or testing
them with users for different purposes.
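To make the perturbation idea concrete, here is a minimal sketch (a hypothetical toy model and helper, not any specific library's API): each input feature is perturbed in turn, and the resulting change in the model's output is used as a crude importance score.

```python
def toy_model(features):
    # Hypothetical scoring model: a fixed linear combination of inputs.
    weights = [0.6, -0.2, 0.1]
    return sum(w * x for w, x in zip(weights, features))

def perturbation_importance(model, instance, delta=1.0):
    """Score each feature by the output change when it is nudged by `delta`."""
    baseline = model(instance)
    importances = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += delta          # perturb one feature at a time
        importances.append(model(perturbed) - baseline)
    return importances

scores = perturbation_importance(toy_model, [1.0, 2.0, 3.0])
print(scores)  # sensitivity of the toy model to each feature
```

For the linear toy model, each score simply recovers the corresponding weight; an explanation interface would then present such scores graphically or in natural language rather than as raw numbers.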