
Deconstructing Active Learning

Supervisor

Suitable for

MSc in Computer Science

Abstract

Deep learning with small amounts of labelled data, as in deep active learning, is a very attractive research topic in Deep Learning: it is widely applicable in industry and admits many different approaches. However, close examination of the active learning literature suggests that many baselines used in previous research are very weak. Even naive baselines such as random acquisition often perform much better than reported in the literature. This project aims to deconstruct previous research, reproduce its results, and show that random acquisition performs much better than reported. Moreover, we would investigate the effect of training with pseudo-labels on active learning, and of using random acquisition on low-confidence samples only (which should boost performance at low cost and could be competitive).
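To illustrate one of the variants mentioned above, random acquisition restricted to low-confidence samples is simple to prototype. The sketch below is a minimal, illustrative version only; the function name, the 0.9 confidence threshold, and the fallback to plain random acquisition are assumptions rather than part of the project description.

```python
import numpy as np

def acquire_low_confidence_random(pool_probs, k, confidence_threshold=0.9, rng=None):
    """Pick k pool indices uniformly at random among low-confidence samples.

    pool_probs: (N, C) array of predicted class probabilities for the
    unlabelled pool (e.g. softmax outputs of the current model).
    Threshold and fallback behaviour are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    confidence = pool_probs.max(axis=1)              # max softmax probability per sample
    low_conf = np.flatnonzero(confidence < confidence_threshold)
    if len(low_conf) < k:                            # fall back to plain random acquisition
        low_conf = np.arange(len(pool_probs))
    return rng.choice(low_conf, size=k, replace=False)

# Example: a pool of 10,000 samples with 10 classes, acquiring 100 samples per round.
probs = np.random.dirichlet(np.ones(10), size=10_000)
batch_indices = acquire_low_confidence_random(probs, k=100)
```

The acquired indices would then be labelled and added to the training set, exactly as in a standard active learning loop, so the variant can be compared directly against plain random acquisition and the published acquisition functions.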

Requirements

·         Strong Python coder
·         Comfortable with reproducing/reimplementing existing papers, adapting published research code
·         Interested in Deep Learning, Computer Vision and Active Learning