Group invariant global pooling

Supervisors

Suitable for

MSc in Advanced Computer Science

Abstract

Background:

In geometric deep learning (GDL), much effort is often invested in defining expressive equivariant layers, while only simple invariant layers are used. For example, when designing networks over molecular data, there is a plethora of layers that preserve rotations and translations, while invariance is attained through pooling. This can restrict the model's expressivity, making some functions far harder to learn than others. By aggregating over the group's orbits, one can create a rich invariant layer, improving on existing models.
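As a toy illustration of orbit aggregation (not the architecture from the referenced paper), the sketch below averages a non-linear feature over a discretised rotation orbit of a 2D point cloud. The group (the cyclic group C_k of planar rotations) and the feature function are illustrative assumptions; real molecular models would use continuous 3D rotation groups and learned features.

```python
import numpy as np

def orbit_pool(points, num_rotations=8, feature_fn=None):
    """Toy invariant pooling: average a feature over the orbit of the
    cyclic rotation group C_k acting on a 2D point cloud.

    points: (N, 2) array of coordinates.
    feature_fn: any function of the point cloud; it need not be invariant
    itself, because averaging over the full orbit symmetrises it.
    """
    if feature_fn is None:
        # Hypothetical non-linear per-set feature (ReLU then sum).
        # A non-linearity matters: a purely linear feature would average
        # to zero over a full rotation orbit, giving a trivial invariant.
        feature_fn = lambda p: np.maximum(p, 0.0).sum(axis=0)

    feats = []
    for k in range(num_rotations):
        theta = 2.0 * np.pi * k / num_rotations
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        feats.append(feature_fn(points @ rot.T))
    # Averaging over the group orbit makes the output C_k-invariant:
    # rotating the input by any group element only permutes the terms.
    return np.mean(feats, axis=0)
```

Because the sum runs over the whole (finite) group, applying any group element to the input merely reorders the summands, so the pooled output is exactly invariant under C_k while still depending richly on the point cloud.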

Focus:

Research questions: how can one design rich invariant layers, and (optionally) which functions can they approximate? Expected contribution: building upon and testing an existing invariant architecture. Mostly empirical work, with the possibility of theory, depending on the student's capabilities.

Method: The student would chiefly build upon https://arxiv.org/abs/2305.19207.

Goals:

  • Benchmark the existing architecture on a variety of synthetic and real geometric datasets, e.g. QM9
  • Identify optimisation and expressivity bottlenecks, and (for the mathematically inclined) prove which class of functions this architecture can approximate
  • Test (and optionally prove) the effect of positional/group embeddings

Prerequisites: a geometric deep learning course; ideally also some familiarity with group theory