
Learning Statistical Objects

Michael Benedikt

Given a hypothesis class, say of real-valued functions, there are a number of derived classes one can form based on random objects from the class. For example, we can randomize the parameters that define individual functions in the class: given an input element, we return the expected value over a distribution on parameters. Conversely, we can randomize the inputs, obtaining a function that maps a parameter to the expected value across all inputs. We discuss "transfer results" saying that from learnability of the base class, one can derive learnability of the associated randomized classes. There are variations of the result for Probably Approximately Correct (PAC) learning as well as for online learning, and a distinction between the phenomena one sees for agnostic learning and realizable learning.
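As an illustrative sketch (with notation chosen here only for concreteness), suppose the base class is a family of functions $h_\theta \colon X \to \mathbb{R}$ indexed by parameters $\theta \in \Theta$. The two randomized classes described above then consist, roughly, of functions of the form
\[
  h_\mu(x) \;=\; \mathbb{E}_{\theta \sim \mu}\bigl[h_\theta(x)\bigr]
  \qquad\text{and}\qquad
  g_\nu(\theta) \;=\; \mathbb{E}_{x \sim \nu}\bigl[h_\theta(x)\bigr],
\]
where $\mu$ ranges over distributions on the parameter space $\Theta$ and $\nu$ over distributions on the input space $X$.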

The problems studied here were first posed in the context of databases, motivated by learning database statistics. But they are closely related to a line of work in model theory which concerns "randomizing a structure".
In this talk I will try not to assume much machine learning theory background or much logic background, and I will focus on giving an overview of our results and explaining some of these connections informally.
This is joint work with Aaron Anderson.