Keynotes 2018

Inductive Bias in Deep Learning

Max Welling
University of Amsterdam

Deep learning is often considered a ‘black box’ predictor, that is, a highly flexible mapping from input variables to target variables that is hard to interpret. In almost all other scientific disciplines, researchers build highly intuitive models with few variables, in which decades of accumulated expertise are embedded. Not surprisingly, black box models need a lot of data to be successful as predictors, while generative models need much less data. One natural question to ask is whether we can inject more inductive bias into black box models, such as deep neural networks.
We will look at two different ways to achieve this. First, data often has certain symmetries: a satellite image, for example, carries no useful information in the orientation of the objects of interest. This is of course similar to the fact that in natural images there is typically no useful information in the absolute location of the objects. Convolutions implement the latter inductive bias and lead to very significant gains in data efficiency. We will argue that other symmetries may be present in the data (such as orientation) which can also be hardcoded into a deep architecture and result in further data efficiency gains. We will illustrate this idea on pathology slide analysis.
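To make the idea concrete, the sketch below (in PyTorch) builds a toy convolutional layer that applies the same filters in four orientations and pools over them, so that its response does not depend on which of the four orientations a pattern appears in. The four-fold rotation group, the class name C4OrientationPooledConv, and all shapes are illustrative assumptions; this is a simplification for exposition, not the group-equivariant architecture discussed in the talk.

import torch
import torch.nn as nn
import torch.nn.functional as F

class C4OrientationPooledConv(nn.Module):
    """Toy layer: apply the same filters in four orientations and pool over them."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
        )

    def forward(self, x):
        responses = []
        for k in range(4):
            # Rotate the filters (not the image) by k * 90 degrees.
            w = torch.rot90(self.weight, k, dims=(2, 3))
            responses.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # Pooling over the four orientations discards which rotation matched best;
        # the feature map still rotates along with the input (equivariance), and a
        # final global pooling would make a prediction fully rotation-invariant.
        return torch.stack(responses, dim=0).max(dim=0).values

# A rotated input produces (up to discretisation) the rotated feature map.
layer = C4OrientationPooledConv(in_channels=3, out_channels=8)
img = torch.randn(1, 3, 32, 32)
out = layer(img)
out_rot = layer(torch.rot90(img, 1, dims=(2, 3)))
print(out.shape, out_rot.shape)  # torch.Size([1, 8, 32, 32]) twice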
A second way to inject inductive bias into predictors is to consider the process that generates the data. I will argue that for certain tasks, such as image reconstruction, the generative process can be embedded directly into the classifier: at every layer of the network, the data generated from the current reconstruction is compared with the observations, and the difference is fed back into the network. We will illustrate the resulting model, which we call the “Recurrent Inference Machine”, on the task of MRI image reconstruction.
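A rough sketch of this feedback loop, under strong simplifying assumptions, is given below: a random linear forward operator stands in for the (far more structured) MRI acquisition model, and a small GRU cell stands in for the learned recurrent update. The class name RecurrentInferenceSketch and all sizes are hypothetical; this is not the architecture presented in the talk.

import torch
import torch.nn as nn

class RecurrentInferenceSketch(nn.Module):
    def __init__(self, signal_dim, hidden_dim=64, steps=8):
        super().__init__()
        self.steps = steps
        # Learned update: takes [current estimate, data-consistency gradient].
        self.cell = nn.GRUCell(2 * signal_dim, hidden_dim)
        self.to_delta = nn.Linear(hidden_dim, signal_dim)

    def forward(self, y, A):
        """y: observations (batch, m); A: known forward operator (m, n)."""
        batch = y.shape[0]
        x = torch.zeros(batch, A.shape[1])              # initial reconstruction
        h = torch.zeros(batch, self.cell.hidden_size)   # recurrent state
        for _ in range(self.steps):
            # Compare what the current estimate would generate with the data.
            residual = y - x @ A.T                      # observation-space error
            grad = residual @ A                         # pull the error back to signal space
            h = self.cell(torch.cat([x, grad], dim=1), h)
            x = x + self.to_delta(h)                    # learned refinement step
        return x

# Usage with a random linear "measurement" operator as a stand-in forward model.
n, m = 32, 16
A = torch.randn(m, n) / m ** 0.5
x_true = torch.randn(4, n)
y = x_true @ A.T
model = RecurrentInferenceSketch(signal_dim=n)
x_hat = model(y, A)
print(x_hat.shape)  # torch.Size([4, 32])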

Semantic Spaces Across Diverse Languages

Asifa Majid
University of York

Across diverse disciplines there is a widespread assumption that natural languages are equally expressive: anything that can be thought can be said. In fact, words are held to label categories that exist independently of language, such that language merely captures these pre-existing categories. In this talk, I will illustrate through cross-linguistic comparison across diverse domains that named distinctions are not nearly as self-evident as they may seem on first examination. Even for basic perceptual experiences, languages vary in which notions they lexicalise and in which concepts are coded at all. Crucially, in order to develop a universal theory of semantics, scholars must first seriously engage with the cultural variation found worldwide.