Thursday Morning Talk: Tim Kietzmann (University of Osnabrück), “Large language models offer a rich representational format for understanding the transformation of visual information in the human brain.”
Abstract: Originating from the connectionist movement of cognitive science, deep neural networks (DNNs) have had a tremendous influence on artificial intelligence, operating at the core of today’s most powerful applications. At the same time, cognitive computational neuroscientists have recognised their promise as “Goldilocks” models of brain function: DNNs are grounded in sensory data, can be trained to perform complex tasks in a distributed fashion, are fully configurable and accessible to the experimenter, and can be mapped to brain function across various levels of explanation. This has led to a fruitful research cycle in which biological aspects are integrated into network design, and the resulting networks are then tested for their ability to predict neural and behavioural data. This talk will present the emerging approach, which we call neuroconnectionism, as a cohesive large-scale research programme centred around artificial neural networks (ANNs) as a computational language for expressing falsifiable theories about brain computation. As a case study, I will focus on a collaborative effort in which we test the ability of large language models (LLMs) to provide a good representational format for modelling human visual responses to natural scenes. By running tightly controlled model comparisons, we demonstrate that recurrent neural networks, trained to map from pixels to semantic LLM embeddings, provide the current best account of a large-scale 7T fMRI dataset (the Natural Scenes Dataset, NSD), outperforming other supervised as well as unsupervised ANN models. These findings support the view that vision may not be optimised for visual categorisation alone, but instead maps retinal input into a high-dimensional semantic format that can be captured by contextual learning in language.
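The core modelling idea in the case study — learning a mapping from pixel input into a semantic embedding space, then scoring how well the predicted representations match the targets — can be sketched with a toy example. This is a minimal illustration using a ridge-regularised linear map and synthetic data; the actual study uses recurrent networks, real images, LLM-derived embeddings, and fMRI responses, so all shapes, names, and numbers below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative assumptions, not the study's data):
# 100 "images" flattened to 64 pixels each, and a 16-dimensional
# "semantic embedding" target per image, as an LLM might supply
# for a caption of each scene.
n_images, n_pixels, embed_dim = 100, 64, 16
X = rng.normal(size=(n_images, n_pixels))            # pixel inputs
W_true = rng.normal(size=(n_pixels, embed_dim))      # hidden ground-truth map
Y = X @ W_true + 0.1 * rng.normal(size=(n_images, embed_dim))  # embedding targets

# Fit a ridge-regularised linear map from pixels to embeddings.
# (The talk's models are recurrent DNNs trained on natural images;
# a closed-form linear fit keeps this sketch short and runnable.)
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_pixels), X.T @ Y)
Y_hat = X @ W

# Score with the mean cosine similarity between predicted and target
# embeddings -- analogous in spirit to comparing model representations
# against measured brain responses.
cos = np.sum(Y_hat * Y, axis=1) / (
    np.linalg.norm(Y_hat, axis=1) * np.linalg.norm(Y, axis=1)
)
print(f"mean cosine similarity: {cos.mean():.3f}")
```

The design choice worth noting is the target space: rather than training toward category labels, the network is trained toward a continuous semantic embedding, which is what allows the comparison between "categorisation-optimised" and "semantics-optimised" accounts of visual cortex.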
This talk will take place in person at SCIoI.