PI Lecture with Tim Landgraf, “The hidden shallows of explaining deep models”


Abstract:

In the cognitive, behavioral, and neurosciences, we often fit a computational model to observations and then, by analyzing the model, hope to find results that generalize to the underlying system. Deep neural networks (DNNs) provide powerful function approximators that can be fitted to huge data sets, accelerated by cheap hardware and elaborate software stacks. It is tempting to use DNNs as a default model, but how do we analyze their behavior? DNNs are essentially black boxes: although we can write down the network function, it tells us nothing about the features the network extracts or about the rules animals employ when interacting with one another. In recent years, a new field has emerged and proposed a variety of methods to explain deep neural networks. In my talk, I will (1) introduce you to some of the ideas that explanation algorithms are based on, (2) show how quantifying their performance on proxy tasks can be misleading, (3) provide an intuition for why some popular variants of these algorithms won’t work in deep networks, (4) introduce you to a new dataset generator that enables us to create challenging problems for testing and evaluating explanation methods, and (5) discuss why we need extensive (and expensive) user studies to investigate whether explanation methods actually provide information beyond what is available from the model’s outputs alone. I hope to stimulate a discussion about the use cases in which “explain a DNN to discover the hidden rules of my study system” may work, and in which it may not.
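To make point (1) concrete, below is a minimal sketch of one idea many explanation algorithms build on: a gradient-based saliency map (“gradient × input”), which attributes a prediction to individual input features via the gradient of the output with respect to the input. This is a generic illustration in PyTorch, not a method from the talk; the toy model, tensor shapes, and variable names are assumptions chosen for brevity.

    import torch
    import torch.nn as nn

    # A small stand-in network; any differentiable model works the same way.
    model = nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),
        nn.Linear(32, 3),
    )
    model.eval()

    x = torch.randn(1, 10, requires_grad=True)  # one input sample
    logits = model(x)
    target = logits.argmax().item()             # explain the predicted class

    # Backpropagate the target logit to obtain d(logit)/d(input).
    logits[0, target].backward()

    # "Gradient x input": one attribution score per input feature.
    saliency = (x.grad * x).detach().squeeze()
    print(saliency)

Whether such per-feature scores actually tell us more than the model’s outputs alone is precisely the question points (2) through (5) of the talk address.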

Want to attend one of our events? Sign up here.
To get regular updates, subscribe to our mailing list from this page.
The Zoom link will be sent the day before the lecture.

Event Details

Date: October 14, 2021
Time: 4:00 pm - 5:30 pm CEST