BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211014T160000
DTEND;TZID=Europe/Berlin:20211014T173000
DTSTAMP:20260406T072421Z
CREATED:20210722T073413Z
LAST-MODIFIED:20250604T095433Z
UID:10435-1634227200-1634232600@www.scienceofintelligence.de
SUMMARY:Tim Landgraf (Science of Intelligence)\, “The Hidden Shallows of Explaining Deep Models”
DESCRIPTION:Abstract:  \nIn the cognitive\, behavioral\, or neuro-sciences\, we often fit a computational model to observations and then\, by analyzing the model\, hope to find results that generalize to the underlying system. Deep neural networks (DNNs) provide powerful function approximators that can be fitted to huge data sets\, accelerated by cheap hardware and elaborate software stacks. It seems tempting to use DNNs as a default model\, but how do we analyze their behavior? DNNs are essentially black boxes: although we can write down the network function\, it does not tell us anything about the features it extracts or about the rules animals employ when interacting with one another. In recent years\, a new field has emerged and proposed a variety of methods to explain deep neural networks. In my talk\, I will (1) introduce you to some ideas that explanation algorithms are based on\, (2) show how quantifying their performance on proxy tasks can be misleading\, (3) provide an intuition for why some popular examples of these algorithms won’t work in deep networks\, (4) introduce you to a new dataset generator that enables us to create challenging problems to test and evaluate explanation methods\, and (5) discuss why we need extensive (and expensive) user studies to investigate whether explanation methods actually provide information beyond what is available from the model’s outputs alone. I hope to stimulate a discussion about the use cases for which “explain a DNN to discover the hidden rules of my study system” may work\, and those in which it may not. \nThe Zoom link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-tim-landgraf/
CATEGORIES:PI Lecture
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2019/10/landgraf_800.jpg
END:VEVENT
END:VCALENDAR