BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211014T160000
DTEND;TZID=Europe/Berlin:20211014T173000
DTSTAMP:20260430T173745Z
CREATED:20210722T073413Z
LAST-MODIFIED:20250604T095433Z
UID:10435-1634227200-1634232600@www.scienceofintelligence.de
SUMMARY:Tim Landgraf (Science of Intelligence)\, “The Hidden Shallows of Explaining Deep Models”
DESCRIPTION:Abstract:  \nIn the cognitive\, behavioral\, or neurosciences\, we often fit a computational model to observations and then\, by analyzing the model\, hope to find results that generalize to the underlying system. Deep neural networks (DNNs) are powerful function approximators that can be fitted to huge data sets\, accelerated by cheap hardware and elaborate software stacks. It seems tempting to use DNNs as a default model\, but how do we analyze their behavior? DNNs are essentially black boxes: although we can write down the network function\, it tells us nothing about the features the network extracts or about the rules animals employ when interacting with one another. In recent years\, a new field has emerged that proposes a variety of methods to explain deep neural networks. In my talk\, I will (1) introduce you to some of the ideas that explanation algorithms are based on\, (2) show how quantifying their performance on proxy tasks can be misleading\, (3) provide an intuition for why some popular representatives of these algorithms won’t work in deep networks\, (4) introduce you to a new dataset generator that enables us to create challenging problems for testing and evaluating explanation methods\, and (5) discuss why we need extensive (and expensive) user studies to investigate whether explanation methods actually provide information beyond what is available from the model’s outputs alone. I hope to stimulate a discussion about the use cases for which “explain a DNN to discover the hidden rules of my study system” may work\, and those in which it may not. \nThe Zoom link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-tim-landgraf/
CATEGORIES:PI Lecture
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2019/10/landgraf_800.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211019T100000
DTEND;TZID=Europe/Berlin:20211019T180000
DTSTAMP:20260430T173745Z
CREATED:20210917T093720Z
LAST-MODIFIED:20250604T095425Z
UID:10636-1634637600-1634666400@www.scienceofintelligence.de
SUMMARY:Mental Health in PhD Students: The Role of Graduate Schools\, the Integration of International Students\, and Impostor Syndrome
DESCRIPTION:Scholar Minds\, in collaboration with Science of Intelligence and the Berlin School of Mind and Brain\, would like to invite you to a special conference on the topic “Mental Health in PhD students: The role of graduate schools\, the integration of international students\, and impostor syndrome”. The event will take place online on October 19th from 10 AM to 6 PM. It will be held in English and is free of charge. \nThe event will incorporate keynotes\, workshops\, and hackathons aimed at improving the current situation for early career researchers. The conference was initiated by Scholar Minds & the Berlin Cluster of Excellence “Science of Intelligence” and is a collaborative project of several mental health initiatives across Germany. \nWe are pleased to present a keynote lecture by Gordon Feld\, who will give an overview of how Germany needs to address the structural challenges of the academic system in order to best support early career researchers. \nFurther\, we offer workshops on a selection of recurring topics for early career researchers\, such as impostor syndrome (Mental Health Collective)\, stress management (Innerminder)\, and how to coordinate your projects (Scholar Minds). \nAnother highlight of the conference will be several hackathons tackling current challenges of the academic world\, such as how to improve the situation of international early career researchers in Germany (MATH+)\, how to bridge mental health and academia (N2 Network)\, and what the perfect graduate school could look like (Dragonfly Mental Health). \nLast but not least\, there will be a panel discussion on the topic “Towards sunnier days: How to overcome the mental health crisis in academia?”\, opening the stage for different actors in the academic system\, such as Jule Specht (Humboldt-Universität zu Berlin)\, Martin Grund (Max Planck Institute Leipzig)\, Ralf Kurvers (Science of Intelligence)\, and Aite Kashef (Lise-Meitner-Gesellschaft). 
 \nPlease use this link to register: https://bit.ly/mh-event2021 \nIf you have any questions\, please contact scholar-minds@charite.de. \nHomepage: http://www.scholar-minds.net/ \nE-Mail: scholar-minds@charite.de \nTwitter: @BerlinMinds
URL:https://www.scienceofintelligence.de/event/mental-health-in-phd-students-the-role-of-graduate-schools-the-integration-of-international-students-and-impostor-syndrome/
LOCATION:On Zoom
CATEGORIES:External Event
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2021/09/Screenshot-2021-09-17-at-11.36.28.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211028T160000
DTEND;TZID=Europe/Berlin:20211028T173000
DTSTAMP:20260430T173745Z
CREATED:20210908T113520Z
LAST-MODIFIED:20240813T093828Z
UID:10612-1635436800-1635442200@www.scienceofintelligence.de
SUMMARY:Cameron Buckner (Univ. of Houston)\, Imagination and the Prospects for Empiricist Artificial Intelligence
DESCRIPTION:Abstract: In current debates over deep-neural-network-based AI\, deep learning researchers have adopted the mantle of philosophical empiricism and associationism\, and their critics have taken up the side of philosophical rationalism and nativism. These rationalist critics\, however\, often interpret associationism and empiricism in a way that is too caricatured to fit the views of any significant thinker in the empiricist tradition. In particular\, most empiricists were faculty theorists; while they generally eschewed innate knowledge\, they appealed to a variety of domain-general innate faculties like memory\, imagination\, and attention to explain how the mind abstracts knowledge from experience. This dynamic is vividly illustrated in a centuries-displaced debate between David Hume and Jerry Fodor over the role of imagination in cognitive architecture. Fodor famously claimed that the ability to synthesize novel ideas and create new compositional representations is required for cognition. Fodor applauds Hume for agreeing on these points\, but criticizes Hume’s use of the imagination to discharge these burdens. Fodor claims that such an appeal for an associationist is “cheating”\, and notes that Hume never explains how the empiricist imagination actually works\, merely assigning it a variety of essential functions to perform “as if by magic”. \nMore recently\, deep learning researchers have claimed to create generative deep neural network models that perform one or more of the roles ascribed to the imagination by cognitive psychology and neuroscience. In this talk\, I canvass these models and their achievements (especially Generative Adversarial Networks\, Variational Autoencoders\, and Generative Transformers) to arbitrate this dispute between Humean empiricism and Fodorian rationalism. 
 Of particular interest will be various methods of latent-space vector interpolation that appear to allow these models to create novel compositional representations\, whether these methods still count as associationist in nature\, and whether the purportedly crucial distinction between interpolation and extrapolation remains viable in the higher-dimensional spaces over which these models operate. \nThe Zoom link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-cameron-buckner-university-of-houston/
LOCATION:On Zoom
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2021/09/head-1.jpg
END:VEVENT
END:VCALENDAR