BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250701T140000
DTEND;TZID=Europe/Berlin:20250701T153000
DTSTAMP:20260428T154245Z
CREATED:20250407T095720Z
LAST-MODIFIED:20250627T085132Z
UID:24183-1751378400-1751383800@www.scienceofintelligence.de
SUMMARY:POSTPONED: Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:This event has been postponed to 29 July 2025. \nHow should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250703T100000
DTEND;TZID=Europe/Berlin:20250703T110000
DTSTAMP:20260428T154245Z
CREATED:20250429T085710Z
LAST-MODIFIED:20250603T123649Z
UID:24490-1751536800-1751540400@www.scienceofintelligence.de
SUMMARY:Raina Zakir (Université Libre De Bruxelles)\, “Robust Decision-Making in Minimalistic Robot Swarms Under Social Noise”
DESCRIPTION:Abstract \nMinimalistic robot swarms hold great promise for applications in healthcare\, disaster response\, and environmental monitoring. A key challenge lies in enabling these robots to rapidly and reliably reach consensus using limited communication\, computation\, and memory. In this talk\, we explore how robot swarms can collectively identify the best among multiple discrete options in their environment. We analyze and compare several prominent decision-making algorithms through both simulations and theoretical modeling. Particular attention is given to how asocial behaviors—introducing social noise—affect convergence and robustness. Our results offer insights into designing simple yet effective voting rules for robust consensus in decentralized swarm systems. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/raina-zakir-universite-libre-de-bruxelles/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp13.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250711T140000
DTEND;TZID=Europe/Berlin:20250711T160000
DTSTAMP:20260428T154245Z
CREATED:20250402T102151Z
LAST-MODIFIED:20250711T082214Z
UID:24013-1752242400-1752249600@www.scienceofintelligence.de
SUMMARY:William Warren (Brown University)\, "The Dynamics of Perception and Action: From Pedestrian Interactions to Collective Behavior"
DESCRIPTION:It’s a perplexing time in the study of visual perception. On the one hand\, there is a resurgence of models that freely posit a priori structure in the visual system\, including priors\, generative world models\, and physics engines. On the other hand\, there is the astonishing a posteriori success of deep neural networks trained only on natural images and image sequences. Although their performance offers an existence proof of the sufficiency of image information for certain visual tasks\, the black box of deep learning does not easily offer up that information or how it’s extracted by the visual system. \nA science of perception depends on understanding the visual information that is available in natural environments and is used to guide natural behavior. I propose that we take seriously James Gibson’s information hypothesis: For every perceivable property of the environment\, however subtle\, there must be a variable of information\, however complex\, that uniquely specifies it. The project is to identify the information that the visual system uses to perceive and act within the constraints of a species’ ecological niche. \nTwo decades ago I decided to work out a test case to see whether an information-based account of a natural behavior could be sustained. In this talk I will offer a status report on our effort to build a model of visually controlled human locomotion – a pedestrian model – that scales up from individual behaviors like steering and obstacle avoidance\, to pedestrian interactions like following and collision avoidance\, to the collective behavior of human crowds. Surprisingly\, linear combinations of these nonlinear components can account for the emergence of more complex behavior\, such as self-organized ‘flocking’\, crowd bifurcations\, and stripe formation in crossing flows. \nBio \nBill (he/him) earned his undergraduate degree at Hampshire College (1976)\, his Ph.D. in Experimental Psychology from the University of Connecticut (1982)\, did post-doctoral work at the University of Edinburgh\, and has been a professor at Brown ever since. He served as Chair of the Department of Cognitive and Linguistic Sciences from 2002-10. Warren is the recipient of a Fulbright Research Fellowship\, an NIH Research Career Development Award\, and Brown’s Elizabeth Leduc Teaching Award for Excellence in the Life Sciences. Warren’s research focuses on the visual control of action – in particular\, human locomotion and navigation. He seeks to explain how this behavior is adaptively regulated by multi-sensory information\, within a dynamical systems framework. Using virtual reality techniques\, his research team investigates problems such as the visual control of steering\, obstacle avoidance\, wayfinding\, pedestrian interactions\, and the collective behavior of crowds. Experiments in the Virtual Environment Navigation Lab (VENLab) enable his group to manipulate what participants see as they walk through a virtual landscape\, and to measure and model their behavior. The aim of this research is to understand how adaptive behavior emerges from the dynamic interaction between an organism and its environment. He believes the answers will not be found only in the brain\, but will strongly depend on the physical and informational regularities that the brain exploits. This work contributes to basic knowledge that is needed to understand visual-motor disorders in humans\, and to develop mobile robots that can operate in novel environments. For more information\, visit his faculty profile or the VENLab website. \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/william-warren-brown-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp3.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250717T100000
DTEND;TZID=Europe/Berlin:20250717T100000
DTSTAMP:20260428T154245Z
CREATED:20250623T124834Z
LAST-MODIFIED:20250716T123207Z
UID:25730-1752746400-1752746400@www.scienceofintelligence.de
SUMMARY:Matthias Nau (Vrije Universiteit Amsterdam)\, "Revealing General Principles Underlying Active Vision and Memory"
DESCRIPTION:Abstract:\nCognitive neuroscience seeks theories that jointly explain behavioral\, neural\, and mental states. The dominant approach is to use specialized tasks designed to optimally probe a concept of interest (e.g.\, episodic memory)\, and to disentangle behavioral\, sensory\, and mnemonic factors through design (e.g.\, by constraining gaze during image recognition). I will present an alternative framework that instead recognizes that concepts such as perception\, memory\, and action are often inextricable\, both theoretically and empirically\, which I demonstrate for example by showing that brain activity during movie viewing and recall is linked through eye movements. I will argue that new generalizable concepts are needed to explain phenomena across domains\, and outline how such concepts may be empirically derived through multi-task studies: by testing generalization of results across tasks and data modalities\, we reveal the mutual constraints task demands impose on behavioral\, neural\, and mental states. In this context\, I will also highlight the importance of ‘naturalistic’ tasks and behavioral tracking for cognitive neuroscience\, and briefly introduce open-source tools for camera-free MR-based eye tracking. \nImage created by Maria Ott with DALL-E.
URL:https://www.scienceofintelligence.de/event/matthias-nau-vrije-universiteit-amsterdam/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/abstract_ai_vs_human_thought-e1748620484784.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250718T140000
DTEND;TZID=Europe/Berlin:20250718T153000
DTSTAMP:20260428T154245Z
CREATED:20250429T090411Z
LAST-MODIFIED:20250716T122502Z
UID:24495-1752847200-1752852600@www.scienceofintelligence.de
SUMMARY:Jacob Yates (UC Berkeley)\, "The Role of Motor Signals in Visual Cortex"
DESCRIPTION:Embodiment is fundamental to biological intelligence. Brains do not passively receive the world\, they actively shape what they sense through self-motion. For nearly a century\, we have known that perception and action are deeply entangled\, and that organisms must constantly infer whether a sensory change comes from the environment or from themselves. A longstanding idea holds that sensory signals are either suppressed during movement or that movement effects are subtracted out. However\, recent discoveries in neuroscience\, especially in rodents\, suggest that spontaneous movements strongly influence sensory cortex. In this talk\, I will share our work re-examining this question in primates. We found that movements do not broadly modulate visual cortex unless they move the retina\, creating an inherent ambiguity between motor effects and changes in sensory input. I will describe our new approach to disentangling sensorimotor interactions during natural behavior\, combining high-resolution eye tracking with high-density neural recordings and modern machine learning. By precisely measuring the retinal input during natural vision\, we find that much of what appears to be a motor signal is actually visual reafference\, the lawful\, structured sensory consequences of an animal’s own actions. I will discuss how measuring and modeling this loop can deepen our understanding of active inference in the brain and what it means for designing truly embodied agents that adapt to the world as brains do. \nBio \nJacob Yates (he/him) is an Assistant Professor of Optometry & Vision Science at UC Berkeley and leads the Active Vision and Neural Computation Lab. His research explores how populations of neurons in the cortex and early visual pathways encode the visual world\, with a particular focus on how eye movements generate and utilize information for perception. By combining statistical and machine learning approaches\, his lab builds computational models to better understand neural activity and human perception\, ultimately aiming to bridge the gap between neural coding and real-world visual behavior. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto by Soliman Cifuentes on Unsplash.
URL:https://www.scienceofintelligence.de/event/jacob-yates-uc-berkeley/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/soliman-cifuentes-RXGLTHZ6Mo8-unsplash-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250724T100000
DTEND;TZID=Europe/Berlin:20250724T110000
DTSTAMP:20260428T154245Z
CREATED:20250616T105829Z
LAST-MODIFIED:20250723T143455Z
UID:25596-1753351200-1753354800@www.scienceofintelligence.de
SUMMARY:POSTPONED: Alican Mertan (University of Vermont)\, "Morphological Cognition: Evolving Robots Exhibiting Cognitive Behavior without Abstract Controllers"
DESCRIPTION:With the rise of modern deep learning\, neural networks have become an essential part of virtually every artificial intelligence system\, making it difficult to imagine different models for intelligent behavior. In contrast\, nature provides us with many different mechanisms for intelligent behavior\, most of which we have yet to utilize. One such underinvestigated aspect of intelligence is embodiment and the role it plays in intelligent behavior. We suspect that “the unreasonable effectiveness of deep learning” overshadowed the investigation into what bodies mean for intelligence\, especially how they can be a source of intelligent behavior\, as opposed to passively participating in its display.\nTo investigate how bodies alone give rise to intelligent behavior\, we suggest treating bodies not just as an aid to the brain\, but also studying them as doing full cognitive behavior end-to-end. We describe robots that demonstrate cognitive behaviors without an abstract control layer as possessing “morphological cognition”. I will present our initial work on morphological cognition\, where we use simple shape-changing processes to create robots that can perform a range of tasks from locomotion to image classification without any abstract controller (i.e.\, no neural network). \nImage created by Maria Ott with DALL-E.
URL:https://www.scienceofintelligence.de/event/alican-mertan-university-of-vermont-morphological-cognition-evolving-robots-exhibiting-cognitive-behavior-without-abstract-controllers/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250729T140000
DTEND;TZID=Europe/Berlin:20250729T160000
DTSTAMP:20260428T154245Z
CREATED:20250627T090602Z
LAST-MODIFIED:20250724T124821Z
UID:25799-1753797600-1753804800@www.scienceofintelligence.de
SUMMARY:Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:How should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence-2/
LOCATION:Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
END:VCALENDAR