BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250513T140000
DTEND;TZID=Europe/Berlin:20250513T153000
DTSTAMP:20260424T002730Z
CREATED:20250226T122030Z
LAST-MODIFIED:20250409T100403Z
UID:23609-1747144800-1747150200@www.scienceofintelligence.de
SUMMARY:Mate Nagy (MTA-ELTE Lendület Collective Behaviour Research Group\, Budapest)
DESCRIPTION:More details to follow. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/mate-nagy-mta-elte-lendulet-collective-behaviour-research-group-budapest/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp3.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250527T140000
DTEND;TZID=Europe/Berlin:20250527T153000
DTSTAMP:20260424T002730Z
CREATED:20250226T122357Z
LAST-MODIFIED:20250409T100410Z
UID:23614-1748354400-1748359800@www.scienceofintelligence.de
SUMMARY:Jacob Davidson (Max Planck Institute for Animal Behavior\, Konstanz)
DESCRIPTION:More details to follow. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jacob-davidson-max-planck-institute-for-animal-behavior-konstanz/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp4.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250530T140000
DTEND;TZID=Europe/Berlin:20250530T160000
DTSTAMP:20260424T002730Z
CREATED:20250402T094459Z
LAST-MODIFIED:20250603T123920Z
UID:23998-1748613600-1748620800@www.scienceofintelligence.de
SUMMARY:Cornelia Fermüller (University of Maryland)\, “Computational Principles of Embodied Intelligence for Robust Motion Perception and Action”
DESCRIPTION:Abstract\nUnderstanding the computational principles of embodied intelligence is central to advancing robotic systems that perceive and act in complex environments. This talk explores key principles—low power consumption\, robustness\, and generalizability—as they emerge in the context of motion perception and action. For visual navigation\, evidence is presented that challenges the conventional SLAM paradigm\, which relies on correspondence estimation and 3D scene reconstruction. Instead\, 3D motion estimation and scene segmentation can be achieved using 1D normal flow measurements derived from image gradients\, offering a simpler and more robust alternative. The effectiveness of this approach is demonstrated through implementations on drones equipped with both standard and neuromorphic dynamic vision sensors. Further\, it is shown that physical interaction tasks do not necessarily require explicit depth estimation; rather\, distance can be inferred in action-dependent units grounded in control dynamics. Finally\, the role of visual motion in action understanding is examined\, focusing on how motion-derived primitives support robust and generalizable representations of action\, opening new avenues for embodied intelligence in robotic systems. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nBio \nCornelia Fermüller is a Research Scientist at the University of Maryland’s Institute for Advanced Computer Studies (UMIACS)\, where she co-founded the Autonomy\, Robotics and Cognition (ARC) Lab and co-leads the Perception and Robotics Group. Her research lies at the intersection of computer vision\, robotics\, and human vision\, with a focus on biologically inspired solutions for active vision systems. She has made significant contributions to the understanding of visual perception by developing computational models for visual motion analysis\, 3D motion and shape estimation\, texture analysis\, and action recognition\, as well as integrating perception\, action\, and reasoning to enable cognitive robots to learn and interpret human manipulation actions. \nDr. Fermüller holds an M.S. from the University of Technology\, Graz\, and a Ph.D. in Applied Mathematics from the Technical University of Vienna. Her recent work emphasizes the use of event-based\, bio-inspired sensors for robust motion perception in challenging environments\, with applications ranging from fast motion perception for drones to autonomous driving in diverse lighting conditions. She is the principal investigator of an NSF-sponsored Science of Learning Center Network for Neuromorphic Engineering\, co-organizes the Neuromorphic Engineering and Cognition Workshop\, and has been recognized for her leadership in interdisciplinary research bridging computational modeling and psychophysical studies of human vision. \n  \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/cornelia-fermuller/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp16.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250603T140000
DTEND;TZID=Europe/Berlin:20250603T153000
DTSTAMP:20260424T002730Z
CREATED:20250226T122648Z
LAST-MODIFIED:20250606T131027Z
UID:23618-1748959200-1748964600@www.scienceofintelligence.de
SUMMARY:Jens Krause (Science of Intelligence)\, "The Adaptive Value of Collective Behavior"
DESCRIPTION:In this talk\, Jens Krause will discuss the adaptive value of collective behaviour from different perspectives. One perspective is the potential ability of groups or collectives to make better and even faster decisions. In this context\, Jens will show some of the modelling approaches to explain collective intelligence and the empirical support for them in the laboratory and in the field. Furthermore\, he will show some empirical findings regarding collective intelligence which challenge our current understanding of the underlying mechanisms. Another perspective is that of collective behaviour as a defence against predators. It has been found in a number of different species that various forms of collective spirals and waves can fend off predators. This implies that at a global\, group-wide level\, collective patterns are not just beautiful to look at but can provide anti-predator functions which we are just beginning to understand. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jens-krause-science-of-intelligence/
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/KK_2-scaled-e1748593902816.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250605T140000
DTEND;TZID=Europe/Berlin:20250605T180000
DTSTAMP:20260424T002730Z
CREATED:20250407T093220Z
LAST-MODIFIED:20250530T112036Z
UID:24159-1749132000-1749146400@www.scienceofintelligence.de
SUMMARY:Martina Poletti (University of Rochester)\, "Active Foveal Vision" and Michele Rucci (University of Rochester)\, "Active Space-Time Encoding: The Inseparable Link Between Vision and Action"
DESCRIPTION:Martina Poletti’s talk will focus on active foveal vision. Vision is an active process even at its finest scale\, in the 1-deg foveola: the visual system is primarily sensitive to changes in the visual input\, and it has been shown that fixational eye movements reformat the spatiotemporal flow to the retina in a way that is optimal for fine spatial vision. Using high-precision eye-tracking coupled with a system for gaze-contingent display capable of localizing the line of sight with arcminute precision\, and an Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) for high-resolution retinal imaging enabling retinal-contingent manipulations of the visual input\, their results show that the need for active foveolar vision also stems from the non-uniformity of fine spatial vision across this region. Further\, they show that the visual system is highly sensitive even to a small sub-foveolar loss of vision and that fixation behavior is readjusted to compensate for this loss. Overall\, the emerging picture is that of a highly non-homogeneous foveolar vision characterized by a refined level of control of attention and fixational eye movements at this scale. \nMichele Rucci’s talk explores how the human visual system constructs spatial representations. Unlike other sensory modalities\, where spatial information must be inferred from incoming signals\, vision begins with a sophisticated imaging system—the eye—that explicitly preserves spatial structure on the retina. This might suggest that human vision is primarily a passive spatial process\, in which the eye simply transmits the retinal image to the cortex—much like uploading a digital photograph—to form a map of the scene. However\, this analogy is misleading\, as it overlooks the strong temporal sensitivity of visual neurons and contradicts theoretical models and experimental findings that examine vision in the context of natural motor behavior. Here\, Michele Rucci will review recent evidence supporting active space-time encoding—the idea that\, as with other senses\, vision relies on motor strategies to encode spatial information in the temporal domain. This concept has important implications for understanding the normal functioning of the visual system\, the effects of abnormal oculomotor behavior\, and the development of visual prostheses. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/active-seeing-with-martina-poletti-university-of-rochester-and-michele-rucci-university-of-rochester/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250606T140000
DTEND;TZID=Europe/Berlin:20250606T160000
DTSTAMP:20260424T002730Z
CREATED:20250407T093540Z
LAST-MODIFIED:20250603T094631Z
UID:24164-1749218400-1749225600@www.scienceofintelligence.de
SUMMARY:Tony Prescott (University of Sheffield)\, "The Psychology of Artificial Intelligence"
DESCRIPTION:Artificial intelligence and robotics have been making great progress in recent years\, but how close are we to emulating human intelligence? This talk will explore the similarities and differences between humans and AIs and discuss the development of biomimetic cognitive systems that more directly think and behave like us. A key focus will be on layered control architectures for robots inspired by the mammalian brain. The talk will be illustrated with work from my lab on active sensing\, memory\, and sense of self for animal-like and humanoid robots. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nPhoto generated with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/tony-prescott-university-of-sheffield/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/abstract_ai_vs_human_thought-e1748620484784.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250610T140000
DTEND;TZID=Europe/Berlin:20250610T153000
DTSTAMP:20260424T002730Z
CREATED:20250226T122854Z
LAST-MODIFIED:20250606T131115Z
UID:23624-1749564000-1749569400@www.scienceofintelligence.de
SUMMARY:Andrew J. King (Swansea University)\, "Understanding Animal Collective Behaviour Across Systems"
DESCRIPTION:Andrew King is a scientist driven by curiosity\, exploring questions across species\, contexts\, and methods. His research group investigates how and why individuals engage in collective behaviour\, using a wide range of systems\, perspectives\, and tools. In this seminar\, he will present their fundamental work in behavioural biology\, as well as its applied themes\, including animal management and bio-inspired engineering. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/andrew-j-king-shoal-group-swansea-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp13.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250612T140000
DTEND;TZID=Europe/Berlin:20250612T180000
DTSTAMP:20260424T002730Z
CREATED:20250407T094009Z
LAST-MODIFIED:20250611T105232Z
UID:24168-1749736800-1749751200@www.scienceofintelligence.de
SUMMARY:Jennifer Groh (Duke University) and Kristen Grauman (University of Texas)\, "What Eye Movements Have to Do with Hearing"
DESCRIPTION:Jennifer Groh (Duke University) \nHearing works in concert with vision\, such as when we watch someone’s lips move to help us understand what they are saying. But bridging between these two senses poses computational challenges for the brain. One such challenge involves movements of the eyes – every time the eyes move with respect to the head\, the relationship between visual spatial input (the retina) and auditory spatial input (sound localization cues anchored to the head) changes. I will describe this problem from early computational and experimental work showing how and where signals regarding eye movements are incorporated into auditory processing\, closing with a recent discovery from our group that a signal regarding eye movements is sent by the brain to the ears themselves. This signal causes the eardrum to oscillate in conjunction with eye movements (Gruters et al.\, PNAS 2018) and carries detailed spatial information about the direction and amplitude of the eye movement (Lovich et al.\, PNAS 2023). I will also present new findings concerning the underlying mechanism of this effect\, involving the contributions of the middle ear muscles and outer hair cells\, and the potential impact on sound transduction. \n  \nKristen Grauman (University of Texas)\, “Audio-visual learning in 3D environments” \nPerception systems that can both see and hear have great potential to unlock problems in video understanding\, augmented reality\, and embodied AI. I will present our recent work in egocentric audio-visual (AV) perception. First\, we explore how audio’s spatial signals can augment visual understanding of 3D environments. This includes ideas for self-supervised feature learning from echoes\, AV floorplan reconstruction\, and active source separation\, where an agent intelligently moves to hear things better in a busy environment. Throughout this line of work\, we leverage our open-source SoundSpaces platform\, which allows state-of-the-art rendering of highly realistic audio in real-world scanned environments. Next\, building on these spatial AV and scene acoustics ideas\, we introduce new ways to enhance the audio stream – making it possible to transport a sound to a new physical environment observed in a photo\, or to dereverberate speech so it is intelligible for machine and human ears alike. \n  \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jennifer-groh-duke-university-and-kristen-grauman-university-of-texas-active-hearing/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp11.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250613T140000
DTEND;TZID=Europe/Berlin:20250613T160000
DTSTAMP:20260424T002730Z
CREATED:20250407T094415Z
LAST-MODIFIED:20250610T100714Z
UID:24172-1749823200-1749830400@www.scienceofintelligence.de
SUMMARY:Fumiya Iida (University of Cambridge)\, "Info-Bodiment: Informatization of Robot Embodiment for the Next Generation AI Robots"
DESCRIPTION:There is growing interest in applying AI technologies to the control of intelligent robotic systems. While this research has led to promising developments\, it still faces major challenges due to its heavy reliance on learning from limited datasets—often dominated by visual information. In this talk\, I will introduce “Info-Embodiment” as a new research framework for realizing Embodied Intelligence\, along with its underlying technological foundations. As advances in soft robotics and functional materials enable deeper integration between the informational and physical realms\, we are beginning to see the emergence of novel forms of embodied intelligence. Within this evolving landscape\, I will explore how rapidly advancing fields such as machine learning can help accelerate progress. Going beyond conventional models of body control and AI as abstract computational systems\, this approach positions the body itself as an active site of information processing and generation\, opening new possibilities for intelligent behavior. \nBio\nFumiya Iida is Professor of Robotics at the Department of Engineering\, University of Cambridge. Previously he was an assistant professor for bio-inspired robotics at ETH Zurich (2009-2014) and a lecturer at Cambridge (2014-2018). He received his bachelor’s and master’s degrees in mechanical engineering at Tokyo University of Science (Japan\, 1999)\, and his Dr. sc. nat. in Informatics at the University of Zurich (2006). In 2004 and 2005 he was also engaged in biomechanics research of human locomotion at the Locomotion Laboratory\, University of Jena (Germany). From 2006 to 2009 he worked as a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory\, Massachusetts Institute of Technology\, in the USA. In 2006 he was awarded the Fellowship for Prospective Researchers from the Swiss National Science Foundation and\, in 2009\, the Swiss National Science Foundation Professorship. He was a recipient of the IROS 2016 Fukuda Young Professional Award\, the Royal Society Translation Award in 2017\, and the Tokyo University of Science Award in 2021. His research interests include biologically inspired robotics\, embodied artificial intelligence\, and the biomechanics of human locomotion and manipulation\, and he has been involved in a number of research projects related to dynamic legged locomotion\, navigation of autonomous robots\, and human-machine interactions. For more information\, visit the Bio-Inspired Robotics Laboratory website. \n  \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions.
URL:https://www.scienceofintelligence.de/event/fumiya-iida-university-of-cambridge/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/zp-TU-HU-ExcelenzForschung-20240122-073-scaled-e1749550030237.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250617T140000
DTEND;TZID=Europe/Berlin:20250617T153000
DTSTAMP:20260424T002730Z
CREATED:20250226T124956Z
LAST-MODIFIED:20250617T121156Z
UID:23627-1750168800-1750174200@www.scienceofintelligence.de
SUMMARY:Heiko Hamann (Science of Intelligence)\, "From Models to Machines: A Roboticist’s View on Collective Behavior"
DESCRIPTION:Swarm robotics investigates how large numbers of relatively simple\, autonomous robots can coordinate to complete complex collective tasks. In this lecture\, we explore how models of collective behavior can guide the design of such systems. We highlight how modeling collective behavior is not only a tool for understanding natural systems\, but also a powerful method to synthesize coordinated behaviors in robot swarms. We contrast bio-mimicry with more abstract bio-inspired paradigms. Through examples like task allocation and flocking\, we demonstrate how biological insights can shape engineering choices. An impressive insight from biology is that ‘less is more\,’ that is\, less communication or less knowledge can sometimes increase the swarm’s performance. We conclude by briefly discussing swarm robotics applications that diverge from biological analogies and reflect on future directions. \n--\nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/heiko-hamann-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp19.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250626T140000
DTEND;TZID=Europe/Berlin:20250626T180000
DTSTAMP:20260424T002730Z
CREATED:20250402T101646Z
LAST-MODIFIED:20250618T134257Z
UID:24009-1750946400-1750960800@www.scienceofintelligence.de
SUMMARY:Michael Brecht (BCCN Berlin)\, "Active Touch and Large-Brain Neuroscience in Elephants" and Yasemin Vardar (Delft University of Technology)\, "Active Synthetic Touch: Generating Naturalistic Multisensory Tactile Stimuli for Active Exploration"
DESCRIPTION:Michael Brecht (BCCN Berlin) will present data on a systematic investigation of brains and of grasping behavior in elephants. The analysis of sensory nerves suggests that elephants are extremely tactile animals. In elephants\, trunk whisker length is lateralized as a result of heavily lateralized trunk behaviors. The elephant trunk tip appears to be represented by a large cortical three-dimensional trunk-tip model; this observation is reminiscent of the somatosensory cortical snout representation in pigs. The trunk musculature of elephants is breathtakingly complex and filigreed. Trunk morphology\, motor neuron organization\, and grasping differ between African elephants (which pinch objects with their two trunk fingers) and Asian elephants (which have only one finger and wrap objects with their trunk).\nHe will discuss the potential of novel X-ray technologies for large-brain analysis. Both behavioral analysis and elephant neuroanatomy reveal striking differences between individual elephants. Thus\, it appears that elephants are less equal than other animals. \nImagine you could feel your pet’s fur on a Zoom call\, the fabric of the clothes you are considering purchasing online\, or tissues in medical images. We are all familiar with the impact of the digitization of audio and visual information in our daily lives – every time we take videos or pictures on our phones. Yet\, there is no such equivalent for our sense of touch. This talk will cover Yasemin Vardar’s (Delft University of Technology) scientific efforts in digitizing naturalistic tactile information over the last decade. She will explain the methodologies and interfaces she has been developing with her team and collaborators for capturing\, encoding\, and recreating the perceptually salient features of tactile textures for active bare-finger interactions. She will also discuss current challenges\, future research paths\, and potential applications in tactile digitization. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/michael-brecht-bccn-berlin-and-yasemin-vardar-delft-university-of-technology-active-touch/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp12.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250701T140000
DTEND;TZID=Europe/Berlin:20250701T153000
DTSTAMP:20260424T002730Z
CREATED:20250407T095720Z
LAST-MODIFIED:20250627T085132Z
UID:24183-1751378400-1751383800@www.scienceofintelligence.de
SUMMARY:POSTPONED: Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:This event has been postponed to 29 July 2025. \nHow should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250711T140000
DTEND;TZID=Europe/Berlin:20250711T160000
DTSTAMP:20260424T002730Z
CREATED:20250402T102151Z
LAST-MODIFIED:20250711T082214Z
UID:24013-1752242400-1752249600@www.scienceofintelligence.de
SUMMARY:William Warren (Brown University)\, "The Dynamics of Perception and Action: From Pedestrian Interactions to Collective Behavior"
DESCRIPTION:It’s a perplexing time in the study of visual perception. On the one hand\, there is a resurgence of models that freely posit a priori structure in the visual system\, including priors\, generative world models\, and physics engines. On the other hand\, there is the astonishing a posteriori success of deep neural networks trained only on natural images and image sequences. Although their performance offers an existence proof of the sufficiency of image information for certain visual tasks\, the black box of deep learning does not easily offer up that information or how it’s extracted by the visual system. \nA science of perception depends on understanding the visual information that is available in natural environments and is used to guide natural behavior. I propose that we take seriously James Gibson’s information hypothesis: For every perceivable property of the environment\, however subtle\, there must be a variable of information\, however complex\, that uniquely specifies it. The project is to identify the information that the visual system uses to perceive and act within the constraints of a species’ ecological niche. \nTwo decades ago I decided to work out a test case to see whether an information-based account of a natural behavior could be sustained. In this talk I will offer a status report on our effort to build a model of visually controlled human locomotion – a pedestrian model – that scales up from individual behaviors like steering and obstacle avoidance\, to pedestrian interactions like following and collision avoidance\, to the collective behavior of human crowds. Surprisingly\, linear combinations of these nonlinear components can account for the emergence of more complex behavior\, such as self-organized ‘flocking’\, crowd bifurcations\, and stripe formation in crossing flows. \nBio \nBill (he/him) earned his undergraduate degree at Hampshire College (1976)\, earned his Ph.D. in Experimental Psychology from the University of Connecticut (1982)\, did post-doctoral work at the University of Edinburgh\, and has been a professor at Brown ever since. He served as Chair of the Department of Cognitive and Linguistic Sciences from 2002 to 2010. Warren is the recipient of a Fulbright Research Fellowship\, an NIH Research Career Development Award\, and Brown’s Elizabeth Leduc Teaching Award for Excellence in the Life Sciences. Warren’s research focuses on the visual control of action – in particular\, human locomotion and navigation. He seeks to explain how this behavior is adaptively regulated by multi-sensory information\, within a dynamical systems framework. Using virtual reality techniques\, his research team investigates problems such as the visual control of steering\, obstacle avoidance\, wayfinding\, pedestrian interactions\, and the collective behavior of crowds. Experiments in the Virtual Environment Navigation Lab (VENLab) enable his group to manipulate what participants see as they walk through a virtual landscape\, and to measure and model their behavior. The aim of this research is to understand how adaptive behavior emerges from the dynamic interaction between an organism and its environment. He believes the answers will not be found only in the brain\, but will strongly depend on the physical and informational regularities that the brain exploits. This work contributes to basic knowledge that is needed to understand visual-motor disorders in humans\, and to develop mobile robots that can operate in novel environments. For more information\, visit his faculty profile or the VENLab website. \n  \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/william-warren-brown-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp3.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250718T140000
DTEND;TZID=Europe/Berlin:20250718T153000
DTSTAMP:20260424T002730Z
CREATED:20250429T090411Z
LAST-MODIFIED:20250716T122502Z
UID:24495-1752847200-1752852600@www.scienceofintelligence.de
SUMMARY:Jacob Yates (UC Berkeley)\, "The Role of Motor Signals in Visual Cortex"
DESCRIPTION:Embodiment is fundamental to biological intelligence. Brains do not passively receive the world; they actively shape what they sense through self-motion. For nearly a century\, we have known that perception and action are deeply entangled\, and that organisms must constantly infer whether a sensory change comes from the environment or from themselves. A longstanding idea holds that sensory signals are either suppressed during movement or that movement effects are subtracted out. However\, recent discoveries in neuroscience\, especially in rodents\, suggest that spontaneous movements strongly influence sensory cortex. In this talk\, I will share our work re-examining this question in primates. We found that movements do not broadly modulate visual cortex unless they move the retina\, creating an inherent ambiguity between motor effects and changes in sensory input. I will describe our new approach to disentangling sensorimotor interactions during natural behavior\, combining high-resolution eye tracking with high-density neural recordings and modern machine learning. By precisely measuring the retinal input during natural vision\, we find that much of what appears to be a motor signal is actually visual reafference\, the lawful\, structured sensory consequences of an animal’s own actions. I will discuss how measuring and modeling this loop can deepen our understanding of active inference in the brain and what it means for designing truly embodied agents that adapt to the world as brains do. \nBio \nJacob Yates (he/him) is an Assistant Professor of Optometry & Vision Science at UC Berkeley and leads the Active Vision and Neural Computation Lab. His research explores how populations of neurons in the cortex and early visual pathways encode the visual world\, with a particular focus on how eye movements generate and utilize information for perception. By combining statistical and machine learning approaches\, his lab builds computational models to better understand neural activity and human perception\, ultimately aiming to bridge the gap between neural coding and real-world visual behavior. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto by Soliman Cifuentes on Unsplash.
URL:https://www.scienceofintelligence.de/event/jacob-yates-uc-berkeley/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/soliman-cifuentes-RXGLTHZ6Mo8-unsplash-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250729T140000
DTEND;TZID=Europe/Berlin:20250729T160000
DTSTAMP:20260424T002730Z
CREATED:20250627T090602Z
LAST-MODIFIED:20250724T124821Z
UID:25799-1753797600-1753804800@www.scienceofintelligence.de
SUMMARY:Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:How should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence-2/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250925T160000
DTEND;TZID=Europe/Berlin:20250925T173000
DTSTAMP:20260424T002730Z
CREATED:20250429T091548Z
LAST-MODIFIED:20250924T073557Z
UID:24506-1758816000-1758821400@www.scienceofintelligence.de
SUMMARY:Ariana Strandburg-Peshkin (MPI-AB & the University of Konstanz)\, "Communication and coordination in animal societies"
DESCRIPTION:Abstract: \nMany social species use signals such as vocalizations to coordinate a range of group behaviors\, from coming to consensus on where to move to banding together against threats. Despite their widespread importance\, these behaviors remain challenging to study in the wild because doing so requires monitoring many individuals simultaneously. In this talk\, I will give an overview of our work tracking the movements and vocalizations of entire social groups in the wild to tackle questions at the interface of communication and collective behavior. What roles does vocal signaling play in the coordination of collective movement? What drives groups to split up? And how do vocalizations mediate collective action against external threats? I will explore how we and our collaborators are addressing these questions in three species of social carnivore that coordinate across varying spatial scales – highly cohesive meerkat groups\, moderately cohesive coati groups\, and fission-fusion hyena clans. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto by Gertrūda Valasevičiūtė on Unsplash.
URL:https://www.scienceofintelligence.de/event/ariana-strandburg-peshkin-mpi-ab-the-university-of-konstanz/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/gertruda-valaseviciute-xMObPS6V_gY-unsplash-scaled.jpg
END:VEVENT
END:VCALENDAR