BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250605T100000
DTEND;TZID=Europe/Berlin:20250605T110000
DTSTAMP:20260404T112708Z
CREATED:20250429T084014Z
LAST-MODIFIED:20250603T123901Z
UID:24475-1749117600-1749121200@www.scienceofintelligence.de
SUMMARY:Palina Bartashevich and David Bierbach (Science of Intelligence)\, “Collective Air Breathing in the Largest Freshwater Fish on Earth”
DESCRIPTION:More details to follow. \nPhoto by David Clode on Unsplash.
URL:https://www.scienceofintelligence.de/event/palina-bartashevich-and-david-bierbach-collective-air-breathing-in-the-largest-freshwater-fish-on-earth/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/david-clode-rpA8tpa4QO0-unsplash-1-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250605T140000
DTEND;TZID=Europe/Berlin:20250605T180000
DTSTAMP:20260404T112708Z
CREATED:20250407T093220Z
LAST-MODIFIED:20250530T112036Z
UID:24159-1749132000-1749146400@www.scienceofintelligence.de
SUMMARY:Martina Poletti (University of Rochester)\, "Active Foveal Vision" and Michele Rucci (University of Rochester)\, "Active Space-Time Encoding: The Inseparable Link Between Vision and Action"
DESCRIPTION:Martina Poletti’s talk will focus on active foveal vision. Vision is an active process even at its finest scale\, in the 1-deg foveola. The visual system is primarily sensitive to changes in the visual input\, and it has been shown that fixational eye movements reformat the spatiotemporal flow to the retina in a way that is optimal for fine spatial vision. Using high-precision eye-tracking coupled with a system for gaze-contingent display capable of localizing the line of sight with arcminute precision\, and an Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) for high-resolution retinal imaging enabling retinal-contingent manipulations of the visual input\, their results show that the need for active foveolar vision also stems from the non-uniformity of fine spatial vision across this region. Further\, they show that the visual system is highly sensitive even to a small sub-foveolar loss of vision and that fixation behavior is readjusted to compensate for this loss. Overall\, the emerging picture is that of a highly non-homogeneous foveolar vision characterized by a refined level of control of attention and fixational eye movements at this scale. \nMichele Rucci’s talk explores how the human visual system constructs spatial representations. Unlike other sensory modalities\, where spatial information must be inferred from incoming signals\, vision begins with a sophisticated imaging system—the eye—that explicitly preserves spatial structure on the retina. This might suggest that human vision is primarily a passive spatial process\, in which the eye simply transmits the retinal image to the cortex—much like uploading a digital photograph—to form a map of the scene. However\, this analogy is misleading\, as it overlooks the strong temporal sensitivity of visual neurons and contradicts theoretical models and experimental findings that examine vision in the context of natural motor behavior. 
 Here\, Michele Rucci will review recent evidence supporting active space-time encoding—the idea that\, as with other senses\, vision relies on motor strategies to encode spatial information in the temporal domain. This concept has important implications for understanding the normal functioning of the visual system\, the effects of abnormal oculomotor behavior\, and the development of visual prostheses. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/active-seeing-with-martina-poletti-university-of-rochester-and-michele-rucci-university-of-rochester/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250606T140000
DTEND;TZID=Europe/Berlin:20250606T160000
DTSTAMP:20260404T112708Z
CREATED:20250407T093540Z
LAST-MODIFIED:20250603T094631Z
UID:24164-1749218400-1749225600@www.scienceofintelligence.de
SUMMARY:Tony Prescott (University of Sheffield)\, "The Psychology of Artificial Intelligence"
DESCRIPTION:Artificial intelligence and robotics have been making great progress in recent years\, but how close are we to emulating human intelligence? This talk will explore the similarities and differences between humans and AIs and discuss the development of biomimetic cognitive systems that more directly think and behave like us. A key focus will be on layered control architectures for robots inspired by the mammalian brain. The talk will be illustrated with work from my lab on active sensing\, memory\, and sense of self for animal-like and humanoid robots. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nPhoto generated with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/tony-prescott-university-of-sheffield/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/abstract_ai_vs_human_thought-e1748620484784.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250610T140000
DTEND;TZID=Europe/Berlin:20250610T153000
DTSTAMP:20260404T112708Z
CREATED:20250226T122854Z
LAST-MODIFIED:20250606T131115Z
UID:23624-1749564000-1749569400@www.scienceofintelligence.de
SUMMARY:Andrew J. King (Swansea University)\, "Understanding Animal Collective Behaviour Across Systems"
DESCRIPTION:Andrew King is a scientist driven by curiosity\, exploring questions across species\, contexts\, and methods. His research group investigates how and why individuals engage in collective behaviour\, using a wide range of systems\, perspectives\, and tools. In this seminar\, he will present their fundamental work in behavioural biology\, as well as its applied themes\, including animal management and bio-inspired engineering. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research\, combined with multiple interactive elements. \n  \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/andrew-j-king-shoal-group-swansea-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp13.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250612T140000
DTEND;TZID=Europe/Berlin:20250612T180000
DTSTAMP:20260404T112708Z
CREATED:20250407T094009Z
LAST-MODIFIED:20250611T105232Z
UID:24168-1749736800-1749751200@www.scienceofintelligence.de
SUMMARY:Jennifer Groh (Duke University) and Kristen Grauman (University of Texas)\, "What Eye Movements Have to Do with Hearing"
DESCRIPTION:Jennifer Groh (Duke University) \nHearing works in concert with vision\, such as when we watch someone’s lips move to help us understand what they are saying. But bridging between these two senses poses computational challenges for the brain. One such challenge involves movements of the eyes – every time the eyes move with respect to the head\, the relationship between visual spatial input (the retina) and auditory spatial input (sound localization cues anchored to the head) changes. I will describe this problem from early computational and experimental work showing how and where signals regarding eye movements are incorporated into auditory processing\, closing with a recent discovery from our group that a signal regarding eye movements is sent by the brain to the ears themselves. This signal causes the eardrum to oscillate in conjunction with eye movements (Gruters et al.\, PNAS 2018) and carries detailed spatial information about the direction and amplitude of the eye movement (Lovich et al.\, PNAS 2023). I will also present new findings concerning the underlying mechanism of this effect\, involving the contributions of the middle ear muscles and outer hair cells\, and the potential impact on sound transduction. \n  \nKristen Grauman (University of Texas)\, “Audio-visual learning in 3D environments” \nPerception systems that can both see and hear have great potential to unlock problems in video understanding\, augmented reality\, and embodied AI. I will present our recent work in egocentric audio-visual (AV) perception. First\, we explore how audio’s spatial signals can augment visual understanding of 3D environments. This includes ideas for self-supervised feature learning from echoes\, AV floorplan reconstruction\, and active source separation\, where an agent intelligently moves to hear things better in a busy environment. 
 Throughout this line of work\, we leverage our open-source SoundSpaces platform\, which allows state-of-the-art rendering of highly realistic audio in real-world scanned environments. Next\, building on these spatial AV and scene acoustics ideas\, we introduce new ways to enhance the audio stream – making it possible to transport a sound to a new physical environment observed in a photo\, or to dereverberate speech so it is intelligible for machine and human ears alike. \n  \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jennifer-groh-duke-university-and-kristen-grauman-university-of-texas-active-hearing/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp11.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250613T140000
DTEND;TZID=Europe/Berlin:20250613T160000
DTSTAMP:20260404T112708Z
CREATED:20250407T094415Z
LAST-MODIFIED:20250610T100714Z
UID:24172-1749823200-1749830400@www.scienceofintelligence.de
SUMMARY:Fumiya Iida (University of Cambridge)\, "Info-Bodiment: Informatization of Robot Embodiment for the Next Generation AI Robots"
DESCRIPTION:There is growing interest in applying AI technologies to the control of intelligent robotic systems. While this research has led to promising developments\, it still faces major challenges due to its heavy reliance on learning from limited datasets—often dominated by visual information. In this talk\, I will introduce “Info-Embodiment” as a new research framework for realizing Embodied Intelligence\, along with its underlying technological foundations. As advances in soft robotics and functional materials enable deeper integration between the informational and physical realms\, we are beginning to see the emergence of novel forms of embodied intelligence. Within this evolving landscape\, I will explore how rapidly advancing fields such as machine learning can help accelerate progress. Going beyond conventional models of body control and AI as abstract computational systems\, this approach positions the body itself as an active site of information processing and generation\, opening new possibilities for intelligent behavior. \nBio\nFumiya Iida is Professor of Robotics at the Department of Engineering\, University of Cambridge. Previously he was an assistant professor for bio-inspired robotics at ETH Zurich (2009-2014) and a lecturer at Cambridge (2014-2018). He received his bachelor’s and master’s degrees in mechanical engineering at Tokyo University of Science (Japan\, 1999)\, and his Dr. sc. nat. in Informatics at the University of Zurich (2006). In 2004 and 2005 he was also engaged in biomechanics research of human locomotion at the Locomotion Laboratory\, University of Jena (Germany). From 2006 to 2009 he worked as a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory\, Massachusetts Institute of Technology in the USA. In 2006 he was awarded the Fellowship for Prospective Researchers from the Swiss National Science Foundation and\, in 2009\, the Swiss National Science Foundation Professorship. 
 He was a recipient of the IROS 2016 Fukuda Young Professional Award\, the Royal Society Translation Award in 2017\, and the Tokyo University of Science Award in 2021. His research interests include biologically inspired robotics\, embodied artificial intelligence\, and biomechanics of human locomotion and manipulation. He has been involved in a number of research projects related to dynamic legged locomotion\, navigation of autonomous robots\, and human-machine interactions. For more information\, visit the Bio-Inspired Robotics Laboratory website. \n  \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions.
URL:https://www.scienceofintelligence.de/event/fumiya-iida-university-of-cambridge/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/zp-TU-HU-ExcelenzForschung-20240122-073-scaled-e1749550030237.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250617T140000
DTEND;TZID=Europe/Berlin:20250617T153000
DTSTAMP:20260404T112708Z
CREATED:20250226T124956Z
LAST-MODIFIED:20250617T121156Z
UID:23627-1750168800-1750174200@www.scienceofintelligence.de
SUMMARY:Heiko Hamann (Science of Intelligence)\, "From Models to Machines: A Roboticist’s View on Collective Behavior"
DESCRIPTION:Swarm robotics investigates how large numbers of relatively simple\, autonomous robots can coordinate to complete complex collective tasks. In this lecture\, we explore how models of collective behavior can guide the design of such systems. We highlight how modeling collective behavior is not only a tool for understanding natural systems\, but a powerful method to synthesize coordinated behaviors in robot swarms. We contrast bio-mimicry with more abstract bio-inspired paradigms. Through examples like task allocation and flocking\, we demonstrate how biological insights can shape engineering choices. An impressive insight from biology is that ‘less is more\,’ that is\, less communication or less knowledge can sometimes increase the swarm’s performance. We conclude by briefly discussing swarm robotics applications that diverge from biological analogies and reflect on future directions. \n--\nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research\, combined with multiple interactive elements. \n  \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/heiko-hamann-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp19.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250619T100000
DTEND;TZID=Europe/Berlin:20250619T130000
DTSTAMP:20260404T112708Z
CREATED:20250602T130520Z
LAST-MODIFIED:20250616T085913Z
UID:25190-1750327200-1750338000@www.scienceofintelligence.de
SUMMARY:Symposium – Plants as Model Systems for Distributed Intelligence?
DESCRIPTION:The SCIoI Symposium “Plants as Model Systems for Distributed Intelligence” investigates plant systems as potential model organisms for distributed intelligence. \nPlants respond adaptively and context-sensitively to environmental stimuli without a central nervous system—an exemplary case of non-neuronal\, decentralized intelligence. \nRainer Hedrich (University of Würzburg) will present current insights into molecular information processing in plants. Pawel Romanczuk (SCIoI) will connect this biological perspective to SCIoI’s principle-based approach to understanding intelligence across systems. The concluding panel discussion will bring together experts from plant biology and behavioral science to explore future research directions. \n10:00 – Opening\n10:05 – Invited Talk: Rainer Hedrich (University of Würzburg)\n“Plant Sensory Biology: Molecular Mechanisms of Information Management”\n10:50 – SCIoI Talk: Pawel Romanczuk (HU Berlin\, SCIoI)\n“An Inclusive Principle-Based Approach to Intelligence”\n11:35 – Break\n12:00–13:00 – Panel Discussion with Audience Participation:\nRainer Hedrich\, Kerstin Kaufmann\, Pawel Romanczuk\, Jens Krause
URL:https://www.scienceofintelligence.de/event/workshop-plants-as-model-systems-for-distributed-intelligence/
LOCATION:MAR 2.057
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/06/250606_SCIoI-Plant_Poster_page-0001-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250620T141500
DTEND;TZID=Europe/Berlin:20250620T154500
DTSTAMP:20260404T112708Z
CREATED:20250317T111505Z
LAST-MODIFIED:20250616T132501Z
UID:23745-1750428900-1750434300@www.scienceofintelligence.de
SUMMARY:John Tsotsos (York University)\, "Attentional Mechanisms Bridge Seeing to Looking"
DESCRIPTION:David Marr wrote ‘What does it mean\, to see? The plain man’s answer (and Aristotle’s\, too) would be\, to know what is where by looking’. Modern vision science has moved beyond Aristotle’s view as well as Marr’s\, although it certainly would not have advanced without the influence of both. Seeing and Looking are different\, and although related in a plain manner\, at a deeper mechanistic level it is not plain at all: they are spatially\, temporally and causally connected. \nWe examine Looking and Seeing and the roles they play in a rational visual agent that functions purposefully in a real three-dimensional world\, as a plain person\, Marr\, or Aristotle would behave during their lifetimes. The vast bulk of theoretical\, experimental and empirical research has focussed on how an agent views and perceives an image\, singly or in a video sequence. We add to the small but growing literature that addresses how an agent chooses how to view a three-dimensional world in the context of a real-world task. Looking is the result of a change of gaze\, while Seeing is what occurs during the analysis of what is being looked at and causes a particular next Looking act. Gaze change ranges over a full 6 degrees-of-freedom for head pose and 3 degrees-of-freedom for each of two eyes within that head. \nAlthough our past research has shown that sensor viewpoint planning has provably exponential complexity properties\, we propose that an array of attentional mechanisms\, as found in our Selective Tuning model\, tames the complexity of such behaviour and provides the bridge between Seeing and Looking. Through extensive human experiments (one of these is the pictured Same-Different Task) and foraging through the history of computational vision\, we are gradually constructing a picture of a complex blend of orchestrated attentional\, visual\, reasoning\, planning and motor behaviours required for real-world 3D visual tasks. 
 \nBio \nJohn Tsotsos (he/him) is Distinguished Research Professor of Vision Science at York University and also holds an Adjunct Professorship in Ophthalmology and Vision Sciences at the University of Toronto. Internationally recognized for his pioneering work on visual attention and active vision\, Prof. Tsotsos developed the influential Selective Tuning theory\, which has shaped understanding of both human and computational vision. His research spans computer vision\, computational neuroscience\, robotics\, and artificial intelligence\, with over 300 refereed publications and major contributions to areas such as motion interpretation\, visual search\, and medical image analysis. \nProf. Tsotsos has received numerous honors\, including Fellowships in the Royal Society of Canada\, IEEE\, and the Canadian Academy of Engineering\, as well as the Sir John William Dawson Medal for sustained excellence in interdisciplinary research—the first computer scientist to receive this distinction. He has held the NSERC Tier I Canada Research Chair in Computational Vision since 2003 and was the founding Director of York’s Centre for Vision Research\, which he led to international prominence. \n  \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto provided by the speaker.
URL:https://www.scienceofintelligence.de/event/john-tsotsos-york-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/03/john_foto.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250626T100000
DTEND;TZID=Europe/Berlin:20250626T110000
DTSTAMP:20260404T112708Z
CREATED:20250429T085415Z
LAST-MODIFIED:20250616T083952Z
UID:24486-1750932000-1750935600@www.scienceofintelligence.de
SUMMARY:Max Ploner (Science of Intelligence)\, “Evaluating Sample Efficiency: How Language Models Learn to Recall Facts from Data”
DESCRIPTION:More details to follow. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/max-ploner-modeling-neurogenesis-for-continuous-learning/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp18-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250626T140000
DTEND;TZID=Europe/Berlin:20250626T180000
DTSTAMP:20260404T112708Z
CREATED:20250402T101646Z
LAST-MODIFIED:20250618T134257Z
UID:24009-1750946400-1750960800@www.scienceofintelligence.de
SUMMARY:Michael Brecht\, "Active Touch and Large-Brain Neuroscience in Elephants" and Yasemin Vardar\, "Active Synthetic Touch: Generating Naturalistic Multisensory Tactile Stimuli for Active Exploration"
DESCRIPTION:Michael Brecht (BCCN Berlin) will present data on a systematic investigation of brains and of grasping behavior in elephants. The analysis of sensory nerves suggests that elephants are extremely tactile animals. In elephants\, trunk whisker length is lateralized as a result of heavily lateralized trunk behaviors. The elephant trunk tip appears to be represented by a large cortical three-dimensional trunk-tip model. This observation is reminiscent of the somatosensory cortical snout representation in pigs. The trunk musculature of elephants is breathtakingly complex and filigreed. Trunk morphology\, motor neuron organization and grasping differ between African elephants (which pinch objects with their two trunk fingers) and Asian elephants (which have only one finger and wrap objects with their trunk).\nHe will discuss the potential of novel X-ray technologies for large brain analysis. Both behavioral analysis and elephant neuroanatomy reveal striking differences between individual elephants. Thus\, it appears that elephants are less equal than other animals. \nImagine you could feel your pet’s fur on a Zoom call\, the fabric of the clothes you are considering purchasing online\, or tissues in medical images. We are all familiar with the impact of digitization of audio and visual information in our daily lives – every time we take videos or pictures on our phones. Yet\, there is no such equivalent for our sense of touch. This talk will encompass the scientific efforts of Yasemin Vardar (Delft University of Technology) in digitizing naturalistic tactile information over the last decade. She will explain the methodologies and interfaces she has been developing with her team and collaborators for capturing\, encoding\, and recreating the perceptually salient features of tactile textures for active bare-finger interactions. She will also discuss current challenges\, future research paths\, and potential applications in tactile digitization. 
 \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \n  \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/michael-brecht-bccn-berlin-and-yasemin-vardar-delft-university-of-technology-active-touch/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp12.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250627T140000
DTEND;TZID=Europe/Berlin:20250627T160000
DTSTAMP:20260404T112708Z
CREATED:20250402T102518Z
LAST-MODIFIED:20250625T112746Z
UID:24016-1751032800-1751040000@www.scienceofintelligence.de
SUMMARY:Dario Floreano (EPFL)\, "Avian-Inspired Drones"
DESCRIPTION:In less than 20 years\, drones have transitioned from research labs to the real world and have had a major impact on inspection\, security\, rescue\, logistics\, and entertainment. However\, today’s drones do not match the agility\, endurance\, adaptability\, and intelligence of birds. Birds are not only the masters of the sky but are also at ease on the ground and in water. Stringent aerodynamic constraints shaped their bodies and brains to leverage morphological change to adapt to diverse locomotion conditions that are still poorly understood. I will show examples of abstracting principles of avian morphological design and flight control to design agile aerial robots that can also be used to test biological hypotheses and improve our understanding of embodied intelligence in avian vertebrates. \nBio \nProf. Dario Floreano is director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL). Between 2010 and 2022\, he was the founding director of the Swiss National Center of Competence in Robotics\, a research program that graduated almost 200 PhD students and more than 100 postdocs\, funded two professorships at EPFL and the University of Zurich\, created the EPFL Master’s program in Robotics and the annual Swiss Robotics Day\, helped launch Cybathlon\, and generated more than 15 robotics spinoffs that created several hundred jobs. \nProf. Floreano holds an M.A. in Vision\, an M.S. in Neural Computation\, and a PhD in Robotics. He has held research positions at Sony Computer Science Laboratory\, at Caltech/JPL\, and at Harvard University. His research interests are robotics and A.I. at the convergence of biology and engineering. Prof. Floreano made pioneering contributions to the fields of evolutionary robotics\, aerial robotics\, and soft robotics. 
 He has served on numerous advisory boards and committees\, including the Future and Emerging Technologies division of the European Commission\, the World Economic Forum Agenda Council\, the International Society of Artificial Life\, the International Neural Network Society\, and the editorial committees of several scientific journals. In addition\, he helped spin off three drone companies (senseFly.com\, Flyability.com\, Elythor.com) and a not-for-profit portal on robotics and A.I. (RoboHub.org). For more information\, visit his EPFL profile or Google Scholar page. \n  \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/dario-floreano-epfl/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/Screenshot-2025-06-10-at-12.09.38-e1750850859589.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T170000
DTEND;TZID=Europe/Berlin:20250628T223000
DTSTAMP:20260404T112708Z
CREATED:20250218T122430Z
LAST-MODIFIED:20250519T094209Z
UID:23467-1751130000-1751149800@www.scienceofintelligence.de
SUMMARY:LNDW 2025/Open Labs at RBO: Hands-On with the Future: Control a Soft Robotic Hand!
DESCRIPTION:Where: RBO Lab (5th floor)\nTime: At regular intervals between 5:30pm and 10pm\nEvent Type: Experiment\nBy Alexander Koenig\nLanguage: English and German\nSuitable for kids: Yes\, from 5\nWebsite: www.tu.berlin/en/robotics \nDid you ever wonder what it feels like to control a robotic hand? At the Robotics and Biology Lab you can see\, touch\, and control a human-like robotic hand that’s soft\, adaptive\, and ready to interact! Unlike stiff\, traditional robots\, this hand has silicone fingers powered by air\, allowing it to bend\, flex\, and gently grasp objects just like your own hand. Watch live demonstrations of grasping and manipulation\, then take control yourself and experience the future of robotics firsthand. Safe\, smart\, and intuitive—come and see soft robotics in action!
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-rbo-scioi-hands-on-with-the-future-control-a-soft-robotic-hand/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/robotic-lndw25-e1739882014290.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T173000
DTEND;TZID=Europe/Berlin:20250628T220000
DTSTAMP:20260404T112708
CREATED:20250218T104656Z
LAST-MODIFIED:20250616T093155Z
UID:23439-1751131800-1751148000@www.scienceofintelligence.de
SUMMARY:LNDW 2025/SCIoI Open Labs: Become a Swordfish and Chase Virtual Fish in a Mixed Reality Game
DESCRIPTION:Where: Showroom\, SCIoI\, TU MAR building Room 2.034\nTime: At regular intervals from 5:30 to 10pm\nBy David Mezey and Palina Bartashevich\nLanguage: English\nSuitable for kids: Yes (only with parental supervision 5+) \nDescription: Visitors can dive into the world of collective intelligence by becoming a marlin (a large swordfish) and chasing a swarm of virtual fish in an interactive projected playground. Through this experience\, they will discover how fish use group strategies to evade predators—both in the virtual world and in nature. In particular\, they will learn about the so-called “fountain effect\,” a defense tactic used by sardines in the Gulf of Mexico. This strategy is not only visually stunning but also highly effective against large predators like marlins. Additionally\, visitors will get to see real drone footage from field experiments\, showing how these large swordfish\, in response to the prey’s collective strategies\, also coordinate and hunt together in groups. \nMore Information: https://www.scienceofintelligence.de/arena-for-hunters-and-the-hunted-finding-principles-of-intelligence-with-sciois-loop-method/ \nExclusion Criteria: People with known history of epilepsy\, severe sea-sickness or disorientation. Visitors willing to try the interactive setup must be in a generally good or normal physical condition. \nAdditional Information: Visitors will move in a 4m by 4m interactive playground. They will interact with a virtual fish school projected on the floor. To do so\, they will hold a special cane that is tracked by the system while they walk within the installation. \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-scioi-open-labs-become-a-swordfish-and-chase-virtual-fish-in-a-mixed-reality-game/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/cobe_prof_fish-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T173000
DTEND;TZID=Europe/Berlin:20250628T220000
DTSTAMP:20260404T112708
CREATED:20250218T105317Z
LAST-MODIFIED:20250603T123711Z
UID:23443-1751131800-1751148000@www.scienceofintelligence.de
SUMMARY:LNDW 2025/SCIoI Open Labs: How Group Structure Shapes the Spread of Behavior
DESCRIPTION:Where: Showroom\, SCIoI\, TU MAR building Room 2.034\nTime: At regular intervals from 5:30 to 10pm\nBy Maryam Karimian\nLanguage: English\nSuitable for kids: No \nThis demo showcases simulation results that illustrate how group size and density influence the spread of behavior\, by systematically manipulating these factors and evaluating their impact on contagion dynamics. \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-scioi-open-labs-how-group-structure-shapes-the-spread-of-behaviour/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/zp-TU-HU-ExcelenzForschung-20240122-024.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T173000
DTEND;TZID=Europe/Berlin:20250628T220000
DTSTAMP:20260404T112708
CREATED:20250218T115412Z
LAST-MODIFIED:20250226T110450Z
UID:23461-1751131800-1751148000@www.scienceofintelligence.de
SUMMARY:LNDW 2025/SCIoI Open Labs: Explore Vision Science & Create Art with Your Eyes!
DESCRIPTION:Where: Dark Lab (Room 2.008)\nTime: At regular intervals between 5:30 and 10pm\nEvent Type: Experiment\nBy Ole Hall\, Julie Ouerfelli-Ethier\, and Qu Runfeng\nLanguage: English\, German\, French\nSuitable for kids: Yes\, from 5 \nThe SCIoI Vision Lab is opening its doors to the public! Join us for a unique experience. \nDiscover Eye Tracking – Live demonstrations will show how eye movements provide a window into the brain.\nDraw with Your Eyes – Try our eye-tracking system and draw just by moving your eyes on the screen.\nTake your artwork home or save it on your phone. \nFun for All Ages – Suitable for children (ages 5+) and adults alike.\nMultilingual Event – All activities available in German\, English\, and French. Come and see how your eyes reveal more than you think! \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-explore-vision-science-create-art-with-your-eyes/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/lndw25-art.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T173000
DTEND;TZID=Europe/Berlin:20250628T220000
DTSTAMP:20260404T112708
CREATED:20250218T120747Z
LAST-MODIFIED:20250519T094229Z
UID:23469-1751131800-1751148000@www.scienceofintelligence.de
SUMMARY:LNDW 2025/Open Labs at RBO: Feeling Without Seeing
DESCRIPTION:Where: RBO Lab (5th floor)\nTime: regular intervals between 5:30 and 10pm\nEvent Type: Experiment\nBy Furkan Davulcu\nLanguage: English\nSuitable for kids: Yes\, from 5\nWebsite: www.tu.berlin/en/robotics \nHow do you recognize objects just by touch? Our amazing soft robotic hand can do it too! Fitted with special sensors in its squishy fingers\, it feels tiny changes in shape (deformations) to understand what it’s holding – without even looking! Come learn how it works\, and even try controlling the hand yourself! Experience the future of robotics – where machines can truly feel! \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-rbo-scioi-feeling-without-seeing/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2020/11/Robotics-Panda-Hand-Mount-5-1-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T173000
DTEND;TZID=Europe/Berlin:20250628T220000
DTSTAMP:20260404T112708
CREATED:20250218T122924Z
LAST-MODIFIED:20250519T094050Z
UID:23476-1751131800-1751148000@www.scienceofintelligence.de
SUMMARY:LNDW 2025/Open Labs at RBO: Robot Plays Escape Room
DESCRIPTION:Where: RBO Lab (5th floor)\nTime: At regular intervals between 5:30 and 10pm\nEvent Type: Demonstration\nBy Paul Xu Pu\nLanguage: English\nSuitable for kids: Yes\, from 5\nWebsite: www.tu.berlin/en/robotics \nDid you know our robot can play escape room games? Join us to explore how we build intelligent systems by combining various abilities. Watch our robot in action as it manipulates objects\, solves puzzles\, operates furniture\, and attempts to open the door — just like in an escape room!
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-rbo-scioi-robot-plays-escape-room/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/robotic-lndw25-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T174500
DTEND;TZID=Europe/Berlin:20250628T184500
DTSTAMP:20260404T112708
CREATED:20250218T114117Z
LAST-MODIFIED:20250623T085245Z
UID:23455-1751132700-1751136300@www.scienceofintelligence.de
SUMMARY:LNDW 2025/SCIoI Open Labs: Smart Swarms – How Mathematics Helps To Understand Swarm Behavior and Collective Intelligence
DESCRIPTION:Where: SCIoI\, TU MAR building Room 2.013\nTime: At 5:45 and 6:45pm (duration: 30 mins)\nEvent Type: Talk\nBy Prof. Pawel Romanczuk\nLanguage: English and German\nSuitable for kids: No \nThis exciting lecture gives a short introduction to collective intelligence\, and can also serve as an introduction to the CoBe experiments you will be seeing in the SCIoI showroom. Coordinated swarm behavior is a fascinating biological phenomenon that can offer many advantages to swarm members – from protection against predators to better decision-making in the group. But how can so many individuals coordinate their behavior so perfectly? And under what circumstances are swarms truly smarter than individuals? This lecture provides insight into how mathematical models help us find answers to these and similar questions and gain a deeper understanding of the underlying mechanisms of collective intelligence. \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025 \n  \nGERMAN VERSION: \n“Schlaue Schwärme – Wie Mathematik hilft\, Schwarmverhalten und kollektive Intelligenz zu verstehen” \nKoordiniertes Schwarmverhalten ist ein faszinierendes biologisches Phänomen\, das den Mitgliedern des Schwarms viele Vorteile bieten kann – vom Schutz vor Räubern bis hin zu besseren Entscheidungen in der Gruppe. Aber wie können so viele Individuen ihr Verhalten überhaupt so perfekt koordinieren? Und unter welchen Umständen sind Schwärme wirklich klüger als Einzelne? Dieser Vortrag bietet einen Einblick\, wie mathematische Modelle uns helfen\, Antworten auf diese und ähnliche Fragen zu finden und ein tieferes Verständnis für die zugrunde liegenden Mechanismen kollektiver Intelligenz zu erlangen. \nPhoto by Teryll KerrDouglas on Unsplash.
URL:https://www.scienceofintelligence.de/event/lange-nacht-der-wissenschaften-2025-scioi-open-labs-smart-swarms-how-mathematics-helps-to-understand-swarm-behavior-and-collective-intelligence/
LOCATION:SCIoI\, MAR Building\, Marchstr. 23\, Berlin
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/teryll-kerrdouglas-ohVwv3qO7qU-unsplash-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250628T190000
DTEND;TZID=Europe/Berlin:20250628T210000
DTSTAMP:20260404T112708
CREATED:20250116T124955Z
LAST-MODIFIED:20250623T085957Z
UID:23893-1751137200-1751144400@www.scienceofintelligence.de
SUMMARY:Lange Nacht der Wissenschaften 2025\, Excellent Pub Quiz
DESCRIPTION:Join the “Excellent Pub Quiz” at the Long Night of the Sciences 2025! \nBack by popular demand: team up with your friends (2–4 people)\, come up with a fun team name\, and put your knowledge to the test at the Excellent Pub Quiz hosted by Berlin’s seven Clusters of Excellence at TU Berlin. Exciting prizes await! \nGet ready for a quiz night like no other — with brain-tickling questions from across the Clusters’ diverse research areas: applied mathematics (MATH+)\, intelligent systems (Science of Intelligence)\, sustainable chemistry (UniSysCat)\, political theory (SCRIPTS)\, material innovation (Matters of Activity)\, brain science (NeuroCure)\, and global literature (Temporal Communities). There’s something for every kind of science fan! \nThe quiz will be held in two rounds (in German)\, each lasting an hour and featuring different questions inspired by current research. Join one or both — the choice is yours! \nSchedule\n6:00–7:00 PM: On-site registration\n7:00–8:00 PM: Round 1\n8:00–8:30 PM: Break\n8:30–9:30 PM: Round 2 \nWhen: Saturday\, 28 June 2025 – Registration starts at 6:00 PM\nWhere: TU Berlin\, Straße des 17. Juni 135 – Main Building (Ground Floor\, Wangari Maathai Foyer\, back left next to the Audimax) \nNo need to register in advance – just arrive early to secure your spot. Last year\, over 120 participants across 28 teams battled it out for the top spots. This year\, too\, the top three teams will take home fantastic prizes courtesy of the Clusters of Excellence. \nCheck out all of our LNDW events here: www.scienceofintelligence.de/lndw-2025 \nTickets for the Long Night of the Sciences can be purchased here. \n  \n 
URL:https://www.scienceofintelligence.de/event/excellent-pub-quiz-4/
LOCATION:TU Berlin\, Wangari Maathai Foyer\, Straße des 17. Juni 135\, in the main building
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/03/PubQuiz-28-Juni-Sharepic-insta-Neu.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250701T140000
DTEND;TZID=Europe/Berlin:20250701T153000
DTSTAMP:20260404T112708
CREATED:20250407T095720Z
LAST-MODIFIED:20250627T085132Z
UID:24183-1751378400-1751383800@www.scienceofintelligence.de
SUMMARY:POSTPONED: Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:This event has been postponed to 29 July 2025. \nHow should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250703T100000
DTEND;TZID=Europe/Berlin:20250703T110000
DTSTAMP:20260404T112708
CREATED:20250429T085710Z
LAST-MODIFIED:20250603T123649Z
UID:24490-1751536800-1751540400@www.scienceofintelligence.de
SUMMARY:Raina Zakir (Université Libre De Bruxelles)\, “Robust Decision-Making in Minimalistic Robot Swarms Under Social Noise”
DESCRIPTION:Abstract \nMinimalistic robot swarms hold great promise for applications in healthcare\, disaster response\, and environmental monitoring. A key challenge lies in enabling these robots to rapidly and reliably reach consensus using limited communication\, computation\, and memory. In this talk\, we explore how robot swarms can collectively identify the best among multiple discrete options in their environment. We analyze and compare several prominent decision-making algorithms through both simulations and theoretical modeling. Particular attention is given to how asocial behaviors—introducing social noise—affect convergence and robustness. Our results offer insights into designing simple yet effective voting rules for robust consensus in decentralized swarm systems. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/raina-zakir-universite-libre-de-bruxelles/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp13.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250711T140000
DTEND;TZID=Europe/Berlin:20250711T160000
DTSTAMP:20260404T112708
CREATED:20250402T102151Z
LAST-MODIFIED:20250711T082214Z
UID:24013-1752242400-1752249600@www.scienceofintelligence.de
SUMMARY:William Warren (Brown University)\, "The Dynamics of Perception and Action: From Pedestrian Interactions to Collective Behavior"
DESCRIPTION:It’s a perplexing time in the study of visual perception. On the one hand\, there is a resurgence of models that freely posit a priori structure in the visual system\, including priors\, generative world models\, and physics engines. On the other hand\, there is the astonishing a posteriori success of deep neural networks trained only on natural images and image sequences. Although their performance offers an existence proof of the sufficiency of image information for certain visual tasks\, the black box of deep learning does not easily offer up that information or how it’s extracted by the visual system. \nA science of perception depends on understanding the visual information that is available in natural environments and is used to guide natural behavior. I propose that we take seriously James Gibson’s information hypothesis: For every perceivable property of the environment\, however subtle\, there must be a variable of information\, however complex\, that uniquely specifies it. The project is to identify the information that the visual system uses to perceive and act within the constraints of a species’ ecological niche. \nTwo decades ago I decided to work out a test case to see whether an information-based account of a natural behavior could be sustained. In this talk I will offer a status report on our effort to build a model of visually controlled human locomotion – a pedestrian model – that scales up from individual behaviors like steering and obstacle avoidance\, to pedestrian interactions like following and collision avoidance\, to the collective behavior of human crowds. Surprisingly\, linear combinations of these nonlinear components can account for the emergence of more complex behavior\, such as self-organized ‘flocking’\, crowd bifurcations\, and stripe formation in crossing flows. \nBio \nBill (he/him) earned his undergraduate degree at Hampshire College (1976)\, his Ph.D. 
in Experimental Psychology from the University of Connecticut (1982)\, did post-doctoral work at the University of Edinburgh\, and has been a professor at Brown ever since. He served as Chair of the Department of Cognitive and Linguistic Sciences from 2002-10. Warren is the recipient of a Fulbright Research Fellowship\, an NIH Research Career Development Award\, and Brown’s Elizabeth Leduc Teaching Award for Excellence in the Life Sciences. Warren’s research focuses on the visual control of action – in particular\, human locomotion and navigation. He seeks to explain how this behavior is adaptively regulated by multi-sensory information\, within a dynamical systems framework. Using virtual reality techniques\, his research team investigates problems such as the visual control of steering\, obstacle avoidance\, wayfinding\, pedestrian interactions\, and the collective behavior of crowds. Experiments in the Virtual Environment Navigation Lab (VENLab) enable his group to manipulate what participants see as they walk through a virtual landscape\, and to measure and model their behavior. The aim of this research is to understand how adaptive behavior emerges from the dynamic interaction between an organism and its environment. He believes the answers will not be found only in the brain\, but will strongly depend on the physical and informational regularities that the brain exploits. This work contributes to basic knowledge that is needed to understand visual-motor disorders in humans\, and to develop mobile robots that can operate in novel environments. For more information\, visit his faculty profile or the VENLab website. 
\n  \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research that highlight this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/william-warren-brown-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp3.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250717T100000
DTEND;TZID=Europe/Berlin:20250717T110000
DTSTAMP:20260404T112708
CREATED:20250623T124834Z
LAST-MODIFIED:20250716T123207Z
UID:25730-1752746400-1752746400@www.scienceofintelligence.de
SUMMARY:Matthias Nau (Vrije Universiteit Amsterdam)\, "Revealing General Principles Underlying Active Vision and Memory"
DESCRIPTION:Abstract:\nCognitive neuroscience seeks theories that jointly explain behavioral\, neural\, and mental states. The dominant approach is to use specialized tasks designed to optimally probe a concept of interest (e.g.\, episodic memory)\, and to disentangle behavioral\, sensory\, and mnemonic factors through design (e.g.\, by constraining gaze during image recognition). I will present an alternative framework that instead recognizes that concepts such as perception\, memory\, and action are often inextricable\, both theoretically and empirically\, which I demonstrate for example by showing that brain activity during movie viewing and recall is linked through eye movements. I will argue that new generalizable concepts are needed to explain phenomena across domains\, and outline how such concepts may be empirically derived through multi-task studies: by testing generalization of results across tasks and data modalities\, we reveal the mutual constraints task demands impose on behavioral\, neural\, and mental states. In this context\, I will also highlight the importance of ‘naturalistic’ tasks and behavioral tracking for cognitive neuroscience\, and briefly introduce open-source tools for camera-free MR-based eye tracking. \nImage created by Maria Ott with DALL-E.
URL:https://www.scienceofintelligence.de/event/matthias-nau-vrije-universiteit-amsterdam/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/abstract_ai_vs_human_thought-e1748620484784.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250718T140000
DTEND;TZID=Europe/Berlin:20250718T153000
DTSTAMP:20260404T112708
CREATED:20250429T090411Z
LAST-MODIFIED:20250716T122502Z
UID:24495-1752847200-1752852600@www.scienceofintelligence.de
SUMMARY:Jacob Yates (UC Berkeley)\, "The Role of Motor Signals in Visual Cortex"
DESCRIPTION:Embodiment is fundamental to biological intelligence. Brains do not passively receive the world\, they actively shape what they sense through self-motion. For nearly a century\, we have known that perception and action are deeply entangled\, and that organisms must constantly infer whether a sensory change comes from the environment or from themselves. A longstanding idea holds that sensory signals are either suppressed during movement or that movement effects are subtracted out. However\, recent discoveries in neuroscience\, especially in rodents\, suggest that spontaneous movements strongly influence sensory cortex. In this talk\, I will share our work re-examining this question in primates. We found that movements do not broadly modulate visual cortex unless they move the retina\, creating an inherent ambiguity between motor effects and changes in sensory input. I will describe our new approach to disentangling sensorimotor interactions during natural behavior\, combining high-resolution eye tracking with high-density neural recordings and modern machine learning. By precisely measuring the retinal input during natural vision\, we find that much of what appears to be a motor signal is actually visual reafference\, the lawful\, structured sensory consequences of an animal’s own actions. I will discuss how measuring and modeling this loop can deepen our understanding of active inference in the brain and what it means for designing truly embodied agents that adapt to the world as brains do. \nBio \nJacob Yates (he/him) is an Assistant Professor of Optometry & Vision Science at UC Berkeley and leads the Active Vision and Neural Computation Lab. His research explores how populations of neurons in the cortex and early visual pathways encode the visual world\, with a particular focus on how eye movements generate and utilize information for perception. 
By combining statistical and machine learning approaches\, his lab builds computational models to better understand neural activity and human perception\, ultimately aiming to bridge the gap between neural coding and real-world visual behavior. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research that highlight this shift in perspective through invited lectures from experts in the field and interactive sessions. \nPhoto by Soliman Cifuentes on Unsplash.
URL:https://www.scienceofintelligence.de/event/jacob-yates-uc-berkeley/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/soliman-cifuentes-RXGLTHZ6Mo8-unsplash-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250724T100000
DTEND;TZID=Europe/Berlin:20250724T110000
DTSTAMP:20260404T112708
CREATED:20250616T105829Z
LAST-MODIFIED:20250723T143455Z
UID:25596-1753351200-1753354800@www.scienceofintelligence.de
SUMMARY:POSTPONED: Alican Mertan (University of Vermont)\, "Morphological Cognition: Evolving Robots Exhibiting Cognitive Behavior without Abstract Controllers"
DESCRIPTION:With the rise of modern deep learning\, neural networks have become an essential part of virtually every artificial intelligence system\, making it difficult to imagine different models for intelligent behavior. In contrast\, nature provides us with many different mechanisms for intelligent behavior\, most of which we have yet to utilize. One such underinvestigated aspect of intelligence is embodiment and the role it plays in intelligent behavior. We suspect that “the unreasonable effectiveness of deep learning” overshadowed the investigation into what bodies mean for intelligence\, especially how they can be a source of intelligent behavior\, as opposed to passively participating in its display.\nTo investigate how bodies alone give rise to intelligent behavior\, we suggest not only treating bodies as an aid to the brain\, but also studying them as performing full cognitive behavior end-to-end. We describe robots that demonstrate cognitive behaviors without an abstract control layer as possessing “morphological cognition”. I will present our initial work on morphological cognition\, where we use simple shape-changing processes to create robots that can perform a range of tasks from locomotion to image classification without any abstract controller (i.e.\, no neural network). \nImage created by Maria Ott with DALL-E.
URL:https://www.scienceofintelligence.de/event/alican-mertan-university-of-vermont-morphological-cognition-evolving-robots-exhibiting-cognitive-behavior-without-abstract-controllers/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250729T140000
DTEND;TZID=Europe/Berlin:20250729T160000
DTSTAMP:20260404T112708
CREATED:20250627T090602Z
LAST-MODIFIED:20250724T124821Z
UID:25799-1753797600-1753804800@www.scienceofintelligence.de
SUMMARY:Alan Winfield (UWE Bristol) & Dafna Burema (Science of Intelligence)
DESCRIPTION:How should we think about ethics when machines become part of our social worlds? Alan Winfield and Dafna Burema will explore the ethical and societal dimensions of robotics and AI in an interactive fishbowl and in conversation with Master’s students of the course “Introduction to Modeling Collective Behavior”. Alan Winfield\, a pioneer in the field of robot ethics\, will share insights from his work on cognitive robotics\, science communication\, and the development of ethical standards for intelligent systems.\nDafna Burema brings a sociological lens\, focusing on how AI and robots shape—and are shaped by—social values\, particularly in sensitive areas like eldercare. Together\, they’ll reflect on how society can critically engage with intelligent technologies and what ethical frameworks might guide their integration into collective life. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/alan-winfield-uwe-bristol-dafna-burema-science-of-intelligence-2/
LOCATION:Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp8.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250906
DTEND;VALUE=DATE:20250910
DTSTAMP:20260404T112708
CREATED:20250603T115022Z
LAST-MODIFIED:20250618T141057Z
UID:25206-1757116800-1757462399@www.scienceofintelligence.de
SUMMARY:Summer School "Sensory Neuroscience" (Pisa\, Italy)
DESCRIPTION:For the first time this year\, SCIoI will be one of the organizers of the Circle U. Summer School in Pisa\, which already involves the HU Berlin and UCLouvain. Together with renowned researchers\, Master’s students will explore the complexities of the human mind and brain through the study of perception. This summer school is connected with the TNE initiative NEUROBRIDGE\, which promotes exchanges with international institutions. \nRead the article here. \nClick here for more info. \nPhoto by Andrea Cevenini on Unsplash.
URL:https://www.scienceofintelligence.de/event/the-summer-school-in-sensory-neuroscience-in-pisa-italy/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/06/SensoryNeuroscienceSummerSchool25-1-e1750255629928.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250908T190000
DTEND;TZID=Europe/Berlin:20250908T210000
DTSTAMP:20260404T112708
CREATED:20250326T130134Z
LAST-MODIFIED:20250623T085832Z
UID:23900-1757358000-1757365200@www.scienceofintelligence.de
SUMMARY:Excellent Pub Quiz
DESCRIPTION:Dive into the wonderful world of research at the seven Berlin Clusters of Excellence: from literature to chemistry\, from politics to AI\, you and your team can find answers to exciting and surprising questions from the clusters’ research areas. So gather your teammates and think of a team name! \nEvery quiz evening focuses on the research of one Cluster of Excellence\, with a related live performance\, followed by questions from the research areas of the remaining clusters. At the end of the evening\, you will certainly be smarter than before – and perhaps holding great prizes. \nEvent language: German\nAdmission: Free entry\nCapacity: Limited spaces\, so come early to secure your spot\nModerator: Jochen Müller \nThis event is organized by the Cluster of Excellence Matters of Activity. \nThe other participating clusters are: MATH+\, Science of Intelligence\, Temporal Communities\, SCRIPTS\, UniSysCat\, and NeuroCure.
URL:https://www.scienceofintelligence.de/event/excellent-pub-quiz-5/
LOCATION:Fahimi bar\, Skalitzer Str. 133\, Berlin\, 10999\, Germany
CATEGORIES:For the Public
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/03/PubQuiz-8-Sept-Sharepic-insta-Hochformat.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250911T100000
DTEND;TZID=Europe/Berlin:20250911T110000
DTSTAMP:20260404T112708
CREATED:20250526T094651Z
LAST-MODIFIED:20250902T080023Z
UID:25080-1757584800-1757588400@www.scienceofintelligence.de
SUMMARY:Asieh Daneshi (Science of Intelligence)\, “Is risky behavior contagious?”
DESCRIPTION:More details to follow. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/asieh-daneshi-behavioral-contagion-in-human-and-artificial-multi-agent-systems/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp2.jpg
END:VEVENT
END:VCALENDAR