BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250603T140000
DTEND;TZID=Europe/Berlin:20250603T153000
DTSTAMP:20260421T080327Z
CREATED:20250226T122648Z
LAST-MODIFIED:20250606T131027Z
UID:23618-1748959200-1748964600@www.scienceofintelligence.de
SUMMARY:Jens Krause (Science of Intelligence)\, "The Adaptive Value of Collective Behavior"
DESCRIPTION:In this talk\, Jens Krause will discuss the adaptive value of collective behaviour from different perspectives. One perspective is the potential ability of groups or collectives to make better and even faster decisions. In this context\, Jens will show some of the modelling approaches to explain collective intelligence and the empirical support for them in the laboratory and in the field. Furthermore\, he will show some empirical findings regarding collective intelligence which challenge our current understanding of the underlying mechanisms. Another perspective is that of collective behaviour as a defense against predators. It has been found in a number of different species that various forms of collective spirals and waves can fend off predators. This implies that at a global\, group-wide level\, collective patterns are not just beautiful to look at but can provide anti-predator functions which we are just beginning to understand. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jens-krause-science-of-intelligence/
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/KK_2-scaled-e1748593902816.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250605T140000
DTEND;TZID=Europe/Berlin:20250605T180000
DTSTAMP:20260421T080327Z
CREATED:20250407T093220Z
LAST-MODIFIED:20250530T112036Z
UID:24159-1749132000-1749146400@www.scienceofintelligence.de
SUMMARY:Martina Poletti (University of Rochester)\, "Active Foveal Vision" and Michele Rucci (University of Rochester)\, "Active Space-Time Encoding: The Inseparable Link Between Vision and Action"
DESCRIPTION:Martina Poletti’s talk will focus on active foveal vision. Vision is an active process even at its finest scale: in the 1-deg foveola\, the visual system is primarily sensitive to changes in the visual input\, and it has been shown that fixational eye movements reformat the spatiotemporal flow to the retina in a way that is optimal for fine spatial vision. Using high-precision eye-tracking coupled with a system for gaze-contingent display capable of localizing the line of sight with arcminute precision\, and an Adaptive Optics Scanning Light Ophthalmoscope (AOSLO) for high-resolution retinal imaging enabling retinal-contingent manipulations of the visual input\, their results show that the need for active foveolar vision also stems from the non-uniformity of fine spatial vision across this region. Further\, they show that the visual system is highly sensitive even to a small sub-foveolar loss of vision\, and that fixation behavior is readjusted to compensate for this loss. Overall\, the emerging picture is that of a highly non-homogeneous foveolar vision characterized by a refined level of control of attention and fixational eye movements at this scale. \nMichele Rucci’s talk explores how the human visual system constructs spatial representations. Unlike other sensory modalities\, where spatial information must be inferred from incoming signals\, vision begins with a sophisticated imaging system—the eye—that explicitly preserves spatial structure on the retina. This might suggest that human vision is primarily a passive spatial process\, in which the eye simply transmits the retinal image to the cortex—much like uploading a digital photograph—to form a map of the scene. However\, this analogy is misleading\, as it overlooks the strong temporal sensitivity of visual neurons and contradicts theoretical models and experimental findings that examine vision in the context of natural motor behavior. Here\, Michele Rucci will review recent evidence supporting active space-time encoding—the idea that\, as with other senses\, vision relies on motor strategies to encode spatial information in the temporal domain. This concept has important implications for understanding the normal functioning of the visual system\, the effects of abnormal oculomotor behavior\, and the development of visual prostheses. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/active-seeing-with-martina-poletti-university-of-rochester-and-michele-rucci-university-of-rochester/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/ChatGPT-Image-May-30-2025-01_17_03-PM.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250606T140000
DTEND;TZID=Europe/Berlin:20250606T160000
DTSTAMP:20260421T080327Z
CREATED:20250407T093540Z
LAST-MODIFIED:20250603T094631Z
UID:24164-1749218400-1749225600@www.scienceofintelligence.de
SUMMARY:Tony Prescott (University of Sheffield)\, "The Psychology of Artificial Intelligence"
DESCRIPTION:Artificial intelligence and robotics have been making great progress in recent years\, but how close are we to emulating human intelligence? This talk will explore the similarities and differences between humans and AIs and discuss the development of biomimetic cognitive systems that more directly think and behave like us. A key focus will be on layered control architectures for robots inspired by the mammalian brain. The talk will be illustrated with work from my lab on active sensing\, memory\, and sense of self for animal-like and humanoid robots. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions. \nFor those who are not in Berlin but would like to join virtually:\nhttps://tu-berlin.zoom-x.de/j/69207754612?pwd=IKxoTdY3dQWccHpce2nA0IsNkNxPHu.1 \nPhoto generated with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/tony-prescott-university-of-sheffield/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/abstract_ai_vs_human_thought-e1748620484784.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250610T140000
DTEND;TZID=Europe/Berlin:20250610T153000
DTSTAMP:20260421T080327Z
CREATED:20250226T122854Z
LAST-MODIFIED:20250606T131115Z
UID:23624-1749564000-1749569400@www.scienceofintelligence.de
SUMMARY:Andrew J. King (Swansea University)\, "Understanding Animal Collective Behaviour Across Systems"
DESCRIPTION:Andrew King is a scientist driven by curiosity\, exploring questions across species\, contexts\, and methods. His research group investigates how and why individuals engage in collective behaviour\, using a wide range of systems\, perspectives\, and tools. In this seminar\, he will present their fundamental work in behavioural biology\, as well as its applied themes\, including animal management and bio-inspired engineering. \nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/andrew-j-king-shoal-group-swansea-university/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp13.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250612T140000
DTEND;TZID=Europe/Berlin:20250612T180000
DTSTAMP:20260421T080327Z
CREATED:20250407T094009Z
LAST-MODIFIED:20250611T105232Z
UID:24168-1749736800-1749751200@www.scienceofintelligence.de
SUMMARY:Jennifer Groh (Duke University) and Kristen Grauman (University of Texas)\, "What Eye Movements Have to Do with Hearing"
DESCRIPTION:Jennifer Groh (Duke University) \nHearing works in concert with vision\, such as when we watch someone’s lips move to help us understand what they are saying. But bridging between these two senses poses computational challenges for the brain. One such challenge involves movements of the eyes: every time the eyes move with respect to the head\, the relationship between visual spatial input (the retina) and auditory spatial input (sound localization cues anchored to the head) changes. I will describe this problem from early computational and experimental work showing how and where signals regarding eye movements are incorporated into auditory processing\, closing with a recent discovery from our group that a signal regarding eye movements is sent by the brain to the ears themselves. This signal causes the eardrum to oscillate in conjunction with eye movements (Gruters et al.\, PNAS 2018) and carries detailed spatial information about the direction and amplitude of the eye movement (Lovich et al.\, PNAS 2023). I will also present new findings concerning the underlying mechanism of this effect\, involving the contributions of the middle ear muscles and outer hair cells\, and the potential impact on sound transduction. \nKristen Grauman (University of Texas)\, “Audio-visual learning in 3D environments” \nPerception systems that can both see and hear have great potential to unlock problems in video understanding\, augmented reality\, and embodied AI. I will present our recent work in egocentric audio-visual (AV) perception. First\, we explore how audio’s spatial signals can augment visual understanding of 3D environments. This includes ideas for self-supervised feature learning from echoes\, AV floorplan reconstruction\, and active source separation\, where an agent intelligently moves to hear things better in a busy environment. Throughout this line of work\, we leverage our open-source SoundSpaces platform\, which allows state-of-the-art rendering of highly realistic audio in real-world scanned environments. Next\, building on these spatial AV and scene acoustics ideas\, we introduce new ways to enhance the audio stream\, making it possible to transport a sound to a new physical environment observed in a photo\, or to dereverberate speech so it is intelligible for machine and human ears alike. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/jennifer-groh-duke-university-and-kristen-grauman-university-of-texas-active-hearing/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp11.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250613T140000
DTEND;TZID=Europe/Berlin:20250613T160000
DTSTAMP:20260421T080327Z
CREATED:20250407T094415Z
LAST-MODIFIED:20250610T100714Z
UID:24172-1749823200-1749830400@www.scienceofintelligence.de
SUMMARY:Fumiya Iida (University of Cambridge)\, "Info-Bodiment: Informatization of Robot Embodiment for the Next Generation AI Robots"
DESCRIPTION:There is growing interest in applying AI technologies to the control of intelligent robotic systems. While this research has led to promising developments\, it still faces major challenges due to its heavy reliance on learning from limited datasets—often dominated by visual information. In this talk\, I will introduce “Info-Embodiment” as a new research framework for realizing Embodied Intelligence\, along with its underlying technological foundations. As advances in soft robotics and functional materials enable deeper integration between the informational and physical realms\, we are beginning to see the emergence of novel forms of embodied intelligence. Within this evolving landscape\, I will explore how rapidly advancing fields such as machine learning can help accelerate progress. Going beyond conventional models of body control and AI as abstract computational systems\, this approach positions the body itself as an active site of information processing and generation\, opening new possibilities for intelligent behavior. \nBio\nFumiya Iida is Professor of Robotics at the Department of Engineering\, University of Cambridge. Previously he was an assistant professor for bio-inspired robotics at ETH Zurich (2009-2014) and a lecturer at Cambridge (2014-2018). He received his bachelor’s and master’s degrees in mechanical engineering at Tokyo University of Science (Japan\, 1999)\, and his Dr. sc. nat. in Informatics at the University of Zurich (2006). In 2004 and 2005 he was also engaged in biomechanics research of human locomotion at the Locomotion Laboratory\, University of Jena (Germany). From 2006 to 2009 he worked as a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory\, Massachusetts Institute of Technology\, in the USA. In 2006 he was awarded the Fellowship for Prospective Researchers from the Swiss National Science Foundation and\, in 2009\, the Swiss National Science Foundation Professorship. He was a recipient of the IROS2016 Fukuda Young Professional Award\, the Royal Society Translation Award in 2017\, and the Tokyo University of Science Award in 2021. His research interests include biologically inspired robotics\, embodied artificial intelligence\, and biomechanics of human locomotion and manipulation\, and he has been involved in a number of research projects related to dynamic legged locomotion\, navigation of autonomous robots\, and human-machine interactions. For more information\, visit the Bio-Inspired Robotics Laboratory website. \nThis talk is part of Aravind Battaje’s course “Mind\, Body\, Environment: An Interactive Seminar on Embodied Intelligence\,” a seminar introducing key theories and research highlighting this shift in perspective through invited lectures from experts in the field and interactive sessions.
URL:https://www.scienceofintelligence.de/event/fumiya-iida-university-of-cambridge/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/zp-TU-HU-ExcelenzForschung-20240122-073-scaled-e1749550030237.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250617T140000
DTEND;TZID=Europe/Berlin:20250617T153000
DTSTAMP:20260421T080327Z
CREATED:20250226T124956Z
LAST-MODIFIED:20250617T121156Z
UID:23627-1750168800-1750174200@www.scienceofintelligence.de
SUMMARY:Heiko Hamann (Science of Intelligence)\, "From Models to Machines: A Roboticist’s View on Collective Behavior"
DESCRIPTION:Swarm robotics investigates how large numbers of relatively simple\, autonomous robots can coordinate to complete complex collective tasks. In this lecture\, we explore how models of collective behavior can guide the design of such systems. We highlight how modeling collective behavior is not only a tool for understanding natural systems\, but a powerful method to synthesize coordinated behaviors in robot swarms. We contrast bio-mimicry with more abstract bio-inspired paradigms. Through examples like task allocation and flocking\, we demonstrate how biological insights can shape engineering choices. An impressive insight from biology is that ‘less is more\,’ that is\, less communication or less knowledge can sometimes increase the swarm’s performance. We conclude by briefly discussing swarm robotics applications that diverge from biological analogies and reflect on future directions. \n--\nThis talk is part of David Mezey’s course “Introduction to Modeling Collective Behavior\,” a seminar on collective behavior research combined with multiple interactive elements. \nImage created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/heiko-hamann-science-of-intelligence/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/chatgtp19.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20250626T140000
DTEND;TZID=Europe/Berlin:20250626T180000
DTSTAMP:20260421T080327Z
CREATED:20250402T101646Z
LAST-MODIFIED:20250618T134257Z
UID:24009-1750946400-1750960800@www.scienceofintelligence.de
SUMMARY:Michael Brecht\, "Active Touch and Large-Brain Neuroscience in Elephants" and Yasemin Vardar\, "Active Synthetic Touch: Generating Naturalistic Multisensory Tactile Stimuli for Active Exploration"
DESCRIPTION:Michael Brecht (BCCN Berlin) will present data on a systematic investigation of brains and of grasping behavior in elephants. The analysis of sensory nerves suggests that elephants are extremely tactile animals. In elephants\, trunk whisker length is lateralized as a result of heavily lateralized trunk behaviors. The elephant trunk tip appears to be represented by a large cortical three-dimensional trunk-tip model; this observation is reminiscent of the somatosensory cortical snout representation in pigs. The trunk musculature of elephants is breathtakingly complex and filigree. Trunk morphology\, motor neuron organization\, and grasping differ between African elephants (which pinch objects with their two trunk fingers) and Asian elephants (which have only one finger and wrap objects with their trunk).\nHe will discuss the potential of novel X-ray technologies for large brain analysis. Both behavioral analysis and elephant neuroanatomy reveal striking individual differences between individual elephants. Thus\, it appears that elephants are less equal than other animals. \nImagine you could feel your pet’s fur on a Zoom call\, the fabric of the clothes you are considering purchasing online\, or tissues in medical images. We are all familiar with the impact of digitization of audio and visual information in our daily lives\, every time we take videos or pictures on our phones. Yet\, there is no such equivalent for our sense of touch. This talk will encompass Yasemin Vardar’s (Delft University of Technology) scientific efforts in digitizing naturalistic tactile information over the last decade. She will explain the methodologies and interfaces she has been developing with her team and collaborators for capturing\, encoding\, and recreating the perceptually salient features of tactile textures for active bare-finger interactions. She will also discuss current challenges\, future research paths\, and potential applications in tactile digitization. \nThis talk is part of Olga Shurygina’s course “Active Sensing\,” a seminar on cutting-edge research on active sensory perception in humans and other mammals and related advances in artificial agents’ abilities such as seeing\, grasping\, and navigating in space. \nPhoto created with DALL-E by Maria Ott.
URL:https://www.scienceofintelligence.de/event/michael-brecht-bccn-berlin-and-yasemin-vardar-delft-university-of-technology-active-touch/
LOCATION:SCIoI\, Marchstraße 23\, 10587 Berlin\, Room 2.057
CATEGORIES:Hot Topics in Intelligence Research
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/04/chatgtp12.jpg
END:VEVENT
END:VCALENDAR