BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20220228
DTEND;VALUE=DATE:20220305
DTSTAMP:20260404T080436
CREATED:20250219T130041Z
LAST-MODIFIED:20250219T130041Z
UID:23532-1677542400-1677974399@www.scienceofintelligence.de
SUMMARY:Winter School "Ethics of Neuroscience and AI" 2022
DESCRIPTION:The 11th Winter School “Ethics of Neuroscience and AI” is taking place on Feb 28 – March 4\, 2022. It is organized by the BCCN Berlin/ICCN\, the Berlin School of Mind and Brain\, and the Excellence Cluster “Science of Intelligence”. The event is tailored for MSc and PhD students\, but covers a range of topics of potential interest to other researchers\, reflecting on the ethical and societal consequences of modern neuroscience.\nTheoretical foundations\, as well as practical and ethical aspects are addressed. Participants will benefit from a combination of lectures with group work and discussions\, where they will put the learned content into practice. \nScientific organizers: John-Dylan Haynes and Thomas Schmidt. \nKeynote lecture: Kent Kiehl (University of New Mexico) will discuss the ethical issues involved with neuroprediction (Live stream available) \nYou are welcome to join the keynote which will be live-streamed on the Bernstein Network’s Vimeo channel. \nFees: The Winter School is free of cost but registration is necessary. \nVenue: Due to the ongoing pandemic\, the Winter School will be held online.
URL:https://www.scienceofintelligence.de/event/winter-school-ethics-of-neuroscience-and-ai-2022/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/Winterschool__web_I_2022.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230227
DTEND;VALUE=DATE:20230304
DTSTAMP:20260404T080436
CREATED:20250219T125804Z
LAST-MODIFIED:20250219T130224Z
UID:23528-1677456000-1677887999@www.scienceofintelligence.de
SUMMARY:Winter School "Ethics of Neuroscience and AI" 2023
DESCRIPTION:The 12th Winter School “Ethics of Neuroscience and AI” is taking place on Feb 27 – March 3\, 2023. It is organized by the BCCN Berlin/ICCN\, the Berlin School of Mind and Brain\, and the Excellence Cluster “Science of Intelligence”. The event is tailored for MSc and PhD students\, but covers a range of topics of potential interest to other researchers\, reflecting on the ethical and societal consequences of modern neuroscience.\nTheoretical foundations\, as well as practical and ethical aspects are addressed. Participants will benefit from a combination of lectures with group work and discussions\, where they will put the learned content into practice. \nScientific organizers: John-Dylan Haynes and Thomas Schmidt. \nKeynote lecture: Christine Heim (Charité-Universitätsmedizin Berlin) \nFees: The Winter School is free of cost but registration is necessary. \nVenue: The Winter School takes place at the Bernstein Center for Computational Neuroscience Berlin.
URL:https://www.scienceofintelligence.de/event/23528/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2025/02/Web_Winterschool_A2_2023_II.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230223T100000
DTEND;TZID=Europe/Berlin:20230223T110000
DTSTAMP:20260404T080436
CREATED:20221114T105022Z
LAST-MODIFIED:20240813T101537Z
UID:13332-1677146400-1677150000@www.scienceofintelligence.de
SUMMARY:Ryan Burnell\, "A Cognitive Approach to the Evaluation of AI Systems"
DESCRIPTION:Abstract: \nThe capabilities of AI systems are improving rapidly\, and these systems are being deployed in increasingly complex and high-stakes contexts\, from self-driving cars to the detection of medical conditions. As the importance of AI grows\, so too does the need for robust evaluation. If we want to determine the extent to which systems are safe\, effective\, and unbiased\, it is vital that we understand the cognitive capabilities of those systems. In this endeavour\, psychological science has a lot to offer—scientists from cognitive\, developmental\, and comparative psychology have spent many decades developing theories and paradigms to understand the cognitive capabilities of adults\, children\, and animals. Drawing on these theories and paradigms\, we are working to build a framework for evaluating the cognitive capabilities of AI systems that we hope can be used to better track and regulate AI progress. I will present an initial version of the framework and discuss the open questions and challenges of applying cognitive science to AI evaluation. \nThis talk will take place in person at SCIoI. \nPhoto by Michael Dziedzic on Unsplash. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-ryan-burnell-a-cognitive-approach-to-the-evaluation-of-ai-systems/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/michael-dziedzic-aQYgUYwnCsM-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230216T100000
DTEND;TZID=Europe/Berlin:20230216T110000
DTSTAMP:20260404T080436
CREATED:20230207T104351Z
LAST-MODIFIED:20240813T101551Z
UID:14161-1676541600-1676545200@www.scienceofintelligence.de
SUMMARY:Julten Abdelhalim (Science of Intelligence)\, "Tips and Guidelines for your grant application in Germany"
DESCRIPTION:Abstract: \nThis talk targets junior postdocs and PhD students in their final stages. It offers a brief introduction to the major grant options (from those aiming at the stars to smaller ones) and some quick tips on the application process. Julten will also share her own experience of applying for the DFG Sachbeihilfe and the ERC Starting Grant. The talk is not a detailed workshop on proposal writing but rather an overview of how you should ideally plan your grant application journey. Those interested in a detailed consultation are invited to book an appointment afterwards.\n\n\nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-julten-abdelhalim-our-career-as-a-scientist-make-a-plan-for-successful-grant-applications/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2023/02/Screenshot-2023-02-07-at-11-43-19-People-–-Science-of-Intelligence.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230209T160000
DTEND;TZID=Europe/Berlin:20230209T173000
DTSTAMP:20260404T080436
CREATED:20230119T092829Z
LAST-MODIFIED:20240813T101605Z
UID:14065-1675958400-1675963800@www.scienceofintelligence.de
SUMMARY:Oliver Brock (Science of Intelligence)\, "About the Interplay of Embodiment and Learning in Intelligent Systems"
DESCRIPTION:Abstract:\nBiological intelligent systems manifest their intelligence in physical interactions with other agents and with their environment. Such interactions require embodiment. Intelligence\, both artificial and biological\, also requires some kind of learning. But what is the relationship between the two? How should the two interact? Do they even have to? What could be a common ground on which this relationship can be explored\, negotiated\, and ultimately designed? In this presentation\, I will attempt to provide my personal answers to these questions. I will argue that one of the reasons (deep) machine learning has not yet been able to replicate its smashing successes in the context of robotics lies in the widespread disregard for the important capabilities provided by the body. Instead of considering embodiment\, machine learning seems to be resorting to massive use of physical simulations. This seems to be unnecessarily complicated without being convincingly effective.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-oliver-brock-2/
LOCATION:MAR 2.057
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/brock_800.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230209T100000
DTEND;TZID=Europe/Berlin:20230209T110000
DTSTAMP:20260404T080436
CREATED:20230116T111824Z
LAST-MODIFIED:20250603T130020Z
UID:14047-1675936800-1675940400@www.scienceofintelligence.de
SUMMARY:Andreagiovanni Reina (Université Libre De Bruxelles)\, “The Power of Inhibition for Collective Decision Making in Minimalistic Robot Swarms”
DESCRIPTION:Abstract:\nI investigate how large groups of simple robots can reach a consensus with decentralized minimalistic algorithms. Simple robots can be useful in nanorobotics and in scenarios with low-cost requirements. I show that through decentralized voting algorithms\, swarms of minimalistic robots can make best-of-n decisions. In my research\, I show that using a biologically inspired voting model based on inhibitory signals\, the swarm can collectively perform better and be more resilient against a minority of misbehaving robots than in models without inhibition. Our best-of-n decision algorithm can also be used for collective environmental monitoring. I will show that investigating these models can be very interesting and yield surprising results. As Anderson said in 1972\, “more is different”. In our analysis\, we found that limiting the communication range or the speed of the robots can improve collective performance in a range of relevant conditions. We explain the mechanisms of some of these phenomena with a combination of mathematical models and large-scale robot experiments.\n\nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-giovanni-rena/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/4.-April-LNDW-at-MAR-20220702_Sc.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230126T160000
DTEND;TZID=Europe/Berlin:20230126T173000
DTSTAMP:20260404T080436
CREATED:20230117T132407Z
LAST-MODIFIED:20250603T130033Z
UID:14062-1674748800-1674754200@www.scienceofintelligence.de
SUMMARY:Lars Lewejohann (Science of Intelligence)\, “What’s on a Mouse’s Mind? Behavioral Measures To Understand Experiences and Needs of an Animal”
DESCRIPTION:Lars Lewejohann\, Freie Universität Berlin\, German Federal Institute for Risk Assessment (BfR)\, German Centre for the Protection of Laboratory Animals (Bf3R) \nMice\, like all other living creatures\, have adapted to specific living conditions in the course of evolution. From a human point of view\, the behavior of animals is therefore not always easy to understand. This applies not only to the question of whether mice are actually capable of behaving intelligently\, but also to the question of what is necessary to optimize the welfare of laboratory animals. In our work\, we are interested in both questions\, follow an animal-centered approach\, and give mice their say. Of course mice cannot fill out questionnaires\, but we have developed a series of behavioral tests that allow us to query the animals. In this lecture I will outline our approach with regard to improving housing and living conditions\, as well as the implications of using mice as a model species for the science of intelligence.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-lars-lewejohann-whats-on-a-mouses-mind-behavioral-measures-to-understand-experiences-and-needs-of-an-animal/
LOCATION:MAR 2.057
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2021/12/lewejohann_800.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230126T100000
DTEND;TZID=Europe/Berlin:20230126T110000
DTSTAMP:20260404T080436
CREATED:20221128T133841Z
LAST-MODIFIED:20240813T101630Z
UID:13403-1674727200-1674730800@www.scienceofintelligence.de
SUMMARY:Scott Robbins (Bonn University)\, "What Machines Shouldn't Do"
DESCRIPTION:Abstract: \nFrom writing essays to evaluating potential hires\, machines are doing a lot these days. In all spheres of life\, it seems that machines are being delegated more and more decisions. Some of these machines are being delegated decisions that could have significant impact on human lives. Examples of such machines are widespread and include machines evaluating loan applications\, machines evaluating criminals for sentencing\, autonomous weapon systems\, driverless cars\, digital assistants\, etc. Considering that machines cannot be held morally accountable for their actions (Bryson\, 2010; Johnson\, 2006; van Wynsberghe & Robbins\, 2018)\, the question that governments\, NGOs\, academics\, and the general public should be asking themselves is: how do we keep meaningful human control (MHC) over these machines? \n\nThe literature thus far details what features the machine or the context must have in order for MHC to be realized. Should humans be in the loop or on the loop? Should we force machines to be explainable? Lastly\, should we endow machines with moral reasoning capabilities? (Ekelhof\, 2019; Floridi et al.\, 2018; Robbins\, 2019a\, 2019b; Santoni de Sio & van den Hoven\, 2018; Wendell Wallach & Allen\, 2010; Wendell Wallach\, 2007). Rather than look to the machine itself or what part humans have to play in the context\, I argue here that we should shine the spotlight on the decisions that machines are being delegated. Meaningful human control\, then\, will be about controlling what decisions get made by machines. \n\nI argue that keeping meaningful human control over machines (especially AI which relies on opaque methods) means restricting machines to decisions that do not require a justifying explanation and can\, in principle\, be proven efficacious. Because contemporary methodologies in AI are opaque\, many machines cannot offer explanations for their outputs. 
 In many cases\, decisions require justifying explanations\, and we should therefore not use machines in such cases. It won’t be surprising that machines should be efficacious if they are to be used – especially in contexts that will have impacts on human beings. Increasingly\, however\, machines are being delegated decisions whose efficacy we are unable\, in principle\, to evaluate. This should not happen. \n\nThese arguments lead to the conclusion that machines should be restricted to descriptive outputs. It must always be a human being deciding how to employ evaluative terms\, as these terms not only refer to specific states of affairs but also say something about how the world ought to be. Machines which are able to make decisions based on opaque considerations should not be telling humans how the world ought to be. This is a breakdown of human control in the most severe way. Not only would we be losing control over specific decisions in specific contexts\, but we would be losing control over what descriptive content grounds evaluative classifications. \n\nIn this talk\, I will first discuss what it means to say that a machine is ‘doing’ something. I then briefly discuss different proposals for MHC and why they fall short. I then argue that machines should not be delegated evaluative decisions\, as they require justifying explanations which machines cannot give and cannot be evaluated for efficacy. While this talk is framed negatively\, it is my hope that it focuses research and development on designing and building machines that help us realize our visions for how the world ought to be\, rather than machines that tell us how the world ought to be. Only humans can decide that.\n \nThis talk will take place in person at SCIoI. \nPhoto by David Levêque on Unsplash \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-scott-robins-bonn-university-what-machines-shouldnt-do/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/david-leveque-GpNOhig3LSU-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230119T160000
DTEND;TZID=Europe/Berlin:20230119T173000
DTSTAMP:20260404T080436
CREATED:20230102T111439Z
LAST-MODIFIED:20240813T101638Z
UID:13961-1674144000-1674149400@www.scienceofintelligence.de
SUMMARY:Ingmar Posner (University of Oxford)\, "Learning to Perceive and to Act - Disentangling Tales from (Structured) Latent Space"
DESCRIPTION:Abstract:\nUnsupervised learning is experiencing a renaissance. Driven by an abundance of unlabelled data and the advent of deep generative models\, machines are now able to synthesise complex images\, videos and sounds. In robotics\, one of the most promising features of these models – the ability to learn structured latent spaces – is gradually gaining traction. The ability of a deep generative model to disentangle semantic information into individual latent-space dimensions seems naturally suited to state-space estimation. Combining this information with generative world-models\, models which are able to predict the likely sequence of future states given an initial observation\, is widely recognised to be a promising research direction with applications in perception\, planning and control. Yet\, to date\, designing generative models capable of decomposing and synthesising scenes based on higher-level concepts such as objects remains elusive in all but simple cases. In this talk I will motivate and describe our recent work using deep generative models for unsupervised object-centric scene inference and generation. Furthermore\, I will make the case that exploiting correlations encoded in latent space\, and learnt through experience\, leads to a powerful and intuitive way to disentangle and manipulate task-relevant factors of variation. I will show that this not only casts a novel light on affordance learning\, but also that the same framework is capable of generating plans executable on complex real-world robot platforms. \nPhoto courtesy of Ingmar Posner. \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-ingmar-posner-university-of-oxford-learning-to-perceive-and-to-act-disentangling-tales-from-structured-latent-space/
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/PastedGraphic-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230119T100000
DTEND;TZID=Europe/Berlin:20230119T113000
DTSTAMP:20260404T080436
CREATED:20230116T101152Z
LAST-MODIFIED:20250603T130050Z
UID:14043-1674122400-1674127800@www.scienceofintelligence.de
SUMMARY:David Garzón Ramos (Université Libre De Bruxelles)\, “Automatic Design of Robot Swarms: Context and Experiments”
DESCRIPTION:Abstract:\n \nSwarm robotics is a promising approach to the coordination of large groups of robots. Traditionally\, the design of collective behaviors for robot swarms has been an iterative manual process: a human designer manually refines the control software of the individual robots until the desired collective behavior emerges.\n\nIn this talk\, I discuss automatic design as an alternative approach to manual design. In automatic methods\, the design process is cast into an optimization problem: given a task to be performed by the swarm\, an optimization process designs a collective behavior to perform the task and produces appropriate control software for the robots. I focus on experiments that highlight the various aspects of the automatic design of robot swarms: classes of collective behaviors\, control architectures\, and the optimization process. In particular\, I present a case study on the design of shepherding behaviors for groups of robots. The results presented in this talk are outcomes of the DEMIURGE project\, an ERC-funded project devoted to the study of the automatic design of robot swarms (PI Mauro Birattari).\nThis talk will take place in person at SCIoI. \nPhoto by Omar Flores on Unsplash. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-david-garzon-ramos-universite-libre-de-bruxelles-automatic-design-of-robot-swarms-context-and-experiments/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/omar-flores-lQT_bOWtysE-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230112T100000
DTEND;TZID=Europe/Berlin:20230112T110000
DTSTAMP:20260404T080436
CREATED:20221128T133344Z
LAST-MODIFIED:20250603T130102Z
UID:13400-1673517600-1673521200@www.scienceofintelligence.de
SUMMARY:Dustin Lehmann\, Fritz Francisco\, Jörg Raisch\, Pawel Romanczuk (Science of Intelligence)\, “Dynamical Adaptation and Learning: Knowledge Transfer and Cooperative Learning in Groups of Heterogeneous Agents”
DESCRIPTION:Abstract: \nIn groups of agents learning how to solve a common task\, interaction and knowledge transfer between agents are important and can vary depending on network topology. Heterogeneity is one of the key principles that influence the type and quality of interaction between learning agents. Different learning strategies and behaviors can be a driving factor for learning success at the group and individual level\, whereas differences in dynamics (or capabilities\, behaviors\, internal states\, etc.) can impede the direct transferability of knowledge and may require dynamic adaptation of the agents.\nIn this talk\, we show how to infer behavioral heterogeneity in learning groups of fish and how this affects future learning capabilities. Prior knowledge of social partners affects the outcome of learning processes and the timing of information uptake. We further investigate behavioral heterogeneity from the perspective of synthetic dynamic systems and how to transfer knowledge between dissimilar agents to enable cooperative learning of how to solve a common task. First results show how to exploit heterogeneity for learning in synthetic agents and which information gradient is beneficial when dealing with novel tasks in a social context.\n \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-dustin-lehmann-fritz-francisco-jorg-raisch-pawel-romanczuk-dynamical-adaptation-and-learning-knowledge-transfer-and-cooperative-learning-in-groups-of-heterogeneous-agents/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/project-52.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230105T160000
DTEND;TZID=Europe/Berlin:20230105T173000
DTSTAMP:20260404T080436
CREATED:20221215T134407Z
LAST-MODIFIED:20250603T130112Z
UID:13676-1672934400-1672934400@www.scienceofintelligence.de
SUMMARY:Peter Neri (Laboratoire Des Systèmes Perceptifs\, CNRS\, Paris)\, “The Unreasonable Recalcitrance of Human Vision to Theoretical Domestication”
DESCRIPTION:Abstract: \nWe can view cortex from two fundamentally different perspectives: a powerful device for performing optimal inference\, or an assembly of biological components not built for achieving statistical optimality. The former approach is attractive thanks to its elegance and potentially wide applicability; however\, the basic facts of human pattern vision do not support it. Instead\, they indicate that the idiosyncratic behaviour produced by visual cortex is largely dictated by its hardware components. The output of these components can be steered towards optimality by our cognitive apparatus\, but only to a marginal extent. We conclude that current theories of visually-guided behaviour are at best inadequate\, and we turn to neural networks in an attempt to establish whether the idiosyncratic character of human vision may be learnt from a larger repertoire of functional constraints\, such as the statistics of the natural environment. We challenge deep convolutional networks with the same stimuli/tasks used with human observers and apply equivalent characterization of the stimulus–response coupling. For shallow depth of behavioural characterization\, some variants of network-architecture/training-protocol produce human-like trends; however\, more articulate empirical descriptors expose glaring discrepancies. Our results urge caution in assessing whether neural networks do or do not capture human behavior: ultimately\, our ability to assess “success” in this area can only be as good as afforded by the depth of behavioral characterization against which the network is evaluated. More generally\, our results provide a compelling demonstration of how far we still are from securing an adequate computational account of even the most basic operations carried out by human vision. \nPhoto by Mathew Schwartz on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-peter-neri-laboratoire-des-systemes-perceptifs-cnrs-paris-the-unreasonable-recalcitrance-of-human-vision-to-theoretical-domestication/
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/12/mathew-schwartz-sb7RUrRMaC4-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221215T160000
DTEND;TZID=Europe/Berlin:20221215T173000
DTSTAMP:20260404T080436
CREATED:20211222T113627Z
LAST-MODIFIED:20250603T130124Z
UID:11482-1671120000-1671125400@www.scienceofintelligence.de
SUMMARY:John-Dylan Haynes (Science of Intelligence)\, “Intelligence in Humans Versus Machines”
DESCRIPTION:Many claims have been made that machine intelligence could make humans superfluous in the near future. Today this claim is largely seen as overstated\, but it is still important to assess the relative strengths of human versus machine cognition. \n\n  \nThis talk will take place in person at SCIoI.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-john-dylan-haynes/
CATEGORIES:PI Lecture
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2018/11/haynes_800.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221215T100000
DTEND;TZID=Europe/Berlin:20221215T110000
DTSTAMP:20260404T080436
CREATED:20220914T121043Z
LAST-MODIFIED:20250603T130137Z
UID:13049-1671098400-1671102000@www.scienceofintelligence.de
SUMMARY:Robert Lange and Luis Gomez (Science of Intelligence)\, “Quantifying and Modelling Collective Behavior Across Ecological Contexts”
DESCRIPTION:Abstract: \nA central challenge in understanding the concept of swarm intelligence is the relation between the behavior of a swarm of agents and its ecological niche. To interpret such a collective concept\, we have been using analytical and synthetic approaches to gain insights\, mainly using one particular biological system\, the sulphur molly\, as a study system. We have combined analytical behavioral characterizations of schools of these fish with synthetic state-of-the-art machine learning methods to understand the functionality of the behavior in real life. In this talk\, we will present our main findings on this collective behavior. We will show i) that the highly synchronized diving behavior of the school is close to criticality\, ii) how this can be functionally related to effective communication about predator attacks\, and iii) how to study heterogeneity in collectives by inferring the parameters of models using machine learning algorithms. \nThis talk will take place in person at SCIoI.
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-p12/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/Pawel-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221208T100000
DTEND;TZID=Europe/Berlin:20221208T110000
DTSTAMP:20260404T080436
CREATED:20221128T100636Z
LAST-MODIFIED:20250603T130147Z
UID:13385-1670493600-1670497200@www.scienceofintelligence.de
SUMMARY:Erik Rodner\, “Please Label Me: Challenges and Efficient Strategies for Data Annotation and Selection”
DESCRIPTION:Abstract: \nLack of data and annotations was the showstopper for machine learning projects when I started my PhD\, and 15 years later it still is. In my talk\, I will give a brief overview of recent models we developed for weakly- and semi-supervised learning as well as for active learning.\nIn addition\, we will analyze the relevance of these algorithms from an industrial perspective\, which often contradicts the usual storyline of traditional computer vision publications. \nThis talk will take place in person at SCIoI. \n  \nPhoto by vackground.com on Unsplash
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-with-erik-rodner-please-label-me-challenges-and-efficient-strategies-for-data-annotation-and-selection/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/vackground-com-agUC-v_D1iI-unsplash-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221201T160000
DTEND;TZID=Europe/Berlin:20221201T173000
DTSTAMP:20260404T080436
CREATED:20211222T113440Z
LAST-MODIFIED:20240813T101910Z
UID:11480-1669910400-1669915800@www.scienceofintelligence.de
SUMMARY:Klaus Obermayer (Science of Intelligence)\, "Computational Models of Electric Field Effects and Optimal Control of Neurons and Neural Populations"
DESCRIPTION:Abstract: \nThe brain is a complex dynamical system with processes operating on different spatial scales. At the macroscopic end one observes global dynamical phenomena\, which are called “brain states” and which are often accompanied by oscillations in different frequency bands or by specific functional connectivity patterns between populations of neurons. A common hypothesis states that the global dynamics establishes a task-dependent operating point\, which is required by individual neurons and local networks to perform information processing tasks. Perturbation experiments are performed\, on the one hand\, to causally analyze the consequences of this and related hypotheses and\, on the other hand\, to restore a brain’s operating point in case of dysfunction. \nIn my talk I will summarize some of our recent modelling work to better understand the interaction between the neural dynamics and external control inputs\, taking non-invasive electrical stimulation of neural tissue as an example. I will first present some results on the biophysics of (microscopic) neuron-field interactions and our modelling attempts to propagate these effects to the macroscopic level. In the second part of my presentation I will show how techniques from Optimal Control Theory can be used to probe controllability aspects of neural systems and to help design efficient ways of steering the neural dynamics. \n  \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-klaus-obermayer/
CATEGORIES:PI Lecture
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2018/11/obermayer_800.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221201T100000
DTEND;TZID=Europe/Berlin:20221201T110000
DTSTAMP:20260404T080436
CREATED:20220914T121438Z
LAST-MODIFIED:20240813T101934Z
UID:13054-1669888800-1669892400@www.scienceofintelligence.de
SUMMARY:David Bierbach (Science of Intelligence)\, "Anticipation in social interactions among live and artificial agents"
DESCRIPTION:Abstract: \nThe aim of SCIoI’s P10 is to investigate how anticipation and prediction shape social interactions among live and artificial agents\, using\, for example\, the Robofish system. We will outline our research showing the sophisticated anticipation abilities of live fish\, as well as how we integrated prediction and anticipation into Robofish’s social interaction behaviors. We will furthermore show how experiments with robotic animals can help to promote animal welfare and what is necessary to build biomimetic robots that will be accepted by live animals as conspecifics (see also these articles: https://www.frontiersin.org/articles/10.3389/fbioe.2020.00441/full\,  https://www.annualreviews.org/doi/10.1146/annurev-control-061920-103228\, https://link.springer.com/chapter/10.1007/978-3-030-64313-3_26 ). Finally\, we will dive into our public outreach activities\, which include the Robofish exhibition in the Humboldt Labor at Stadtschloss Berlin\, with more than 100\,000 visitors since 2021. \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-p10-jens-krause-verena-hafner-2/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/Robotic-Fish-2-1536x1024-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221124T160000
DTEND;TZID=Europe/Berlin:20221124T173000
DTSTAMP:20260404T080436
CREATED:20220926T103901Z
LAST-MODIFIED:20240813T101918Z
UID:13096-1669305600-1669311000@www.scienceofintelligence.de
SUMMARY:Jan De Houwer (Ghent University)\, "Learning in Individual Organisms\, Genes\, Machines\, and Groups: A New Way of Defining and Relating Learning in Different Systems"
DESCRIPTION:Abstract:\nLearning is a central concept in many scientific disciplines. Communication about research on learning is\, however\, hampered by the fact that different researchers define learning in different ways. In this talk\, we introduce the extended functional definition of learning that can be used across scientific disciplines. We provide examples of how the definition can be applied to individual organisms\, genes\, machines\, and groups. The extended functional definition (a) reveals a heuristic framework for research that can be applied across scientific disciplines\, (b) allows researchers to engage in intersystem analyses that relate the behavior and learning of different systems\, and (c) clarifies how learning differs from other phenomena such as (changes in) behavior\, damaging systems\, and programming systems. \nPhoto by DeepMind on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-with-jan-de-houwer-learning-in-individual-organisms-genes-machines-and-groups-a-new-way-of-defining-and-relating-learning-in-different-systems/
LOCATION:MAR 2.057
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/deepmind-_HnJfS6WhA8-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221124T113000
DTEND;TZID=Europe/Berlin:20221124T130000
DTSTAMP:20260404T080436
CREATED:20221117T101332Z
LAST-MODIFIED:20240813T101103Z
UID:13344-1669289400-1669294800@www.scienceofintelligence.de
SUMMARY:Thursday morning talk: Nicolas Mandel\, "Kangaroos & Quadcopters"
DESCRIPTION:Abstract: \nThe contents of this presentation will be twofold. In the first part\, the Centre for Robotics at the Queensland University of Technology (QUT) and its research directions and facilities will be introduced\, with a highlight on research into semantics for the benefit of UAVs\, specifically quadcopters. The second part will cover the presenter’s personal experience of undertaking a PhD in Australia\, highlighting differences\, challenges\, and lessons learnt along the way. \nDisclaimer: The views and opinions in this talk are the presenter’s own and do not necessarily reflect the opinions of any of the employers or affiliates.\nThis talk will take place in person at SCIoI. \n  \nPhoto by Indy Bruhin on Unsplash \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-nicolas-mandel-kangaroos-quadcopters/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/indy-bruhin-mJ_oRYZqXdw-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221124T100000
DTEND;TZID=Europe/Berlin:20221124T110000
DTSTAMP:20260404T080436
CREATED:20220914T120810Z
LAST-MODIFIED:20240813T101121Z
UID:13046-1669284000-1669287600@www.scienceofintelligence.de
SUMMARY:What are futures made of? CollActive Materials\, a joint SCIoI/MoA project
DESCRIPTION:Abstract:\nThe BUA-funded experimental knowledge transfer project CollActive Materials\, a collaboration between the Clusters of Excellence Science of Intelligence and Matters of Activity\, encourages speculation on what the future has in store. \nWhich intelligent materials will pave our tomorrows? How can substances and materials change our world in an intelligent way? What will the world look like in the coming decades\, and how can we turn our speculations into something tangible? Finally\, what kinds of relationships could we create with intelligent materials? \nIn this Thursday Morning Talk the audience will learn more about the CollActive Materials project and all the exciting interactions between the two clusters\, and most importantly\, they will get a chance to dive into the project themselves by taking part in a mini speculative design exercise. \nSPEAKERS:  \nLéa Perraudin is a media theorist and speculative material scholar and works as a postdoctoral research associate at the Cluster of Excellence »Matters of Activity. Image Space Material«. Léa currently works on a habilitation project\, bringing forth a media theory of phase transitions by investigating the ties of material and metaphor in contemporary technocapitalist media environments through transience\, dispersal\, abundance and solidification.\nFurthermore\, Léa is the co-leader of the experimental laboratory »CollActive Materials«\, a joint project of the Clusters of Excellence »Matters of Activity« and »Science of Intelligence«\, which intends to gather multiple publics to jointly tackle possible material futures through the method of speculative design. \nMartin Müller researches at the intersection of cultural history and theory\, media studies\, history of knowledge and science\, and design theory. He is a postdoctoral research associate at the Cluster of Excellence »Matters of Activity. Image Space Material« – in the projects »Symbolic Material« and »Material Form Function«. 
Since 2015 he has been teaching at the Department of Cultural History and Theory at Humboldt-Universität zu Berlin. Martin is the co-leader of the experimental laboratory for knowledge exchange and speculative design »CollActive Materials«. Recently published: »The Will to Engineer. Synthetic Biology and the Escalation of Zoëpolitics«\, in: P. Ribault (Ed.): Design\, Gestaltung\, Formatività\, 2022 \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-matters-of-activity-moa/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/Screenshot-2022-09-20-at-08.21.13.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221117T100000
DTEND;TZID=Europe/Berlin:20221117T110000
DTSTAMP:20260404T080436
CREATED:20220914T120516Z
LAST-MODIFIED:20250603T130205Z
UID:13043-1668679200-1668682800@www.scienceofintelligence.de
SUMMARY:Heiner Spiess (Science of Intelligence)\, “Tools To Study the Generality of Deep Neural Network Representations”
DESCRIPTION:Abstract: \nAs many of us know by now\, Deep Learning has enabled tackling very challenging problems and applications that were previously almost impossible to solve with machine learning. However\, for most of the tasks we want to solve with Deep Learning\, we need large\, if not huge\, amounts of data and computing power. This is very limiting for many applications for which we do not have the necessary amounts of data\, or for practitioners who do not have access to enough computation power to train well-performing Deep Networks for their desired tasks. We hope to overcome these two limitations by leveraging the generality of already trained models through Transfer Learning\, or by combining the information from multiple\, perhaps relatively small\, datasets with Multi-Task Learning. In this project\, we are investigating the generality of representations learned by Deep Networks. Today I would like to introduce one of the families of tools we use in this effort: Representational Similarity Analysis (RSA). I will present the methodology behind these tools and provide some insights into Deep Networks gained through their use. However\, I will also highlight some concerns to be aware of when using these tools and present some challenges that arise in practice. Considering these concerns\, I will present a variant of these tools that solves some of the existing problems. Furthermore\, I will briefly present a tool that we have developed to synthesize realistic image data\, allowing us to systematically analyse which properties of the data are represented in Deep Networks. Finally\, I want to mention our SCIoI cooperation with project 01 on “Scanpath Prediction in Dynamic Scenes using an end-to-end Deep Learning approach”. \nPhoto by Nina Ž. on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-heiner-spiess/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/nina-z-VKg1oXU-vzo-unsplash-1536x1024-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221110T193000
DTEND;TZID=Europe/Berlin:20221110T220000
DTSTAMP:20260404T080436
CREATED:20221010T102901Z
LAST-MODIFIED:20240813T101150Z
UID:13174-1668108600-1668117600@www.scienceofintelligence.de
SUMMARY:Berlin Science Week 2022 - The Science Slam of the Berlin Clusters of Excellence\, "Clear the stage for science"
DESCRIPTION:At our cluster science slam\, scientists try everything to entertain their audience\, regardless of whether the subject is mathematics\, neuroscience\, or active materials. The sky is the limit when it comes to what’s possible. Costumes\, props\, movies\, PowerPoint presentations or other experimental setups – it is all allowed. Only time sets the limits – every slammer will have ten minutes at most. And the audience will decide which presentation is best! \nConstitutional Hardball in Action (Robert Benson)\nHow to make the U-Bahn go brrr (Enrico Bortoletto)\nTuning in to the sound of fish (Dr. Antonia Groneberg)\nSearching for Intelligence (Benjamin Lang)\nIn Search of Lost Chaos (Dr. Guillermo Olicón Méndez)\nKnowing the Nature’s messengers (Dr. Ramprasad Misra)\nNature knows best (Dr. Alina Pushkarev) \nVisit the BSW website for more info \nPhoto taken from Berlin Science Week \nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/berlin-science-week-2022-the-science-slam-of-the-berlin-clusters-of-excellence-clear-the-stage-for-science/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/10/publikum-von-stage-1920x984-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221110T100000
DTEND;TZID=Europe/Berlin:20221110T110000
DTSTAMP:20260404T080436
CREATED:20220926T105840Z
LAST-MODIFIED:20240813T101157Z
UID:13108-1668074400-1668078000@www.scienceofintelligence.de
SUMMARY:Jan De Bruyne (Leiden University)\, "Liability for Damage Involving AI – Some Regulatory Challenges and Priorities"
DESCRIPTION:More details to follow. \nPhoto by DeepMind on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-jan-de-bruyne/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/deepmind-lISkvdgfLEk-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221105T093000
DTEND;TZID=Europe/Berlin:20221105T110000
DTSTAMP:20260404T080436
CREATED:20221010T102039Z
LAST-MODIFIED:20240813T101204Z
UID:13169-1667640600-1667646000@www.scienceofintelligence.de
SUMMARY:Berlin Science Week 2022 - Collective Materials Workshop\, "What is the Future Made Of?"
DESCRIPTION:Nature is a great designer. Through billions of years of evolution – of design trial and error (or re-route) – it has come up with uniquely functional and beautiful materials. It uses simple materials in clever ways. Natural materials are often sophisticated in structure and function\, yet they are made from simple\, abundant resources. More than that\, they are designed to be part of a natural cycle of making and breaking down – no material is wasted. CollActive Materials\, together with the speculative designer Emilia Tikka\, invites you to a hands-on speculation workshop. Find out how researchers from Matters of Activity and Science of Intelligence use bio-design in their work – and make up your very own version of a bio-inspired future. \nVisit the BSW website for more info \nPhoto taken from Berlin Science Week \nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/berlin-science-week-with-emilia-tikka-kristin-werner-what-is-the-future-made-of/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/10/collactivematerialsspeculativedesignberlinscienceweek.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221105T093000
DTEND;TZID=Europe/Berlin:20221105T110000
DTSTAMP:20260404T080436
CREATED:20221010T101404Z
LAST-MODIFIED:20250212T103646Z
UID:13164-1667640600-1667646000@www.scienceofintelligence.de
SUMMARY:Berlin Science Week 2022 - Panel Discussion with Dafna Burema\, Mattis Jacobs\, and Jonas Frenkel\, "Artificial Intelligence: Examples of AI gone wrong and Ethical Questions"
DESCRIPTION:In this lively debate\, our researchers Dafna Burema\, Mattis Jacobs and Jonas Frenkel from Science of Intelligence will talk about Artificial Intelligence and its ethical implications including examples of AI gone wrong. How do we imagine sustainable futures with robots? What are the open questions scientists face every day when dealing with Artificial Intelligence? \nVisit the BSW website for more info \nPhoto taken from Berlin Science Week \n***Want to attend one of our events? Sign up here.\nTo get regular updates\, subscribe to our mailing list from this page.\nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/berlin-science-week-with-dafna-burema-jonas-frenkel-artificial-intelligence-examples-of-ai-gone-wrong-and-ethical-questions/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/possessed-photography-rDxP1tF3CmA-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221103T100000
DTEND;TZID=Europe/Berlin:20221103T110000
DTSTAMP:20260404T080436
CREATED:20220914T120203Z
LAST-MODIFIED:20240813T101210Z
UID:13040-1667469600-1667473200@www.scienceofintelligence.de
SUMMARY:POSTPONED: Scott Robbins\, "What Machines Shouldn't Do"
DESCRIPTION:From writing essays to evaluating potential hires\, machines are doing a lot these days. In all spheres of life\, it seems that machines are being delegated more and more decisions. Some of these machines are being delegated decisions that could have a significant impact on human lives. Examples of machines that have had such an impact are widespread and include machines evaluating loan applications\, machines evaluating criminals for sentencing\, autonomous weapon systems\, driverless cars\, digital assistants\, etc. Considering that machines cannot be held morally accountable for their actions (Bryson\, 2010; Johnson\, 2006; van Wynsberghe & Robbins\, 2018)\, the question that governments\, NGOs\, academics\, and the general public should be asking themselves is: how do we keep meaningful human control (MHC) over these machines? \nThe literature thus far details what features the machine or the context must have in order for MHC to be realized. Should humans be in the loop or on the loop? Should we force machines to be explainable? Lastly\, should we endow machines with moral reasoning capabilities? (Ekelhof\, 2019; Floridi et al.\, 2018; Robbins\, 2019a\, 2019b; Santoni de Sio & van den Hoven\, 2018; Wendell Wallach & Allen\, 2010; Wendell Wallach\, 2007). Rather than look to the machine itself or to what part humans have to play in the context\, I argue here that we should shine the spotlight on the decisions that machines are being delegated. Meaningful human control\, then\, will be about controlling what decisions get made by machines. \nI argue that keeping meaningful human control over machines (especially AI\, which relies on opaque methods) means restricting machines to decisions that do not require a justifying explanation and can\, in principle\, be proven efficacious. Because contemporary methodologies in AI are opaque\, many machines cannot offer explanations for their outputs. 
In many cases\, decisions require justifying explanations\, and we should therefore not use machines in such cases. It should come as no surprise that machines must be efficacious if they are to be used – especially in contexts that will have impacts on human beings. Increasingly\, however\, machines are being delegated decisions whose efficacy we are unable\, in principle\, to evaluate. This should not happen. \nThese arguments lead to the conclusion that machines should be restricted to descriptive outputs. It must always be a human being deciding how to employ evaluative terms\, as these terms not only refer to specific states of affairs but also say something about how the world ought to be. Machines that make decisions based on opaque considerations should not be telling humans how the world ought to be. This is a breakdown of human control in the most severe way. Not only would we be losing control over specific decisions in specific contexts\, but we would be losing control over what descriptive content grounds evaluative classifications. \n  \nPhoto by Alex Knight on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-scott-robbins/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/alex-knight-2EJCSULRwC8-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221028T100000
DTEND;TZID=Europe/Berlin:20221028T160000
DTSTAMP:20260404T080436
CREATED:20221010T104347Z
LAST-MODIFIED:20250603T130224Z
UID:13178-1666951200-1666972800@www.scienceofintelligence.de
SUMMARY:Scholar Minds – Mental Health Conference 2022
DESCRIPTION:A series of lectures\, workshops\, and a panel discussion on themes related to mental health in academia\, from the perspectives of under-represented groups\, to power abuse\, healthy working conditions\, and much more. \nMore info here \nPhoto taken from the Scholar Minds website \nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/scholar-minds-mental-health-conference-2022/
CATEGORIES:External Event
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2022/10/Screenshot-2022-10-10-at-12-39-52-Scholar-Minds-Einstein-Center-for-Neurosciences-Berlin.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221020T100000
DTEND;TZID=Europe/Berlin:20221020T110000
DTSTAMP:20260404T080436
CREATED:20220908T135026Z
LAST-MODIFIED:20250603T130235Z
UID:13017-1666260000-1666263600@www.scienceofintelligence.de
SUMMARY:David Bierbach (Science of Intelligence)\, “Anticipation in Fish-Robot Interactions”
DESCRIPTION:Abstract:\nI will present our current research involving the Robofish. I will put a special focus on our latest research paper\, which found that live fish can anticipate predictably behaving Robofish with regard to both final movement locations and movement dynamics. \nThis talk will take place in person at SCIoI \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-project-11/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/Screenshot-2022-10-17-at-13-30-25-Konnen-Fische-antizipieren.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221013T160000
DTEND;TZID=Europe/Berlin:20221013T173000
DTSTAMP:20260404T080436
CREATED:20211222T112959Z
LAST-MODIFIED:20240813T101240Z
UID:11470-1665676800-1665682200@www.scienceofintelligence.de
SUMMARY:Jens Krause (HU Berlin)\, "Mexican Waves: The Adaptive Value of Collective Behaviour".
DESCRIPTION:Abstract\nThe collective behaviour of animals has attracted considerable attention in recent years\, with many studies exploring how local interactions between individuals can give rise to global group properties. The functional aspects of collective behaviour are less well studied\, especially in the field\, and relatively few studies have investigated the adaptive benefits of collective behaviour in situations where prey are attacked by predators. This paucity of studies is unsurprising because predator-prey interactions in the field are difficult to observe. Furthermore\, the focus of recent studies on predator-prey interactions has been on the collective behaviour of the prey rather than on the behaviour of the predator. Here I present a field study that investigated the antipredator benefits of waves produced by fish at the water surface when diving down collectively in response to attacks by avian predators. Fish engaged in surface waves that were highly conspicuous\, repetitive\, and rhythmic\, involving many thousands of individuals for up to 2 min. Collective fish waves increased the time birds waited until their next attack and also reduced capture probability in three avian predators that differed greatly in size\, appearance\, and hunting strategy. Taken together\, these results support a generic antipredator function of fish waves\, which could be the result of a confusion effect or a consequence of waves acting as a perception advertisement; this requires further exploration. \nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-jens-krause-tu-berlin/
CATEGORIES:PI Lecture
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2021/12/jens.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20221013T100000
DTEND;TZID=Europe/Berlin:20221013T110000
DTSTAMP:20260404T080436
CREATED:20220908T134759Z
LAST-MODIFIED:20250603T130249Z
UID:13014-1665655200-1665658800@www.scienceofintelligence.de
SUMMARY:Alan Tump\, Dominik Deffner\, David Mezey (Science of Intelligence)\, “How Cognitive Computational Modeling Can Help Us Better Understand Principles Underlying Collective Intelligence”
DESCRIPTION:Abstract:\nCollective dynamics play a crucial role in everyday decision-making. Whether social influence promotes the spread of accurate information\, and ultimately results in collective intelligence\, or leads to false information cascades and maladaptive social contagion depends on the cognitive mechanisms underlying social interactions. \nIn our talk\, we will argue that cognitive modeling\, in tandem with experiments that allow collective dynamics to emerge\, can mechanistically link cognitive processes at the individual and collective levels and\, thus\, provides a fruitful path forward in identifying principles of collective intelligence. \nWe will show how such cognitive computational approaches are increasingly being used to better understand social and collective decision-making\, and will explore how we can extend this strategy to more unconstrained social decision spaces\, typical of real-world collective intelligence. \n  \nPhoto by Alina Grubnyak on Unsplash \n***Want to attend one of our events? Sign up here.\nTo get regular updates\, subscribe to our mailing list from this page.\nThe Zoom Link will be sent the day before the lecture.
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-alan-trump-domink-deffner-david-mezey-scioi-p26-p34/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/09/tmt1-1.jpg
END:VEVENT
END:VCALENDAR