BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230105T160000
DTEND;TZID=Europe/Berlin:20230105T160000
DTSTAMP:20260429T182919Z
CREATED:20221215T134407Z
LAST-MODIFIED:20250603T130112Z
UID:13676-1672934400-1672934400@www.scienceofintelligence.de
SUMMARY:Peter Neri (Laboratoire Des Systèmes Perceptifs\, CNRS\, Paris)\, “The Unreasonable Recalcitrance of Human Vision to Theoretical Domestication”
DESCRIPTION:Abstract: \nWe can view cortex from two fundamentally different perspectives: a powerful device for performing optimal inference\, or an assembly of biological components not built for achieving statistical optimality. The former approach is attractive thanks to its elegance and potentially wide applicability\, however the basic facts of human pattern vision do not support it. Instead\, they indicate that the idiosyncratic behaviour produced by visual cortex is largely dictated by its hardware components. The output of these components can be steered towards optimality by our cognitive apparatus\, but only to a marginal extent. We conclude that current theories of visually-guided behaviour are at best inadequate\, and we turn to neural networks in an attempt to establish whether the idiosyncratic character of human vision may be learnt from a larger repertoire of functional constraints\, such as the statistics of the natural environment. We challenge deep convolutional networks with the same stimuli/tasks used with human observers and apply equivalent characterization of the stimulus–response coupling. For shallow depth of behavioural characterization\, some variants of network-architecture/training-protocol produce human-like trends; however\, more articulate empirical descriptors expose glaring discrepancies. Our results urge caution in assessing whether neural networks do or do not capture human behavior: ultimately\, our ability to assess “success” in this area can only be as good as afforded by the depth of behavioral characterization against which the network is evaluated. More generally\, our results provide a compelling demonstration of how far we still are from securing an adequate computational account of even the most basic operations carried out by human vision. \nPhoto by Mathew Schwartz on Unsplash \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-peter-neri-laboratoire-des-systemes-perceptifs-cnrs-paris-the-unreasonable-recalcitrance-of-human-vision-to-theoretical-domestication/
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/12/mathew-schwartz-sb7RUrRMaC4-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230112T100000
DTEND;TZID=Europe/Berlin:20230112T110000
DTSTAMP:20260429T182919Z
CREATED:20221128T133344Z
LAST-MODIFIED:20250603T130102Z
UID:13400-1673517600-1673521200@www.scienceofintelligence.de
SUMMARY:Dustin Lehmann\, Fritz Francisco\, Jorg Raisch\, Pawel Romanczuk (Science of Intelligence)\, “Dynamical Adaptation and Learning: Knowledge Transfer and Cooperative Learning in Groups of Heterogeneous Agents”
DESCRIPTION:Abstract: \nIn groups of agents learning how to solve a common task\, interaction and knowledge transfer between agents are important and can vary depending on network topology. Heterogeneity is one of the key principles that influences the type and quality of interaction between learning agents. Different learning strategies and behaviors can be a driving factor for the learning success at the group and individual level\, whereas differences in dynamics (or capabilities\, behaviors\, internal states\, etc.) can impede the direct transferability of knowledge and may require dynamic adaptation of the agents.\nIn this talk\, we show how to infer behavioral heterogeneity in learning groups of fish and how this affects future learning capabilities. Prior knowledge of social partners affects the outcome of learning processes and the timing of information uptake. We further investigate behavioral heterogeneity from the perspective of synthetic dynamic systems and how to transfer knowledge between dissimilar agents to enable cooperative learning of how to solve a common task. First results show how to exploit heterogeneity for learning in synthetic agents and which information gradient is beneficial when dealing with novel tasks in a social context.\n \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-dustin-lehmann-fritz-francisco-jorg-raisch-pawel-romanczuk-dynamical-adaptation-and-learning-knowledge-transfer-and-cooperative-learning-in-groups-of-heterogeneous-agents/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/project-52.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230119T100000
DTEND;TZID=Europe/Berlin:20230119T113000
DTSTAMP:20260429T182919Z
CREATED:20230116T101152Z
LAST-MODIFIED:20250603T130050Z
UID:14043-1674122400-1674127800@www.scienceofintelligence.de
SUMMARY:David Garzón Ramos (Université Libre De Bruxelles)\, “Automatic Design of Robot Swarms: Context and Experiments”
DESCRIPTION:Abstract:\n \nSwarm robotics is a promising approach to the coordination of large groups of robots. Traditionally\, the design of collective behaviors for robot swarms has been an iterative manual process: a human designer manually refines the control software of the individual robots until the desired collective behavior emerges.\n\nIn this talk\, I discuss automatic design as an alternative approach to manual design. In automatic methods\, the design process is cast into an optimization problem: given a task to be performed by the swarm\, an optimization process designs a collective behavior to perform the task and produces appropriate control software for the robots. I focus on experiments that highlight the various aspects of the automatic design of robot swarms: classes of collective behaviors\, control architectures\, and the optimization process. In particular\, I present a case study on the design of shepherding behaviors for groups of robots. The results presented in this talk are outcomes of the project DEMIURGE\, an ERC-funded project devoted to the study of the automatic design of robot swarms (PI Mauro Birattari).\nThis talk will take place in person at SCIoI. \nPhoto by Omar Flores on Unsplash. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-david-garzon-ramos-universite-libre-de-bruxelles-automatic-design-of-robot-swarms-context-and-experiments/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/omar-flores-lQT_bOWtysE-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230119T160000
DTEND;TZID=Europe/Berlin:20230119T173000
DTSTAMP:20260429T182919Z
CREATED:20230102T111439Z
LAST-MODIFIED:20240813T101638Z
UID:13961-1674144000-1674149400@www.scienceofintelligence.de
SUMMARY:Ingmar Posner (University of Oxford)\, "Learning to Perceive and to Act - Disentangling Tales from (Structured) Latent Space"
DESCRIPTION:Abstract:\nUnsupervised learning is experiencing a renaissance. Driven by an abundance of unlabelled data and the advent of deep generative models\, machines are now able to synthesise complex images\, videos and sounds. In robotics\, one of the most promising features of these models – the ability to learn structured latent spaces – is gradually gaining traction. The ability of a deep generative model to disentangle semantic information into individual latent-space dimensions seems naturally suited to state-space estimation. Combining this information with generative world-models\, models which are able to predict the likely sequence of future states given an initial observation\, is widely recognised to be a promising research direction with applications in perception\, planning and control. Yet\, to date\, designing generative models capable of decomposing and synthesising scenes based on higher-level concepts such as objects remains elusive in all but simple cases. In this talk I will motivate and describe our recent work using deep generative models for unsupervised object-centric scene inference and generation. Furthermore\, I will make the case that exploiting correlations encoded in latent space\, and learnt through experience\, leads to a powerful and intuitive way to disentangle and manipulate task-relevant factors of variation. I will show that this not only casts a novel light on affordance learning\, but also that the same framework is capable of generating plans executable on complex real-world robot platforms. \nPhoto courtesy of Ingmar Posner. \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/distinguished-speaker-series-ingmar-posner-university-of-oxford-learning-to-perceive-and-to-act-disentangling-tales-from-structured-latent-space/
CATEGORIES:Distinguished Speaker Series
ATTACH;FMTTYPE=image/png:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/PastedGraphic-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230126T100000
DTEND;TZID=Europe/Berlin:20230126T110000
DTSTAMP:20260429T182919Z
CREATED:20221128T133841Z
LAST-MODIFIED:20240813T101630Z
UID:13403-1674727200-1674730800@www.scienceofintelligence.de
SUMMARY:Scott Robbins (Bonn University)\, "What Machines Shouldn't Do"
DESCRIPTION:Abstract: \nFrom writing essays to evaluating potential hires\, machines are doing a lot these days. In all spheres of life\, it seems that machines are being delegated more and more decisions. Some of these machines are being delegated decisions that could have significant impact on human lives. Examples of such machines which have caused such impact are widespread and include machines evaluating loan applications\, machines evaluating criminals for sentencing\, autonomous weapon systems\, driverless cars\, digital assistants\, etc. Considering that machines cannot be held morally accountable for their actions (Bryson\, 2010; Johnson\, 2006; van Wynsberghe & Robbins\, 2018)\, the question that governments\, NGOs\, academics\, and the general public should be asking themselves is: how do we keep meaningful human control (MHC) over these machines? \n\nThe literature thus far details what features the machine or the context must have in order for MHC to be realized. Should humans be in the loop or on the loop? Should we force machines to be explainable? Lastly\, should we endow machines with moral reasoning capabilities? (Ekelhof\, 2019; Floridi et al.\, 2018; Robbins\, 2019a\, 2019b; Santoni de Sio & van den Hoven\, 2018; Wendell Wallach & Allen\, 2010; Wendell Wallach\, 2007). Rather than look to the machine itself or what part humans have to play in the context\, I argue here that we should shine the spotlight on the decisions that machines are being delegated. Meaningful human control\, then\, will be about controlling what decisions get made by machines. \n\nI argue that keeping meaningful human control over machines (especially AI which relies on opaque methods) means restricting machines to decisions that do not require a justifying explanation and can\, in principle\, be proven efficacious. Because contemporary methodologies in AI are opaque\, many machines cannot offer explanations for their outputs. In many cases\, decisions require justifying explanations\, and we should therefore not use machines for such cases. It won’t be surprising that machines should be efficacious if they are to be used – especially in contexts that will have impacts on human beings. Increasingly\, however\, machines are being delegated decisions for which we are unable\, in principle\, to evaluate their efficacy. This should not happen. \n\nThese arguments lead to the conclusion that machines should be restricted to descriptive outputs. It must always be a human being deciding how to employ evaluative terms as these terms not only refer to specific states of affairs but also say something about how the world ought to be. Machines which are able to make decisions based on opaque considerations should not be telling humans how the world ought to be. This is a breakdown of human control in the most severe way. Not only would we be losing control over specific decisions in specific contexts\, but we would be losing control over what descriptive content grounds evaluative classifications. \n\nIn this talk\, I will first discuss what it means to say that a machine is ‘doing’ something. I then briefly discuss different proposals for MHC and why they fall short. I then argue that machines should not be delegated evaluative decisions as they require justifying explanations which machines cannot give and cannot be evaluated for efficacy. While this talk is framed negatively\, it is my hope that this focuses research and development to design and build machines to help us realize our visions for how the world ought to be\, rather than machines that tell us how the world ought to be. Only humans can decide that.\n \nThis talk will take place in person at SCIoI. \nPhoto by David Levêque on Unsplash \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-scott-robins-bonn-university-what-machines-shouldnt-do/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/david-leveque-GpNOhig3LSU-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230126T160000
DTEND;TZID=Europe/Berlin:20230126T173000
DTSTAMP:20260429T182919Z
CREATED:20230117T132407Z
LAST-MODIFIED:20250603T130033Z
UID:14062-1674748800-1674754200@www.scienceofintelligence.de
SUMMARY:Lars Lewejohann (Science of Intelligence)\, “What’s on a Mouse’s Mind? Behavioral Measures to Understand Experiences and Needs of an Animal”
DESCRIPTION:Lars Lewejohann\, Freie Universität Berlin\, German Federal Institute for Risk Assessment (BfR)\, German Centre for the Protection of Laboratory Animals (Bf3R) \nMice\, like all other living creatures\, have adapted to specific living conditions in the course of evolution. From a human point of view\, the behavior of animals is therefore not always easy to understand. This applies not only to the question of whether mice are actually capable of behaving intelligently\, but also to the question of what is necessary for optimizing the welfare of laboratory animals. In our work\, we are interested in both questions and follow an animal-centered approach\, giving mice their say. Of course mice cannot fill out questionnaires\, but we have developed a series of behavioral tests that allow us to query the animals. In this lecture I will outline our approach with regard to improving housing and living conditions as well as the implications of using mice as a model species for the science of intelligence.
URL:https://www.scienceofintelligence.de/event/pi-lecture-with-lars-lewejohann-whats-on-a-mouses-mind-behavioral-measures-to-understand-experiences-and-needs-of-an-animal/
LOCATION:MAR 2.057
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2021/12/lewejohann_800.jpg
END:VEVENT
END:VCALENDAR