BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//scienceofintelligence.de - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:scienceofintelligence.de
X-ORIGINAL-URL:https://www.scienceofintelligence.de
X-WR-CALDESC:Events for scienceofintelligence.de
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230112T100000
DTEND;TZID=Europe/Berlin:20230112T110000
DTSTAMP:20260409T135524Z
CREATED:20221128T133344Z
LAST-MODIFIED:20250603T130102Z
UID:13400-1673517600-1673521200@www.scienceofintelligence.de
SUMMARY:Dustin Lehmann\, Fritz Francisco\, Jörg Raisch\, Pawel Romanczuk (Science of Intelligence)\, “Dynamical Adaptation and Learning: Knowledge Transfer and Cooperative Learning in Groups of Heterogeneous Agents”
DESCRIPTION:Abstract: \nIn groups of agents learning how to solve a common task\, interaction and knowledge transfer between agents are important and can vary depending on network topology. Heterogeneity is one of the key principles that influence the type and quality of interaction between learning agents. Different learning strategies and behaviors can be a driving factor for learning success at the group and individual level\, whereas differences in dynamics (or capabilities\, behaviors\, internal states\, etc.) can impede the direct transferability of knowledge and may require dynamical adaptation of the agents.\nIn this talk\, we show how to infer behavioral heterogeneity in learning groups of fish and how this affects future learning capabilities. Prior knowledge of social partners affects the outcome of learning processes and the timing of information uptake. We further investigate behavioral heterogeneity from the perspective of synthetic dynamic systems and how to transfer knowledge between dissimilar agents to enable cooperative learning of how to solve a common task. First results show how to exploit heterogeneity for learning in synthetic agents and which information gradient is beneficial when dealing with novel tasks in a social context.\n \nThis talk will take place in person at SCIoI. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-dustin-lehmann-fritz-francisco-jorg-raisch-pawel-romanczuk-dynamical-adaptation-and-learning-knowledge-transfer-and-cooperative-learning-in-groups-of-heterogeneous-agents/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/project-52.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230119T100000
DTEND;TZID=Europe/Berlin:20230119T113000
DTSTAMP:20260409T135524Z
CREATED:20230116T101152Z
LAST-MODIFIED:20250603T130050Z
UID:14043-1674122400-1674127800@www.scienceofintelligence.de
SUMMARY:David Garzón Ramos (Université Libre De Bruxelles)\, “Automatic Design of Robot Swarms: Context and Experiments”
DESCRIPTION:Abstract:\n \nSwarm robotics is a promising approach to the coordination of large groups of robots. Traditionally\, the design of collective behaviors for robot swarms has been an iterative manual process: a human designer manually refines the control software of the individual robots until the desired collective behavior emerges.\n\nIn this talk\, I discuss automatic design as an alternative approach to manual design. In automatic methods\, the design process is cast as an optimization problem: given a task to be performed by the swarm\, an optimization process designs a collective behavior to perform the task and produces appropriate control software for the robots. I focus on experiments that highlight the various aspects of the automatic design of robot swarms: classes of collective behaviors\, control architectures\, and the optimization process. In particular\, I present a case study on the design of shepherding behaviors for groups of robots. The results presented in this talk are outcomes of the project DEMIURGE\, an ERC-funded project devoted to the study of the automatic design of robot swarms (PI Mauro Birattari).\nThis talk will take place in person at SCIoI. \nPhoto by Omar Flores on Unsplash. \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-david-garzon-ramos-universite-libre-de-bruxelles-automatic-design-of-robot-swarms-context-and-experiments/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2023/01/omar-flores-lQT_bOWtysE-unsplash.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20230126T100000
DTEND;TZID=Europe/Berlin:20230126T110000
DTSTAMP:20260409T135524Z
CREATED:20221128T133841Z
LAST-MODIFIED:20240813T101630Z
UID:13403-1674727200-1674730800@www.scienceofintelligence.de
SUMMARY:Scott Robbins (Bonn University)\, “What Machines Shouldn’t Do”
DESCRIPTION:Abstract: \nFrom writing essays to evaluating potential hires\, machines are doing a lot these days. In all spheres of life\, it seems that machines are being delegated more and more decisions. Some of these machines are being delegated decisions that could have a significant impact on human lives. Examples of such machines are widespread and include machines evaluating loan applications\, machines evaluating criminals for sentencing\, autonomous weapon systems\, driverless cars\, digital assistants\, etc. Considering that machines cannot be held morally accountable for their actions (Bryson\, 2010; Johnson\, 2006; van Wynsberghe & Robbins\, 2018)\, the question that governments\, NGOs\, academics\, and the general public should be asking themselves is: how do we keep meaningful human control (MHC) over these machines? \n\nThe literature thus far details what features the machine or the context must have in order for MHC to be realized. Should humans be in the loop or on the loop? Should we force machines to be explainable? Lastly\, should we endow machines with moral reasoning capabilities? (Ekelhof\, 2019; Floridi et al.\, 2018; Robbins\, 2019a\, 2019b; Santoni de Sio & van den Hoven\, 2018; Wendell Wallach & Allen\, 2010; Wendell Wallach\, 2007). Rather than look to the machine itself or to what part humans have to play in the context\, I argue here that we should shine the spotlight on the decisions that machines are being delegated. Meaningful human control\, then\, will be about controlling what decisions get made by machines. \n\nI argue that keeping meaningful human control over machines (especially AI that relies on opaque methods) means restricting machines to decisions that do not require a justifying explanation and can\, in principle\, be proven efficacious. Because contemporary methodologies in AI are opaque\, many machines cannot offer explanations for their outputs. In many cases\, decisions require justifying explanations\, and we should therefore not use machines in such cases. It should come as no surprise that machines must be efficacious if they are to be used – especially in contexts that will have impacts on human beings. Increasingly\, however\, machines are being delegated decisions for which we are unable\, in principle\, to evaluate their efficacy. This should not happen. \n\nThese arguments lead to the conclusion that machines should be restricted to descriptive outputs. It must always be a human being deciding how to employ evaluative terms\, as these terms not only refer to specific states of affairs but also say something about how the world ought to be. Machines that make decisions based on opaque considerations should not be telling humans how the world ought to be. This is a breakdown of human control in the most severe way. Not only would we be losing control over specific decisions in specific contexts\, but we would also be losing control over what descriptive content grounds evaluative classifications. \n\nIn this talk\, I will first discuss what it means to say that a machine is ‘doing’ something. I then briefly discuss different proposals for MHC and why they fall short. I then argue that machines should not be delegated evaluative decisions\, as they require justifying explanations which machines cannot give and cannot be evaluated for efficacy. While this talk is framed negatively\, it is my hope that it focuses research and development on designing and building machines that help us realize our visions for how the world ought to be\, rather than machines that tell us how the world ought to be. Only humans can decide that.\n \nThis talk will take place in person at SCIoI. \nPhoto by David Levêque on Unsplash \n 
URL:https://www.scienceofintelligence.de/event/thursday-morning-talk-scott-robins-bonn-university-what-machines-shouldnt-do/
CATEGORIES:Thursday Morning Talk
ATTACH;FMTTYPE=image/jpeg:https://www.scienceofintelligence.de/wp-content/uploads/2022/11/david-leveque-GpNOhig3LSU-unsplash.jpg
END:VEVENT
END:VCALENDAR