Program and Speakers

Schedule

Monday, 21 August
9:00 Registration
9:30 Opening
10:00 Jörg Raisch
11:00 Javier Alonso-Mora
12:00 Lunch
13:00 Networking
14:00 Sabine Hauert
15:00 Tutorial: Wolfgang Hönig (Room 2.013)
17:00 Transit
18:00 Dinner @ Schleusenkrug

Tuesday, 22 August
9:00 Marc Toussaint
10:00 Georg Martius
11:00 Xiaolong Wang
12:00 Lunch
13:00 Tutorial: Pia Bideau (Room 2.013)
15:00 Small dinner packet to go and transit
16:00 Berlin Excursion

Wednesday, 23 August
9:00 Pawel Romanczuk
10:00 Basil el Jundi
11:00 Mike Webster
12:00 Lunch
13:00 Spotlight talks
14:00 Poster/Demo Session
17:00 Transit
18:00 Dinner @ Weltwirtschaft

Thursday, 24 August
9:00 Guillermo Gallego
10:00 Cornelia Fermueller
11:00 Panel Discussion
12:00 Lunch
13:00 Alessio Franci
14:00 Guido de Croon
15:00 Tutorial: Pia Bideau & Wolfgang Hönig (Room 0.001)
17:00 Transit
18:00 Dinner @ Manjurani

Friday, 25 August
9:00 Heiko Hamann
10:00 Shinkyu Park
11:00 Lauren Sumner-Rooney
12:00 Lunch
13:00 Martin Saska
14:00 Iain Couzin
15:00 Tutorial: David Bierbach (Room 2.013)
17:00 Closing event with barbecue

All lectures take place in room 2.057.

Monday, August 21, 2023


9:00 – 10:00

Opening and Welcome Words

10:00 – 11:00

Jörg Raisch

TU-Berlin

Achieving Consensus in Multi-Agent Systems

About Speaker

Jörg Raisch studied Engineering Cybernetics at Stuttgart University and Control Systems at UMIST, Manchester, UK. He received a PhD and a Habilitation degree, both from Stuttgart University. He holds the chair for Control Systems in the EECS Department at TU Berlin, and he is also an external scientific member of the Max Planck Institute for Dynamics of Complex Technical Systems. His main research interests are hybrid and hierarchical control, distributed cooperative control, and control of timed discrete event systems in tropical algebras, with applications in chemical, medical, and power systems engineering. He was on the editorial boards of the European Journal of Control, the IEEE Transactions on Control Systems Technology, and Automatica, and served as chair of IFAC Technical Committee 1.3 (Discrete Event and Hybrid Systems). He is on the editorial boards of Discrete Event Dynamic Systems and Foundations and Trends in Systems and Control.

Abstract

In this talk, I will discuss how multi-agent systems, in the absence of a dedicated central decision unit, can achieve consensus, i.e., agree on objectives and relevant information about the environment. I will focus on two distinct types of consensus, namely average and max consensus, and explain why they are relevant for multi-robot scenarios. I will summarise standard results, both for constant and time-varying information topologies. I will then briefly outline how emerging mobile communication technology makes it possible to exploit the superposition property of the wireless communication channel to achieve consensus in large groups much more efficiently. If time permits, this will be illustrated with examples from traffic automation, such as automatic lane changing and distributed automation of traffic intersections for autonomous vehicles.
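
To make the two notions concrete, here is a minimal Python sketch (our illustration, not material from the talk) of synchronous average and max consensus on a fixed graph; the adjacency matrix, step size, and the ring example are assumptions chosen for illustration.

```python
import numpy as np

def average_consensus_step(x, A, eps=0.2):
    """One synchronous average-consensus update: each agent moves toward
    its neighbors' values. x: one scalar per agent; A: 0/1 adjacency
    matrix of the (undirected, connected) communication graph."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    return x - eps * (L @ x)         # converges to the mean if 0 < eps < 1/max_degree

def max_consensus_step(x, A):
    """One max-consensus update: each agent keeps the largest value
    among its own and its neighbors' values."""
    n = len(x)
    return np.array([max([x[i]] + [x[j] for j in range(n) if A[i, j]])
                     for i in range(n)])

# Example: four agents on a ring agree on the average of their initial values.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(100):
    x = average_consensus_step(x, A)
print(x)  # all entries approach 2.5
```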

11:00 – 12:00

Javier Alonso-Mora

TU-Delft

About Speaker

Dr. Javier Alonso-Mora is an Associate Professor at the Cognitive Robotics department of the Delft University of Technology, where he leads the Autonomous Multi-robots Laboratory. He is a Principal Investigator at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) and co-founder of The Routing Company. He is actively involved in the Delft robotics ecosystem, including the Robotics Institute, the Transportation Institute and RoboValley.

His main research interest is in navigation, motion planning and control of autonomous mobile robots, with a special emphasis on multi-robot systems, on-demand transportation and robots that interact with other robots and humans in dynamic and uncertain environments. He is the recipient of multiple prizes and grants, including the ICRA Best Paper Award on Multi-robot Systems (2019), an Amazon Research Award (2019) and a talent scheme VENI award from the Netherlands Organisation for Scientific Research (2017).

Abstract

will follow

12:00 – 13:00

Lunch & Group-Photo (2.057)

13:00 – 14:00

Networking

14:00 – 15:00

Sabine Hauert

University of Bristol

Swarms for People

About Speaker

Sabine Hauert is Associate Professor (Reader) of Swarm Engineering at the University of Bristol in the UK. Her research focusses on making swarms for people, across scales: from nanorobots for cancer treatment to larger robots for environmental monitoring or logistics. Before joining the University of Bristol, Sabine engineered swarms of nanoparticles for cancer treatment at MIT, and deployed swarms of flying robots at EPFL.

Sabine is also President and Co-founder of Robohub.org, and executive trustee of AIhub.org, two non-profits dedicated to connecting the robotics and AI communities to the public.

As an expert in science communication with 10 years of experience, Sabine is often invited to discuss the future of robotics and AI, including in the journal Nature, at the European Parliament, and at the Royal Society. Her work has been featured in mainstream media including BBC, CNN, The Guardian, The Economist, TEDx, WIRED, and New Scientist.

Abstract

As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Larger robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

15:00 – 17:00

Wolfgang Hönig

TU-Berlin

Multi-Robot Coordination

About Speaker

Wolfgang Hönig is an independent junior research group leader at TU-Berlin heading the Intelligent Multi-Robot Coordination Lab. Previously, he was a postdoctoral scholar at the Department of Aerospace, California Institute of Technology, advised by Soon-Jo Chung. He holds a PhD in Computer Science from the University of Southern California, where he was advised by Nora Ayanian. His research focuses on enabling large teams of physical robots to collaboratively solve real-world tasks, using tools from informed search, optimisation, and machine learning.

Intelligent behavior of a single robot is not sufficient for executing tasks effectively with a robotic team. First, we look at the challenges that arise in control, motion planning, and general decision-making when moving from a single robot to cooperative behavior. Then, we discuss algorithms for overcoming these challenges, including the Hungarian method, Buffered Voronoi Cells, and Conflict-Based Search.

Abstract

One of the foundational behaviors for a team of robots is to be able to move in their environment without any collisions with other robots or obstacles. In multi-robot motion planning, we typically assume that the environment is fully known and that we can plan in a centralized fashion for the whole team of robots. In collision avoidance, we take a more reactive approach, where each robot changes its motion based on the robot’s perceptual input to avoid collisions in a distributed fashion.

This tutorial provides practical insights into Buffered Voronoi Cells (BVC), a modern approach for distributed collision avoidance that only requires the sensing of neighbor positions. Together with single-robot planning techniques, it can be effectively used to plan smooth motions for many mobile robots, including differential-drive robots, car-like robots, and multirotors.

We program in Python and verify the resulting approach in a robotics simulator, assuming that we know the relative positions between the robots. A real-robot experiment with a team of LEGO Mindstorms robots will connect the perception pipeline from the first tutorial to multi-robot coordination.
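
As a taste of the implementation part, here is a minimal numpy sketch of the BVC idea (an illustration under simplified assumptions, not the tutorial's reference code): build one half-plane constraint per neighbor and project the goal into the resulting cell.

```python
import numpy as np

def bvc_waypoint(p_i, neighbor_positions, goal, r_safe, iters=100):
    """Compute a safe waypoint for robot i by projecting `goal` into its
    buffered Voronoi cell. Each neighbor j induces a half-plane
        a . x <= b,  with  a = p_j - p_i,  b = a . (p_i + p_j)/2 - r_safe * ||a||,
    i.e. the Voronoi boundary shifted toward robot i by the safety radius.
    We use cyclic projection onto violated half-planes (simple, but not
    exact for all geometries; the paper solves a small QP instead)."""
    planes = []
    for p_j in neighbor_positions:
        a = p_j - p_i
        b = a @ (p_i + p_j) / 2 - r_safe * np.linalg.norm(a)
        planes.append((a, b))
    x = goal.astype(float).copy()
    for _ in range(iters):
        for a, b in planes:
            if a @ x > b:                          # outside the cell:
                x -= (a @ x - b) / (a @ a) * a     # project onto the boundary
    return x  # track this waypoint with a single-robot planner/controller

# Two robots approaching head-on: the waypoint stops short of the buffered
# Voronoi boundary instead of reaching the (unsafe) goal.
p_i = np.array([0.0, 0.0])
print(bvc_waypoint(p_i, [np.array([1.0, 0.0])], np.array([2.0, 0.0]), r_safe=0.2))
# -> [0.3 0. ]
```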

 

Theory: 

  • Differential Flatness and Bézier-curve Optimization
  • Buffered Voronoi Cells (BVC)
  • Extensions to learning-based algorithms

Practice (implementation): 

  • Work with a robotics simulator to execute motions
  • Implement collision avoidance, given helper functions for single-robot motion planning and Voronoi computation

Practice (testing): 

  • Testing the developed algorithm in simulation
  • Demo on a team of LEGO Mindstorms robots

Some reading material:

D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered Voronoi cells,” IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 2, pp. 1047–1054, 2017, doi: 10.1109/LRA.2017.2656241.

18:00 – 20:00

Dinner

Tuesday, August 22, 2023

9:00 – 10:00

Marc Toussaint

TU-Berlin

Action and Scene Representations for Robotic Manipulation

About Speaker

Marc Toussaint works at TU Berlin on Learning and Intelligent Systems.  In his view, a key in understanding and creating intelligence is the interplay of learning and reasoning, where learning becomes the enabler for strongly generalizing reasoning and acting in our physical world. Within SCIoI, he is interested in studying computational methods and representations to enable efficient learning and general purpose physical reasoning, and demonstrating such capabilities on real-world robotic systems.

Abstract

While in recent years we worked top-down, considering high-level Task-and-Motion Planning (TAMP) problems first and then developing methods to combine TAMP solvers with perception and control, in this talk I will go bottom-up: I’ll first discuss our take on appropriate representations for reactive manipulation (e.g. using field-based object representations to learn manipulation constraints and multi-object dynamics, as well as Sequence-of-Constraints MPC for control), and then discuss how higher-level systems could build on this — be it our TAMP-style solvers or LLMs.

10:00 – 11:00

Georg Martius

Max Planck Institute for Intelligent Systems

Machine learning algorithms for autonomously learning robots

About Speaker

Georg Martius works as a group leader at the Max Planck Institute for Intelligent Systems. He is interested in autonomous learning, i.e., how an embodied agent can determine what to learn, how to learn, and how to judge its learning success. He believes that robots need to learn from experience to become dexterous and versatile assistants to humans in many real-world domains. Intrinsically motivated learning can help to create a suitable learning curriculum and lead to capable systems without the need to specify every little detail of that process. Here we take inspiration from child development.

Abstract

I am driven by the question of how robots can autonomously develop skills and learn to become versatile helpers for humans. Considering children, it seems natural that they have their own agenda. They playfully explore their environment, without the necessity for somebody to tell them exactly what to do next. Replicating such flexible learning in machines is highly challenging. I will present my research on different machine learning methods as steps towards solving this challenge. Part of my research is concerned with artificial intrinsic motivations — their mathematical formulation and embedding into learning systems. Equally important is learning the right representations and internal models, and I will show how powerful intrinsic motivations can be derived from learned models. With model-based reinforcement learning and planning methods, I show how we can achieve active exploration and playful robots, but also safety-aware behavior. A fascinating feature is that these learning-by-playing systems perform well zero-shot on unseen tasks.

When autonomous systems need to make decisions at a higher level, such as deciding about an appropriate order of subtasks in an assembly task, they need to implicitly solve combinatorial problems, which pose a considerable challenge to current deep learning methods. We recently proposed the first unified way to embed a large class of combinatorial algorithms into deep learning architectures, which I will present along with possible applications to robotics.

11:00 – 12:00

Xiaolong Wang

University of California, San Diego

Geometric Robot Learning for Generalizable Skills Acquisition

About Speaker

Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego. He is affiliated with the CSE department, the Center for Visual Computing, the Contextual Robotics Institute, and the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University and completed his postdoctoral training at the University of California, Berkeley. His research focuses on the intersection of computer vision and robotics. He is particularly interested in learning rich 3D representations from large-scale video with minimal cost, and in using these representations to guide robot learning. He is the recipient of the NSF CAREER Award, a Sony Research Award, and an Amazon Research Award.

Abstract

Robot learning has witnessed significant progress in terms of generalization in the past few years. At the heart of such a generalization, the advancement of representation learning, such as image and text foundation models plays an important role. While these achievements are encouraging, most tasks conducted are relatively simple. In this talk, I will talk about our recent efforts on learning generalizable skills focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation even without visual input, (ii) the collection of large-scale robot physical interaction demonstrations for imitation learning using a simple and user-friendly visual teleoperation system, and (iii) large-scale 3D representation learning that generalizes Reinforcement Learning policies across diverse objects and scenes. I will also showcase the real-world applications of our research, including dexterous manipulation and legged locomotion control.

12:00 – 13:00

Lunch

13:00 – 15:00

Pia Bideau

TU-Berlin

Individual Robot Perception and Learning

About Speaker

Pia Bideau is a postdoctoral researcher at TU-Berlin and part of the Cluster of Excellence Science of Intelligence since January 2020. Her research addresses how one can teach a computer to see and understand the world as we humans do, the strengths and weaknesses of a computer vision system compared to the human visual system, and how the two systems can learn from each other. We move, we discover new interesting things that raise our curiosity when a perceived situation doesn't match our expectations, and we learn. Pia's research focuses on motion: our own motion as well as our motion perception. Motion is a key ability that we as living beings use to explore our environment. Our motion, for example, helps us perceive depth, and the motion of objects helps us recognize these objects even if they are unknown to us. Motion in the visual world helps us understand the unstructured environment we live in. Before joining the Cluster, Pia received her PhD from the University of Massachusetts, Amherst (USA), working with Prof. Erik Learned-Miller, and worked with Cordelia Schmid and Karteek Alahari during an internship at Inria in Grenoble (France).

Abstract

Distance estimation is an essential part of scene recognition and orientation, allowing agents to move in a natural environment. In particular, when animals move in teams (e.g. fish schools or flocks of birds), they seem to be very capable of doing this: efficiently and accurately enough that quite astonishing behaviors arise when they move together as a collective. Different sensor systems, but also different movement strategies, enable these agents to localize themselves relative to one another. Vision is probably the sensor system studied in greatest detail, but other sensor systems, such as ultrasonic systems, allow agents to “see” distance with their ears, even in low-light conditions.

 

This tutorial will give an introduction to learning-based approaches for distance estimation using vision. While there are several cues for extracting information about distance, we will focus here on object appearance and its relative size. Objects at greater distance appear smaller than objects nearby; this is one of the fundamental principles of perspective projection. We will extend a classical object detector (such as YOLO) with the ability to estimate distance. As we wish to test our developed algorithm on a real robotic system, a focus lies on fast and efficient computation. For testing, a LEGO Mindstorms robot equipped with an RGB camera and a Raspberry Pi will be used.
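
The core geometric cue fits in a few lines; the following sketch (our illustration, with assumed focal length and object size, not the tutorial's code) shows the pinhole-camera relation that a learned distance head builds on.

```python
def distance_from_bbox(bbox_height_px: float, real_height_m: float,
                       focal_px: float) -> float:
    """Pinhole camera model: an object of real height H at distance Z
    projects to h = f * H / Z pixels, hence Z = f * H / h."""
    return focal_px * real_height_m / bbox_height_px

# Example (assumed numbers): a 0.05 m tall robot imaged 40 px tall by a
# camera with a 500 px focal length is about 0.625 m away.
print(distance_from_bbox(40, 0.05, 500))
```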

 

  • Theory: Introduction to efficient object detection with YOLO
  • Practice (implementation): Learning absolute distance estimates
        Extending object bounding-box predictions with distance estimates
        A new training loss for distance estimates
  • Practice (testing): Testing the developed algorithm on a LEGO Mindstorms robot

Some reading material:

Detection and Distance estimation via YOLO-Dist:
https://www.mdpi.com/2076-3417/12/3/1354#cite

YOLO overview:
https://www.v7labs.com/blog/yolo-object-detection

https://pjreddie.com/media/files/papers/YOLOv3.pdf

https://pytorch.org/hub/ultralytics_yolov5/

15:00 – 18:00

Berlin Excursion

18:00 – 20:00

Dinner

Wednesday, August 23, 2023

9:00 – 10:00

Pawel Romanczuk

Humboldt-Universität zu Berlin

Self-organization and collective-information processing in animal groups

About Speaker

Pawel Romanczuk works at HU Berlin, at the interface of applied mathematics, theoretical physics, and behavioral biology. He focuses on the collective behavior of organismic systems. His research bridges analytical and synthetic sciences to study self-organization, evolutionary adaptations, and functional dynamical behavior.

Abstract

Collective behavior of animals is a fascinating example of self-organization in biology, where complex collective behaviors emerge from simple inter-individual interactions. On the other hand, collective animal behavior is a product of evolution, and is thus assumed to confer fitness benefits to individuals, for example by enabling the exchange of social information, promoting accurate collective decisions, or conferring protection from predators. The corresponding interplay between self-organization and function of animal collectives is our main research focus, which we study primarily through agent-based modeling in collaboration with experimental partners. In this lecture, I will first give a brief introduction to the modeling of collective movement, including new developments on vision-based flocking. Second, I will discuss our recent results on optimal information processing and the so-called “criticality hypothesis”, which proposes that animal collectives should operate in a special parameter region close to critical points, where various aspects of collective computation become optimal.
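
For readers new to such models, a minimal Vicsek-style alignment model (a standard textbook example, not the speaker's code) already produces collective movement from purely local rules:

```python
import numpy as np

def vicsek_step(pos, theta, r=1.0, eta=0.2, v=0.03, L=10.0, rng=np.random):
    """One update of the classic Vicsek model: every agent adopts the mean
    heading of all neighbors within radius r, plus rotational noise eta.
    pos: (N, 2) positions in a periodic box of side L; theta: (N,) headings."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                      # periodic boundary conditions
    neigh = (d ** 2).sum(-1) < r ** 2             # neighbor mask (includes self)
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * (rng.random(len(pos)) - 0.5)
    pos = (pos + v * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    return pos, theta

# 200 agents with random headings gradually align into a moving flock.
rng = np.random.default_rng(0)
pos, theta = rng.random((200, 2)) * 10.0, rng.random(200) * 2 * np.pi
for _ in range(500):
    pos, theta = vicsek_step(pos, theta, rng=rng)
```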

10:00 – 11:00

Basil el Jundi

Norwegian University of Science and Technology

Unravelling the Behavioral and Neural Mechanisms of Insect Migration

About Speaker

Basil el Jundi works as associate professor at the Norwegian University of Science and Technology. He and his team are interested in understanding the behavioral and neural mechanisms underlying spatial orientation in insects. Currently, they are studying the use of compass cues in monarch butterflies and how they are encoded in the butterfly brain. These butterflies are famous for their spectacular annual migration from North America to Central Mexico. How are different navigation cues used for orientation and how are they linked in the brain? To understand this, we perform behavioral experiments (flight simulator) combined with anatomical (confocal imaging, 3D modeling) and electrophysiological studies (intracellular and tetrode recordings).

Abstract

Many animals are well known for their spectacular migrations across the surface of the earth. Remarkably, they can migrate over thousands of kilometers to reach a highly specific location at the end of their journey. One prime example is the annual migration of Monarch butterflies. Each fall, millions of these colorful butterflies migrate more than 5,000 km from North America and Canada to their overwintering habitat in the mountain ranges of Central Mexico. In my group, we are interested in understanding how these insects master such a remarkable migration despite having a brain smaller than a grain of rice. We study the compass of Monarch butterflies through behavioral and neuroanatomical techniques, as well as electrophysiological approaches such as multichannel tetrode recordings from tethered flying butterflies. Our recent results suggest that the Monarch butterfly compass relies on multimodal information for orientation and changes its coding in a locomotor-dependent manner. Moreover, we discovered that the Monarch brain houses goal-direction neurons, similar to the ones described in the mammalian brain. Taken together, Monarch butterflies represent an ideal model organism for shedding light on the fundamental principles of animal navigation, from neural principles to behavioral mechanisms.

11:00 – 12:00

Mike Webster

University of St. Andrews

The STRANGE framework for improving experimental design, reporting standards and reproducibility

About Speaker

Mike Webster works at the University of St. Andrews. He is interested in the behaviour of group-living animals, including social foraging, competition, information diffusion and predator-prey interactions. His work investigates the benefits and costs of grouping, how groups form and function and how the behaviour of individuals shapes, and is shaped by, that of the group. He is also interested in sampling biases in animal behaviour, with a focus on how these arise and how they are reported.

Abstract

Animal behaviour researchers are working hard to improve reproducibility. Most studies are susceptible to sampling biases, testing subjects that are not fully representative of the wider populations for which they seek to make inferences. Biased sample composition can affect the interpretation of data, limit the generalisability of results, complicate comparisons between studies, and ultimately, hamper reproducibility. The STRANGE framework was developed to help animal behaviour researchers identify, mitigate and report sampling biases. STRANGE refers to test subjects’: Social background; Trappability and self-selection; Rearing history; Acclimation and habituation; Natural changes in responsiveness; Genetic make-up; and Experience. These factors are not in themselves problematic, and are often the focus of well-designed research projects, or are explicitly controlled for. Concerns arise whenever samples of subjects are biased with regards to any of these factors and researchers do not account for this. STRANGE encourages an in-depth examination of the causes and consequences of sampling biases, and provides guidance on improving experimental designs and reporting standards.

12:00 – 13:00

Lunch

13:00 – 14:00

Spotlight talks

13:00 – 13:10

Logan Beaver

Boston University

Constraint-Driven Control for Adaptive and Intelligent Systems

 

Abstract

Constraint-driven control is an emerging technique for the control of multi-agent systems in a truly decentralized manner. Under a constraint-driven framework, each agent solves its own optimization problem with the objective of minimizing its energy (or power) consumption subject to a set of task and safety constraints. This approach has the potential to revolutionize autonomous systems; the constraints that drive agent behavior are functions of the local environment and system state, and thus agents can adapt to changes and uncertainty in a data-driven manner. Furthermore, it empowers agents to seamlessly interact in open systems where the total system size may be unknown and changing. In this talk, I will give a conceptual overview of constraint-driven control and present recent results that demonstrate the emergence of collective motion in constraint-driven systems.
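
A toy instance of this idea (our own sketch, not the speaker's formulation) is the problem "minimize control effort subject to linear task and safety constraints". The snippet below solves it by cyclic projection for brevity; a real implementation would typically call a QP solver.

```python
import numpy as np

def min_energy_control(constraints, iters=100):
    """Find a low-energy input u for  min ||u||^2  s.t.  a_k . u >= b_k.
    Cyclic projection from u = 0 yields a feasible point near the origin
    (exact minimizer here because the example constraints are orthogonal;
    in general, use a QP solver for the exact answer)."""
    u = np.zeros(2)
    for _ in range(iters):
        for a, b in constraints:
            if a @ u < b:                          # violated: project onto boundary
                u = u + (b - a @ u) / (a @ a) * a
    return u

# Task: make at least 1 m/s progress toward the goal (+x) while a safety
# constraint (e.g. from a control barrier function) demands u_y >= 0.5.
print(min_energy_control([(np.array([1.0, 0.0]), 1.0),
                          (np.array([0.0, 1.0]), 0.5)]))   # -> [1.  0.5]
```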

13:10 – 13:20

Daphne Cornelisse

NYU Tandon

Towards human-like driving agents through human-regularized RL

 

Abstract

To develop autonomous vehicles that can be safely integrated into human-populated areas, we require rich traffic simulations with realistic human-like drivers. It is increasingly common to use RL algorithms to develop policies for driving agents. However, these algorithms are inefficient and fail to produce human-like behavior in multi-agent settings. To create more realistic models of human driving agents, we explore regularizing an on-policy RL algorithm with human driving trajectories in Nocturne: a 2D driving simulator that offers a range of traffic scenarios. We show that imposing a slight penalty for deviating from the human policy via this regularization term results in better-performing agents across several traffic scenarios. These agents exhibit more human-like behavior and require less training time.
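
The regularization term is easy to state; the following sketch (a generic formulation written for illustration, not the authors' exact objective or code) adds a weighted KL penalty toward a human policy to a standard policy-gradient loss.

```python
import numpy as np

def human_regularized_pg_loss(logp, advantages, pi_theta, pi_human, lam=0.1):
    """Policy-gradient loss plus a penalty for deviating from human behavior.
    logp: (N,) log-probabilities of the taken actions under the learner;
    advantages: (N,) advantage estimates;
    pi_theta, pi_human: (N, n_actions) action distributions per visited state;
    lam: regularization weight (kept small so agents stay near human driving)."""
    pg_loss = -(logp * advantages).mean()
    kl = (pi_theta * np.log(pi_theta / pi_human)).sum(axis=1).mean()
    return pg_loss + lam * kl
```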

13:20 – 13:30

Alex Mitrevski

Institute for AI and Autonomous Systems, Hochschule Bonn-Rhein-Sieg, Germany

Context- and Failure-Aware Robots for Everyday Environments

 

Abstract

Robots designed for everyday, human-centred environments need the ability to handle execution failures and learn from them, as failures reduce the reliability of robots and can thus have a negative effect on overall robot acceptance. However, most existing approaches for modelling the behaviour of robots, for instance in the cognitive robotics context, either ignore the occurrence of failures or only include mechanisms for preventing them; they do not consider how failures can be analysed and resolved permanently when they do occur. In this talk, I will briefly describe my work on robot execution failures and contextual adaptation. I will particularly discuss a representation for parameterising robot skills, called an execution model, and explain how it can be used for understanding and correcting execution failures. I will also briefly discuss some ideas for representing knowledge about execution contexts and how such knowledge can be useful for personalising a robot's behaviour, thereby making it possible to avoid perceived failures in human-robot interaction and collaboration scenarios.

13:30 – 13:40

Patrick Govoni

Humboldt-Universität zu Berlin & Max Planck Institute for Human Cognitive and Brain Sciences

No title

 

Abstract

Individual foraging strategy, comprising spatial navigation, memory, and decision-making, can be seen as a local solution to a common problem: acquiring patchily distributed resources in a dynamic environment. In an animal group, this computation increases in complexity, requiring an individual to sense both spatial and social factors. How the underlying representation develops to balance these two sensory inputs has received little attention. By simulating and evolving agents equipped with visual, memory, and decision-making neural network modules, I seek to demonstrate that fundamental dynamical manifolds emerge to coordinate environmental and social features with the local food distribution, with increasingly significant attention to social features in groups, as individuals can leverage collective computation to forage more effectively.

13:40 – 13:50

Abhishek Naik

University of Alberta

Unifying Perspectives on Intelligence: What Reinforcement Learning Adds to the Common Model of the Agent

 

Abstract

Many related fields study the phenomenon of intelligence: computer science, psychology, cognitive neuroscience, ethology, behavioral economics, etc. Despite many differences, they share a common model of an ‘intelligent’ agent. I will show where this common model fits within Marr’s levels of understanding of an information-processing system, and what the paradigm of RL adds to the common model, making it a strong candidate for a computational theory of the mind.

14:00 – 17:00

Poster/Demo Session

will follow

18:00 – 20:00

Dinner

Thursday, August 24, 2023

9:00 – 10:00

Guillermo Gallego

TU-Berlin

An introduction to event cameras and their applications

About Speaker

Guillermo Gallego works on Robotic Interactive Perception at TU Berlin as well as on computer vision and robotics. He focuses on robot perception and on optimization methods for interdisciplinary imaging and control problems. Inspired by the human visual system, he works toward improving the perception systems of artificial agents, endowing them with intelligence to transform raw sensor data into knowledge, and to provide autonomy in changing environments.

Abstract

Event cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging problems in computer vision and robotics, such as those involving high-speed, high dynamic range and power-constrained scenarios. This lecture provides sample applications of event cameras and discusses the fundamental question of how the data (called events) can be processed to solve a target task.
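
One of the simplest processing options, shown here as our own illustration rather than lecture material, is to accumulate the events of a short time window into a signed image that downstream vision code can consume:

```python
import numpy as np

def events_to_frame(events, height, width, t0, t1):
    """Accumulate events into a signed image. Each event is a tuple
    (x, y, t, polarity) with polarity +1 (brightness up) or -1 (down);
    each pixel sums the polarities of events falling in the window [t0, t1)."""
    frame = np.zeros((height, width))
    for x, y, t, p in events:
        if t0 <= t < t1:
            frame[int(y), int(x)] += p
    return frame

# Example: three synthetic events within a 10 ms window on a 4x4 sensor.
events = [(0, 0, 0.001, +1), (1, 2, 0.004, -1), (1, 2, 0.006, -1)]
print(events_to_frame(events, 4, 4, 0.0, 0.01))
```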

10:00 – 11:00

Cornelia Fermueller

University of Maryland at College Park

Neuromorphic Visual Motion and Action Analysis for Robotics

About Speaker

Cornelia Fermueller works in the areas of Computer, Human and Robot Vision at the University of Maryland at College Park. She studies and develops biologically inspired Computer Vision solutions for systems interacting with their environment. In recent years, her work has focused on the interpretation of human activities, and on motion processing for fast active robots using as input bio-inspired event-based sensors.

Abstract

Neuromorphic computing is an approach to computer engineering, spanning both hardware and software design, that seeks to emulate principles of the human brain and nervous system for efficient, low-power, and robust computation; in recent years various individual concepts have been adopted in mainstream engineering. In this talk I will describe my group’s work on neuromorphic visual motion analysis for navigation and action interpretation. Many real-world AI applications, including self-driving cars, robotics, augmented reality, and human motion analysis, are based on visual motion. Yet most approaches treat motion as an extension of static images, matching features in consecutive video frames. Inspired by biological vision, we use as input to our computational methods spatiotemporal filters and events from neuromorphic dynamic vision sensors that simulate the transient response in biology. I will describe a bio-inspired pipeline for the processing underlying navigation tasks and present algorithms for 3D motion estimation and foreground-background segmentation. The design of these algorithms is guided by a) questions about where best to use geometric constraints in machine learning, and b) experiments with visual motion illusions to gain insight into computational limitations. Finally, I will show the advantages of event-based processing for action understanding in robots.

11:00 – 12:00

Panel Discussion

12:00 – 13:00

Lunch

13:00 – 14:00

Alessio Franci

University of Liege

Excitable decision-making: a principle for analyzing and designing fast and flexible decision-making in embodied agents

About Speaker

Alessio Franci  works as a lecturer at the University of Liege. He is broadly interested in the interaction between mathematics, biology (particularly neuroscience), and engineering (particularly control theory and neuromorphic engineering). The brain, its way of knowing and perceiving the world, and the ways in which biology seems to self-organize out of inert matter, always fascinated him. As a physicist, he considers mathematics the natural language through which to describe the various facets of the biological world. Control theory provides both a conceptual and a technical framework to use mathematics to describe open systems, that is, systems with inputs and outputs, like our brain, like a single cell, like a group of neurons or people or bees or robots or ants; like all those biological forms in constant interaction with their environment.

Abstract

Decision-making in embodied agents should be fast and flexible if it is to successfully manage the uncertainty, variability, and dynamic environmental change encountered when operating in the real world. Decision-making is fast if it breaks indecision as quickly as indecision becomes costly. Decision-making is flexible if it adapts to signals important to successful operation, even if they are weak or infrequent.
I will present theory fundamentals, analytical results, and applications of a nonlinear dynamical model of excitable decision-making that enables the analysis and design of fast-and-flexible decision-making in embodied agents. The model is grounded in the principles of feedback control and bifurcation theory. Feedback control theory is key to capturing the continuous regulation properties of fast-and-flexible decision-making. Bifurcation theory is key to capturing the switch-like nature of decision-making events. The theory will be illustrated with simulations and applications in robotics, neuroscience, and neuromorphic engineering.
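
A one-dimensional caricature (our toy example, not the speaker's model) shows the mechanism: a pitchfork bifurcation turns graded inputs into switch-like decisions that weak signals can still steer.

```python
def decision_trajectory(sigma, inp=0.0, x0=0.01, dt=0.01, steps=2000):
    """Toy decision variable integrated with the Euler method:
        dx/dt = sigma * x - x**3 + inp
    For sigma < 0 the only equilibrium is indecision (x = 0); as sigma
    crosses 0, two decision branches near x = +-sqrt(sigma) appear, and a
    small input `inp` biases which one is chosen; the commitment is fast
    and switch-like, yet steerable by weak signals."""
    x = x0
    for _ in range(steps):
        x += dt * (sigma * x - x ** 3 + inp)
    return x

print(decision_trajectory(sigma=-1.0))                     # ~0: indecision
print(decision_trajectory(sigma=1.0))                      # ~+1: commits to one option
print(decision_trajectory(sigma=1.0, inp=-0.05, x0=0.0))   # ~-1: weak input flips it
```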

14:00 – 15:00

Guido de Croon

Delft University of Technology

Insect-inspired AI for autonomous flight of tiny drones

About Speaker

Guido de Croon works at Delft University of Technology. Small, lightweight flying robots such as the 20-gram DelFly Explorer form an extreme challenge for artificial intelligence, because of the strict limitations in onboard sensors, processing, and memory. He tries to uncover general principles of intelligence that will allow such limited, small robots to perform complex tasks.

Abstract

Tiny drones are promising for many applications, such as search-and-rescue, greenhouse monitoring, or keeping track of stock in warehouses. Since they are small, they can fly in narrow spaces. Moreover, their light weight makes them very safe for flight around humans. However, making such tiny drones fly completely by themselves is an enormous challenge. Most approaches to Artificial Intelligence for robotics have been designed with self-driving cars or other large robots in mind, which can carry many sensors and ample processing. In my talk, I will argue that a different approach is necessary for achieving autonomous flight with tiny drones. In particular, I will discuss how we can draw inspiration from flying insects, and endow our drones with similar intelligence. Examples include the fully autonomous DelFly Explorer, a 20-gram flapping-wing drone, and swarms of 30-gram Crazyflie quadrotors able to explore unknown environments and find gas leaks. Moreover, I will discuss the promise of novel neuromorphic sensing and processing technologies, illustrating this with recent experiments from our lab. Finally, I will discuss how insect-inspired robotics can give us new insights into nature, illustrated with a recent study in which we proposed a new theory of how flying insects determine the gravity direction.

15:00 – 17:00

Pia Bideau & Wolfgang Hönig

TU & HU Berlin

Tutorial: 

Abstract

18:00 – 20:00

Dinner

Friday, August 25, 2023

9:00 – 10:00

Heiko Hamann

Universität Konstanz

Swarm Robotics: Basic Scenarios and Challenges

About Speaker

Heiko Hamann works at the University of Konstanz. He is a roboticist with a focus on collective systems. With his group, he studies distributed robotics, machine learning for robotics, and bio-hybrid systems. He investigates collective intelligence and especially the swarm-robotics aspects of “Speed-accuracy tradeoffs in distributed collective decision making.”

Abstract

He will give a quick introduction to key properties and design directives for swarm robotics. We start by looking at a few standard scenarios and tasks of swarm robotics. We keep a focus on the implications of robots having only local perception, and especially on the practical consequences for designing and engineering sensing. In the last part of the talk, we discuss challenges and potential future directions in state-of-the-art swarm robotics research.

10:00 – 11:00

Shinkyu Park

King Abdullah University of Science and Technology

Learning in Large-Population Games and Application to Multi-robot Task Allocation

About Speaker

Shinkyu Park is Assistant Professor of Electrical and Computer Engineering and Principal Investigator of the Distributed Systems and Autonomy Group at King Abdullah University of Science and Technology (KAUST). Park’s research focuses on learning, planning, and control in multi-agent/multi-robot systems. He aims to make foundational advances in robotics science and engineering, building individual robots’ core capabilities of sensing, actuation, and communication, and training them to work as a team and attain a high level of autonomy in distributed information processing, decision making, and manipulation. Prior to joining KAUST, he was an Associate Research Scholar at Princeton University, engaged in cross-departmental robotics projects. He received the Ph.D. degree in electrical engineering from the University of Maryland College Park in 2015. He later held Postdoctoral Fellow positions at the National Geographic Society (2016) and the Massachusetts Institute of Technology (2016-2019).

Abstract

In this talk, we discuss the design and analysis of learning models in large-population games and their application to multi-robot task allocation in dynamically changing environments. In population games, given a set of strategies, each agent in a population selects a strategy to engage in repeated strategic interactions with others. Rather than computing and adopting the best strategy selection based on a known cost function, the agents need to learn such strategy selection from instantaneous rewards they receive at each stage of the repeated interactions. In the first part of this talk, leveraging passivity-based analysis of feedback control systems, I explain principled approaches to design learning models for the agent strategy selection that guarantee convergence to the Nash equilibrium of an underlying game, where no agent can be better off by changing its strategy unilaterally. I also talk about the design of higher-order learning models that strengthen the convergence when the agents’ strategy selection is subject to time delays. In the second part, I describe how the population game framework and its learning models can be applied to multi-robot task allocation problems, where a team of robots needs to carry out a set of given tasks in dynamically changing environments. Using multi-robot resource search and retrieval as an example, we discuss how the task allocation can be defined as a population game and how a learning model can be adopted as a decentralized task allocation algorithm for the individual robots to select and carry out the tasks.
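
As a concrete example of a population-game learning model (a textbook choice for illustration; the talk's passivity-based designs are more general), replicator dynamics drive a robot population toward the Nash equilibrium of a congestion-style task-allocation game:

```python
import numpy as np

def replicator_step(x, rewards, dt=0.1):
    """One step of replicator dynamics: mass flows toward tasks whose
    reward beats the population average.
    x: fraction of the robot population on each task (sums to 1);
    rewards: current per-task reward."""
    avg = x @ rewards
    return x + dt * x * (rewards - avg)

# Task allocation: rewards shrink as a task gets crowded (congestion game).
x = np.array([0.8, 0.1, 0.1])           # most robots start on task 0
base = np.array([1.0, 1.0, 1.0])
for _ in range(200):
    rewards = base - x                  # more robots on a task -> lower reward
    x = replicator_step(x, rewards)
print(x.round(3))                       # -> ~[0.333 0.333 0.333], the Nash equilibrium
```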

11:00 – 12:00

Lauren Sumner-Rooney

Museum für Naturkunde Leibniz-Institut für Evolutions- und Biodiversitätsforschung

Visual system architecture and integration in animals

About Speaker

Lauren Sumner-Rooney works at the Museum für Naturkunde Leibniz-Institut für Evolutions- und Biodiversitätsforschung. Her research group studies the structure, function and evolution of animal visual systems, with a focus on many-eyed organisms such as molluscs, spiders and echinoderms. The group uses a combination of digital morphology, neuroethology, evo-devo and comparative phylogenetic methods to study how and why animals use more than two eyes, and how these unusual visual systems evolve. Other research interests include the evolution of eye loss in dark habitats, the impacts of artificial light on visual ecology, and invertebrate neuroanatomy.

Abstract

Vision is one of the most important evolutionary innovations in animal biology, having transformed the way species navigate, communicate, forage and seek shelter. Eyes have evolved more than 40 times in a stunning array of diverse forms over the past half a billion years. These eyes represent single units within animal visual systems, whose architecture can take a range of configurations. The most familiar comprise a single pair of identical, bilaterally symmetrical, eyes, but this is a tiny fraction of visual system diversity. This talk will explore the diversity of visual system architectures and their functional implications, taking examples from across the animal kingdom, including molluscs, spiders, and echinoderms. The architecture of the system, including the diversity of the eyes within an individual, and their position, determines the total information that can be collected. We will explore these relationships, and their implications for the integration of information across the entire visual system, in relation to visual ecology and behaviour.

12:00 – 13:00

Lunch

13:00 – 14:00

Martin Saska

Czech Technical University in Prague

Advantages and challenges of tightly cooperating aerial vehicles in real-world environment

About Speaker

Martin Saska works at the Czech Technical University in Prague. His research interests are motion planning, swarm robotics, modular robotics, and robotic simulators. In his PhD thesis he worked on “Identification, Optimization and Control with Applications in Modern Technologies.”

Abstract

Using large teams of tightly cooperating Micro Aerial Vehicles (MAVs) in real-world (outdoor and indoor) environments without precise external localization such as GNSS and motion capture systems is the main motivation of this talk. I will present some insights into the research of fully autonomous, bio-inspired swarms of MAVs relying on onboard artificial intelligence. I will discuss the important research question of whether the MAV swarms can adapt better to localization failure than a single robot. In addition to the fundamental swarming research, I will be talking about real applications of multi-robot systems such as indoor documentation of large historical objects (cathedrals) by formations of cooperating MAVs, a cooperative inspection of underground mines inspired by the DARPA SubT competition, localization and interception of unauthorized drones, aerial firefighting, radiation sources localization, power line inspection, and marine teams of cooperating heterogeneous robots.

14:00 – 15:00

Iain Couzin

Max Planck Institute of Animal Behavior in Konstanz

Title will follow

About Speaker

Iain Couzin works at the Max Planck Institute of Animal Behavior in Konstanz. He is Director of the Department of Collective Behavior (Max Planck Institute of Animal Behavior) and a Full Professor at the University of Konstanz, where he is also a spokesperson of the Cluster of Excellence ‘Centre for the Advanced Study of Collective Behaviour’. Previously he was a Full Professor in the Department of Ecology and Evolutionary Biology at Princeton University (2013), and prior to that a Royal Society University Research Fellow in the Department of Zoology, University of Oxford, and a Junior Research Fellow in the Sciences at Balliol College, Oxford (2002-2007).

Abstract

will follow

15:00 – 17:00

David Bierbach

Humboldt-Universität zu Berlin

Tutorial: Perception and Learning in Nature

About Speaker

David Bierbach is a biologist working at Humboldt-Universität zu Berlin. He is interested in topics that range from individual differences to large-scale collective behaviors. He integrates field-based studies with analytical and experimental approaches in the laboratory. Through his highly interdisciplinary work, he has developed several experimental techniques to study animal behavior in the most standardized ways, from video playbacks and computer animations to the use of bio-mimetic robots. His main study species are tropical freshwater fish such as clonal mollies (Poecilia formosa), guppies (P. reticulata) and sulfur mollies (P. sulphuraria). At SCIoI, he is investigating how fish use anticipation in their social interactions and how information is effectively transferred within groups.

Abstract

The use of biomimetic robots to study animal social behavior has received considerable attention in recent years. Robots that mimic the appearance and behavior of conspecifics allow biologists to embody specific hypotheses regarding social interactions, perception and learning, and to test them in the real world. Much time and effort can be spent on refining the robots to create increasingly realistic interactions with animals. However, we should keep in mind that the robot and its behavior only need to be realistic enough to serve the purpose of the investigation. In this tutorial we will give an introduction to biomimetic robots that interact with live animals, by the example of Robofish: a fish-like robot that interacts in real time with live guppies (Poecilia reticulata), enabling us to study social interactions, social learning and the perception of conspecifics.
The tutorial includes an introduction to interactive biomimetic robots along with automated animal tracking. In a practical part, we will test how live fish interact with the robot and how the robot’s behaviors affect its acceptance by the fish. We will see how this information connects to models of collective behavior and social learning.
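
To give a flavor of the tracking side, here is a minimal OpenCV sketch (our illustration with an assumed video file, not the tutorial's actual pipeline): background subtraction followed by a largest-contour centroid per frame.

```python
import cv2

cap = cv2.VideoCapture("tank.mp4")   # assumed example recording of the tank
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                 # moving fish/robot
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:                                   # valid centroid
            x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print(f"centroid: ({x:.1f}, {y:.1f})")         # e.g. feed to Robofish
cap.release()
```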

Further reading:
https://www.annualreviews.org/doi/10.1146/annurev-control-061920-103228
https://link.springer.com/article/10.1007/s00422-018-0787-5
https://www.sciencedirect.com/science/article/abs/pii/S0169534711000851?casa_token=H1z-L83GfA0AAAAA:y1V3hKksH0ghaIc5sqH746uJlcbuul-oI2xAtOjAiKnoTQO2XO0pyDU6_t3IHfGMs_8gnCurGRk
https://royalsocietypublishing.org/doi/full/10.1098/rsbl.2020.0436

17:00 – 19:00

Closing Event