Program and Speakers
Preliminary Schedule
| Time | 21 August | 22 August | 23 August | 24 August | 25 August |
|---|---|---|---|---|---|
| 9:00 | Opening | | | | |
| 11:00 | | | | Panel Discussion | |
| 12:00 | Lunch | Lunch | Lunch | Lunch | Lunch |
| 13:00 | Networking | | Spotlight talks | | |
| 14:00 | | | Poster/Demo Session | | |
| 15:00 | | Berlin Excursion | | | |
| 18:00 | Dinner 1 | Dinner 2 | Dinner 3 | Dinner 4 | Closing event |
Monday, August 21, 2023
Time Activity
9:00 – 10:00
Opening and Welcome Words
10:00 – 11:00

Jörg Raisch
TU Berlin
Title will follow
About Speaker
Jörg Raisch works at TU Berlin. He represents the control discipline. His research interests include both methodological and applied aspects of control. In the context of SCIoI, his work focuses on abstraction-based synthesis of discrete event and hybrid control systems, consistent control hierarchies, and consensus-based control of multiagent systems.
Abstract
Will follow
11:00 – 12:00

Javier Alonso-Mora
Delft University of Technology
Title will follow
About Speaker
Javier Alonso-Mora leads the Autonomous Multi-Robots Laboratory at the Delft University of Technology. The lab develops novel methods for navigation, motion planning, learning, and control of autonomous mobile robots, with a special emphasis on multi-robot systems, on-demand transportation, and robots that interact with other robots and humans in dynamic and uncertain environments. Building towards the smart cities of the future, its applications include self-driving vehicles, mobile manipulators, micro-aerial vehicles, last-mile logistics, and ride-sharing.
Abstract
Will follow
12:00 – 13:00
Lunch
13:00 – 14:00
Networking
14:00 – 15:00

Sabine Hauert
University of Bristol
Swarms of people
About Speaker
Sabine Hauert is Associate Professor (Reader) of Swarm Engineering at the University of Bristol in the UK. Her research focuses on making swarms for people, across scales: from nanorobots for cancer treatment to larger robots for environmental monitoring or logistics. Before joining the University of Bristol, Sabine engineered swarms of nanoparticles for cancer treatment at MIT and deployed swarms of flying robots at EPFL.
Sabine is also President and Co-founder of Robohub.org, and executive trustee of AIhub.org, two non-profits dedicated to connecting the robotics and AI communities to the public.
As an expert in science communication with 10 years of experience, Sabine is often invited to discuss the future of robotics and AI, including in the journal Nature, at the European Parliament, and at the Royal Society. Her work has been featured in mainstream media including BBC, CNN, The Guardian, The Economist, TEDx, WIRED, and New Scientist.
Abstract
As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Larger robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
15:00 – 17:00

Pia Bideau
TU Berlin
Tutorial: Individual Robot Perception and Learning
About Speaker
Pia Bideau is a postdoctoral researcher at TU Berlin and part of the Cluster of Excellence Science of Intelligence as of January 2020. Her research addresses how one can teach a computer to see and understand the world as we humans do, the strengths and weaknesses of a computer vision system compared to a human vision system, and how the two systems can learn from each other. We move, we discover new interesting things that raise our curiosity when a perceived situation doesn't match our expectations, and we learn. Pia's research focuses on motion – our motion as well as our motion perception. Motion is a key ability that we as living beings use to explore our environment. Our own motion, for example, helps us perceive depth, and the motion of objects helps us recognize those objects even when they are unknown to us. Motion in the visual world helps us understand the unstructured environment we live in. Before joining the Cluster, Pia received her PhD from the University of Massachusetts, Amherst (USA), working with Prof. Erik Learned-Miller, and worked with Cordelia Schmid and Karteek Alahari during an internship at Inria in Grenoble (France).
Abstract
Distance estimation is an essential part of scene recognition and orientation, allowing agents to move through a natural environment. Animals moving in groups (e.g. fish schools, flocks of birds) seem particularly capable of this – efficiently and accurately enough that quite astonishing behaviors arise when they move together as a collective. Different sensor systems, but also different movement strategies, enable these agents to localize themselves relative to one another. Vision is probably the sensor system studied in greatest detail, but other sensor systems, such as ultrasound, allow agents to "see" distance with their ears, even in low-light conditions.
This tutorial gives an introduction to learning-based approaches for distance estimation using vision. While there are several cues for extracting distance information, we will focus here on object appearance and relative size: objects at a greater distance appear smaller than objects nearby, one of the fundamental principles of perspective projection. We will extend a classical object detector (such as YOLO) with the ability to estimate distance. As we wish to test the developed algorithm on a real robotic system, a focus lies on fast and efficient computation. For testing, a LEGO Mindstorms robot equipped with an RGB camera and a Raspberry Pi will be used.
- Theory: Introduction to efficient object detection with YOLO
- Practice (implementation): Learning absolute distance estimates
  - Extending object bounding box predictions with distance estimates
  - A new training loss for distance estimates
- Practice (testing): Testing the developed algorithm on a LEGO Mindstorms robot
Some reading material:
Detection and Distance estimation via YOLO-Dist:
https://www.mdpi.com/2076-3417/12/3/1354#cite
YOLO overview:
https://www.v7labs.com/blog/yolo-object-detection
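The tutorial's core geometric cue – that apparent size shrinks inversely with distance under perspective projection – can be sketched with a pinhole-camera model. This is a minimal illustration; the function name and the numbers below are made up and not part of the tutorial materials:

```python
# Pinhole-camera distance estimate from apparent object size:
# distance = focal_length_px * real_height_m / pixel_height_px

def estimate_distance(focal_length_px: float,
                      real_height_m: float,
                      pixel_height_px: float) -> float:
    """Distance (m) to an object of known real-world height, given its
    apparent height in the image, assuming an ideal pinhole camera."""
    if pixel_height_px <= 0:
        raise ValueError("pixel height must be positive")
    return focal_length_px * real_height_m / pixel_height_px

# A 1.7 m person imaged at 170 px with an 800 px focal length
# stands 8 m from the camera; halving the pixel height doubles it.
print(estimate_distance(800.0, 1.7, 170.0))  # 8.0
```

In the tutorial's setting, the pixel height would come from a detector's bounding box, so the same relation turns a YOLO-style detection into a distance estimate once the object class fixes the real-world size.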
18:00 – 20:00
Dinner
Tuesday, August 22, 2023
9:00 – 10:00

Marc Toussaint
TU Berlin
Title will follow
About Speaker
Marc Toussaint works at TU Berlin on Learning and Intelligent Systems. In his view, a key in understanding and creating intelligence is the interplay of learning and reasoning, where learning becomes the enabler for strongly generalizing reasoning and acting in our physical world. Within SCIoI, he is interested in studying computational methods and representations to enable efficient learning and general purpose physical reasoning, and demonstrating such capabilities on real-world robotic systems.
Abstract
Will follow
10:00 – 11:00

Georg Martius
Max Planck Institute for Intelligent Systems
Machine learning algorithms for autonomously learning robots
About Speaker
Georg Martius is a group leader at the Max Planck Institute for Intelligent Systems. He is interested in autonomous learning, i.e. how an embodied agent can determine what to learn, how to learn, and how to judge its learning success. He believes that robots need to learn from experience to become dexterous and versatile assistants to humans in many real-world domains. Intrinsically motivated learning can help to create a suitable learning curriculum and lead to capable systems without the need to specify every little detail of that process; here the group takes inspiration from child development.
Abstract
I am driven by the question of how robots can autonomously develop skills and learn to become versatile helpers for humans. Considering children, it seems natural that they have their own agenda. They playfully explore their environment, without the necessity for somebody to tell them exactly what to do next. Replicating such flexible learning in machines is highly challenging. I will present my research on different machine learning methods as steps towards solving this challenge. Part of my research is concerned with artificial intrinsic motivations — their mathematical formulation and embedding into learning systems. Equally important is to learn the right representations and internal models and I will show how powerful intrinsic motivations can be derived from learned models. With model-based reinforcement learning and planning methods, I show how we can achieve active exploration and playful robots but also safety aware behavior. A really fascinating feature is that these learning-by-playing systems are able to perform well in unseen tasks zero-shot.
When autonomous systems need to make decisions at a higher level, such as deciding about an appropriate order of subtasks in an assembly task, they need to implicitly solve combinatorial problems, which pose a considerable challenge to current deep learning methods. We recently proposed the first unified way to embed a large class of combinatorial algorithms into deep learning architectures, which I will present along with possible applications to robotics.
11:00 – 12:00

Xiaolong Wang
University of California, San Diego
Geometric Robot Learning for Generalizable Skills Acquisition
About Speaker
Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego. He is affiliated with the CSE department, the Center for Visual Computing, the Contextual Robotics Institute, and the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University and completed his postdoctoral training at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. He is particularly interested in learning rich 3D representations from large-scale video with minimal cost, and in using these representations to guide robots to learn. He is the recipient of the NSF CAREER Award, Sony Research Award, and Amazon Research Award.
Abstract
Robot learning has witnessed significant progress in terms of generalization in the past few years. At the heart of such a generalization, the advancement of representation learning, such as image and text foundation models plays an important role. While these achievements are encouraging, most tasks conducted are relatively simple. In this talk, I will talk about our recent efforts on learning generalizable skills focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation even without visual input, (ii) the collection of large-scale robot physical interaction demonstrations for imitation learning using a simple and user-friendly visual teleoperation system, and (iii) large-scale 3D representation learning that generalizes Reinforcement Learning policies across diverse objects and scenes. I will also showcase the real-world applications of our research, including dexterous manipulation and legged locomotion control.
12:00 – 13:00
Lunch
13:00 – 15:00

Wolfgang Hönig
TU Berlin
Tutorial: Multi-Robot Coordination
About Speaker
Wolfgang Hönig is an independent junior research group leader at TU Berlin, heading the Intelligent Multi-Robot Coordination Lab. Previously, he was a postdoctoral scholar in the Department of Aerospace at the California Institute of Technology, advised by Soon-Jo Chung. He holds a PhD in Computer Science from the University of Southern California, where he was advised by Nora Ayanian. His research focuses on enabling large teams of physical robots to collaboratively solve real-world tasks, using tools from informed search, optimization, and machine learning.
Tutorial Title: Multi-Robot Decision Making
Abstract: Intelligent behavior of a single robot is not sufficient for executing tasks with a robotic team effectively. First, we look at the challenges that arise in control, motion planning, and general decision-making when moving from a single robot to cooperative behavior. Then, we discuss algorithms for overcoming these challenges, including the Hungarian method, Buffered Voronoi Cells, and Conflict-Based Search.
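The task-assignment problem that the Hungarian method solves – matching robots to tasks at minimum total cost – can be illustrated with a brute-force sketch. The Hungarian method finds the same optimum in polynomial time; exhaustive search is used here only because it fits in a few lines, and the cost matrix is invented for illustration:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exhaustively find the robot-to-task assignment minimizing total
    cost. The Hungarian method solves this same problem in O(n^3);
    brute force over all n! assignments is shown only for clarity."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# cost[i][j] = cost (e.g. travel time) of robot i doing task j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(optimal_assignment(cost))  # ((1, 0, 2), 5): robots take tasks 1, 0, 2
```

In practice one would call a polynomial-time solver (e.g. a Hungarian-method implementation) with the same cost matrix; the brute-force version only defines what "optimal" means.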
Abstract
One of the foundational behaviors for a team of robots is to be able to move in their environment without any collisions with other robots or obstacles. In multi-robot motion planning, we typically assume that the environment is fully known and that we can plan in a centralized fashion for the whole team of robots. In collision avoidance, we take a more reactive approach, where each robot changes its motion based on the robot’s perceptual input to avoid collisions in a distributed fashion.
This tutorial provides practical insights into Buffered Voronoi Cells (BVC), a modern approach for distributed collision avoidance that only requires the sensing of neighbor positions. Together with single-robot planning techniques, it can be effectively used to plan smooth motions for many mobile robots, including differential-drive robots, car-like robots, and multirotors.
We program in Python and verify the resulting approach in a robotics simulator, assuming that we know the relative positions between the robots. A real-robot experiment with a team of LEGO Mindstorms robots will connect the perception pipeline from the first tutorial to multi-robot coordination.
Theory:
- Differential Flatness and Bezier-curve Optimization
- Buffered Voronoi Cells (BVC)
- Extensions to learning-based Algorithms
Practice (implementation):
- Work with a robotics simulator to execute motions
- Implement collision avoidance, given helper functions for single-robot motion planning and Voronoi computation
Practice (testing):
- Testing the developed algorithm in simulation
- Demo on a team of LEGO Mindstorms robots
Some reading material:
- D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered Voronoi cells,” IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 2, pp. 1047–1054, 2017, doi: 10.1109/LRA.2017.2656241.
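The core of the BVC idea – each robot clips its desired motion to a safety-buffered half-plane per neighbor – can be sketched for the simplest case of two robots. This is a single-constraint sketch with made-up numbers and helper names; the cited RA-L paper gives the general method:

```python
import math

def bvc_safe_step(p_i, p_j, goal_i, r_safe=0.2, step=0.2):
    """One collision-free step for robot i with a single neighbor j.
    Robot i's buffered Voronoi cell is the half-plane n.(q - b) <= 0,
    where n is the unit vector from i toward j and b is the midpoint
    between the robots, pushed back toward i by the safety radius."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    dist = math.hypot(dx, dy)
    n = (dx / dist, dy / dist)                   # unit normal toward j
    b = ((p_i[0] + p_j[0]) / 2 - r_safe * n[0],  # buffered boundary point
         (p_i[1] + p_j[1]) / 2 - r_safe * n[1])
    gx, gy = goal_i[0] - p_i[0], goal_i[1] - p_i[1]
    glen = math.hypot(gx, gy)
    if glen < 1e-9:
        return p_i                               # already at the goal
    # Desired step straight toward the goal
    q = (p_i[0] + step * gx / glen, p_i[1] + step * gy / glen)
    # If the step leaves the cell, project it back onto the boundary
    viol = n[0] * (q[0] - b[0]) + n[1] * (q[1] - b[1])
    if viol > 0:
        q = (q[0] - viol * n[0], q[1] - viol * n[1])
    return q

# Robot at the origin, neighbor 1 m ahead, goal behind the neighbor:
# the desired 0.5 m step is clipped to stop r_safe short of the midpoint.
print(bvc_safe_step((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), r_safe=0.2, step=0.5))
```

With several neighbors, each contributes one such half-plane, and because each robot stays inside its own buffered cell, any two robots remain at least two safety radii apart – which is what makes the scheme distributed: only neighbor positions are needed.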
15:00 – 18:00
Berlin Excursion
18:00 – 20:00
Dinner
Wednesday, August 23, 2023
9:00 – 10:00

Pawel Romanczuk
Humboldt-Universität zu Berlin
Title will follow
About Speaker
Pawel Romanczuk works at HU Berlin, at the interface of applied mathematics, theoretical physics, and behavioral biology. He focuses on the collective behavior of organismic systems. His research bridges analytical and synthetic sciences to study self-organization, evolutionary adaptations, and functional dynamical behavior.
Abstract
Will follow
10:00 – 11:00

Basil el Jundi
Norwegian University of Science and Technology
Title will follow
About Speaker
Basil el Jundi is an associate professor at the Norwegian University of Science and Technology. He and his team are interested in understanding the behavioral and neural mechanisms underlying spatial orientation in insects. Currently, they are studying the use of compass cues in monarch butterflies and how these are encoded in the butterfly brain. These butterflies are famous for their spectacular annual migration from North America to Central Mexico. How are different navigation cues used for orientation, and how are they linked in the brain? To understand this, the team performs behavioral experiments (flight simulator) combined with anatomical (confocal imaging, 3D modeling) and electrophysiological studies (intracellular and tetrode recordings).
Abstract
Will follow
11:00 – 12:00

Mike Webster
University of St. Andrews
Title will follow
About Speaker
Mike Webster works at the University of St. Andrews. He is interested in the behaviour of group-living animals, including social foraging, competition, information diffusion and predator-prey interactions. His work investigates the benefits and costs of grouping, how groups form and function and how the behaviour of individuals shapes, and is shaped by, that of the group. He is also interested in sampling biases in animal behaviour, with a focus on how these arise and how they are reported.
Abstract
Will follow
12:00 – 13:00
Lunch
13:00 – 14:00
Spotlight talks
14:00 – 17:00
Poster/Demo Session
18:00 – 20:00
Dinner
Thursday, August 24, 2023
9:00 – 10:00

Guillermo Gallego
TU Berlin
Title will follow
About Speaker
Guillermo Gallego works on Robotic Interactive Perception at TU Berlin as well as on computer vision and robotics. He focuses on robot perception and on optimization methods for interdisciplinary imaging and control problems. Inspired by the human visual system, he works toward improving the perception systems of artificial agents, endowing them with intelligence to transform raw sensor data into knowledge, and to provide autonomy in changing environments.
Abstract
Will follow
10:00 – 11:00

Cornelia Fermueller
University of Maryland at College Park
Neuromorphic Visual Motion and Action Analysis for Robotics
About Speaker
Cornelia Fermueller works in the areas of Computer, Human and Robot Vision at the University of Maryland at College Park. She studies and develops biologically inspired Computer Vision solutions for systems interacting with their environment. In recent years, her work has focused on the interpretation of human activities, and on motion processing for fast active robots using as input bio-inspired event-based sensors.
Abstract
Neuromorphic computing is an approach to computer engineering, spanning both hardware and software design, that seeks to emulate principles of the human brain and nervous system for efficient, low-power, and robust computation; in recent years various individual concepts have been adopted in mainstream engineering. In this talk I will describe my group’s work on neuromorphic visual motion analysis for navigation and action interpretation. Many real-world AI applications, including self-driving cars, robotics, augmented reality, and human motion analysis, are based on visual motion. Yet most approaches treat motion as an extension of static images by matching features in consecutive video frames. Inspired by biological vision, we use as input to our computational methods spatiotemporal filters and events from neuromorphic dynamic vision sensors that simulate the transient response in biology. I will describe a bio-inspired pipeline for the processing underlying navigation tasks and present algorithms for 3D motion estimation and foreground-background segmentation. The design of these algorithms is guided by a) questions about where geometric constraints are best used in machine learning, and b) experiments with visual motion illusions to gain insight into computational limitations. Finally, I will show advantages of event-based processing for action understanding in robots.
11:00 – 12:00
Panel Discussion
12:00 – 13:00
Lunch
13:00 – 14:00

Alessio Franci
University of Liège
Generalized excitability: a principle for modeling and designing flexible representations in embodied agents
About Speaker
Alessio Franci works as a lecturer at the University of Liège. He is broadly interested in the interaction between mathematics, biology (particularly neuroscience), and engineering (particularly control theory and neuromorphic engineering). The brain, its way of knowing and perceiving the world, and the ways in which biology seems to self-organize out of inert matter have always fascinated him. As a physicist, he considers mathematics the natural language through which to describe the various facets of the biological world. Control theory provides both a conceptual and a technical framework for using mathematics to describe open systems, that is, systems with inputs and outputs: like our brain, like a single cell, like a group of neurons or people or bees or robots or ants; like all those biological forms in constant interaction with their environment.
Abstract
A distinctive property of intelligent embodied agents is that they must be simultaneously continuous, because they evolve in physical time, and discrete, because they make successive decisions among finite sets of alternatives. The modern engineering approach is to sharply separate continuous (analog) dynamics from discrete (digital) computation through A/D converters and purely digital intelligence. But biological agents are flexible in the way they represent and process agent-environment interactions, e.g., transitioning from predominantly faithful/analog representations in primary sensory cortices to predominantly categorical/digital representations in prefrontal cortices. Excitability and its modulation are at the basis of biological flexible representations. Grounded in control and bifurcation theory, I will introduce a generalized notion of excitability to model and design flexible representations and to use them in embodied agents. The theory will be illustrated on the design of neuromorphic chips with enhanced robustness and flexibility, and on the design of a simple sensorimotor loop for deadlock-free collision avoidance in autonomous agents.
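The notion of excitability – small inputs decay back to rest, while suprathreshold inputs trigger a large, stereotyped excursion – can be illustrated with the classic FitzHugh-Nagumo neuron model. This is a standard textbook model with its usual parameter values, not the generalized notion introduced in the talk:

```python
def fhn_peak(i_pulse, t_end=100.0, dt=0.01):
    """Euler-integrate the FitzHugh-Nagumo model
        v' = v - v**3/3 - w + I,    w' = eps * (v + a - b*w)
    from rest, applying a brief current pulse, and return the peak v."""
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.1994, -0.6243            # approximate resting state for I = 0
    peak, t = v, 0.0
    while t < t_end:
        I = i_pulse if t < 5.0 else 0.0    # 5-time-unit input pulse
        v, w = (v + dt * (v - v**3 / 3 - w + I),
                w + dt * eps * (v + a - b * w))
        peak = max(peak, v)
        t += dt
    return peak

# A weak pulse barely perturbs v; a strong pulse elicits a full spike.
print(fhn_peak(0.1) < 0.0, fhn_peak(1.0) > 1.0)
```

The all-or-nothing response is the analog system behaving digitally: the continuous dynamics implement a threshold decision, which is the kind of analog/digital flexibility the abstract describes.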
14:00 – 15:00

Guido de Croon
Delft University of Technology
Insect-inspired AI for autonomous flight of tiny drones
About Speaker
Guido de Croon works at Delft University of Technology. Small, light-weight flying robots such as the 20-gram DelFly Explorer pose an extreme challenge to artificial intelligence because of the strict limitations in onboard sensing, processing, and memory. He tries to uncover general principles of intelligence that will allow such limited, small robots to perform complex tasks.
Abstract
Tiny drones are promising for many applications, such as search-and-rescue, greenhouse monitoring, or keeping track of stock in warehouses. Since they are small, they can fly in narrow spaces. Moreover, their light weight makes them very safe for flight around humans. However, making such tiny drones fly completely by themselves is an enormous challenge. Most approaches to artificial intelligence for robotics have been designed with self-driving cars or other large robots in mind, and these are able to carry many sensors and ample processing power. In my talk, I will argue that a different approach is necessary for achieving autonomous flight with tiny drones. In particular, I will discuss how we can draw inspiration from flying insects and endow our drones with similar intelligence. Examples include the fully autonomous DelFly Explorer, a 20-gram flapping-wing drone, and swarms of 30-gram CrazyFlie quadrotors able to explore unknown environments and find gas leaks. Moreover, I will discuss the promise of novel neuromorphic sensing and processing technologies, illustrating this with recent experiments from our lab. Finally, I will discuss how insect-inspired robotics can give us new insights into nature, illustrated with a recent study in which we proposed a new theory on how flying insects determine the gravity direction.
15:00 – 16:00

David Bierbach
Humboldt-Universität zu Berlin
Tutorial: Perception and Learning in Nature
About Speaker
David Bierbach is a biologist working at Humboldt-Universität zu Berlin. He is interested in topics that range from individual differences to large-scale collective behaviors, and he integrates field-based studies with analytical and experimental approaches in the laboratory. Through his highly interdisciplinary work, he has developed several experimental techniques to study animal behavior in the most standardized ways, from video playbacks and computer animations to the use of biomimetic robots. His main study organisms are tropical freshwater fish such as clonal mollies (Poecilia formosa), guppies (P. reticulata), and sulfur mollies (P. sulphuraria). At SCIoI, he is investigating how fish use anticipation in their social interactions and how information is effectively transferred within groups.
Abstract
The use of biomimetic robots to study animal social behavior has received considerable attention in recent years. Robots that mimic the appearance and behavior of conspecifics allow biologists to embody specific hypotheses regarding social interactions, perception, and learning, and to test them in the real world. Much time and effort can be spent on refining the robots to create increasingly realistic interactions with animals. However, we should keep in mind that the robot and its behavior only need to be realistic enough to serve the purpose of the investigation. In this tutorial we will give an introduction to biomimetic robots that interact with live animals, by the example of Robofish – a fish-like robot that interacts in real time with live guppies (Poecilia reticulata), thus enabling us to study social interactions, social learning, and the perception of conspecifics.
The tutorial includes an introduction to interactive biomimetic robots along with automated animal tracking. In a practical part, we will test how live fish interact with the robot and how the robot’s behaviors affect its acceptance by the fish. We will see how this information connects to models of collective behavior and social learning.
Further reading:
https://www.annualreviews.org/doi/10.1146/annurev-control-061920-103228
https://link.springer.com/article/10.1007/s00422-018-0787-5
https://www.sciencedirect.com/science/article/abs/pii/S0169534711000851?casa_token=H1z-L83GfA0AAAAA:y1V3hKksH0ghaIc5sqH746uJlcbuul-oI2xAtOjAiKnoTQO2XO0pyDU6_t3IHfGMs_8gnCurGRk
https://royalsocietypublishing.org/doi/full/10.1098/rsbl.2020.0436
18:00 – 20:00
Dinner
Friday, August 25, 2023
9:00 – 10:00

Heiko Hamann
Universität Konstanz
Title will follow
About Speaker
Heiko Hamann works at the University of Konstanz. He is a roboticist with focus on collective systems. With his group, he studies distributed robotics, machine learning for robotics, and bio-hybrid systems. He investigates collective intelligence and especially the swarm-robotics aspects of “Speed-accuracy tradeoffs in distributed collective decision making.”
Abstract
Will follow
10:00 – 11:00

Shinkyu Park
King Abdullah University of Science and Technology
Learning in Large-Population Games and Application to Multi-robot Task Allocation
About Speaker
Shinkyu Park is Assistant Professor of Electrical and Computer Engineering and Principal Investigator of the Distributed Systems and Autonomy Group at King Abdullah University of Science and Technology (KAUST). Park’s research focuses on learning, planning, and control in multi-agent/multi-robot systems. He aims to make foundational advances in robotics science and engineering to build individual robots’ core capabilities of sensing, actuation, and communication, and to train them to work as a team and attain a high level of autonomy in distributed information processing, decision making, and manipulation. Prior to joining KAUST, he was an Associate Research Scholar at Princeton University engaged in cross-departmental robotics projects. He received the Ph.D. degree in electrical engineering from the University of Maryland, College Park in 2015, and later held postdoctoral fellow positions at the National Geographic Society (2016) and the Massachusetts Institute of Technology (2016–2019).
Abstract
In this talk, we discuss the design and analysis of learning models in large-population games and their application to multi-robot task allocation in dynamically changing environments. In population games, given a set of strategies, each agent in a population selects a strategy to engage in repeated strategic interactions with others. Rather than computing and adopting the best strategy selection based on a known cost function, the agents need to learn such strategy selection from instantaneous rewards they receive at each stage of the repeated interactions. In the first part of this talk, leveraging passivity-based analysis of feedback control systems, I explain principled approaches to design learning models for the agent strategy selection that guarantee convergence to the Nash equilibrium of an underlying game, where no agent can be better off by changing its strategy unilaterally. I also talk about the design of higher-order learning models that strengthen the convergence when the agents’ strategy selection is subject to time delays. In the second part, I describe how the population game framework and its learning models can be applied to multi-robot task allocation problems, where a team of robots needs to carry out a set of given tasks in dynamically changing environments. Using multi-robot resource search and retrieval as an example, we discuss how the task allocation can be defined as a population game and how a learning model can be adopted as a decentralized task allocation algorithm for the individual robots to select and carry out the tasks.
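The convergence idea in the first part of the talk – strategy shares evolving until no agent can improve by switching unilaterally – can be sketched with classic replicator dynamics, one standard learning model for population games (not necessarily the specific higher-order models of the talk; the two-route cost functions below are invented for illustration):

```python
def replicator(costs, x0, steps=5000, dt=0.01):
    """Replicator dynamics for a population game: a strategy's share
    grows when its cost is below the population-average cost."""
    x = list(x0)
    for _ in range(steps):
        c = [cost(x) for cost in costs]                  # current costs
        avg = sum(xi * ci for xi, ci in zip(x, c))       # average cost
        x = [xi + dt * xi * (avg - ci) for xi, ci in zip(x, c)]
        total = sum(x)
        x = [xi / total for xi in x]                     # numerical guard
    return x

# Two routes: each route's cost grows with its share of the traffic,
# and route B is twice as congestion-sensitive as route A. At the Nash
# equilibrium the costs are equal, giving shares x_A = 2/3, x_B = 1/3.
costs = [lambda x: x[0], lambda x: 2 * x[1]]
print(replicator(costs, [0.5, 0.5]))
```

At the limit point, both routes cost 2/3, so no agent can lower its cost by switching – exactly the Nash condition the abstract describes, reached here purely from instantaneous cost feedback.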
11:00 – 12:00

Lauren Sumner-Rooney
Museum für Naturkunde Leibniz-Institut für Evolutions- und Biodiversitätsforschung
Title will follow
About Speaker
Lauren Sumner-Rooney works at the Museum für Naturkunde Leibniz-Institut für Evolutions- und Biodiversitätsforschung. Her research group studies the structure, function, and evolution of animal visual systems, with a focus on many-eyed organisms such as molluscs, spiders, and echinoderms. The group uses a combination of digital morphology, neuroethology, evo-devo, and comparative phylogenetic methods to study how and why animals use more than two eyes, and how these unusual visual systems evolve. Other research interests include the evolution of eye loss in dark habitats, the impacts of artificial light on visual ecology, and invertebrate neuroanatomy.
Abstract
Will follow
12:00 – 13:00
Lunch
13:00 – 14:00

Martin Saska
Czech Technical University in Prague
Advantages and challenges of tightly cooperating aerial vehicles in real-world environment
About Speaker
Martin Saska works at the Czech Technical University in Prague. His research interests are motion planning, swarm robotics, modular robotics, and robotic simulators. In his PhD thesis, he worked on “Identification, Optimization and Control with Applications in Modern Technologies.”
Abstract
Using large teams of tightly cooperating Micro Aerial Vehicles (MAVs) in real-world (outdoor and indoor) environments without precise external localization such as GNSS and motion capture systems is the main motivation of this talk. I will present some insights into the research of fully autonomous, bio-inspired swarms of MAVs relying on onboard artificial intelligence. I will discuss the important research question of whether the MAV swarms can adapt better to localization failure than a single robot. In addition to the fundamental swarming research, I will be talking about real applications of multi-robot systems such as indoor documentation of large historical objects (cathedrals) by formations of cooperating MAVs, a cooperative inspection of underground mines inspired by the DARPA SubT competition, localization and interception of unauthorized drones, aerial firefighting, radiation sources localization, power line inspection, and marine teams of cooperating heterogeneous robots.
14:00 – 15:00

Iain Couzin
Max Planck Institute of Animal Behavior in Konstanz
Title will follow
About Speaker
Iain Couzin works at the Max Planck Institute of Animal Behavior in Konstanz. He is Director of the Department of Collective Behavior (Max Planck Institute of Animal Behavior) and a Full Professor at the University of Konstanz, where he is also a spokesperson of the Cluster of Excellence ‘Centre for the Advanced Study of Collective Behaviour’. Previously he was a Full Professor in the Department of Ecology and Evolutionary Biology at Princeton University (2013), and prior to that a Royal Society University Research Fellow in the Department of Zoology, University of Oxford, and a Junior Research Fellow in the Sciences at Balliol College, Oxford (2002-2007).
Abstract
Will follow
15:00 – 17:00
Pia Bideau & Wolfgang Hönig & David Bierbach
TU & HU Berlin
Tutorial: Title will follow
Abstract
Will follow
18:00 – 20:00
Closing event