The SCIoI Summer School – Course Materials

Videos will follow!

Machine learning algorithms for autonomously learning robots

By: Georg Martius

I am driven by the question of how robots can autonomously develop skills and learn to become versatile helpers for humans. Children, by comparison, naturally follow their own agenda: they playfully explore their environment without anybody needing to tell them exactly what to do next. Replicating such flexible learning in machines is highly challenging. I will present my research on different machine learning methods as steps towards solving this challenge. Part of my research is concerned with artificial intrinsic motivations: their mathematical formulation and their embedding into learning systems. Equally important is learning the right representations and internal models, and I will show how powerful intrinsic motivations can be derived from learned models. With model-based reinforcement learning and planning methods, I show how we can achieve active exploration and playful robots, but also safety-aware behavior. A particularly fascinating feature is that these learning-by-playing systems are able to perform well on unseen tasks zero-shot.

When autonomous systems need to make decisions at a higher level, such as deciding about an appropriate order of subtasks in an assembly task, they need to implicitly solve combinatorial problems, which pose a considerable challenge to current deep learning methods. We recently proposed the first unified way to embed a large class of combinatorial algorithms into deep learning architectures, which I will present along with possible applications to robotics.

Geometric Robot Learning for Generalizable Skills Acquisition

By: Xiaolong Wang

Robot learning has witnessed significant progress in terms of generalization in the past few years. At the heart of this generalization lies the advancement of representation learning, such as image and text foundation models. While these achievements are encouraging, most tasks conducted so far are relatively simple. In this talk, I will present our recent efforts on learning generalizable skills, focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation even without visual input; (ii) the collection of large-scale robot physical interaction demonstrations for imitation learning using a simple and user-friendly visual teleoperation system; and (iii) large-scale 3D representation learning that generalizes reinforcement learning policies across diverse objects and scenes. I will also showcase real-world applications of our research, including dexterous manipulation and legged locomotion control.

Swarms for People

By: Sabine Hauert

As tiny robots become individually more sophisticated, and larger robots easier to mass-produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Larger robots that work in the thousands may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems is fundamentally different, much of their emergent swarm behaviour can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales, one that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical worlds, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

Advantages and challenges of tightly cooperating aerial vehicles in real-world environments

By: Martin Saska

Using large teams of tightly cooperating Micro Aerial Vehicles (MAVs) in real-world (outdoor and indoor) environments without precise external localization, such as GNSS and motion capture systems, is the main motivation of this talk. I will present some insights into the research of fully autonomous, bio-inspired swarms of MAVs relying on onboard artificial intelligence. I will discuss the important research question of whether MAV swarms can adapt to localization failures better than a single robot. In addition to the fundamental swarming research, I will talk about real applications of multi-robot systems, such as indoor documentation of large historical buildings (cathedrals) by formations of cooperating MAVs, cooperative inspection of underground mines inspired by the DARPA SubT competition, localization and interception of unauthorized drones, aerial firefighting, radiation source localization, power line inspection, and marine teams of cooperating heterogeneous robots.

Generalized excitability: a principle for modeling and designing flexible representations in embodied agents

By: Alessio Franci

A distinctive property of intelligent embodied agents is that they must be simultaneously continuous, because they evolve in physical time, and discrete, because they make successive decisions among finite sets of alternatives. The modern engineering approach is to sharply separate continuous (analog) dynamics and discrete (digital) computation through A/D converters and purely digital intelligence. But biological agents are flexible in the way they represent and process agent-environment interactions, e.g., transitioning from predominantly faithful/analog representations in primary sensory cortices to predominantly categorical/digital representations in prefrontal cortices. Excitability and its modulation are at the basis of flexible biological representations. Grounded in control and bifurcation theory, I will introduce a generalized notion of excitability to model and design flexible representations and to use them in embodied agents. The theory will be illustrated on the design of neuromorphic chips with enhanced robustness and flexibility properties, and on the design of a simple sensorimotor loop for deadlock-free collision avoidance in autonomous agents.

Neuromorphic Visual Motion and Action Analysis for Robotics

By: Cornelia Fermuller

Neuromorphic computing is an approach to computer engineering, spanning both hardware and software design, that seeks to emulate principles of the human brain and nervous system for efficient, low-power, and robust computation; in recent years, various individual concepts have been adopted in mainstream engineering. In this talk, I will describe my group’s work on neuromorphic visual motion analysis for navigation and action interpretation. Many real-world AI applications, including self-driving cars, robotics, augmented reality, and human motion analysis, are based on visual motion. Yet most approaches treat motion as an extension of static images by matching features in consecutive video frames. Inspired by biological vision, we use spatiotemporal filters and events from neuromorphic dynamic vision sensors, which simulate the transient response in biology, as input to our computational methods. I will describe a bio-inspired pipeline for the processing underlying navigation tasks and present algorithms for 3D motion estimation and foreground-background segmentation. The design of these algorithms is guided by a) questions about where to best use geometric constraints in machine learning, and b) experiments with visual motion illusions to gain insight into computational limitations. Finally, I will show the advantages of event-based processing for action understanding in robots.

Learning in Large-Population Games and Application to Multi-robot Task Allocation

By: Shinkyu Park

In this talk, we discuss the design and analysis of learning models in large-population games and their application to multi-robot task allocation in dynamically changing environments. In population games, given a set of strategies, each agent in a population selects a strategy to engage in repeated strategic interactions with others. Rather than computing and adopting the best strategy selection based on a known cost function, the agents need to learn such strategy selection from instantaneous rewards they receive at each stage of the repeated interactions. In the first part of this talk, leveraging passivity-based analysis of feedback control systems, I explain principled approaches to design learning models for the agent strategy selection that guarantee convergence to the Nash equilibrium of an underlying game, where no agent can be better off by changing its strategy unilaterally. I also talk about the design of higher-order learning models that strengthen the convergence when the agents’ strategy selection is subject to time delays. In the second part, I describe how the population game framework and its learning models can be applied to multi-robot task allocation problems, where a team of robots needs to carry out a set of given tasks in dynamically changing environments. Using multi-robot resource search and retrieval as an example, we discuss how the task allocation can be defined as a population game and how a learning model can be adopted as a decentralized task allocation algorithm for the individual robots to select and carry out the tasks.
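For reference, the Nash equilibrium condition mentioned above can be written compactly in standard population-game notation (this is a common textbook formulation with a cost function F, not notation taken from the talk): a population state x* is a Nash equilibrium if every strategy in use achieves minimal cost,

    x*_i > 0  =>  F_i(x*) <= F_j(x*)  for all strategies j,

so that no agent can reduce its cost by unilaterally switching to another strategy.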

Insect-inspired AI for autonomous flight of tiny drones

By: Guido de Croon

Tiny drones are promising for many applications, such as search-and-rescue, greenhouse monitoring, or keeping track of stock in warehouses. Since they are small, they can fly in narrow spaces. Moreover, their light weight makes them very safe for flight around humans. However, making such tiny drones fly completely by themselves is an enormous challenge. Most approaches to artificial intelligence for robotics have been designed with self-driving cars or other large robots in mind, which can carry many sensors and ample processing power. In my talk, I will argue that a different approach is necessary for achieving autonomous flight with tiny drones. In particular, I will discuss how we can draw inspiration from flying insects and endow our drones with similar intelligence. Examples include the fully autonomous “DelFly Explorer”, a 20-gram flapping-wing drone, and swarms of 30-gram CrazyFlie quadrotors able to explore unknown environments and find gas leaks. Moreover, I will discuss the promise of novel neuromorphic sensing and processing technologies, illustrating this with recent experiments from our lab. Finally, I will discuss how insect-inspired robotics can allow us to gain new insights into nature. I will illustrate this with a recent study in which we proposed a new theory of how flying insects determine the direction of gravity.

Tutorial: Individual Robot Perception and Learning

By: Pia Bideau

Distance estimation is an essential part of scene recognition and orientation, allowing agents to move in a natural environment. In particular, when animals move in groups (e.g., fish schools or flocks of birds), they seem to be very capable of doing this, efficiently and accurately enough that quite astonishing behaviors arise when they move together as a collective. Different sensor systems, but also different movement strategies, enable these agents to localize themselves relative to one another. Vision is probably the sensor system studied in greatest detail, but other sensor systems, such as ultrasonic systems, allow agents to “see” distance with their ears even in low-light conditions.


This tutorial will give an introduction to learning-based approaches for distance estimation using vision. While there are several cues for extracting information about distance, we will focus here on object appearance and its relative size: objects at a greater distance appear smaller than objects nearby. This is one of the fundamental principles of perspective projection. We will extend a classical object detector (such as YOLO) with the ability to estimate distance. As we wish to test the developed algorithm on a real robotic system, a focus lies on fast and efficient computation. For testing, a LEGO Mindstorms robot equipped with an RGB camera and a Raspberry Pi will be used.
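As a small illustration of this size-distance relation, here is a minimal Python sketch under a pinhole camera model; the focal length and object size below are made-up example values, not the tutorial's calibration:

    # Pinhole-camera distance estimate from apparent object size:
    #   distance = focal_length_px * real_height_m / bbox_height_px
    # All numbers below are illustrative assumptions, not calibrated values.

    def distance_from_size(focal_length_px: float,
                           real_height_m: float,
                           bbox_height_px: float) -> float:
        """Estimate object distance (in metres) from its apparent pixel height."""
        return focal_length_px * real_height_m / bbox_height_px

    # Example: a 0.3 m tall object seen as a 60 px high bounding box through a
    # camera with an assumed focal length of 600 px is roughly 3 m away.
    print(distance_from_size(600.0, 0.3, 60.0))  # -> 3.0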


  • Theory: Introduction to efficient object detection with YOLO
  • Practice (implementation): Learning absolute distance estimates (see the sketch after this list)
    • Extending object bounding box predictions with distance estimates
    • A new training loss for distance estimates
  • Practice (testing): Testing the developed algorithm on a LEGO Mindstorms robot
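The two implementation bullets above can be sketched in PyTorch as follows. This is a minimal illustration under assumed tensor layouts; the distance channel index, the shapes, and the smooth-L1 choice are assumptions for the sketch, not the tutorial's actual code:

    import torch
    import torch.nn.functional as F

    # Illustrative per-box layout: [x, y, w, h, objectness, distance, classes...];
    # index 5 is the (hypothetical) distance channel appended to the YOLO head.
    DIST_IDX = 5

    def distance_loss(pred: torch.Tensor,
                      target_dist: torch.Tensor,
                      obj_mask: torch.Tensor) -> torch.Tensor:
        """Regression loss on the added distance channel.

        pred:        (batch, boxes, 5 + 1 + num_classes) raw network output
        target_dist: (batch, boxes) ground-truth distances in metres
        obj_mask:    (batch, boxes) bool, True where a box matches an object
        """
        # Predict log-distance and exponentiate: keeps the estimate positive
        # and compresses the error scale for far-away objects.
        pred_dist = torch.exp(pred[..., DIST_IDX])
        return F.smooth_l1_loss(pred_dist[obj_mask], target_dist[obj_mask])

The total training objective would then be the usual YOLO box, objectness, and class terms plus a weighted distance term, e.g. total = loss_box + loss_obj + loss_cls + lambda_dist * distance_loss(...).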

Some reading material:

Detection and Distance estimation via YOLO-Dist:
https://www.mdpi.com/2076-3417/12/3/1354#cite

YOLO overview:
https://www.v7labs.com/blog/yolo-object-detection

https://pjreddie.com/media/files/papers/YOLOv3.pdf

https://pytorch.org/hub/ultralytics_yolov5/