Postdoctoral Project: Robot Multimodal Interaction and Communication
Part of research project: Multimodal Interaction and Communication
Description of the research project
To fully understand the mechanisms of social interaction and communication in humans and to replicate this complex human skill in technological artifacts, we must provide effective means of knowledge transfer between agents. The first aim of this project is therefore to describe core components and determinants of communicative behaviour including joint attention, partner co-representation, information processing from different modalities and the role of motivation and personal relevance. We will compare these functions in human-human, human-robot and robot-robot interactions to identify commonalities and differences. This comparison will also consider the role of different presumed partner attributes (e.g., a robot described as “social” or “intelligent”). We will conduct behavioural and electrophysiological experiments to describe the microstructure of communicative behaviour.
Building on the resulting description of communicative and social interactive behaviour, the second aim of the project is to create predictive models for multimodal communication that can account for these psychological findings in humans. Both the prerequisites for multimodal communication and the factors acting as priors will be identified, and suitable computational models will be developed that can represent multimodal sensory features in an abstract but still biologically plausible way. Ultimately, the models will be used to generate novel predictions of social behaviour in human agents.
Throughout the project we will focus on the processing of multimodal information, a central characteristic of social interactions that has nevertheless thus far been investigated mostly within single modalities. We assume that multimodal information, e.g. combining auditory (speech) with visual (face, eye gaze) or tactile (touch) cues, will augment partner co-representation and thereby improve communicative behaviour.
Description of the postdoctoral project
This postdoctoral project focuses on multimodal communication and interaction. In a series of experiments, the effects of multimodality will be studied. For the human-robot interaction experiments, suitable social interaction skills need to be defined and implemented to realise joint action and attention.
An important aspect of this project is the study of computational predictive models that can account for learning and communicating using several modalities, including vision, audition and tactile information. The models will be developed and evaluated in close cooperation with the empirical studies conducted on the analytic side of the project.
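As a rough illustration of the kind of model described above, the sketch below fuses feature vectors from several modalities into a shared abstract representation and applies a linear readout to predict a partner's behaviour. All names, dimensions, and the use of fixed random projections (in place of learned encoders) are illustrative assumptions, not a description of the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions per modality (illustrative only).
DIMS = {"vision": 8, "audition": 6, "touch": 4}
SHARED = 5  # size of the shared abstract representation

# One random linear projection per modality into the shared space;
# a stand-in for learned encoders.
proj = {m: rng.normal(size=(SHARED, d)) for m, d in DIMS.items()}

def fuse(observations):
    """Project each available modality into the shared space and average.

    Missing modalities are simply skipped, so the representation
    degrades gracefully when a channel (e.g. touch) is absent.
    """
    encoded = [proj[m] @ x for m, x in observations.items()]
    return np.mean(encoded, axis=0)

def predict(observations, readout):
    """Linear readout from the fused representation, e.g. a predicted
    partner action encoded as a small vector."""
    return readout @ fuse(observations)

readout = rng.normal(size=(3, SHARED))  # 3 hypothetical action classes
obs = {m: rng.normal(size=d) for m, d in DIMS.items()}

full = predict(obs, readout)                                    # all modalities
partial = predict({m: v for m, v in obs.items() if m != "touch"}, readout)
```

Averaging in a shared space is only one of many possible fusion schemes; the project would presumably compare such choices against the behavioural and electrophysiological findings.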
Humanoid “Pepper” robots will be available for this project to conduct human-robot interaction and communication experiments.
Project start date: October 1, 2019
Applicants must hold a Ph.D. degree in Computer Science or a related discipline (Computational Neuroscience, Cognitive Science, Robotics) and should have a proven background in the following topics: