
Creating a robot that can integrate information from different sources and modalities
Research Unit 1, SCIoI Project 09

Principal Investigators:
Rasha Abdel Rahman
Verena Hafner
John-Dylan Haynes

Team members:
Anna Eiserbeck (Doctoral researcher)
Olga Wudarczyk (Postdoctoral researcher)
Antje Lorenz (Postdoctoral researcher)
Anna Kuhlen (external collaborator)
Murat Kirtay (external collaborator)
Doris Pischedda (external collaborator)
The overall goal of this project is to create a robot that can represent and integrate information from different sources and modalities for successful, task-oriented interactions with other agents. To fully understand the mechanisms of social interaction and communication in humans, and to replicate this complex human skill in technological artifacts, we must provide effective means of knowledge transfer between agents. The first step of this project is therefore to describe core components and determinants of communicative behavior, including joint attention, partner co-representation, information processing from different modalities, and the role of motivation and personal relevance (Kaplan & Hafner, 2006; Kuhlen & Abdel Rahman, 2017; Kuhlen et al., 2017). We will compare these functions in human-human, human-robot, and robot-robot interactions to identify commonalities and differences. This comparison will also consider the role of different presumed partner attributes (e.g., a robot described as “social” or “intelligent”). We will conduct behavioral, electrophysiological, and fMRI experiments to describe the microstructure of communicative behavior.
The second step of the project is to create predictive models for multimodal communication that can account for these psychological findings in humans. We will identify both the prerequisites for such communication and the factors acting as priors on it, and develop suitable computational models that represent multimodal sensory features in an abstract but biologically inspired way (suitable for extracting principles of intelligence; Schillaci et al., 2013). Looking ahead, the third step of this project is to use these models to generate novel predictions of social behavior in humans.
Throughout the project we will focus on the processing of complex multimodal information, a central characteristic of social interactions that has nevertheless thus far been investigated mostly within single modalities. We assume that multimodal information, e.g., combining auditory (speech), visual (face, eye gaze), and tactile (touch) input, will augment partner co-representation and thereby improve communicative behavior.
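To make the modeling idea more concrete, the following is a minimal illustrative sketch, not the project's actual model, of how feature vectors from the auditory, visual, and tactile modalities mentioned above might be fused into a single abstract representation and used to predict a partner-related quantity. All variable names, dimensionalities, the linear fusion scheme, and the toy "engagement" read-out are assumptions chosen for brevity.

```python
# Illustrative sketch only: a toy late-fusion predictor for multimodal input.
# Dimensions, weights, and the linear read-out are arbitrary assumptions,
# not the computational model developed in this project.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors (e.g., speech, face/gaze, touch).
auditory = rng.standard_normal(32)   # e.g., speech features
visual = rng.standard_normal(64)     # e.g., face and eye-gaze features
tactile = rng.standard_normal(16)    # e.g., touch features

# Fuse modalities into one abstract representation: simple concatenation
# followed by a random linear projection stands in for a learned mapping.
fused_input = np.concatenate([auditory, visual, tactile])
projection = rng.standard_normal((24, fused_input.size)) * 0.1
abstract_representation = np.tanh(projection @ fused_input)

# Predict a partner-related quantity (here, a single scalar "engagement"
# score) from the abstract representation with a linear read-out.
readout = rng.standard_normal(abstract_representation.size) * 0.1
predicted_engagement = float(readout @ abstract_representation)

print(f"Fused representation size: {abstract_representation.size}")
print(f"Predicted engagement score (toy value): {predicted_engagement:.3f}")
```

In an actual model, the random projection and read-out would be replaced by learned, biologically inspired mappings, and the prediction target would come from the behavioral, electrophysiological, and fMRI data described above.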