Multimodal Interaction and Communication

Principal Investigators:

Rasha Abdel Rahman
Verena Hafner
John-Dylan Haynes

Team members:

Doris Pischedda (postdoc)
Murat Kirtay (postdoc)
Olga Wudarczyk (postdoc)
Anna Kuhlen (external collaborator)

Creating a robot that can integrate information from different sources and modalities

Research Unit 1, SCIoI Project 09

The overall goal of this project is to create a robot that can represent and integrate information from different sources and modalities for successful, task-oriented interactions with other agents. To fully understand the mechanisms of social interaction and communication in humans, and to replicate this complex human skill in technological artifacts, we must provide effective means of knowledge transfer between agents. The first step of this project is therefore to describe core components and determinants of communicative behavior, including joint attention, partner co-representation, information processing across modalities, and the role of motivation and personal relevance (Kaplan & Hafner, 2006; Kuhlen & Abdel Rahman, 2017; Kuhlen et al., 2017). We will compare these functions in human-human, human-robot, and robot-robot interactions to identify commonalities and differences. This comparison will also consider the role of different presumed partner attributes (e.g., a robot described as “social” or “intelligent”). We will conduct behavioral, electrophysiological, and fMRI experiments to describe the microstructure of communicative behavior.

The second step of the project is to create predictive models for multimodal communication that can account for these psychological findings in humans. We will identify both the prerequisites and the factors acting as priors, and develop suitable computational models that represent multimodal sensory features in an abstract yet biologically inspired way, suitable for extracting principles of intelligence (Schillaci et al., 2013). In a third step, we will use these models to generate novel predictions of social behavior in humans.
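To make the idea of a predictive model over fused multimodal features concrete, here is a minimal sketch. It is purely illustrative and not the project's actual architecture: the linear model, the toy "visual" and "auditory" feature vectors, and all variable names are our assumptions. The sketch fuses two modality streams into one state vector and learns, from prediction errors, a mapping from the current multimodal state to the next one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multimodal data: each time step pairs a "visual" and an "auditory"
# feature vector; the next joint state depends on both modalities.
T, d_vis, d_aud = 200, 4, 3
vis = rng.normal(size=(T, d_vis))
aud = rng.normal(size=(T, d_aud))
x = np.hstack([vis, aud])                     # fused multimodal state
W_true = 0.3 * rng.normal(size=(d_vis + d_aud, d_vis + d_aud))
x_next = x @ W_true + 0.01 * rng.normal(size=x.shape)

# Predictive learning: adjust W so that x[t] @ W approximates x_next[t],
# using the prediction error as the learning signal.
W = np.zeros_like(W_true)
lr = 0.05
for _ in range(300):
    err = x @ W - x_next                      # prediction error
    W -= lr * x.T @ err / T                   # gradient step on squared error

mse = float(np.mean((x @ W - x_next) ** 2))
print(f"final prediction error (MSE): {mse:.4f}")
```

In the actual project the fused state would come from real sensor streams and a richer (e.g., neural-network) model, but the core loop is the same: predict the next multimodal state, compare against what arrives, and update on the error.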

Throughout the project we will focus on the processing of complex multimodal information, a central characteristic of social interactions that has nevertheless thus far been investigated mostly within single modalities. We assume that multimodal information, e.g., combining auditory (speech), visual (face, eye gaze), and tactile (touch) cues, will strengthen the partner co-representation and thereby improve communicative behavior.


Related Publications

Spatola, N., & Wudarczyk, O. A. (2020). Implicit Attitudes Towards Robots Predict Explicit Attitudes, Semantic Distance Between Robots and Humans, Anthropomorphism, and Prosocial Behavior: From Attitudes to Human–Robot Interaction. International Journal of Social Robotics.
Kirtay, M., Wudarczyk, O. A., Pischedda, D., Kuhlen, A. K., Abdel Rahman, R., Haynes, J.-D., & Hafner, V. V. (2020). Modeling robot co-representation: state-of-the-art, open issues, and predictive learning as a possible framework. 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 1–8.
