New paper! Learning in Social Interaction: perspectives from psychology and robotics
Applying Piaget’s and Vygotsky’s views to human and robotic learners’ imitation processes
One of the aims of our cluster’s research is to facilitate communication between robotics and psychology. To do so, we apply key concepts of social learning in humans and examine how they map onto robotics. A conference paper by Murat Kirtay, Johann Chevalère, Rebecca Lazarides, and Verena Hafner, presented at the IEEE ICDL 2021 conference in Beijing, identifies relevant concepts for addressing the question of social learning in social and cognitive-developmental robotics in light of current progress in the field.
The paper, titled “Learning in Social Interaction: Perspectives from Psychology and Robotics,” first presents Piaget’s constructivist view, describing how infants acquire new knowledge by self-organizing mental structures (for example, turning from manipulating a plastic toy to delicately handling an egg requires accommodating the initial action pattern deployed towards the object). It then expands to Vygotsky’s socio-constructivist theory, which stresses the influence of social others (a caregiver, a parent, or a teacher) in the transmission of knowledge. For each of these views, our researchers linked key psychological concepts to representative social robotics research by describing current work in the field, and then combined those key concepts into a conceptual architecture in order to explore the processes involved when a robotic agent imitates human actions.
Breaking the action down into simpler schemas
In order to enable imitation, the agent must create new motor patterns, known as schemas. This is done by transforming available patterns, but also by flexibly generalizing imitation of the partner’s actions performed on a variety of objects. More precisely, in our conceptual cognitive architecture, a robotic agent trying to imitate the complex actions of the partner would break down the visual information coming from those actions into smaller steps and search for the corresponding independent patterns in memory. After retrieving them, the agent would recombine them into new structures.
This is inspired by the scaffolding principle inherited from socio-constructivist theory, through which a teacher helps a learner overcome difficulties when trying to solve a difficult task, thus going beyond what the learner could achieve alone. In this context, the teacher decomposes the problem into a step-by-step strategy whose individual steps the learner can achieve more easily. For example, multiplying 11 by 13 can be presented as two separate multiplications that the learner can achieve, first multiplying 11 by 10 and then 11 by 3, finally adding up the two results.
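The multiplication example can be sketched as a tiny decomposition routine (illustrative only; the function name and breakdown are our own, not part of the paper):

```python
def scaffolded_multiply(a, b):
    """Decompose a * b into easier sub-multiplications, mimicking
    the step-by-step strategy a teacher might offer a learner."""
    tens, ones = divmod(b, 10)          # 13 -> (1, 3)
    partial_tens = a * tens * 10        # 11 * 10 = 110
    partial_ones = a * ones             # 11 * 3  = 33
    return partial_tens + partial_ones  # 110 + 33 = 143

print(scaffolded_multiply(11, 13))  # -> 143
```

Each sub-step stays within what the learner can already do, and the final sum reassembles the full solution.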
If we translate that into a motor sequence, a difficult action such as simulating an airplane toy’s takeoff would be decomposed into smaller motor sequences (first moving the airplane horizontally, then lifting the nose, then lifting the whole airplane, and finally moving it horizontally again). In our architecture, such a scaffolding approach would be implemented in the robotic agent itself, even though we plan to involve the caregiver as well. Within the agent itself, the scaffolding mechanism would apply a segmentation procedure based on different levels of resolution of the visual image in the temporal, spatial, and visual planes. The resulting “bits” of visual information would be compared to the most similar motor representations in memory, and would then be recombined to form a new motor pattern.
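A minimal sketch of such a segment-retrieve-recombine procedure, assuming the caregiver’s demonstration arrives as a list of visual frames (all names and the similarity measure are hypothetical placeholders):

```python
def segment(frames, resolution):
    """Split a demonstrated action into `resolution` equally sized
    temporal chunks -- the "bits" of visual information."""
    size = max(1, len(frames) // resolution)
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def best_match(chunk, memory):
    """Retrieve the stored motor pattern most similar to a chunk.
    Similarity here is a crude placeholder: shared-frame overlap."""
    return max(memory, key=lambda pattern: len(set(chunk) & set(pattern)))

# Toy demonstration: four frames of the airplane-takeoff gesture.
demo = ["roll", "nose_up", "lift", "cruise"]
memory = [["roll"], ["nose_up", "lift"], ["cruise"]]
chunks = segment(demo, resolution=4)
recombined = [best_match(c, memory) for c in chunks]
```

In the paper’s architecture the segmentation would also span spatial and visual resolution, not just time; this sketch keeps only the temporal dimension for brevity.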
To be able to switch from one reference object to another and generalize imitation, an additional mechanism is needed: joint attention. It describes the intentional and coordinated coupling between agents around a reference object, where both parties act upon the object through turn-taking activities. In our constructivist cognitive architecture, a simple form of joint attention equips the robotic agent with mechanisms that detect the saliency of objects in the visual field based on the caregiver’s gaze, keeping track of the caregiver’s looking time. For example, suppose a parent playing with an infant shifts their attention to a new toy. The infant detects the relevance of the new toy in the visual field and notices the parent’s gaze pointing towards it for a relatively prolonged time. This necessary step establishes the new toy as the reference object and gets the infant to abandon the previous one, which enables the infant to smoothly keep interacting with the parent around the new toy.
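One way to sketch this gaze-based selection of the reference object (the threshold, scores, and function name are invented for illustration and do not come from the paper):

```python
def select_reference(objects, gaze_target, dwell_time, dwell_threshold=1.5):
    """Pick the reference object: the most salient object wins, unless the
    caregiver has fixated another object long enough to redirect attention."""
    if gaze_target in objects and dwell_time >= dwell_threshold:
        return gaze_target  # prolonged gaze overrides bottom-up saliency
    return max(objects, key=lambda o: objects[o])  # fall back to saliency

toys = {"plane": 0.4, "ball": 0.7}           # saliency scores in the visual field
print(select_reference(toys, "plane", 2.0))  # -> plane (parent kept looking at it)
print(select_reference(toys, "plane", 0.3))  # -> ball (gaze too brief; saliency wins)
```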
So what exactly happens when our robotic agent imitates the caregiver?
To summarize: once the agent’s attentional system has selected the reference object and the caregiver performs the first action, a preparation stage begins that consists of associating the caregiver’s motor sequence with the outcomes produced on the reference object. A search in memory is then conducted to retrieve the motor patterns that could eventually lead to similar outcomes on the object.
The agent then executes the motor sequence retrieved from memory and records the just-performed action as a new sensory state. Here, a comparison is made between the outcome resulting from the caregiver’s action and that resulting from the agent’s own action. If the difference is large, an accommodation process begins.
When accommodating, the caregiver’s action, in terms of visual input, is decomposed into smaller pieces of information, following the scaffolding principle discussed above. This stage can be repeated multiple times, applying different resolutions to the visual information, thus breaking the action down into smaller and smaller sequences that can eventually find their equivalents in memory and are ultimately combined into a cohesive pattern.
If this stage fails, the agent would ask the caregiver to decompose the action in order to facilitate the accommodation process.
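The whole cycle (preparation, execution, comparison, accommodation at ever finer resolution) can be summarized as a loop. This is a rough toy sketch under our own simplifying assumptions: outcomes are numbers, retrieval is nearest-match, and execution simply sums the plan; none of this is the paper’s implementation.

```python
def nearest(goal, memory):
    """Retrieve the stored pattern whose outcome is closest to the sub-goal."""
    return min(memory, key=lambda m: abs(m - goal))

def imitate(target, memory, max_resolution=4, tolerance=0.0):
    """Try to reproduce the caregiver's outcome `target`; on a mismatch,
    accommodate by re-segmenting the demonstration ever more finely."""
    for resolution in range(1, max_resolution + 1):
        step = target / resolution                         # accommodation: sub-goals
        plan = [nearest(step, memory) for _ in range(resolution)]  # preparation
        outcome = sum(plan)                                # execution
        if abs(outcome - target) <= tolerance:             # comparison
            return plan
    return None  # failure: ask the caregiver to decompose the action

# Toy memory of outcomes the agent can already produce.
print(imitate(10, [1, 2, 5]))  # -> [5, 5] (found at resolution 2)
```

Returning `None` corresponds to the final fallback above: the agent hands the decomposition back to the caregiver, closing the scaffolding loop socially.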
This paper aims to provide a guide, from a constructivist perspective, for research on human-robot interaction and on the principles guiding the process of imitation.
Link to paper: https://ieeexplore.ieee.org/abstract/document/9515648
Link to video of the conference: https://www.youtube.com/watch?v=gPEruJjbARg