Principles of Intelligence
The principle of adaptive representations states that behavioral flexibility is facilitated by adjusting the representations of world state to the agent’s environment, task, and goal. Adaptive representations ensure that a system represents aspects of the world that are most relevant to behavior in a way that is most conducive to generating that behavior.
In the context of social interactions, a fundamental aspect is the way we represent persons based on our expectations, knowledge, and experiences with them. Whether a face is represented and perceived as angry may depend on our prior knowledge about the context (Aviezer et al., 2007) or about the behavior of a human or robot agent (Maier et al., 2025). Do I presume this person or robot is showing aggressive behavior? Then I may form a more negative representation of their facial features. At the same time, someone else might associate this person or robot with positive experiences and represent the same face in a more positive light. These flexible adaptations may enable social agents to quickly adjust their communicative and goal-directed behavior.

In the human brain, where specific neurons code specific information, the information coded tends to adapt to the task we are aiming to complete. In this type of adaptive coding, neurons code certain features and not others depending on the current goal. For example, if I am looking for my son’s football, the prefrontal cortex might give more weight to object shape, while if I am looking for his red shirt, it might weight object color more strongly, so that I can complete the task more quickly and efficiently.
In other words, intelligent systems are able to flexibly adapt their representations (i.e. how they represent the world) to the context, tasks, and goals.
What would a non-intelligent entity do?
For better understanding, it is useful to compare the examples above with their non-intelligent counterparts. A non-intelligent engineering artefact such as a face recognition camera would still recognize the face and its features, but it could not adapt its interpretation based on previous experience, nor give more weight to certain features depending on the current task.
A more in-depth look
Sensations of the world and actions operating on the world are affected by many factors, e.g., objects, lighting, or the agent’s posture. Perfect knowledge of these processes could, in theory, lead to a disentangled representation, i.e., a representation whose components are fully independent of each other. This idealized situation cannot be achieved, since information about the world is always uncertain and partial. As a result, the representations of an intelligent agent are always approximations to ideal representations. Given a particular task, one can identify a “good” representation in which the relevant factors are more disentangled than in alternative representations. Since such “good” representations depend on the task, an intelligent agent must possess adaptive representations. On a mechanistic level, representational adaptation can be realized by adjusting the weighting of features. For example, when grasping a cup, grasp-relevant features such as shape are weighted more strongly than grasp-irrelevant features such as color. This differential weighting of information is related to an altered information flow between components in the system (see active interconnections). While adaptive representations are a crucial property of individual and social intelligence, how much explanatory power they hold for collective intelligence is still an open question.
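The feature-weighting mechanism described above can be illustrated with a minimal sketch. All object names, feature values, and task weights below are hypothetical, chosen only to mirror the football/red-shirt example: the same sensory features are re-weighted depending on the current goal, changing which object the agent selects.

```python
# Hypothetical feature representations of two objects.
# Each object is described by the same two features: shape and color.
objects = {
    "football": {"shape": 0.9, "color": 0.2},   # distinctly round, dull color
    "red_shirt": {"shape": 0.3, "color": 0.95}, # soft shape, vivid red color
}

# Task-dependent weights: each task up-weights the task-relevant feature,
# analogous to adaptive coding in prefrontal neurons.
task_weights = {
    "find_ball":  {"shape": 1.0, "color": 0.1},  # shape matters, color barely
    "find_shirt": {"shape": 0.1, "color": 1.0},  # color matters, shape barely
}

def weighted_score(features, task):
    """Re-weight the same sensory features according to the current goal."""
    weights = task_weights[task]
    return sum(features[name] * weights[name] for name in features)

def best_match(task):
    """Pick the object whose task-weighted representation is strongest."""
    return max(objects, key=lambda name: weighted_score(objects[name], task))

print(best_match("find_ball"))   # football
print(best_match("find_shirt"))  # red_shirt
```

The raw features never change; only their weighting does. This is the sense in which one fixed sensory input can yield different task-specific representations.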
Related projects
Project 8: Knowledge-augmented face perception