Principles of Intelligence
This principle describes the ability of intelligent agents to efficiently find multi-faceted solutions to complex tasks with very high-dimensional solution spaces by iteratively assembling solutions to simpler, low-dimensional problems.
The best way to solve a complex problem in a very high-dimensional space (i.e., a data space with a large number of features or attributes) is first to solve simpler problems in lower-dimensional spaces and then to learn how to put those solutions together appropriately. Once an agent has discovered an easily discoverable regularity relevant to a task, it can effectively replace the dimensions concerned with its knowledge of this simple regularity. The resulting problem space is simpler because some dimensions are already explained.
Now, in this simplified space, a new regularity might come within reach of the agent's learning ability: the agent discovers this new regularity, can use it in the future, and has further reduced the dimensionality of the remaining space. In this way, more and more regularities move into the agent's reach. Each regularity can be viewed as a factor in the generation of behavior, and the agent can compose and re-compose these factors to produce behavior.
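The iterative process described above can be sketched in code. The following is an illustrative toy example (the data, feature counts, and variable names are our own, not drawn from SCIoI research): an agent repeatedly discovers the single one-dimensional regularity that best explains the still-unexplained part of a task, subtracts its contribution, and is then left with a simpler residual problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: predict y from 6 features, where only a few simple
# one-dimensional regularities actually matter.
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.5 * X[:, 4]

residual = y.copy()
discovered = {}  # feature index -> learned coefficient

# Discover one simple regularity at a time: pick the feature most
# correlated with what is still unexplained, fit it, and subtract its
# contribution. Each step leaves a simpler, lower-dimensional problem.
for _ in range(3):
    corrs = [abs(np.corrcoef(X[:, j], residual)[0, 1]) for j in range(6)]
    j = int(np.argmax(corrs))
    coef = (X[:, j] @ residual) / (X[:, j] @ X[:, j])
    discovered[j] = discovered.get(j, 0.0) + coef
    residual -= coef * X[:, j]

print(sorted(discovered))  # indices of the regularities found so far
```

Note how the first, strongest regularity is easy to find in the full space, and each discovery makes the next regularity easier to detect against the shrinking residual.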

A more in-depth look
Previous SCIoI research on novel variants of federated learning algorithms showcases the principle at work. Federated learning is a type of decentralized machine learning in which a shared model is trained across multiple devices or servers (clients) without directly sharing their raw data. Instead of a central processing unit fitting a model to a vast data set (i.e., solving one large optimization problem), federated learning splits the overall problem into several smaller subproblems whose solutions are subsequently combined into a solution for the overall problem. In an iterative fashion, local computational agents fit local models on (disjoint) subsets of the entire data set and send their model parameters to a central unit, which assembles an updated overall model and transmits it back to the local agents. Dramatic improvements (in terms of both efficiency and privacy) are possible by recognizing that the central unit does not need access to the local parameter vectors (Öksüz et al., 2023, 2024). Instead, a weighted sum of those vectors (plus rudimentary information on the weights) turns out to be sufficient.
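The round-based scheme described above can be illustrated with a minimal federated-averaging sketch. This is not the cited algorithm of Öksüz et al.; it is a generic toy version under our own assumptions (linear models, noiseless synthetic data, invented function names), showing the key point that the central unit only needs a weighted sum of the local parameter vectors plus the total weight.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 5
true_w = rng.normal(size=d)  # ground-truth shared model (synthetic)

# Three clients, each holding a disjoint local data set.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, d))
    clients.append((X, X @ true_w))

def local_step(w, X, y, lr=0.05, epochs=5):
    """One round of local training: plain gradient descent on local data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(d)
for _ in range(30):
    # Each client refines the current shared model on its own data.
    local_ws = [local_step(w, X, y) for X, y in clients]
    # The central unit never sees the individual parameter vectors --
    # a data-size-weighted sum plus the total weight is sufficient
    # to form the updated shared model.
    sizes = [len(y) for _, y in clients]
    weighted_sum = sum(n * wl for n, wl in zip(sizes, local_ws))
    w = weighted_sum / sum(sizes)

print(np.linalg.norm(w - true_w))  # error shrinks over the rounds
```

In a real deployment the weighted sum would be computed by the clients themselves (e.g., via secure aggregation), which is where the privacy gain comes from: no single local parameter vector is ever exposed.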
Examples
A good example of this principle is the learning of mathematics by schoolchildren: first they learn addition, then, building on this, multiplication, then division, and so on.
Similarly, children’s motor skills develop incrementally. Control of the limbs develops from proximal to distal, and motor skills progress from coarse to fine, increasing the dimensionality of the motion space only once the first degrees of freedom are mastered (Baillargeon, 2002).
Connected projects
P3: Mouse lock box
P4: Intelligent kinematic problem solving
P26: Ecologically Rational Strategy Selection
P28: Learning to manipulate from demonstration (to escape from a room)
P46: Mouse lock box 2
P49: Parrobots II