Small worlds, large worlds
A theme of Oliver’s talk was the distinction between “small worlds” and “large worlds.” AI systems excel in small worlds: benchmarks, games, controlled datasets, and carefully scripted demonstrations are all worlds where the rules are fixed. But robots do not live there. They move in the “large world”: unpredictable, partly observable, and constantly shifting. This gap, Oliver argued, is a core challenge of robotics today. The real world simply does not behave like the worlds in which our current models learn.
To illustrate the difference, Oliver pointed to a recent trend in robotics: large language models being connected directly to robots. In principle, these models should handle complexity. But in SCIoI’s own experiments, where language models were tested on a simple physical puzzle — a lockbox mechanism designed for robots — a different picture emerged. The models often failed in striking ways: they repeated actions, ignored feedback, or confidently “declared success” despite not having solved anything. They speak fluently, but fluency is a small-world skill. Acting in the physical world is a large-world challenge.
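To make the setup concrete, here is a minimal sketch of what such a coupling between a language model and a lockbox-style task can look like: the model receives the action history and the latest feedback at every step, and the failure modes above appear exactly when its next choice does not depend on that feedback. The environment, the action names, and the `query_llm` stub are hypothetical placeholders, not SCIoI’s actual experimental code.

```python
# Minimal sketch of an LLM-in-the-loop agent for a lockbox-style puzzle.
# Illustrative only; it does not reproduce SCIoI's experiments.

ACTIONS = ["push_slider", "turn_knob", "pull_latch", "open_lid"]

def query_llm(history, observation):
    """Hypothetical stand-in for a call to a real language model.

    A real implementation would send the history and observation as a prompt
    and parse the reply into one of ACTIONS.
    """
    return "pull_latch"  # a model that ignores feedback keeps repeating itself

def lockbox_step(action, state):
    """Toy environment: the latch only releases after the slider has been pushed."""
    if action == "push_slider":
        state["slider_pushed"] = True
        return "slider moved"
    if action == "pull_latch" and state.get("slider_pushed"):
        state["solved"] = True
        return "latch released"
    return "nothing happened"  # feedback the agent should react to

state, history = {}, []
for step in range(5):
    obs = history[-1][1] if history else "lockbox is closed"
    action = query_llm(history, obs)        # closed loop: feedback goes back in
    feedback = lockbox_step(action, state)
    history.append((action, feedback))
    print(step, action, "->", feedback)

print("solved:", state.get("solved", False))  # the repeating agent never succeeds
```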
So, how would we find out what allows humans, animals, and artificial systems to operate in large worlds at all?
Investigating Principles of Intelligence
Oliver pointed to the approach taken at SCIoI: identifying and validating principles that make intelligent behavior possible across different organisms and machines.
Over the past years, SCIoI has formulated a number of candidate principles. These principles are recurring patterns: ways in which intelligent systems manage complexity or extract structure from a world that is too big to model directly.
Drawing from SCIoI projects, Oliver highlighted three such patterns that appear across species and robotic systems, each helping to turn the “large world” into something an agent can act on.
Leveraging structure makes actions robust
In one example, Oliver showed a robot opening a drawer while a human interferes, pushing, pulling, or misaligning the motion. Instead of failing, the robot adapts smoothly.
What makes this possible is not a long list of rules coded into the system. It is the way the robot internally connects the dots: the relationships between its arm, the drawer, and the way they move together. When these relationships are structured correctly, actions flow from them and remain stable even when uncertainty arises and the world misbehaves.
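Read in control terms, one simple way to capture this idea is to define the pulling action relative to the drawer rather than as a fixed trajectory in world coordinates; a perturbation then only shifts where the relation is evaluated instead of invalidating the plan. The sketch below is a deliberately simplified one-dimensional illustration with made-up gains and targets, not the controller used on the SCIoI robot.

```python
import numpy as np

# Simplified 1-D sketch: the action is expressed as a relation between gripper,
# drawer, and goal, so human interference merely changes where that relation is
# evaluated. Illustration of the idea only, not SCIoI's actual controller.

rng = np.random.default_rng(0)
drawer_pos = 0.0         # how far the drawer is pulled out (meters)
gripper_pos = 0.0        # gripper position along the drawer axis
TARGET_OPENING = 0.30    # desired drawer opening (assumed value)
GAIN = 0.5               # simple proportional gain (assumed value)

for step in range(40):
    # Human interference: randomly pushes or pulls the drawer a little.
    drawer_pos = max(drawer_pos + rng.uniform(-0.02, 0.02), 0.0)

    # Keep the gripper attached to the handle, wherever the handle now is.
    gripper_pos = drawer_pos

    # Pull toward the goal based on the current relation, not a pre-planned path.
    error = TARGET_OPENING - drawer_pos
    drawer_pos += GAIN * error
    gripper_pos = drawer_pos

print(f"final drawer opening: {drawer_pos:.3f} m (target {TARGET_OPENING} m)")
```

Because each step recomputes the motion from the current gripper–drawer–goal relation, the same short loop absorbs pushes and pulls that would derail a fixed, pre-scripted trajectory.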