Prospective Compression in Human Abstraction Learning
Leonardo Hernandez Cano, Ivan Zareski, Luisa El Amouri, Pinzhe Zhao, Max Mascini, et al.
TLDR
Humans learn abstractions by anticipating future tasks, a "prospective compression" strategy superior to retrospective methods in non-stationary environments.
Key contributions
- Introduces "prospective compression" as a human strategy for learning abstractions in evolving task environments.
- Demonstrates that existing retrospective compression algorithms cannot capture human abstraction behavior in non-stationary domains.
- Uses the Pattern Builder Task to experimentally validate human prospective compression behavior.
- Shows human abstraction learning anticipates future tasks, unlike LLM-based program synthesis.
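To make the retrospective/prospective contrast concrete, here is a toy sketch (not the paper's actual algorithm or task): programs are token sequences, an abstraction is a reusable subsequence, and the only difference between the two strategies is which task set the learner tries to compress. All function names and the scoring scheme are hypothetical illustrations.

```python
def compressed_length(program, abstraction):
    """Length of a program after replacing each non-overlapping
    occurrence of `abstraction` with a single call token."""
    n, k = len(program), len(abstraction)
    length, i = 0, 0
    while i < n:
        if tuple(program[i:i + k]) == tuple(abstraction):
            length += 1  # one call token replaces the whole subsequence
            i += k
        else:
            length += 1
            i += 1
    return length

def library_score(tasks, abstraction):
    """Total corpus length under a one-abstraction library,
    plus the cost of defining the abstraction itself (lower is better)."""
    return len(abstraction) + sum(compressed_length(t, abstraction) for t in tasks)

def retrospective_choice(past_tasks, candidates):
    """Retrospective compression: pick the abstraction that best
    compresses the corpus of tasks already seen."""
    return min(candidates, key=lambda a: library_score(past_tasks, a))

def prospective_choice(predicted_future_tasks, candidates):
    """Prospective compression: pick the abstraction that best compresses
    tasks the learner anticipates, e.g. by extrapolating a curriculum trend."""
    return min(candidates, key=lambda a: library_score(predicted_future_tasks, a))
```

In a stationary environment the two choices coincide; in a non-stationary curriculum they diverge, because the motif that compressed the past best need not compress the anticipated future:

```python
candidates = [("rot", "sq"), ("tri", "mv")]
past = [("rot", "sq", "rot", "sq"), ("rot", "sq", "x")]
predicted_future = [("tri", "mv", "tri", "mv"), ("tri", "mv", "y")]

retrospective_choice(past, candidates)              # → ("rot", "sq")
prospective_choice(predicted_future, candidates)    # → ("tri", "mv")
```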
Why it matters
This paper reveals a fundamental difference in how humans learn abstractions compared to current AI. Understanding human "prospective compression" can significantly advance online library learning and program synthesis, enabling AI to adapt more effectively to dynamic, real-world task environments. It offers a new paradigm for building more robust and adaptive learning systems.
Original Abstract
A core challenge in program synthesis is online library learning: the incremental acquisition of reusable abstractions under uncertainty about future task demands. Existing algorithms treat library learning as retrospective compression over a static task distribution, where the learned library is determined by the corpus of past tasks. However, real-world learning domains are often non-stationary, with tasks arising from a generative process that evolves over time. We propose and test the hypothesis that in non-stationary domains human library learning selects abstractions prospectively: targeting compression of future tasks. We study this question using the Pattern Builder Task, a visual program synthesis paradigm in which participants construct increasingly complex geometric patterns from a small set of primitives, transformations, and custom helpers that carry forward across trials. Using this task, we conduct two experiments with complementary latent curricula, designed to dissociate between behaviors consistent with prospective compression, and alternative library learning accounts. Using six computational models spanning online library learning strategies, we show that human abstraction behavior reflects sensitivity to latent, non-stationary structure in the task-generating process. This behavior is consistent with prospective compression, and cannot be captured by existing retrospective compression-based algorithms, or inductive biases modeled by LLM-based program synthesis.