Measuring Understanding Through Discrete Compositional Knowledge Structures in Hierarchical Automata
TL;DR
This paper proposes hierarchical automata with discrete compositional knowledge structures to measure genuine understanding in AI systems.
Key contributions
- Introduces hierarchical automata: finite state machines (FSMs) represent individual patterns, and higher-order automata represent their compositions.
- Enables constrained inference to construct automata from single observations.
- Utilizes graph memory to make compositional knowledge directly inspectable.
- Identifies five measurable signatures of understanding, distinguishing it from statistical correlation.
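The contributions above can be illustrated with a minimal sketch. All names here (`PatternFSM`, `fsm_from_observation`, `GraphMemory`) are hypothetical stand-ins, not the paper's actual implementation: a pattern is a small FSM, constrained inference builds a chain FSM from a single observed sequence, and a graph memory records compositions as inspectable part-of edges.

```python
# Hypothetical sketch of the paper's idea, not its actual code: patterns as
# FSMs, compositions in an inspectable graph memory.

class PatternFSM:
    """A pattern as an FSM: symbol-labelled transitions between states."""
    def __init__(self, name, transitions, start, accept):
        self.name = name
        self.transitions = transitions  # (state, symbol) -> next state
        self.start = start
        self.accept = accept            # set of accepting states

    def accepts(self, symbols):
        state = self.start
        for s in symbols:
            key = (state, s)
            if key not in self.transitions:
                return False
            state = self.transitions[key]
        return state in self.accept

def fsm_from_observation(name, symbols):
    """Constrained inference from a single observation: the chain FSM
    that accepts exactly the observed symbol sequence."""
    transitions = {(i, s): i + 1 for i, s in enumerate(symbols)}
    return PatternFSM(name, transitions, start=0, accept={len(symbols)})

class GraphMemory:
    """Compositional knowledge as a directly inspectable graph:
    pattern nodes plus 'part-of' edges into composition nodes."""
    def __init__(self):
        self.nodes = {}
        self.edges = []  # (child, "part-of", parent)

    def add_pattern(self, fsm):
        self.nodes[fsm.name] = fsm

    def compose(self, parent, children):
        self.nodes[parent] = children
        self.edges += [(c, "part-of", parent) for c in children]

# Two patterns in a toy geometric domain, each built from one observation.
line = fsm_from_observation("line", ["right", "right"])
corner = fsm_from_observation("corner", ["right", "up"])

memory = GraphMemory()
memory.add_pattern(line)
memory.add_pattern(corner)
memory.compose("L-shape", ["line", "corner"])

print(line.accepts(["right", "right"]))    # True
print(corner.accepts(["right", "right"]))  # False: wrong second move
print(memory.edges)  # the composition is directly inspectable
```

Because both the automata and the graph edges are discrete objects, "what the system knows" can be read off directly rather than inferred from behaviour, which is the measurement property the paper argues for.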
Why it matters
Current AI systems lack clear methods for measuring genuine understanding. This framework makes understanding observable and quantifiable through discrete structural signatures, a measurement capability that complements existing AI approaches.
Original Abstract
How do we measure genuine understanding in artificial cognitive systems? Current approaches face a measurement gap: probabilistic systems refine confidence gradually, practice-based systems compile knowledge through repeated execution, and neural systems distribute understanding across opaque embedding spaces. We propose that making understanding measurable requires architectures where understanding formation produces discrete, inspectable structural signatures. This paper presents hierarchical automata built from finite state machines representing patterns and higher-order automata representing compositions. Constrained inference constructs automata from single observations. Similarity detection clusters related automata, making concept robustness quantifiable. Graph memory makes compositional knowledge directly inspectable. Metacognitive mechanisms enable observable reconfiguration. We demonstrate understanding measurement in a simple geometric domain. Graph evolution tracking reveals five measurable signatures: immediate representation formation, structural knowledge, generalization capacity, compositional awareness, and metacognitive access. These measurements distinguish structural understanding from statistical correlation. Our contribution is a framework for making understanding measurable through discrete compositional knowledge structures. This measurement capability complements perceptual learning in neural systems and task execution in neurosymbolic architectures.