ArXiv TLDR

Statistical Properties of the King Wen Sequence: An Anti-Habituation Structure That Does Not Improve Neural Network Training

arXiv: 2604.09234

Augustin Chan

cs.LG · cs.AI · cs.NE

TLDR

The King Wen sequence, despite unique statistical properties resembling curriculum learning, does not improve neural network training and can degrade performance.

Key contributions

  • Rigorous statistical analysis identified four statistically significant properties of the King Wen sequence.
  • Hypothesized that these "anti-habituation" properties could benefit neural network training.
  • Experiments across two hardware platforms showed King Wen-based interventions degraded or, at best, failed to improve performance.
  • Explained the result: the sequence's high variance destabilizes gradient-based optimization.
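The statistical analysis behind the first contribution is Monte Carlo permutation testing: compute a statistic of the actual ordering, then compare it against the same statistic over many random orderings. A minimal sketch of that procedure for the transition-distance property, assuming hexagrams are encoded as 6-bit integers (the actual King Wen ordering and the paper's exact implementation are not reproduced here):

```python
import random

def hamming(a, b):
    """Number of differing lines between two 6-bit hexagram encodings."""
    return bin(a ^ b).count("1")

def mean_transition_distance(order, hexagrams):
    """Mean Hamming distance between consecutive hexagrams in an ordering."""
    n = len(order)
    return sum(
        hamming(hexagrams[order[i]], hexagrams[order[i + 1]])
        for i in range(n - 1)
    ) / (n - 1)

def percentile_vs_random(order, hexagrams, n_trials=10_000, seed=0):
    """Percentile of the observed statistic against random permutations:
    the fraction of shuffled orderings whose statistic falls below it."""
    rng = random.Random(seed)
    observed = mean_transition_distance(order, hexagrams)
    shuffled = list(order)
    below = 0
    for _ in range(n_trials):
        rng.shuffle(shuffled)
        if mean_transition_distance(shuffled, hexagrams) < observed:
            below += 1
    return 100.0 * below / n_trials
```

With the true King Wen ordering as `order` and its 6-bit encodings as `hexagrams`, the paper reports this kind of statistic landing at the 98.2nd percentile against 100,000 random baselines.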

Why it matters

This paper rigorously debunks the hypothesis that the ancient King Wen sequence, with its unique "anti-habituation" structure, could benefit modern neural network training. It highlights that statistically distinctive properties don't automatically translate to effective optimization dynamics, providing a cautionary tale for applying complex patterns to AI.

Original Abstract

The King Wen sequence of the I-Ching (c. 1000 BC) orders 64 hexagrams -- states of a six-dimensional binary space -- in a pattern that has puzzled scholars for three millennia. We present a rigorous statistical characterization of this ordering using Monte Carlo permutation analysis against 100,000 random baselines. We find that the sequence has four statistically significant properties: higher-than-random transition distance (98.2nd percentile), negative lag-1 autocorrelation (p=0.037), yang-balanced groups of four (p=0.002), and asymmetric within-pair vs. between-pair distances (99.2nd percentile). These properties superficially resemble principles from curriculum learning and curiosity-driven exploration, motivating the hypothesis that they might benefit neural network training. We test this hypothesis through three experiments: learning rate schedule modulation, curriculum ordering, and seed sensitivity analysis, conducted across two hardware platforms (NVIDIA RTX 2060 with PyTorch and Apple Silicon with MLX). The results are uniformly negative. King Wen LR modulation degrades performance at all tested amplitudes. As curriculum ordering, King Wen is the worst non-sequential ordering on one platform and within noise on the other. A 30-seed sweep confirms that only King Wen's degradation exceeds natural seed variance. We explain why: the sequence's high variance -- the very property that makes it statistically distinctive -- destabilizes gradient-based optimization. Anti-habituation in a fixed combinatorial sequence is not the same as effective training dynamics.
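The first experiment in the abstract modulates the learning-rate schedule by the sequence. The paper's exact modulation scheme is not given here; one plausible sketch, in which the per-step signal, the normalization, and the `amplitude` parameter are all assumptions, scales a base LR by a bounded, zero-centered version of the sequence signal:

```python
def sequence_lr_schedule(base_lr, signal, amplitude):
    """Hypothetical per-step LR schedule: scale base_lr by a bounded
    modulation derived from a per-step sequence signal (e.g. the
    transition distances of an ordering).

    The signal is centered to zero mean and normalized to unit peak,
    so every LR stays within base_lr * (1 ± amplitude).
    """
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    peak = max(abs(c) for c in centered) or 1.0
    return [base_lr * (1.0 + amplitude * c / peak) for c in centered]
```

Sweeping `amplitude` corresponds to the "all tested amplitudes" in the abstract. The sketch also makes the paper's explanation concrete: a high-variance signal produces large step-to-step swings in the effective learning rate, which is exactly the kind of perturbation that destabilizes gradient-based optimization.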
