ArXiv TLDR

Large Language Models Align with the Human Brain during Creative Thinking

arXiv: 2604.03480

Mete Ismayilzada, Simone A. Luchini, Abdulkadir Gokce, Badr AlKhamissi, Antoine Bosselut + 3 more

q-bio.NC, cs.AI, cs.CL

TLDR

LLM representations align with human brain activity during creative thinking; this alignment scales with model size and idea originality and is selectively reshaped by post-training objectives.

Key contributions

  • LLM representations align with human brain activity in creativity networks (DMN, FPN) during divergent thinking.
  • Brain-LLM alignment scales with model size (DMN) and idea originality (both networks), with effects strongest early in the creative process.
  • Post-training objectives (e.g., creativity-optimized, human behavior fine-tuned) selectively reshape LLM alignment.
  • Reasoning-trained LLMs show reduced alignment with creative neural geometry, shifting representations towards analytical processing.

Why it matters

This research shows that LLMs align with human brain activity during creative thinking and that post-training objectives selectively shape this alignment, offering crucial insights for developing AI that truly mimics and enhances human creativity.

Original Abstract

Creative thinking is a fundamental aspect of human cognition, and divergent thinking, the capacity to generate novel and varied ideas, is widely regarded as its core generative engine. Large language models (LLMs) have recently demonstrated impressive performance on divergent thinking tests, and prior work has shown that models with higher task performance tend to be more aligned to human brain activity. However, existing brain-LLM alignment studies have focused on passive, non-creative tasks. Here, we explore brain alignment during creative thinking using fMRI data from 170 participants performing the Alternate Uses Task (AUT). We extract representations from LLMs varying in size (270M-72B) and measure alignment to brain responses via Representational Similarity Analysis (RSA), targeting the creativity-related default mode and frontoparietal networks. We find that brain-LLM alignment scales with model size (default mode network only) and idea originality (both networks), with effects strongest early in the creative process. We further show that post-training objectives shape alignment in functionally selective ways: a creativity-optimized Llama-3.1-8B-Instruct preserves alignment with high-creativity neural responses while reducing alignment with low-creativity ones; a human behavior fine-tuned model elevates alignment with both; and a reasoning-trained variant shows the opposite pattern, suggesting chain-of-thought training steers representations away from creative neural geometry toward analytical processing. These results demonstrate that post-training objectives selectively reshape LLM representations relative to the neural geometry of human creative thought.
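
For readers unfamiliar with RSA, the sketch below illustrates the general form of the analysis the abstract names: build a representational dissimilarity matrix (RDM) from the LLM embeddings and one from the fMRI responses, then rank-correlate the two. The array shapes, the correlation-distance metric, and the Spearman statistic are illustrative assumptions; the abstract does not specify the paper's exact pipeline.

```python
# Minimal RSA sketch, assuming correlation-distance RDMs and Spearman
# rank correlation. Data here are random placeholders, not the paper's.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix for (n_items, n_features)
    data, returned as the condensed upper-triangle vector of pairwise
    correlation distances."""
    return pdist(features, metric="correlation")

rng = np.random.default_rng(0)
llm_embeddings = rng.standard_normal((170, 4096))   # one LLM hidden state per AUT response
brain_responses = rng.standard_normal((170, 2000))  # matching fMRI patterns (e.g., DMN voxels)

# Brain-LLM alignment = rank correlation between the two RDMs
alignment, p_value = spearmanr(rdm(llm_embeddings), rdm(brain_responses))
print(f"RSA alignment (Spearman rho): {alignment:.3f} (p = {p_value:.3f})")
```

Because RSA compares dissimilarity structure rather than raw activations, it sidesteps the dimensionality mismatch between model embeddings and voxel patterns, which is why it is a common choice for brain-model comparisons.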
