Psychological Concept Neurons: Can Neural Control Bias Probing and Shift Generation in LLMs?
Yuto Harada, Hiro Taiyo Hamada
TLDR
This paper explores how Big Five personality traits are represented in LLMs, finding that while internal representations can be steered, controlling behavioral outputs is much harder.
Key contributions
- Big Five personality information is decodable from early to final LLM layers.
- Concept-selective neurons for Big Five traits are most prevalent in mid-layers.
- Interventions on these neurons causally shift latent representations towards target concepts.
- Direct control over generated labels is weaker, more concept-dependent, and prone to cross-trait spillover.
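The layer-wise probing behind the first two findings can be illustrated with a toy sketch. This is not the paper's code: the hidden states below are synthesized (with a planted trait signal that strengthens with depth), whereas the paper caches real LLM activations on questionnaire-operationalized Big Five items; the least-squares probe and all variable names are our own stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for cached hidden states: (n_layers, n_samples, d_model).
# In the paper these would come from an LLM; here we synthesize them.
n_layers, n_samples, d_model = 4, 200, 32
labels = rng.integers(0, 2, n_samples)  # high/low on one hypothetical trait
states = rng.normal(size=(n_layers, n_samples, d_model))

# Plant a trait direction whose strength grows with depth, so decodability
# should improve across layers in this toy setup.
direction = rng.normal(size=d_model)
direction /= np.linalg.norm(direction)
for layer in range(n_layers):
    states[layer] += np.outer(labels * layer, direction)

def probe_accuracy(x, y):
    """Least-squares linear probe with a simple train/test split."""
    split = len(y) // 2
    x_tr, y_tr, x_te, y_te = x[:split], y[:split], x[split:], y[split:]
    w, *_ = np.linalg.lstsq(x_tr, y_tr - 0.5, rcond=None)
    return float(((x_te @ w > 0) == y_te).mean())

accs = [probe_accuracy(states[layer], labels) for layer in range(n_layers)]
print(accs)
```

In this synthetic setting, probe accuracy at the deepest layer should exceed that at layer 0, mirroring the qualitative shape of the paper's "decodable in early layers, detectable through the final layers" result without reproducing its numbers.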
Why it matters
This paper reveals how psychological concepts are represented and controlled within LLMs. It highlights a critical gap between manipulating internal representations and reliably steering external behaviors. This distinction is vital for developing more controllable and ethically aligned AI systems.
Original Abstract
Using psychological constructs such as the Big Five, large language models (LLMs) can imitate specific personality profiles and predict a user's personality. While LLMs can exhibit behaviors consistent with these constructs, it remains unclear where and how they are represented inside the model and how they relate to behavioral outputs. To address this gap, we focus on questionnaire-operationalized Big Five concepts, analyze the formation and localization of their internal representations, and use interventions to examine how these representations relate to behavioral outputs. In our experiment, we first use probing to examine where Big Five information emerges across model depth. We then identify neurons that respond selectively to each Big Five concept and test whether enhancing or suppressing their activations can bias latent representations and label generation in intended directions. We find that Big Five information becomes rapidly decodable in early layers and remains detectable through the final layers, while concept-selective neurons are most prevalent in mid layers and exhibit limited overlap across domains. Interventions on these neurons consistently shift probe readouts toward targeted concepts, with targeted success rates exceeding 0.8 for some concepts, indicating that the model's internal separation of Big Five personality traits can be causally steered. At the label-generation level, the same interventions often bias generated label distributions in the intended directions, but the effects are weaker, more concept-dependent, and often accompanied by cross-trait spillover, indicating that comparable control over generated labels is difficult even with interventions on a large fraction of concept-selective neurons. Overall, our findings reveal a gap between representational control and behavioral control in LLMs.
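The intervention idea in the abstract (enhancing or suppressing concept-selective neurons to shift probe readouts) can be sketched in miniature. Everything here is a hypothetical stand-in: a random unit vector plays the role of a concept direction, the "concept-selective neurons" are just its largest-magnitude coordinates, and the readout is a dot product rather than a trained probe.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 16

# Hypothetical concept direction; its largest-magnitude coordinates stand in
# for the paper's concept-selective neurons.
concept = rng.normal(size=d_model)
concept /= np.linalg.norm(concept)
top_neurons = np.argsort(-np.abs(concept))[:4]

def intervene(h, alpha):
    """Push the selected neurons in the concept's direction (enhance if
    alpha > 0, suppress if alpha < 0), leaving other units untouched."""
    h = h.copy()
    h[top_neurons] += alpha * np.sign(concept[top_neurons])
    return h

h = rng.normal(size=d_model)           # one toy hidden state
readout_before = float(h @ concept)     # proxy for a probe readout
readout_after = float(intervene(h, alpha=3.0) @ concept)
print(readout_before, readout_after)
```

Because each selected unit is nudged in the direction of its sign in `concept`, the readout necessarily moves toward the target concept, which is the representational effect the paper measures; its harder finding, that the same nudges only weakly and unevenly move generated labels, has no analogue in a toy like this.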