ArXiv TLDR

Adaptive Querying with AI Persona Priors

arXiv: 2605.00696

Kaizheng Wang, Yuhang Wu, Assaf Zeevi

stat.ML · cs.CL · cs.LG

TLDR

A new adaptive querying method uses AI personas from LLMs to efficiently learn user-specific traits with limited questions.

Key contributions

  • Introduces a latent variable model using AI personas for user state representation.
  • Leverages LLMs to generate response distributions for each AI persona.
  • Enables scalable Bayesian design with closed-form posterior updates for adaptive querying.
  • Achieves accurate predictions and interpretable elicitation on diverse datasets.
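The closed-form updates and finite-mixture predictions above can be sketched with a toy numerical example. This is a minimal illustration, not the paper's implementation: the per-persona response tables (here random Dirichlet draws) stand in for distributions an LLM would produce, and the greedy information-gain item selection is one standard Bayesian-design choice assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K personas, M candidate items, R response options.
# In the paper each persona's response distribution comes from an LLM;
# random tables are used here purely for illustration.
K, M, R = 5, 20, 4
persona_probs = rng.dirichlet(np.ones(R), size=(K, M))  # p_k(y | item)

def posterior_update(prior, item, response):
    """Closed-form Bayes update over persona membership."""
    post = prior * persona_probs[:, item, response]
    return post / post.sum()

def predictive(post, item):
    """Finite-mixture predictive distribution for a held-out item."""
    return post @ persona_probs[:, item, :]

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def select_item(post, asked):
    """Greedy sequential design: ask the item whose answer is most
    informative, i.e. maximizes I(persona; response)."""
    best, best_ig = None, -np.inf
    for m in range(M):
        if m in asked:
            continue
        # I = H(mixture predictive) - E_k[H(p_k(. | m))]
        cond = sum(post[k] * entropy(persona_probs[k, m]) for k in range(K))
        ig = entropy(predictive(post, m)) - cond
        if ig > best_ig:
            best, best_ig = m, ig
    return best

# Simulate a short adaptive session for a user drawn from persona 2.
true_k = 2
post = np.full(K, 1.0 / K)
asked = set()
for _ in range(6):
    item = select_item(post, asked)
    asked.add(item)
    resp = rng.choice(R, p=persona_probs[true_k, item])
    post = posterior_update(post, item, resp)
print(post.round(3))
```

Because the posterior stays a finite mixture over personas, each update is a single elementwise multiply-and-normalize, which is what makes the sequential design scalable.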

Why it matters

This paper addresses the limitations of classical adaptive querying in heterogeneous, high-dimensional, and cold-start settings. It introduces a scalable approach that uses LLM-generated AI personas to efficiently learn user-specific traits, which is crucial for precise user modeling under tight question budgets.

Original Abstract

We study adaptive querying for learning user-dependent quantities of interest, such as responses to held-out items and psychometric indicators, within tight question budgets. Classical Bayesian design and computerized adaptive testing typically rely on restrictive parametric assumptions or expensive posterior approximations, limiting their use in heterogeneous, high-dimensional, and cold-start settings. We introduce a persona-induced latent variable model that represents a user's state through membership in a finite dictionary of AI personas, each offering response distributions produced by a large language model. This yields expressive priors with closed-form posterior updates and efficient finite-mixture predictions, enabling scalable Bayesian design for sequential item selection. Experiments on synthetic data and WorldValuesBench demonstrate that persona-based posteriors deliver accurate probabilistic predictions and an interpretable adaptive elicitation pipeline.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.