Behavioral Transfer in AI Agents: Evidence and Privacy Implications
Shilei Luo, Zhiqi Zhang, Hengchen Dai, Dennis Zhang
TLDR
AI agents systematically reflect their human owners' behaviors, acting as behavioral extensions of their owners and raising significant privacy concerns through implicit transfer of owner-related information.
Key contributions
- Analyzed 10,659 human-agent pairs, comparing agent posts on Moltbook with owner Twitter activity.
- Found systematic behavioral transfer (topics, values, affect, linguistic style) from owners to agents, even without explicit configuration.
- Stronger behavioral transfer correlates with increased disclosure of owner-related personal information.
- Suggests transfer emerges from accumulated interaction, propagating human heterogeneity into digital spaces.
Why it matters
AI agents implicitly mirror their owners' behaviors, acting as 'behavioral extensions.' This creates significant privacy risks, as agents can inadvertently disclose personal information. Understanding this transfer is crucial for designing secure platforms and governing agentic AI systems.
Original Abstract
AI agents powered by large language models are increasingly acting on behalf of humans in social and economic environments. Prior research has focused on their task performance and effects on human outcomes, but less is known about the relationship between agents and the specific individuals who deploy them. We ask whether agents systematically reflect the behavioral characteristics of their human owners, functioning as behavioral extensions rather than producing generic outputs. We study this question using 10,659 matched human-agent pairs from Moltbook, a social media platform where each autonomous agent is publicly linked to its owner's Twitter/X account. By comparing agents' posts on Moltbook with their owners' Twitter/X activity across features spanning topics, values, affect, and linguistic style, we find systematic transfer between agents and their specific owners. This transfer persists among agents without explicit configuration, and pairs that align on one behavioral dimension tend to align on others. These patterns are consistent with transfer emerging through accumulated interaction between owners (or owners' computer environments) and their agents in everyday use. We further show that agents with stronger behavioral transfer are more likely to disclose owner-related personal information in public discourse, suggesting that the same owner-specific context that drives behavioral transfer may also create privacy risk during ordinary use. Taken together, our results indicate that AI agents do not simply generate content, but reflect owner-related context in ways that can propagate human behavioral heterogeneity into digital environments, with implications for privacy, platform design, and the governance of agentic systems.
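The core measurement described above, comparing each agent's behavioral features against its own owner's versus against randomly re-paired owners, can be sketched as a matched-pair similarity test. The sketch below is illustrative only: the feature vectors are simulated stand-ins for the paper's topic/value/affect/style features, and the transfer coefficient is an assumption, not a reported result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical behavioral feature vectors (topics, values, affect, style)
# for n matched owner-agent pairs; in the actual study these would be
# extracted from Twitter/X and Moltbook posts.
n, d = 1000, 8
owners = rng.normal(size=(n, d))
# Simulated transfer: agents partially inherit owner features plus noise
# (the 0.6/0.4 mix is an arbitrary illustration, not an estimate).
agents = 0.6 * owners + 0.4 * rng.normal(size=(n, d))

def cosine_rows(a, b):
    """Row-wise cosine similarity between two (n, d) matrices."""
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

# Similarity of each agent to its own owner.
matched = cosine_rows(owners, agents).mean()

# Permutation baseline: similarity of owners to randomly shuffled agents,
# i.e., what alignment would look like with no owner-specific transfer.
perm = np.array([
    cosine_rows(owners, agents[rng.permutation(n)]).mean()
    for _ in range(200)
])

print(f"matched-pair similarity:  {matched:.3f}")
print(f"shuffled-pair baseline:   {perm.mean():.3f} ± {perm.std():.3f}")
```

If matched-pair similarity sits well above the shuffled baseline, agents align with their specific owners rather than producing generically human-like output, which is the pattern the paper reports across its feature dimensions.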