ArXiv TLDR

Convergence to collusion in algorithmic pricing

2604.15825

Kevin Michael Frick

econ.GN

TLDR

A modern deep reinforcement learning pricing model converges to collusive outcomes at empirically observed speeds, with cooperation sustained by reward-punishment schemes.

Key contributions

  • Introduces a deep reinforcement learning model for pricing in oligopolistic markets.
  • Demonstrates that algorithmic pricing converges to collusion at empirically observed speeds.
  • Reveals that cooperative behavior is driven by internal reward-punishment mechanisms.

Why it matters

This paper helps explain how AI-driven pricing algorithms can reach collusive market outcomes. By quantifying both the speed and the mechanism of convergence, it offers evidence directly relevant to regulators and to firms deploying such algorithms.

Original Abstract

Artificial intelligence algorithms are increasingly used by firms to set prices. Previous research shows that they can exhibit collusive behaviour, but how quickly they can do so has so far remained an open question. I show that a modern deep reinforcement learning model deployed to price goods in a repeated oligopolistic competition game with continuous prices converges to a collusive outcome in an amount of time that matches empirical observations, under reasonable assumptions on the length of a time step. This model shows cooperative behaviour supported by reward-punishment schemes that discourage deviations.
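The paper's model is a deep RL agent pricing over a continuous action space; the details are not reproduced here. As a much simpler illustration of the ingredients involved — repeated play, profit rewards, and independent learners whose strategies can come to reward cooperation and punish deviations — here is a hypothetical tabular Q-learning sketch of a discrete-price Bertrand duopoly. The price grid, demand curve, and learning parameters are all illustrative assumptions, not the paper's:

```python
import random

# Simplified illustration (NOT the paper's model): two independent tabular
# Q-learning agents repeatedly set prices in a discrete Bertrand-style
# duopoly. Each agent conditions on both firms' previous prices, so
# history-dependent (reward-punishment) strategies are representable.

PRICES = [1.0, 1.5, 2.0]  # hypothetical discrete price grid
COST = 1.0                # marginal cost (assumption)

def profits(p1, p2):
    """Lower-priced firm captures the market; ties split demand.
    Linear demand q(p) = max(0, 4 - 2p) (assumption)."""
    def q(p):
        return max(0.0, 4.0 - 2.0 * p)
    if p1 < p2:
        return ((p1 - COST) * q(p1), 0.0)
    if p2 < p1:
        return (0.0, (p2 - COST) * q(p2))
    share = q(p1) / 2.0
    return ((p1 - COST) * share, (p2 - COST) * share)

def train(episodes=20000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(PRICES)
    # State = (firm 1's last price index, firm 2's last price index).
    Q1 = {(i, j): [0.0] * n for i in range(n) for j in range(n)}
    Q2 = {(i, j): [0.0] * n for i in range(n) for j in range(n)}
    state = (0, 0)
    for _ in range(episodes):
        # Epsilon-greedy action selection for each firm.
        a1 = rng.randrange(n) if rng.random() < eps else max(range(n), key=lambda a: Q1[state][a])
        a2 = rng.randrange(n) if rng.random() < eps else max(range(n), key=lambda a: Q2[state][a])
        r1, r2 = profits(PRICES[a1], PRICES[a2])
        nxt = (a1, a2)
        # Standard Q-learning updates with each firm's own profit as reward.
        Q1[state][a1] += alpha * (r1 + gamma * max(Q1[nxt]) - Q1[state][a1])
        Q2[state][a2] += alpha * (r2 + gamma * max(Q2[nxt]) - Q2[state][a2])
        state = nxt
    return Q1, Q2, state
```

Because each agent's state includes the rival's last price, a learned policy can respond to undercutting with lower future prices — the reward-punishment structure the paper identifies as supporting cooperation, here only in skeletal form.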
