ArXiv TLDR

Benefits of Low-Cost Bio-Inspiration in the Age of Overparametrization

2604.20365

Kevin Godin-Dubois, Anil Yaman, Anna V. Kononova

cs.RO cs.AI

TLDR

Simpler bio-inspired robot controllers (shallow MLPs, densely connected CPGs) often outperform overparameterized deep alternatives, and the extra parameters required by reinforcement learning do not pay off in performance, favoring evolutionary strategies.

Key contributions

  • Compares CPGs and MLPs for robot control, varying parameter spaces and training protocols.
  • Finds shallow MLPs and dense CPGs outperform deeper MLPs and Actor-Critic architectures.
  • Introduces a "Parameter Impact" metric to evaluate parameter efficiency.
  • Demonstrates evolutionary strategies are more parameter-efficient than reinforcement learning.
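To get a feel for the parameter-space gap the study varies, a fully connected MLP's parameter count can be computed directly from its layer sizes. The sketch below uses a hypothetical small I/O robot (8 sensor inputs, 8 actuator outputs); the specific layer widths are illustrative assumptions, not the architectures evaluated in the paper.

```python
def mlp_param_count(layer_sizes):
    """Weights plus biases for a fully connected MLP.

    Each layer of `a` inputs and `b` outputs contributes
    (a + 1) * b parameters (the +1 is the bias term).
    """
    return sum((a + 1) * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical small I/O space: 8 proprioceptive inputs, 8 joint outputs.
shallow = mlp_param_count([8, 16, 8])        # one hidden layer
deep = mlp_param_count([8, 64, 64, 64, 8])   # three wider hidden layers

print(shallow)  # 280
print(deep)     # 9416
```

Even with this toy configuration, adding depth inflates the search space by more than an order of magnitude while the input/output interface stays fixed, which is the regime where the paper finds extra parameters can hinder rather than help.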

Why it matters

This paper challenges the assumption that more parameters always lead to better performance in robot control. It shows empirically that simpler, bio-inspired models can be more effective and parameter-efficient, especially when input and output spaces are small, suggesting that model complexity deserves re-evaluation in such robotics applications.
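For readers unfamiliar with CPGs: they are typically modeled as coupled oscillators whose phases drive rhythmic joint commands, so a "dense" CPG's parameters are its frequencies, amplitudes, and pairwise coupling weights. The following is a minimal Kuramoto-style sketch of that idea under those generic assumptions; it is not the controller formulation used in the paper.

```python
import math

def cpg_step(phases, freqs, coupling, dt=0.01):
    """One Euler step of Kuramoto-style coupled phase oscillators.

    phases:   current phase of each oscillator (radians)
    freqs:    intrinsic frequency of each oscillator (Hz)
    coupling: dense matrix of pairwise coupling weights
    """
    n = len(phases)
    new_phases = []
    for i in range(n):
        dphi = 2.0 * math.pi * freqs[i]
        for j in range(n):
            # Coupling pulls oscillator i toward oscillator j's phase.
            dphi += coupling[i][j] * math.sin(phases[j] - phases[i])
        new_phases.append((phases[i] + dphi * dt) % (2.0 * math.pi))
    return new_phases

def joint_targets(phases, amplitudes):
    """Map oscillator phases to bounded rhythmic joint-angle commands."""
    return [a * math.sin(p) for a, p in zip(amplitudes, phases)]
```

With n oscillators, a densely connected CPG like this has on the order of n^2 coupling parameters plus 2n frequencies and amplitudes, which stays small for the limited proprioceptive setups the paper studies.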

Original Abstract

While Central Pattern Generators (CPGs) and Multi-Layer Perceptrons (MLP) are widely used paradigms in robot control, few systematic studies have been performed on the relative merits of large parameter spaces. In contexts where input and output spaces are small and performance is bounded, having more parameters to optimize may actively hinder the learning process instead of empowering it. To empirically measure this, we submit a given robot morphology, with limited proprioceptive capabilities, to controller optimization under two bio-inspired paradigms (CPGs and MLPs) with evolutionary- and reinforcement- trainer protocols. By varying parameter spaces across multiple reward functions, we observe that shallow MLPs and densely connected CPGs result in better performance when compared to deeper MLPs or Actor-Critic architectures. To account for the relationship between said performance and the number of parameters, we introduce a Parameter Impact metric which demonstrates that the additional parameters required by the reinforcement technique do not translate into better performance, thus favouring evolutionary strategies.
