Beyond Silicon: Materials, Mechanisms, and Methods for Physical Neural Computing
Stefan Fischer, Nihat Ay, Olaf Landsiedel, Esfandiar Mohammadi, Sebastian Otte, et al.
TLDR
This survey unifies diverse physical neural computing methods and proposes a cross-domain benchmarking scheme to address the limitations of silicon-based AI.
Key contributions
- Unifies diverse physical neural computing by mapping neural primitives to substrate-specific mechanisms.
- Analyzes architectural paradigms and identifies key engineering constraints like scalability and programmability.
- Introduces a first-order benchmarking scheme for standardized, cross-domain comparison of physical systems.
Why it matters
As silicon AI faces growing energy and data-movement constraints, physical neural computing offers a vital complementary path for pervasive intelligence. This paper unifies the fragmented field, providing a framework and benchmarking tools to accelerate the development of efficient, on-device AI.
Original Abstract
Physical implementations of neural computation now extend far beyond silicon hardware, encompassing substrates such as memristive devices, photonic circuits, mechanical metamaterials, microfluidic networks, chemical reaction systems, and living neural tissue. By exploiting intrinsic physical processes such as charge transport, wave interference, elastic deformation, mass transport, and biochemical regulation, these substrates can realize neural inference and adaptation directly in matter. As silicon GPU-centered AI faces growing energy and data-movement constraints, physical neural computation is becoming increasingly relevant as a complementary path beyond conventional digital accelerators. This trend is driven in particular by pervasive intelligence, i.e., the deployment of on-device and edge AI across large numbers of resource-constrained systems. In such settings, co-locating computation with sensing and memory can reduce data shuttling and improve efficiency. Meanwhile, physical neural approaches have emerged across disparate disciplines, yet progress remains fragmented, with limited shared terminology and few principled ways to compare platforms. This survey unifies the field by mapping neural primitives to substrate-specific mechanisms, analyzing architectural and training paradigms, and identifying key engineering constraints including scalability, precision, programmability, and I/O interfacing overhead. To enable cross-domain comparison, we introduce a first-order benchmarking scheme based on standardized static and dynamic tasks and physically interpretable performance dimensions. We show that no single substrate dominates across the considered dimensions; instead, physical neural systems occupy complementary operating regimes, enabling applications ranging from ultrafast signal processing and in-memory inference to embodied control and in-sample biochemical decision making.