Relaxing in Warped Spaces: Generalized Hierarchical and Modular Dynamical Neural Network
Kazuyoshi Tsutsumi, Ernst Niebur
TLDR
A novel hierarchical and modular dynamical neural network learns complex mappings and associates information by relaxing in warped spaces.
Key contributions
- Proposes a hierarchical and modular dynamical neural network model.
- Derives the architecture by minimizing an energy function built on two neuron types with very different time constants.
- Learns complex 2D mappings using periodic signals in its learning mode.
- Associates information by relaxing state variables in designed "warped spaces".
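The relaxation idea in the contributions above can be illustrated with a toy sketch. This is not the paper's actual energy function or network; it only shows, under a hypothetical quadratic energy, how two state variables with very different time constants both settle toward an energy minimum via gradient descent:

```python
# Toy sketch (assumed, not the paper's equations): gradient-flow relaxation
# of a fast variable u and a slow variable v on the quadratic energy
# E(u, v) = 0.5*(u - v)**2 + 0.5*v**2.

def relax(steps=20000, dt=0.01, tau_fast=0.1, tau_slow=10.0):
    u, v = 2.0, 2.0  # arbitrary initial state
    for _ in range(steps):
        dE_du = u - v          # partial derivative of E w.r.t. u
        dE_dv = -(u - v) + v   # partial derivative of E w.r.t. v
        u -= dt / tau_fast * dE_du  # fast variable tracks v quickly
        v -= dt / tau_slow * dE_dv  # slow variable drifts to the minimum
    return u, v

u, v = relax()
print(u, v)  # both approach 0, the minimum of E
```

The point is only the separation of time scales: the fast variable continually equilibrates against the slowly changing one, which is the kind of dynamics the two neuron types with different time constants make possible.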
Why it matters
This paper introduces a unique dynamical neural network that integrates hierarchical and modular design with energy-based learning. Its ability to learn complex mappings and associate information through "warped spaces" provides a novel framework for understanding neural dynamics and information processing.
Original Abstract
We propose a dynamical neural network model with a hierarchical and modular structure. The network architecture is derived by minimizing an energy function designed around two kinds of neurons with very different time constants. The network has multiple subspaces spanned by the neural parameters appearing in the energy function, and adjacent subspaces are connected by a layered internetwork. Each internetwork consists of a forward subnet paired with a backward one, and the signals flowing through these subnets determine the total dynamics of the network. The model operates in either a learning mode or an association mode. In the learning mode, when periodic signals equivalent to repetitive neuronal bursting are suitably applied to the input ports of all subspaces, mapping relationships corresponding to those input signals eventually form in the internetworks between subspaces. Various two-dimensional mappings between subspaces can be shaped by an appropriate set of periodic input signals with different frequencies, by the same mechanism that generates a Lissajous curve. In the association mode, the model provides an overall framework in which the state variables inside the network individually relax in warped spaces, each of which has been designed to be favorable for one or more state variables. The association mode is further divided into two submodes: unconstrained and constrained. In the constrained mode, for instance, when a sufficiently slow periodic trajectory is given as input, a warped output trajectory appears in each subspace, as if imaginary layered networks with mappings inverse to those of the existing forward subnets were arranged hierarchically from outside to inside. These results suggest that a certainty/uncertainty relation exists between an input trajectory and an output trajectory.
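The Lissajous mechanism mentioned in the abstract can be sketched in a few lines. The frequencies and phase below are hypothetical illustration values, not parameters from the paper; the sketch only shows how two periodic signals with different frequencies trace a two-dimensional curve, so that a pair of such inputs sweeps out many (x, y) pairs from which a 2D mapping could be learned:

```python
import numpy as np

# Illustrative sketch (assumed parameters): two sinusoidal inputs with a
# 3:4 frequency ratio and a 90-degree phase offset trace a classic
# Lissajous figure inside the unit square.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
fx, fy = 3, 4
x = np.sin(fx * t)
y = np.sin(fy * t + np.pi / 2.0)

# The trajectory stays within [-1, 1] in both dimensions and repeatedly
# revisits the region, presenting many distinct (x, y) input pairs.
print(x.min(), x.max(), y.min(), y.max())
```

Changing the frequency ratio changes which 2D curve is traced, which is the sense in which "an appropriate set of periodic input signals with different frequencies" can shape different mapping relationships.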