ArXiv TLDR

Attractor FCM

arXiv: 2604.27947

Alexis Kafantaris

cs.NE · cs.AI · cs.LG · cs.LO

TLDR

Attractor FCM is a novel gradient-descent-based, physics-constrained Fuzzy Cognitive Map that uses residual memory, backpropagation through time (BPTT), and a fixed-point anchor for efficient learning.

Key contributions

  • Introduces Attractor FCM, a gradient-descent-based, physics-constrained, Jacobian-based Fuzzy Cognitive Map (FCM).
  • Employs residual memory, backpropagation through time, and a recursively implemented fixed-point anchor for weight updates.
  • Utilizes a novel learning algorithm combining Newton's method with adaptive gradient descent to avoid premature convergence to local minima.
  • Filters updates through a causal mask that respects the physics and initial expert knowledge, reducing error efficiently.
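The fixed-point anchor and Jacobian machinery listed above can be illustrated with a minimal sketch. This is not the paper's implementation: the update rule x ← sigmoid(W·x), the 3-concept weight matrix, and the tolerances are all assumptions made for the example; it only shows how Newton's method can locate an FCM's fixed-point attractor.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fcm_fixed_point(W, x0, tol=1e-9, max_iter=50):
    """Find x* with x* = sigmoid(W @ x*) via Newton's method.

    Solves g(x) = sigmoid(W @ x) - x = 0; the Jacobian of g is
    diag(sigmoid'(W @ x)) @ W - I.
    """
    x = x0.copy()
    n = len(x)
    for _ in range(max_iter):
        s = sigmoid(W @ x)
        g = s - x
        if np.linalg.norm(g) < tol:
            break
        # row-scale W by the sigmoid derivative s * (1 - s), subtract I
        J = (s * (1.0 - s))[:, None] * W - np.eye(n)
        x = x - np.linalg.solve(J, g)  # Newton step
    return x

# hypothetical 3-concept map with assumed causal weights
W = np.array([[0.0,  0.5, -0.3],
              [0.2,  0.0,  0.4],
              [0.0, -0.6,  0.0]])
x_star = fcm_fixed_point(W, np.full(3, 0.5))
```

Because the sigmoid map is a contraction for weights of this magnitude, Newton's method converges in a handful of iterations; the returned `x_star` satisfies the self-consistency condition `x* = sigmoid(W @ x*)`.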

Why it matters

This paper advances Fuzzy Cognitive Maps by integrating deep-learning techniques such as BPTT and adaptive gradient descent. Its learning algorithm prevents premature convergence to local minima and incorporates physics constraints, yielding more robust and accurate models.

Original Abstract

In this paper an attractor FCM is created, tested, and analyzed. This FCM is neither Hebbian-based, agentic, nor hybrid; it is rather a gradient-descent-based, physics-constrained, Jacobian version of an FCM. Moreover, this model has several quirks: it uses residual memory, backpropagation through time, and a recursively implemented fixed-point anchor to update its weights. The residuals update the recursive part without losing the system's memory. The model's anchor enables it to converge to a fixed point, which backpropagation through time unrolls to ensure that the error is minimized along an accurate gradient. Furthermore, a new learning algorithm is utilized: Newton's method finds the system's fixed-point attractor, and gradient descent then adaptively reshapes the landscape; an adaptive term directly manipulates the weights through the attractor dynamics. As the adaptive term changes, the descent through the landscape is constantly adjusted according to sigmoid saturation, which prevents premature convergence to a local minimum. Lastly, the updates are filtered by a causal mask that informs the network about the physics, respecting the initial expert-based opinions, so that the model reduces the error to the target efficiently.
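The abstract's combination of unrolling the map, backpropagating through time, and filtering updates with a causal mask can be sketched as below. Everything here is an illustrative assumption, not the paper's method: the no-self-loop mask, unroll depth, learning rate, and target state are invented for the example, and the Newton fixed-point step and the adaptive term are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, T, lr = 3, 10, 0.5                # concepts, unroll depth, step size (assumed)
mask = np.ones((n, n)) - np.eye(n)   # hypothetical expert mask: no self-loops
W = rng.uniform(-0.5, 0.5, (n, n)) * mask
target = np.array([0.7, 0.4, 0.6])   # illustrative target activation
x0 = np.full(n, 0.5)

for epoch in range(2000):
    # forward: unroll the map x_{t+1} = sigmoid(W @ x_t) toward its attractor
    xs = [x0]
    for _ in range(T):
        xs.append(sigmoid(W @ xs[-1]))
    # backward: backpropagation through time over the unrolled steps
    grad_W = np.zeros_like(W)
    grad_x = 2.0 * (xs[-1] - target)       # d(loss)/d(x_T) for squared error
    for t in range(T - 1, -1, -1):
        s = xs[t + 1]
        d = grad_x * s * (1.0 - s)         # sigmoid saturation scales each step
        grad_W += np.outer(d, xs[t])
        grad_x = W.T @ d
    W -= lr * mask * grad_W                # causal mask filters forbidden updates

loss = float(np.sum((xs[-1] - target) ** 2))
```

Masking both the initial weights and every update keeps causally forbidden edges at exactly zero throughout training, which is one simple way a network can "respect" expert-imposed structure while gradient descent fits the remaining edges.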
