LTBs-KAN: Linear-Time B-splines Kolmogorov-Arnold Networks
Eduardo Said Merin-Martinez, Andres Mendez-Vazquez, Eduardo Rodriguez-Tello
TLDR
LTBs-KAN introduces a linear-time B-spline Kolmogorov-Arnold Network, significantly speeding up KANs while reducing parameters.
Key contributions
- Proposes LTBs-KAN, a novel Kolmogorov-Arnold Network with linear computational complexity.
- Achieves significant speedup by avoiding recursive B-spline computations and other computationally intensive spline evaluation routines.
- Reduces model parameters through product-of-sums matrix factorization without sacrificing performance.
- Demonstrates improved time complexity and parameter efficiency on image classification tasks.
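For context on the bottleneck these contributions target, standard KAN layers evaluate B-spline bases via the Cox-de Boor recursion, where each degree-k basis call spawns two degree-(k-1) calls. A minimal generic sketch of that recursion (illustrative only, not the authors' code):

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree k.

    Each degree-k call spawns two degree-(k-1) calls, so a naive
    evaluation of one basis function costs O(2^k) -- the per-activation
    overhead that recursive KAN implementations pay and that a
    linear-time formulation avoids.
    """
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + k] - knots[i]
    right_den = knots[i + k + 1] - knots[i + 1]
    left = ((t - knots[i]) / left_den) * bspline_basis(i, k - 1, t, knots) if left_den > 0 else 0.0
    right = ((knots[i + k + 1] - t) / right_den) * bspline_basis(i + 1, k - 1, t, knots) if right_den > 0 else 0.0
    return left + right

knots = np.arange(8.0)  # uniform knot vector [0, 1, ..., 7]
vals = [bspline_basis(i, 3, 3.5, knots) for i in range(4)]
print(sum(vals))  # cubic bases form a partition of unity here -> 1.0
```

On a uniform grid, only a fixed number of bases are nonzero at any point and they sum to one, which is what makes closed-form, non-recursive evaluation possible in the first place.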
Why it matters
Kolmogorov-Arnold Networks (KANs) offer improved explainability over MLPs but are computationally slow. This paper's LTBs-KAN makes KANs much faster and more efficient, potentially enabling their wider adoption in various applications. By addressing the computational bottleneck, it paves the way for more explainable and powerful neural networks.
Original Abstract
Kolmogorov-Arnold Networks (KANs) are a recent neural network architecture offering an alternative to Multilayer Perceptrons (MLPs) with improved explainability and expressibility. However, KANs are significantly slower than MLPs due to the recursive nature of B-spline function computations, limiting their application. This work addresses these issues by proposing a novel base-spline Linear-Time B-splines Kolmogorov-Arnold Network (LTBs-KAN) with linear complexity. Unlike previous methods that rely on the de Boor-Mansfield-Cox spline recurrence or other computationally intensive mathematical functions, our approach significantly reduces the computational burden. Additionally, we further reduce the model's parameter count through product-of-sums matrix factorization in the forward pass without sacrificing performance. Experiments on MNIST, Fashion-MNIST, and CIFAR-10 demonstrate that LTBs-KAN achieves good time complexity and parameter reduction, when used as a building architectural block, compared to other KAN implementations.
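The abstract does not spell out the product-of-sums factorization, but the parameter-saving principle behind any such factorization can be illustrated with a generic low-rank stand-in (all sizes below are hypothetical, and the paper's exact scheme may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical KAN-layer sizes; illustrative, not taken from the paper.
d_in, d_out, grid = 256, 256, 8  # inputs, outputs, spline grid points
rank = 16                        # factorization rank

# Dense parameterization: one spline coefficient per
# (input, output, grid point) triple.
dense = rng.normal(size=(d_in, d_out, grid))

# Factored stand-in: two thin matrices per grid point, multiplied in
# the forward pass. This is a generic low-rank product used only to
# show the counting argument.
A = rng.normal(size=(grid, d_in, rank))
B = rng.normal(size=(grid, rank, d_out))
approx = A @ B  # batched matmul, shape (grid, d_in, d_out)

# 524288 dense parameters vs 65536 factored: an 8x reduction here.
print(dense.size, A.size + B.size)
```

The factored form trades `d_in * d_out` coefficients per grid point for `(d_in + d_out) * rank`, which is where the parameter reduction comes from whenever `rank` is small relative to the layer width.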