Beyond ZOH: Advanced Discretization Strategies for Vision Mamba
Fady Ibrahim, Guangjun Liu, Guanghui Wang
TLDR
This paper systematically compares six discretization schemes for Vision Mamba, finding that the bilinear (Tustin) transform (BIL) offers the best accuracy-efficiency trade-off.
Key contributions
- Systematically compares six advanced discretization schemes (ZOH, FOH, BIL, POL, HOH, RK4) for Vision Mamba.
- Evaluates each method on image classification, semantic segmentation, and object detection benchmarks.
- Finds polynomial interpolation (POL) and higher-order hold (HOH) yield the largest accuracy gains, at the cost of increased computation.
- Recommends Bilinear (BIL) as the new default baseline for SSMs, offering the best precision-efficiency trade-off.
Why it matters
This research highlights the critical impact of discretization strategies on Vision Mamba's performance. It provides empirical evidence for adopting advanced schemes, with Bilinear (BIL) offering a practical balance of accuracy and efficiency. This establishes a new, improved baseline for future SSM development.
Original Abstract
Vision Mamba, as a state space model (SSM), employs a zero-order hold (ZOH) discretization, which assumes that input signals remain constant between sampling instants. This assumption degrades temporal fidelity in dynamic visual environments and constrains the attainable accuracy of modern SSM-based vision models. In this paper, we present a systematic and controlled comparison of six discretization schemes instantiated within the Vision Mamba framework: ZOH, first-order hold (FOH), bilinear/Tustin transform (BIL), polynomial interpolation (POL), higher-order hold (HOH), and the fourth-order Runge-Kutta method (RK4). We evaluate each method on standard visual benchmarks to quantify its influence in image classification, semantic segmentation, and object detection. Our results demonstrate that POL and HOH yield the largest gains in accuracy at the cost of higher training-time computation. In contrast, the BIL provides consistent improvements over ZOH with modest additional overhead, offering the most favorable trade-off between precision and efficiency. These findings elucidate the pivotal role of discretization in SSM-based vision architectures and furnish empirically grounded justification for adopting BIL as the default discretization baseline for state-of-the-art SSM models.
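To make the ZOH-vs-bilinear distinction concrete, here is a minimal sketch of the two discretizations for a continuous SSM with a diagonal state matrix (as Mamba uses). This is an illustrative reconstruction of the standard textbook formulas, not code from the paper; the function names are ours.

```python
import numpy as np

def zoh_discretize(A_diag, B, dt):
    """Zero-order hold: assumes the input is constant over each step dt.
    For diagonal A: A_bar = exp(dt*A), B_bar = (exp(dt*A) - 1) / A * B."""
    A_bar = np.exp(dt * A_diag)
    B_bar = ((A_bar - 1.0) / A_diag)[:, None] * B
    return A_bar, B_bar

def bilinear_discretize(A_diag, B, dt):
    """Bilinear (Tustin) transform: trapezoidal approximation of the integral.
    For diagonal A: A_bar = (1 + dt/2 * A) / (1 - dt/2 * A),
                    B_bar = dt / (1 - dt/2 * A) * B."""
    denom = 1.0 - 0.5 * dt * A_diag
    A_bar = (1.0 + 0.5 * dt * A_diag) / denom
    B_bar = (dt / denom)[:, None] * B
    return A_bar, B_bar

# Toy example: 2-dim diagonal state, 1-dim input, small step size.
A = np.array([-1.0, -2.0])        # diagonal entries of the state matrix
B = np.ones((2, 1))
dt = 0.01

Az, Bz = zoh_discretize(A, B, dt)
Ab, Bb = bilinear_discretize(A, B, dt)
# Both give a discrete recurrence h[k] = A_bar * h[k-1] + B_bar * x[k];
# for small dt the two A_bar values nearly coincide (both ~ exp(dt*A)).
```

ZOH is exact when the input really is piecewise constant, while the bilinear transform averages the input over each step, which is why it tracks smoothly varying signals more faithfully at similar cost.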