ArXiv TLDR

Non-Minimal Sampling and Consensus for Prohibitively Large Datasets

2604.22518

Seong Hun Lee, Patrick Vandewalle, Javier Civera

cs.CV

TLDR

NONSAC is a new framework for robust, scalable model estimation from large, noisy datasets using non-minimal sampling and a scoring rule.

Key contributions

  • Introduces NONSAC, a general framework for robust and scalable model estimation from large, noisy datasets.
  • Generates multiple model hypotheses by repeatedly sampling non-minimal data subsets with a robust estimator.
  • Selects the final model based on a predefined scoring rule that evaluates hypothesis quality.
  • Integrates with existing algorithms like RANSAC, improving scalability and robustness to outliers.
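The loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `fit_robust`, `score`, and the toy line-fitting helpers below are hypothetical placeholders standing in for a real robust estimator (e.g. IRLS or a local RANSAC) and one of the paper's scoring rules.

```python
import numpy as np

def nonsac(data, fit_robust, score, subset_size, n_iters, rng=None):
    """NONSAC-style loop (sketch): repeatedly fit a robust estimator on
    random non-minimal subsets and keep the best-scoring hypothesis."""
    rng = np.random.default_rng(rng)
    best_model, best_score = None, -np.inf
    for _ in range(n_iters):
        # Sample a non-minimal subset (larger than the minimal problem size).
        idx = rng.choice(len(data), size=subset_size, replace=False)
        model = fit_robust(data[idx])   # hypothesis from the subset
        s = score(model, data)          # evaluate on the full dataset
        if s > best_score:
            best_model, best_score = model, s
    return best_model

# Toy instantiation: fit a 1-D line y = a*x + b, score by inlier count.
def fit_line(subset):
    # Plain least squares as a stand-in for a genuinely robust estimator.
    x, y = subset[:, 0], subset[:, 1]
    A = np.stack([x, np.ones_like(x)], axis=1)
    return np.linalg.lstsq(A, y, rcond=None)[0]

def inlier_count(model, data, tol=0.1):
    a, b = model
    return int(np.sum(np.abs(data[:, 1] - (a * data[:, 0] + b)) < tol))
```

Because each subset is non-minimal, a single hypothesis already averages out noise; the outer scoring loop then supplies robustness to outliers, which is what lets the scheme wrap existing estimators.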

Why it matters

This paper addresses the challenge of robustly estimating models from prohibitively large and contaminated datasets. NONSAC offers a flexible, estimator-agnostic approach that improves both scalability and resilience to outliers, making it valuable for tasks such as relative camera pose estimation, Perspective-n-Point, and point cloud registration.

Original Abstract

We introduce NONSAC (Non-Minimal Sampling and Consensus), a general framework for robust and scalable model estimation from arbitrarily large datasets contaminated with noise and outliers. NONSAC repeatedly samples non-minimal subsets of data and generates model hypotheses using a robust estimator, producing multiple candidate models. The final model is selected based on a predefined scoring rule that evaluates hypothesis quality. Our framework is estimator-agnostic and can be integrated with existing geometric fitting algorithms such as RANSAC to improve both scalability and robustness to outliers. We propose and evaluate various scoring rules for NONSAC on relative camera pose estimation, Perspective-n-Point, and point cloud registration. Furthermore, we showcase the applicability of NONSAC to correspondence-free point cloud registration by hypothesizing all-to-all correspondences.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.