Point-MF: One-step Point Cloud Generation from a Single Image via Mean Flows
TLDR
Point-MF generates high-quality 3D point clouds from a single image in one step using a Mean-Flow framework, achieving millisecond-level latency.
Key contributions
- Proposes Point-MF, a Mean-Flow-based framework for one-step point cloud reconstruction from a single image.
- Operates directly in point-cloud space, learning a mean velocity field for 1-NFE reconstruction without VAEs.
- Utilizes a Diffusion Transformer with DINOv3 features and time conditioning for effective large-step generation.
- Introduces Denoised Space Anchor, an auxiliary loss to stabilize generation and reduce outliers/artifacts.
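The one-step (1-NFE) reconstruction above can be sketched in a few lines. In a Mean-Flow model, the network predicts the mean velocity $u(z, r, t)$ averaged over the interval $[r, t]$, so a single evaluation over the full interval $[0, 1]$ maps Gaussian noise directly to a point cloud: $x = z - (t - r)\,u(z, 0, 1)$. The `model` signature and `image_feats` conditioning below are hypothetical stand-ins, not the paper's API:

```python
import numpy as np

def one_step_sample(model, image_feats, num_points=2048, rng=None):
    """One-NFE mean-flow sampling sketch (illustrative; `model` and
    `image_feats` are hypothetical stand-ins, not Point-MF's actual API).

    model(z, r, t, feats) is assumed to return the predicted mean
    velocity u(z, r, t) with the same shape as z.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((num_points, 3))  # Gaussian noise in point-cloud space
    r, t = 0.0, 1.0                           # full interval: one large step
    u = model(z, r, t, image_feats)           # mean velocity field (the single NFE)
    return z - (t - r) * u                    # noise -> point cloud in one step
```

Multi-step diffusion samplers would instead loop this update over many small sub-intervals; collapsing the loop to one full-interval jump is what yields the millisecond-level latency.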
Why it matters
Diffusion-based 3D reconstructors are accurate but require many denoising iterations at inference. Point-MF matches their quality with a single network evaluation, cutting latency to milliseconds and making real-time 3D reconstruction from a single image practical.
Original Abstract
Single-image point cloud reconstruction must infer complete 3D geometry, including occluded parts, from a single RGB image. While diffusion-based reconstructors achieve high accuracy, they typically require many denoising iterations, resulting in slow and expensive inference. We propose Point-MF, a Mean-Flow-based framework for low-NFE single-image point cloud reconstruction that couples a Mean-Flow-compatible architecture with an auxiliary loss. Specifically, Point-MF operates directly in point-cloud space to learn the mean velocity field and enables one-step reconstruction with a single network function evaluation (1-NFE), without relying on VAE-based latent representations. To make Mean Flow effective under large interval jumps, Point-MF employs a Diffusion Transformer tailored to the Mean-Flow setting, conditioned on frozen DINOv3 image features via a lightweight token adapter and equipped with explicit interval/time conditioning. Moreover, we introduce Denoised Space Anchor, a set-distance auxiliary loss on the denoised-space estimate $x_\theta$ induced by the predicted velocity field, to stabilize large-step generation and reduce outliers and density artifacts. On ShapeNet-R2N2 and Pix3D, Point-MF strikes a strong balance between reconstruction quality and inference speed compared to multi-step diffusion baselines and competitive feedforward models, while generating high-quality point clouds with millisecond-level latency.
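The Denoised Space Anchor compares the denoised-space estimate $x_\theta$ against the target point cloud with a set distance, i.e. a loss invariant to point ordering. The abstract does not name the specific set distance, so as one common candidate, a symmetric Chamfer distance could look like this (illustrative only, not necessarily the paper's choice):

```python
import numpy as np

def chamfer_distance(x, y):
    """Symmetric Chamfer distance between point sets x (N, 3) and y (M, 3).

    Shown only as a typical set-distance loss for point clouds; the paper
    does not specify which set distance the Denoised Space Anchor uses.
    """
    # (N, M) matrix of pairwise squared Euclidean distances via broadcasting
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    # nearest-neighbor terms in both directions make the loss symmetric
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because the loss depends only on nearest-neighbor structure between the two sets, it penalizes outliers and uneven point density in $x_\theta$ without requiring any point-to-point correspondence.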