ArXiv TLDR

Variational Neural Belief Parameterizations for Robust Dexterous Grasping under Multimodal Uncertainty

arXiv:2604.25897

Clinton Enwerem, Shreya Kalyanaraman, John S. Baras, Calin Belta

cs.RO · cs.LG · eess.SY

TLDR

This paper introduces a variational neural belief parameterization for robust dexterous grasping, improving success under uncertainty and reducing planning time.

Key contributions

  • Represents beliefs over latent contact parameters and object pose with a differentiable Gaussian mixture for robust grasping.
  • Optimizes for tail robustness using pathwise gradients through a differentiable CVaR surrogate.
  • Achieves robust grasp success and reduces planning time by roughly an order of magnitude in simulation.
  • Demonstrates faster, higher-quality grasping on a real robot and more accurate risk calibration.
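The differentiable CVaR surrogate in the second bullet can be illustrated with a minimal sketch: the standard Rockafellar-Uryasev formulation of CVaR, with the hinge max(x, 0) replaced by a softplus so the estimate is smooth in both the threshold and the sampled losses. The function names and the pure-Python, 1-D setting here are illustrative assumptions, not the paper's implementation.

```python
import math

def softplus(x, beta=50.0):
    # Smooth approximation of max(x, 0); larger beta makes it tighter.
    z = beta * x
    if z > 30.0:          # avoid overflow; log1p(exp(z))/beta ~ x here
        return x
    return math.log1p(math.exp(z)) / beta

def cvar_surrogate(losses, t, alpha=0.9, beta=50.0):
    # Rockafellar-Uryasev form: CVaR_a(L) = min_t t + E[(L - t)_+]/(1 - a),
    # with softplus in place of (.)_+ so the surrogate is differentiable
    # in both t and the sampled losses (enabling pathwise gradients).
    n = len(losses)
    tail = sum(softplus(l - t, beta) for l in losses)
    return t + tail / ((1.0 - alpha) * n)

# Minimizing the surrogate over t recovers an estimate of CVaR; for ten
# losses 0..9 at alpha = 0.9, the worst 10% tail mean is 9.0.
losses = [float(i) for i in range(10)]
best = min(cvar_surrogate(losses, 0.01 * k) for k in range(1201))
```

In the paper's setting the losses would themselves be smooth functions of the belief parameters (via the reparameterized samples), so gradients flow through both arguments of the surrogate.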

Why it matters

Grasping methods that optimize expected quality struggle under uncertainty and adverse contact conditions. This work offers a differentiable, risk-sensitive alternative, making robotic manipulation more reliable and efficient in complex, real-world scenarios. Its order-of-magnitude planning speedup and more accurate risk calibration are crucial for practical deployment.

Original Abstract

Contact variability, sensing uncertainty, and external disturbances make grasp execution stochastic. Expected-quality objectives ignore tail outcomes and often select grasps that fail under adverse contact realizations. Risk-sensitive POMDPs address this failure mode, but many use particle-filter beliefs that scale poorly, obstruct gradient-based optimization, and estimate Conditional Value-at-Risk (CVaR) with high-variance approximations. We instead formulate grasp acquisition as variational inference over latent contact parameters and object pose, representing the belief with a differentiable Gaussian mixture. We use Gumbel-Softmax component selection and location-scale reparameterization to express samples as smooth functions of the belief parameters, enabling pathwise gradients through a differentiable CVaR surrogate for direct optimization of tail robustness. In simulation, our variational neural belief improves robust grasp success under contact-parameter uncertainty and exogenous force perturbations while reducing planning time by roughly an order of magnitude relative to particle-filter model-predictive control. On a serial-chain robot arm with a multifingered hand, we validate grasp-and-lift success under object-pose uncertainty against a Gaussian baseline. Both methods succeed on the tested perturbations, but our controller terminates in fewer steps and less wall-clock time while achieving a higher tactile grasp-quality proxy. Our learned belief also calibrates risk more accurately, keeping mean absolute calibration error below 0.14 across tested simulation regimes, compared with 0.58 for a Cross-Entropy Method planner.
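The reparameterization the abstract describes (Gumbel-Softmax component selection plus location-scale sampling) can be sketched in a few lines. This is a hypothetical 1-D, pure-Python illustration with assumed function names, not the authors' code; the point is that the sample is a smooth function of the mixture parameters, so pathwise gradients exist.

```python
import math
import random

def gumbel_softmax(logits, tau=0.5):
    # Relaxed one-hot sample over mixture components (Jang et al., 2017):
    # add Gumbel noise to the logits, then take a temperature-scaled softmax.
    g = [-math.log(-math.log(random.random())) for _ in logits]
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)                       # stabilize the softmax
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def sample_gmm(logits, means, log_stds, tau=0.5):
    # Pathwise (reparameterized) sample from a 1-D Gaussian mixture:
    # component choice via Gumbel-Softmax weights, per-component value
    # via the location-scale trick x = mu + sigma * eps.
    w = gumbel_softmax(logits, tau)
    eps = random.gauss(0.0, 1.0)
    # The soft convex combination keeps the sample differentiable in
    # (logits, means, log_stds), unlike a hard argmax component pick.
    return sum(wi * (mu + math.exp(ls) * eps)
               for wi, mu, ls in zip(w, means, log_stds))
```

At low temperature tau the weights approach one-hot, recovering ordinary mixture sampling; during optimization a moderate tau trades bias for usable gradients through the CVaR surrogate.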
