ArXiv TLDR

Bing Wang

6 papers

Natural Language Processing

Prefix Teach, Suffix Fade: Local Teachability Collapse in Strong-to-Weak On-Policy Distillation

A new on-policy distillation method, "Prefix Teach, Suffix Fade," improves strong-to-weak model training by focusing supervision on locally teachable trajectory segments (see the loss-masking sketch below).

2605.13643
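
As a rough illustration of the mechanics only (the digest does not say how the paper identifies "locally teachable" segments), here is a minimal PyTorch sketch of masking a token-level distillation loss so that only a chosen prefix of each trajectory is supervised. The function name, fixed prefix length, and toy shapes are all hypothetical.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch, not the paper's method: supervise only masked positions
# of a token-level KL distillation loss between teacher and student.
def masked_kd_loss(student_logits, teacher_logits, supervise_mask):
    """Token-level KL(teacher || student), averaged over supervised positions."""
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),   # student log-probs
        F.log_softmax(teacher_logits, dim=-1),   # teacher log-probs
        log_target=True,
        reduction="none",
    ).sum(-1)                                    # -> (batch, seq_len)
    return (kl * supervise_mask).sum() / supervise_mask.sum().clamp(min=1.0)

B, T, V = 2, 10, 32                              # toy batch / sequence / vocab sizes
student = torch.randn(B, T, V)
teacher = torch.randn(B, T, V)
mask = (torch.arange(T) < 6).float().expand(B, T)  # supervise the prefix only
loss = masked_kd_loss(student, teacher, mask)
print(loss)
```
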
Natural Language Processing

AGoQ: Activation and Gradient Quantization for Memory-Efficient Distributed Training of LLMs

AGoQ introduces 4-bit activation and 8-bit gradient quantization to cut memory use and speed up distributed LLM training (see the quantization sketch below).

2605.00539
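
The summary above names the bit-widths but not the algorithm. As a hedged sketch of what low-bit quantization of activations and gradients generically looks like (not AGoQ's actual scheme), here is a minimal symmetric fake-quantizer in PyTorch; the function name and per-tensor scaling rule are assumptions for illustration.

```python
import torch

# Illustrative sketch, not AGoQ's published algorithm: symmetric fake
# quantization, i.e. round a tensor to a num_bits grid, then dequantize.
def fake_quantize(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    qmax = 2 ** (num_bits - 1) - 1                # 7 for 4-bit, 127 for 8-bit
    scale = x.abs().max().clamp(min=1e-8) / qmax  # per-tensor scale (assumed)
    q = (x / scale).round().clamp(-qmax, qmax)    # integer grid in [-qmax, qmax]
    return q * scale                              # dequantized approximation

acts = torch.randn(4, 16)
grads = torch.randn(4, 16)
acts_q = fake_quantize(acts, num_bits=4)          # 4-bit activations
grads_q = fake_quantize(grads, num_bits=8)        # 8-bit gradients
print((acts - acts_q).abs().max(), (grads - grads_q).abs().max())
```
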
Computer Vision

OneVL: One-Step Latent Reasoning and Planning with Vision-Language Explanation

OneVL introduces a unified vision-language-action (VLA) and world-model framework, achieving state-of-the-art latent Chain-of-Thought reasoning at real-time speed.

2604.18486
Computer Vision

PhysInOne: Visual Physics Learning and Reasoning in One Suite

PhysInOne is a new large-scale dataset with 2 million videos and detailed annotations for training AI in physics-grounded visual reasoning.

2604.09415
Information Retrieval

Beyond Dense Connectivity: Explicit Sparsity for Scalable Recommendation

SSR brings explicit sparsity to recommender systems, pruning low-utility connections to outperform dense models in both scalability and performance (see the sparsification sketch below).

2604.08011
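
The SSR entry above describes filtering low-utility connections. A minimal sketch of one common way to impose explicit sparsity, keeping only the top-k largest-magnitude connections per row, follows; the helper name and the per-row top-k rule are assumptions, not SSR's published mechanism.

```python
import torch

# Illustrative only: one generic way to realize explicit sparsity by keeping
# the k strongest connections per row and zeroing the rest.
def topk_sparsify(weights: torch.Tensor, k: int) -> torch.Tensor:
    topk = weights.abs().topk(k, dim=-1)                      # strongest entries per row
    mask = torch.zeros_like(weights).scatter_(-1, topk.indices, 1.0)
    return weights * mask                                     # low-utility links zeroed

w = torch.randn(3, 8)                  # dense connection/utility scores (toy)
w_sparse = topk_sparsify(w, k=2)       # each row keeps its 2 strongest links
print(w_sparse)
```
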
Computer Vision

UniDriveVLA: Unifying Understanding, Perception, and Action Planning for Autonomous Driving

UniDriveVLA unifies autonomous-driving tasks, decoupling perception from reasoning with a Mixture-of-Transformers expert architecture and achieving state-of-the-art performance.

2604.02190

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week: summarized, scored, and delivered to your inbox every Monday.