Rui Zhang
4 papers · Latest:
Elastic Attention Cores for Scalable Vision Transformers
VECA introduces elastic core-periphery attention for Vision Transformers, achieving linear-time complexity and competitive performance with learned core tokens.
When Relations Break: Analyzing Relation Hallucination in Vision-Language Model Under Rotation and Noise
Visual perturbations like rotation and noise significantly degrade vision-language models' relational reasoning, showing a gap in their robustness.
Black-Box Skill Stealing Attack from Proprietary LLM Agents: An Empirical Study
This paper empirically studies black-box skill stealing from proprietary LLM agents, demonstrating that agent skills can be extracted with little effort and highlighting overlooked copyright risks.
Meta-learning In-Context Enables Training-Free Cross Subject Brain Decoding
A meta-learning method allows training-free, cross-subject fMRI brain decoding by inferring individual neural patterns in-context, eliminating fine-tuning.