Tong Zhang
4 papers · Latest:
Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less
Using the same optimizer for LLM finetuning as pretraining significantly reduces forgetting while maintaining performance, a phenomenon called optimizer-model consistency.
Profiling for Pennies: Unveiling the Privacy Iceberg of LLM Agents
LLM agents can build detailed personal profiles cheaply and quickly, exposing significant privacy risks stemming from platform safeguard failures and users' lack of awareness.
Recursive Multi-Agent Systems
RecursiveMAS scales multi-agent collaboration by casting the system as a unified latent-space recursive computation, improving performance and efficiency.
Evolution of Optimization Methods: Algorithms, Scenarios, and Evaluations
This paper comprehensively reviews and empirically evaluates deep learning optimization methods, identifying key trends and future research directions.
Weekly AI Paper Digest
Get the top 10 AI/ML arXiv papers from the week, summarized, scored, and delivered to your inbox every Monday.