Quoc V. Le
2 papers · Latest:
Natural Language Processing
Transcending Scaling Laws with 0.1% Extra Compute
UL2R fine-tuning significantly improves large language model performance and scaling efficiency with only 0.1% extra compute, enabling substantial computational savings and emergent abilities.
2210.11399
Natural Language Processing
Finetuned Language Models Are Zero-Shot Learners
Instruction tuning large language models on diverse NLP tasks significantly enhances their zero-shot learning capabilities, allowing them to outperform much larger models such as GPT-3.
2109.01652