Tom B. Brown
3 papers
Natural Language Processing
Language Models are Few-Shot Learners
GPT-3, a 175 billion parameter language model, demonstrates strong few-shot learning abilities across diverse NLP tasks without task-specific fine-tuning.
2005.14165
Machine Learning
Scaling Laws for Neural Language Models
This paper identifies power-law scaling relationships between language model performance and factors like model size, dataset size, and compute, enabling optimal training strategies under fixed compute budgets.
2001.08361
Statistical Machine Learning
Deep reinforcement learning from human preferences
This paper demonstrates that deep reinforcement learning agents can be effectively trained using human preferences as feedback instead of explicit reward functions, enabling complex task learning with minimal human input.
1706.03741