Omer Levy
2 papers · Latest:
Natural Language Processing
LIMA: Less Is More for Alignment
LIMA shows that fine-tuning a large language model on just 1,000 curated examples can achieve performance comparable to state-of-the-art models, highlighting the dominant role of pretraining over extensive instruction tuning.
2305.11206
Natural Language Processing
RoBERTa: A Robustly Optimized BERT Pretraining Approach
RoBERTa revisits BERT pretraining with optimized hyperparameters, longer training, and more data, achieving state-of-the-art NLP performance and showing that the original BERT was significantly undertrained.
1907.11692