Susan Zhang
2 papers · Latest: LIMA: Less Is More for Alignment
Natural Language Processing
LIMA: Less Is More for Alignment
LIMA shows that fine-tuning a large language model on just 1,000 curated examples can achieve performance comparable to state-of-the-art models, highlighting the dominant role of pretraining over extensive instruction tuning.
2305.11206
Natural Language Processing
OPT: Open Pre-trained Transformer Language Models
OPT is a suite of openly released large-scale transformer language models comparable to GPT-3 but developed with significantly lower environmental impact.
2205.01068