The nextAI Solution to the NeurIPS 2023 LLM Efficiency Challenge
Gyuwon Park, DongIl Shin, SolGil Oh, SangGi Ryu, Byung-Hak Kim
TLDR
nextAI efficiently fine-tuned LLaMa2 70B on a single A100 40GB GPU within 24 hours for the NeurIPS LLM Efficiency Challenge, using QLoRA and Flash Attention 2.
Key contributions
- Fine-tuned LLaMa2 70B model on a single A100 40GB GPU within 24 hours.
- Leveraged Quantized Low-Rank Adaptation (QLoRA) and Flash Attention 2 for efficiency (see the sketch after this list).
- Developed a custom, iteratively tested dataset from diverse open-source resources.
- Achieved significant resource reduction and high accuracy on QA benchmarks.
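The paper does not include its training script, but the recipe it describes maps naturally onto the Hugging Face stack. Below is a minimal, hedged sketch of what loading LLaMa2 70B in 4-bit NF4 (the quantization QLoRA relies on) with Flash Attention 2 and LoRA adapters could look like; the checkpoint name, LoRA rank/alpha/dropout, and target modules are illustrative assumptions, not the authors' actual settings.

```python
# Hedged sketch of the QLoRA + Flash Attention 2 setup described above.
# Hyperparameters and the checkpoint name are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization is the core of QLoRA: the base weights are frozen
# and quantized, which is what lets a 70B model fit on a single A100.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",              # assumed base checkpoint
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # enable the FA2 kernel
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters; r/alpha/dropout are example values only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are updated
```

From here the model can be handed to any standard causal-LM fine-tuning loop; only the adapter parameters receive gradients, which is what keeps the memory footprint within a single 40GB GPU.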
Why it matters
This paper demonstrates the practical feasibility of optimizing massive LLMs like LLaMa2 70B in highly resource-constrained environments. It highlights how advanced techniques can reduce computational costs while maintaining high accuracy. This is crucial for broader LLM adoption in real-world applications.
Original Abstract
The rapid evolution of Large Language Models (LLMs) has significantly impacted the field of natural language processing, but their growing complexity raises concerns about resource usage and transparency. Addressing these challenges, we participated in the NeurIPS LLM Efficiency Challenge, aiming to fine-tune a foundation model within stringent constraints. Our focus was the LLaMa2 70-billion-parameter model, optimized on a single A100 40GB GPU within a 24-hour limit. Our methodology hinged on a custom dataset, carefully assembled from diverse open-source resources and benchmark tests, aligned with the challenge's open-source ethos. Our approach leveraged Quantized Low-Rank Adaptation (QLoRA) fine-tuning, integrated with advanced attention mechanisms such as Flash Attention 2. We experimented with various configurations of the LoRA technique, optimizing the balance between computational efficiency and model accuracy. Our fine-tuning strategy was underpinned by the creation and iterative testing of multiple dataset compositions, leading to the selection of a version that demonstrated robust performance across diverse tasks and benchmarks. The culmination of our efforts was an efficiently fine-tuned LLaMa2 70B model that operated within the constraints of a single GPU, showcasing not only a significant reduction in resource utilization but also high accuracy across a range of QA benchmarks. Our study serves as a testament to the feasibility of optimizing large-scale models in resource-constrained environments, emphasizing the potential of LLMs in real-world applications.
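The abstract's mention of experimenting with "various configurations of the LoRA technique" suggests a sweep over adapter ranks. A hypothetical sketch of how such a sweep might be compared by trainable-parameter budget follows; the ranks, the alpha = 2r heuristic, and the target modules are assumptions, not the paper's actual grid.

```python
# Hedged sketch of a LoRA rank sweep; values are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

def count_trainable(model):
    """Return (trainable, total) parameter counts for a wrapped model."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

for rank in (8, 16, 32, 64):
    # Reload the frozen base each iteration so adapters do not accumulate;
    # in practice a smaller proxy model could be used for this comparison.
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")
    cfg = LoraConfig(
        r=rank,
        lora_alpha=2 * rank,                  # common alpha = 2r heuristic
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    adapted = get_peft_model(base, cfg)
    trainable, total = count_trainable(adapted)
    print(f"r={rank}: {trainable:,} trainable / {total:,} total "
          f"({trainable / total:.4%})")
```

The appeal of such a sweep is that rank trades capacity for memory and compute roughly linearly, so each candidate configuration can be fine-tuned and benchmarked within the same 24-hour budget before picking the best accuracy-per-cost point.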