
HellaSwag

Parameter-efficient fine-tuning benchmark

Performance Over Time

Showing 3 results | Metric: Accuracy (%)

Top Performing Models

| Rank | Model | Paper | Accuracy (%) | Date | Code |
|------|-------|-------|--------------|------|------|
| 1 | LLaMA2-7b | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | 76.68 | 2024-08-27 | On-Point-RND/GIFT_SW |
| 2 | LLaMA2-7b | LoRA: Low-Rank Adaptation of Large Language Models | 76.67 | 2021-06-17 | labmlai/annotated_deep_learning_paper_implementations, hiyouga/llama-efficient-tuning, tatsu-lab/stanford_alpaca |
| 3 | LLaMA2-7b | DoRA: Weight-Decomposed Low-Rank Adaptation | 76.27 | 2024-02-14 | NVlabs/DoRA, catid/dora, seanzhang-zhichen/llama3-chinese, nbasyl/DoRA, ayyucedemirbas/DoRA |
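All three entries above are parameter-efficient methods that leave the base LLaMA2-7b weights frozen and train only a small number of extra parameters. As a minimal illustration of the core idea behind LoRA (the second entry), the sketch below implements a low-rank adapted forward pass in NumPy: the frozen weight `W` is augmented with a trainable update `B @ A` of rank `r`, scaled by `alpha / r`. The dimensions and hyperparameters here are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0, r=4):
    """LoRA-style forward pass: y = x W^T + (alpha/r) * x A^T B^T.

    W: frozen base weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r)
    """
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 16, 4
x = rng.normal(size=(2, d_in))
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # small random init
B = np.zeros((d_out, r))                # zero init: update starts at zero

y = lora_forward(x, W, A, B, r=r)
```

Because `B` is initialized to zero, the adapted model is exactly equal to the frozen base model at the start of fine-tuning; only `A` and `B` (here 96 parameters versus 128 in `W`, and a far smaller fraction at LLM scale) receive gradient updates.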

All Papers (3)