WinoGrande

Parameter-Efficient Fine-Tuning Benchmark

Performance Over Time

Showing 3 results | Metric: Accuracy (%)

Top Performing Models

| Rank | Model | Paper | Accuracy (%) | Date | Code |
|------|-------|-------|--------------|------|------|
| 1 | LLaMA2-7b | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | 70.80 | 2024-08-27 | On-Point-RND/GIFT_SW |
| 2 | LLaMA2-7b | DoRA: Weight-Decomposed Low-Rank Adaptation | 70.09 | 2024-02-14 | NVlabs/DoRA, catid/dora, seanzhang-zhichen/llama3-chinese, nbasyl/DoRA, ayyucedemirbas/DoRA |
| 3 | LLaMA2-7b | LoRA: Low-Rank Adaptation of Large Language Models | 69.85 | 2021-06-17 | labmlai/annotated_deep_learning_paper_implementations, hiyouga/llama-efficient-tuning, tatsu-lab/stanford_alpaca |
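The listed methods are all low-rank adaptation variants: they freeze the pretrained weights and train only a small low-rank update. A minimal NumPy sketch of the core LoRA idea from the paper above (dimensions, variable names, and initialization scale here are illustrative assumptions, not the reference implementation):

```python
import numpy as np

# Sketch of the LoRA idea: instead of updating a frozen weight matrix W,
# learn a low-rank update B @ A with rank r much smaller than W's dimensions.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
alpha = 16  # scaling hyperparameter, as in the LoRA paper

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init: update starts at 0

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapted model matches the frozen model.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (here 32 values versus 64 in `W`; the gap is far larger at LLM scale) would receive gradient updates, which is what makes these methods parameter-efficient. DoRA and GIFT-SW modify which parts of the weight are adapted, not this basic structure.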

All Papers (3)