
BoolQ

Parameter-Efficient Fine-Tuning Benchmark

Performance Over Time

Showing 4 results | Metric: Accuracy (%)

Top Performing Models

| Rank | Model | Paper | Accuracy (%) | Date | Code |
|------|-------|-------|--------------|------|------|
| 1 | LLaMA2-7b | QLoRA: Efficient Finetuning of Quantized LLMs | 82.63 | 2023-05-23 | qwenlm/qwen, QwenLM/Qwen-7B, artidoro/qlora |
| 2 | LLaMA2-7b | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | 82.63 | 2024-08-27 | On-Point-RND/GIFT_SW |
| 3 | LLaMA2-7b | DoRA: Weight-Decomposed Low-Rank Adaptation | 81.93 | 2024-02-14 | NVlabs/DoRA, catid/dora, seanzhang-zhichen/llama3-chinese, nbasyl/DoRA, ayyucedemirbas/DoRA |
| 4 | LLaMA2-7b | LoRA: Low-Rank Adaptation of Large Language Models | 80.28 | 2021-06-17 | labmlai/annotated_deep_learning_paper_implementations, hiyouga/llama-efficient-tuning, tatsu-lab/stanford_alpaca |
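
All entries above fine-tune the same LLaMA2-7b backbone with adapter-style methods, so a minimal sketch of how such a setup is typically wired together with the Hugging Face `peft` library may be useful for context. The checkpoint name, target modules, and hyperparameters below are illustrative assumptions, not the configurations reported by any of the listed papers.

```python
# Minimal LoRA / QLoRA-style fine-tuning sketch for a BoolQ-style yes/no task.
# All hyperparameters (rank, alpha, dropout, target modules) are illustrative
# assumptions, not settings taken from the papers in the table above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint identifier

# Optional 4-bit quantization of the frozen base model (the QLoRA-style setup);
# drop `quantization_config` below for plain LoRA on a full-precision model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these small
# adapter matrices are trained, while the base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters

# BoolQ examples are then formatted as prompts ending in "yes"/"no" and trained
# with a standard causal-LM loss (e.g. via transformers.Trainer).
```

DoRA changes the form of the low-rank update (weight decomposition) and GIFT-SW injects Gaussian noise while tuning selected salient weights, but all four methods share the same parameter-efficient idea benchmarked here: training only a small fraction of the model's weights.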
