
ColonINST-v1 (Unseen)

Referring Expression Generation Benchmark

Performance Over Time

📊 Showing 17 results | 📏 Metric: Accuracy

Top Performing Models

| Rank | Model | Paper | Accuracy | Date | Code |
|------|-------|-------|----------|------|------|
| 1 | ColonGPT (w/ LoRA, w/o extra data) | Frontiers in Intelligent Colonoscopy | 80.18 | 2024-10-22 | 📦 ai4colonoscopy/intelliscope |
| 2 | MobileVLM-1.7B (w/ LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 78.03 | 2023-12-28 | 📦 meituan-automl/mobilevlm |
| 3 | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 75.25 | 2023-06-01 | 📦 microsoft/LLaVA-Med |
| 4 | Bunny-v1.0-3B (w/ LoRA, w/ extra data) | Efficient Multimodal Learning from Data-centric Perspective | 75.08 | 2024-02-18 | 📦 baai-dcai/bunny |
| 5 | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 75.07 | 2023-06-01 | 📦 microsoft/LLaVA-Med |
| 6 | MGM-2B (w/o LoRA, w/ extra data) | Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | 74.30 | 2024-03-27 | 📦 dvlab-research/MGM, 📦 dvlab-research/minigemini |
| 7 | MobileVLM-1.7B (w/o LoRA, w/ extra data) | MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | 73.14 | 2023-12-28 | 📦 meituan-automl/mobilevlm |
| 8 | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | 73.05 | 2023-06-01 | 📦 microsoft/LLaVA-Med |
| 9 | LLaVA-v1.5 (w/ LoRA, w/ extra data) | Improved Baselines with Visual Instruction Tuning | 72.88 | 2023-10-05 | 📦 huggingface/transformers, 📦 haotian-liu/LLaVA, 📦 LLaVA-VL/LLaVA-NeXT |
| 10 | MiniGPT-v2 (w/ LoRA, w/o extra data) | MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | 72.05 | 2023-10-14 | 📦 vision-cair/minigpt-4, 📦 zebangcheng/emotion-llama |
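The leaderboard reports a single accuracy score per model on the unseen split. As a rough illustration only, the sketch below shows one way such an exact-match accuracy could be computed from a predictions file; the file name (`predictions_unseen.json`), the field names (`prediction`, `answer`), and the normalisation are assumptions, not the benchmark's official evaluation protocol.

```python
# Minimal sketch of an exact-match accuracy computation for a leaderboard-style
# metric. Field names and file layout are hypothetical, not the official script.
import json


def normalize(text: str) -> str:
    """Lower-case and strip whitespace and a trailing period for a lenient match."""
    return text.strip().lower().rstrip(".")


def accuracy(records: list[dict]) -> float:
    """Return the percentage of samples whose prediction matches the reference."""
    if not records:
        return 0.0
    correct = sum(
        normalize(r["prediction"]) == normalize(r["answer"]) for r in records
    )
    return 100.0 * correct / len(records)


if __name__ == "__main__":
    # Hypothetical predictions file: a JSON list of {"prediction": ..., "answer": ...}.
    with open("predictions_unseen.json") as f:
        records = json.load(f)
    print(f"Accuracy: {accuracy(records):.2f}")
```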

All Papers (17)