| Rank | Model | Paper | Average Accuracy (%) | Date | Code |
|---|---|---|---|---|---|
| 1 | GPT-4o | Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments | 76.22 | 2025-02-10 | 📦 video-db/ocr-benchmark |
| 2 | Gemini-1.5 Pro | Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments | 76.13 | 2025-02-10 | 📦 video-db/ocr-benchmark |
| 3 | Claude-3 Sonnet | Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments | 67.71 | 2025-02-10 | 📦 video-db/ocr-benchmark |
| 4 | RapidOCR | Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments | 56.98 | 2025-02-10 | 📦 video-db/ocr-benchmark |
| 5 | EasyOCR | Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments | 49.30 | 2025-02-10 | 📦 video-db/ocr-benchmark |
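
For readers who want to sanity-check a score like the ones above, here is a minimal sketch of how an average accuracy over OCR predictions could be aggregated. The per-frame metric (character-level similarity via Python's `difflib`) and the function names `frame_accuracy` / `average_accuracy` are assumptions chosen for illustration; they are not the actual evaluation code from the video-db/ocr-benchmark repository or the cited paper.

```python
from difflib import SequenceMatcher


def frame_accuracy(predicted: str, ground_truth: str) -> float:
    """Character-level similarity between an OCR prediction and its
    ground truth, in [0, 1]. A stand-in metric, not the benchmark's own."""
    if not predicted and not ground_truth:
        return 1.0  # both empty: treat as a perfect match
    return SequenceMatcher(None, predicted, ground_truth).ratio()


def average_accuracy(pairs: list[tuple[str, str]]) -> float:
    """Mean per-frame accuracy over (prediction, ground truth) pairs,
    reported as a percentage like the leaderboard column above."""
    if not pairs:
        return 0.0
    scores = [frame_accuracy(pred, gt) for pred, gt in pairs]
    return 100.0 * sum(scores) / len(scores)


# Toy example: one exact read and one read with a single-character error.
pairs = [
    ("EXIT 12 NORTH", "EXIT 12 NORTH"),
    ("SPEED LIMT 65", "SPEED LIMIT 65"),
]
print(f"Average Accuracy: {average_accuracy(pairs):.2f}")
```

A score in the 49-77 range, as in the table, would then mean the model recovers roughly half to three-quarters of the ground-truth text on average, under whatever per-frame metric the benchmark actually defines.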