1 | gemini-2.0-flash | 0.829 | 0.102 | 7.41 | 0.000480 |
2 | gemini-2.0-flash-001 | 0.814 | 0.176 | 6.85 | 0.000421 |
3 | gemini-1.5-flash | 0.797 | 0.143 | 9.54 | 0.000238 |
4 | gemini-2.0-pro-exp | 0.764 | 0.227 | 11.95 | TBA |
5 | gemini-2.0-flash-thinking-exp | 0.746 | 0.266 | 10.46 | TBA |
6 | gemini-1.5-pro | 0.732 | 0.265 | 11.44 | 0.003332 |
7 | gpt-4o | 0.687 | 0.247 | 10.16 | 0.004736 |
8 | gpt-4o-mini | 0.642 | 0.213 | 9.71 | 0.000275 |
9 | gemma-3-27b-it (via OpenRouter) | 0.628 | 0.299 | 18.79 | 0.000096 |
10 | gemini-1.5-flash-8b | 0.551 | 0.223 | 3.91 | 0.000055 |
11 | Llama-Vision-Free (via Together AI) | 0.531 | 0.198 | 6.93 | 0.000000 |
12 | Llama-3.2-11B-Vision-Instruct-Turbo (via Together AI) | 0.524 | 0.192 | 3.68 | 0.000060 |
13 | qwen/qwen-2.5-vl-7b-instruct (via OpenRouter) | 0.482 | 0.209 | 11.53 | 0.000052 |
14 | Llama-3.2-90B-Vision-Instruct-Turbo (via Together AI) | 0.461 | 0.306 | 19.26 | 0.000426 |
15 | Llama-3.2-11B-Vision-Instruct (via Hugging Face) | 0.451 | 0.257 | 4.54 | 0.000000 |
16 | microsoft/phi-4-multimodal-instruct (via OpenRouter) | 0.366 | 0.287 | 10.80 | 0.000019 |