Dubesor LLM Benchmark table

Small-scale manual performance comparison benchmark I made for myself. This table shows the results I recorded for various AI models across different personal tasks I encountered over time (currently 83). I use a weighted rating system and calculate the difficulty of each task by incorporating the results of all models; this matters most in scoring when a model fails an easy question or passes a hard one.
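
The exact weighting is not published here, so the following Python sketch is purely a hypothetical illustration of how such a scheme could work. It assumes difficulty is the share of models that fail a task, and that a pass earns more on a hard task while a fail costs more on an easy one; the function and task names are made up for the example.

```python
# Hypothetical illustration only: the benchmark's actual formula is not given.
# Assumption: difficulty = share of models that did NOT pass the task, and a
# model's weighted score rewards hard passes and penalizes easy fails.

def task_difficulty(results_per_model: list[bool]) -> float:
    """Difficulty of a task = fraction of all models that failed it."""
    fails = sum(1 for passed in results_per_model if not passed)
    return fails / len(results_per_model)

def weighted_score(model_results: dict[str, bool],
                   difficulties: dict[str, float]) -> float:
    """Sum per-task contributions: a pass earns the task's difficulty,
    a fail costs (1 - difficulty), so easy fails hurt the most."""
    score = 0.0
    for task, passed in model_results.items():
        d = difficulties[task]
        score += d if passed else -(1.0 - d)
    return score

# Example: passing a hard task (0.9) but failing an easy one (0.1)
# nets 0.9 - 0.9 = 0.0.
difficulties = {"hard_task": 0.9, "easy_task": 0.1}
print(weighted_score({"hard_task": True, "easy_task": False}, difficulties))
```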

NOTE THAT THIS IS JUST ME SHARING THE RESULTS FROM MY OWN SMALL-SCALE PERSONAL TESTING. YMMV! OBVIOUSLY THE SCORES ARE JUST THAT AND MIGHT NOT REFLECT YOUR OWN EXPERIENCE OR OTHER WELL-KNOWN BENCHMARKS.

This table currently supports: intro tooltips, dynamic sorting, searching, filtering, comparing, highlighting, and exporting.

Table columns: Model (65) | TOTAL | Pass | Refine | Fail | Refusal | $ mTok | Reason | STEM | Utility | Code | Censor
(last updated 2024-09-17: added Mistral-Small-Instruct-2409 local)
