ProLLM Leaderboards
StackUnseen
Evaluates an LLM's ability to answer recent Stack Overflow questions, highlighting its effectiveness on new and emerging content. A sketch of the Acceptance metric follows the table below.
| # | Name | Provider | Acceptance |
|---|------|----------|------------|
| 1 | O1 Preview | OpenAI | 0.938 |
| 2 | O1 | OpenAI | 0.938 |
| 3 | O3 Mini (High) | OpenAI | 0.938 |
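The Acceptance column here (and in the StackEval and Q&A Assistant boards below) reads as the share of model answers judged acceptable, though this page does not spell out the exact aggregation. A minimal sketch, assuming one binary accept/reject verdict per answer; the field names are illustrative, not ProLLM's actual schema:

```python
# Minimal sketch of an acceptance-rate metric, assuming each model answer
# receives a binary accept/reject verdict from a (human or LLM) judge.
# Field names are illustrative assumptions, not ProLLM's actual schema.
judged_answers = [
    {"question_id": "q1", "accepted": True},
    {"question_id": "q2", "accepted": True},
    {"question_id": "q3", "accepted": False},
]

def acceptance_rate(judgements: list[dict]) -> float:
    """Fraction of answers marked acceptable."""
    return sum(j["accepted"] for j in judgements) / len(judgements)

print(f"Acceptance: {acceptance_rate(judged_answers):.3f}")  # 0.667
```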
StackEval
Evaluates an LLM's capability to function as a coding assistant by answering a variety of coding-related questions across different programming languages and question types.
| # | Name | Provider | Acceptance |
|---|------|----------|------------|
| 1 | O1 Preview | OpenAI | 0.981 |
| 2 | O1 | OpenAI | 0.976 |
| 3 | O3 Mini (Medium) | OpenAI | 0.974 |
Q&A Assistant
Evaluates an LLM's effectiveness as a team member in a business environment by assessing its ability to provide accurate and contextually relevant responses. It uses diverse queries covering both technical (such as coding) and non-technical areas.
| # | Name | Provider | Acceptance |
|---|------|----------|------------|
| 1 | O1 Mini | OpenAI | 0.989 |
| 2 | O3 Mini (High) | OpenAI | 0.982 |
| 3 | O1 Preview | OpenAI | 0.964 |
Summarization
Evaluates an LLM's ability to accurately summarize long texts from diverse sources such as YouTube video transcripts, websites, PDFs, and direct text inputs. It also assesses the model's capacity to follow detailed user instructions to extract specific data insights. The dataset consists of 41 unique entries in English, which have been translated into Afrikaans, Brazilian Portuguese, and Polish using machine translation.
| # | Name | Provider | Accuracy |
|---|------|----------|----------|
| 1 | O1 | OpenAI | 0.823 |
| 2 | O1 Preview | OpenAI | 0.822 |
| 3 | O3 Mini (High) | OpenAI | 0.794 |
Function Calling
Evaluates an LLM's ability to accurately use defined functions to perform specific tasks, such as web searches, code execution, and planning multiple function calls. The input is a conversation history and a list of available tools; a hypothetical input item is sketched after the table.
| # | Name | Provider | Accuracy |
|---|------|----------|----------|
| 1 | GPT-4o | OpenAI | 0.825 |
| 2 | Gemini-2.0 Flash | Google | 0.820 |
| 3 | Mistral Large | Mistral | 0.798 |
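Since the input is a conversation history plus a tool list, an evaluation item plausibly resembles OpenAI-style tool-calling payloads. The item below is hypothetical; the fields (`expected_call` in particular) are illustrative assumptions, not ProLLM's published schema:

```python
# Hypothetical function-calling eval item in an OpenAI-style tool format.
# The schema and field names are assumptions for illustration only.
eval_item = {
    "messages": [
        {"role": "user", "content": "What's the weather in Amsterdam tomorrow?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web for a query.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        },
    ],
    # Reference call the model is expected to produce (assumed field).
    "expected_call": {"name": "web_search",
                      "arguments": {"query": "Amsterdam weather tomorrow"}},
}

# Scoring as exact match against the reference call is also an assumption.
model_call = {"name": "web_search",
              "arguments": {"query": "Amsterdam weather tomorrow"}}
print(f"Correct tool call: {model_call == eval_item['expected_call']}")
```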
OpenBook Q&A
Evaluates an LLM's ability to answer questions based on provided context extracted from files.
| # | Name | Provider | Relevance |
|---|------|----------|-----------|
| 1 | DeepSeek R1 | DeepSeek AI | 0.847 |
| 2 | DeepSeek-V3 | DeepSeek AI | 0.815 |
| 3 | O1 Preview | OpenAI | 0.810 |
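A context-grounded prompt for this kind of open-book setup might be assembled as below; the template and function name are illustrative assumptions, not the benchmark's actual prompt:

```python
# Sketch of assembling a context-grounded prompt for open-book Q&A.
# The template is an illustration; the benchmark's real prompt is not shown here.
def build_openbook_prompt(context_chunks: list[str], question: str) -> str:
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_openbook_prompt(
    ["The invoice is due 30 days after issue."],
    "When is the invoice due?",
))
```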
Entity Extraction
Evaluates an LLM's ability to identify and extract specific entities from ad descriptions, given predefined definitions and potential values for each entity.
| # | Name | Provider | F1 Score |
|---|------|----------|----------|
| 1 | GPT-4o | OpenAI | 0.854 |
| 2 | Mistral Small 3 | Mistral | 0.853 |
| 3 | MiniMax-Text-01 | MiniMax | 0.851 |
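The F1 Score is presumably precision and recall over predicted versus gold entities; a worked sketch, assuming entities are scored as exact (type, value) pairs, which is an assumption about the scoring rather than something stated on this page:

```python
# Sketch of a set-based F1 for entity extraction: precision and recall over
# (entity_type, value) pairs. The pair format and exact-match scoring are
# assumptions for illustration.
gold = {("brand", "Acme"), ("condition", "used"), ("color", "red")}
pred = {("brand", "Acme"), ("condition", "new"), ("color", "red")}

tp = len(gold & pred)                      # 2 correct pairs
precision = tp / len(pred)                 # 2/3
recall = tp / len(gold)                    # 2/3
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")                    # 0.667
```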
SQL Disambiguation
Evaluates an LLM's ability to disambiguate user requests for generating SQL queries based on the given business rules and database schema. A question can be answered from the schema alone, from the schema combined with the business rules, or it may require additional information before it can be answered.
| # | Name | Provider | Accuracy |
|---|------|----------|----------|
| 1 | O1 | OpenAI | 0.492 |
| 2 | GPT-3.5 Turbo | OpenAI | 0.477 |
| 3 | Qwen2.5-72B Instruct | Alibaba | 0.460 |
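The description implies each request falls into one of three buckets before any SQL is written. A minimal sketch of that label space; the enum names are illustrative, not the benchmark's own:

```python
# The benchmark description implies a three-way decision per request.
# Enum names below are illustrative assumptions, not ProLLM's labels.
from enum import Enum

class Answerability(Enum):
    SCHEMA_ONLY = "answerable from the database schema alone"
    SCHEMA_PLUS_RULES = "answerable from schema plus business rules"
    NEEDS_CLARIFICATION = "ambiguous; requires more information from the user"

# A model's output would presumably count as accurate when its predicted
# bucket (and SQL or clarifying question, where applicable) matches the
# reference; the exact scoring is an assumption.
prediction = Answerability.NEEDS_CLARIFICATION
print(prediction.value)
```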
LLM-as-a-Judge
Evaluates an LLM's ability to judge the acceptability of other LLMs' answers to technical and non-technical questions, including some coding questions.
| # | Name | Provider | Accuracy |
|---|------|----------|----------|
| 1 | GPT-4o | OpenAI | 0.846 |
| 2 | GPT-4 Turbo | OpenAI | 0.838 |
| 3 | O1 Mini | OpenAI | 0.838 |
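Judge accuracy is plausibly plain agreement with reference acceptability labels; whether the benchmark weights classes or aggregates differently is not stated here, so the sketch below assumes simple agreement:

```python
# Sketch of scoring an LLM judge against reference accept/reject labels.
# Plain agreement is an assumption about how Accuracy is computed.
reference = [True, True, False, True]   # ground-truth acceptability
judge     = [True, False, False, True]  # LLM judge's verdicts

accuracy = sum(r == j for r, j in zip(reference, judge)) / len(reference)
print(f"Judge accuracy: {accuracy:.2f}")  # 0.75
```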
Transcription
Evaluates transcription models on multilingual, multi-speaker audio with varying levels of background noise, across business domains such as software development, finance, classifieds, food delivery, and healthcare. The dataset consists of 150 unique audio samples; each sample is augmented to produce a low-noise and a high-noise version.
| # | Name | Provider | Accuracy |
|---|------|----------|----------|
| 1 | Whisper Large-v3 | OpenAI | 0.779 |
| 2 | Gemini-1.5-Flash | Google | 0.717 |
| 3 | Gemini-1.5-Pro | Google | 0.708 |
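Transcription quality is commonly scored via word error rate; reading the Accuracy column as 1 − WER is an assumption, not something this page states. A minimal WER over whitespace-separated tokens:

```python
# Minimal word error rate (WER) via Levenshtein distance over tokens.
# Treating the leaderboard's "Accuracy" as 1 - WER is an assumption;
# the exact ProLLM metric is not specified on this page.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

r = "the invoice is due in thirty days"
h = "the invoice due in thirteen days"
print(f"WER = {wer(r, h):.3f}, accuracy = {1 - wer(r, h):.3f}")  # WER = 0.286
```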