AI benchmarks

=General=
 
* [https://epoch.ai/ Epoch AI]
** [https://epoch.ai/data/notable-ai-models Notable AI models]
** [https://epoch.ai/data/ai-benchmarking-dashboard AI benchmarking dashboard]

==Lists of Benchmarks==

* 2025-05: [https://x.com/scaling01 Lisan al Gaib]: [https://x.com/scaling01/status/1919092778648408363 The Ultimate LLM Benchmark list]
** [https://x.com/scaling01/status/1919217718420508782 Average across 28 benchmarks]
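As a rough sketch of how a cross-benchmark average like the one above might be computed (a simple macro-average; the normalization scheme and the actual 28-benchmark selection used in the linked post are assumptions here, and the numbers below are placeholders):

<syntaxhighlight lang="python">
# Minimal sketch: macro-average per-model scores across several benchmarks.
# Assumes all scores are already on a comparable 0-100 scale; the models,
# benchmarks, and values are illustrative, not real results.
from statistics import mean

scores = {
    "model-a": {"bench-1": 72.0, "bench-2": 55.5, "bench-3": 80.1},
    "model-b": {"bench-1": 68.3, "bench-2": 61.0, "bench-3": 77.4},
}

averages = {model: mean(per_bench.values()) for model, per_bench in scores.items()}

for model, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: {avg:.1f}")
</syntaxhighlight>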

==Analysis of Methods==

* 2025-04: [https://arxiv.org/abs/2504.20879 The Leaderboard Illusion]
  
 
=Methods=
 
==Software/Coding==
 
* 2025-02: [https://arxiv.org/abs/2502.12115 SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?] ([https://github.com/openai/SWELancer-Benchmark code])

==Math==

* [https://www.vals.ai/benchmarks/aime-2025-03-13 AIME Benchmark]

==Science==

* 2025-07: [https://allenai.org/blog/sciarena SciArena: A New Platform for Evaluating Foundation Models in Scientific Literature Tasks] ([https://sciarena.allen.ai/ vote], [https://huggingface.co/datasets/yale-nlp/SciArena data], [https://github.com/yale-nlp/SciArena code])
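The SciArena data linked above is hosted on the Hugging Face Hub; a minimal sketch of pulling it locally with the <code>datasets</code> library (assuming the repo exposes a default configuration; split names and fields are whatever the repo defines):

<syntaxhighlight lang="python">
# Minimal sketch: load the SciArena dataset from the Hugging Face Hub.
# Requires `pip install datasets`; the repo id comes from the data link above.
from datasets import load_dataset

dataset = load_dataset("yale-nlp/SciArena")

# Print the available splits and their features rather than assuming a layout.
print(dataset)
</syntaxhighlight>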
  
 
==Visual==

* 2024-06: [https://charxiv.github.io/ CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs] ([https://arxiv.org/abs/2406.18521 preprint], [https://charxiv.github.io/ leaderboard])
* 2025-03: [https://arxiv.org/abs/2503.14607 Can Large Vision Language Models Read Maps Like a Human?] MapBench

==Conversation==

* 2025-01: [https://arxiv.org/abs/2501.17399 MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs] ([https://scale.com/research/multichallenge project], [https://github.com/ekwinox117/multi-challenge code], [https://scale.com/leaderboard/multichallenge leaderboard])
  
 
==Creativity==

* See also: [[AI creativity]]
* 2024-10: [https://arxiv.org/abs/2410.04265 AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text]
* 2024-11: [https://openreview.net/pdf?id=fz969ahcvJ AidanBench: Evaluating Novel Idea Generation on Open-Ended Questions] ([https://github.com/aidanmclaughlin/AidanBench code])
  
 
==Assistant/Agentic==

See: [[AI_Agents#Optimization|AI Agents: Optimization]]
 
* [https://arxiv.org/abs/2311.12983 GAIA: a benchmark for General AI Assistants]
* [https://www.galileo.ai/blog/agent-leaderboard Galileo AI] [https://huggingface.co/spaces/galileo-ai/agent-leaderboard Agent Leaderboard]
* [https://huggingface.co/spaces/smolagents/smolagents-leaderboard Smolagents LLM Leaderboard]: LLMs powering agents
* OpenAI [https://openai.com/index/paperbench/ PaperBench: Evaluating AI’s Ability to Replicate AI Research] ([https://cdn.openai.com/papers/22265bac-3191-44e5-b057-7aaacd8e90cd/paperbench.pdf paper], [https://github.com/openai/preparedness/tree/main/project/paperbench code])
* 2025-06: [https://arxiv.org/abs/2506.22419 The Automated LLM Speedrunning Benchmark: Reproducing NanoGPT Improvements]
  
 
==Science==

See: [[Science_Agents#Science_Benchmarks|Science Benchmarks]]
