Researchers at the Center for AI Safety and Scale AI have published “Humanity’s Last Exam” — a test designed to measure how close today’s most powerful artificial intelligence (AI) models are to meeting or exceeding human-level knowledge across several domains.

The test was launched in January 2025, but the scientists first outlined the framework and the thinking behind its design in a new study published Jan. 28 in the journal Nature. It contains a corpus of 2,500 questions spanning more than 100 subjects, with input from more than 1,000 subject-matter experts at 500 institutions across 50 countries.


At launch, the researchers tested OpenAI’s GPT-4o and o1 models, Google’s Gemini 1.5 Pro, Anthropic’s Claude 3.5 Sonnet and DeepSeek R1. OpenAI’s o1 system notched the top spot with a score of just 8.3%.

Despite this poor performance, the researchers wrote at the time that “given the rapid pace of AI development, it is plausible that models could exceed 50% accuracy on HLE by the end of 2025.”

As of Feb. 12, 2026, the highest score achieved so far is 48.4%, set by Google’s Gemini 3 Deep Think. Human experts, meanwhile, score around 90% in their respective domains.

Existing benchmarks, such as the Massive Multitask Language Understanding (MMLU) dataset, which was authored with participation from Center for AI Safety founder Dan Hendrycks, test only a small subset of expert-level domain knowledge, primarily focusing on coding and mathematics.

Even state-of-the-art benchmarks such as Francois Chollet's ARC-AGI suite struggle with the memorization and searchability problems that the creators of Humanity's Last Exam say their new test addresses. Gemini's Deep Think, for example, achieved 84.6% on the ARC-AGI-2 benchmark just a week after failing to reach 50% on the HLE test.

The researchers also caution that a high score on the test would not, by itself, indicate artificial general intelligence (AGI).

“High accuracy on HLE would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence,” the scientists said in the study.

“Doing well on HLE is a necessary, but not a sufficient criterion to say that machines have reached true intelligence,” Manuel Schottdorf, a neuroscientist at the University of Delaware's Department of Psychological and Brain Sciences, said in a recent statement. Schottdorf is one of the many experts whose questions were accepted into the HLE's corpus.

“They will have to be good enough to solve these questions, but that as a fact alone can’t allow us to conclude that machines are truly intelligent.”