OpenAI has highlighted GPT-5’s ability to create software in its entirety and demonstrate better reasoning capabilities – with answers that show workings, logic and inference.
The company claims the model has been trained to be more honest and to provide more accurate responses, and says that, overall, it feels more human.
According to Altman, the model is “significantly better” than its predecessors.
“GPT-3 sort of felt to me like talking to a high school student… 4 felt like you’re kind of talking to a college student,” he said in a briefing ahead of Thursday’s launch.
“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.”
For Prof Carissa Véliz of the Institute for Ethics in AI, however, GPT-5's launch may not be as significant as its marketing suggests.
“These systems, as impressive as they are, haven’t been able to be really profitable,” she said, also noting that they can only mimic – rather than truly emulate – human reasoning abilities.
“There is a fear that we need to keep up the hype, or else the bubble might burst, and so it might be that it’s mostly marketing.”
The BBC's AI Correspondent Marc Cieslak gained exclusive access to GPT-5 before its official launch.
“Apart from minor cosmetic differences the experience was similar to using the older chatbot: give it tasks or ask it questions by typing a text prompt.
It's now powered by what's called a reasoning model, which essentially means it thinks harder about solving problems, but this seems more like an evolution than a revolution for the tech."
The company will roll out the model to all users from Thursday.
In the coming days it will become much clearer whether the model really is as good as Sam Altman claims.