I get the distinct impression that most of the focus for GPT5 was making it easier to divert their overflowing volume of queries to less expensive routes.
I’m thinking otherwise. I think GPT5 is a much smaller model with some fallback to previous models when needed.
Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates into inferior quality in American Investor Language (AIL) - which hasn’t given a flying fuck about energy efficiency for a while now.
Obviously it’s higher. If it was any lower, they would’ve made a huge announcement out of it to prove they’re better than the competition.
It’s cheaper though, so very likely it’s more efficient somehow.
And they don’t want to disclose the energy efficiency becaaaause … ?
They probably wouldn’t really care how efficient it is, but they certainly would care that the costs are lower.
I’m almost sure they’re keeping that for the Earnings call.