It’s actually wild that it was the #1 LLM for a while in terms of accuracy and usefulness.
But for whatever reason they seemingly keep fucking around with it, trying to make it agree with Elon’s stupid politics.
In February, when Grok 3 was first on every chatbot arena metric – that was just a bit after DeepSeek R1 and o3, and the space has evolved a lot since then.
However, checking Wikipedia, it actually seems like they juiced the metrics with unfair comparisons: letting Grok try each problem 64 times and reporting the best answer.
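To see why "best of 64 tries" inflates a benchmark score, here's a minimal simulation. The 30% per-attempt success rate is a made-up illustrative number, not anything from xAI's actual reporting – the point is just how fast any-of-n success approaches 100%:

```python
import random

def attempt(p_correct: float) -> bool:
    # One model attempt: correct with probability p_correct (hypothetical rate).
    return random.random() < p_correct

def best_of_n(p_correct: float, n: int) -> bool:
    # Scored as correct if ANY of the n attempts succeeds.
    return any(attempt(p_correct) for _ in range(n))

def estimate(fn, trials: int = 10_000, **kw) -> float:
    # Monte Carlo estimate of how often fn returns True.
    return sum(fn(**kw) for _ in range(trials)) / trials

random.seed(0)
single = estimate(attempt, p_correct=0.3)       # ~0.30
best64 = estimate(best_of_n, p_correct=0.3, n=64)
# Chance of missing all 64 tries is 0.7**64 ≈ 1e-10, so the
# best-of-64 score comes out essentially perfect even though
# a single attempt fails 70% of the time.
```

So a model that's wrong most of the time on one try can still post a near-100% score under that protocol – which is why comparing a best-of-64 number against other models' single-attempt numbers is misleading.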
Have a source on that claim?
https://en.wikipedia.org/wiki/Grok_(chatbot)#Grok-3