Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.
Well… in theory, that particular line is just saying the data shouldn't be political…
Problem is that an LLM's training set doesn't contain only "data": it also includes plenty of opinions and shitposts from the internet, so it's biased by default.