We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
If you won’t tell my truth, I’ll force you to acknowledge my truth.
nothing says abusive asshole more than this.
adding missing information
Did you mean: hallucinate on purpose?
Wasn’t he going to lay off the ketamine for a while?
Yeah, let’s take a technology already known for filling in gaps with invented nonsense and use it as our new training paradigm.
He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.
Let’s not beat around the bush here: he wants it to spout fascist propaganda.
That’s a hell of a way to say that he wants to rewrite history.
Isn’t everyone just sick of his bullshit though?
US taxpayers clearly aren’t, since they’re subsidising his drug habit.
If we had direct control over how our tax dollars were spent, a lot would be different pretty fast. Might not be better, but different.
At this point a significant part of the country would decide to airstrike US primary schools to stop wasting money and indoctrinating kids.
More guns?
If we talked like this, we’d end up in a padded room and drugged to kingdom come. And for good reason, I should say.
The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.
Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.
Modern AI is more than just “pattern matching” at this point. Yes, at the lowest levels that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.
Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history. What the fuck?!
That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.
What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems. We’re at the point where simply scaling up to insane levels has yielded results no one expected, because scaling was the lowest-hanging fruit at the time. Few-shot learning -> novel-space generalization is very hard, so the easiest method was just to take what was already being done and make it bigger (a la ResNet back in the day).
Lemmy is almost as bad as reddit when it comes to hiveminds.
You literally called it borderline magic.
Don’t do that? They’re pattern recognition engines, they can produce some neat results and are good for niche tasks and interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.
Tech bros see zero value in humanity beyond how it can be commodified.
What he means is “correct” the model so that all it models is racism and far-right nonsense.
Remember the “white genocide in South Africa” nonsense? That kind of rewriting of history.
It’s not the LLM doing that, though. It’s the people feeding it information.
Literally what Elon is talking about doing…
Yes.
He wants to prompt grok to rewrite history according to his worldview, then retrain the model on that output.
Try rereading the whole tweet, it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.
It would be way too expensive to go through it by hand
But Grok 3.5/4 has Advanced Reasoning
Surprised he didn’t name it Giga Reasoning or some other dumb shit.
Gigachad Reasoning
To be fair, your brain is a pattern-matching system.
When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word; your brain’s predictive processing takes over, and you often literally speak before you think.
Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.
I said literally this in my reply, and the lemmy hivemind downvoted me. Beware of sharing information here I guess.
Solve physics and kill god
How does anyone consider him a “genius”? This guy is just so stupid.
Guessing some kind of PR campaign that he purchased to make him look like a genius on TV and in movies.
I have Twitter blocked at my router.
Please tell me one of the “politically incorrect but objectively true” facts was that Elon is a pedophile.
He’s done with Tesla, isn’t he?
What a fucking idiot
That’s not how knowledge works. You can’t just have an LLM hallucinate to fill in the gaps in knowledge and call it good.
Yeah, this would be a stupid plan based on a defective understanding of how LLMs work even before taking the blatant ulterior motives into account.
SHH!! Yes you can, Elon! Recursively training your model on itself definitely has NO DOWNSIDES.
And then retrain on the hallucinated knowledge
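That loop (rewrite with the model, then retrain on the rewrite) is the textbook recipe for model collapse. Here’s a minimal sketch of the effect, plain NumPy and purely illustrative, nothing to do with xAI’s actual pipeline: fit a distribution to some data, sample from the fit, refit on those samples, and repeat.

```python
# Toy "retrain on your own output" loop: each generation is fit only to samples
# drawn from the previous generation's fit. The fitted spread shrinks round after
# round; a miniature version of the model-collapse problem.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10)   # stand-in for "real" data

for generation in range(51):
    mu, sigma = data.mean(), data.std()          # "train" a trivial model on current data
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    data = rng.normal(mu, sigma, size=10)        # next round sees only the model's own output
```

A real training pipeline would presumably mix in fresh human data and review, but the basic failure mode is the same: errors and narrowing compound when the model grades its own homework.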
adding missing information and deleting errors
Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”
That is definitely how I read it.
History can’t just be ‘rewritten’ by A.I. and taken as truth. That’s fucking stupid.