

Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history what the fuck?!
I see what you’re saying here: if the media prints lies from a government, it’s not the media lying, it’s the government. If Israel says Hamas beheaded 40 babies and that’s found out to be a lie, then it wasn’t the media lying about Hamas beheading 40 babies, and so the media is entirely innocent of printing the lies fed to it by a government like Israel.
Here’s the thing: if a government lies all the fucking time and the media keeps printing what the government claims anyway, that makes the media complicit in spreading the government’s lies. We all know Israel’s government spreads lies, so printing the lies it spread about Hamas is just doing the government’s work for it. The media doesn’t get to wash its hands of the things it prints just because it puts “Israel says” before the headline.
Prefacing a headline that way is literally the least you can do, less than anything else you could possibly do, and doing the least possible is not good enough.
You don’t get to wash your hands of this just because you voted for Kodos. Has Harris even come out against bombing Iran?
I’m saying that the terms “natural” and “artificial” are in a dialectical relationship; they define each other by their contradictions. Those words don’t mean anything once you include everything humans do as natural; you’ve effectively defined “artificial” out of existence and, as a result, also defined “natural” out of existence.
Even conventional detonations on top of nuclear reactors or nuclear stockpiles?
If we define human inputs as “natural” then the word basically ceases to mean anything.
It’s the equivalent of saying that paintings and sculptures emerge naturally because artists are human and humans are natural.
LLMs create a useful representation of the world, similar to our own, when we feed them our human-created, human-curated, human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.
Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.
What’s so fantastical about the US using a tactical nuke and then lying about it?
Looks like that didn’t happen here, but I see no reason that it couldn’t.
I’m not disputing this, but I also don’t see why that’s important.
What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:
If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.
And we just don’t know this, and this paper doesn’t demonstrate this because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans and then they’re displaying human-like outputs.
Did you actually read through the paper?
From the paper:
to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?
But their training still uses a dataset picked by humans, given textual descriptions written by humans, and then evaluated with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.
A more accurate conclusion would be: human-like object concept representations emerge when models are fed data collected by humans, curated by humans, annotated by humans, and then tested with representation learning methods designed for humans.
human in ➡️ human out
*immense relief*
There are different isotopes in the fallout, different radio spectrometry or whatever, but I think they could lie and say that the reason it looks like a nuclear detonation rather than a conventional one is that Iran was hiding a nuke and they blew it up.
It’s the perfect crime. They bombed nuclear sites, which means there are going to be radiation spikes regardless of what kind of bombs they dropped, which means they could drop small nukes and probably get away with it.
I didn’t say they’re encoding raw data from nature.
Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.
Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.
It’s not merely natural. It’s human.
If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.
We don’t know that.
We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.
LLMs are not getting raw data from nature. They’re being fed data produced by us and compiled into their training sets: human writings, human observations, human categorizations, and human judgements about what data is valuable. All the data about our reality that we feed them is from a human perspective.
This is a feature, and will make them more useful to us, but I’m just arguing that raw natural data won’t naturally produce human-like outputs. Instead, human inputs produce human-like outputs.
But it’s emerging from networks of data from humans, which means our object concept representations are already in the data. This isn’t random data, after all; it comes from us. Seems like the LLMs are just regurgitating what we’re feeding them.
What this shows, I think, is how deeply we are influencing the data we feed to LLMs. They’re human-based models and so they produce human-like outputs.
Isn’t this just because LLMs use the object concept representation data from actual humans?
Fuck this YIMBY abundancecrat shit. Building more homes for investors and landlords to hoard wouldn’t actually lower housing prices, and neither would gentrifying neighborhoods by flooding them with development. That would just accumulate more wealth for a small, elite property-owning class.
There are already more homes than homeless people; building more doesn’t fix the problem.
Thus showing that, if Democrats were in office, this exact same situation would have happened… except Republicans would probably have come out against the war to attack Democrats.
I hate this fucking country.
I know PFAS levels are immediately lower after a donation. I’m just not under the impression that they stay low for very long, i.e. until you can safely donate blood again.
Plasma donations can be done more frequently, though, so that might actually be an effective way to reduce contamination.
But it’s kind of messed up that we’re donating contaminated blood and/or plasma. Is that good for the people who use our blood? Who knows!
And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if they didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs; you can’t erase its roots and call it natural.
Human-like object concept representations emerge from datasets made by humans because humans made them.