queermunist she/her

/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!

  • 8 Posts
  • 4.16K Comments
Joined 2 years ago
Cake day: July 10th, 2023



  • I see what you’re saying here: if the media prints lies from a government it’s not the media lying, it’s the government. If Israel says Hamas beheaded 40 babies and that’s found out to be a lie, it wasn’t the media lying about Hamas beheading 40 babies and so the media is entirely innocent of printing the lies fed to it by a government like Israel.

    Here’s the thing: if a government lies all the fucking time and the media keeps printing what the government claims anyway, then that makes them complicit in spreading the government’s lies. We all know Israel’s government spreads lies, so printing the lies it spread about Hamas is just doing the government’s work for them. The media doesn’t get to wash its hands of the things it prints just because it puts “Israel says” before the headline.








  • I’m not disputing this, but I also don’t see why that’s important.

    What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    And we just don’t know that, and this paper doesn’t demonstrate it because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans, and they’re displaying human-like outputs.

    Did you actually read through the paper?

    From the paper:

    to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

    But their training still used a data set picked by humans, given textual descriptions made by humans, and then evaluated with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.

    A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

    human in ➡️ human out





  • I didn’t say they’re encoding raw data from nature

    Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.

    Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of it is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.

    It’s not merely natural. It’s human.

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    We don’t know that.

    We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.