☆ Yσɠƚԋσʂ ☆

  • The knowledge that Western domination over the world is crumbling, that neoliberalism is becoming discredited, and that the capitalist system is imploding. Meanwhile, there’s plenty of positive news coming out of China every day. China is building infrastructure, transitioning off fossil fuels, and improving the standard of living for its people. China is showing what an alternative, cooperative model of development looks like, one that’s not based on constant war and exploitation.

  • I haven’t defined artificial out of existence at all. My definition of artificial is a system that was consciously engineered by humans. The human mind is a product of natural evolutionary processes. Therefore, the way we perceive and interpret the world is inherently a natural process. I don’t see how it makes sense to say that human representation of the world is not natural.

    An example of something that’s artificial would be taking a neural network we designed and having it build, from raw inputs, a novel representation of the world that isn’t biased by us. It would be a designed system, as opposed to one that evolved naturally, with its own artificial representation of the world.
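
    As a loose illustration of that distinction, here’s a minimal sketch of a designed system learning its own compressed representation directly from raw inputs, with no human labels anywhere in the training signal. It’s a toy autoencoder in PyTorch; all the dimensions, names, and data here are hypothetical placeholders:

    ```python
    # A designed system learning its own internal representation from
    # raw inputs (self-supervised autoencoder). Dimensions and data
    # are illustrative placeholders.
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            # The encoder compresses raw input into a learned latent
            # code: the network's own representation of the data.
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # The decoder reconstructs the raw input from that code.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for raw sensor data; no human annotation is involved
    # anywhere in the objective.
    raw_inputs = torch.rand(256, 784)

    for step in range(100):
        recon, _latent = model(raw_inputs)
        loss = loss_fn(recon, raw_inputs)  # purely self-supervised
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```

    The latent code the encoder converges on is shaped only by the structure of the inputs and the reconstruction objective, which is the sense in which the resulting representation belongs to the designed system rather than being inherited from us.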

  • A more accurate conclusion would be: human-like object concept representations emerge when a model is fed data collected by humans, curated by humans, annotated by humans, and then tested with representation-learning methods designed for humans.

    Again, I’m not disputing this point, but to be honest I don’t see why it’s significant. As I’ve noted, human representation of the world is not arbitrary: we evolved to build efficient models that let us interact with the world effectively. We’re now seeing that artificial neural networks can form similar kinds of internal representations, ones that let them meaningfully interact with data organized in a way that’s natural for humans.

    I’m not suggesting that a human-style representation of the world is the one true way to build a world model, or that other efficient representations aren’t possible. However, that in no way detracts from the fact that LLMs can create a useful representation of the world that’s similar to our own.

    Ultimately, the end goal of this technology is to interact with humans, to navigate human environments, and to accomplish the tasks humans want done.
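
    One standard way to actually test that kind of similarity claim is representational similarity analysis (RSA): compare the pairwise similarity structure of a model’s embeddings against human similarity judgments over the same set of objects. A minimal sketch, with random placeholders standing in for real embeddings and real human data:

    ```python
    # Sketch of representational similarity analysis (RSA): does a
    # model's embedding space organize objects the way human judgments
    # do? Both inputs below are random placeholders.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    n_objects = 50
    rng = np.random.default_rng(0)

    # Hypothetical model embeddings, one vector per object.
    model_embeddings = rng.normal(size=(n_objects, 512))

    # Hypothetical human pairwise dissimilarity judgments for the same
    # objects, in the same condensed ordering that pdist produces.
    human_dissim = rng.random(n_objects * (n_objects - 1) // 2)

    # Pairwise cosine distances between the model's embeddings.
    model_dissim = pdist(model_embeddings, metric="cosine")

    # Rank correlation between the two dissimilarity structures:
    # higher rho means a more human-like representational geometry.
    rho, p = spearmanr(model_dissim, human_dissim)
    print(f"RSA correlation: rho={rho:.3f}, p={p:.3g}")
    ```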

  • I didn’t say they’re encoding raw data from nature. I said they’re learning to interpret multimodal representations of the encodings of nature that we feed them in human-compatible formats. What these networks learn is to make associations between visual, auditory, tactile, and text representations of objects. When a model recognizes an input in one modality, such as a sound, it can infer the visual object that sound is likely associated with, and so on, as sketched in the example below.

    Meanwhile, the human perspective itself isn’t arbitrary either. It’s the result of an evolutionary selection process that shaped the way our brains are structured, and the brains of other animals encode reality in similar ways. If you evolved a neural network on raw data from the environment, it would eventually converge on similar kinds of representations, because they’re an efficient way to model the world.
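
    For what it’s worth, the cross-modal inference described above is typically implemented with a shared embedding space: each modality gets its own encoder, and matching pairs are pulled together during contrastive training (CLIP-style). A toy sketch of just the inference step, with random stand-in embeddings and hypothetical labels:

    ```python
    # Toy sketch of cross-modal retrieval in a shared embedding space:
    # given a sound's embedding, find the visual object it's most
    # likely associated with. The embeddings are random stand-ins; in
    # practice the encoders would be contrastively trained on paired
    # data (CLIP-style).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    embed_dim = 128

    # Pretend outputs of an audio encoder and an image encoder that
    # were trained to map matching pairs to nearby points.
    sound_embedding = torch.randn(1, embed_dim)   # e.g. a bark
    image_embeddings = torch.randn(5, embed_dim)  # candidate objects
    labels = ["dog", "car", "violin", "waterfall", "cat"]

    # Cosine similarity between the sound and each candidate image.
    sims = F.cosine_similarity(sound_embedding, image_embeddings)

    best = sims.argmax().item()
    print(f"Sound most associated with: {labels[best]} "
          f"(similarity {sims[best].item():.3f})")
    ```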