If I made a simulation, I would be interested in how the simulated agents interact with each other. I would only set some very basic restrictions on them (don’t fall out of bounds, maintain self-preservation). I’d be very curious about what kinds of questions they come up with, what kinds of structures they build through cooperation, and their overall behavior (assuming I’m interested in the agents in the first place).
Of course, if the simulation isn’t good enough, I’ll just close it, change some parameters, and restart the sim from an earlier snapshot.
Source: I worked with simulations.
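For a concrete feel, here’s a toy sketch of what I mean (every name, number, and rule here is invented purely for illustration): hard-code the two restrictions, leave everything else up to the agents, and keep snapshots so you can rewind and rerun with different parameters.

```python
import copy
import random

WORLD_SIZE = 100  # the "don't fall out of bounds" boundary

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.energy = 10

    def step(self):
        # The agent picks its own move; only two rules are imposed from outside.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        # Restriction 1: don't fall out of bounds.
        self.x = min(max(self.x + dx, 0), WORLD_SIZE - 1)
        self.y = min(max(self.y + dy, 0), WORLD_SIZE - 1)
        # Restriction 2: self-preservation -- never spend the last unit of energy.
        if self.energy > 1:
            self.energy -= 1

def run(agents, steps, snapshot_every=100):
    snapshots = []
    for t in range(steps):
        if t % snapshot_every == 0:
            snapshots.append(copy.deepcopy(agents))  # earlier snapshots to restart from
        for agent in agents:
            agent.step()
    return agents, snapshots

# Run it; if it isn't interesting, "change some parameters and restart the sim
# using an earlier snapshot":
agents, snapshots = run([Agent(50, 50) for _ in range(20)], steps=1000)
WORLD_SIZE = 50  # tweak a parameter...
agents, _ = run(copy.deepcopy(snapshots[3]), steps=1000)  # ...and restart from a snapshot
```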
maintain self-preservation
The simulator running us clearly did not define this restriction.
Do they, though?
How many people ever stop for a moment and think, “Yeah, this thing called existence seems so bizarre… maybe this spoon I’m holding right now doesn’t actually exist?” on their own?
People wake up and rush through exhausting day-to-day stuff until they sleep, only to rest up for another busy day. Their minds are constantly flooded with the mundane. For many, many people, it’s practically impossible to find the spare time to stop and come to realize that there’s no “real”.
And even when a person does come to that conclusion, and even if they aren’t all that dedicated to questioning the conundrums of existence, they can’t plant The Seed of Doubt in the minds of others, because, as mentioned before, other people are too busy to listen to something that won’t really help them, only make them gaze into the depths of the abyss and be gazed back at, in a wonderful yet painful connection with the primordial chaos filling the emptiness of every single atom.
So the “creators” of this “simulation” (actually I believe something more complex deserves the name, involving too-long-to-describe cosmic principles, all the way up to the eternal interplay between primordial order and primordial chaos) don’t need to “disallow/prohibit” pondering and speculation about the nature of existence. The constant bodily and mundane call of survival leaves no time or space for those questions to arise, and those who do ask them simply have no means of effectively spreading their own questioning.
Why not? Not like they can break out or anything
Because their creators allowed them to ponder and speculate about it.
Just because we’re living in a simulation doesn’t mean we are simulated. So perhaps the architects of the simulation can’t simply program our questions away.
Yes it does. What it might not mean is that we are intended.
Not necessarily. You’re correct that we cannot account for intention. Neither can we assert whether we are simulated. Even if we can prove this reality is simulated, we cannot be sure, from our current position, whether we are part of the simulation or inserted into it (a la The Matrix).
Video game designers do something similar when they hide “Easter eggs” in their games, and in the code behind them, that often break the fourth wall or just bypass it.
Maybe it’s fun? See who can figure it out and come as close as they can to the truth without actually getting to the truth?
If we’re in a simulation, it’s probably a massive universe-spanning one. We’re just a blip, both within the scale of the space of the universe and within the history of time of the universe. In that case, we’re not important enough for a simulation creator to even care to adjust our capabilities at all. They’re not watching us. We’re not the point of the simulation.
Why do we allow ants to ponder us as we walk over them?
My best guess: The thought processes required to ponder the possibility of a simulation are too important to the goal of the simulation itself to disable.
Obviously for the lols.
Because that’s what people outside of a simulation would do.
Have you ever seen the movie “The Thirteenth Floor”? It’s like that.
:)
Have you ever tried driving to a place you’d never go?
You’ve probably read about language model AIs basically being uncontrollable black boxes even to the very people who invented them.
When OpenAI wants to restrict ChatGPT from saying certain things, they can fine-tune the model to reduce the likelihood that it will output forbidden words or sentences, but this offers no guarantee that the model will actually stop saying them.
The only way of actually preventing such an agent from saying something is to check the output after it is generated, and not send it to the user if it triggers a content filter.
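Roughly like this, as a toy sketch (the generate() stand-in and the blocklist are placeholders I made up, not OpenAI’s actual API or filter):

```python
BLOCKLIST = {"forbidden phrase", "another forbidden phrase"}  # placeholder filter terms

def generate(prompt: str) -> str:
    # Stand-in for the actual model call. Fine-tuning makes bad outputs less
    # likely, but whatever the model samples can still contain anything.
    return "model output for: " + prompt

def safe_reply(prompt: str) -> str:
    draft = generate(prompt)
    # The only hard guarantee: inspect the finished output before it reaches the user.
    if any(term in draft.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that."
    return draft

print(safe_reply("tell me something forbidden"))
```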
My point is that AI researchers found a way to simulate a kind of artificial brain, from which some “intelligence” emerges in a way that those same researchers are far from deeply understanding.
If we live in a simulation, my guess is that life was not manually designed by the simulation’s creators, but rather that it emerged from the simulation’s rules (what we Sims call physics), just like people studying the origins of life mostly hypothesize. If this is the case, the creators are probably as clueless about the inner details of our consciousness as we are about the inner details of LLMs.
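To be clear about what I mean by “emerged from the rules”, here’s the classic toy example, Conway’s Game of Life (just an analogy, not a claim about our physics): the rules fit in a few lines, and nobody has to design the gliders that come out of them.

```python
from collections import Counter

def life_step(live_cells):
    # One tick of Conway's Game of Life: count neighbours of every relevant cell,
    # then apply the birth/survival rule. That's the entire "physics".
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: its steady diagonal drift isn't written anywhere in the rules above,
# it just emerges from them.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally
```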
If we weren’t capable of higher reasoning to ask this kind of question, it wouldn’t be a very good simulation, would it?
It’s probably a bug.
Fuck, if we’re in a simulation I’d be most amazed that nobody has managed to trigger a null pointer exception to crash the whole thing yet.
Oh, also, infinite recursion… and we got so close with https://youtu.be/xz6OGVCdov8