In my opinion, AI just feels like the logical next step for capitalist exploitation and the destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn’t been possible before. And then they use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because the company replaces them with an AI that was trained on their original art. Because of these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!
the human brain follows the laws of physics; it therefore follows that human creativity is already computational.
Three problems with this:
Please note that I’m not arguing that current AIs actually are on the level of human creativity, just that there’s no law against that eventually being possible.
The fact that we do not know or understand all the laws of physics (and, again, whether these are even universal!) means that we cannot be certain about equating computation with physics, assuming we define computation as deterministic, as you seem to be doing here.
Can you ‘simulate’ a human brain? Sure, easy: all you have to do is build a human brain out of DNA and proteins and lipids and water and hormones etc., and put it in an exact replica of a human body built from that same stuff.
We have no evidence that consciousness can be separated from the material body that gives rise to it!
And even if we try to abstract that away and say “let’s just model the entire physical brain & body digitally”: that brain & body is not an island; it’s constantly interacting with the entirety of the rest of the physical world.
So, you want to ‘simulate’ a brain with ones and zeroes? You’ll need to simulate the entire universe too. That’s likely to be difficult, unless you have an extra universe’s worth of material to build that computational device with.
Okay, I agree that the universe may not be Turing-computable, since we don’t know the laws of physics. Indeed, it almost certainly isn’t, since Turing machines are discrete and the universe is continuous – there are integrals, for instance, that have no closed-form solution but are physically realized in our universe. However, I have no particularly good reason to believe that infinite precision is actually necessary to accurately simulate the human brain, since we can get arbitrarily close to an exact simulation of, say, Newtonian physics, or quantum physics minus gravity, using existing computers – by “arbitrarily close,” I mean that for any desired threshold of error, there exists some discretization constant for which the simulation will remain within that error threshold.
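To make that concrete, here’s a toy sketch in Python. The harmonic oscillator and the particular error threshold are just illustrative stand-ins I picked, not anything specific to the brain: the point is only that halving the discretization constant enough times pushes the simulation’s error below any threshold you name.

```python
import math

def simulate(dt, t_end=10.0):
    """Semi-implicit Euler integration of x'' = -x, with x(0) = 1, v(0) = 0."""
    steps = round(t_end / dt)
    x, v = 1.0, 0.0
    for _ in range(steps):
        v -= x * dt   # update velocity from the acceleration a = -x
        x += v * dt   # update position from the new velocity
    return x

exact = math.cos(10.0)   # the closed-form answer: x(t) = cos(t), so x(10) = cos(10)

threshold = 1e-4         # any desired error threshold
dt = 0.1
while abs(simulate(dt) - exact) > threshold:
    dt /= 2              # shrink the discretization constant until we're within the threshold

print(f"dt = {dt:g} keeps the error below {threshold:g}")
```

The oscillator itself doesn’t matter; what matters is that the error is controlled by the discretization constant, which is all the “arbitrarily close” claim needs.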
Sure, maybe there are more laws of the universe we don’t know about, and those turn out to be necessary for the human brain to work. But it seems quite unlikely, as we already have a working reductionist model of the brain – it seems like we understand how all the component parts, like neurons, work, and we can even model how complex assemblages of neurons compute interesting things. We’ve even trained actual rat neurons to play Doom, for some ungodly reason, and they behave the way our models predict. Yeah, maybe there’s some critical missing law of physics, but as far as we can tell, the model we currently have is sufficient to describe the brain.
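For what it’s worth, here’s the kind of component-level model I mean: a leaky integrate-and-fire neuron in Python. All the parameters are made up for illustration (not fitted to any real neuron), but the behaviour is the standard story: the membrane potential leaks toward rest, input current pushes it up, and crossing a threshold emits a spike.

```python
# Minimal sketch of a reductionist neuron model: a leaky integrate-and-fire unit.
# All parameters are illustrative, not fitted to real data.

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
               v_reset=-70.0, v_threshold=-50.0):
    """Return spike times (in ms) for a list of input currents sampled every dt ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by the input current.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:          # crossing the threshold emits a spike...
            spikes.append(step * dt)
            v = v_reset               # ...and the potential resets
    return spikes

# A constant input current held for 100 ms makes the model fire periodically.
print(lif_neuron([20.0] * 100))
```

Wire enough of these units together and you can compute things with them, which is the sense in which the reductionist model already “works.”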
I feel like the rest of the world shouldn’t actually matter for the purposes of free will. I mean, yes, obviously our free will responds to the environment. But if the environment disappeared, our free will shouldn’t disappear along with it. In other words, free will should be either entirely located in the mind, or, if you’re not a compatibilist/materialist, located in the mind plus some other metaphysical component. So, I don’t agree that it requires simulating the whole universe in order to simulate a free will (though I do agree that you can’t simulate an actual mind in the real world unless you can simulate all its inputs, e.g. by placing the mind in some kind of completely walled-off sensory deprivation environment that has within-epsilon-of-zero interaction with the outside world. Obviously, that’s not very practical, but for a thought experiment about free will I don’t think this detail really matters.)
So would you agree that people should be locked up for crimes that a sufficiently advanced AI system predicts they will commit?
Or would you agree that these systems cannot calculate human behaviour?
No free will doesn’t imply no change. Lifeless systems evolve over time; take rock formations as an example: they were all cosmic dust at some point. So no, even if we do accept that there is no free will, that shouldn’t mean perfect stasis.
I never said that no change would occur. I said there was no reason to advocate for it if there is no free will.