When WE hallucinate, it's because our internal predictive models are flying off the rails: filling in the blanks based on assumptions rather than referencing concrete sensory information, and generating results that conflict with reality.
Is it really? You make it sound like this is a proven fact.
i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, I'm sure they would agree.
Or they would disagree and immediately launch into about three pages' worth of thoughts; otherwise they'd feel uneasy about their statement.
I believe that's the direction the scientific community is moving, based on this Kyle Hill video I watched just the other day.
Here is an alternative Piped link(s):
this Kyke Hill video
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
I know I’m responding to a bot, but… how does a PipedLinkBot get “Kyle Hill” wrong to “Kyke Hill”? More AI hallucinations?
OP's comment has a pencil icon in the top right; looks like it was edited.
True, I missed that