in terms of communication utility, it’s also a very accurate term.
when WE hallucinate, it’s because our internal predictive models are flying off the rails, filling in the blanks based on assumptions rather than referencing concrete sensory information, and generating results that conflict with reality.
when AIs hallucinate, it’s because their predictive models fly off the rails the same way, presuming what was calculated to be likely to exist rather than referencing positively certain information, and generating results that do not align with reality.
it’s the same song, but played on a different instrument.
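to make that concrete, here’s a toy sketch of the mechanism (everything in it is made up for illustration; a real model is incomparably larger, but the failure mode is the same): a generator that always picks the statistically likeliest continuation will happily emit fluent claims it never checked against anything.

```python
# toy "language model": hypothetical next-word frequencies, purely illustrative
bigram_counts = {
    "the":       {"capital": 3, "moon": 1},
    "capital":   {"of": 4},
    "of":        {"australia": 3, "france": 1},
    "australia": {"is": 4},
    "is":        {"sydney": 3, "canberra": 1},  # it saw "sydney" more often in training
}

def most_likely_next(word):
    """pick the highest-frequency continuation -- what is *likely*, not what is *true*"""
    options = bigram_counts.get(word)
    return max(options, key=options.get) if options else None

# generate by always taking the most probable continuation
word, sentence = "the", ["the"]
while (word := most_likely_next(word)):
    sentence.append(word)

print(" ".join(sentence))
# -> "the capital of australia is sydney"
# fluent, confident, statistically plausible... and wrong (it's canberra)
```

the model isn’t lying; it just has no notion of “referencing positively certain information”, only of what tends to follow what.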
Is it really? You make it sound like this is a proven fact.
i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, im sure they would agree.
Or they would disagree and have about three pages’ worth of thoughts to exclaim immediately; otherwise they would feel uneasy about their statement.
I believe that’s the direction the scientific community is moving in, based on watching this Kyle Hill video just the other day.
Here is an alternative Piped link(s):
this Kyke Hill video
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
I know I’m responding to a bot, but… how does a PipedLinkBot turn “Kyle Hill” into “Kyke Hill”? More AI hallucinations?
OP has a pencil in the top right; looks like it was edited.
True, I missed that
I think a more accurate term would be “confabulate”, based on your explanation.