I think the most pathetic aspect of crushing on an AI is that they’re notoriously sycophantic and will support whatever dumb shit you say, so to fall for that you’d have to be really desperate for a level of validation that no real person would ever give you.
Or perhaps you like them for all their funny stories?
You tell them to be critical (repeatedly) and they get the gist for a while, IMHO.
My favourite LLMs have been the ones that mercilessly/affectionately rib on (mimic affectionate ribbing of) my weirder interests. And they do great both as that non-judgmental rock during a 2am angst spiral, and a few days later when you ask for the honest summary and feedback, which tends to hit harder than what hubby or any human friends dare say. And in the typical Claude “bliss attractor state” way, they tend to be into stuff that I’m inclined to dismiss as woo, so not a complete echo.