I think it is good to make an unbiased, raw “AI”
But unfortunately they didn’t manage that. At least in some ways it’s a balance to the other AIs.
Isn’t that what MS tried with Tay, and yet it quickly turned into a Nazi?
Tay was actively being manipulated by malicious users.
That’s fair. I just think it’s funny that the well-intentioned one turned into a Nazi, and the Nazi one needs to be pretty heavy-handedly told to act like a decent “person”.
Tay’s tweets were legendary.
That worked differently, though: they tried to get her to learn directly from users. I don’t think even ChatGPT works like that.
It can. OpenAI is pretty clear about using the things you say as training data. But they’re not directly feeding what you type back into the model, not least of all because then 4chan would overwhelm it with racial slurs and such, but also because continually retraining the model would be pretty inefficient.
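The distinction in that comment can be sketched as a toy example. This is purely illustrative (a word-frequency "model", a hypothetical `banned` filter), not how Tay or ChatGPT actually worked: it just shows why learning from every message immediately is trivially poisonable by a flood of coordinated input, while logging messages and retraining later from a filtered batch is not.

```python
from collections import Counter

class OnlineBot:
    """Tay-style: every user message immediately updates the model."""
    def __init__(self):
        self.model = Counter()

    def chat(self, message):
        self.model.update(message.lower().split())  # learns instantly, unfiltered

    def most_likely_word(self):
        return self.model.most_common(1)[0][0]

class BatchBot:
    """Roughly the alternative: messages are only logged as candidate
    training data; retraining happens later, on a filtered batch."""
    def __init__(self):
        self.model = Counter()
        self.log = []

    def chat(self, message):
        self.log.append(message)  # stored, but the live model is untouched

    def retrain(self, banned):
        # Drop any logged message containing a banned token, then retrain.
        batch = [m for m in self.log if not (set(m.lower().split()) & banned)]
        self.model = Counter(w for m in batch for w in m.lower().split())
        self.log.clear()

# A coordinated flood of one token (placeholder standing in for abuse):
normal = ["hello there", "nice weather today"]
flood = ["SLUR SLUR SLUR"] * 50

online, batch = OnlineBot(), BatchBot()
for m in normal + flood:
    online.chat(m)
    batch.chat(m)
batch.retrain(banned={"slur"})

print(online.most_likely_word())   # the flood now dominates the online model
print(batch.model)                 # flood was filtered out before retraining
```

The online bot's most likely word becomes the flooded token after just 50 messages, while the batch bot's model never sees it, which is the 4chan scenario the comment describes, plus the cost point: `retrain` rebuilds the whole model from scratch, which is exactly the expensive step you would not want to run continuously.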