From your own provided source on arXiv, Noels et al. define censorship as:
Censorship in this context can be defined as the deliberate restriction, modification, or suppression of certain outputs generated by the model.
Which is starkly different from the definition you yourself gave. I actually like their definition a whole lot more. Your definition is problematic because it excludes a large set of behaviors we would colloquially be interested in when studying “censorship.”
Again, for the third time, that was not really the point either, and I’m not interested in dancing around the technical scope of defining “censorship” in this field, at least not in this discourse right here and now. It is irrelevant to the topic at hand.
I didn’t say he’s a nobody. What was that about a “respectable degree of charitable interpretation of others”? Seems like you’re the one putting words in mouths here.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. (emphasis mine)
In the context of this field of work and study, you basically did call him a nobody, and the point being harped on to you, again and again, is that this is a false assertion. I did interpret you charitably. Don’t blame me because you said something wrong.
EDIT: And frankly, you clearly don’t understand how intimately the work of Willison’s career is related to ML and AI research. I don’t mean it as a dig, but you wouldn’t be drawing this arbitrary line to try to discredit him if you knew how the work done in Python on Django directly relates to many modern machine learning stacks.
…
Either way, my point is that you are using wishy-washy, ambiguous, catch-all terms such as “censorship” that make your writing here not technically correct either. What is censorship, in an informatics context? What does that mean? How can it be applied to sets of data? That’s not a concretely defined term if you want to take the discourse to the level it seems you do, like it or not.
Nope, not trolling at all.
Lol this you?