To be fair, a lot of the people who believe that have no concept of “shame” in the first place.
Bruh, if you write this poorly, maybe do use it? But yes, of course you should acknowledge using it. Readers want to know if they are reading rehashed garbage or original material. Your writing is very poor and AI writing is uninteresting, so either way I guess I wouldn’t worry about it too much. If you want to write and be read, work on improving your writing; doing so will go much further than trying to squeeze copy out of an LLM.
Ha, fuck yeah it is.
It’s going to be plagiarism so yes, it is.
I’ve asked Copilot at work for word help. I’ll ask it something like, what’s a good word that sounds more professional than some other word? And it’ll give me a few choices and I’ll pick one. But that’s about it.
They’re useful, but I won’t let them do my work for me, or give them anything they can use (we have a corporate policy against that, and yet IT leaves Copilot installed/doesn’t switch to something like Linux).
By their nature, LLMs are truly excellent as thesauruses. It’s one of the few tasks they’re really designed to be good at.
Back to in-class essays!
Uh, yes. Yes it is.
It is lazy. It will be sloppy, shoddily made garbage.
The shame is entirely on the one who chose to use the slop machine in the first place.
The way I see it is, the usefulness of straight LLM-generated text is inversely proportional to the importance of the work. If someone is asking for text for the sake of text and can’t be convinced otherwise, give 'em slop.
But I also feel that properly trained & prompted LLM-generated text is a force multiplier when combined with revision and fact checking, with the benefit also varying inversely with your experience and familiarity with the topic.
I laugh at all these desperate “AI good!” articles. Maybe the bubble will pop sooner than I thought.
It’s gonna suck. Because of course they’re gonna get bailed out. It’s gonna be “too big to fail” all over again.
Because “national security” or some such nonsense.
If it’s not shameful, why not disclose it?
Regardless, I see its uses in providing structure for those who have issues expressing themselves competently, but not in providing content, and you should always check every source the LLM cites to make sure it’s not just nonsense. Basically, if someone else (or even you yourself, with a bit more time) could’ve written it, I guess it’s “okay”.
If it’s not shameful, why not disclose it?
https://en.wikipedia.org/wiki/Threshold_of_originality
If you don’t disclose it, you can claim copyright even if you have no right to. LLM-generated code is either plagiarism (but lawmakers proved that they don’t care about enforcing copyright on training data which has funny implications) or public domain because machine generation is not creative human work.
if that task is offloaded to spicy autocomplete, any and all learning of this skill is avoided, so it’s not mega useful
That presumes that’s how people are using AI. I use it all the time, but AI never replaces my own judgement or voice. It’s useful. It’s not life-changing.
yes they do, wtf are you talking about https://futurism.com/openai-use-cheating-homework
Not them.
Imagine if the AI bots learn to target and prioritize content not generated by AI (if they aren’t already). Labeling your content as organic makes it so much more appetizing for bots.