With recent events highlighting the value of quality moderation, it got me thinking: how can we help you out?
What steps, considerations, encouragements or warnings would you give the userbase regarding best practices for using the report feature (or other interaction with mods)? Reporting less, reporting more, putting more detail in the form, or just leaving it blank?
I was thinking of maybe putting together a PSA-style infographic (a la “think before you post”) if the answers you give are insightful or significantly unknown to the average user.
Read the rules for a community and report accordingly. People like to complain when stuff stays up, but nobody has bothered to report it…
If you’re in doubt whether some content violates the rules, report it and let the mods decide whether it’s okay. That is not abuse of the report function.
Include a short description of why you’re reporting a piece of content. Especially in larger comms, the mod queue can get really large. If reasonable, mention the rule being violated; a simple “r1, off topic” goes a long way.
Context is everything. If what a user said only sounds bad in a certain context, say so. If the user is clearly problematic based on their profile, say so.
You’re probably better off not interacting with the content that you’re reporting.
Stop giving shitty mods a free pass. Honest mistakes happen; but if the mod in question is assumptive, disingenuous, trigger-happy, or eager to enable certain shitty types of user, spread the word about their comm being poorly moderated. And don’t interact directly with the comm.
I’d add: don’t be butthurt if the mod doesn’t agree.
I’ve done that before and had mods threaten to ban me if I reported anyone again. Ironically, the person I was reporting was the one harassing and being uncivil. Way worse than most comments I’ve seen removed. And this was on a fairly popular community here that is still around.
It’s not always so easy to assume that mods are going to be fair. Reporting people comes with a risk.
You’re probably better off not interacting with the content that you’re reporting.
I’ll second this. Responding to the problematic content usually just leads to a lot more problematic content that the mods need to sort through.
Also depending on what it is responding can just make you into a target for harassment when/if that person comes back on a different account. Best to just avoid it and not make yourself known to them.
Okay already glad I made this post as I have definitely erred in this regard several times!
I have definitely erred in this regard several times!
I think that everyone did this at least once, so don’t worry too much. Still, it’s less work for the mods if you don’t do it.
For sure. Responding is the natural thing to do. But, for me at least, I end up reading every response and deciding if I’m safe to just remove the first reported comment, or every child comment, or a mix.
Maybe some mods just delete the parent comment and all children to keep it simple, but I try to weigh each response fairly.
So, the things I appreciate as an admin: first, knowing whether it’s simply a matter of breaking community rules, or whether it needs an admin’s attention. As it is, I see all reports made for all communities on the instance, and I leave the ones for communities that I don’t moderate, but they then sit around until a moderator actions them. And that can make it hard to tell at a glance which ones require an admin’s eyes.
Second, if it’s not immediately obvious from the reported post, what else do I need to do? “Look at the poster’s history” / “Will make sense if you read the previous post”, etc.
Tangentially related: if you see something that needs to be addressed now, like CSAM or gore, notify an instance admin via Matrix. That tool can send push notifications, so you’re more likely to get a prompt response. Some instances also have public Matrix chats you can use.
You can find the Matrix account info for Lemmy users by clicking the “Send Secure Message” button in a user’s profile.
Still file a report as well; it sends emails to the mods and the admins. Just make sure it’s identifiable at a glance: type “CSAM” or whatever 1-2 words make sense. You can add details afterwards to explain, but it needs to be obvious at a glance, and mods/admins can also route those to a special priority inbox to address them as fast as possible. Having those reports show up directly in Lemmy makes it quicker to take action, or do bulk actions when there’s a lot of spam.
It’s also good to report it in the Lemmy admin chat on Matrix afterwards as well, because with CSAM, everyone wants to delete it from their own instance ASAP, since it can take time for the originating instance to delete it.
One thing to be aware of: at least when I’ve tried it, reports do not federate from Lemmy to kbin.
Also, not everything has Mastodon-style reports built in. Not sure about modern-day Friendica, but AFAIK, Hubzilla and (streams) don’t.
So if you’re on Mastodon and you report a Hubzilla user on their home hub, and nothing happens, that doesn’t mean moderation is neglected to such a degree that Fediblocking the whole hub is justified. Nor does it mean that Hubzilla’s culture is that different from Mastodon’s (although it is).
It simply means that Hubzilla doesn’t understand Mastodon reports.
I’ve noticed a sharp increase in spam and I’ve been reporting each one simply as “spam”.
I then block the user
Many of these posts have dozens of down votes.
Several go back months, which I discover when a new variant turns up.
I’m unsure if what I’m doing is helping or not, and as an ICT professional, I’m not sure why this obvious spam isn’t caught earlier.
It’s helpful. The issue is that moderation actions like mass removals don’t always federate in ways that are obvious. And it was worse with older versions of Lemmy.
Report it, and at least your instance admins can remove it from your instance.
I then block the user
Is this not a slightly selfish action? It solves the problem for you, but doesn’t make the community better for everyone. I feel like blocking users should be reserved for issues like harassment, not spam.
Blocking spam is not selfish, no.
I’m not sure I’d call it selfish. There are users with particularly distasteful opinions that I prefer to downvote when I see them expressed. If I blocked them, then I wouldn’t be able to help shape the community in a positive manner.
If I blocked them, then I wouldn’t be able to help shape the community in a positive manner.
Yeah, I feel like this is an important aspect that many users miss.
That’s fine to do once you’ve reported it: you’ve done your part, and there’s no value in still seeing the post; it’s gonna get removed anyway.
But what about future posts from the same account? If I’ve blocked the account, I can’t report posts I don’t see.
Is this not a slightly selfish action? It solves the problem for you, but doesn’t make the community better for everyone. I feel like blocking users should be reserved for issues like harassment, not spam.
This is an aspect that I had not considered. Even thinking about it now leaves me unsure of the best way forward.
Specifically, whilst it’s a valid argument that blocking the user only solves this for me, and that not blocking would let me see whether the issue was dealt with, I feel that leaving the user free to roam across my screen impacts me directly. And if I’m not a moderator in a community, it’s not my place to second-guess their decision to leave such a user and post in place.
In other words, I’m stating to a moderator that I think that this post is spam and should be dealt with accordingly, but if you leave it alone, that’s your choice.
I moderate several communities outside of the fediverse and spam in my communities is a one-strike ban. That’s not what everyone does.
Having thought through this again in more detail, I’m comfortable with blocking the user.
I don’t and won’t block users, basically for that reason: it doesn’t actually solve anything, it’s just pretending things are okay for you when they aren’t for everyone else. It’s probably the reason reporting doesn’t happen as much as it should.
Even for harassment it’s particularly useless, because even though you stop seeing them and getting notifications, they can continue replying in your threads; they can even use that to turn people against you. If it at least prevented them from posting and commenting on your profile, I’d use it in those situations, but as it stands it doesn’t, so it’s useless to me. If somebody is harassing me, the last thing I want to do is hide them while allowing them to keep harassing me: it gives them more opportunities to cause trouble and removes my opportunity to report them for that trouble.
Very well put.
Here are a few on reports:
-If you think it merits a report, report it.
-If you’re unsure whether it breaks a rule, DM a mod.
-Ideally, write the number of the rule broken in the report. If you’re typing past three sentences, it may contain unnecessary info.
-If someone is breaking a “don’t be mean” rule, the answer isn’t for you to break it too.
Here are a few non-report ways you can help:
-Set a positive tone.
-A lot of good posts/comments probably never get made because people think they’re too low-effort or that no one is interested.
-Crosspost to related communities to help people discover them.
Report AND downvote. Reports only go to your instance and the originating instance whereas downvotes go everywhere. Highly downvoted content will get noticed by someone, eventually.
With recent events highlighting the value of quality moderation
What happened?
thread that explains it: https://lemmy.blahaj.zone/comment/8694944