• PhilipTheBucket@quokk.au · 4 days ago

    Well, not really any of the above. I’ve tried, with some mild success, to build a “troll detection” system, but it needs far more work. Also, in the months since my initial work on this, I’ve found some much better approaches and would want to implement them. So my old work isn’t reflective of the new direction I’m planning to take.

    I’ve actually built a version of this, along with a couple of other variations on the idea. The current WIP works quite differently from what you’re describing. I got as far as making a community for it, but then abandoned the effort because I couldn’t figure out a way to deploy it that would be in any way productive.

    I’m going to say it knowing ahead of time that roughly 100% of the people reading will think it’s a terrible idea: it’s an LLM-based moderator that watches the conversation and can pick out bad-faith actors based on their conduct in it. I actually 100% agree with you that political conversation online is almost exclusively a big waste of time (in part because of the way moderation happens and because of people deliberately trying to distort the narrative). This was just my idea for trying to help with that.
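
    To make that concrete, here is roughly the shape of it (a minimal sketch only, assuming an OpenAI-style chat API; the model name, prompt, and `review_comment` helper are placeholders for illustration, not the thing I actually built):

    ```python
    # Minimal sketch of an LLM "conversation watcher" that flags bad-faith conduct.
    # Assumes an OpenAI-style chat API (openai>=1.0); prompt and model are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    MODERATOR_PROMPT = (
        "You are observing a forum conversation. For each new comment, judge "
        "whether it shows bad-faith conduct (strawmanning, sealioning, deliberate "
        "derailment). Reply 'OK' or 'FLAG: <one-sentence reason>'."
    )

    def review_comment(thread_so_far: list[str], new_comment: str) -> str:
        """Ask the model to judge the newest comment in the context of the thread."""
        transcript = "\n\n".join(thread_so_far)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model works
            messages=[
                {"role": "system", "content": MODERATOR_PROMPT},
                {
                    "role": "user",
                    "content": f"Thread so far:\n{transcript}\n\nNew comment:\n{new_comment}",
                },
            ],
        )
        return response.choices[0].message.content
    ```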

    The thing that led me to never do anything with it was that I didn’t feel like anyone would ever buy into it enough to even take part in a conversation where it was deployed (even assuming it worked passably well, which is not proven). If you care about these issues too, though, would you like to try the experiment of continuing the conversation we’re having with it observing and weighing in? I would actually like to. I’d be fine with carrying on with the questions you were asking, and with this whole debate about moderation and its impact on Lemmy, in that context. Let me know.

    • TropicalDingdong@lemmy.world · 4 days ago

      The thing that led me to never do anything with it was that I didn’t feel like anyone would ever buy into it enough to even take part in a conversation where it was deployed

      Yeah, I think it’s got to work for people to buy into it. And frankly, my earliest implementations were “inconsistent” at best.

      My thought right now is that the tool needs to do a first pass to encode the “meta-structure”, or perhaps… scaffolding(?), of a conversation, and then proceed to encode the impressions/leanings. I have tools that can do parts of this, but it needs to be… “bigger”, whatever that means. So there is sentiment analysis, which is easy enough. There is key phrase extraction. And that’s fine for a single comment… but how do we encode the dynamics of a conversation? That’s quite a bit trickier. There’s a rough sketch of what I mean below.
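
      To make the “first pass” concrete, a minimal sketch (Python; VADER sentiment stands in for whatever per-comment scoring actually gets used, and the tree fields are illustrative, not a settled schema):

      ```python
      # Sketch: encode a conversation's "scaffolding" as a reply tree where each
      # node carries per-comment signals (sentiment here, via NLTK's VADER).
      from dataclasses import dataclass, field

      import nltk
      from nltk.sentiment import SentimentIntensityAnalyzer

      nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
      sia = SentimentIntensityAnalyzer()

      @dataclass
      class CommentNode:
          author: str
          text: str
          sentiment: float = 0.0  # VADER compound score, -1..1
          replies: list["CommentNode"] = field(default_factory=list)

      def build_node(author: str, text: str) -> CommentNode:
          return CommentNode(author, text, sia.polarity_scores(text)["compound"])

      def sentiment_swings(node: CommentNode) -> list[float]:
          """One crude conversation-level signal: sentiment deltas along reply edges."""
          deltas = []
          for child in node.replies:
              deltas.append(child.sentiment - node.sentiment)
              deltas.extend(sentiment_swings(child))
          return deltas

      # Usage sketch
      root = build_node("alice", "I think this proposal is reasonable.")
      root.replies.append(build_node("bob", "That is an absurd, dishonest take."))
      print(sentiment_swings(root))
      ```

      The per-comment scores are the easy part, like I said; the edge-level deltas are one crude stab at the dynamic, and that’s exactly the piece that needs to be “bigger”.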

      • Proudly Green@feddit.uk · 4 days ago

        still seems to me u guys are doing it for witch-hunting. if someone doesn’t like someone they can just ban them. you two going on and on about writing a program and using ai to catch people you don’t like is icky. I’ll be one of the people voting against this if it ever goes wide on lemmy. no thanks. u all need to touch grass, ur way too caught up in lemmy