Bluesky, a trendy rival to X, finally opens to the public

  • makeasnek@lemmy.ml · 10 months ago

    Sounds like somebody gave you some incorrect information re: banning.

    • You don’t need a W3C standard to have a protocol that is open source and used globally; that’s just one way to go about it. You can also have standards made through some governance body other than the W3C, or standards that simply evolve as a bunch of different devs try different versions of things until one main way floats to the top because everybody prefers it. Nostr has the NIP (Nostr Implementation Possibilities) process, which has been used to make standards for everything from video streaming to calendar events/invites.
    • Relays on Nostr, the equivalent of instances in ActivityPub/Mastodon/Lemmy, can set their own moderation policies, defederate from other relays, etc., just as in ActivityPub. The moderation abilities are the same: relays can choose what content they allow and can ban users/topics/content from other relays. The key difference is that you are connected to multiple relays by default. So if your relay blocks a user you really want to follow, you can keep following that user and see them in your feed; they just don’t show up for other users on that relay. If a relay blocks you, you can’t post content to that relay. You get the best of both worlds: relays have curated, moderated public squares with trending hashtags and posts, without reducing your ability to choose who to follow and who can follow you.
    • Identity portability is another key feature: if your instance goes down, you don’t lose all your DMs, followers, etc.
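    The multi-relay and portability points above can be sketched in code. This is a toy Python simulation, not the real Nostr protocol: the `Relay` class, the relay names, and `npub_alice` are all invented for illustration (real Nostr identities are secp256k1 keypairs and relays speak websockets). It shows why a ban on one relay doesn’t cut a user off from followers who also read from other relays:

```python
# Toy model (NOT the real Nostr protocol) of multi-relay publishing.
# All names here are invented for illustration.

class Relay:
    def __init__(self, name):
        self.name = name
        self.banned = set()   # pubkeys this relay refuses to serve
        self.events = []      # (pubkey, content) pairs it accepted

    def publish(self, pubkey, content):
        if pubkey in self.banned:
            return False      # relay rejects the event
        self.events.append((pubkey, content))
        return True

def fetch_feed(relays, followed_pubkey):
    # A client merges results from all of its relays, deduplicating,
    # so one relay's ban doesn't hide a followed user entirely.
    seen = set()
    for relay in relays:
        for pubkey, content in relay.events:
            if pubkey == followed_pubkey and content not in seen:
                seen.add(content)
                yield content

alice = "npub_alice"          # stands in for a real public key
relays = [Relay("wss://relay-a"), Relay("wss://relay-b")]
relays[0].banned.add(alice)   # relay-a bans Alice

results = [r.publish(alice, "hello world") for r in relays]
print(results)                          # [False, True]
print(list(fetch_feed(relays, alice)))  # ['hello world']
```

    In the real protocol, identity is the keypair rather than an account on one server, so the same signed events can be republished to any relay that will accept them, which is what makes the portability in the last bullet work.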
    • 9point6@lemmy.world · 10 months ago

      I see what you’re saying about it not needing a standards body, and of course that can work fine, but for me it’s an advantage that AP is maintained by a body independent of any specific implementation. An equivalent would be if the AP spec were defined by the Mastodon devs and community—not a bad thing, just not as good in my mind.

      The relays thing, I think, is where the “unable to really ban” impression comes from. Are there moderation tools to propagate bans across relays quickly? Does Nostr have the same issue as Lemmy instances, where an admin abandons the relay and it gets overrun with shit? Some users need to be booted off the network entirely, and swiftly; we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (time-zone differences for moderators, or whatever innocuous reason) and these users would achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.

      The user portability aspect is the main advantage of it that I can see, and it looks like a pretty clever solution to the issue. Though personally speaking, I only really care about my subscription list, which I sync between two accounts already using my lemmy client. I understand some people might care more about the other stuff though (particularly on microblog platforms)

      • makeasnek@lemmy.ml · 10 months ago

        Before we get into the weeds here, let’s start with an important basic premise: at the protocol level, moderation ability from an instance/relay admin’s perspective is identical in Nostr and AP.

        Are there moderation tools to propagate bans across relays quickly?

        Relay operators can share ban lists, just as they do in AP. Relay operators can only directly control their own relay, not other relays. I don’t know the ins and outs of how the admin-side interface looks, but at the protocol level, AP and Nostr offer the same abilities.

        Some users need to be booted off the network entirely and swiftly sometimes, we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators or whatever innocuous reason) and these users achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.

        Relays sharing ban lists can help solve this problem. I would argue that we don’t want to give the power to ban a user from the entire network to a single relay admin, or even a couple of relay admins (since anybody can be a relay admin), so some form of broad consensus needs to exist, OR sets of relays can form their own little networks of trust where they automatically honor a ban from other admins in that network. A relay admin doesn’t need the ability to ban somebody from the entire network just because they disagree with that user’s post; they can simply ban the user on their own relay. There is value in having public squares with varying degrees of moderation, among other reasons because laws about what kinds of speech are acceptable vary country by country. There is value in having mainstream platforms which refuse to host some kinds of content, with that being a different moderation policy than the one used by the government, for example. Remember that legality and morality are not the same, and that what is legal vs illegal differs between jurisdictions. We don’t want the legal standards of Russia or China to become the legal standards the entire network has to follow.
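        As a sketch of what such a network of trust could look like, here is a hypothetical Python example (the admin names, ban lists, and threshold are all invented; this is not real Nostr tooling) where a relay auto-applies a ban only when enough trusted peer admins agree:

```python
# Hypothetical "web of trust" ban policy: a relay honors a ban
# automatically only when at least `threshold` trusted peer admins
# have it on their lists. All names below are made up.

def consensus_bans(peer_banlists, threshold):
    """Return pubkeys banned by at least `threshold` trusted peers."""
    counts = {}
    for banlist in peer_banlists.values():
        for pubkey in banlist:
            counts[pubkey] = counts.get(pubkey, 0) + 1
    return {pk for pk, n in counts.items() if n >= threshold}

peer_banlists = {
    "admin-a": {"npub_spammer", "npub_troll"},
    "admin-b": {"npub_spammer"},
    "admin-c": {"npub_spammer", "npub_troll"},
}

# Require agreement from 2 of 3 trusted peers before auto-banning.
print(sorted(consensus_bans(peer_banlists, threshold=2)))
# ['npub_spammer', 'npub_troll']
```

        The threshold is the tunable part: no single admin’s list can ban anyone network-wide, but a ban with broad agreement behind it propagates automatically.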

        If the user is doing something that is very illegal, which I believe you are referring to, that is a job for law enforcement. Neutral networks like the internet are traditionally policed “at the edges”. We don’t have Gmail proactively filtering for objectionable or illegal content, because of the consequences that come with it: privacy invasion, false positives, additional computational load, reduced reliability of sending/receiving between email carriers, etc. Comcast is not inspecting packets as they fly through its network at the speed of light, delaying them, and determining whether they should be passed or not. It’s the internet; they just pass them through. Instead, we say “this is an open, neutral network, and if you break the law, LEO will deal with it”.

        • 9point6@lemmy.world · 10 months ago

          Fair play regarding the tooling being there, then; I had the impression it wasn’t even possible currently. I guess I’d now wonder how ubiquitous its usage is.

          My concern with your second part is that law enforcement would not be able to deal with the issue quickly, and in the case of an abandoned relay it could take a fair few days or weeks before any action is taken. The problem with such illegal content is that in many places even unwittingly having it in your browser cache puts you massively at risk; it needs to be removed, and the user prevented from continuing, as immediately as possible. Anything else puts the people using the network at risk, and if such a risk exists, it’s going to put most people off (entirely understandably). I know I avoided browsing Lemmy for a fair while when the problem here was still being figured out, and I thankfully never saw anything, but I’m still wary of browsing on my lunch break at work, for example.

          Also FWIW, I think Google does scan emails and Drive for this stuff, and I think all US-based social networks have an obligation to do so as well, IIRC, but I might not be 100% correct on that.