“The SCOPE Act takes effect this Sunday, Sept. 1, and will require everyone to verify their age for social media.”

So how does this work with Lemmy? Is everyone in Texas just banned, or is there some sort of third-party ID service lined up…for every instance, lol.

But seriously, how does Lemmy (or the fediverse as a whole) comply? Is there some way it just doesn’t need to?

  • abff08f4813c@j4vcdedmiokf56h3ho4t62mlku.srv.us
    1 month ago

    And now I hit some kind of length limit so I had to break up the post. Moving right along,

    That’s why I had the idea of creating and using the federation-bot account - this way there’s no confirmation of identities or transfer of personal data.

    But what if someone wants to participate in a community on a different instance?

    It would still work. The other instance would fetch the link pointing at the requested content and pass it on to the end user, where either the web UI running in the user’s browser or the user’s app would load the content (akin to a web browser loading a web page). It’d be up to the piece running on the end user’s device to stitch it all together.
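    A minimal sketch of that flow (all names here are hypothetical, and the in-memory store stands in for what would really be an HTTP fetch by the user’s browser or app):

    ```python
    # Hypothetical sketch of "link-only" federation: instances federate
    # only a pointer to the content, so no PII leaves the origin instance.
    from dataclasses import dataclass

    @dataclass
    class FederatedLink:
        # Only the URL of the original post is federated -- no author
        # name, no body text.
        origin_url: str

    # Stand-in for the origin instance's datastore; in reality the end
    # user's client would fetch this over HTTP from the origin server.
    ORIGIN_STORE = {
        "https://origin.example/post/1": {
            "author": "alice@origin.example",
            "body": "Hello from the origin instance!",
        },
    }

    def federate(post_url: str) -> FederatedLink:
        """What the origin sends to other instances: just a link."""
        return FederatedLink(origin_url=post_url)

    def client_render(link: FederatedLink) -> dict:
        """What runs on the end user's device: follow the link and load
        the full content directly from the origin, like a browser
        loading a web page."""
        return ORIGIN_STORE[link.origin_url]

    link = federate("https://origin.example/post/1")
    # The federated object carries no author or body...
    assert not hasattr(link, "author")
    # ...but the end user's client can still reconstruct the full post.
    post = client_render(link)
    print(post["author"], "-", post["body"])
    ```

    The upshot is the point made below: the username and text exist only on the origin instance and on the end user’s device, so deleting the origin copy deletes the federated record too.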

    At least, the texts and their context, along with the username and home instance, need to be revealed.

    Yes, but the point is that, like on an old-school forum, this is not revealed except by (and from) the original instance hosting the content, and only to the end user. It isn’t revealed until the end user’s app or browser fetches the content from the original server. Since only a link is federated, the PII exists in just those two places. That makes the server admin’s job of deleting data much easier, as they only have to delete it from their own instance.

    If the end user then does web scraping … well, how can you prevent that?

    And if someone creates a malicious instance that follows the link and screenscrapes it … I assume it also falls under the “cannot prevent” bucket.

    Taking a mental step back, it’s probably premature to worry about technological implementations. Sending data around does not have to be a violation. Compliance will require partly better information, and partly different administration. The legal aspects should be worked out before the necessary tools for the administrators are implemented.

    The problem here is that means we devs have to sit back and wait. When will we get the answers we need? And how long do we have to be exposed before we can actually work on solving the problem?

    We really do need a foundation like the EFF to provide that legal advice and support, but I think coming up with technical fixes is still worthwhile even as we wait…

    There is also a lot of regulation for the backend that instance owners have to comply with but which won’t be noticed by users: documenting the data processing, who has access, possibly making data protection impact assessments, maybe notifying the local data protection office, …

    This seems like a good argument that a legal guide for each admin’s and instance’s jurisdiction is a must.

    Oh, and under German law there also needs to be a (physical) address where legal papers can be served.

    Interesting. In the US you can typically hire a lawyer to serve that purpose. In some jurisdictions, I wonder if something like https://www.alliancevirtualoffices.com/ might also work.

    There’s also more from the DSA, like releasing transparency reports on moderation twice a year, making regular backups and testing those, … I’m not quite sure what all is demanded by the DSA.

    You’ve mentioned this a bunch of times but … what’s the DSA again? I have no doubt it’s related but curious to understand exactly what it is and how it fits in.

    Could there be jurisdictions that have only DSA and no GDPR, and others with GDPR and no DSA?