Reddit has a new AI training deal to sell user content

Reddit has reportedly made a deal with an unnamed AI company to allow access to its platform’s content for the purposes of AI model training.

  • AutoTL;DR@lemmings.world (bot) · 10 months ago

    This is the best summary I could come up with:


    Reddit will let “an unnamed large AI company” have access to its user-generated content platform in a new licensing deal, according to Bloomberg yesterday.

The deal, “worth about $60 million on an annualized basis,” the outlet writes, could still change, as the company’s plans to go public are in the works.

    The news also follows an October story that Reddit had threatened to cut off Google and Bing’s search crawlers if it couldn’t make a training data deal with AI companies.

    Last year, it successfully stonewalled its way out of the biggest protest in its history after changes to its third-party API access pricing caused developers of the most popular Reddit apps to shut down.

    As Bloomberg writes, Reddit’s year-over-year revenue was up by 20 percent by the end of 2023, but it was still $200 million shy of a $1 billion target it had set two years prior.

    The company was reportedly advised to seek a $5 billion valuation when it opens up for public investment, which is expected to happen in March.


    The original article contains 346 words, the summary contains 175 words. Saved 49%. I’m a bot and I’m open source!

    • NeatNit@discuss.tchncs.de · 10 months ago (edited)

      It’s all but guaranteed. Reminds me of this Computerphile video: https://youtu.be/WO2X3oZEJOA?t=874 TL;DW: there were “glitch tokens” in GPT (and therefore ChatGPT) which undeniably came from Reddit usernames.

      Note: there’s no proof that these Reddit usernames were in the training data (there are even reasons to assume they weren’t; watch the video for context), but there’s no doubt that OpenAI had scraped Reddit data at some point prior to training, probably mixed in with all the rest of their text data. I see no reason to assume they completely removed all Reddit text before training. The video suggests reasons and evidence that they removed certain subreddits, not all of Reddit.
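
      If you want to poke at this yourself, here’s a minimal sketch using OpenAI’s tiktoken library (assuming you have it installed; “r50k_base” is the BPE vocabulary used by GPT-2 and the original GPT-3 models). A long, rare-looking string like “ SolidGoldMagikarp”, the best-known of the glitch-token usernames, encodes to a single token, which is the telltale sign it appeared very often in the tokenizer’s training text:

      ```python
      # pip install tiktoken
      import tiktoken

      # r50k_base is the BPE vocabulary used by GPT-2 and the original GPT-3 models
      enc = tiktoken.get_encoding("r50k_base")

      for text in [" SolidGoldMagikarp", " some ordinary string"]:
          ids = enc.encode(text)
          pieces = [enc.decode([i]) for i in ids]
          # A single ID for a long, odd string means the BPE merge process
          # saw it often enough to give it a dedicated token.
          print(f"{text!r} -> {ids} {pieces}")
      ```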

    • NoRodent@lemmy.world · 10 months ago (edited)

      I mean, there’s /r/SubSimulatorGPT2, which has been running for years… Although that one was at least hilarious to read, because at that stage the AI was in the sweet spot of being coherent while still making total lapses in logic.

  • kingthrillgore@lemmy.ml · 10 months ago (edited)

    When spez took away API access, he basically shit on the social contract that offered a fair exchange: free access in return for the content we fed into Reddit. There is no contract. There are no terms. If you use Reddit now, you are giving away everything you are, to be indexed and mangled by statistics.

    I’m going to make the request in the AM that Lemmy should add robots.txt rules to disallow AI crawlers, to at least signal that we’re not interested (something like the sketch below). We need legislation that tells scrapers what they can and can’t access.
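
    As a starting point, something along these lines would cover the big self-identified AI crawlers. The user-agent strings are the ones these companies publish (GPTBot for OpenAI, Google-Extended as Google’s AI-training opt-out, CCBot for Common Crawl, ClaudeBot for Anthropic); note that robots.txt is purely advisory and a crawler can simply ignore it, which is exactly why legislation would be needed:

    ```
    # robots.txt — ask known AI-training crawlers to stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    # Everyone else (e.g. ordinary search crawlers) stays allowed
    User-agent: *
    Allow: /
    ```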

  • Lvxferre@mander.xyz · 10 months ago (edited)

    For anyone looking for a gibberish generator to replace their Reddit content with, here’s one (there’s also a quick sketch of the idea at the end of this comment). This shit is like poison for those large models.

    For automated editing I’m not sure what people can use nowadays; back then, just before the APIcalypse, I used Power Delete Suite. I’m not sure if it still works, and I’m not creating a Reddit account just to test it out.
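
    (To illustrate the idea, not the tool linked above: a tiny Markov-chain word-salad generator. Everything here, including the helper names, is a made-up sketch. Fed your own comment history, it produces output that looks locally plausible while being globally meaningless, which is what makes it noise rather than signal for a language model.)

    ```python
    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each `order`-word prefix to the words that followed it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def gibberish(chain, order=2, length=60):
        """Random-walk the chain: locally fluent, globally meaningless."""
        out = list(random.choice(list(chain)))
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:  # dead end: jump to a fresh prefix
                out.extend(random.choice(list(chain)))
                continue
            out.append(random.choice(followers))
        return " ".join(out)

    # Stand-in corpus; in practice you would feed in your own comment history.
    corpus = ("the model eats whatever text you give it so give it text that "
              "looks like language but carries no meaning at all ") * 20
    print(gibberish(build_chain(corpus)))
    ```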

    • greaprr@sh.itjust.works · 10 months ago

      Not that I’m against telling Reddit to fuck off in no uncertain terms, but won’t feeding this kind of poison into AI training just make the models more resilient to exactly this kind of thing?

  • General_Effort@lemmy.world · 10 months ago

    They say it’s $60 million on an annualized basis. I wonder who’d pay that, given that you can probably scrape it for free.

    Maybe it’s the EU’s AI Act; that might cause trouble in that regard. The US is seeing a lot of rent-seeker PR too, of course, which might cause some companies to hedge their bets.

    Maybe some people haven’t realized it yet, but limiting fair use doesn’t just benefit the traditional media corporations; it also benefits the likes of Reddit, Facebook, Apple, etc. Making robots.txt legally binding would only benefit the tech companies.

    • Crack0n7uesday@lemmy.world · 10 months ago (edited)

      They can and do, but they want the training data to come from highly moderated sources; otherwise every AI chatbot would be spewing the most racist parts of 4chan, because people would train it that way as a joke.

    • Verserk@lemmy.dbzer0.com · 10 months ago

      Anything can be scraped; the difference is that Reddit holds the exclusive rights to user comments on its site, and it’s chosen to sell them.

    • Steak@lemmy.ca · 10 months ago

      Dick dick pussy cunt cock dick pussy ass shit cunt shit motherfucker shit motherfucker ass tits cunt cock motherfucker shit ass tits motherfucker shit c’mon. Scrape that🔥