• dindonmasker@sh.itjust.works
      12 days ago

      Telling people to stop doing something because it burns the planet doesn’t really change their minds, unfortunately. The best you can do is put the numbers in their faces so that they can’t avoid the truth. But that only works on people who care.

      • RageAgainstTheRich@lemmy.world
        12 days ago

        That is sadly the truth with many things. People just don’t care unless it personally affects them. And even then, it depends on whether it hits hard enough 💔.

    • RageAgainstTheRich@lemmy.world
      12 days ago

      I actually don’t know that much about LLMs. I do know they require a ton of energy to train the models. But once those are trained, the smaller models especially don’t require that much to run, right? I once tried running a local one to see how much it took: my GPU maxed out for a few seconds, the LLM spat out text, and it was done. Whereas when playing games, the GPU maxes out for hours.
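      The "seconds versus hours" comparison can be put into rough numbers. A back-of-envelope sketch, where the GPU wattage and durations are illustrative assumptions, not measurements:

      ```python
      # Back-of-envelope energy comparison (all numbers are illustrative assumptions)
      GPU_POWER_W = 300              # assumed GPU draw at full load, in watts

      # One local LLM query: GPU maxed out for a few seconds
      inference_seconds = 10
      inference_wh = GPU_POWER_W * inference_seconds / 3600
      print(f"one local query:     ~{inference_wh:.2f} Wh")    # ~0.83 Wh

      # A gaming session: GPU maxed out for hours
      gaming_hours = 2
      gaming_wh = GPU_POWER_W * gaming_hours
      print(f"two hours of gaming: ~{gaming_wh:.0f} Wh")       # ~600 Wh

      # One session equals hundreds of local queries under these assumptions
      print(f"ratio: ~{gaming_wh / inference_wh:.0f} queries per session")  # ~720
      ```

      Under those assumed numbers, a single local query uses a tiny fraction of what a gaming session does; the picture is different at data-center scale, where GPUs run flat out continuously.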

      Again, I don’t know all that much about them, as I have only used one a few times over the years to break big tasks down into smaller tasks for my AuDHD when I am very overwhelmed, and it was kind of nice for that.

      The image generation stuff is pretty bad though, from what I have read. Plus it steals people’s art. Fuck that shit.

      Please do tell me if I have this wrong, because I don’t want to contribute to a bunch of bad shit ruining the climate.

      • ExLisper@lemmy.curiana.net
        12 days ago

        1. It’s not like the companies train one model and use it for months until they need a new version. They train new models all the time, to update them and to test new ideas.
        2. They don’t serve small models. The typical LLMs offered through ChatGPT or Claude are the big ones.
        3. They process thousands of queries per second, so their GPUs are maxed out all the time, not just for a few seconds.
        • RageAgainstTheRich@lemmy.world
          12 days ago

          Wouldn’t it help, then, to run the smaller ones locally instead of using the big ones like ChatGPT?

          I read that one called Deepmind or something in China took a lot less to train and is just as strong. Is that true?

          What do people usually use LLMs for? I know they suck at most things people are using them for, like coding. But what do people use them for that justifies all the hype?

          Again, please don’t think I am trying to justify it. I just don’t know much about them.