The Hated One has been pretty solid in the past regarding privacy/security, imho. I found this video of his rather enlightening and concerning.

  • LLMs and their training consume a LOT of power, which consumes a lot of water.
  • Power generation and data centers also consume a lot of water.
  • We don’t have a lot of fresh water on this planet.
  • Big Tech and other megacorps are already trying to push for privatizing water as it becomes more scarce for humans and agriculture.

—personal opinion—

This is why I personally think federated computing like Lemmy or PeerTube is the only logical way forward. Spreading the internet across infrastructure nodes that can be cooled by fans in smaller data centers, or even home server labs, is much more efficient than monstrous, monolithic data centers that are stealing all our H2O.

Of course, then the 'Net would be back to serving humanity instead of stock-serving megacultists. . .

  • umbrella@lemmy.ml · 7 months ago

    that's if ai ever gets “sentient” in our lifetimes like the suits keep insisting it will

    • MonkeMischief@lemmy.todayOP · 7 months ago

      I don’t foresee it becoming “sentient” so much as “being given a stupid amount of access and resources to figure out a problem by itself, and stupidly pursuing the maximization of that goal with zero context.”

      There’s that darkly humorous hypothetical that an AI tasked with maximizing paperclip production would continue to do so, using every resource it could get hold of and destroying any threat to further paperclip production!
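
      Just for fun, here’s a toy sketch of that kind of single-minded maximization — everything in it (the function, the resource names) is made up purely for illustration:

      ```python
      # Hypothetical toy "agent": maximizes one metric (paperclips) and
      # treats every resource as raw material, with zero context about
      # what else that resource might be for.
      def maximize_paperclips(resources, steps=10):
          """Greedily convert available resources into paperclips.

          `resources` maps resource name -> units available. The agent
          never asks whether a resource had another purpose.
          """
          paperclips = 0
          for _ in range(steps):
              for name in list(resources):
                  if resources[name] > 0:
                      resources[name] -= 1   # consume without context
                      paperclips += 1        # the only value that matters
          return paperclips, resources

      clips, leftover = maximize_paperclips(
          {"iron": 3, "water": 2, "habitat": 1}
      )
      print(clips, leftover)  # 6 {'iron': 0, 'water': 0, 'habitat': 0}
      ```

      The point of the sketch: nothing in the loop distinguishes “iron” from “habitat” — the objective function is the only thing the optimizer sees.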

      So, that, but with data center expansion and water. Lol

      See “paperclip maximizer” under “Hypothetical examples” here: https://en.m.wikipedia.org/wiki/Instrumental_convergence

      • umbrella@lemmy.ml · 7 months ago

        oh this is happening today. the ultra-addictive social media thing is mostly machine learning algos being tuned to maximize engagement regardless of anything else.

        • MonkeMischief@lemmy.todayOP · 7 months ago

          EXACTLY. High-five!

          That’s what I worry about. Right now we can ignore social media somewhat, but if AI gets wedged into contracts with government/infrastructure and other unavoidable daily life, I imagine that’s where a plausible threat could come from.

          I’ve no doubt such things are already in the works. AI-controlled traffic lights, for instance. Obviously the military and law enforcement are already giddy about it, of course.

          Giving a stupid machine a seemingly simple goal to pursue and the wrong set of keys could lead to disastrous consequences, I think. We also have the whole “Do AI cars protect the driver, or all human life even if it risks the driver?” debate.

          “But it’s trendy, it’s the future! And there’s so much venture capital involved, how lucrative!” Seems to be how major decisions are made these days.

          I don’t see it someday “waking up” and thinking “I feel like humans are unnecessary.” It’s scarier than that… it will see us as just another variable to control and “maximize” us out of the picture.