I now do some work with computers that involves making graphics cards do computational work on a headless server. The computational work has nothing to do with graphics.

The name is aimed at consumers, based on the most common use for graphics cards and the reason they were first made in the '90s, but now they're used for all sorts of computational workloads. So what are some more fitting names for the part?

I now think of them as 'computation engines', analogous to an old car engine: it's where the computational horsepower is really generated. But how would RAM make sense in this analogy?
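
For a flavor of what I mean, here is a minimal CUDA sketch of the kind of non-graphics work the card does all day. The kernel name and sizes are made up for illustration; it just squares a million floats, no pixels involved:

```cuda
// square.cu -- illustrative only: squares N floats on the GPU, no graphics.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) x[i] = x[i] * x[i];
}

int main() {
    const int n = 1 << 20;                        // ~1M floats, size arbitrary
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));     // unified memory, for brevity
    for (int i = 0; i < n; i++) x[i] = (float)i;

    square<<<(n + 255) / 256, 256>>>(x, n);       // 256 threads per block
    cudaDeviceSynchronize();

    printf("x[3] = %f\n", x[3]);                  // expect 9.0
    cudaFree(x);
    return 0;
}
```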

  • givesomefucks@lemmy.world · 10 days ago

    Carburetor.

    It mixes fuel and air in the right ratio, prepping the mixture before it goes into the engine.

    Similarly, RAM holds data while it gets adjusted.

    It's not a great analogy, but it's pretty much all there is.
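
    A rough CUDA sketch of that staging (the names and sizes here are illustrative, not anyone's real code): ordinary RAM holds and preps the data, and a copy feeds it into the "engine".

    ```cuda
    // staging.cu -- host RAM as the "carburetor": prep the data, feed the engine.
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void engine(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * 2.0f;              // the actual "combustion"
    }

    int main() {
        const int n = 4096;                            // arbitrary size
        float *h = (float *)malloc(n * sizeof(float)); // staged in ordinary RAM
        for (int i = 0; i < n; i++) h[i] = i * 0.5f;   // "mixing" the data in RAM

        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemcpy(d_in, h, n * sizeof(float), cudaMemcpyHostToDevice); // feed it in

        engine<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
        cudaMemcpy(h, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(d_in); cudaFree(d_out); free(h);
        return 0;
    }
    ```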

  • LandedGentry@lemmy.zip · 10 days ago

    Graphics cards.

    Crypto Cultists and AI Evangelists found a wasteful and often useless new function for them.

    • pastermil@sh.itjust.works · 10 days ago

      Not just crypto and AI fucktards tho.

      Theoretical physicists, astrophysicists, nuclear engineers, mechanical engineers, and people in countless other professions depend on the computational capabilities they provide.

      Don't let your anger and bitterness blind you into thinking it's all for the bullshit.

      • LandedGentry@lemmy.zip · 10 days ago

        You got anger and bitterness from that? It was just a tongue-in-cheek comment lol no need to project my dude.

        fucktard

        You know it’s not 1990. Calling people “retarded” is not cool.

        • pastermil@sh.itjust.works · 10 days ago

          And of course you have to be missing my point entirely. I didn't say not to have the anger and bitterness, just not to turn it against the ones that have nothing to do with it.

          • LandedGentry@lemmy.zip · 10 days ago

            But I’m not angry or bitter so why would I turn it against anybody…? You’re getting a lot more out of this than I’m putting into it. This isn’t inferring, this is misrepresenting.

      • brucethemoose@lemmy.world · 10 days ago

        Well not everyone in the machine learning space is an AI Bro, either. Many (most?) researchers see Altman et al. as snake-oil grifters.

        Same with the P2P/networking junkies. They didn’t ask for a mountain of pyramid schemes.

  • brucethemoose@lemmy.world · edited · 10 days ago

    They are GPUs.

    All of them, even the H100, B100, and MI300X, have texture units, pixel shaders, everything. They are graphics cards at a low level. Only the MI300X is missing ROPs; the Nvidia cards have them (and can run games on Linux), and they can all be used in Blender and such.

    The compute programming languages they use are, fundamentally, hacked-up abstractions that map to the same GPU hardware found in consumer parts.

    That's the whole point: they're architected as GPUs so that they're backwards compatible, since everything is built on work from the days when gaming GPUs were hacked to be used for compute.


    Are there more dedicated accelerators? Yes. They're called ASICs, or application-specific integrated circuits.
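
    One way to see it: the stock CUDA device query reports the same SM/warp structure whether it's pointed at a gaming card or an H100. A minimal sketch using the standard runtime API:

    ```cuda
    // devinfo.cu -- the same query works on a GeForce or an H100; both present
    // themselves through the identical GPU abstraction.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; d++) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            printf("%s: %d SMs, warp size %d, %zu MB\n",
                   p.name, p.multiProcessorCount, p.warpSize,
                   p.totalGlobalMem / (1024 * 1024));
        }
        return 0;
    }
    ```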

  • tal@lemmy.today · 10 days ago

    Parallel compute accelerator.

    Nobody is gonna say that in full, just like “graphics processing unit” becomes “GPU”, so maybe “PCA”.

  • Rose@slrpnk.net · 10 days ago

    Back in the day, you could slap a math coprocessor on your system so it could do floating point maths real gud.

    Now, you slap in some card that does floating point maths even guder, but also in parallel in yuge vectors.

    So my proposed name is “It’s like an old Cray supercomputer but real tiny”
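
    In that spirit, the classic BLAS vector operation SAXPY (y = a*x + y across a whole vector), the kind of thing the old vector supercomputers chewed through, is about the simplest job these cards do. A minimal CUDA sketch, sizes arbitrary:

    ```cuda
    // saxpy.cu -- one thread per vector element, Cray-style vector math.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 16;                    // a "yuge" vector (size arbitrary)
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);              // expect 4.0
        cudaFree(x); cudaFree(y);
        return 0;
    }
    ```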