In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn’t been possible before. They then use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because the company replaces them with an AI that was trained on their original art. For these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!

  • Cowbee [he/they]@lemmy.ml · 2 days ago

    Right now, anti-AI rhetoric follows the same unprincipled path the Luddites took in attacking machinery. They identified a technology linked to their proletarianization, and thus a huge source of their new misery, but the technology was not at fault. Capitalism was.

    What generative AI is doing is making art less artisanal. Independent artists are under attack and are being proletarianized. However, that does not mean AI itself is bad. Copyright, for example, is also bad, yet artists depend on it. The camera provoked the same reaction for making things like portraits and still-lifes more accessible, but nowadays we would not think photography to be anything more than another tool.

    The real problems with AI are its massive energy consumption, its over-application in areas where it actively harms production and usefulness, and its application under capitalism where artists are being punished while corporations are flourishing.

    In this case, there’s no profit to be had. People do not need to hire artists to make a banner for a niche online community. Hell, this could have been made using green energy. These are not the same instances that make AI harmful in capitalist society.

    Correct analysis of how technologies are used, how they can serve our interests versus the interests of capital, and correct identification of legitimate versus illegitimate use-cases are where we can succeed and learn from the mistakes our predecessors made. Correctly identifying something linked to deteriorating conditions while misanalyzing how they are related leads to incorrect conclusions, as when the Luddites initially attacked machinery rather than organizing against the capitalists.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 22 hours ago

      It’s worth noting that the argument regarding massive energy consumption is no longer true. Models that perform better than ones that required a data centre just a year ago can already be run on a laptop today. Meanwhile, people are still finding lots of new ways to optimize them, and there is little reason to think they won’t continue getting more efficient for the foreseeable future.

      • Cowbee [he/they]@lemmy.ml · 22 hours ago

        Fair point, but I do think that until we see more widespread adoption of renewables in the US and other heavy polluters, energy use in general is a pressing issue, and we are already beyond capacity. There needs to be a real qualitative leap to green energy at some point soon, and we can’t just rely on the PRC to electrify the world if the US is intent on delaying that shift as much as possible.

    • patatas@sh.itjust.works · edited · 2 days ago

      The Luddites weren’t simply “attacking machinery” though, they were attacking the specific machinery owned by specific people exploiting them and changing those production relations.

      And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems.

      • FauxLiving@lemmy.world · 1 day ago

        > And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems

        That hasn’t been true for years now.

        AI training techniques have rapidly improved to the point where they allow people to train completely new diffusion models from scratch with a few thousand images on consumer hardware.

        In addition, and due to these training advancements, some commercial providers have trained larger models using artwork specifically licensed to train generative models. Adobe Firefly, for example.

        It isn’t the case, and hasn’t been for years, that you can simply say that any generative work is built on “stolen” work.

        Unless you know what model the person used, it’s just ignorance to accuse them of using “exploitative” generative AI.

        • patatas@sh.itjust.works · edited · 1 day ago

          Can you provide a few real-life examples of images made with a model trained on just “a few thousand images on consumer hardware”, along with stats on how many images, where those images were from, and the computing hardware & power expended (including in the making of the training program)? Because I flat out do not believe that one of those was capable of producing the banner image in question.

      • Cowbee [he/they]@lemmy.ml · 2 days ago

        Yes, I’m aware that the Luddites weren’t stupid and purely anti-tech. However, labor movements became far more successful when they didn’t attack machinery, but directly organized against capital.

        GenAI exists. We can download models and run them locally, and use green energy. We can either let capitalists have full control, or we can try to see if we can use these tools to our advantage too. We don’t have the luxury of just letting the ruling class have all of the tools.

          • Cowbee [he/they]@lemmy.ml · 1 day ago

            Human thought is what allows us to change our environment. Just as our environment shapes us, and creates our thoughts, so too do we then reshape our environment, which then reshapes us. This endless spiral is the human experience. Art plays a beautiful part in that expression.

            I’m a Marxist-Leninist. That means I am a materialist, not an idealist. Ideas are not beamed into people’s heads, they aren’t the primary mover. Matter is. I’m a dialectical materialist, a framework and worldview first really brought about by Karl Marx. Communism is a deeply human ideology. As Marx loved to quote, “nothing human is alien to me.”

            I don’t appreciate your evaluation of me, or my viewpoint. Fundamentally, it is capitalism that is the issue at hand, not whatever technology is caught up in it. Opposing the technology whole-cloth, rather than the system that uses it in the most nefarious ways, is an error in strategy. We must use the tools we can, in the ways we need to. AI has use cases, it also is certainly overused and overapplied. Rejecting it entirely and totally on a matter of idealist principles alone is wrong, and cedes the tools purely to the ruling class to use in its own favor, as it sees fit.

            • patatas@sh.itjust.works · 1 day ago

              Matter being the primary mover does not mean that ideas and ideals don’t have consequences. What is the reason we want the redistribution of material wealth? To simply make evenly sized piles of things? No, it’s because we understand something about the human experience and human dignity. Why would Marx write down his thoughts, if not to try to change the world?

              • Cowbee [he/they]@lemmy.ml · 1 day ago

                I never for one second suggested that thoughts had no purpose or utility, or that we shouldn’t want to change the world. This is, again, another time you’ve misinterpreted me.

                • patatas@sh.itjust.works · 21 hours ago

                  All I am saying is that, baked into the design and function of these material GenAI systems, is a model of human thought and creativity that justifies subjugation and exploitation.

                  Ali Alkhatib wrote a really nice (short) essay that, while it’s not saying exactly what I’m saying, outlines ways to approach a definition of AI that allows the kind of critique that I think both of us can appreciate: https://ali-alkhatib.com/blog/defining-ai

                  • Cowbee [he/they]@lemmy.ml · 20 hours ago

                    I’m saying capitalism is the issue, not the tool. Art should be liberated from the profit motive, like all labor. Art has meaning for people because it’s a deeply human expression, but not all images are “art” in the traditional sense. If I want to make a video game and I need a texture of wood, I can either make it by hand, have AI generate it, or take a picture. The end result is similar in all cases, even if the effort expended is vastly different. This lowers the barrier for me to participate in game making and makes it more time-effective, while being potentially unnoticeable on the user end.

                    If I just put some prompts into genAI, though, and post the output devoid of context, it isn’t going to be seen as art at all, really. Just like a photograph randomly snapped isn’t art, but photos with intention in message and form are art. The fact that meaning can be taken from something is a dialogue between creator and viewer, and AI cannot replace that.

                    AI has use-cases. Opposing it in any and all circumstances, based on a metaphysical conception of intrinsic value in something produced artisanally versus mass-produced, is the wrong way to look at it. AI cannot replicate the aspects of what we consider to be art in the traditional sense, and not every image created needs to be artisanal. What makes the utility of a stock image any different from an AI-generated image of the same concept, assuming equivalent quality?

                    The bottom line is that art needs to be liberated from capitalism, and technology should not be opposed whole-cloth due to its use under capitalism.