• roofuskit@lemmy.world
    9 days ago

    Oppressed people don’t like walled-garden information tools built and profited from by the very people who use them as a scapegoat to distract from their fleecing of society?

  • augustus@sh.itjust.works
    9 days ago

    Yes, generative AI is a normative neurotypical triangulation machine. Why would this be thought of favorably?

    • zlatko@programming.dev
      8 days ago

      I think it’s because the average person doesn’t understand about five words in your first sentence. They can understand marketing bull that they’re fed, though.

  • Tanis Nikana@lemmy.world
    9 days ago

    Trans lady here, appalled by AI! A lot of the middle managers I work with are eager for it, and since I work in M365 administration, my boss keeps compelling me to flip the Copilot switch to “on” for people.

    I hate it.
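
    For anyone curious what that “switch” actually is: under the hood it’s mostly a per-user license assignment. A rough sketch of the equivalent Microsoft Graph call in Python (the access token and Copilot SKU id below are placeholders, not real values):

    ```python
    import requests  # third-party HTTP library

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<access-token-with-license-assignment-rights>"  # placeholder
    COPILOT_SKU_ID = "<copilot-sku-guid>"  # placeholder; look up the actual skuId in your tenant

    def enable_copilot(user_principal_name: str) -> None:
        """Assign the Copilot license to one user; the admin-side 'on switch' amounts to roughly this."""
        resp = requests.post(
            f"{GRAPH}/users/{user_principal_name}/assignLicense",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={
                "addLicenses": [{"skuId": COPILOT_SKU_ID, "disabledPlans": []}],
                "removeLicenses": [],
            },
            timeout=30,
        )
        resp.raise_for_status()

    enable_copilot("someone@example.com")
    ```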

    • floofloof@lemmy.ca
      9 days ago

      Why is a statistical survey bullshit because of your personal view on the matter? Where does the survey imply that transgender, nonbinary and disabled people are the only ones who dislike AI?

      The graphic shows that every group has attitudes that are somewhere between completely negative and completely positive. The groups mentioned are just a bit more negative than the others.

          • UnderpantsWeevil@lemmy.world
            9 days ago

            I believe that a future built on AI should account for the people the technology puts at risk.

            I’ve seen various iterations of this column a thousand times before. The underlying message is always “AI is going to get shoved down your throat one way or another, so let’s talk about how to make it more palatable.”

            The author (and I’m assuming there’s a human writing this, though it’s hardly a given) operates from the assumption that

            identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories

            but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.

      • NocturnalMorning@lemmy.world
        9 days ago

        Because it singles people out for no reason. There is absolutely no reason to do a study like this that focuses on marginalized groups. Does this study make these marginalized groups’ lives better somehow by putting this information out there? Not a chance.

        Research for the sake of doing research is asinine, and it’s rampant in academia. We have a publish-or-perish attitude in academia that is so pervasive it’s sickening… ask me how I know (my partner is a professor).

        And we all but force people to write papers and chase novelty to justify their existence as professors.

        AI is a scourge on this earth in how we use it today. We didn’t need a study to tell us that, much less to single out a few groups of people, who frankly don’t need to be singled out any more than they already have been by the Trump administration.

        • astutemural@midwest.social
          9 days ago

          I mean, would you not want to do this specifically to see its effects on marginalized groups? That seems like a pretty good reason to me.

          • NocturnalMorning@lemmy.world
            9 days ago

            Admittedly, I hadn’t read the article. After reading it, I think the research is actually beneficial, and it’s exactly the kind of research that should be done on AI.

            Spoke prematurely based on the headline, go figure…

      • Kn1ghtDigital@lemmy.zip
        9 days ago

        The whole thing is done in bad faith to make a correlation that isn’t there. I just conducted a study that says people are always cats. My study doesn’t show any actual correlation, but I think I once heard that a cat-man exists, so there is potential for study.

  • andros_rex@lemmy.world
    8 days ago

    Knowledge-based fields were historically a “safe space” for queer and disabled people. If you were just super fucking smart and could be a wizard in a programming language, or were a genius physicist, you could get to the point where you were too valuable to fire for being trans or disabled. I may be trans and an unperson in the place I live, but I can do calculus, and there’s no way they can take that away from me.

    There’s an attack on knowledge itself going on right now. A desire by the rich to control information. They want to force us into an unreality where skill and knowledge are meaningless. This hurts people who are socially marginalized, because it takes away one of our few paths for economic survival.

    It goes with the attacks on DEI. What they want is a tool that can replace the need for talent, so that they can select who gets to have jobs. They want all jobs to be Graeber’s “bullshit jobs” so that skill is meaningless and they can allot them out to the people they think “deserve” them.

  • Mossy Feathers (She/Her)@pawb.social
    9 days ago

    God, the number of people here who don’t know what “more likely” means is insane. Just because you hate AI and aren’t trans, enby, or disabled doesn’t mean the study is bullshit. It means that if you walk up to a random person and ask them about AI, they’re more likely to hate it if they belong to one of those groups.

    Secondly, studies like this have value because they can clue people into issues that a community is having. If everyone is neutral about a thing, except for disabled people (who hate it), then maybe that means that the thing is having a disproportionately negative impact on disabled people. Studies like this are not unlike saying “hey, there’s smoke over there, there might be a fire.”

    • paultimate14@lemmy.world
      9 days ago

      The thing is, EVERYONE hates AI except for a very small number of executives and the few tech people who are falling for the bullshit the same way so many fell for crypto.

      It’s like saying a survey indicates that trans people are more likely to hate American ISPs. Everyone hates them, and trans people are underrepresented in the population of ISP shareholders and executives. It doesn’t say anything about the trans community. It doesn’t provide any actionable or useful information.

      It’s stating something uninteresting but applying a coat of rainbow paint to try to get clicks and engagement.

      • missingno@fedia.io
        9 days ago

        The average person is not informed enough to even be aware of the problems with AI. Look at how aggressively AI is being marketed, and realize that this marketing works.

      • NoneOfUrBusiness@fedia.io
        9 days ago

        You might be living in an echo chamber. Most Americans use AI at least sometimes and plenty use it regularly according to studies.

        • paultimate14@lemmy.world
          9 days ago

          We could argue all day over who is experiencing reality or who is in an echo chamber.

          Pew Research found that US adults who are not “AI Experts” are more likely to view AI as negative and harmful.

          • zlatko@programming.dev
            8 days ago

            On a tangent, to me as an outsider it seems that most Americans are more likely to view anything as negative. I have no scientific backing for my shitpost though.

          • NoneOfUrBusiness@fedia.io
            9 days ago

            We could argue all day over who is experiencing reality or who is in an echo chamber.

            We could, or you could read the article where it addresses exactly that point. Most demographics are slightly positive on AI, with some neutral and only nonbinary people as slightly negative. The representative US sample is at 4.5/7.

            • paultimate14@lemmy.world
              9 days ago

              https://fedia.io/m/technology@lemmy.world/t/2531490/-/comment/11832636

              You might be living in an echo chamber. Most Americans use AI at least sometimes and plenty use it regularly according to studies.

              You are literally right here accusing me of being in an echo chamber for thinking Americans view AI negatively; then, when I back that up with a source, you’re now… claiming that the article says that.

              Except that the whole “most demographics are positive on AI” piece that you toss in counters your own countering of my disagreement. You’re talking in circles here.

              It’s also worth noting this article is using a sample size of 700 and doesn’t go all that heavily into the methodology. The author describes themself as a “social computing scholar” and states that they purposefully oversampled these minority groups.
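
              For anyone unfamiliar with the term, “oversampling” just means deliberately recruiting more respondents from small groups than their population share, and then, in a well-done survey, weighting them back down when reporting overall numbers. A toy illustration of that re-weighting in Python (the shares and scores below are made up, not the study’s data):

              ```python
              # Toy illustration of oversampling plus re-weighting (made-up numbers).
              # Each group: (population share, sample share, mean attitude on a 1-7 scale)
              groups = {
                  "nonbinary": (0.016, 0.20, 3.4),
                  "everyone_else": (0.984, 0.80, 4.6),
              }

              # A raw sample mean over-represents the oversampled group...
              unweighted = sum(samp * mean for _, samp, mean in groups.values())

              # ...so each group is weighted by population share / sample share,
              # which makes its contribution proportional to its population share.
              weighted = sum(pop * mean for pop, _, mean in groups.values())

              print(f"unweighted sample mean:   {unweighted:.2f}")
              print(f"population-weighted mean: {weighted:.2f}")
              ```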

              The conclusion is nothing but wasted time and clicks. You’re in this thread telling people to “read the article” and I’m in here to warn people that it’s not worth their time to do so.

              And this is part of a trend I’ve noticed on Lemmy lately: people posting obviously bad articles, users commenting that the articles are bad, and usually about 3-4 other users in the comments arguing and trying to drive more engagement to the article. More clicks, more ad revenue.

  • archchan@lemmy.ml
    9 days ago

    In the words of Miyazaki:

    Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.

    • vacuumflower@lemmy.sdf.org
      8 days ago

      It’s mathematically an insult to life itself. It changes evolution in human societies to reduce dissent and diversity of thought. And evolution is important in the sense that to stay in one place you have to run very fast.

      So it’s sort of a tool for regress. Honestly, it’s similar to the Web itself, which was intended as a hypertext system for scientists; for social interaction there were e-mail and e-news.

      I’m thinking of how I always thought Sun was a very cool company, but at the same time they’re also the ones who popularized this messy vision of the future in which, with some commercial adjustments by today’s big tech, we still live. And that vision was highly centralist, a sort of digital empire.

  • SoleInvictus@lemmy.blahaj.zone
    9 days ago

    Disabled, vehemently anti-AI enby here. The only thing I’m good at professionally is being a great big brain, so taking knowledge work away from me makes me angry.