Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

    • Player2@lemm.ee

      There is a difference between having actually diverse data sources and secretly adding the word “diverse” to each image generation prompt

      • Dayroom7485@lemmy.world

        Never claimed they had diverse data sources - they probably don’t.

        My point is that when minorities are underrepresented, which is the default case in GenAI, the (white, male) public tends to accept that.

        I like that they tried to fix the issue of GenAI being racist and sexist. Even though the solution is obviously flawed, better this than a racist model.

        • StereoTrespasser@lemmy.world

          I can’t believe someone has to spell this out for you, but here we go: an accurate picture of people from an era in which there was no diversity will, by definition, not be diverse.

  • rab@lemmy.ca

    I can’t fathom why Google would force diversity into AI.

    People use AI as tools. If the tool doesn’t work correctly, people will not use it, full stop. It’s that simple.

    There are many different AIs out there that don’t behave this way, and people will be quick to move on to one of those instead.

    Surprisingly stupid, even for Google.

  • yildolw@lemmy.world

    Oh no, not racial impurity in my Nazi fanart generator! /s

    Maybe you shouldn’t use a plagiarism engine to generate Nazi fanart. Thanks

  • Harbinger01173430@lemmy.world

    …white is a color. Also, white people usually look pink, cream, orange, or red. Only albinos come close to looking white, though not white enough.

  • NotJustForMe@lemmy.ml

    It’s okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people? ٩(。•́‿•̀。)۶

    • rottingleaf@lemmy.zip

      WDYM?

      Only their new SW trilogy comes to mind, but in SW, racism among humans was limited to very backwards (savage by SW standards) planets; racism between humans and other spacefaring races was more of an issue, so a villain of any human race is normal there.

      It’s rather the purely cinematographic side that made skin color more noticeable, for whatever reason, and there would be some racists among the viewers.

      They probably knew they couldn’t reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later blame the backlash on fans being racist.

      • NotJustForMe@lemmy.ml

        Have you read the article? It was about misrepresenting historical figures; racism was just a small part.

        It was about favoring diversity, even if it’s historically inaccurate or even impossible. Something Disney is very good at.

  • kaffiene@lemmy.world

    Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy

    • stockRot@lemmy.world

      Why shouldn’t we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world

      • kaffiene@lemmy.world

        I DO expect better from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.

    • UlrikHD@programming.dev

      I’m pretty sure it’s generating racially diverse Nazis because companies tinker with the prompts under the hood to counteract biases in the training data. A naive implementation of generative AI wouldn’t output black or Asian Nazis.

      it doesn’t have EQ or sociological knowledge.

      It sort of does (in a poor way), but they call it bias and try to dampen it.

      • kaffiene@lemmy.world

        I don’t disagree. The article complained about the lack of nuance in generating responses and I was responding to the ability of LLMs and Generative AI to exhibit that. Your points about bias I agree with

      • Echo Dot@feddit.uk

        At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.

        At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.

        • maynarkh@feddit.nl

          Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
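
          Google hasn’t published the actual rewriting logic, but a minimal sketch of the kind of blind prompt edit being described might look like this (the suffix list and the `rewrite_prompt` helper are assumptions for illustration, not Google’s code):

          ```python
          import random

          # Hypothetical qualifiers appended to every image prompt.
          DIVERSITY_SUFFIXES = [", of diverse races", ", from a range of ethnic backgrounds"]

          def rewrite_prompt(user_prompt: str) -> str:
              """Blindly append a diversity qualifier before the prompt
              ever reaches the image model."""
              return user_prompt + random.choice(DIVERSITY_SUFFIXES)

          print(rewrite_prompt("a German soldier in 1943"))
          # e.g. "a German soldier in 1943, of diverse races"
          ```

          The point of the sketch is that the edit happens regardless of context, which is exactly why historically specific prompts come out wrong.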

    • roofuskit@lemmy.world

      So what you’re saying is that a white actor should always be cast to play any character that was originally white whether they are the best actor or not?

      Keep in mind historical figures are largely white because of systemic racism and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people to meet your requirements.

      I’m not defending Google’s ham-fisted approach. But at the same time it’s a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information. LLMs are just as ham-fisted with accurate information as Google’s approach to diversity is.

        • roofuskit@lemmy.world

          Someone who is half white would have to play him, right? So you’d have to exclude any truly dark-skinned black people from the role. You know, because the American public would have never put someone dark-skinned into the presidency.

  • Underwaterbob@lemm.ee

    This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis ironically preaching racial purity and attempting to take over the world.

    • AstridWipenaugh@lemmy.world

      Dave Chappelle did that with a blind black man who joined the Klan (back in the day, before he went off the deep end).

  • RGB3x3@lemmy.world

    A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

    This is honestly fascinating. It’s putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

    There’s a lot of learning to be done here and it would be sad to miss that opportunity.

    • Eyck_of_denesle@lemmy.zip

      How are you guys getting it to generate “persons”? It simply says it’s against my GOOGLE AI PRINCIPLES to generate images of people.

    • kromem@lemmy.world

      It’s putting human biases on full display at a grand scale.

      Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.

      But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

      Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
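
      As a rough sketch of that split (using the open-source diffusers library and Stable Diffusion as a stand-in, since Google’s actual image pipeline isn’t public), the language-model side only forwards a prompt string; the picture, and the visual biases, come from the diffusion model:

      ```python
      from diffusers import StableDiffusionPipeline

      # Stand-in for whatever diffusion model sits behind the chat interface.
      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

      def image_tool(prompt: str):
          """What the 'LLM side' effectively does: pass the prompt through and
          return the picture. The visual biases live in the diffusion model
          and its training data, not in the language model."""
          return pipe(prompt).images[0]

      image_tool("a person at social services").save("output.png")
      ```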

        • kromem@lemmy.world

          If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

          Data can be biased in a number of ways, that don’t always reflect broader social biases, and even when they might appear to, the cause vs correlation regarding the parallel isn’t necessarily straightforward.

          • VoterFrog@lemmy.world

            I mean “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

            I get what you’re saying in specific circumstances. Sure, a dataset that is built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we’ve built a culture around.

            • kromem@lemmy.world

              Except these kinds of data driven biases can creep in from all sorts of ways.

              Is there a bias in what images have labels and what don’t? Did they focus only on English labeling? Did they use a vision based model to add synthetic labels to unlabeled images, and if so did the labeling model introduce biases?

              Just because the sampling is broad doesn’t mean the processes involved don’t introduce procedural bias distinct from social biases.
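
              On the synthetic-label point specifically, that step would look roughly like the sketch below: a captioning model writes the “ground truth” text for unlabeled images, so its blind spots become the dataset’s blind spots. (BLIP via Hugging Face is an assumption here; nothing public says which captioner, if any, was used.)

              ```python
              from transformers import pipeline

              # Hypothetical labeling pass: a vision-language model captions
              # unlabeled images, and whatever it tends to notice or ignore
              # becomes the text the image generator later trains against.
              captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

              # Placeholder path; a real pipeline would loop over the whole unlabeled set.
              print(captioner("photo_from_the_unlabeled_pile.jpg")[0]["generated_text"])
              ```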

    • Buttons@programming.dev

      It’s putting human biases on full display at a grand scale.

      The skin color of people in images doesn’t matter that much.

      The problem is these AI systems have more subtle biases, ones that aren’t easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.

      • intensely_human@lemm.ee

        In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.

        • rottingleaf@lemmy.zip

          Well, humans are similar to pigs in the sense that they’ll always find the stinkiest pile of junk in the area and taste it before any alternative.

          EDIT: That’s about the popularity of “AI” today, not about semantic expert systems like the ones people built on Lisp machines.

  • Jeom@lemmy.world

    Inclusivity is obviously good, but what Google’s doing just seems all too corporate and plastic.

    • guajojo@lemmy.world

      It’s trying so hard not to be racist that it’s being even more racist than other AIs; it’s hilarious.

  • jacksilver@lemmy.world

    It’s great seeing time and time again that no one really understands these models and that their preconceived notions of what biases exist end up shooting them in the foot. It truly shows that they don’t really understand how systematically problematic the underlying datasets are and the repercussions of relying on them too heavily.

    • doctorcrimson@lemmy.today

      Honestly pisses me off that so many real humans lack the contextual awareness to know that contextual awareness is a concept that does not even exist to LLMs.

  • Kusimulkku@lemm.ee

    If the black Scottish man post is anything to go by, someone will come in explaining how this is totally fine because there might’ve been a black Nazi somewhere, once.

        • Ms. ArmoredThirteen@lemmy.ml

          Looks like they scrubbed swastikas out of the training set? I have mixed feelings about this. Like, if they want something to have historical accuracy, or going by my own personal opinions on censorship, that shouldn’t be scrubbed. But this is also the perfect tool to churn out endless amounts of pro-Nazi propaganda, so maybe it’s safer to keep it removed?

            • AlligatorBlizzard@sh.itjust.works

              Isn’t there an entire subreddit of humans who can’t get it right? I think we’re starting to see considerable overlap between the intelligence of the smartest AI and the dumbest humans.

            • T156@lemmy.world

              Probably. Image generators still have a bit of trouble with signs and iconography. A swastika probably falls into a similar category.

    • THCDenton@lemmy.world

      Well there’s that video of those black Israelites hassling that Jewish dude. They looked like bums tho.