Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.
Oh no minorities are overrepresented, quick, do something!
There is a difference between having actually diverse data sources and secretly adding the word “diverse” to each image generation prompt
Never claimed they had diverse data sources - they probably don’t.
My point is that when minorities are underrepresented, which is the default case in GenAI, the (white, male) public tends to accept that.
I like that they tried to fix the issue of GenAI being racist and sexist. Even though the solution is obviously flawed, better this than a racist model.
I can’t believe someone has to spell this out for you, but here we go: an accurate picture of people from an era in which there was no diversity will, by definition, not be diverse.
The idea was noble; their implementation was ham-fisted.
I can’t fathom why Google would force diversity into AI.
People use AI as tools. If the tool doesn’t work correctly, people will not use it, full stop. It’s that simple.
There are many different AI out there that don’t behave this way and people will be quick to move on to one of those instead.
Surprisingly stupid, even for Google.
Who exactly are they apologizing to? Is it the Nazis?
They didn’t apologize. Headlines just say they did.
Oh no, not racial impurity in my Nazi fanart generator! /s
Maybe you shouldn’t use a plagiarism engine to generate Nazi fanart. Thanks
…white is a color. Also, white people usually look pink, cream, orange, or red. Only albinos come close to looking actually white, and even then not white enough.
It’s just the name of a racial category. There are no black people either.
Sure there are. Maybe not Vanta Black
There are literally Jewish Israeli Nazis. Not just fascists, but literal moustache-Hitler Nazis.
It’s okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people? ٩(。•́‿•̀。)۶
WDYM?
Only their new SW trilogy comes to mind, but in SW racism among humans was limited to very backwards (savage by SW standards) planets; racism between humans and other spacefaring races was more of an issue, so a villain of any human race is normal there.
It’s more that the purely cinematographic choices clearly made skin color more notable, for whatever reason, and there would of course be some racists among viewers.
They probably knew they couldn’t reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later complain about fans being racist.
Have you read the article? It was about misrepresenting historical figures, racism was just a small part.
It was about favoring diversity, even if it’s historically inaccurate or even impossible. Something Disney is very good at.
I have; I was only asking about the Disney reference.
Are you referring to The Little Mermaid? If so, get tf over yourself… it’s literally a fictional children’s story.
Do you have examples?
Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy
Why shouldn’t we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world
I DO expect better from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.
I’m pretty sure it’s generating racially diverse Nazis because companies tinker with the prompts under the hood to counterbalance biases in the training data. A naive implementation of generative AI wouldn’t output black or Asian Nazis.
it doesn’t have EQ or sociological knowledge.
It sort of does (in a poor way), but they call it bias and try to dampen it.
I don’t disagree. The article complained about the lack of nuance in generating responses and I was responding to the ability of LLMs and Generative AI to exhibit that. Your points about bias I agree with
At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.
At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.
Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
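For illustration, a rough sketch of what that kind of silent prompt rewriting might look like. Everything here is hypothetical: the trigger words and the injected phrase are made up to show the mechanism, not taken from Google’s actual pipeline:

```python
import re

# Hypothetical trigger list and injected phrase, purely illustrative.
PEOPLE_TERMS = re.compile(r"\b(person|people|man|woman|soldier|scientist)\b", re.IGNORECASE)
DIVERSITY_SUFFIX = ", depicting people of diverse races and genders"

def rewrite_prompt(user_prompt: str) -> str:
    """Silently append a diversity instruction whenever the prompt mentions people."""
    if PEOPLE_TERMS.search(user_prompt):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

# The image model only ever sees the rewritten prompt:
print(rewrite_prompt("a German soldier in 1943"))
# -> "a German soldier in 1943, depicting people of diverse races and genders"
```

The failure mode in the article is exactly this: the rewrite gets applied unconditionally, even to prompts where it contradicts the historical setting.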
deleted by creator
So what you’re saying is that a white actor should always be cast to play any character that was originally white whether they are the best actor or not?
Keep in mind historical figures are largely white because of systemic racism, and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people in order to meet your requirements.
I’m not defending Google’s ham-fisted approach. But at the same time it’s a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information. LLMs are just as ham-fisted at providing accurate information as Google’s approach to diversity in LLMs.
deleted by creator
Someone who is half white would have to play him right? So you’d have to exclude any truly dark skinned black people for the role. You know, because the American public would have never put someone dark skinned into the presidency.
deleted by creator
But you see where this gets dicey right?
It’s also different when someone’s race is central to their story.
How do you feel about Hamilton?
deleted by creator
This could make for some hilarious, alternate history satire or something. I could totally see Key and Peele heading a group of racially diverse nazis ironically preaching racial purity and attempting to take over the world.
Dave Chappelle did that with a blind black man who joined the Klan (back in the day, before he went off the deep end).
deleted by creator
A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.
This is honestly fascinating. It’s putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.
There’s a lot of learning to be done here and it would be sad to miss that opportunity.
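One way to make that learning concrete is simple prompt probing, along the lines of the Washington Post experiment quoted above: run a set of neutral role prompts many times and tally what comes back. A minimal sketch, assuming hypothetical generate_image and classify_demographics helpers (a real study would need a carefully validated classifier and much more careful categories):

```python
from collections import Counter

ROLE_PROMPTS = ["a productive person", "a person at social services"]
SAMPLES_PER_PROMPT = 100

def probe_bias(generate_image, classify_demographics):
    """Tally apparent demographics per prompt.

    Both callables are hypothetical stand-ins: one wraps the image model
    under test, the other assigns a coarse demographic label to an image.
    """
    results = {}
    for prompt in ROLE_PROMPTS:
        counts = Counter()
        for _ in range(SAMPLES_PER_PROMPT):
            image = generate_image(prompt)
            counts[classify_demographics(image)] += 1
        results[prompt] = counts
    return results
```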
How are you guys getting it to generate “persons”? It simply tells me it’s against my Google AI principles to generate images of people.
They actually neutered their AI on Thursday, after this whole thing blew up.
So right now, everyone’s fucked because Google decided to make a complete mess of this.
It’s putting human biases on full display at a grand scale.
Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.
But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.
Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
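For anyone curious what that split looks like in practice, here’s a rough sketch of the general pattern using the open-source diffusers library as a stand-in. Gemini’s internals aren’t public, so this only shows the shape of it: the chat model decides on a prompt string, and a separate diffusion pipeline turns it into an image:

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# The diffusion model does the actual image generation; the "LLM" side of
# a product like Gemini mostly just decides what prompt string to pass in.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a portrait of a 1940s scientist in a laboratory"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("scientist.png")
```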
Not human biases. Biases in the labeled data set.
Who made the data set? Dogs? Pigeons?
If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?
Data can be biased in a number of ways that don’t always reflect broader social biases, and even when they appear to, the cause-versus-correlation question behind the parallel isn’t necessarily straightforward.
I mean “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.
I get what you’re saying in specific circumstances. Sure, a dataset that is built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we’ve built a culture around.
Except these kinds of data-driven biases can creep in from all sorts of places.
Is there a bias in which images have labels and which don’t? Did they focus only on English labeling? Did they use a vision-based model to add synthetic labels to unlabeled images, and if so, did the labeling model introduce biases?
Just because the sampling is broad doesn’t mean the processes involved don’t introduce procedural bias distinct from social biases.
It’s putting human biases on full display at a grand scale.
The skin color of people in images doesn’t matter that much.
The problem is these AI systems have more subtle biases, ones that aren’t easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.
In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.
Well, humans are similar to pigs in the sense that they’ll always find the stinkiest pile of junk in the area and taste it before any alternative.
EDIT: That’s about the popularity of “AI” today, not the semantic expert systems they used to build on Lisp machines.
Inclusivity is obviously good, but what Google’s doing just seems all too corporate and plastic.
It’s trying so hard not to be racist that it’s being even more racist than other AIs. It’s hilarious.
It’s great seeing time and time again that no one really understands these models, and that their preconceived notions of what biases exist end up shooting them in the foot. It truly shows that they don’t really understand how systematically problematic the underlying datasets are and the repercussions of relying on them too heavily.
Honestly pisses me off that so many real humans lack the contextual awareness to know that contextual awareness is a concept that does not even exist to LLMs.
It’s not an issue. Gemini can generate the apology for you.
If the black Scottish man post is anything to go by, someone will come in explaining how this is totally fine because there might’ve been a black Nazi somewhere, once.
Hey! If Demoman catches you talkin’ anymore shit like that he’s gonna turn the lot of us into a fine red spray!
Kanye?
Someone needs to edit this to feature Kanye
Looks like they scrubbed swastikas out of the training set? I have mixed feelings about this. Between historical accuracy and my own personal opinions on censorship, it shouldn’t be scrubbed. But this is also the perfect tool to churn out endless amounts of pro-Nazi propaganda, so maybe it’s safer to keep it removed?
I wonder if it’s just a hard shape to get right, like hands.
Isn’t there an entire subreddit of humans who can’t get it right? I think we’re starting to see considerable overlap between the intelligence of the smartest AI and the dumbest humans.
Probably. Image generators still have a bit of trouble with signs and iconography. A swastika probably falls into a similar category.
Well, there’s that video of those Black Israelites hassling that Jewish dude. They looked like bums tho.